| a_id (int64) | a_body (string) | a_creation_date | a_last_activity_date | a_last_edit_date | a_tags (float64) | q_id (int64) | q_body (string) | q_creation_date | q_last_activity_date | q_last_edit_date | q_tags (string) | _arxiv_links (string) | _n_arxiv_links (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
66,070,459 | <p><strong>I could be wrong,</strong> but it should not matter whether it is a classification or a regression problem. Think about it mathematically.</p>
<p><strong>Generally speaking</strong>, having <code>softmax</code> in the hidden layers is not preferred because we want every neuron to be independent of the others. If you ... | 2021-02-05 20:56:20.433000+00:00 | 2021-02-05 21:27:39.437000+00:00 | 2021-02-05 21:27:39.437000+00:00 | null | 66,069,636 | <p>I have done manual hyperparameter optimization for ML models before and always defaulted to <em>tanh</em> or <em>relu</em> as hidden layer activation functions. Recently, I started trying out Keras Tuner to optimize my architecture and accidentally left <em>softmax</em> as a choice for hidden layer activation.</p>
<... | 2021-02-05 19:43:28.030000+00:00 | 2021-02-05 21:27:39.437000+00:00 | null | python|keras|softmax | ['https://arxiv.org/pdf/1410.5401v2.pdf'] | 1 |
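The independence point in the answer above can be seen numerically: a softmax layer's outputs always sum to 1, so boosting one pre-activation necessarily suppresses every other output, whereas relu acts element-wise. A minimal NumPy sketch (not from the original answer; the numbers are made up):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())      # subtract max for numerical stability
    return e / e.sum()

z = np.array([1.0, 2.0, 3.0])
a = softmax(z)                   # outputs sum to 1

z2 = z.copy()
z2[0] += 5.0                     # boost only the first logit
b = softmax(z2)

# The untouched neurons' activations drop, because softmax couples them:
coupled = (b[1] < a[1]) and (b[2] < a[2])

# relu, by contrast, is element-wise: changing one input leaves the
# other activations unchanged.
relu = lambda v: np.maximum(v, 0.0)
```

This coupling is why softmax is usually reserved for the output layer of a classifier.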
13,017,935 | <p>Daniel Lemire has a couple of papers on pre-sorting to increase compression and performance.
Here's the latest: <a href="http://arxiv.org/abs/1207.2189" rel="nofollow">http://arxiv.org/abs/1207.2189</a></p>
<p>You might look at his EWah variant as well. </p>
<p>The prevailing feeling is that Bitmap array compressi... | 2012-10-22 18:59:16.480000+00:00 | 2013-02-12 22:35:03.423000+00:00 | 2013-02-12 22:35:03.423000+00:00 | null | 11,924,954 | <p>I have been developing a word-aligned bitmap compression algorithm for data indexing. The algorithm is based on the WAH compression research paper. Compressed bitmaps perform well on bit-wise operations and are very space efficient, but modifying a compressed bitmap is not very efficient, because modification requires splitting the compressed... | 2012-08-12 19:01:42.053000+00:00 | 2013-02-12 22:35:03.423000+00:00 | null | c++|database|performance|compression|bit-manipulation | ['http://arxiv.org/abs/1207.2189', 'https://github.com/lemire/javaewah', 'https://github.com/lemire/EWAHBoolArray'] | 3 |
64,773,818 | <p>Many people use "Doc2Vec" to refer to the word2vec-like algorithm introduced by a paper titled <a href="https://arxiv.org/abs/1405.4053" rel="nofollow noreferrer">Distributed Representations of Sentences and Documents</a> (by Le & Mikolov). That paper calls the algorithm 'Paragraph Vector', without usin... | 2020-11-10 17:32:04.837000+00:00 | 2020-11-11 20:12:59.113000+00:00 | 2020-11-11 20:12:59.113000+00:00 | null | 64,772,221 | <p>I am a student (computer science). This is my first question on Stack Overflow. I would really appreciate your help! (The package I am referring to is called 'word2vec'; that's why the tags/title are a bit confusing to choose.)</p>
<p>In the description of the doc2vec function (here <a href="https://cran.r-project.org... | 2020-11-10 15:52:26.460000+00:00 | 2020-11-21 08:27:51.943000+00:00 | 2020-11-21 08:27:51.943000+00:00 | r|word2vec|doc2vec | ['https://arxiv.org/abs/1405.4053'] | 1 |
43,505,854 | <p>A way to get around this without rooting the phone is to send your packets via multicast UDP*. These packets will make it from GO1 to GO2. </p>
<p>There are some side effects to this: </p>
<ul>
<li><p>To use this for networking you must perform encapsulation and routing at the OSI Application level (not efficie... | 2017-04-19 20:53:29.257000+00:00 | 2017-04-19 20:53:29.257000+00:00 | null | null | 36,867,687 | <p>I have two Android KitKat phones, both are running WiFi-Direct groups as Group Owners, let's call them GO1 and GO2</p>
<p>I managed to connect GO1 as a legacy client to GO2 without breaking any of the (previously set) wifi-direct groups.</p>
<p>The problem is that, as you might know, the GO IP address is hardcoded... | 2016-04-26 14:19:00.243000+00:00 | 2017-04-19 20:53:29.257000+00:00 | 2016-04-27 10:52:02.297000+00:00 | android|ip|ipv6|wifi-direct|wifip2p | ['https://arxiv.org/pdf/1601.00028.pdf'] | 1 |
48,436,520 | <p>You might be interested in my paper <a href="https://arxiv.org/pdf/1801.07779.pdf" rel="noreferrer">The WiLI benchmark dataset for written
language identification</a>. I also benchmarked a couple of tools.</p>
<p>TL;DR:</p>
<ul>
<li>CLD-2 is pretty good and extremely fast</li>
<li><a href="https://pypi.python.org/... | 2018-01-25 05:58:05.390000+00:00 | 2018-02-06 05:43:02.820000+00:00 | 2018-02-06 05:43:02.820000+00:00 | null | 43,377,265 | <p>I am using both <a href="http://www.nltk.org/" rel="noreferrer">Nltk</a> and <a href="http://scikit-learn.org/stable/" rel="noreferrer">Scikit Learn</a> to do some text processing. However, within my list of documents I have some documents that are not in English. For example, the following could be true:</p>
<pre>... | 2017-04-12 18:41:32.477000+00:00 | 2022-04-14 01:45:44.743000+00:00 | 2017-11-29 08:27:32.963000+00:00 | python|scikit-learn|nlp|nltk | ['https://arxiv.org/pdf/1801.07779.pdf', 'https://pypi.python.org/pypi/langdetect?', 'https://github.com/MartinThoma/lidtk'] | 3 |
51,525,403 | <p><strong>Segmentation Accuracy</strong></p>
<p>This is a pretty common problem addressed in image segmentation literature, e.g., <a href="https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm">here is a StackOverflow post</a></p>
<p>One common approach is to consider the ratio ... | 2018-07-25 18:21:27.220000+00:00 | 2018-08-03 18:39:27.487000+00:00 | 2018-08-03 18:39:27.487000+00:00 | null | 51,525,036 | <p>I have implemented several clustering algorithms on an image dataset.
I'm interested in deriving the success rate of the clustering. I have to detect the tumor area; in the original image I know where the tumor is located, and I would like to compare the two images and obtain the percentage of success.
Following images:</p>... | 2018-07-25 17:56:05.493000+00:00 | 2018-08-03 18:39:27.487000+00:00 | 2018-07-27 05:45:45.623000+00:00 | python|image-processing|cluster-analysis|analysis | ['https://stackoverflow.com/questions/13974167/how-to-test-accuracy-of-segmentation-algorithm', 'https://arxiv.org/abs/1703.06870', 'http://www.cs.cmu.edu/~aayushb/pixelNet/', 'https://dspace2.flinders.edu.au/xmlui/bitstream/handle/2328/27165/Powers%20Evaluation.pdf?sequence=1&isAllowed=y', 'https://en.wikipedia.org/wi... | 31 |
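The ratio the answer refers to is commonly computed as intersection-over-union (the Jaccard index) between the predicted segmentation mask and the ground-truth mask. A minimal NumPy sketch with made-up 4×4 masks (not the poster's data):

```python
import numpy as np

def iou(pred, truth):
    """Intersection-over-union (Jaccard index) of two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.zeros((4, 4), dtype=bool)
truth[1:3, 1:3] = True           # 4-pixel "tumor" ground truth
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:4] = True            # 6-pixel prediction overlapping it

score = iou(pred, truth)         # intersection 4, union 6 -> 2/3
```

An IoU of 1.0 means a perfect match; a common convention is to count a detection as successful when IoU exceeds 0.5.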
47,011,724 | <p>This is normal behaviour and happens because your network is too confident of the quality of the input and doesn't learn to rely on the past (on its internal state) enough, relying solely on the input. When you apply the network to its own output in the generation setting, the input to the network is not as reliable... | 2017-10-30 09:24:15.610000+00:00 | 2017-10-30 09:24:15.610000+00:00 | null | null | 43,459,013 | <p>For several days now, I have been trying to build a simple sine-wave sequence generator using an LSTM, without any glimpse of success so far.</p>
<p>I started from the <a href="https://github.com/pytorch/examples/tree/master/time_sequence_prediction" rel="noreferrer">time sequence prediction example</a></p>
<p>All what I w... | 2017-04-17 20:23:11.273000+00:00 | 2018-03-07 08:36:10.770000+00:00 | 2018-03-07 08:36:10.770000+00:00 | python|machine-learning|deep-learning|lstm|pytorch | ['https://arxiv.org/abs/1506.03099'] | 1 |
55,873,035 | <h2>Update</h2>
<p><a href="https://github.com/apple/swift/pull/25286" rel="nofollow noreferrer">This</a> implementation of a random number generator in an interval has been merged into the standard library and should perform better than before:</p>
<pre><code>// s = upperBound; r1, r2 = random numbers from generator... | 2019-04-26 18:17:09.863000+00:00 | 2019-07-23 21:01:31.747000+00:00 | 2019-07-23 21:01:31.747000+00:00 | null | 55,872,415 | <p>I have used Int.random() method and arc4random_uniform() for number generation speed tests.<br>
Both tests were run in macOS console with build configuration set to release.
Below are codes which I have used for testing. </p>
<pre><code>public func randomGen1() {
let n = 1_000_000
let startTime = CFAbsolute... | 2019-04-26 17:26:41.613000+00:00 | 2019-07-23 21:01:31.747000+00:00 | 2019-04-26 17:33:32.537000+00:00 | swift|random | ['https://github.com/apple/swift/pull/25286', 'https://stackoverflow.com/questions/55548872/shuffle-struct-by-int/55549494#55549494', 'http://xoshiro.di.unimi.it/', 'https://github.com/mattgallagher/CwlUtils/blob/e4186dae4ba55ffa478264c8477d01a48fd2b459/Sources/CwlUtils/CwlRandom.swift#L80', 'https://lemire.me/blog/201... | 9 |
59,938,045 | <p>40% accuracy is not good. It needs to train more. You should rescale images to <code>128 or 256</code> to save time. Also try increasing epoch count to something like 100 or minimize loss to at least around 1 before testing. Another thing is class imbalance. </p>
<p>According to this, <a href="https://arxiv.org/abs... | 2020-01-27 19:57:38.153000+00:00 | 2020-01-27 19:57:38.153000+00:00 | null | null | 59,937,540 | <p>I am following this guide to learn image classification with neural networks:</p>
<p><a href="https://www.tensorflow.org/tutorials/keras/classification" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/classification</a></p>
<p>And I implement this code for my custom dataset. I have 2300 gray s... | 2020-01-27 19:19:10.547000+00:00 | 2020-01-28 16:09:05.880000+00:00 | 2020-01-28 16:09:05.880000+00:00 | python|tensorflow|machine-learning|keras|neural-network | ['https://arxiv.org/abs/1708.07747'] | 1 |
62,836,902 | <p>This question has been answered by <a href="https://arxiv.org/pdf/1605.05274.pdf" rel="nofollow noreferrer">R. Grigore (2016)</a> in his paper <em>Java Generics are Turing Complete</em></p>
<p>Take the following Java code as an example of his construction:</p>
<pre><code>//an empty interface
interface Z {}
//4 generic... | 2020-07-10 14:53:30.250000+00:00 | 2020-07-10 14:53:30.250000+00:00 | null | null | 3,451,519 | <p>The JLS mentions in the type inference algorithm (§15.12.2):</p>
<blockquote>
<p>It is possible that the process above yields an infinite type. This is permissible,
and Java compilers must recognize such situations and represent them appropriately using cyclic data structures.</p>
</blockquote>
<p>However, I'm... | 2010-08-10 17:08:20.650000+00:00 | 2020-07-10 14:53:30.250000+00:00 | 2010-08-11 13:28:39.157000+00:00 | java|programming-languages|wildcard|type-inference | ['https://arxiv.org/pdf/1605.05274.pdf'] | 1 |
60,628,756 | <p>I like your approach! When you mention your optimization, I think a good way to go about it is by rotating the hexagonal grid and translating it till you find the smallest number of circles that cover the region. You don't need to rotate a full 360° since the pattern is symmetric, so 360°/6 = 60° is enough.</p>
<p>I've been working on this... | 2020-03-11 03:15:12.600000+00:00 | 2020-03-13 00:59:05.403000+00:00 | 2020-03-13 00:59:05.403000+00:00 | null | 10,648,621 | <p><strong>The following problem:</strong>
Given is an arbitrary polygon. It shall be covered 100% with the minimum number of circles of a given radius.</p>
<p><strong>Note:</strong>
1) Naturally the circles have to overlap.
2) I try to solve the problem for ARBITRARY polygons. But also solutions for CONVEX polygons a... | 2012-05-18 07:40:55.300000+00:00 | 2020-03-13 00:59:05.403000+00:00 | 2017-05-23 12:32:32.410000+00:00 | matlab|geometry | ['https://arxiv.org/abs/2003.04839'] | 1 |
46,173,528 | <p>You can use the idea of face-embeddings, which for example is proposed in the highly-cited paper <a href="https://arxiv.org/abs/1503.03832" rel="noreferrer">FaceNet</a> and implemented in <a href="https://cmusatyalab.github.io/openface/" rel="noreferrer">OpenFace</a> (which also comes pre-trained).</p>
<p>The genera... | 2017-09-12 10:03:32.393000+00:00 | 2017-09-12 10:03:32.393000+00:00 | 2020-06-20 09:12:55.060000+00:00 | null | 46,168,182 | <p>First of all here is my <a href="https://github.com/alucard001/OpenCV-Face-Recognition-and-Comparison/blob/master/Open%20CV.ipynb" rel="noreferrer">github link for the question</a>.</p>
<p>And here is my question:</p>
<p>I would like to do a face comparison function using Python. And I can successfully(?) recogni... | 2017-09-12 04:55:55.683000+00:00 | 2021-10-19 14:41:04.830000+00:00 | 2018-09-27 10:02:08.430000+00:00 | python|opencv|neural-network|convolution|face-recognition | ['https://arxiv.org/abs/1503.03832', 'https://cmusatyalab.github.io/openface/', 'https://cmusatyalab.github.io/openface/demo-2-comparison/'] | 3 |
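The general recipe the answer describes reduces face comparison to a distance check between embedding vectors: embed both faces, then declare a match when the distance falls below a tuned threshold. A hedged sketch with random stand-in vectors; in a real system the 128-d embeddings would come from a model such as FaceNet/OpenFace, and the threshold would be tuned on labelled validation pairs:

```python
import numpy as np

THRESHOLD = 1.0   # assumed value; tune on a labelled validation set

def same_person(emb_a, emb_b, threshold=THRESHOLD):
    """Compare two L2-normalised face embeddings by Euclidean distance."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

def unit(v):
    return v / np.linalg.norm(v)

rng = np.random.default_rng(0)
anchor = unit(rng.normal(size=128))                  # stand-in embedding of face A
close = unit(anchor + 0.02 * rng.normal(size=128))   # slight variation of face A
far = unit(rng.normal(size=128))                     # stand-in embedding of face B

# same_person(anchor, close) -> True; same_person(anchor, far) -> almost surely False,
# since two independent random unit vectors in 128 dimensions are nearly orthogonal
```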
58,753,908 | <p>In the original paper, the author pushes one sample to the experience replay buffer and randomly samples 32 transitions to train the model in minibatch fashion. The samples taken from interacting with the environment are not fed directly to the model. To increase the speed of training, the author stores samples... | 2019-11-07 17:13:49.143000+00:00 | 2019-11-09 14:55:36.403000+00:00 | 2019-11-09 14:55:36.403000+00:00 | null | 58,600,089 | <p>I am confused about why the DQN with experience replay algorithm performs a gradient descent step for every step in a given episode. This will fit only one step, right? That would make it extremely slow. Why not after each episode ends, or every time the model is cloned?</p> | 2019-10-29 00:37:53.163000+00:00 | 2019-11-09 14:55:36.403000+00:00 | 2019-10-29 01:02:33.857000+00:00 | deep-learning|reinforcement-learning | ['https://github.com/openai/baselines', 'https://arxiv.org/abs/1803.00933', 'https://arxiv.org/abs/1803.00933', 'https://arxiv.org/abs/1507.04296', 'https://web.stanford.edu/class/psych209/Readings/MnihEtAlHassibis15NatureControlDeepRL.pdf'] | 5 |
36,406,057 | <p>In my experience, NaNs when training a network usually happen because of one of two problems:</p>
<ul>
<li>First, a mathematical error, e.g. taking the log of a negative value. This can happen when you use log() in your loss function.</li>
<li>Second, a value that becomes too big for the floating-point representation to handle (overflow).</li>
</ul>
<p>In... | 2016-04-04 14:54:07.443000+00:00 | 2016-04-06 22:30:51.983000+00:00 | 2016-04-06 22:30:51.983000+00:00 | null | 36,381,488 | <p>I am training a simple feed-forward model with 3 or 4 hidden layers and dropouts between each (hidden layer + non linearity) combination.
Sometimes after a few epochs (about 10-11) the model starts outputting Infs and NaNs as the error of the NLL and the accuracy falls to 0.0%. This problem does not happen when I d... | 2016-04-03 04:03:04.203000+00:00 | 2016-04-06 22:30:51.983000+00:00 | null | python|theano|deep-learning | ['http://jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf', 'http://arxiv.org/abs/1502.01852'] | 2 |
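Both failure modes named in the answer are easy to reproduce, and both have standard fixes (clip the argument of the log; keep values in a numerically safe range). A hedged NumPy sketch, not taken from the poster's Theano model:

```python
import numpy as np

y_pred = np.array([0.0, 0.5, 1.0])       # predicted probabilities, one exactly 0

with np.errstate(divide="ignore", over="ignore", invalid="ignore"):
    # Failure mode 1: log of 0 (or a negative value) in the loss -> inf/NaN
    bad = -np.log(y_pred)                # first element is inf

    # Fix: clip predictions away from 0 and 1 before taking the log
    eps = 1e-7
    safe = -np.log(np.clip(y_pred, eps, 1 - eps))

    # Failure mode 2: overflow -> inf, which becomes NaN downstream
    big = np.exp(np.float32(1000.0))     # overflows float32 to inf
    nan = big - big                      # inf - inf -> NaN
```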
46,021,189 | <blockquote>
<p>use a kd-tree</p>
</blockquote>
<p>Unfortunately, in high dimensions this data structure suffers severely from the <a href="https://en.wikipedia.org/wiki/Curse_of_dimensionality" rel="nofollow noreferrer">curse of dimensionality</a>, which causes its search time to be comparable to the brute force se... | 2017-09-03 07:25:31.820000+00:00 | 2017-09-03 07:25:31.820000+00:00 | null | null | 3,962,775 | <p>So I have about 16,000 75-dimensional data points, and for each point I want to find its k nearest neighbours (using euclidean distance, currently k=2 if this makes it easiser)</p>
<p>My first thought was to use a kd-tree for this, but as it turns out they become rather inefficient as the number of dimension grows.... | 2010-10-18 19:46:36.487000+00:00 | 2017-09-03 07:26:10.790000+00:00 | 2017-09-03 07:26:10.790000+00:00 | algorithm|data-structures|computational-geometry|nearest-neighbor|dimensionality-reduction | ['https://en.wikipedia.org/wiki/Curse_of_dimensionality', 'https://en.wikipedia.org/wiki/Dimensionality_reduction', 'https://en.wikipedia.org/wiki/Dimensionality_reduction#Principal_component_analysis_.28PCA.29', 'https://en.wikipedia.org/wiki/Nearest_neighbor_search#Approximate_nearest_neighbor', 'https://arxiv.org/pd... | 8 |
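For the sizes in the question (16,000 points, 75 dimensions, k = 2), vectorised brute force is entirely tractable and sidesteps the kd-tree's curse-of-dimensionality problem. A sketch on random stand-in data, scaled down to 2,000 points so it runs in a blink; the 16k case works identically (or in row batches to bound memory):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(2_000, 75))   # stand-in for the 16,000 x 75 dataset

# Squared Euclidean distances via |a - b|^2 = |a|^2 - 2 a.b + |b|^2
sq = (X ** 2).sum(axis=1)
d = sq[:, None] - 2.0 * (X @ X.T) + sq[None, :]
np.fill_diagonal(d, np.inf)        # a point is not its own neighbour

k = 2
nn = np.argpartition(d, k, axis=1)[:, :k]   # indices of the k nearest, unordered
```

For much larger datasets, switch to an approximate method (LSH, random projections) as the answer suggests.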
61,577,834 | <p>Document-based databases have a big advantage over relational databases: they do not require defining a schema upfront before any data can be entered.</p>
<p>Also, you should use a document database if your data is not relational and cannot be stored in a table but rather is a set of images, or for example n... | 2020-05-03 16:22:55.093000+00:00 | 2022-03-11 08:28:16.757000+00:00 | 2022-03-11 08:28:16.757000+00:00 | null | 441,441 | <p>Why should I use a document-based database like CouchDB instead of a relational database?
Are there any typical kinds of applications or domains where a document-based database is more suitable than a relational database?</p> | 2009-01-14 00:21:48.840000+00:00 | 2022-04-07 21:05:03.183000+00:00 | null | database|couchdb|relational|non-relational-database | ['https://arxiv.org/ftp/arxiv/papers/1509/1509.08035.pdf'] | 1 |
71,943,162 | <p>There is this interesting paper <a href="https://arxiv.org/pdf/1802.06222.pdf" rel="nofollow noreferrer">Efficient GAN-based anomaly detection</a>.<br />
To evaluate the anomaly detection, they use the following experimental setting</p>
<blockquote>
<p>MNIST: We generated 10 different datasets from MNIST by successi... | 2022-04-20 16:28:32.400000+00:00 | 2022-04-20 16:28:32.400000+00:00 | null | null | 71,942,290 | <p>There's something about GAN's training that i don't understand. I am making a GAN for Anomaly Detection. To start I followed this guide <a href="https://www.tensorflow.org/tutorials/generative/dcgan" rel="nofollow noreferrer">here</a> to create a DCGAN (and understand how it works) and then move into the Anomaly Det... | 2022-04-20 15:24:05.590000+00:00 | 2022-04-20 16:28:32.400000+00:00 | null | python|tensorflow|deep-learning|generative-adversarial-network|anomaly-detection | ['https://arxiv.org/pdf/1802.06222.pdf'] | 1 |
57,136,880 | <p>L2 utilization and hit rate are orthogonal concepts.</p>
<p>L2 utilization % measures how many operations (reads/writes/atomics) the L2 cache performed, compared to its peak performance. You can alternatively think of this as a proxy for "how much L2 bandwidth did I use" given there is a fixed bandwidth between L1... | 2019-07-21 20:42:24.077000+00:00 | 2019-07-21 20:42:24.077000+00:00 | null | null | 57,135,152 | <p>I'm doing expriments by using cuda.</p>
<p>I thought that if L2 cache hit ratio is high, performance will increase.</p>
<p>However, from nvprof, L2 cache utilization is low even though L2 cache hit rate is about 93%.</p>
<p>Why does this happen? Are there examples that make it happen?</p> | 2019-07-21 16:47:08.780000+00:00 | 2019-07-21 20:42:24.077000+00:00 | null | caching|cuda|gpu | ['https://arxiv.org/pdf/1903.07486.pdf', 'https://i.stack.imgur.com/k3aG1.png', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#data-movement-and-conversion-instructions', 'https://docs.nvid... | 7 |
70,006,822 | <p>So, in this paper: <a href="https://arxiv.org/pdf/2004.07464.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2004.07464.pdf</a>,
they have combined the image embedding and the text embedding by concatenating them.</p>
<pre><code>X = TE + IE
</code></pre>
<p>Here X is the fusion embedding, with TE and IE as the text and image e... | 2021-11-17 15:04:52.033000+00:00 | 2021-11-17 15:04:52.033000+00:00 | null | null | 44,786,174 | <p>I completely understand the meaning and methods of word embeddings (skip-gram, CBOW). And I know that Google has a word2vec API that, given a word, can produce its vector.
but my problem is this: we have a clause that includes the subject, object, verb... that each word is previously embedded by the Google API, no... | 2017-06-27 17:12:36.157000+00:00 | 2021-11-17 15:04:52.033000+00:00 | null | nlp|information-retrieval|word2vec|google-api-python-client|word-embedding | ['https://arxiv.org/pdf/2004.07464.pdf', 'https://arxiv.org/abs/1708.03629', 'https://github.com/vyraun/Half-Size'] | 3 |
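In the snippet above, `X = TE + IE` denotes concatenation (as the answer says), not element-wise addition: the fused vector's dimensionality is the sum of the two parts. A minimal sketch with made-up dimensions:

```python
import numpy as np

text_emb = np.random.rand(300)    # e.g. an averaged word2vec sentence embedding
image_emb = np.random.rand(512)   # e.g. a CNN image feature vector

# "X = TE + IE": concatenation, not addition
fused = np.concatenate([text_emb, image_emb])   # shape (812,)
```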
44,093,296 | <p>There is nothing wrong. The problem is that increasing layers does not automatically means a higher accuracy (otherwise machine learning would be kind of solved, because if you need a better accuracy in an image classifier you would just add +1 layer to an inception and claim a victory).</p>
<p>To show you that thi... | 2017-05-21 04:47:03.857000+00:00 | 2017-05-21 04:47:03.857000+00:00 | null | null | 44,092,936 | <p>I'm learning TensorFlow, and trying to create a simple two layer neural network.</p>
<p>The tutorial code <a href="https://www.tensorflow.org/get_started/mnist/pros" rel="nofollow noreferrer">https://www.tensorflow.org/get_started/mnist/pros</a> starts with this simple network, to get 92% accuracy:</p>
<pre><code>... | 2017-05-21 03:40:05.703000+00:00 | 2017-05-21 04:47:03.857000+00:00 | null | machine-learning|tensorflow|neural-network | ['https://arxiv.org/pdf/1512.03385.pdf', 'https://i.stack.imgur.com/msoTt.jpg'] | 2 |
46,787,296 | <p>The fact that the model trains on its own predictions is the whole point of Q-learning: it is a concept called bootstrapping, which means reusing your experience. The insight behind this is:</p>
<ul>
<li>The Agent is initialized with some weights</li>
<li>These weights represent the Agent's current representation o... | 2017-10-17 09:50:12.563000+00:00 | 2017-10-17 10:28:19.490000+00:00 | 2017-10-17 10:28:19.490000+00:00 | null | 46,783,760 | <p>When I am training my model I have the following segment:</p>
<pre><code>s_t_batch, a_batch, y_batch = train_data(minibatch, model2)
# perform gradient step
loss.append(model.train_on_batch([s_t_batch, a_batch], y_batch))
</code></pre>
<p>where <code>s_t, a_</code> corresponds to current states and actions that we... | 2017-10-17 06:25:25.803000+00:00 | 2019-10-19 08:00:48.420000+00:00 | 2019-10-19 08:00:48.420000+00:00 | deep-learning|reinforcement-learning|openai-gym|q-learning | ['https://en.wikipedia.org/wiki/Temporal_difference_learning', 'https://medium.com/emergent-future/simple-reinforcement-learning-with-tensorflow-part-0-q-learning-with-tables-and-neural-networks-d195264329d0', 'http://arxiv.org/abs/1701.07274', 'http://vict0rsch.github.io/thesis/thesisVictorSchmidt.pdf'] | 4 |
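Concretely, the bootstrapped target y in the training snippet above is built from the network's own next-state predictions, y = r + γ·max_a′ Q(s′, a′), with the bootstrap term zeroed for terminal transitions. A hedged NumPy sketch of just the target computation (made-up numbers, not the poster's model):

```python
import numpy as np

gamma = 0.99                              # discount factor

# A mini-batch of 3 transitions (illustrative values)
rewards = np.array([1.0, 0.0, -1.0])
done = np.array([False, False, True])     # the episode ended on the last one
q_next = np.array([[0.2, 0.8],            # Q(s', a') predicted by the current net
                   [0.5, 0.1],
                   [0.3, 0.4]])

# Bootstrapped TD target: y = r + gamma * max_a' Q(s', a'), 0 past terminal states
y = rewards + gamma * q_next.max(axis=1) * (~done)
# y is then fed back as the regression target for Q(s, a)
```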
18,682,668 | <p>First of all, take a look at the file size; here are detailed <a href="http://arxiv.org/pdf/cs/0502012.pdf" rel="nofollow">performance measurements</a></p> | 2013-09-08 10:26:03.417000+00:00 | 2013-09-08 10:26:03.417000+00:00 | null | null | 18,681,936 | <p>I have a requirement to load a file containing up to 1 million lines of string data. My first thought is to use C# 5.0 async to load the data whilst not blocking the UI thread. If the user tries to access something that relies on the data they will get a loading message.</p>
<p>Still I would like the fastest possib... | 2013-09-08 08:38:16.257000+00:00 | 2013-09-08 19:10:23.917000+00:00 | 2013-09-08 19:10:23.917000+00:00 | c# | ['http://arxiv.org/pdf/cs/0502012.pdf'] | 1 |
52,743,448 | <p>The general answer whenever it comes to the question of "which is faster?" is always: measure how fast each approach runs your application scenario to find out. In this case, I would say that the first approach would seem preferable most of the time (if you had to pick one of those two options for some reason). Unle... | 2018-10-10 15:12:21.693000+00:00 | 2018-10-10 15:12:21.693000+00:00 | null | null | 52,729,965 | <p>Based on my study, there are 2 different strategies to implement tiled version of convolution with cuda. I want to know more about this, and would like to see how they compare with each other, what is the advantage and disadvantage of each strategy, and how to choose. Below is the implementations of the two differen... | 2018-10-09 22:02:59.140000+00:00 | 2018-10-10 15:12:21.693000+00:00 | 2018-10-09 22:17:29.353000+00:00 | c++|3d|cuda|deep-learning|convolution | ['https://petewarden.com/2015/04/20/why-gemm-is-at-the-heart-of-deep-learning/', 'https://arxiv.org/abs/1509.09308'] | 2 |
69,508,740 | <p>Looks great; you have already followed most of the standard remedies for the exploding gradient problem. Below is a list of further solutions you can try:</p>
<p><strong>Solutions to avoid the exploding gradient problem</strong></p>
<ol>
<li><p><em>Appropriate Weight initialization:</em> utilise appropriate weight Initializa... | 2021-10-09 16:42:13.367000+00:00 | 2021-10-29 16:33:42.460000+00:00 | 2021-10-29 16:33:42.460000+00:00 | null | 69,427,103 | <p>I have a gradient exploding problem which I couldn't solve after trying for several days. I implemented a custom message passing graph neural network in TensorFlow which is used to predict a continuous value from graph data. Each graph is associated with one target value. Each node of a graph is represented by a nod... | 2021-10-03 17:05:28.273000+00:00 | 2021-10-29 16:33:42.460000+00:00 | 2021-10-24 12:36:43.910000+00:00 | python|tensorflow|machine-learning|keras|gradient | ['https://arxiv.org/abs/1502.03167', 'https://keras.io/api/optimizers/', 'https://machinelearningmastery.com/gentle-introduction-backpropagation-time/', 'https://keras.io/api/layers/regularizers/'] | 4 |
57,385,673 | <p>Batch normalization in LSTMs is not that easy to implement. Some papers present amazing results, e.g. <a href="https://arxiv.org/pdf/1603.09025.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1603.09025.pdf</a>, on what is called recurrent batch normalization. The authors apply the following equations:</p>
<p><a href="ht... | 2019-08-07 00:55:29.967000+00:00 | 2019-08-07 00:55:29.967000+00:00 | null | null | 48,544,953 | <p>I am trying to use batch normalization in LSTM using keras in R. In my dataset the target/output variable is the <code>Sales</code> column, and every row in the dataset records the <code>Sales</code> for each day in a year (2008-2017). The dataset looks like below:</p>
<p><a href="https://i.stack.imgur.com/mFJgq.pn... | 2018-01-31 14:49:03.390000+00:00 | 2019-08-07 00:55:29.967000+00:00 | null | r|tensorflow|keras|recurrent-neural-network|batch-normalization | ['https://arxiv.org/pdf/1603.09025.pdf', 'https://i.stack.imgur.com/2FhjQ.png', 'https://github.com/OlavHN/bnlstm'] | 3 |
60,792,910 | <p>Target tracking is a very difficult problem. In target tracking you will have <strong>two main issues</strong>: the motion uncertainty problem, and the origin uncertainty problem. The first one refers to the way you model object motion so you can predict its future state, and the second refers to the issue of data a... | 2020-03-21 20:27:41.103000+00:00 | 2020-03-21 20:34:19.133000+00:00 | 2020-03-21 20:34:19.133000+00:00 | null | 60,592,851 | <p>I'm trying to create an application that will be able to track rapidly moving objects in video/camera feed, however have not found any CV/DL solution that is good enough. Can you recommend any computer vision solution for tracking fast moving objects on regular laptop computer and web cam? A demo app would be ideal.... | 2020-03-08 22:54:46.580000+00:00 | 2020-06-08 11:39:46.317000+00:00 | null | opencv|deep-learning | ['https://www.mdpi.com/1424-8220/20/4/1110', 'https://arxiv.org/abs/1511.05121', 'https://towardsdatascience.com/the-unscented-kalman-filter-anything-ekf-can-do-i-can-do-it-better-ce7c773cf88d', 'https://stackoverflow.com/questions/2764238/image-processing-what-are-occlusions/60644446#60644446'] | 4 |
72,182,806 | <p>As you said, <code>train_test_split</code> interprets each list of tags as a label, it doesn't matter what it contains. A sample with tags <code>[1, 2, 3]</code> will not be identified the same as a sample with tags <code>[1, 2]</code>. Hence, you cannot flatten the <code>tags</code> column to check the label counts... | 2022-05-10 08:14:05.597000+00:00 | 2022-05-10 08:14:05.597000+00:00 | null | null | 72,182,217 | <p>I have multilabel dataset (<code>pd.DataFrame</code>) which looks like this:
<a href="https://i.stack.imgur.com/AuOxq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AuOxq.png" alt="" /></a></p>
<p>This is value_counts of flatten <code>tags</code> column:</p>
<pre><code>101 4450171
86 393... | 2022-05-10 07:26:35.447000+00:00 | 2022-05-10 08:14:05.597000+00:00 | 2022-05-10 07:56:21.367000+00:00 | python|pandas|scikit-learn|split|dataset | [] | 0 |
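The distinction the answer draws is easy to see in plain Python: stratification treats each tag *combination* as one opaque label, while `value_counts` on the flattened column reports per-tag frequencies. A small sketch with made-up tags (not the poster's data):

```python
from collections import Counter

tags = [[1, 2, 3], [1, 2], [1, 2, 3], [1]]

# What stratify-by-tags would see: each combination is its own class
combos = Counter(tuple(t) for t in tags)   # (1, 2, 3) twice; (1, 2) and (1,) once

# What value_counts on the flattened column reports: per-tag frequencies
flat = Counter(tag for row in tags for tag in row)   # 1 -> 4, 2 -> 3, 3 -> 2
```

For splitting such data while balancing individual tags, iterative stratification (e.g. scikit-multilearn's implementation) is the usual tool.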
66,679,501 | <p>I was able to bring your code to a version where it would at least converge. In summary, I think there might be multiple problems with it: the normalization (why those values?), some unnecessary relus, too high learning rate, MSE loss instead of cross-entropy and mainly I don't think the softmax in the bottleneck la... | 2021-03-17 18:55:39.553000+00:00 | 2021-03-18 12:25:51.280000+00:00 | 2021-03-18 12:25:51.280000+00:00 | null | 66,667,949 | <p>I'm trying to build a simple autoencoder for MNIST, where the middle layer is just 10 neurons. My hope is that it will learn to classify the 10 digits, and I assume that would lead to the lowest error in the end (wrt reproducing the original image).</p>
<p>I have the following code, which I've already played around ... | 2021-03-17 06:22:52.777000+00:00 | 2021-03-19 01:09:50.773000+00:00 | 2021-03-17 09:53:51.540000+00:00 | python|pytorch|autoencoder|mnist | ['https://www.quora.com/Does-anyone-ever-use-a-softmax-layer-mid-neural-network-rather-than-at-the-end', 'https://arxiv.org/abs/1611.01144', 'https://arxiv.org/abs/1609.02200'] | 3 |
41,315,489 | <p>Stochastic Gradient Descent seems to require significant overparameterization in order to learn, here's one paper along those lines -- <a href="https://arxiv.org/abs/1301.3583" rel="nofollow noreferrer">"Big Neural Networks Waste Capacity"</a></p> | 2016-12-24 17:33:52.760000+00:00 | 2016-12-24 17:33:52.760000+00:00 | null | null | 41,314,819 | <p>I think I'm missing something obvious here but would love some help figuring this out. </p>
<p>Say I have a million words and want to embed them as part of my model.
With TF I can do an embedding lookup, though I need to provide a matrix of size [1m*space_size]. So for 50 dimensions that comes out to 50M trainable... | 2016-12-24 16:08:11.587000+00:00 | 2016-12-26 19:19:35.393000+00:00 | null | tensorflow|deep-learning | ['https://arxiv.org/abs/1301.3583'] | 1 |
45,297,317 | <p>As of SUMO 0.29.0, acceleration is not one of the <a href="http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval" rel="nofollow noreferrer">variables exposed by the SUMO TraCI API of a vehicle</a> - primarily because it is not one of the state variables of the most common car following models.</p>
<p>You will n... | 2017-07-25 08:01:40.243000+00:00 | 2017-07-25 08:01:40.243000+00:00 | null | null | 45,287,511 | <p>I am using <code>veins-4.5</code> <code>omnet++ 5</code> and <code>sumo 0.29.0</code>.
How can I access the <em>acceleration</em> of a vehicle in veins?</p>
<p>Thanks a lot.</p> | 2017-07-24 18:23:29.063000+00:00 | 2017-07-25 08:01:40.243000+00:00 | 2017-07-25 06:27:26.650000+00:00 | omnet++|veins | ['http://www.sumo.dlr.de/wiki/TraCI/Vehicle_Value_Retrieval', 'http://arxiv.org/abs/1403.4881'] | 2 |
55,811,323 | <p>This problem is a well studied problem: the problem of <strong>Journey planning in public transportation networks</strong>.<br>
Your approach based on Bellman-Ford might become problematic and too expensive depending on the network since you can't consider that a vertex has been 'visited', or that the shortest path ... | 2019-04-23 12:33:19.597000+00:00 | 2019-04-23 12:33:19.597000+00:00 | null | null | 55,802,336 | <p>So suppose you are searching for a train ride. You would be interested in the price of the ride and also the amount of time the ride will take. Now suppose that you have a graph where each edge has a cost and a duration and you want to find the shortest duration path in the graph that doesn't go over a given maximum... | 2019-04-22 23:27:57.550000+00:00 | 2019-04-23 23:59:48.697000+00:00 | 2019-04-23 23:59:48.697000+00:00 | c++|algorithm|bellman-ford | ['http://www.filipyoo.com/multi-objectives-shortest-paths-algorithms-for-multi-transfer-flight-routes/', 'https://arxiv.org/pdf/1504.05140.pdf', 'https://i11www.iti.kit.edu/extra/publications/dpsw-isftr-13.pdf'] | 3 |
29,853,535 | <p>Yes, the <a href="http://eclipseclp.org" rel="noreferrer" title="ECLiPSe">ECLiPSe</a> system does this.</p>
<p>As you suggest, it takes into account a number of simple built-in predicates (such as <code>integer/1, =/2, !/0</code>) for indexing purposes. Your example then executes deterministically, without choicep... | 2015-04-24 17:11:33.927000+00:00 | 2015-04-25 17:04:36.530000+00:00 | 2015-04-25 17:04:36.530000+00:00 | null | 29,605,132 | <p>I want to know how smart first argument indexing is implemented on various Prolog implementations.</p>
<p>In particular, simple type-test goals like <code>integer/1</code> right after a clause "neck" <em>could</em> contribute to better indexing.
Consider:</p>
<pre><code>foo(h(X),X).
foo([],nil).
foo([_|_],cons).
f... | 2015-04-13 12:18:26.433000+00:00 | 2019-01-27 16:03:04.750000+00:00 | 2015-04-13 17:12:52.820000+00:00 | indexing|prolog | ['http://eclipseclp.org', 'http://arxiv.org/abs/1012.4240'] | 2 |
20,955,187 | <p>An alternative approach is to use something like generative backpropagation. In this scenario, you train a neural network updating the weights AND the input values. The given values are used as the output values since you can compute an error value directly. This approach has been used in dimensionality reduction, m... | 2014-01-06 17:02:27.733000+00:00 | 2014-01-06 17:02:27.733000+00:00 | null | null | 15,514,618 | <p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p>
<pre><code>delta_w = a*(t - y)*g'(h)*x
</code></pre>
<p><code>t</code> is the "targe... | 2013-03-20 03:12:20.327000+00:00 | 2019-04-02 08:42:01.440000+00:00 | 2017-04-19 04:50:59.180000+00:00 | machine-learning|neural-network|unsupervised-learning | ['http://bioinformatics.oxfordjournals.org/cgi/reprint/21/20/3887', 'http://arxiv.org/abs/1312.5394'] | 2 |
15,514,709 | <p>The most common thing to do is train <a href="http://en.wikipedia.org/wiki/Autoencoder" rel="noreferrer">an autoencoder</a>, where the desired outputs are equal to the inputs. This makes the network try to learn a representation that best "compresses" the input distribution.</p>
<p><a href="http://www.freepatentson... | 2013-03-20 03:22:39.767000+00:00 | 2013-03-20 03:22:39.767000+00:00 | null | null | 15,514,618 | <p>I'm having trouble with some of the concepts in machine learning through neural networks. One of them is <a href="http://en.wikipedia.org/wiki/Delta_Rule" rel="noreferrer">backpropagation</a>. In the weight updating equation, </p>
<pre><code>delta_w = a*(t - y)*g'(h)*x
</code></pre>
<p><code>t</code> is the "targe... | 2013-03-20 03:12:20.327000+00:00 | 2019-04-02 08:42:01.440000+00:00 | 2017-04-19 04:50:59.180000+00:00 | machine-learning|neural-network|unsupervised-learning | ['http://en.wikipedia.org/wiki/Autoencoder', 'http://www.freepatentsonline.com/5590218.html', 'http://arxiv.org/pdf/cs/0608115.pdf', 'http://www.rimtengg.com/coit2007/proceedings/pdfs/40.pdf'] | 4 |
47,311,604 | <p>The fundamental difficulty when it comes to adding new users in your system is that you need retraining to be able to give meaningful predictions to new users. Even if you were able to dynamically resize the embedding matrices, what values would you use for the parameters describing the new user?</p>
<p>Taking this... | 2017-11-15 15:43:49.173000+00:00 | 2017-11-15 15:43:49.173000+00:00 | null | null | 47,272,031 | <p>I have a TensorFlow recommendation system based off <a href="https://github.com/songgc/TF-recomm" rel="nofollow noreferrer"><code>TF-recomm</code></a>. Each user has <code>1+numFactors</code> numbers associated with her: a vector of <code>numFactors</code>, and an offset of a single number. Each task also has a bias... | 2017-11-13 19:26:54.860000+00:00 | 2017-11-15 15:43:49.173000+00:00 | null | python|tensorflow|recommendation-engine | ['https://pdfs.semanticscholar.org/6c02/053805434162e0fed26e1d5e035eb1071249.pdf', 'https://arxiv.org/pdf/1511.06939.pdf', 'https://github.com/mesuvash/NNRec', 'https://maciejkula.github.io/spotlight/sequence/implicit.html'] | 4 |
65,950,643 | <p>Here is a more efficient and more stable implementation. Assuming <code>zi</code> and <code>zj</code> are interlaced!</p>
<pre><code>class NT_Xent(tf.keras.layers.Layer):
""" Normalized temperature-scaled CrossEntropy loss [1]
[1] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, “A simple... | 2021-01-29 07:55:30.137000+00:00 | 2021-12-27 15:29:01.250000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | 62,793,043 | <p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p>
<p>I have managed to create a numpy v... | 2020-07-08 10:45:50.177000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | python|tensorflow|scikit-learn|backpropagation|cosine-similarity | ['https://github.com/gabriel-vanzandycke/tf_layers'] | 1 |
62,793,878 | <p>I managed to figure it out myself!
I did not realise there was a Tensorflow implementation of the cosine similarity function "tf.keras.losses.CosineSimilarity"</p>
<p>Here is my code:</p>
<pre><code>import tensorflow as tf
# Define the contrastive loss function, NT_Xent (Tensorflow version)
def NT_Xent_tf... | 2020-07-08 11:33:39.013000+00:00 | 2020-07-08 11:33:39.013000+00:00 | null | null | 62,793,043 | <p>As the title suggests, I'm trying train a model based on the SimCLR framework (seen in this paper: <a href="https://arxiv.org/pdf/2002.05709.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/2002.05709.pdf</a> - the NT_Xent loss is stated in equation (1) and Algorithm 1).</p>
<p>I have managed to create a numpy v... | 2020-07-08 10:45:50.177000+00:00 | 2021-12-27 15:29:01.250000+00:00 | null | python|tensorflow|scikit-learn|backpropagation|cosine-similarity | [] | 0 |
69,420,043 | <p>Hyperparameter tuning is typically done on the validation set of a train-val-test split, where each split will have something along the lines of 70%, 10%, and 20% of the entire dataset respectively. As a baseline, random search can be used while <a href="https://arxiv.org/abs/1206.2944" rel="nofollow noreferrer">Bay... | 2021-10-02 20:29:58.507000+00:00 | 2021-10-02 20:29:58.507000+00:00 | null | null | 69,419,809 | <p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p>
<p>And after lots of googling I found new & good library called ultraopt, but problem is that how much amount of fragment of data should I use from my total data (~150 GB) for hyperparameter tunin... | 2021-10-02 19:55:56.360000+00:00 | 2021-10-03 08:58:59.760000+00:00 | 2021-10-03 08:58:59.760000+00:00 | python|performance|machine-learning|large-data|hyperparameters | ['https://arxiv.org/abs/1206.2944', 'https://scikit-optimize.github.io/stable/auto_examples/bayesian-optimization.html'] | 2 |
69,420,445 | <p>A good python library for hyper-parameter tuning is <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow noreferrer"><code>keras tuner</code></a>. You can utilize different tuners in this library, but for the large data, as you've mentioned, <a href="https://arxiv.org/pdf/1603.06560.pdf" rel="nofollow norefe... | 2021-10-02 21:37:08.380000+00:00 | 2021-10-02 21:37:08.380000+00:00 | null | null | 69,419,809 | <p>I almost finished my time series model, collected enough data and now I am stuck at hyperparameter optimization.</p>
<p>And after lots of googling I found new & good library called ultraopt, but problem is that how much amount of fragment of data should I use from my total data (~150 GB) for hyperparameter tunin... | 2021-10-02 19:55:56.360000+00:00 | 2021-10-03 08:58:59.760000+00:00 | 2021-10-03 08:58:59.760000+00:00 | python|performance|machine-learning|large-data|hyperparameters | ['https://arxiv.org/pdf/1603.06560.pdf', 'https://arxiv.org/pdf/1603.06560.pdf'] | 2 |
64,588,418 | <p>Remember that fine-tuning a pre-trained model like Bert usually requires a much smaller number of epochs than models trained from scratch. In fact <a href="https://arxiv.org/pdf/1810.04805.pdf" rel="nofollow noreferrer">the authors of Bert recommend between 2 and 4 epochs</a>. Further training often translates to ov... | 2020-10-29 09:37:58.480000+00:00 | 2020-10-29 09:37:58.480000+00:00 | null | null | 63,096,908 | <p>I'm training a classification model with custom layers on top of BERT. During this, the training performance of this model is going down with increasing epochs ( after the first epoch ) .. I'm not sure what to fix here - is it the model or the data?</p>
<p>( for the data it's binary labels, and balanced in the numbe... | 2020-07-26 06:36:45.657000+00:00 | 2020-11-03 23:43:59.867000+00:00 | 2020-11-03 23:43:59.867000+00:00 | tensorflow|machine-learning|nlp|language-model | ['https://arxiv.org/pdf/1810.04805.pdf'] | 1 |
52,443,301 | <p>In the context of autoencoders the input and output of the model is the same. So, if the input values are in the range [0,1] then it is acceptable to use <code>sigmoid</code> as the activation function of last layer. Otherwise, you need to use an appropriate activation function for the last layer (e.g. <code>linear<... | 2018-09-21 11:58:45.940000+00:00 | 2019-06-25 07:22:23.607000+00:00 | 2019-06-25 07:22:23.607000+00:00 | null | 52,441,877 | <p>I wrote a vanilla autoencoder using only <code>Dense</code> layer.
Below is my code:</p>
<pre class="lang-py prettyprint-override"><code>iLayer = Input ((784,))
layer1 = Dense(128, activation='relu' ) (iLayer)
layer2 = Dense(64, activation='relu') (layer1)
layer3 = Dense(28, activation ='relu') (layer2)
layer4 =... | 2018-09-21 10:35:59.863000+00:00 | 2019-06-25 07:22:23.607000+00:00 | 2018-09-21 12:37:43.647000+00:00 | machine-learning|neural-network|keras|autoencoder|cross-entropy | ['https://blog.keras.io/building-autoencoders-in-keras.html', 'https://arxiv.org/abs/1708.08487', 'https://www.youtube.com/watch?v=xTU79Zs4XKY', 'http://www.dmi.usherb.ca/~larocheh/index_en.html', 'https://youtu.be/xTU79Zs4XKY?t=330'] | 5 |
44,936,598 | <p>From what I read, I'd be surprised if they're using neural networks. Here's how they say they detect anomalies:</p>
<blockquote>
<p>Detect outliers in a population by building a profile of a “typical” user or machine to know when one starts to stray from the pack.</p>
</blockquote>
<p>Doing anomaly detection lik... | 2017-07-05 21:53:38.927000+00:00 | 2017-07-05 21:53:38.927000+00:00 | null | null | 44,935,473 | <p>I'm very impressed from the new <a href="https://www.elastic.co/products/x-pack/machine-learning" rel="nofollow noreferrer">x-pack</a> ML of the elastic stack. It seems their technique learns data patterns over time and can predict anomalies in multiple domains.</p>
<p><a href="https://i.stack.imgur.com/TrGx5.png" ... | 2017-07-05 20:30:32.270000+00:00 | 2018-07-23 16:14:58.170000+00:00 | 2017-07-06 06:39:31.113000+00:00 | elasticsearch|machine-learning|anomaly-detection|rnn | ['https://arxiv.org/abs/1706.03762'] | 1 |
49,606,866 | <blockquote>
<ol>
<li>Use Convolution2D layers and LSTM layer</li>
</ol>
</blockquote>
<p>In this technique, you stack convolution and LSTM layers. The convolutional layers help you to learn the spatial features and the LSTM helps you learn the correlation in time.</p>
<blockquote>
<p>2.Use ConvLSTM2D</p>
</b... | 2018-04-02 07:11:14.890000+00:00 | 2018-04-11 08:01:36.633000+00:00 | 2018-04-11 08:01:36.633000+00:00 | null | 49,603,498 | <p>Are <code>1</code> and <code>2</code> the same?</p>
<ol>
<li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li>
<li>Use <code>ConvLSTM2D</code></li>
</ol>
<p>If there is any difference, could you explain it for me?</p> | 2018-04-01 23:14:53.400000+00:00 | 2018-04-13 16:40:43.003000+00:00 | 2018-04-13 16:40:43.003000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1506.04214', 'https://stackoverflow.com/questions/49468918/appplication-of-convlstm2d-layers/49472074#49472074'] | 2 |
49,770,553 | <p>They are not exactly the same, here is why:</p>
<h3>1. Use <code>Convolution2D</code> layers and <code>LSTM</code> layers</h3>
<p>As it is known, <code>Convolution2D</code> serves well for capturing image or spatial features, whilst <code>LSTM</code> are used to detect correlations over time. However, by stacking ... | 2018-04-11 08:46:57.363000+00:00 | 2018-04-11 08:46:57.363000+00:00 | null | null | 49,603,498 | <p>Are <code>1</code> and <code>2</code> the same?</p>
<ol>
<li>Use <code>Convolution2D</code> layers and <code>LSTM</code> layers </li>
<li>Use <code>ConvLSTM2D</code></li>
</ol>
<p>If there is any difference, could you explain it for me?</p> | 2018-04-01 23:14:53.400000+00:00 | 2018-04-13 16:40:43.003000+00:00 | 2018-04-13 16:40:43.003000+00:00 | python|tensorflow|keras | ['https://arxiv.org/abs/1506.04214v1', 'https://keras.io/layers/recurrent/#convlstm2d', 'https://github.com/keras-team/keras/blob/master/keras/layers/recurrent.py', 'https://github.com/keras-team/keras/blob/master/keras/layers/convolutional_recurrent.py'] | 4 |
65,766,155 | <p>As Ivan already noted you have a class imbalance problem. This can be resolved via:</p>
<ol>
<li><p><strong>Online hard negative mining:</strong> at each iteration after computing the loss, you can sort all elements in the batch belonging to "no DR" class and keep only the worst <code>k</code>. Then you es... | 2021-01-17 21:30:06.740000+00:00 | 2021-01-18 07:31:56.007000+00:00 | 2021-01-18 07:31:56.007000+00:00 | null | 65,762,961 | <p>I am relatively new to the deep learning landscape, so please don't be as mean as Reddit! It seems like a general question so I won't be giving my code here as it doesn't seem necessary (if it is, here's the link to <a href="https://colab.research.google.com/drive/1x6DOqow1dnZvy29_UZ3nHAKZLaNKr9hM?usp=sharing" rel="... | 2021-01-17 16:12:12.567000+00:00 | 2022-08-05 16:05:04.970000+00:00 | 2021-01-18 00:28:51.830000+00:00 | python|deep-learning|pytorch|conv-neural-network | ['https://arxiv.org/abs/1604.03540', 'https://stackoverflow.com/a/52161194/1714410', 'https://stackoverflow.com/a/58213245/1714410', 'https://stackoverflow.com/a/64365532/1714410'] | 4 |
29,876,936 | <p>Solved it by giving absolute path. I was trying all combinations of paths and also gave absolute path /var/www/arxiv/static/data/name.json and it worked.</p> | 2015-04-26 11:18:38.003000+00:00 | 2015-04-26 11:18:38.003000+00:00 | null | null | 29,876,315 | <p>I am using f = open('name.json','w+') to create a new file and write to it. But i am unable to create the file. Apache server logs show "No such file exists."</p> | 2015-04-26 10:10:09.927000+00:00 | 2015-04-26 11:18:38.003000+00:00 | 2015-04-26 10:31:43.320000+00:00 | python|apache|flask | [] | 0 |
39,249,179 | <p>You will need backtracking, because it is possible to add numbers to the Sudoku board which don't violate any rules immediately, but which will lead to a contradiction later on. If you take any unique-solution Sudoku problem and arbitrarily place any number anywhere, you are bound to experience just this.</p>
<p>I ... | 2016-08-31 12:04:53.170000+00:00 | 2016-08-31 12:04:53.170000+00:00 | 2017-05-23 11:47:14.977000+00:00 | null | 39,246,124 | <p>I am trying to to generate a Sudoku board using this script:</p>
<p>The problem is that I don't know how to validate to generate unique numbers on columns and squares.</p>
<p>Actually is just validating and generating unique numbers only on rows.</p>
<p>Here is that code :
<div class="snippet" data-lang="js" data... | 2016-08-31 09:43:43.907000+00:00 | 2018-08-16 16:38:54.950000+00:00 | 2018-08-16 16:38:54.950000+00:00 | javascript|arrays|validation|math|sudoku | ['https://arxiv.org/abs/cs/0011047', 'https://stackoverflow.com/questions/tagged/sudoku?sort=votes'] | 2 |
End of preview.
StackExchange Dataset
Working doc: https://docs.google.com/document/d/1h585bH5sYcQW4pkHzqWyQqA4ape2Bq6o1Cya0TkMOQc/edit?usp=sharing
BigQuery query (see so_bigquery.ipynb):
```sql
CREATE TEMP TABLE answers AS
SELECT *
FROM `bigquery-public-data.stackoverflow.posts_answers`
WHERE LOWER(Body) LIKE '%arxiv%';

CREATE TEMPORARY TABLE questions AS
SELECT *
FROM `bigquery-public-data.stackoverflow.posts_questions`;

SELECT *
FROM answers
JOIN questions ON questions.id = answers.parent_id;
```
NOTE: BigQuery only has the StackOverflow site data, not the other sites. So if we want to query the other sites, we would probably want to download the data dump to a cluster and run a SQL server.
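For the non-StackOverflow sites mentioned in the note, the same join-and-filter can be reproduced over a local dump with stdlib SQLite — a minimal sketch, with invented sample rows and table names mirroring the BigQuery query above:

```python
import sqlite3

# In-memory stand-in for a loaded StackExchange data dump.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE posts_answers (id INTEGER, parent_id INTEGER, body TEXT);
CREATE TABLE posts_questions (id INTEGER, body TEXT);
INSERT INTO posts_answers VALUES
  (10, 1, 'See https://arxiv.org/abs/1207.2189 for details.'),
  (11, 2, 'No external reference here.');
INSERT INTO posts_questions VALUES
  (1, 'How do I compress bitmaps?'),
  (2, 'Unrelated question.');
""")

# Same shape as the BigQuery query: keep only answers mentioning arxiv,
# then join each surviving answer to its parent question.
rows = conn.execute("""
SELECT a.id, a.parent_id, q.body
FROM posts_answers a
JOIN posts_questions q ON q.id = a.parent_id
WHERE LOWER(a.body) LIKE '%arxiv%'
""").fetchall()
print(rows)  # only the answer citing arxiv survives the filter
```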
Columns in the raw query output:

```
'id',
'title',
'body',
'accepted_answer_id',
'answer_count',
'comment_count',
'community_owned_date',  # present only if post is community wiki'd
'creation_date',
'favorite_count',
'last_activity_date',
'last_edit_date',
'last_editor_display_name',
'last_editor_user_id',
'owner_display_name',
'owner_user_id',
'parent_id',    # if post is answer, then this is the question id; if post is question, this is None
'post_type_id', # 1 = QUESTION, 2 = ANSWER
'score',
'tags',
'view_count',
```
[Official database schema](https://meta.stackexchange.com/questions/2677/database-schema-documentation-for-the-public-data-dump-and-sede/2678#2678)
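The `_arxiv_links` and `_n_arxiv_links` columns visible in the preview suggest the links were extracted from each answer body. A hypothetical sketch of that extraction — the regex is an assumption for illustration, not the dataset's actual extraction code:

```python
import re

# Assumed pattern: stop a link at whitespace, quotes, or HTML angle brackets.
ARXIV_RE = re.compile(r'https?://arxiv\.org/[^\s"\'<>]+')

def arxiv_links(body: str):
    """Return (list of arxiv links, count) for one answer body."""
    links = ARXIV_RE.findall(body)
    return links, len(links)

body = '<p>See <a href="http://arxiv.org/abs/1207.2189">this paper</a>.</p>'
links, n = arxiv_links(body)
print(links, n)
```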
File structure
Each folder represents a different StackExchange site (~182 sites in total). The largest is StackOverflow.