Dataset columns (name · dtype · value range or string-length range):
Unnamed: 0 · int64 · 0 to 378k
id · int64 · 49.9k to 73.8M
title · string · lengths 15 to 150
question · string · lengths 37 to 64.2k
answer · string · lengths 37 to 44.1k
tags · string · lengths 5 to 106
score · int64 · -10 to 5.87k
377,700
45,800,520
Specify Multi-Level columns using pd.read_clipboard?
<p>Here's some data from another question:</p> <pre><code>main Meas1 Meas2 Meas3 Meas4 Meas5 sublvl Value Value Value Value Value count 7.000000 1.0 1.0 582.00 97.000000 mean 30 37.0 26.0 33.03 16.635350 </code></pre> <p>I would like to read in this data in s...
<pre><code>In [17]: pd.read_clipboard(sep='\s+', index_col=[0], header=[0,1]) Out[17]: main Meas1 Meas2 Meas3 Meas4 Meas5 sublvl Value Value Value Value Value count 7.0 1.0 1.0 582.00 97.00000 mean 30.0 37.0 26.0 33.03 16.63535 </code></pre>
python|pandas|dataframe
6
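The answer's `read_clipboard` call forwards its keyword arguments to the same parser as `read_csv`, so the `header=[0,1]` / `index_col=0` combination can be sketched without a clipboard, using `StringIO` as a stand-in for the copied text:

```python
from io import StringIO

import pandas as pd

# Stand-in for the clipboard contents from the question (two header rows).
text = """\
main Meas1 Meas2 Meas3
sublvl Value Value Value
count 7.000000 1.0 1.0
mean 30 37.0 26.0
"""

# header=[0, 1] builds a two-level column MultiIndex;
# index_col=0 uses the first column ('main'/'sublvl') as the row index.
df = pd.read_csv(StringIO(text), sep=r"\s+", index_col=0, header=[0, 1])
```

The same keyword arguments passed to `pd.read_clipboard` reproduce the answer's result.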
377,701
45,793,412
Structure dataset from rows to columns pandas python
<p>I have a dataframe like the following with many feature columns but only 3 mentioned below:</p> <pre><code>productid |feature1 |value1 |feature2 |value2 | feature3 |value3 100001 |weight | 130g | |price |$140.50 100002 |weight | 200g |pieces |12 p...
<p>Here's a reproducible example, check the comments for details. </p> <pre><code>import pandas as pd from StringIO import StringIO data = """ productid|feature1|value1|feature2|value2|feature3|value3 100001|weight|130g|||price|$140.50 100002|weight|200g|pieces|12pcs|dimensions|150X75cm 100003|dimensions|70X30cm|pr...
python|pandas|dataframe
1
377,702
45,751,891
There are two formats of time-series datetimes in the same series; how can I change them to one format?
<p>I want to split a time series into two set: train and test. Here's my code:</p> <pre><code>train = data.iloc[:1100] test = data.iloc[1101:] </code></pre> <p>Here's what the time series looks like: <a href="https://i.stack.imgur.com/nMo1R.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nMo1R.png"...
<p>Consider the simplified series <code>s</code></p> <pre><code>s = pd.Series(1, pd.date_range('2010-08-16', periods=5, freq='12H')) s 2010-08-16 00:00:00 1 2010-08-16 12:00:00 1 2010-08-17 00:00:00 1 2010-08-17 12:00:00 1 2010-08-18 00:00:00 1 Freq: 12H, dtype: int64 </code></pre> <p>But when I subs...
python-2.7|pandas|datetime|timestamp|time-series
2
377,703
46,093,577
what is the equivalent of theano.tensor.clip in pytorch?
<p>I want to clip my tensor (not gradient) values to some range. Is there any function in pytorch like there is a function theano.tensor.clip() in theano?</p>
<p>The function you are searching for is called <code>torch.clamp</code>. You can find the documentation <a href="http://pytorch.org/docs/master/torch.html#torch.clamp" rel="nofollow noreferrer">here</a></p>
theano|pytorch
2
377,704
46,153,647
KeyError: 0 when accessing value in pandas series
<p>In my script I have df['Time'] as shown below.</p> <pre><code>497 2017-08-06 11:00:00 548 2017-08-08 15:00:00 580 2017-08-10 04:00:00 646 2017-08-12 23:00:00 Name: Time, dtype: datetime64[ns] </code></pre> <p>But when i do </p> <pre><code>t1=pd.Timestamp(df['Time'][0]) </code></pre> <p>I get an error ...
<p>You're looking for <code>df.iloc</code>.</p> <pre><code>df['Time'].iloc[0] </code></pre> <p><code>df['Time'][0]</code> would've worked if your series had an index beginning from <code>0</code></p> <p>And if need scalar only use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iat.html"...
python|pandas|indexing|series|keyerror
37
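A minimal sketch of the distinction the answer draws, on a series whose integer index does not start at 0 (index values taken from the question; the datetime strings are abbreviated stand-ins):

```python
import pandas as pd

# A series indexed by arbitrary row labels, as in the question.
s = pd.Series(['2017-08-06 11:00:00', '2017-08-08 15:00:00'],
              index=[497, 548], name='Time')

first = s.iloc[0]      # positional: the first element regardless of labels
by_label = s.loc[497]  # label-based lookup

# s[0] is label-based for an integer index, and 0 is not a label here,
# so it raises the KeyError from the question.
try:
    s[0]
    raised = False
except KeyError:
    raised = True
```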
377,705
45,853,159
Python / Pandas - Calculating ratio
<p>I have this dataframe:</p> <pre><code>bal: year id unit period Revenues Ativo Não-Circulante \ business_id 9564 2012 302 dsada anual 5964168.52 10976013.70 9564 2011 303 dsada a...
<p>I'll assume your dataframe is named <code>df</code>. First rest your index so that <code>business_id</code> is a column, then sort the result on <code>year</code>. Now group the dataframe on <code>business_id</code> and transform the result to get the percent change in revenues. Finally, resort the index to get th...
python|pandas
1
377,706
45,899,613
Divide certain columns by another column in pandas
<p>I was wondering if there is a more efficient way of dividing multiple columns by a certain column. For example, say I have:</p> <pre><code>prev open close volume 20.77 20.87 19.87 962816 19.87 19.89 19.56 668076 19.56 19.96 20.1 578987 20.1 20.4 20.53 418597 </code></pre> <p>And I wou...
<pre><code>df2[['open','close']] = df2[['open','close']].div(df2['prev'].values,axis=0) </code></pre> <p>Output:</p> <pre><code> prev open close volume 0 20.77 1.004815 0.956668 962816 1 19.87 1.001007 0.984399 668076 2 19.56 1.020450 1.027607 578987 3 20.10 1.014925 1.021393 418597 </cod...
python|pandas|dataframe
9
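The answer's `div` call, made self-contained with the sample numbers from the question:

```python
import pandas as pd

df2 = pd.DataFrame({'prev':   [20.77, 19.87, 19.56, 20.10],
                    'open':   [20.87, 19.89, 19.96, 20.40],
                    'close':  [19.87, 19.56, 20.10, 20.53],
                    'volume': [962816, 668076, 578987, 418597]})

# Divide several columns by 'prev' at once; axis=0 aligns row-wise.
df2[['open', 'close']] = df2[['open', 'close']].div(df2['prev'], axis=0)
```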
377,707
46,081,177
Save data frame as csv/text file in pandas without line numbering
<p>I have created a data frame using a text file in pandas.</p> <pre><code>df = pd.read_table('inputfile.txt',names=['Line']) </code></pre> <p>when I do <code>df</code></p> <pre><code>Line 0 17/08/31 13:24:48 INFO spark.SparkContext: Run... 1 17/08/31 13:24:49 INFO spark.SecurityManager: ... 2 17/08/31 13:24:4...
<p>James' answer is likely correct given the special case. However, there's the standard behaviour of <code>pandas</code> to put the line number as a column without header in front. To remove this, simply set the <code>index=</code> argument to <code>None</code>:</p> <pre><code>df.to_csv("outfile.csv", index=None) </c...
python|pandas
6
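A sketch of the fix using an in-memory buffer in place of `outfile.csv`; `index=False` (equivalent to the `index=None` in the answer) drops the leading row numbers, and `header=False` would additionally drop the `Line` column name. The log lines are invented stand-ins:

```python
from io import StringIO

import pandas as pd

df = pd.DataFrame({'Line': ['17/08/31 13:24:48 INFO one',
                            '17/08/31 13:24:49 INFO two']})

buf = StringIO()               # stands in for 'outfile.csv'
df.to_csv(buf, index=False)    # no leading row numbers
# df.to_csv(buf, index=False, header=False) would drop the column name too
```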
377,708
45,949,160
Finding z-scores of data in a test dataframe in Pandas
<p>I have data that is grouped, and split into training and test sets. I am looking to compute <code>z</code>-scores. On the training set, this is easy, as I can use built-in functions to compute the mean and standard deviation. </p> <p>Here is an example, where I am looking for the z-scores by place: import panda...
<p><strong>Option 1</strong><br> <code>pd.Series.map</code> </p> <pre><code>test.assign(z= (test.temp - test.place.map(summary['mean'])) / test.place.map(summary['std']) ) place temp z 0 Winterfell 6 0.244977 1 Winterfell -8 -0.351488 2 Dorne 100 0.000000 </code></pre> <hr> ...
python|pandas|dataframe
4
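The map-based option from the answer, sketched end to end; the place names and temperatures are invented stand-ins for the question's data, and the group statistics come from the training set only:

```python
import pandas as pd

train = pd.DataFrame({'place': ['Winterfell', 'Winterfell', 'Dorne', 'Dorne'],
                      'temp':  [0, 10, 95, 105]})
test = pd.DataFrame({'place': ['Winterfell', 'Dorne'],
                     'temp':  [6, 100]})

# Per-group mean/std learned on the training set.
summary = train.groupby('place')['temp'].agg(['mean', 'std'])

# Map the training statistics onto the test rows by group label.
test['z'] = ((test['temp'] - test['place'].map(summary['mean']))
             / test['place'].map(summary['std']))
```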
377,709
45,862,139
Extracting slices that are identical between two dataframes
<p>How can I combine 2 dataframe <code>df1</code> and <code>df2</code> in order to get <code>df3</code> that has the rows of <code>df1</code> and <code>df2</code> that have the same index (and the same values in the columns)? </p> <pre><code>df1 = pd.DataFrame({'A': ['A0', 'A2', 'A3', 'A7'], 'B...
<p>First, get the intersection of indices. Next, find all rows where all the columns are identical, and then just index into either dataframe.</p> <pre><code>idx = df1.index &amp; df2.index df_out = df1.loc[idx][(df1.loc[idx] == df2.loc[idx]).all(1)] print(df_out) </code></pre> <p>You can also use <code>df.isin</co
python|pandas|dataframe
1
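The intersect-then-compare idea can be sketched on made-up frames, applying the boolean "all columns equal" mask directly to the common-index rows:

```python
import pandas as pd

df1 = pd.DataFrame({'A': ['A0', 'A2', 'A3', 'A7'],
                    'B': ['B0', 'B2', 'B3', 'B7']}, index=[0, 2, 3, 7])
df2 = pd.DataFrame({'A': ['A0', 'A9', 'A3'],
                    'B': ['B0', 'B9', 'B3']}, index=[0, 2, 3])

# Rows present in both frames ...
idx = df1.index.intersection(df2.index)
# ... and identical in every column.
same = (df1.loc[idx] == df2.loc[idx]).all(axis=1)
df3 = df1.loc[idx][same]
```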
377,710
46,079,644
Tensorflow "Attempting to use uninitialized value ..." Error when restoring
<p>I made a RNN model. After training, I saved it in <code>tf.Session()</code> like this.</p> <pre><code>#main.py ...(training) saver = tf.train.Saver() save_path = saver.save(sess, "Save data/RNN-model.ckpt") </code></pre> <p>and in 'run.py' I tried to restore saved data.</p> <pre><code>#run.py ... with tf.Session(...
<p>The <code>accuracy</code> operation contains some local variable which is not part of the graph, so it should be initialized manually. adding <code>sess.run(tf.local_variables_initializer())</code> after<code>restore</code> will initialize the local variables.</p>
python|tensorflow|initialization
1
377,711
45,789,351
Tensorflow using Python programming
<p>I'm really a newbie in Python programming, especially the TensorFlow concepts. I already installed TensorFlow on my PC, but when I run a simple program to print "Hello Tensorflow", something annoys me: the output always appears with " b' ", like this picture. <a href="https://i.stack.imgur.com/SCtH6.png" rel="nofoll
<p>In python 3 there are two types of strings.</p> <ol> <li>byte strings</li> <li>strings </li> </ol> <p>byte strings are array of characters which are prefixed by <code>b'</code>. In order to convert byte into string one needs to decode it. byte instances have method <code>decode</code> that will convert the byte t...
python|tensorflow
2
377,712
46,045,096
tensorflow windows create own plugin
<p>I have tensorflow+gpu successfully built on windows 10 with visual studio 2015, from the source code.</p> <p>As a result, I get <code>tensorflow.dll</code> and <code>tensorflow.lib</code>. I have <code>CUDA8.0</code> and <code>cudnn 5.0</code>; with a gtx 1080 gpu equipped.</p> <p>However, my question is not about...
<p>Loading custom op libraries via tf.load_op_library() is not supported on Windows (at least with TensorFlow 1.8). The workaround is to add your custom op into the TensorFlow library itself. Follow the example of tf.user_ops.my_fact implemented in tensorflow\tensorflow\core\user_ops\fact.cc:</p> <ol> <li>Put your C++...
windows|tensorflow
1
377,713
45,799,017
Inplace Forward Fill on a multi-level column dataframe
<p>I have the following dataframe: </p> <pre><code>arrays = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second']) df = pd.DataFrame(np.random.randn(3, 8), ind...
<p>I believe the problem comes due to the <code>inplace=True</code> setting. Try accessing the slice with <code>df.loc</code> and then assigning the <code>ffill</code>ed dataframe slice back:</p> <pre><code>df.loc[:, ["baz", "foo"]] = df[["baz", "foo"]].ffill() </code></pre> <p>Output:</p> <pre><code>first b...
python|pandas|dataframe|hierarchical|fillna
1
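A compact sketch of the `.loc`-assignment fix on a small two-level frame (the column labels follow the question; the values are invented so the fill is visible):

```python
import numpy as np
import pandas as pd

cols = pd.MultiIndex.from_product([['bar', 'baz', 'foo'], ['one', 'two']],
                                  names=['first', 'second'])
df = pd.DataFrame([[1.0, np.nan, 2.0, np.nan, 5.0, np.nan],
                   [np.nan, 3.0, np.nan, np.nan, np.nan, 6.0]],
                  columns=cols)

# Forward-fill only the 'baz' and 'foo' blocks and assign back via .loc,
# instead of calling ffill(inplace=True) on a slice.
df.loc[:, ['baz', 'foo']] = df[['baz', 'foo']].ffill()
```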
377,714
46,084,008
How to groupby in pandas dataframe by one column only if other column value is not same
<p>I have a dataframe as follows:</p> <pre><code>df ID first last 0 123 Joe Thomas 1 456 James Jonas 2 675 James Jonas 3 457 James Thomas </code></pre> <p>I want an output as follows:</p> <pre><code>{'Thomas': [123, 457], 'James':[675, 457]} </code></pre> <p>such that for all the rows ...
<p>Building off of the answer that @Abdou provided in the comment, I can confirm this works in Python version 2.7.13 using Pandas version 0.20.1, and also in Python version 3.6.2 using Pandas version 0.20.3:</p> <pre><code>from __future__ import division, print_function import pandas as pd import sys def main(): ...
python|pandas|dictionary|dataframe|group-by
0
377,715
45,824,837
Tensorflow and Numpy missing
<p>I am using Ubuntu 14.04. I am trying to use the tensorflow module, but although I have it installed, and installed it the same way I would install any other pkg or module, it is not recognized by python as being installed. Even though pip says it is installed... I am not sure what the hell is going on.</p> <p>See f...
<p>Just because you source your virtualenv, this doesn't mean the 'pip' command will reference the pip library of the virtualenv. The 'pip' command is more than likely still linked to your default python interpreter.</p> <p>You can try the following to get it working:</p> <p>Start by uninstalling both modules:</p> <...
python|numpy|tensorflow|virtualenv|python-2.x
1
377,716
46,071,998
TensorFlow: How to use 'num_epochs' in a string_input_producer
<p>I can't enable epoch limits on my string_input_producer without getting a OutOfRange error (requested x, current size 0). It doesn't seem to matter how many elements I request, there is always 0 available. </p> <p>Here is my FileQueue builder:</p> <pre><code>def get_queue(base_directory): files = [f for f in o...
<p>The issue was caused by this: <a href="https://github.com/tensorflow/tensorflow/issues/1045" rel="nofollow noreferrer">Issue #1045</a></p> <p>For whatever reason, tf.global_variable_initialiser does not initialise all variables. You need to initialise the local variables too.</p> <p>Add</p> <pre><code>sess.run(tf...
python|tensorflow
1
377,717
45,957,672
python: dictionary and numpy.array issue
<p>I have a dictionary with arrays (same length) associated to strings. My goal is to create a new dictionary with the same keys but cutting the arrays, keeping only the elements I need. I wrote a function to do it but the problem is that it returns a dictionary with the same array (correct cut length) associated to ev...
<p>In Python, every object is a pointer. So, you should have to create a new instance of <code>a</code> for each iteration of the outer <code>for</code> loop. You could do this, for example, initializing the <code>a</code> array inside of that loop, like this:</p> <pre><code>def extract_years(dic,initial_year,final_ye...
python|arrays|python-2.7|numpy|dictionary
1
377,718
45,867,202
ValueError: shapes (9,) and (4,) not aligned
<p>I am training a NN to play 2048 using reinforcement learning. Or at least I think I am, cause I am new to this.</p> <p>This is what NeuralNetwork.py looks like:</p> <pre><code>import random import numpy as np def nonlin(x, deriv=False): if(deriv==True): return x * (1-x) return 1/(1+np.exp(-x)) n...
<p>It will work if you give an explicit 1-D column representation, using <code>np.newaxis</code>. </p> <p><strong>Note:</strong> If you're looking for a scalar output, the two vectors need to be of <a href="https://en.wikipedia.org/wiki/Dot_product" rel="nofollow noreferrer">equal length</a>. The error message in OP...
python|numpy|dot-product
0
377,719
45,812,652
Delete rows in subsequences that contain leading zeros in a dataframe
<p>I have a data frame in following format with a time series</p> <pre><code>A B C 201401 201402 201403 a1 b1 c1 100 200 300 a2 b2 c2 0 250 0 </code></pre> <p>I have used <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.melt.html" rel="nofollow noreferrer">Pandas.melt</a> to...
<p>I continued with your idea of grouping and then filtering. The basic idea was to take each group and find the first non-zero Value's index assuming they are already sorted by date. And then just ungroup and clean up.</p> <pre><code>def applyFunc(row): row_values = np.array(row.Value) first_non_zero_index = ...
python-2.7|pandas|dataframe
0
377,720
46,074,863
Load saved checkpoint and predict not producing same results as in training
<p>I'm training based on a sample code I found on the Internet. The accuracy in testing is at 92% and the checkpoints are saved in a directory. In parallel (the training is running for 3 days now) I want to create my prediction code so I can learn more instead of just waiting.</p> <p>This is my third day of deep learn...
<p>In the repo you provided, the training and validation sentences are inverted before being fed into the model (as commonly done in seq2seq learning).</p> <pre><code>dataset = DataSet(DATASET_FILENAME) </code></pre> <p>As you can see, the default value for <code>inverted</code> is <code>True</code>, and the question...
python|tensorflow|deep-learning|keras
2
377,721
23,383,253
"tuple index out of range" reading pandas pickled Panel
<p>data is a pandas Panel</p> <p><code>data &lt;class 'pandas.core.panel.Panel'&gt; Dimensions: 16 (items) x 1954 (major_axis) x 6 (minor_axis) Items axis: ADRE to SPY Major_axis axis: 2004-12-01 00:00:00+00:00 to 2012-08-31 00:00:00+00:00 Minor_axis axis: open to price</code></p> <p>Save to disk</p> <pre><code>pand...
<p>pickles are saved by:</p> <pre><code> panel.to_pickle('file_name.pkl') </code></pre> <p>you don't appear to be using a string filename and are adding an extra (non quoted) argument.</p> <p>reading is using a quote filename as well</p> <pre><code> pd.read_pickle('file_name.pkl') </code></pre> <p>On python 27-32 ...
python-2.7|pandas
2
377,722
23,123,625
Different result of code example on book: Python for Data Analysis
<p>I have a question on a book "Python for Data Analysis" if anyone is interested in this book.</p> <p>After running an example on page 244 <em>(Plotting Maps: Visualizing Haiti Earthquake Crisis Data)</em>, my result of dummy_frame.ix doesn't look the same as what the book says as below:</p> <pre><code>dummy_frame =...
<p>That is the same result as far as I can see, Pandas has been changing the default display of dataframes, the example in the book is the summary display and the display you got is the newer format that displays the begin/ end.. read up on display options... In the appropriate version doc that you are using</p>
python|pandas|data-analysis
2
377,723
23,308,578
How to find numpy.argmax() on part of list and save index?
<p>I have:</p> <pre><code>array = [1, 2, 3, 4, 5, 6, 7, 8]; </code></pre> <p>I need to find <code>numpy.argmax</code> only for the last 4 elements of the array.</p> <p>This does not work, because the index is lost:</p> <pre><code>&gt;&gt;&gt; array = [1, 2, 3, 4, 5, 6, 7, 8]; &gt;&gt;&gt; print (array[4:8]); [5, 6, 7, 8] ...
<p>The simple approach would be to, I dunno, just add a 4 to the output? Assuming it isn't always 4, you could always do this:</p> <p><code>print np.argmax(array[x : 8]) + x</code></p>
python|numpy
3
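The offset trick from the answer as a runnable sketch, parameterised on the slice start `x`:

```python
import numpy as np

array = np.array([1, 2, 3, 4, 5, 6, 7, 8])
x = 4  # start of the slice we care about

# argmax of the slice is relative to the slice;
# add x back to recover the index in the full array.
idx = np.argmax(array[x:8]) + x
```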
377,724
23,267,054
How to combine two sine waves without cracks
<p>I'm using Python, pyaudio and scipy and I would like to combine two sine waves (two tones) in a way that one tone is played after another (create melody). Let's assume that I have two arrays: <code>tone1</code> and <code>tone2</code>.</p> <p><code>tone1</code> contains data of sine wave with frequency of 350 Hz. <co...
<p>Append them together and apply a Fourier Transform smoothing filter. In the regions with a single tone, the Fourier transform will have only one component, and the filter will do nothing; whereas in the transition region you will get both components (plus the crap coming from the jump), that the filter would hopeful...
python|numpy|scipy|signals|sine-wave
1
377,725
23,390,455
Multiply number of distances in distance matrix prior to histogram binning
<p>I am using scipy.spatial.distance.pdist to calculate the distances from an array of coordinates followed by numpy.histogram to bin the results. Currently this treats each coordinate as though one object were there, however I have multiple objects at that same coordinate. One option is to change the arrays so that ea...
<p>Something like this might work:</p> <pre><code>from scipy.spatial import distance positions = np.random.rand(10, 2) counts = np.random.randint(1, 5, len(positions)) distances = distance.pdist(positions) i, j = np.triu_indices(len(positions), 1) bins = np.linspace(0, 1, 10) h, b = np.histogram(distances, bins=bin...
python|numpy|scipy|histogram
1
377,726
35,563,351
Pandas Split Column String and Plot unique values
<p>I have a dataframe <code>Df</code> that looks like this:</p> <pre><code> Country Year 0 Australia, USA 2015 1 USA, Hong Kong, UK 1982 2 USA 2012 3 USA 1994 4 USA, France 2013 ...
<p>You could use the vectorized <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.split.html" rel="nofollow noreferrer">Series.str.split</a> method to split the <code>Country</code>s:</p> <pre><code>In [163]: df['Country'].str.split(r',\s+', expand=True) Out[163]: 0 ...
python|pandas|plot|bar-chart
6
377,727
35,487,005
Pandas and dataframe: How to transform a ordinal variable in a binary variable?
<p>I have a column of my dataframe <code>df = pd.read_csv('somedata')</code> namely df['rank'] which is an ordinal variable. I want to create a binary column where df['rkGood'] is equal to 1 when df['rank'] ranges from 20 to 40, and 0 otherwise.</p> <p>I am trying something like this, but it is not working:</p> <pre>...
<p>First initialize your column to zeros, then use <code>loc</code> as follows:</p> <pre><code>df['rkGood'] = 0 df.loc[(df['rank'] &gt; 20) &amp; (df['rank'] &lt;= 40), 'rkGood'] = 1 </code></pre> <p>Or...</p> <pre><code>df['rkGood'] = 0 df.loc[df['rank'].between(20, 40, inclusive=True), 'rkGood'] = 1 </code></pre>
python|pandas
2
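The same condition can be sketched without the intermediate zero column by casting the boolean mask; note the column must be accessed as `df['rank']`, because `df.rank` is the `DataFrame.rank` method:

```python
import pandas as pd

df = pd.DataFrame({'rank': [10, 25, 40, 55]})

# between() is inclusive on both ends by default.
df['rkGood'] = df['rank'].between(20, 40).astype(int)
```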
377,728
35,627,827
Numpy: How to query multiple multidimensional arrays?
<p>Assume I have 3 arrays:</p> <pre><code>a=np.array([[1,2,3], [3,4,5], [6,7,8]]) b=np.array([[1], [5], [4]]) c=np.array([[1], [2], [3]]) </code></pre> <p>Now, I want to select all rows from a, which have a matching row with b=4 and c=3.</p> <...
<p>This will do:</p> <pre><code>&gt;&gt;&gt; a=np.array([[1,2,3], ... [3,4,5], ... [6,7,8]]) &gt;&gt;&gt; &gt;&gt;&gt; b=np.array([[1], ... [5], ... [4]]) &gt;&gt;&gt; &gt;&gt;&gt; c=np.array([[1], ... [2], ... [3]]) &gt;&gt;&gt; &gt;&gt;&gt; a[...
python|numpy|scipy
4
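The row selection the answer performs can be sketched with a flattened boolean mask over the column vectors:

```python
import numpy as np

a = np.array([[1, 2, 3], [3, 4, 5], [6, 7, 8]])
b = np.array([[1], [5], [4]])
c = np.array([[1], [2], [3]])

# Rows of a whose matching row of b equals 4 AND of c equals 3.
mask = (b.ravel() == 4) & (c.ravel() == 3)
selected = a[mask]
```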
377,729
35,534,057
iterating dataframe columns after groupby
<p>I am trying to groupby csv data read into a dataframe using pandas. I am doing a groupby on the user_id column and am able to do so successfully. How can I retrieve the column data after the groupby result? My csv columns are like this: , user_id, status </p> <pre><code>import pandas as pd import csv df = pd.DataFrame(p...
<p>You are almost there</p> <pre><code>for ix, grouped_df in grouped: print grouped_df['status'] </code></pre>
python|csv|pandas
0
377,730
35,461,548
Filling data using .fillNA(), data pulled from Quandl
<p>I've pulled some stock data from Quandl for both Crude Oil prices (WTI) and Caterpillar (CAT) price. When I concatenate the two dataframes together I'm left with some NaNs. My ultimate goal is to run a .Pearsonr() to assess the correlation (along with p-values), however I can't get Pearsonr() to work because of ...
<p>pot shot - have you just forgotten to assign or use the inplace flag.</p> <pre><code>daily_price_df = daily_price_df.fillna(method='pad', limit=8) OR daily_price_df.fillna(method='pad', limit=8, inplace=True) </code></pre>
python|pandas|data-cleaning
2
377,731
35,679,118
Unable to convert MATLAB to Python code for repmat and symmetry
<p>MATLAB code:</p> <pre><code>n = 2048; d = 1; order = 2048; nn = [-(n/2):(n/2-1)]'; h = zeros(size(nn),'single'); h(n/2+1) = 1 / 4; odd = mod(nn,2) == 1; h(odd) = -1 ./ (pi * nn(odd)).^2; f_kernel = abs(fft(h))*2; filt = f_kernel(1:order/2+1)'; w = 2*pi*(0:size(filt,2)-1)/order; filt(w&gt;pi*d) = 0; ...
<p>If I were on my computer, I'd start Octave and IPython sessions, and start replicating the code line by line. I'd use smaller dimensions to easily watch the results. I'd pay special attention to shapes. Since my Matlab is rusty it is easier to do it this way than in my head. And more reliable.</p> <p><code>np...
python|matlab|python-2.7|numpy
0
377,732
35,612,629
Most efficient way to create non-redundant correlation matrix Python?
<p>I feel like numpy, scipy, or networkx has a method to do this but I just haven't figured it out yet. </p> <p><strong>My question is how to create a nonredundant correlation matrix in the form of a DataFrame on from a redundant correlation matrix for LARGE DATASETS in the MOST EFFICIENT way (In Python)?</strong> </...
<p>I think you can create array with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.tril.html#numpy.tril" rel="nofollow"><code>np.tril</code></a> and then multiple it with <code>DataFrame</code> <code>DF_1</code>:</p> <pre><code>print np.tril(np.ones(DF_1.shape)) [[ 1. 0. 0. 0. 0.] [ 1....
python|pandas|matrix|dataframe|adjacency-matrix
3
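A sketch of the `np.tril` masking idea via `DataFrame.where`, which keeps NaNs in the redundant half instead of the zeros a multiply would leave (the correlation values are invented):

```python
import numpy as np
import pandas as pd

corr = pd.DataFrame([[1.0, 0.8, 0.3],
                     [0.8, 1.0, 0.5],
                     [0.3, 0.5, 1.0]],
                    index=list('ABC'), columns=list('ABC'))

# Keep the lower triangle (incl. diagonal); blank out the mirror image.
mask = np.tril(np.ones(corr.shape, dtype=bool))
lower = corr.where(mask)
```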
377,733
35,401,041
Concatenation of 2 1D `numpy` Arrays Along 2nd Axis
<p>Executing</p> <pre><code>import numpy as np t1 = np.arange(1,10) t2 = np.arange(11,20) t3 = np.concatenate((t1,t2),axis=1) </code></pre> <p>results in a </p> <pre><code>Traceback (most recent call last): File "&lt;ipython-input-264-85078aa26398&gt;", line 1, in &lt;module&gt; t3 = np.concatenate((t1,t2),a...
<p>Your title explains it - a 1d array does not have a 2nd axis!</p> <p>But having said that, on my system as on <code>@Oliver W.</code>s, it does not produce an error</p> <pre><code>In [655]: np.concatenate((t1,t2),axis=1) Out[655]: array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18, 1...
arrays|numpy|concatenation|numpy-ndarray|index-error
19
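On current NumPy, `axis=1` on 1-D inputs raises an `AxisError` rather than silently falling back as the answer's older version did; to get a two-column result, give the arrays a second axis explicitly:

```python
import numpy as np

t1 = np.arange(1, 10)
t2 = np.arange(11, 20)

flat = np.concatenate((t1, t2))   # 1-D, along axis 0: shape (18,)
cols = np.column_stack((t1, t2))  # shape (9, 2): each input becomes a column
also = np.concatenate((t1[:, None], t2[:, None]), axis=1)  # same as cols
```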
377,734
35,513,574
Multiple Categorical Input Variables in Tensorflow
<p>I have a data-set in which each feature vector has 50 features, 45 of which are categorical. I am having trouble sending the categorical variables into tensorflow. I have found an example <a href="https://medium.com/@ilblackdragon/tensorflow-tutorial-part-3-c5fc0662bc08" rel="nofollow">tutorial for tensorflow with c...
<p>Use a categorical processor to map your categories into integers before inputting, like so</p> <pre><code>cat_processor = skflow.preprocessing.CategoricalProcessor() X_train = np.array(list(cat_processor.fit_transform(X_train))) X_test = np.array(list(cat_processor.transform(X_test))) n_classes = len(cat_processor....
python|machine-learning|tensorflow|skflow
0
377,735
35,493,275
Contouring non-uniform 2d data in python/matplotlib above terrain
<p>I am having trouble contouring some data in matplotlib. I am trying to plot a vertical cross-section of temperature that I sliced from a 3d field of temperature. </p> <p>My temperature array (T) is of size 50*300 where 300 is the number of horizontal levels which are evenly spaced. However, 50 is the number of ver...
<p>For what I understand your data is structured. Then you can directly use the <code>contourf</code> or <code>contour</code> option in <code>matplotlib</code>. The code you present have the right idea but you should use </p> <pre><code>x1, z1 = np.meshgrid(LAT, Z[:,0]) plt.contourf(x1, Z, T) </code></pre> <p>for the...
python|numpy|matplotlib|plot
1
377,736
35,757,439
would it be straight forward to implement a spatial transformer network in tensorflow?
<p>i am interested in trying things out with a spatial transformer network and I can't find any implementation of it in caffe or tensorflow, which are the only two libraries I'm interested in using. I have a pretty good grasp of tensorflow but was wondering if it would be straight forward to implement with the existin...
<p>Yes, it is very straight forward to setup the Tensorflow graph for a spatial transformer network with the existing API.</p> <p>You can find an example implementation in Tensorflow here [1].</p> <p>[1] <a href="https://github.com/daviddao/spatial-transformer-tensorflow" rel="noreferrer">https://github.com/daviddao/...
tensorflow
5
377,737
35,380,933
How to merge two pandas DataFrames based on a similarity function?
<p>Given dataset 1</p> <pre><code>name,x,y st. peter,1,2 big university portland,3,4 </code></pre> <p>and dataset 2</p> <pre><code>name,x,y saint peter3,4 uni portland,5,6 </code></pre> <p>The goal is to merge on </p> <pre><code>d1.merge(d2, on="name", how="left") </code></pre> <p>There are no exact matches on na...
<p>Did you look at <a href="https://pypi.python.org/pypi/fuzzywuzzy" rel="noreferrer">fuzzywuzzy</a>?</p> <p>You might do something like:</p> <pre><code>import pandas as pd import fuzzywuzzy.process as fwp choices = list(df2.name) def fmatch(row): minscore=95 #or whatever score works for you choice,score =...
python|pandas|merge|fuzzy-comparison
6
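`fuzzywuzzy` is a third-party package; the same match-then-merge pattern can be sketched with the standard library's `difflib.get_close_matches`, whose `cutoff` plays the role of `minscore`. The data is taken from the question; the `match` column name is invented:

```python
from difflib import get_close_matches

import pandas as pd

d1 = pd.DataFrame({'name': ['st. peter', 'big university portland'], 'x': [1, 3]})
d2 = pd.DataFrame({'name': ['saint peter', 'uni portland'], 'y': [2, 6]})

choices = list(d2['name'])

def fmatch(name, cutoff=0.6):
    # Best fuzzy match above the cutoff, else no match (NaN after map).
    hits = get_close_matches(name, choices, n=1, cutoff=cutoff)
    return hits[0] if hits else None

d1['match'] = d1['name'].map(fmatch)
merged = d1.merge(d2, left_on='match', right_on='name',
                  how='left', suffixes=('', '_d2'))
```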
377,738
35,411,879
Taking an average of an array according to another array of indices
<p>Say I have an array that looks like this:</p> <pre><code>a = np.array([0, 20, 40, 30, 60, 35, 15, 18, 2]) </code></pre> <p>and I have an array of indices that I want to average between:</p> <pre><code>averaging_indices = np.array([2, 4, 7, 8]) </code></pre> <p>What I want to do is to average the elements of arra...
<p>Here's a vectorized approach with <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.bincount.html" rel="nofollow"><code>np.bincount</code></a> -</p> <pre><code># Create "shifts array" and then IDs array for use with np.bincount later on shifts_array = np.zeros(a.size,dtype=int) shifts_array[...
python|arrays|numpy|mean
1
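An alternative sketch with `np.add.reduceat`, which sums between consecutive boundary indices; dividing by the segment lengths gives the averages (reduceat's trailing to-the-end segment is discarded). Data from the question:

```python
import numpy as np

a = np.array([0, 20, 40, 30, 60, 35, 15, 18, 2])
averaging_indices = np.array([2, 4, 7, 8])

# reduceat sums a[2:4], a[4:7], a[7:8] and then a[8:]; drop the last.
sums = np.add.reduceat(a, averaging_indices)[:-1]
lengths = np.diff(averaging_indices)
means = sums / lengths
```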
377,739
35,456,290
How to slice off 1 pixel layer from an image data array with numpy or similar
<p>I have a fits image of specified dimensions, in pixels, and I wish to slice off one pixel off the top and one on the bottom. I have attempted to use:</p> <pre><code>sliced_array = my_array[1:-1,1:0] </code></pre> <p>However, this gives me, when I query the shape of the newly sliced array using <code>print(sliced_a...
<p>You can slice top most layer and bottom most layer like this </p> <pre><code>my_array[1:-1,:] </code></pre> <p>preserving all the columns and excluding top most row and bottom most row</p>
python|arrays|numpy
2
377,740
35,493,764
After calculating a tensor, how can I show it as a image?
<p>I have one dimensional numpy array. After performing a calculation in TensorFlow, I get a <code>tf.Tensor</code> as output. I am trying to reshape it into a 2-dimensional array and show it as an image.</p> <p>If it were a numpy ndarray, I would know how to plot it as an image. But it is a tensor now!</p> <p>Althou...
<p>The immediate error you're seeing is because <code>Tensor.eval()</code> only works when there is a <a href="https://www.tensorflow.org/versions/master/api_docs/python/client.html#get_default_session" rel="nofollow">"default <code>Session</code>"</a>. This requires that either (i) you're executing in a <code>with tf....
python|arrays|image|numpy|tensorflow
3
377,741
35,660,658
find if a list of list has items from another list
<p>We have two lists: </p> <pre><code>l=["a","b","c"] s=[["a","b","c"], ["a","d","c"], ["a-B1","b","c"], ["a","e","c"], ["a_2","c"], ["a","d-2"], ["a-3","b","c-1-1","d"]] print l print s </code></pre> <p>Now, I am trying to see if each 2nd-level list of <code>s</code> has a fuzzy match to any of the items in list <code>l</cod
<p>I prefer this solution for conciseness and readability:</p> <pre><code>&gt;&gt;&gt; [all(any(x.startswith(y) for y in l) for x in sub) for sub in s] [True, False, True, False, True, False, False] </code></pre>
python|pandas
2
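The quoted comprehension runs as-is on the question's data:

```python
l = ["a", "b", "c"]
s = [["a", "b", "c"], ["a", "d", "c"], ["a-B1", "b", "c"], ["a", "e", "c"],
     ["a_2", "c"], ["a", "d-2"], ["a-3", "b", "c-1-1", "d"]]

# True when every item of the sublist starts with some item of l.
result = [all(any(x.startswith(y) for y in l) for x in sub) for sub in s]
```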
377,742
35,605,501
python where function doesn't work
<p>I'm dealing with a longitude array named <code>LON</code>, but I encountered some problems with the <code>numpy.where()</code> function.</p> <pre><code>&gt;&gt;&gt; print LON[777,777] 13.4635573678 &gt;&gt;&gt; print np.where(LON == 13.4635573678)[0] [] &gt;&gt;&gt; print np.where(LON == 13.4635573678)[1] [] </code...
<p>One way to work around this might be to use <code>np.where</code> with an approximate match:</p> <pre><code>&gt;&gt;&gt; X = np.linspace(1, 10, 100).reshape((10,10)) &gt;&gt;&gt; np.where(abs(X - 6.3) &lt; 0.1) (array([5, 5]), array([8, 9])) &gt;&gt;&gt; X[np.where(abs(X - 6.3) &lt; 0.1)] array([ 6.27272727, 6.363...
python|numpy
3
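`np.isclose` is the modern spelling of the `abs(...) < tol` trick; the array and the retyped value below are invented to show why the exact comparison finds nothing (the stored float differs from its truncated print-out):

```python
import numpy as np

X = np.linspace(1, 10, 100).reshape((10, 10))

typed = 7.9091  # a value read off a truncated print-out of X[7, 6]

exact = np.where(X == typed)  # empty: the stored float is 7.90909...
close = np.where(np.isclose(X, typed, atol=1e-3))
```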
377,743
12,030,398
concatenate multiple columns based on index in pandas
<p>As a follow up to <a href="https://stackoverflow.com/questions/12021730/can-pandas-handle-variable-length-whitespace-as-column-delimeters">this post</a>, I would like to concatenate a number of columns based on their index but I am encountering some problems. In this example I get an Attribute error related to the m...
<p>How about something like this?</p> <pre><code>&gt;&gt;&gt; from pandas import * &gt;&gt;&gt; df = DataFrame({'A':['a','b','c'], 'B':['d','e','f'], 'C':['concat','me','yo'], 'D':['me','too','tambien']}) &gt;&gt;&gt; df A B C D 0 a d concat me 1 b e me too 2 c f yo tambie...
python|pandas
8
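On modern pandas the concatenation from this answer is a one-liner; either plain `+` on string columns, or a row-wise join driven by a column list:

```python
import pandas as pd

df = pd.DataFrame({'A': ['a', 'b', 'c'], 'B': ['d', 'e', 'f'],
                   'C': ['concat', 'me', 'yo'], 'D': ['me', 'too', 'tambien']})

df['CD'] = df['C'] + df['D']                       # element-wise concat
df['CD2'] = df[['C', 'D']].apply(''.join, axis=1)  # same, column-list driven
```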
377,744
28,401,458
Pandas Time Index pick largest number/last number on given day
<p>I have a Pandas DataFrame object that looks something like this:</p> <pre><code> 'Thing 1': Actual Predicted Error Date 2014-09-15 140.00 0.000000 140.000000 2014-09-15 358.03 127.738344 ...
<p>you can use <code>group by</code> with <code>agg</code>. <code>Agg</code> takes a dictionary of functions. As in each group the highest observation is the last one you can use the <code>last</code> function:</p> <pre><code>df.groupby('Date').agg({'Actual':'last','Predicted':'last','Error':'last'}) </code></pre> <p...
python-3.x|pandas|time-series|dataframe|anaconda
1
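A minimal sketch of the `groupby`/`agg` call; note `'last'` picks the last row per group in the existing order, so this assumes the frame is already sorted within each day so the wanted observation comes last:

```python
import pandas as pd

df = pd.DataFrame({'Date': ['2014-09-15', '2014-09-15', '2014-09-16'],
                   'Actual': [140.00, 358.03, 380.00],
                   'Error': [140.0, 127.7, 0.5]})

last_per_day = df.groupby('Date').agg({'Actual': 'last', 'Error': 'last'})
```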
377,745
28,576,676
Calculate minimum value for each column of multi-indexed DataFrame in pandas
<p>I have a multi-indexed DataFrame with the following structure:</p> <pre><code> metric1 metric2 experiment1 experiment2 experiment1 experiment2 run1 1.2 1.5 0.2 0.9 run2 2.1 0.7 0.4 4.3 </code></pre> <p>How can ...
<p>You can take the min, max, and mean then use pd.concat to stitch everything together. You'll need to transpose (T) then transpose back to get the dataframe to concat the way you want. </p> <pre><code>In [91]: df = pd.DataFrame(dict(exp1=[1.2,2.1],exp2=[1.5,0.7]), index=["run1", "run2"]) In [92]: df_min, df_max, df...
python|pandas
4
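The concat-of-stats approach can also be written with `DataFrame.agg`, which computes several statistics in one call; a sketch on the answer's toy frame:

```python
import pandas as pd

df = pd.DataFrame({'exp1': [1.2, 2.1], 'exp2': [1.5, 0.7]},
                  index=['run1', 'run2'])

# agg with a list of function names returns one row per statistic;
# concat appends those rows below the original runs
stats = pd.concat([df, df.agg(['min', 'max', 'mean'])])
print(stats)
```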
377,746
28,782,487
Read HDF5 based file as a numpy array in Python
<p>How can I load in a <code>.hws</code> file as a numpy array?<br> Based on the description in <a href="http://kingler.net/2007/05/22/90" rel="nofollow">http://kingler.net/2007/05/22/90</a> which says it is a HDF5 based format, so I found <a href="https://confluence.slac.stanford.edu/display/PSDM/How+to+access+HDF5+da...
<p>I downloaded your file and took a look at it. After reading the <code>.hws</code> file you get a dictionary with exactly one key <code>"wfm_group0"</code> (You can see keys in the file by using <code>item.keys()</code>). The value to this key is again dictionary-like with the keys <code>"axes"</code>, <code>"id"</co...
python|arrays|numpy|hdf5
2
377,747
28,507,052
How to split numpy array in batches?
<p>It sounds easy but I don't know how to do it.</p> <p>I have a numpy 2d array of</p> <pre><code>X = (1783,30) </code></pre> <p>and I want to split it into batches of 64. I write the code like this. </p> <pre><code>batches = abs(len(X) / BATCH_SIZE ) + 1 // It gives 28 </code></pre> <p>I am trying to do predicti...
<p>I don't really understand your question either, especially what X looks like. If you want to create sub-groups of equal size of your array, try this:</p> <pre><code>def group_list(l, group_size): """ :param l: list :param group_size: size of each group :return: Yields successiv...
python|numpy
14
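The batching can also be done directly on the NumPy array with `np.array_split`, which tolerates a ragged final batch; the shapes below match the asker's 1783x30 case:

```python
import numpy as np

X = np.arange(1783 * 30, dtype=float).reshape(1783, 30)
BATCH_SIZE = 64

# Splitting at every multiple of BATCH_SIZE leaves a smaller last batch
batches = np.array_split(X, range(BATCH_SIZE, len(X), BATCH_SIZE))
print(len(batches), batches[0].shape, batches[-1].shape)
```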
377,748
28,664,103
How to transform a time series pandas dataframe using the index attributes?
<p>Given a dataframe with time series that looks like this:</p> <pre><code> Close 2015-02-20 14:00:00 1200.1 2015-02-20 14:10:00 1199.8 2015-02-21 14:00:00 1199.3 2015-02-21 14:10:00 1199.0 2015-02-22 14:00:00 1198.4 2015-02-22 14:10:00 1199.7 </code></pre> <p>How can I apply a function tha...
<p>you can pivot on the <code>date</code> and <code>time</code> components of the index:</p> <p>Create the frame:</p> <pre><code>i =pd.to_datetime(['2015-02-20 14:00:00','2015-02-20 14:10:00','2015-02-21 14:20:00'\ ,'2015-02-21 14:30:00','2015-02-22 14:40:00','2015-02-22 14:50:00']) df =pd.DataFrame(in...
python|pandas|time-series|dataframe
1
377,749
28,595,465
ValueError: could not convert string to float, NumPy
<p>I have a script where I am writing a JSON web-service to an Esri file geodatabase. I am receiving the error ValueError: could not convert string to float: Microwaves </p> <p>I have used the exact same script before with U40 being the dtype for all strings. </p> <p>My script and results are below;</p> <pre><code>...
<p>You have</p> <pre><code>keys = ['SRAddress','Longitude','Latitude','CommodityType','ItemType', 'ItemCount'] </code></pre> <p>then the script is making a tuple of values from the dict <code>items</code> using these keys in that order:</p> <pre><code>a = tuple([unicode(item[key]) for key in keys]) </code></pre> <...
python|numpy
1
377,750
28,419,453
h5py: how to read selected rows of an hdf5 file?
<p>Is it possible to read a given set of rows from an hdf5 file without loading the whole file? I have quite big hdf5 files with loads of datasets, here is an example of what I had in mind to reduce time and memory usage:</p> <pre><code>#! /usr/bin/env python import numpy as np import h5py infile = 'field1.87.hdf5' ...
<p>I have a sample h5py file with:</p> <pre><code>data = f['data'] # &lt;HDF5 dataset "data": shape (3, 6), type "&lt;i4"&gt; # is arange(18).reshape(3,6) ind=np.where(data[:]%2)[0] # array([0, 0, 0, 1, 1, 1, 2, 2, 2], dtype=int32) data[ind] # getitem only works with boolean arrays error data[ind.tolist()] # can't r...
python|numpy|dataset|h5py
5
377,751
28,393,803
Python: Joining two dataframes on a primary key
<p>I have two DataFrames A and B. I want to replace the rows in A with rows in B where a specific column is equal to each other.</p> <pre><code>A: 1 2 3 0 asd 0.304012 0.358484 1 fdsa -0.198157 0.616415 2 gfd -0.054764 0.389018 3 ff NaN 1.164...
<p>Just call <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html#pandas.DataFrame.update" rel="nofollow"><code>update</code></a>: this will overwrite the lhs df with the contents of the rhs df where there is a match in your case replace <code>df</code> and <code>df1</code> with <...
python|pandas
2
377,752
28,704,142
How do you use pandas.DataFrame columns as index, columns, and values?
<p>I can't seem to figure out how to ask this question in a searchable way, but I feel like this is a simple question.</p> <p>Given a pandas Dataframe object, I would like to use one column as the index, one column as the columns, and a third column as the values.</p> <p>For example:</p> <pre><code> a b c 0 1...
<p>It's (almost) exactly as you phrase it:</p> <pre><code>df.pivot_table(index="a", columns="b", values="c", fill_value=0) </code></pre> <p>gives</p> <pre><code>b cat dog rat a 1 1 2 6 2 2 0 0 3 4 1 0 </code></pre> <p>HTH</p>
python|indexing|pandas
3
377,753
28,772,573
Python - parallelize a python loop for 2D masked array?
<p>Probably a commonplace question, but how can I parallelize this loop in Python?</p> <pre><code>for i in range(0,Nx.shape[2]): for j in range(0,Nx.shape[2]): NI=Nx[:,:,i]; NJ=Nx[:,:,j] Ku[i,j] = (NI[mask!=True]*NJ[mask!=True]).sum() </code></pre> <p>So my question: what's the easiest way to parallelize th...
<p>I think you want to 'vectorize', to use <code>numpy</code> terminology, not parallelize in the multiprocess way.</p> <p>Your calculation is essentially a dot (matrix) product. Apply the <code>mask</code> once to the whole array to get a 2d array, <code>NIJ</code>. Its shape will be <code>(N,5)</code>, where <code...
python|loops|numpy|parallel-processing|mask
4
377,754
28,761,925
Pandas: Taking slices from a DataFrame and recombining them into a separate DF
<p>I'm trying to take slices from a DataFrame and recombine them into a separate DF. However I'm getting a Value error 'cannot reindex from a duplicate axis' </p> <pre><code>run1 = df['run_1'] run2 = df['run_2'] a = run1[305:340] b = run1[258:270] c = run2[258:270] d = run2[305:340] first_slice = a.combine_first(b) ...
<p>Your code will fail as the params to the <code>DataFrame</code> ctor are:</p> <blockquote> <p>pandas.DataFrame(data=None, index=None, columns=None, dtype=None, copy=False)</p> </blockquote> <p>So even if it didn't complain it wouldn't produce what you want. There are various methods of <a href="http://pandas.p...
python|pandas|dataframe|slice
1
377,755
28,505,598
Convert time format in pandas
<p>I have a string object in this format 2014-12-08 09:30:00.066000 but I want to convert to datetime variable. I also want this to be less granular- I want it to be just in the order of second for example</p> <p>2014-12-08 09:30:00.066000 to 2014-12-08 09:30:00</p> <p>I am trying to use pd.to_datetime function but i...
<p>See this:</p> <p><a href="https://stackoverflow.com/questions/13785932/how-to-round-a-pandas-datetimeindex">How to round a Pandas `DatetimeIndex`?</a></p> <pre><code>from pandas.lib import Timestamp def to_the_second(ts): return Timestamp(long(round(ts.value, -9))) df['My_Date_Column'].apply(to_the_second) <...
python|pandas
0
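In more recent pandas versions, `Timestamp.floor` (or `Series.dt.floor` for a whole column) drops the sub-second part directly; a sketch, assuming a pandas version that provides `.floor`:

```python
import pandas as pd

ts = pd.to_datetime('2014-12-08 09:30:00.066000')

# .floor truncates the sub-second part; use .round('s') to round instead
coarse = ts.floor('s')
print(coarse)  # 2014-12-08 09:30:00
```

For a whole column: `df['t'] = pd.to_datetime(df['t']).dt.floor('s')`.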
377,756
51,052,416
Pandas dataframe groupby into list, with list in cell data
<p>Consider this input df</p> <pre><code>my_input_df = pd.DataFrame({ 'export_services': [[1],[2,4,5],[4,6], [2,4,5],[1]], 'seaport':['china','africa','europe', 'mexico','europe'], 'price_of_fish':['100','200','250','125','75']}) </code></pre> <p>How to group on a column which contains lists and combine the other c...
<p>First, convert to <strong><code>tuple</code></strong>, which can be hashed:</p> <pre><code>df.export_services = df.export_services.apply(tuple) </code></pre> <p><strong><code>groupby</code></strong> with <strong><code>agg</code></strong></p> <pre><code>df.groupby('export_services').agg(list).reset_index() expo...
python|pandas|dataframe|pandas-groupby
1
377,757
50,799,510
How to run custom GPU tensorflow::op from C++ code?
<p>I follow these examples to write custom op in TensorFlow:<br> <a href="https://www.tensorflow.org/extend/adding_an_op" rel="nofollow noreferrer">Adding a New Op</a><br> <a href="https://www.github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/adding_an_op/cuda_op_kernel.cu.cc" rel="nofollow noreferrer">cud...
<p>This simple example shows the construction and the execution of a graph using <a href="https://www.tensorflow.org/api_guides/cc/guide" rel="nofollow noreferrer">C++ API</a>:</p> <pre><code>// tensorflow/cc/example/example.cc #include "tensorflow/cc/client/client_session.h" #include "tensorflow/cc/ops/standard_ops....
c++|tensorflow
2
377,758
50,978,573
pandas sql using var containing list
<p>I created a list of locations by doing this:</p> <pre><code>list_NA = [] for x in df['place']: if x and x not in list_NA: list_NA.append(x) </code></pre> <p>This gives me a list like this:</p> <pre><code>print(list_NA) ['DEN', 'BOS', 'DAB', 'MIB', 'SAA', 'LAB', 'NYB', 'AGA', 'QRO', 'DCC', 'PBC', 'MIC...
<p>You will want to convert <code>list_NA</code> to a comma-separated string with single quotes.</p> <pre><code>"','".join(list_NA) </code></pre> <p>but you'll also need to wrap that in single quotes on either end as well.</p> <pre><code>df2 = pd.read_sql("select airport from "+db+" where airport in ('"+ "','".join(list_NA) +"')...
mysql|pandas|where-clause
1
377,759
50,779,612
Pandaic way to handle empty strings in isin()
<p>The final print statement below is shows three items when only two 'b' and 'c' are wanted. What is the pandaic way to not include the empty strings in the result?</p> <pre><code>print(sys.version) print(np.__version__) print(pd.__version__) 3.6.4 1.14.2 0.22.0 </code></pre> <p>&lt;!- -&gt;</p> <pre><code>import s...
<p>Empty strings do not correspond to NaN, None, etc. Just filter them out like you'd normally do.</p> <pre><code>ds1[ds1.isin(filter(None, ds2))] 1 b 2 c dtype: object </code></pre>
python|string|pandas
4
377,760
50,907,880
Data quality - Pandas
<p>I'm doing a data quality project using Python and Pandas. I have an input dataframe where each column is categorical data, and I want to return a dataframe where each column consists of the top 10 most frequently occuring categories in that column in order, together with the name of said categories (ie a key value p...
<p>You can get value-count pairs in dictionary format, like so:</p> <pre><code>df["column"].value_counts(False).to_dict() </code></pre> <p>And you can use this method <em>iteratively</em> to populate a dataframe, like so:</p> <pre><code>#Import dependencies import numpy as np import pandas as pd #Create dataframe with ...
python|pandas
0
377,761
50,824,464
How can I use pandas to set a value for all rows that match part of a multi part index
<p>I have the following pandas dataframe:</p> <pre><code>&gt;&gt;&gt; import pandas &gt;&gt;&gt; indexes = [['a', 'a', 'c', 'd', 'd', '1'], ['1', '1', '3', '4', '5', '6']] &gt;&gt;&gt; pandas.DataFrame(index=indexes, columns=["Year", "Color", "Manufacturer"]) Year Color Manufacturer a 1 NaN NaN NaN 1...
<p>One way, using <a href="https://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow noreferrer">slicers</a>:</p> <pre><code>import pandas as pd indexes = [['a', 'a', 'c', 'd', 'd', '1'], ['1', '1', '3', '4', '5', '6']] df = pd.DataFrame(index=indexes, columns=["Year", "Color", "Manufactu...
python|python-3.x|pandas
2
377,762
51,035,439
Mapping using date to create a new column in data frame
<p>I was trying to map two data frame based on the Date. however, I had an error as follow:</p> <blockquote> <p>"InvalidIndexError: Reindexing only valid with uniquely valued Index objects"</p> </blockquote> <p>I am using the following <strong>df1</strong> and create a new column "Fix Week" </p> <pre><code>kicko...
<p>The problem is that your <code>Date</code> values are duplicated in <code>df2</code>.</p> <p>So you need to remove the duplicates first to get unique <code>Date</code> rows:</p> <pre><code>df2 = df2.drop_duplicates('Date') print (df2) Round Date Home Team Away Team 0 1 2016-08-13 Hull Leicester df1['fix'] = df1....
python|python-3.x|pandas
1
377,763
50,968,827
Compare lists of column rows and using filters on them in pandas
<pre><code>sales = [(3588, [1,2,3,4,5,6], [1,38,9,2,18,5]), (3588, [2,5,7], [1,2,4,8,14]), (3588, [3,10,13], [1,3,4,6,12]), (3588, [4,5,61], [1,2,3,4,11,5]), (3590, [3,5,6,1,21], [3,10,13]), (3590, [8,1,2,4,6,9], [2,5,7]), (3591, [1,2,4,5,13], [1,2,3,4,5,6]) ...
<p><strong>Option 1:</strong></p> <pre><code>In [176]: mask = df.apply(lambda r: {1,5} &lt;= (set(r['properties_id_x']) &amp; set(r['properties_id_y'])), axis=1) In [177]: mask Out[177]: 0 True 1 False 2 False 3 False 4 False 5 False 6 True dtype: bool In [178]: df[mask] Out[178]: goods_id...
python-3.x|pandas
2
377,764
51,023,364
Get a value of a column by using another column's value
<pre><code> user min max 1 Tom 1 5 2 Sam 4 6 </code></pre> <p>I got this dataframe, now I know the user is Sam and I wanna get its 'min' value. Like this (the user is unique):</p> <pre><code>df[Sam,'min'] = 4 </code></pre> <p>How can I do this?</p>
<p>First create index by column <code>userid</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> and then select by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="no...
python|pandas|dataframe
2
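The set_index-plus-loc lookup from the answer, spelled out on the asker's toy frame:

```python
import pandas as pd

df = pd.DataFrame({'user': ['Tom', 'Sam'],
                   'min': [1, 4],
                   'max': [5, 6]})

# With a unique 'user' column, promote it to the index and use .loc
val = df.set_index('user').loc['Sam', 'min']
print(val)  # 4
```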
377,765
50,842,397
how to get standardised (Beta) coefficients for multiple linear regression using statsmodels
<p>when using the <code>.summary()</code> function using pandas statsmodels, the OLS Regression Results include the following fields.</p> <pre><code>coef std err t P&gt;|t| [0.025 0.975] </code></pre> <p>How can I get the standardised coefficients (which exclude the intercept), similarly to...
<p>You just need to standardize your original DataFrame using a z distribution (i.e., z-score) first and then perform a linear regression. </p> <p>Assume you name your dataframe as <code>df</code>, which has independent variables <code>x1</code>, <code>x2</code>, and <code>x3</code>, and dependent variable <code>y</co...
python|pandas|regression|statsmodels|coefficients
10
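The z-score-then-regress recipe can be sketched with NumPy alone on synthetic data; `np.linalg.lstsq` stands in here for the statsmodels OLS fit, so this is an illustration of the idea rather than the statsmodels API:

```python
import numpy as np

# Synthetic data: y depends strongly on x1, weakly on x2
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 3 * x1 + 0.5 * x2 + rng.normal(scale=0.1, size=200)

def zscore(a):
    # standardize to mean 0, standard deviation 1
    return (a - a.mean()) / a.std()

# Regressing z-scored y on z-scored predictors yields standardized (beta)
# coefficients; the intercept is zero by construction, so it drops out
X = np.column_stack([zscore(x1), zscore(x2)])
betas, *_ = np.linalg.lstsq(X, zscore(y), rcond=None)
print(betas)
```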
377,766
51,089,470
How do I edit my function in python such that my output dataframe shows N/A?
<p><img src="https://i.stack.imgur.com/kVjh9.png" alt="enter image description here"></p> <p>I have defined a function below in python to extract hours of operation based on business_id that I have already retrieved:</p> <pre><code>def hours_operation(business_id): day = day_open(business_id) start = day_star...
<p>You can use the Python ternary conditional expression, like so:</p> <pre><code>dict1[day1[i]]= start[i] if start[i] is not None else 'N/a' </code></pre> <p>this is assuming that <code>start[i]</code> is the value of the start time, and that the value you'd like to not output is a NoneType object. If it is 0 or some ...
python|pandas|api
0
377,767
50,725,804
Modify DataFrame index
<p>I have a DataFrame with a wrong DateTimeIndex. The hours and minutes must be moved to the left:</p> <p>2016-07-07 00:08:30 -> 2016-07-07 08:30:00</p> <p>I know how to make the change with regex, but I do not know how to replace the index by the modified one. Something like df.index.replace(lambda old_index:new_ind...
<p>By using <code>to_datetime</code> with <code>format</code></p> <pre><code>#idx=pd.Index(pd.to_datetime(pd.Series('2016-07-07 00:08:30'))) pd.to_datetime(pd.Series(idx).astype(str),format='%Y-%m-%d %S:%H:%M') Out[562]: 0 2016-07-07 08:30:00 dtype: datetime64[ns] </code></pre> <p>And for new index use:</p> <pre...
pandas|dataframe
1
377,768
50,929,606
How to concat dataframes so missing values are set to NaN
<p>I have two dataframes df1 and df2. Here is a toy example so show my question. <code>df1</code> looks like</p> <pre><code>col1 col2 1 2 0 7 </code></pre> <p><code>df2</code> looks like</p> <pre><code>col1 col3 9 2 5 3 </code></pre> <p>I would like to do <code>pd.concat([df1,df2])</code> but in such ...
<pre><code>import pandas as pd df1 = pd.DataFrame() df1['col1'] = [1, 0] df1['col2'] = [2, 7] df2 = pd.DataFrame() df2['col1'] = [9, 5] df2['col3'] = [5, 3] x = df1.append(df2) </code></pre> <p>Output:</p> <pre><code> col1 col2 col3 0 1 2.0 NaN 1 0 7.0 NaN 0 9 NaN 5.0 1 5 NaN 3....
python|pandas
1
377,769
50,882,613
How to do a SQL-type INNER JOIN in Python and only naming a couple of column names?
<p>I am experienced with SQL, but am new to Python.</p> <p>I am attempting to use the join or pandas.merge functions to complete the following simple SQL Join:</p> <pre><code>SELECT a.Patient_ID, a.Physician, b.Hospital FROM DF1 a INNER JOIN DF2 b on a.Patient_ID=b.Patient_ID_Number </code></pre> <p>Here is as close...
<p>You can specify the columns in the join statement:</p> <pre><code> output=pd.merge(DF1[['Patient_ID','Physician']], DF2[['Hospital','Patient_ID_Number']], how='inner', left_on='Patient_ID', right_on='Patient_ID_Number') </code></pre> <p>You do have to carry over both columns that you're joining on in your statemen...
python|sql|sql-server|pandas|join
2
377,770
50,858,194
Appending Columns from several worksheets Python
<p>I am trying to import certain columns of data from several different sheets inside of a workbook. However, while appending it only seems to append 'q2 survey' to a new workbook. How do I get this to append properly?</p> <pre><code>import sys, os import pandas as pd import xlrd import xlwt b = ['q1 survey', 'q2 su...
<p>I think if you do:</p> <pre><code>b = ['q1 survey', 'q2 survey','q3 survey'] #Sheet Names list_col = ["Month","Date", "Year"] #column Name xls = "path_to_file/R.xls" #create the empty df named bill to append after bill= pd.DataFrame(columns = list_col) for sheet in b: # read the sheet df=pd.read_excel(xls,sh...
excel|python-2.7|pandas|xlrd|xlwt
1
377,771
50,781,373
Using feed_dict is more than 5x faster than using dataset API?
<p>I created a dataset in TFRecord format for testing. Every entry contains 200 columns, named <code>C1</code> - <code>C199</code>, each being a strings list, and a <code>label</code> column to denote the labels. The code to create the data can be found here: <a href="https://github.com/codescv/tf-dist/blob/8bb3c44f559...
<p>There is currently (as of TensorFlow 1.9) a performance issue when using <code>tf.data</code> to map and batch tensors that have a large number of features with a small amount of data in each. The issue has two causes:</p> <ol> <li><p>The <code>dataset.map(parse_tfrecord, ...)</code> transformation will execute O(<...
tensorflow|tensorflow-datasets
16
377,772
51,080,393
How to use pretrained keras model with batch normalization layer?
<p>I have a pretrained model with batch_normalization model. When I run:</p> <pre><code>model.layers.get_weights </code></pre> <p>I can see that there are beta/gama values in batch_normalization layers, which means that the model has been trained, and the value has meanings.</p> <p>I want to load the model and use i...
<p>There will be beta and gamma values, as these parameters are initialized before the model is trained. By default gamma is initialized to 1 and beta to 0. </p>
tensorflow|keras
1
377,773
51,037,433
Why does pandas 2min bucket print NaN although all my row values are numbers (not NaN)?
<p>I know that in my data response_bytes column does not have NaN values because when I run: <code>data[data.response_bytes.isna()].count()</code> I get as a result 0.</p> <p>When I then run 2 min bucket mean and then head I get NaN:</p> <pre><code>print(data.reset_index().set_index('time').resample('2min').mean().he...
<p>It is expected behaviour, because <a href="http://pandas.pydata.org/pandas-docs/stable/timeseries.html#resampling" rel="nofollow noreferrer"><code>resampling</code></a> converts to a regular time interval, so if there are no samples you get <code>NaN</code>.</p> <p>So it means there are no datetimes between some 2 ...
python|pandas
1
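A small sketch of that behaviour on toy data (not the asker's): empty 2-minute bins come back as NaN from `resample(...).mean()` and can be dropped afterwards if unwanted:

```python
import pandas as pd

idx = pd.to_datetime(['2018-01-01 00:00', '2018-01-01 00:01',
                      '2018-01-01 00:06'])
data = pd.DataFrame({'response_bytes': [100, 200, 300]}, index=idx)

# Bins with no samples yield NaN from mean(); drop them afterwards
out = data.resample('2min').mean().dropna()
print(out)
```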
377,774
50,672,019
tensorflow bijector construction
<p>I am new to tensorflow distribution and bijector. I know when they design tensorflow distribution package, they partition a tensor's shape into three groups: [sample shape, batch_shape, event_shape]. But I find it hard to understand why when we define a new bijector class, they always defines parent class's event di...
<p><code>event_ndims</code> is the <em>number</em> of event dimensions, not the size of the input. Thus <code>event_ndims=1</code> operates on vectors, <code>event_ndims=2</code> on matrices, and so on. See the <code>__init__</code> docstring for the <code>Bijector</code> class.</p>
python|tensorflow|machine-learning|unsupervised-learning
1
377,775
51,043,372
Replace a column values with its mean of groups in dataframe
<p>I have a DataFrame as</p> <pre><code>Page Line y 1 2 3.2 1 2 6.1 1 3 7.1 2 4 8.5 2 4 9.1 </code></pre> <p>I have to replace column y with values of its mean in groups. I can do that grouping using one column using this code.</p> <pre><code>df['y'] ...
<p>You need list of columns names, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> parameter <code>by</code>:</p> <blockquote> <p><strong>by</strong> : mapping, function, label, or <strong>list of labels</strong></p> <p>Use...
python|python-3.x|pandas
9
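The list-of-labels `groupby` with `transform`, shown on the asker's sample frame:

```python
import pandas as pd

df = pd.DataFrame({'Page': [1, 1, 1, 2, 2],
                   'Line': [2, 2, 3, 4, 4],
                   'y': [3.2, 6.1, 7.1, 8.5, 9.1]})

# A list of labels groups on both keys; transform broadcasts the group
# mean back onto every row of the group
df['y'] = df.groupby(['Page', 'Line'])['y'].transform('mean')
print(df['y'].tolist())
```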
377,776
51,101,386
Subplot of difference between data points imported with Pandas and conversion of time values
<p>I'm relatively new to Python (in the process of self-teaching) and so this is proving to be quite a learning curve but I'm very happy to get to grips with it. I have a set of data points from an experiment in excel, one column is time (with the format 00:00:00:000) and a second column is the measured parameter.</p> ...
<p>If you just read the excel file, pandas will create a RangeIndex, starting at 0. To use the time information from your excel file as the index, you have to specify the name (as a string) of the time column with the keyword argument index_col in the read_excel call:</p> <pre><code>df = pd.read_excel('rest.xlsx', 'Sheet1'...
python|pandas|matplotlib|time|graph
0
377,777
50,959,398
customize the color of bar chart while reading from two different data frame in seaborn
<p>I have plotted a bar chart using the code below:</p> <pre><code>dffinal['CI-noCI']='Cognitive Impairement' nocidffinal['CI-noCI']='Non Cognitive Impairement' res=pd.concat([dffinal,nocidffinal]) sns.barplot(x='6month',y='final-formula',data=res,hue='CI-noCI') plt.xticks(fontsize=8, rotation=45) plt.show() </code></...
<p>You can use matplotlib to overwrite Seaborn's default color cycling to ensure the hues it uses are red and green.</p> <pre><code>import matplotlib.pyplot as plt plt.rcParams['axes.prop_cycle'] = ("cycler('color', 'rg')") </code></pre> <hr> <p>Example:</p> <pre><code>import seaborn as sns import matplotlib.pyplo...
pandas|dataframe|matplotlib|bar-chart|seaborn
0
377,778
50,710,889
Tensorflow hub tags and exporting
<p>I am very confused how tags are supposed to work in hub and how do i use them when exporting. How can I train on the train part of my graph and export the serving one?</p> <p>I have the following code:</p> <pre><code>def user_module_fn(foo, bar): x = tf.sparse_placeholder(tf.float32, shape[-1, 32], name='name'...
<p>In the best (i.e., simplest) case, your module doesn't need any tags at all, namely when one and the same piece of TensorFlow graph fits all intended uses of the module. For that, just leave <code>tags</code> or <code>tags_and_args</code> unset to get the default (an empty set of tags).</p> <p>Tags are needed if th...
python|tensorflow|tensorflow-serving|tensorflow-estimator|tensorflow-hub
1
377,779
51,040,382
ZeroDivisionError - Pandas
<p>The script is the following - aiming to show the differences in average click through rates by keyword ranking position - highlighting queries/pages with under performing ctrs.</p> <p>Until recently it has been working fine - however it now gives me the below ZeroDivisionError.</p> <pre><code>import os import sys ...
<p>It looks like you're getting a row where <code>mad_ctr</code> is zero, so just add a check for that case:</p> <pre><code>row['score'] = round(float( (1 * (ctr - median_ctr))/mad_ctr ), 3 ) if mad_ctr != 0 else 0 </code></pre> <p>This will set <code>score</code> to zero if <code>mad_ctr</code> is zero. But you could ...
python|pandas|division|zero|divide-by-zero
1
377,780
50,849,202
Pandas for each group of 4 calendar months find the date on which maximum value occured
<p>For each group of 4 consecutive calendar months, I need to find the date on which the maximum value occurred.</p> <p>My Dataframe looks like this.</p> <p><a href="https://i.stack.imgur.com/7Yggm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7Yggm.png" alt="DataFrame"></a></p>
<pre><code>import numpy as np from numpy.random import randn as rn DatetimeIndex = pd.date_range('1/1/2015', '12/31/2015', freq='B') np.random.seed(101) df = pd.DataFrame(rn(len(DatetimeIndex)),index=DatetimeIndex) df.groupby(pd.Grouper(freq='4MS',label='right')).max()[:3] </code></pre> <p>2015-05-01 2.706850 2015-0...
pandas|datetime|data-science|pandas-groupby|data-science-experience
1
377,781
51,035,281
How to populate dataframe from dictionary in loop
<p>I am trying to perform entity analysis on text and I want to put the results in a dataframe. Currently the results are not stored in a dictionary, nor in a Dataframe. The results are extracted with two functions.</p> <p>df:</p> <pre><code>ID title cur_working pos_arg neg_arg ...
<p>One solution is to create a list of lists via your <code>entities</code> iterable. Then feed your list of lists into <code>pd.DataFrame</code>:</p> <pre><code>LoL = [] for entity in entities: LoL.append([id, entity.name, entity_type[entity.type], entity.salience]) df = pd.DataFrame(LoL, columns=['ID', 'name',...
python|pandas|dictionary|google-natural-language
1
377,782
50,785,716
Add two columns, i,e. mean_a and mean_b
<pre><code># Price 0 1.00 1 12.23 2 3.24 3 12.67 6 149.98 7 19.98 8 1883.23 9 1.99 10 4.89 11 9.99 12 12.99 13 18.23 14 17.99 15 18.98 16 18.11 17 19.10 18 20.30 19 1901.30 20 20.27k </code></pre> <p>Suppose I have the previous dataframe. I would like to add two columns, <code>mean_a</code> and <code>mean_b</c...
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rolling.html" rel="nofollow noreferrer">You can just using <code>rolling</code></a> (Notice <code>.iloc</code> is to reverse the order of the df)</p> <pre><code>df['mean_a'] = df.Price.rolling(3,min_periods =1).mean() df['mean_b'] = df...
python|pandas|dataframe
3
377,783
50,996,395
Pandas dataframe sort_values set default ascending=False
<p>Would it be possible to define a property to always set sort_values(ascending=False)? I use it quite often in descending order, so I would like to set the default behavior.</p>
<p>Another option would be to declare the settings you often use in the beginning of your code and pass them as kwargs.</p> <p>Personally I would, however, write it out every time.</p> <pre><code>import pandas as pd p = {"ascending":False, "inplace":True} df = pd.DataFrame({ 'col1': [1,6,2,5,9,3] }) df.sort_va...
python|pandas
2
377,784
51,012,084
numpy, taking array difference of their intersection
<p>I have multiple numpy arrays and I want to create new arrays doing something that is like an XOR ... but not quite.</p> <p>My input is two arrays, array1 and array2. My output is a modified (or new array, I don't really care) version of array1.</p> <p>The modification is elementwise, by doing the following:</p> <...
<pre><code>In [73]: np.maximum(0,np.array([0,3,8,0])-np.array([1,1,1,1])) Out[73]: array([0, 2, 7, 0]) </code></pre> <p>This doesn't explicitly address</p> <blockquote> <p>If either array has 0 for the given index, then the index is left unchanged. </p> </blockquote> <p>but the results match for all examples:</p> ...
python|arrays|numpy|multidimensional-array|numpy-ndarray
4
377,785
50,907,980
Adding row values to a dataframe based on matching column labels
<p>I try to get my head around this problem. I have three dataframes and I would like to merge (concatenate?) two of these dataframes based on values inside a third one. Here are the dataframes:</p> <p>df1:</p> <pre><code>index,fields,a1,a2,a3,a4,a5 2018-06-01,price,1.1,2.1,3.1,4.1,5.1 2018-06-01,amount,15,25,35,45,5...
<p>Unpivot <code>df2</code> using <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.melt.html" rel="nofollow noreferrer"><code>df.melt</code></a>:</p> <pre><code>df2_melt = df2.melt(["index", "fields"], var_name="product2") </code></pre> <p>Drop redundant column <code>index</code> from refer...
python|python-3.x|pandas|dataframe
1
377,786
50,873,164
How can i find the "non-unique" rows?
<p>I imported CSV files with over 500k rows, one year, every minute. To merge two of these files, I want to re-sample the index to every minute:</p> <pre><code>Temp= pd.read_csv("Temp.csv", sep=";", decimal="," , thousands='.' ,encoding="cp1252") Temp["Time"] = pd.to_datetime(Temp["Time"],dayfirst=True) Temp.set_inde...
<p>You can return a slice of all duplicated rows using <code>df.duplicated()</code></p> <p>In your case</p> <pre><code>Temp[Temp.duplicated(subset=None, keep=False)] </code></pre> <p>where subset can be changed if you want to find duplicates only in a specific column, and keep = False specifies to display all rows that...
pandas|csv|datetime
1
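A minimal sketch of `duplicated` with `keep=False` on toy data resembling the asker's time column:

```python
import pandas as pd

df = pd.DataFrame({'Time': ['2017-01-01 00:00', '2017-01-01 00:00',
                            '2017-01-01 00:01'],
                   'Temp': [1.0, 1.2, 1.1]})

# keep=False marks every member of a duplicate group, not just the repeats
dupes = df[df.duplicated(subset=['Time'], keep=False)]
print(dupes)
```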
377,787
50,904,144
Write the dataframe in loop to Multiple Excel File
<p>I have 500 excel files; from each file I have to skip the first 4 rows and select a few columns. Either I can create a new excel file for each input file with the particular columns, or I can push the data into SQL Server. </p> <p>I need to create one function that can read all files and do the required process and give me output ...
<p>It's convenient to use <a href="https://docs.python.org/3/library/os.html" rel="nofollow noreferrer"><code>os</code></a> library for working with the file system.<br> The function <code>clean_one</code> is from your code with minor changes. The function <code>clean_all</code> applies <code>clean_one</code> to all fi...
python|excel|pandas
0
377,788
50,681,183
Pandas Dataframe splice data into 2 columns and make a number with a comma and integer
<p>I currently am running into two issues:</p> <p>My data-frame looks like this:</p> <pre><code>, male_female, no_of_students 0, 24 : 76, "81,120" 1, 33 : 67, "12,270" 2, 50 : 50, "10,120" 3, 42 : 58, "5,120" 4, 12 : 88, "2,200" </code></pre> <p>What I would like to achieve is this:</p> <pre><code>, male, female, n...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>pop</code></a> for new columns by separa...
python|pandas|dataframe
6
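An unambiguous variant of the split-and-convert recipe, using two plain assignments instead of assigning a two-column frame at once:

```python
import pandas as pd

df = pd.DataFrame({'male_female': ['24 : 76', '33 : 67'],
                   'no_of_students': ['81,120', '12,270']})

# Split the ratio column (pop drops it from df), then strip the
# thousands comma before converting to int
ratio = df.pop('male_female').str.split(' : ', expand=True).astype(int)
df['male'], df['female'] = ratio[0], ratio[1]
df['no_of_students'] = df['no_of_students'].str.replace(',', '', regex=False).astype(int)
print(df)
```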
377,789
51,095,631
Annual Return vs Annualized Return vs CAGR
<p>I read on Internet and Stack Overflow different articles on <strong>Annual Return</strong> vs <strong>Annualised Return</strong> vs <strong>CAGR</strong> and I am a bit confused and I do not know if they are a different or same thing.</p> <p>Here what I need. Suppose I have a growth rate of 10% over 3 years. Starti...
<p>Thanks to comments and some days of work I found detailed answers to my questions. I'll put them here for the benefits of others. I'll divide them in two parts: theory and programming.</p> <p><strong>Theory</strong></p> <p>According to my example above if I have an investment with 10% annual return and an initial ...
python|pandas
3
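The arithmetic behind the answer can be verified in a few lines of plain Python, using the question's 10%-over-3-years scenario: compounding a constant annual rate and then recovering the CAGR from only the endpoints gives back that same rate.

```python
# 10% annual return compounded for 3 years, then the CAGR
# recovered from only the start and end values.
start = 1.0
annual_rate = 0.10
years = 3

end = start * (1 + annual_rate) ** years     # compound growth: 1.331
cagr = (end / start) ** (1 / years) - 1      # back to ~0.10
print(round(end, 3), round(cagr, 2))  # 1.331 0.1
```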
377,790
50,718,634
Can't recognize dtype int as int in computation
<p>I have two columns (serverTs, FTs) in a DataFrame which are timestamps in Unix-time format. In my code I need to subtract one from the other. When I did so, I received an error saying I can't subtract strings, so I added types for serverTs and FTs as integers.</p> <pre><code>file = r'S:\Работа с клиентами\Клиенты...
<p>The problem here is that you are passing a <code>names</code> argument along with a <code>dtype</code> argument. This causes <code>header</code> to act as <code>None</code>. So consider:</p> <pre><code>In [1]: import pandas as pd, numpy as np
In [2]: dt={'serverTs': np.int32, 'FTs': np.int32}
In [3]: import io
In [4]: s...
python|python-3.x|pandas|dataframe
1
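A self-contained illustration of the fix, using a tiny in-memory CSV instead of the original file: passing `header=0` explicitly alongside `names` keeps the file's header line from being read as a data row, so the integer dtypes hold and the subtraction works.

```python
import io
import numpy as np
import pandas as pd

s = "serverTs,FTs\n100,90\n200,150\n"

# With names= alone, pandas keeps the header line as a data row;
# header=0 tells it to discard the file's own header instead.
df = pd.read_csv(io.StringIO(s), header=0,
                 names=["serverTs", "FTs"],
                 dtype={"serverTs": np.int64, "FTs": np.int64})

df["delta"] = df["serverTs"] - df["FTs"]
print(df["delta"].tolist())  # [10, 50]
```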
377,791
50,746,096
How to match cv2.imread to the keras image.img_load output
<p>I'm studying deep learning. I trained an image classification algorithm. The problem, however, is that to train on images I used:</p> <pre><code>test_image = image.load_img('some.png', target_size = (64, 64))
test_image = image.img_to_array(test_image)
</code></pre> <p>While for the actual application I use:</p> <pre><cod...
<p>OpenCV reads images in BGR format, whereas Keras represents them in RGB. To get the OpenCV version to correspond to the order we expect (RGB), simply reverse the channels:</p> <pre><code>test_image = cv2.imread('trick.png')
test_image = cv2.resize(test_image, (64, 64))
test_image = test_image[...,::-1] # Adde...
python-3.x|numpy|opencv|image-processing|keras
22
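The channel flip itself needs only NumPy, so it can be demonstrated without OpenCV or Keras installed; here a tiny 1x2 BGR array (one blue pixel, one red pixel) is reversed along the last axis:

```python
import numpy as np

# A 1x2 "image" in BGR order: one blue pixel, one red pixel.
bgr = np.array([[[255, 0, 0], [0, 0, 255]]], dtype=np.uint8)

# Reversing the last axis swaps B and R, yielding RGB order.
rgb = bgr[..., ::-1]
print(rgb[0, 0].tolist(), rgb[0, 1].tolist())  # [0, 0, 255] [255, 0, 0]
```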
377,792
50,983,646
Pandas drop before first valid index and after last valid index for each column of a dataframe
<p>I have a dataframe like this:</p> <pre><code>df = pd.DataFrame({'timestamp': pd.date_range('2018-01-01', '2018-01-02', freq='2h', closed='right'),
                   'col1': [np.nan, np.nan, np.nan, 1,2,3,4,5,6,7,8,np.nan],
                   'col2': [np.nan, np.nan, 0, 1,2,3,4,5,np.nan,np.nan,np.nan,np.nan],
                   'col3': [np.nan, -1, 0, 1,2,3,4,5,6,7,8,9],
                   'col...
<p>One idea is to use a list or dictionary comprehension after setting your index to <code>timestamp</code>. You should test with your data to see whether this resolves your performance issue. It is unlikely to help if your limitation is memory.</p> <pre><code>df = df.set_index('timestamp')
final = {col: df[col].loc[...
python|pandas
1
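The comprehension from the answer can be sketched on a smaller frame (two columns, made-up values); each column is sliced between its own first and last valid index:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "col1": [np.nan, np.nan, 1.0, 2.0, np.nan],
    "col2": [np.nan, 0.0, 1.0, 2.0, 3.0],
})

# Slice each column between its own first and last non-NaN value.
trimmed = {c: df[c].loc[df[c].first_valid_index():df[c].last_valid_index()]
           for c in df.columns}
print(len(trimmed["col1"]), len(trimmed["col2"]))  # 2 4
```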
377,793
50,753,075
Not getting correct subtraction result for difference of two images in python
<p>This is my Python code, where I want to find the difference between two images. </p> <pre><code>import cv2 as cv
import numpy as np
img1=cv.imread('/storage/emulated/0/a.jpg',0)
print(img1[0:1])
img2=img1
img2[0:1994]=1
print(img2[0:1])
rows,cols=img1[0:1].shape
print(rows)
print(cols)
rows,cols=img2[0:1].shape
print(rows)
pri...
<p>The problem lies in the way you have copied the image. </p> <p>When you assign an object using the assignment operator (=), changes made to one object will be reflected in the other as well. So in your case, when you do <code>img2 = img1</code>, the changes made in <code>img2</code> are reflected in <code...
python|numpy|opencv
1
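The aliasing pitfall is easy to demonstrate with a plain NumPy array: mutating through one name changes the `=`-assigned alias but not the `.copy()`:

```python
import numpy as np

img1 = np.zeros(5, dtype=np.uint8)
img2_alias = img1         # `=` binds a second name to the same buffer
img2_copy = img1.copy()   # an independent buffer

img1[0] = 99
print(img2_alias[0], img2_copy[0])  # 99 0
```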
377,794
51,071,365
Convert Points to Lines Geopandas
<p>Hello, I am trying to convert a list of X and Y coordinates to lines. I want to map this data with a <code>groupby</code> on the IDs and also by time. My code executes successfully as long as I <code>groupby</code> one column, but with two columns I run into errors. I referenced this <a href="https://gis.stackexcha...
<p>Your code is good; the problem is your data.</p> <p>You can see that if you group by ID and Hour, there is only 1 point grouped with an ID of 1 and an hour of 17. A LineString must consist of at least 2 Points (at least 2 coordinate tuples). I added another point to your sample data:</p> <pre...
python|pandas|geopandas
12
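The precondition the answer points out (a LineString needs at least two points) can be enforced before building any geometry. This pandas-only sketch, on made-up coordinates, drops single-point (ID, Hour) groups so every remaining group can form a valid line:

```python
import pandas as pd

# Illustrative coordinates; (ID=1, Hour=17) has only one point.
pts = pd.DataFrame({
    "ID":   [1, 1, 1, 2, 2],
    "Hour": [17, 18, 18, 9, 9],
    "X": [0.0, 1.0, 2.0, 3.0, 4.0],
    "Y": [0.0, 1.0, 2.0, 3.0, 4.0],
})

# A LineString needs at least 2 coordinate tuples, so drop
# groups with a single point before constructing geometries.
valid = pts.groupby(["ID", "Hour"]).filter(lambda g: len(g) >= 2)
print(sorted(valid["Hour"].unique().tolist()))  # [9, 18]
```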
377,795
50,954,783
Converting RGB data into an array from a text file to create an Image
<p>I am trying to convert txt RGB data from <em>file.txt</em> into an array. And then, using that array, convert the RGB array into an image. (RGB data is found at this github repository: <a href="https://github.com/abood91/RPiMLX90640/blob/master/file.txt" rel="nofollow noreferrer">IR Sensor File.txt</a>).</p> <p>I...
<p>I think this should work - no idea if it's decent Python:</p> <pre><code>#!/usr/local/bin/python3
from PIL import Image
import numpy as np
import re

# Read in entire file
with open('sensordata.txt') as f:
    s = f.read()

# Find anything that looks like numbers
l=re.findall(r'\d+',s)

# Convert to numpy array and ...
arrays|image|numpy
2
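The regex-and-reshape core of the answer can be run without PIL, using an inline string as a stand-in for `sensordata.txt`:

```python
import re
import numpy as np

s = "12, 34; 56\n78 90"   # inline stand-in for the sensor text file

# Grab every run of digits, then shape them into image rows.
nums = [int(x) for x in re.findall(r"\d+", s)]
arr = np.array(nums, dtype=np.uint8).reshape(1, 5)
print(arr.tolist())  # [[12, 34, 56, 78, 90]]
```

With real sensor data, the same array (reshaped to the sensor's height and width) can be handed to `Image.fromarray` to produce the image.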
377,796
51,064,502
Pandas dataframe yield error
<p>I am trying to yield rows one by one from a pandas dataframe but get an error. The dataframe holds stock price data, including daily open, close, high and low prices and volume information. </p> <p>The following is my code. This class will get data from a MySQL database.</p> <pre><code>class HistoricMySQLDataHandler(DataHand...
<p>I found the bug. My code in another place changed the dataframe into a generator. A stupid mistake lol</p> <p>I didn't post this line in the question, but this line changes the datatype</p> <pre><code># Reindex the dataframes
for s in self.symbol_list:
    self.symbol_data[s] = self.symbol_data[s].reindex(inde...
python-3.x|pandas|dataframe|yield-keyword
0
377,797
50,943,043
What is the fastest way to get the result of matrix < matrix in numpy?
<p>Suppose I have a matrix <code>M_1</code> of dimension (M, A) and a matrix <code>M_2</code> of dimension (M, B). The result of <code>M_1 &lt; M_2</code> should be a matrix of dimension (M, B, A), whereby each row in <code>M1</code> is compared with each element of the corresponding row of <code>M_2</code> and g...
<p>For NumPy arrays, extend dims with <code>None/np.newaxis</code> so that the first axes are aligned while the second ones are <em>spread</em>, which lets them be compared in an elementwise fashion. Finally, do the comparison leveraging <a href="https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html" re...
python|performance|numpy|vectorization
3
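A runnable check of the broadcasting trick with small made-up shapes (M=2, A=3, B=4):

```python
import numpy as np

M, A, B = 2, 3, 4
M1 = np.arange(M * A).reshape(M, A)
M2 = np.arange(M * B).reshape(M, B)

# Align the shared M axis, put B and A on their own axes,
# and let broadcasting produce the (M, B, A) comparison.
out = M1[:, None, :] < M2[:, :, None]
print(out.shape)  # (2, 4, 3)
```

Element `out[m, b, a]` holds `M1[m, a] < M2[m, b]`, matching the spec in the question.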
377,798
20,805,083
Numpy arrays assignment operations indexed with arrays
<p>I have an array <code>y</code> with indexes of values that must be <em>incremented by one</em> in another array <code>x</code>, just like <code>x[y] += 1</code>. This is an example:</p> <pre><code>&gt;&gt;&gt; x = np.zeros(5,dtype=np.int)
&gt;&gt;&gt; y = np.array([1,4])
&gt;&gt;&gt; x
array([0, 0, 0, 0, 0])
&gt;&g...
<p>Do this:</p> <pre><code>&gt;&gt;&gt; x=np.array([0, 0, 0, 0, 0])
&gt;&gt;&gt; y=np.array([1,4])
&gt;&gt;&gt; x+=np.bincount(y, minlength=x.size)
&gt;&gt;&gt; x
array([0, 1, 0, 0, 1])
&gt;&gt;&gt; y=np.array([1,1])
&gt;&gt;&gt; x+=np.bincount(y, minlength=x.size)
&gt;&gt;&gt; x
array([0, 3, 0, 0, 1])
&gt;&gt;&gt; ma...
python|arrays|numpy
3
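A sketch of the `bincount` approach, showing that repeated indices are counted where `x[y] += 1` would not be:

```python
import numpy as np

x = np.zeros(5, dtype=int)
y = np.array([1, 1, 4])

# bincount tallies every occurrence of each index, including
# repeats, which plain fancy-indexed += silently collapses.
x += np.bincount(y, minlength=x.size)
print(x.tolist())  # [0, 2, 0, 0, 1]
```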
377,799
20,584,266
Tips for speeding up my python code
<p>I have written a Python program that needs to deal with quite large data sets for a machine learning task. I have a train set (about 6 million rows) and a test set (about 2 million rows). So far my program runs in a reasonable amount of time until I get to the last part of my code. The thing is I have my machine ...
<p>Two suggestions:</p> <ol> <li><p>To check if a key is in a dict, simply use <code>in</code> and the object (this happens in O(1)):</p> <pre><code>if key in dict: </code></pre></li> <li>Use comprehensions whenever possible.</li> </ol> <p>So your code becomes like this:</p> <pre><code>result = [train_dict.get(test[...
python|python-2.7|optimization|numpy
4
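Both suggestions together, on toy data (`train_dict`/`test` are illustrative stand-ins for the question's variables):

```python
# Membership testing on a dict with `in` is an O(1) hash lookup,
# and dict.get with a default removes the per-key if/else.
train_dict = {"feat_a": 1, "feat_b": 2}   # stand-in for the train lookup
test = ["feat_a", "feat_missing", "feat_b"]

result = [train_dict.get(key, 0) for key in test]
print(result)  # [1, 0, 2]
```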