Dataset columns: Unnamed: 0 (int64, 0 to 378k); id (int64, 49.9k to 73.8M); title (string, length 15 to 150); question (string, length 37 to 64.2k); answer (string, length 37 to 44.1k); tags (string, length 5 to 106); score (int64, -10 to 5.87k).
200
16,049,437
Add new items to some structured array in a dictionary-like way
<p>I want to extend the structured array object in numpy such that I can easily add new elements. </p> <p>For example, for a simple structured array </p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; x=np.ndarray((2,),dtype={'names':['A','B'],'formats':['f8','f8']}) &gt;&gt;&gt; x['A']=[1,2] &gt;&gt;&gt; x[...
<p>Use <code>numpy.recarray</code> instead; in my <code>numpy 1.6.1</code> you get an extra method <code>field</code> that does not exist when you subclass from <code>numpy.ndarray</code>.</p> <p><a href="https://stackoverflow.com/questions/1201817/adding-a-field-to-a-structured-numpy-array">This question</a> or <a hre...
python|numpy
4
201
57,635,128
Split DataFrame into Chunks according to time or index differences
<p>I'm trying to separate a DataFrame into smaller DataFrames according to the Index value or Time. As you can see in the example below, the time resolution of my data is 5 min, and I would like to create a new dataframe when the time difference between each row is greater than 5 min, or when the Index grows more than ...
<p>You have to create a helper series like:</p> <pre><code>s=df.index.to_series().diff().fillna(1).ne(1).cumsum() print(s) Index 0 0 1 0 2 0 58 1 59 1 60 1 92 2 93 2 </code></pre> <p>Then you can store each group in a dictionary and call each key of the dict to refer the df:</p> <pre><code...
python|pandas
2
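A minimal runnable sketch of the diff/cumsum grouping idea from the answer above (the sample index with a gap is invented, since the original snippet is truncated):

```python
import pandas as pd

# Hypothetical data: an integer index with a gap (2 -> 58).
df = pd.DataFrame({"value": [10, 11, 12, 20, 21]},
                  index=[0, 1, 2, 58, 59])

# Start a new group wherever the index jumps by more than 1:
# diff() marks the gaps, ne(1) flags them, cumsum() numbers the groups.
s = df.index.to_series().diff().fillna(1).ne(1).cumsum()

# Store each chunk in a dict keyed by group number.
chunks = {k: g for k, g in df.groupby(s)}
```

The same pattern works for time indexes by comparing `diff()` against a `Timedelta` threshold instead of `1`.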
202
57,391,946
How to do a scalar product along the right axes with numpy and vectorize the process
<p>I have numpy array 'test' of dimension (100, 100, 16, 16) which gives me a different 16x16 array for points on a 100x100 grid. I also have some eigenvalues and vectors where vals has the dimension (100, 100, 16) and vecs (100, 100, 16, 16) where vecs[x, y, :, i] would be the ith eigenvector of the matrix at the poi...
<p>Ok, I played around, and with np.einsum I found a way to do what is described above. A nice feature of einsum is that if you repeat doubly occurring indices in the 'output' (so right of the '->'-thing) you can have element-wise multiplication along some and contraction along some other axes (something that you don't h...
python|python-3.x|numpy|linear-algebra
0
203
57,318,726
Append a word after matching a string from looped list in data frame with external lists to new column in same dataframe
<p>I want to loop over a pandas data frame where each row has a list of strings. But for each row, I want to cross-reference it with another set of lists with predefined strings. If the predefined string within the external lists matches with the string in the row, I want to append the matching string to a new column w...
<p>There might be a simpler way for you to run such comparison. The order was not clear which list should be compared first, but below is one way:</p> <p><em>PS: Created a sample data</em>:</p> <pre class="lang-py prettyprint-override"><code>x =[ [['report','shootout','midrand','n1','north','slow']], [['jhbtr...
python|pandas|list|loops
1
204
57,474,737
Selecting rows with pandas.loc using multiple boolean filters in sequence
<p>I have a following DataFrame:</p> <pre><code>import pandas as pd stuff = [ {"num": 4, "id": None}, {"num": 3, "id": "stuff"}, {"num": 6, "id": None}, {"num": 8, "id": "other_stuff"}, ] df = pd.DataFrame(stuff) </code></pre> <p>I need to select rows where "num" is higher than a given number but only...
<p>Add parentheses (because of operator priority) with <code>|</code> for bitwise <code>OR</code> instead of <code>&amp;</code> for bitwise <code>AND</code>; also, instead of inverting <code>pd.isnull</code>, it is possible to use <code>notna</code>, or <code>notnull</code> for older pandas versions:</p> <pre><code>df = df[(df["num"] &gt;...
python|pandas
2
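A runnable version of the fix sketched in that answer, using the question's own data (the threshold 5 is an assumption, since the original condition is truncated):

```python
import pandas as pd

stuff = [
    {"num": 4, "id": None},
    {"num": 3, "id": "stuff"},
    {"num": 6, "id": None},
    {"num": 8, "id": "other_stuff"},
]
df = pd.DataFrame(stuff)

# Each condition needs its own parentheses because | and & bind
# tighter than comparisons; notna() replaces the inverted pd.isnull.
out = df[(df["num"] > 5) | (df["id"].notna())]
```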
205
57,588,604
Multiplying Pandas series rows containing NaN
<p>given this Dataframe :</p> <pre><code>import pandas as pd import numpy as np data = {'column1': [True,False, False, True, True], 'column2' : [np.nan,0.21, np.nan, 0.2222, np.nan], 'column3': [1000, 0, 0, 0, 0 ]} df = pd.DataFrame.from_dict(data) print(df) </code></pre> <hr> <pre><code> colum...
<p>You can give this a try:</p> <pre><code>#replace 0 with nan and create a copy of the df m=df.assign(column3=df.column3.replace(0,np.nan)) #ffill on axis 1 where column2 is not null , and filter the last col then cumprod final=(df.assign(column3=m.mask(m.column2.notna(),m.ffill(1)).iloc[:,-1].cumprod().ffill())) </c...
python|python-3.x|pandas
2
206
43,745,792
Apply function across all columns using the column name - Python, Pandas
<h2>Basically:</h2> <p>Is there a way to apply a function that uses the column name of a dataframe in Pandas? Like this:</p> <pre><code>df['label'] = df.apply(lambda x: '_'.join(labels_dict[column_name][x]), axis=1) </code></pre> <p>Where column name is the column that the <code>apply</code> is 'processing'.</p> <h...
<p>I worked out one way to do it, but it seems clunky. I'm hoping there's something more elegant out there.</p> <pre><code>df['label'] = pd.DataFrame([df[column_name].apply(lambda x: labels_dict[column_name][x]) for column_name in df.columns]).apply('_'.join) </code></pre>
python|pandas
1
207
43,538,399
Deleting time series rows that have too many consecutive equal values
<p>I have a time series, namely a pandas.DataFrame with one column (containing the values) and the index (containing the timestamps). There are many values with 0 and I want to check for consecutive 0s. If there are too many 0 one after the other, I want to delete the 0s that are too much.</p> <p>For example, if I al...
<p>Here's a solution using pandas groupby. Updating answer to show how to apply filter based on one column of dataframe.</p> <p><strong>IMPORT DATA</strong></p> <pre><code>from io import StringIO import pandas as pd import numpy as np inp_str = u""" time value 12:01:01.001 1 12:01:01.002 0 12:01:01.004 6 12:01:01.01...
python|pandas
2
208
43,720,973
Subtracting dataframes with unequal numbers of rows
<p>I have two dataframes like this </p> <pre><code>import pandas as pd import numpy as np np.random.seed(0) df1 = pd.DataFrame(np.random.randint(10, size=(5, 4)), index=list('ABCDE'), columns=list('abcd')) df2 = pd.DataFrame(np.random.randint(10, size=(2, 4)), index=list('CE'), columns=list('abcd')) a b c d A...
<p>I think this is what you want:</p> <pre><code>(df1-df2).fillna(df1) Out[40]: a b c d A 5.0 0.0 3.0 3.0 B 7.0 9.0 3.0 5.0 C -3.0 -5.0 -1.0 -3.0 D 8.0 8.0 1.0 6.0 E 3.0 4.0 8.0 -2.0 </code></pre> <p>Just subtract the dataframes like you would normally, but "package" the result using pa...
python|pandas|dataframe
4
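The seeded example from this exchange can be run end to end; with the same seed the result matches the output shown in the answer:

```python
import numpy as np
import pandas as pd

np.random.seed(0)
df1 = pd.DataFrame(np.random.randint(10, size=(5, 4)),
                   index=list("ABCDE"), columns=list("abcd"))
df2 = pd.DataFrame(np.random.randint(10, size=(2, 4)),
                   index=list("CE"), columns=list("abcd"))

# Subtraction aligns on the index; rows missing from df2 become NaN,
# which fillna then restores from the original df1 values.
out = (df1 - df2).fillna(df1)
```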
209
43,855,103
Calling a basic LSTM cell within a custom Tensorflow cell
<p>I'm trying to implement the MATCH LSTM from this paper: <a href="https://arxiv.org/pdf/1608.07905.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/1608.07905.pdf</a></p> <p>I'm using Tensorflow. One part of the architecture is an RNN that uses the input and the previous state to compute an attention vector whic...
<p>I'm also trying to reimplement Match_LSTM for Squad for experiment. I use <a href="https://github.com/MurtyShikhar/Question-Answering" rel="nofollow noreferrer">MurtyShikhar's</a> as reference. It works! However, he had to customize AttentionWrapper and use existed BasicLSTM cell.</p> <p>I also try to create a Matc...
tensorflow
1
210
43,583,572
How can I rename strings of indices?
<p>I am looking forward to rename the indices names 'Juan Gonzalez' to 'Jason', 'Jorge Sanchez' to 'George' and 'Miguel Sanz' to 'Michael'  </p> <pre><code>                            age height(cm) weight(kg) People Juan Gonzalez               22     181    60 Jorge Sanchez          34     1...
<p>It seems there are some whitespaces in index values.</p> <p>To remove them, use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>strip</code></a>:</p> <pre><code>df.index = df.index.str.strip() </code></pre> <p>Or add parameter <code>skipin...
python|pandas
2
211
73,043,768
Find first row after a specific row with higher value in a column in pandas
<p>I have a pandas dataframe like this:</p> <pre class="lang-py prettyprint-override"><code> first second third 0 2 2 False 1 3 1 True 2 1 4 False 3 0 6 False 4 5 7 True 5 4 2 False 6 3 4 False 7 3 ...
<p>You can try</p> <pre class="lang-py prettyprint-override"><code>m = df['third'].cumsum() out = (df[m.gt(0) &amp; (~df['third'])] # filter out heading False row and the middle True row .groupby(m, as_index=False) # select the first row that value in the second column greater than in the first column ...
python|pandas|dataframe
1
212
73,137,976
pandas dataframe replace multiple substring of column
<p>I have below the code</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': ['$5,756', '3434', '$45', '1,344']}) pattern = ','.join(['$', ',']) df['A'] = df['A'].str.replace('$|,', '', regex=True) print(df['A']) </code></pre> <p>What I am trying to remove every occurrence of '$' or ','... so I am trying to r...
<p>Use:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': ['$5,756', '3434', '$45', '1,344']}) df['A'] = df['A'].str.replace('[$,]', '', regex=True) print(df) </code></pre> <p><strong>Output</strong></p> <pre><code> A 0 5756 1 3434 2 45 3 1344 </code></pre> <p>The problem is that the character <cod...
python|pandas|dataframe
3
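The character-class fix from that answer, runnable on the question's data; `[$,]` treats `$` as a literal, whereas in `'$|,'` the `$` is an end-of-string anchor:

```python
import pandas as pd

df = pd.DataFrame({"A": ["$5,756", "3434", "$45", "1,344"]})

# '[$,]' matches either '$' or ',' literally inside the brackets.
df["A"] = df["A"].str.replace("[$,]", "", regex=True)
```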
213
73,016,142
I can't import yfinance and pandas in JupyterNotebook or pycharm. (Mac M1)
<p>I'm working on M1. I tried to import pandas in Jupyter. But it doesn't work.</p> <p>When I check it using 'pip show pandas' in Jupyter, it appears like this. <a href="https://i.stack.imgur.com/UpaRE.png" rel="nofollow noreferrer">enter image description here</a></p> <p>But I can't import Pandas in Jupyter. Error app...
<p>I used to have some similar issues with importing after I pip installed the needed modules piecemeal. To fix this, I started from scratch and used Anaconda on both mac and windows machines to install python and the needed modules without any further issues.</p> <p><a href="https://www.anaconda.com/products/distribu...
python|python-3.x|pandas|jupyter-notebook|python-import
0
214
73,071,223
The kernel appears to have died. It will restart automatically. Problem with matplotlib.pyplot
<p>Whenever I am trying to plot the history of a model my kernel outputs this error.</p> <blockquote> <p>Error: &quot;The kernel appears to have died. It will restart automatically.&quot;</p> </blockquote> <p>It only occurs when I am trying to plot the history of any model otherwise matplotlib.pyplot works perfectly. I...
<p>After investigating a lot, I found that matplotlib.pyplot wasn't installed properly in the current env. Creating a new env and then installing the library resolved my problem.</p>
python|tensorflow|matplotlib|keras|conda
0
215
72,910,978
My dataframe column output NaN for all values
<pre><code>41-45 93 46-50 81 36-40 73 51-55 71 26-30 67 21-25 62 31-35 61 56-70 29 56-60 26 61 or older 23 15-20 10 Name: age, dtype: int64 </code></pre> <pre><code> pd.to_numeric(combined['age'], errors='coerce') </code></pre...
<p>try the below:</p> <pre><code>import pandas as pd df = pd.DataFrame({&quot;age&quot;: [&quot;41-45&quot;, &quot;46-50&quot;,&quot;61 or older&quot;], &quot;Col2&quot;: [93, 81, 23]}) Cols = [&quot;Lower_End_Age&quot;, &quot;Higher_End_Age&quot;,] # list of column names for later # replacing whitespace by delimite...
python|pandas|dataframe|nan
0
216
70,669,308
Python: Only keep section of string after second dash and before third dash
<p>I have a column 'Id' that has data like this:</p> <p>'10020-100-700-800-2'</p> <p>How can I create a new column that would only contain the third number, in this case 700, for each row?</p> <p>Here is an example dataframe:</p> <p>d = {'id': {0: '10023_11_762553_762552_11', 1: '10023_14_325341_359865_14', 2: '10023_1...
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>str.split</code></a> and take the third argument of the list:</p> <pre><code>df = pd.DataFrame({'Col': ['10020-100-700-800-2']}) df['NewCol'] = df['Col'].str.split('-').str[2].astype(int) print(df)...
python|pandas
0
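The split-and-take pattern from that answer, applied to the underscore-delimited ids given in the question (only the first two sample ids are reproduced here):

```python
import pandas as pd

df = pd.DataFrame({"id": ["10023_11_762553_762552_11",
                          "10023_14_325341_359865_14"]})

# Split on the delimiter and keep the third field (index 2).
df["third"] = df["id"].str.split("_").str[2].astype(int)
```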
217
70,507,719
Fill pd.Series from list of values if value exist in another df.Series description
<p>I need to solve tricky problem, and minimize big O notation problem.</p> <p>I have two pandas dataframes:</p> <p>The first df is like as:</p> <pre><code>| source | searchTermsList | |:---- |:------:| | A | [t1, t2, t3,...tn] | | B | [t4, t5, t6,...tn] | | C | [t7, t8, t9,...tn] | </code></pre> <p>Where the fir...
<p>Create <code>Series</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a> for indices of <code>searchTermsList</code> for mapping first:</p> <pre><code>s = df1.explode('searchTermsList').set_index('searc...
python|pandas|dataframe|dictionary
1
218
70,718,030
Is there a function similar to ncdisp to view .npy files?
<p>Is there a function similar to <code>ncdisp</code> in MATLAB to view .npy files?</p> <p>Alternatively, it would be helpful to have some command that would spit out header titles in a .npy file. I can see everything in the file, but it is absolutely enormous. There has to be a way to view what categories of data are ...
<p>Looking at the code for <code>np.lib.npyio.load</code> we see that it calls <code>np.lib.format.read_array</code>. That in turn calls <code>np.lib.format._read_array_header</code>.</p> <p>This can be studied and perhaps even used, but it isn't in the public API.</p> <p>But if you are such a MATLAB fan as you claim,...
python|numpy|file-type
-1
219
42,943,979
Python & Pandas: How can I skip creating intermediate data file when reading data?
<p>I have data files looks like this:</p> <pre><code>ABE200501.dat ABE200502.dat ABE200503.dat ... </code></pre> <p>So I first combine these files into <code>all.dat</code>, and do a little bit clean up </p> <pre><code>fout=open("all.dat","w") for year in range(2000,2017): for month in range(1,13): try: ...
<p>The <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow noreferrer"><code>StringIO</code></a> module lets you treat strings as files.</p> <p>Example from the docs:</p> <pre><code>import StringIO output = StringIO.StringIO() output.write('First line.\n') print &gt;&gt;output, 'Second line.' # ...
python|pandas
1
220
27,184,204
Write a method that accesses to a parametric dimension numpy array
<p>I'm using <strong>python 2.7</strong> and <strong>numpy 1.9</strong>.<br> I have 3 methods that applies a transformation to a couple of numpy arrays.</p> <pre><code>def sum_arrays2(a, b): c = np.zeros(a.shape) c[:, 0:-1] = (a[:, 1:] + b[:, 0:-1]) ** 2 return a[0:-1, 1:] + c[1:, 0:-1] def sum_arrays3(a,...
<p>You can do the following:</p> <pre><code>def sum_arrays(a, b): c = np.zeros(a.shape) c[..., :-1] = (a[..., 1:] + b[..., :-1]) ** 2 return a[:-1, ..., 1:] + c[1:, ..., :-1] </code></pre>
python|arrays|python-2.7|numpy
3
221
26,991,363
Numpy: Extract particular rows in matrix
<p>I have a matrix W and two vectors y1 and y2. I want to extract rows from W. The rows I am interested in are in the range [y1:y2]. What is the best way of doing this in Numpy? <strong>Can this be done without using any for-loops or map method</strong>? For e.g.:</p> <pre><code>W = [[ 1., 2., 3., 4.], [ ...
<p>You need to build the slices yourself as indices and then use them:</p> <pre><code>indices = np.concatenate([np.arange(iy1, iy2) for iy1, iy2 in zip(y1.ravel(), y2.ravel())]) newW = W[indices] </code></pre>
python|numpy|scipy
1
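A self-contained sketch of that indexing idea; the matrix and the (start, stop) vectors here are invented stand-ins for the question's W, y1, y2:

```python
import numpy as np

W = np.arange(24.0).reshape(6, 4)   # stand-in matrix
y1 = np.array([0, 3])               # start rows (inclusive)
y2 = np.array([2, 5])               # end rows (exclusive)

# One arange per (start, stop) pair, concatenated into a flat index,
# then fancy-index the rows of W -- no explicit Python-level row loop.
indices = np.concatenate([np.arange(a, b) for a, b in zip(y1, y2)])
newW = W[indices]
```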
222
30,695,034
set column of pandas.DataFrame object
<p>Ideally, I want to be able something like: </p> <pre><code>cols = ['A', 'B', 'C'] df = pandas.DataFrame(index=range(5), columns=cols) df.get_column(cols[0]) = [1, 2, 3, 4, 5] </code></pre> <p>What is the pythonic/pandonic way to do this?</p> <p>Edit: I know that I can access the column 'A' by <code>df.A</code>, b...
<p>Okay, this is particularly straightforward. </p> <pre><code>df[cols[0]] = [1, 2, 3, 4, 5] </code></pre>
python|pandas
0
223
30,730,718
Merge two daily series into one hour series
<p>I have two daily series and I have to merge them into one hour series with the 1st series for the first 12 hours and the 2nd series for the remaining hours. </p> <p>Is there a more efficient way instead of building a list manually and convert it to series? Thanks </p> <pre><code>a = pd.Series(np.random.rand(5), pd...
<p>possibly:</p> <pre><code>&gt;&gt;&gt; b.index += dt.timedelta(hours=12) &gt;&gt;&gt; pd.concat((a, b), axis=0).sort_index() 2015-01-01 00:00:00 0.150 2015-01-01 12:00:00 0.970 2015-01-02 00:00:00 0.746 2015-01-02 12:00:00 0.937 2015-01-03 00:00:00 0.523 2015-01-03 12:00:00 0.392 2015-01-04 00:00:0...
python|pandas
1
224
30,550,295
Pandas data frame creation inconsistencies
<p>I was creating a pandas dataframe from a python dictionary: </p> <pre><code>import numpy as np import pandas as pd obs_dict = { 'pos':[[0,0],[10,10]], 'vel':[[0,0],[0,0]], 'mass':np.array([10000,10000]) } print pd.DataFrame(obs_dict) </code></pre> <p>Returns:</p> <...
<p>In this code:</p> <pre><code>obs_dict = { 'pos':[[0,0],[10,10]], 'vel':[[0,0],[0,0]], 'mass':np.array([10000,10000]) } print pd.DataFrame(obs_dict) </code></pre> <p>you just get a dataframe with some object columns; try dtypes:</p> <pre><code>tt = pd.DataFrame(obs_dict) tt.dtypes <...
python|arrays|numpy|pandas
0
225
19,395,019
for loop over 20 data frames on each day simultaneously
<p>I have the following data structure (still subject to changes):</p> <pre><code>pp = ([Pair1, Pair2, Pair3, ..., Pair25]) </code></pre> <p>Each Pair has has the following format: </p> <pre><code> &lt;class 'pandas.core.frame.DataFrame'&gt; DatetimeIndex: 2016 entries, 2005-09-19 00:00:00 to 2013-09-12 00:00:00 D...
<p>If there is no reason they need to be separate data frames you should combine them into one dataframe with a multi index or simply a column indicating which pair they belong to. Then you can group by to perform your function applications.</p> <pre><code>DF.groupby(['Date','pair']).apply(function) </code></pre>
python|numpy|pandas
2
226
19,584,599
scatter function in matplotlib
<pre><code>from numpy import array import matplotlib import matplotlib.pyplot as plt from fileread import file2matrix datingDataMat,datingLabels = file2matrix('iris_data.txt') fig = plt.figure() ax = fig.add_subplot(111) ax.scatter(datingDataMat[:,1], datingDataMat[:,2],15.0*array(datingLabels), 15.0*array(datingLabels...
<p>The array should contains numeric values.</p> <pre><code>&gt;&gt;&gt; 15.0 * array([1,2]) array([ 15., 30.]) </code></pre> <hr> <pre><code>&gt;&gt;&gt; 15.0 * array(['1','2']) Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; TypeError: unsupported operand type(s) for *: 'float...
python|numpy|matplotlib
3
227
13,167,391
Filter out groups with a length equal to one
<p>I am creating a <code>groupby</code> object from a Pandas <code>DataFrame</code> and want to select out all the groups with &gt; 1 size.</p> <p>Example:</p> <pre><code> A B 0 foo 0 1 bar 1 2 foo 2 3 foo 3 </code></pre> <p>The following doesn't seem to work:</p> <pre><code>grouped = df.groupby('A') group...
<p>As of pandas 0.12 you can do:</p> <pre><code>&gt;&gt;&gt; grouped.filter(lambda x: len(x) &gt; 1) A B 0 foo 0 2 foo 2 3 foo 3 </code></pre>
python|pandas|group-by
46
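The accepted one-liner, runnable on the question's own frame; `filter` keeps every row belonging to a group whose size exceeds 1 and preserves the original row order:

```python
import pandas as pd

df = pd.DataFrame({"A": ["foo", "bar", "foo", "foo"],
                   "B": [0, 1, 2, 3]})

# Drop groups of size 1 ("bar"); keep all "foo" rows.
out = df.groupby("A").filter(lambda x: len(x) > 1)
```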
228
13,187,778
Convert pandas dataframe to NumPy array
<p>How do I convert a pandas dataframe into a NumPy array?</p> <p>DataFrame:</p> <pre><code>import numpy as np import pandas as pd index = [1, 2, 3, 4, 5, 6, 7] a = [np.nan, np.nan, np.nan, 0.1, 0.1, 0.1, 0.1] b = [0.2, np.nan, 0.2, 0.2, 0.2, np.nan, np.nan] c = [np.nan, 0.5, 0.5, np.nan, 0.5, 0.5, np.nan] df = pd.Dat...
<h1>Use <code>df.to_numpy()</code></h1> <p>It's better than <code>df.values</code>, here's why.<sup>*</sup></p> <p>It's time to deprecate your usage of <code>values</code> and <code>as_matrix()</code>.</p> <p>pandas v0.24.0 introduced two new methods for obtaining NumPy arrays from pandas objects:</p> <ol> <li><strong>...
python|arrays|pandas|numpy|dataframe
544
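A compact demonstration of the recommended call (a two-row slice of the question's data, since the full frame is truncated above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [np.nan, 0.1], "b": [0.2, np.nan]},
                  index=[1, 2])

# Preferred modern API; df.values and df.as_matrix() are legacy.
arr = df.to_numpy()
```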
229
29,236,097
cuts in the dataset with python
<p>Im studying the sun frequencies during one month every minute. So I have one matrix M with 43200 elements, one per minute.</p> <p>The way to do the power spectrum for all the elements is:</p> <pre><code>import numpy as np import pylab as pl from scipy import fftpack M=np.loadtxt('GOLF-SOHO-Sol.dat') N=len(M) dt=1...
<pre><code># your data values (all ones for illustration) &gt;&gt;&gt; values = numpy.ones( (43200,) ) # reshape to matrix with rows of 720 samples &gt;&gt;&gt; mat = values.reshape( (60, 720) ) # now it's easy to set alternating rows to 0.0 &gt;&gt;&gt; mat[1::2, :] = 0 # and because y is a view of your data, "v...
python|python-2.7|python-3.x|numpy
1
230
29,056,156
How to pad multiple lists with trailing zeros?
<p>Suppose I have two lists containing the same number of elements which are lists of integers. For instance:</p> <pre><code>a = [[1, 7, 3, 10, 4], [1, 3, 8], ..., [2, 5, 10, 91, 54, 0]] b = [[5, 4, 23], [1, 2, 0, 4], ..., [5, 15, 11]] </code></pre> <p>For each index, I want to pad the shorter list with trailing zero...
<p>I'm sure there's an elegant Python one-liner for this sort of thing, but sometimes a straightforward imperative solution will get the job done:</p> <pre><code>for i in xrange(0, len(a)): x = len(a[i]) y = len(b[i]) diff = max(x, y) a[i].extend([0] * (diff - x)) b[i].extend([0] * (diff - y)) pri...
python|arrays|numpy|padding
3
231
22,895,405
How to check if there exists a row with a certain column value in pandas dataframe
<p>Very new to pandas.</p> <p>Is there a way to check given a pandas dataframe, if there exists a row with a certain column value. Say I have a column 'Name' and I need to check for a certain name if it exists.</p> <p>And once I do this, I will need to make a similar query, but with a bunch of values at a time. I rea...
<pre><code>import numpy as np import pandas as pd df = pd.DataFrame(data = np.arange(8).reshape(4,2), columns=['name', 'value']) </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; df name value 0 0 1 1 2 3 2 4 5 3 6 7 &gt;&gt;&gt; any(df.name == 4) True &gt;&gt;&gt; any(df.na...
python|pandas|dataframe
9
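Both checks from that answer in one runnable sketch; the names are invented sample data, and `isin` covers the "bunch of values at a time" part of the question:

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Alice", "Bob", "Carol"]})

# Single value: does any row have this Name?
exists = (df["Name"] == "Bob").any()

# Several values at once: boolean mask via isin.
mask = df["Name"].isin(["Bob", "Dave"])
```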
232
22,881,825
How to deal with SettingWithCopyWarning in this case
<p>I've read the answers in <a href="https://stackoverflow.com/questions/20625582/how-to-deal-with-this-pandas-warning">How to deal with this Pandas warning?</a> but I can't figure out if I should ignore the SettingWithCopyWarning warning or if I'm doing something really wrong.</p> <p>I have this function that resampl...
<p>As Jeff points out, since this is a MulitIndex column you should use a tuple to access it:</p> <pre><code>resampled_data['price']['close'] resampled_data[('price', 'close')] resampled_data.loc[:, ('price', 'close')]  # equivalent </code></pre> <p>This also disaembiguates it from take the column and the row:</p> ...
python|pandas|resampling
2
233
13,736,115
How to create values from Zipf Distribution with range n in Python?
<p>I would like to create an array of Zipf distributed values within the range [0, 1000].</p> <p>I am using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.random.zipf.html" rel="nofollow">numpy.random.zipf</a> to create the values but I cannot create them within the range I want.</p> <p>How can I ...
<p>Normalize and multiply by 1000?</p> <pre><code>a=2 s = np.random.zipf(a, 1000) result = (s/float(max(s)))*1000 print min(s), max(s) print min(result), max(result) </code></pre> <p>Although, isn't the whole point of zipf that the range of values is a function of the number of values generated?</p>
python|numpy|distribution
3
234
13,691,485
Issue with merging two dataframes in Pandas
<p>I am attempting to left merge two dataframes, but I am running into an issue. I get only NaN's in columns that are in the right dataframe.</p> <p>This is what I did:</p> <pre><code>X = read_csv('fileA.txt',sep=',',header=0); print "-----FILE DATA-----" print X; X = X.astype(object); # convert every column to strin...
<p>Working for me (v0.10.0b1, though I am somewhat confident--but haven't checked-- this would also work in 0.9.1):</p> <pre><code>In [7]: x Out[7]: Chrom Gene Position 0 20 DZANK1 18446022 1 20 TGM6 2380332 2 20 C20orf96 271226 In [8]: y Out[8]: Chrom Position Random 0 2...
python|pandas|dataset
1
235
29,518,923
numpy.asarray: how to check up that its result dtype is numeric?
<p>I have to create a <code>numpy.ndarray</code> from array-like data with int, float or complex numbers.</p> <p>I hope to do it with <code>numpy.asarray</code> function.</p> <p>I don't want to give it a strict <code>dtype</code> argument, because I want to convert complex values to <code>complex64</code> or <code>co...
<p>You could check if the dtype of the array is a sub-dtype of <code>np.number</code>. For example:</p> <pre><code>&gt;&gt;&gt; np.issubdtype(np.complex128, np.number) True &gt;&gt;&gt; np.issubdtype(np.int32, np.number) True &gt;&gt;&gt; np.issubdtype(np.str_, np.number) False &gt;&gt;&gt; np.issubdtype('O', np.numbe...
python|arrays|numpy|types
67
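The check from that answer wrapped into a tiny helper (the function name `is_numeric` is mine, for illustration):

```python
import numpy as np

def is_numeric(arr_like):
    """True if np.asarray(arr_like) yields a numeric dtype."""
    return np.issubdtype(np.asarray(arr_like).dtype, np.number)
```

This accepts int, float, and complex inputs alike, so no strict `dtype` argument is needed up front.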
236
62,235,278
Hotencoded values & DataFrame for logistic regression
<p>I am trying to run a logistic regression on a dataset that has features with some categorical values. In order to process those features through regression, I was planning to encode them</p> <pre><code>#Select categorical features only &amp; encode name numerically with LabelEncoder cat_features = df.select_dtypes(...
<p>You should add all 290 columns to your dataframe with the remaining (i.e. non-categorical or numerical) values. For that you can create a dataframe from the array and join it to the original dataframe:</p> <pre><code>final_cat_features_df = pd.DataFrame(final_cat_features, index=df.index) df = df.join(final_cat_fea...
python|pandas|dataframe|encoding|scikit-learn
1
237
62,381,286
How to obtain sequence of submodules from a pytorch module?
<p>For a pytorch <a href="https://pytorch.org/docs/master/generated/torch.nn.Module.html" rel="nofollow noreferrer">module</a>, I suppose I could use <code>.named_children</code>, <code>.named_modules</code>, etc. to obtain a list of the submodules. However, I suppose the list is not given in order, right? An example: ...
<p>In Pytorch, the results of <code>print(model)</code> or <code>.named_children()</code>, etc are listed based on the order they are declared in <code>__init__</code> of the model's class e.g.</p> <p><strong>Case 1</strong></p> <pre><code>class Model(nn.Module): def __init__(self): super().__init__() ...
pytorch|huggingface-transformers
1
238
62,433,943
Trying to apply a function on a Pandas DataFrame in Python
<p>I'm trying to apply this function to fill the <code>Age</code> column based on <code>Pclass</code> and <code>Sex</code> columns. But I'm unable to do so. How can I make it work?</p> <pre><code>def fill_age(): Age = train['Age'] Pclass = train['Pclass'] Sex = train['Sex'] if pd.isnull(Age): ...
<p>You should consider using parentheses to separate the arguments (which you already did) and change the boolean operator <code>and</code> to the bitwise operator <code>&amp;</code> to avoid this type of error. Also, keep in mind that if you want to use <code>apply</code> then you should use a parameter <code>x</code> fo...
python|pandas
1
239
62,362,478
What is the difference between batch size in the data pipeline and batch size in model.fit()?
<p>Are these 2 the same batch-size, or they have different meaning?</p> <pre><code>BATCH_SIZE=10 dataset = tf.data.Dataset.from_tensor_slices((filenames, labels)) dataset = dataset.batch(BATCH_SIZE) </code></pre> <p>2nd</p> <pre><code>history = model.fit(train_ds, epochs=EPOCHS, validation_da...
<p><strong>Dataset (Batch Size)</strong></p> <p>Batch size simply means how much data will pass through the pipeline you defined. In the case of a Dataset, the batch size represents how much data will be passed to the model in one iteration. For example, suppose you form a data generator and set the batch size to 8. Now on every iteratio...
tensorflow|machine-learning|neural-network|data-science|batchsize
2
240
48,331,736
Efficiently computing Khatri-Rao like sum (pairwise row sum)
<p>I'm trying to compute <a href="https://en.wikipedia.org/wiki/Kronecker_product#Khatri%E2%80%93Rao_product" rel="nofollow noreferrer">Khatri-Rao</a> like sum (i.e. pairwise row sum) and was able to come up with this solution:</p> <pre><code>In [15]: arr1 Out[15]: array([[1, 2, 3], [2, 3, 4], [3, 4, 5]...
<p>We can leverage <code>broadcasting</code> -</p> <pre><code>(arr1[:,None] + arr2).reshape(-1,arr1.shape[1]) </code></pre> <p>For large arrays, we can gain some further speedup with <a href="http://numexpr.readthedocs.io/en/latest/intro.html#how-it-works" rel="nofollow noreferrer"><code>numexpr</code></a> to transfe...
python|performance|numpy|multidimensional-array|linear-algebra
2
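The broadcasting trick from that answer, runnable on the arrays shown in the question; inserting a length-1 axis pairs every row of arr1 with every row of arr2:

```python
import numpy as np

arr1 = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5]])
arr2 = np.array([[2, 3, 4], [3, 4, 5], [4, 5, 6]])

# arr1[:, None] has shape (3, 1, 3); adding arr2 (3, 3) broadcasts to
# (3, 3, 3); the reshape flattens the pair axes back into rows.
out = (arr1[:, None] + arr2).reshape(-1, arr1.shape[1])
```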
241
48,351,276
Pandas Creating Normal Dist series
<p>I'm trying to convert an excel "normal distribution" formula into python.</p> <p>(1-NORM.DIST(a+col,b,c,TRUE))/(1-NORM.DIST(a,b,c,TRUE)))</p> <p>For example: Here's my given df</p> <pre><code>Id a b c ijk 4 3.5 12.53 xyz 12 3 10.74 </code></pre> <p>My goal: </p> <pre><code>Id a b c 0...
<p><code>NORM.DIST(..., TRUE)</code> means the cumulative distribution function and <code>1 - NORM.DIST(..., TRUE)</code> means the survival function. These are available under scipy's stats module (see <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.norm.html" rel="nofollow noreferrer">ss.nor...
pandas|normal-distribution
2
242
47,995,208
Deep neural network: could a random batch size be useful for training?
<p>I have been working with Keras for about three months and now I wonder whether it could be useful to train on batches of different (random) sizes (16, 32, 64, 128), to combine the benefits of the different batch sizes.</p> <p>I didn't find any document that answers this question. Am I totally wrong?</p>
<p>I've seen two most popular strategies of working with the batch size:</p> <ul> <li><p>Select it as large as possible so that the model still fits in GPU memory. This is done mostly to speed up training due to parallelism and vectorization.</p></li> <li><p>Tune batch size, just like any other hyper-parameter, either...
tensorflow|machine-learning|deep-learning|keras
0
243
48,629,866
declaring matrix as np.float32(np.array())
<pre><code> matrix = np.float32(np.array([[0.0 for i in range(dimension)] for j in range(dimension)])) </code></pre> <p>If I want to do matrix operation in single precision, is the declaring array as above sufficient, or do I have to truncate for every arithmetic operation as follows?</p> <pre><code>np.float32(matrix...
<p>No, you can specify the <code>dtype</code> of the array:</p> <pre><code>np.array([[0.0 for i in range(dimension)] for j in range(dimension)], <b>dtype=np.float32</b>)</code></pre> <p>Note that if you work with zeros, you can also use:</p> <pre><code>np.zeros((dimension, dimension), dtype=np.float32) </co...
python|numpy
3
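A runnable confirmation of the point in that answer: build the array in single precision once and arithmetic stays float32, with no per-operation truncation needed (`dimension = 3` is an arbitrary choice here):

```python
import numpy as np

dimension = 3

# Zero matrix created directly in single precision.
matrix = np.zeros((dimension, dimension), dtype=np.float32)

# Operations between float32 arrays and float32 scalars stay float32.
result = matrix + np.float32(1.5)
```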
244
48,605,961
how to change the datatype of numpy.random.rand array in python?
<p>I'm learning the Numpy module from scratch, going through the official Numpy documentation. <br> Now my goal is to create a random (3, 3) array.</p> <pre><code>&gt;&gt;&gt; np.random.rand(3,3) </code></pre> <p>However, the output I received is a bunch of random float values.</p> <pre><code>array([[ 0.33519419, 0.53502...
<p>If you want to generate random <code>int</code>, use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.randint.html" rel="nofollow noreferrer">np.random.randint()</a>.</p> <p>If you want to convert <code>float</code> to <code>int</code>, use <code>np.int32(something)</code>.</p>
python|numpy|random
2
245
48,593,014
Launching tensorboard error - ImportError: cannot import name weakref
<p>With python 2.7 on a mac with tensorflow and running <code>tensorboard --logdir= directory/wheremylog/fileis</code> produces the following error <code>ImportError: cannot import name weakref</code></p> <p>I've seen several folks remedy the issue with <code>pip install backports.weakref </code> but that requirement...
<p>For those with the same problem I was able to fix it through the following: 1) find where your tensorflow lives <code>pip show tensorflow</code> and look at the location line, copy it.</p> <p>2) For me it was <code>cd /usr/local/lib/python2.7/site-packages/</code></p> <p>3) <code>cd tensorflow/python/lib</code></p...
python|tensorflow|tensorboard
2
246
48,843,521
Training a RNN to output word2vec embedding instead of logits
<p>Traditionally it seems that RNNs use logits to predict next time step in the sequence. In my case I need the RNN to output a word2vec (50 depth) vector prediction. This means that the cost function has be based off 2 vectors: <code>Y</code> the actual vector of the next word in the series and <code>Y_hat</code>, the...
<p>What is the reason that you want to predict a word embedding? Where are you getting the "ground truth" word embeddings from? For word2vec models you typically will re-use the trained word-embeddings in future models. If you trained a <code>word2vec</code> model with an embedding size of 50, then you would have 50-d ...
python|tensorflow|machine-learning|deep-learning
5
247
70,897,392
How to add rows to a matrix with pad?
<p>I have a matrix like this:</p> <pre><code>profile=np.array([[0,0,0.5,0.1], [0.3,0,0,0], [0,0,0.1,0.9], [0,0,0,0.1], [0,0.5,0,0]]) </code></pre> <p>And I want to add a row before and after filled with zeros. How can I do that? I thought of using ...
<p>You can use <code>np.pad</code>:</p> <pre><code>out = np.pad(profile, 1)[:, 1:-1] </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; out array([[0. , 0. , 0. , 0. ], [0. , 0. , 0.5, 0.1], [0.3, 0. , 0. , 0. ], [0. , 0. , 0.1, 0.9], [0. , 0. , 0. , 0.1], [0. , 0.5, 0. , 0. ], ...
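An equivalent sketch that pads only the row axis explicitly, so no slicing is needed afterwards (same `profile` as in the question):

```python
import numpy as np

profile = np.array([[0, 0, 0.5, 0.1],
                    [0.3, 0, 0, 0],
                    [0, 0, 0.1, 0.9],
                    [0, 0, 0, 0.1],
                    [0, 0.5, 0, 0]])

# pad_width is (before, after) per axis: one zero row on each side, no columns
out = np.pad(profile, ((1, 1), (0, 0)))
```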
python|python-3.x|numpy
1
248
51,806,675
How can I determine whether a intermediate results has or has no data?
<p>How can I implement "if there exist items in a Tensor then calculate the average value of it, else assign it a certain value"? take <strong>tf.gather_nd()</strong> for example choosing some rows from source_tensor with <strong>shape (?, 2)</strong></p> <pre><code>result = tf.gather_nd(source_tensor, indices) </code...
<p>You should be able to do this via <code>tf.cond</code>, which executes one of two branches depending on some condition. Note that the predicate must be a boolean tensor, so compare the size against zero. I haven't tested the below code so please report whether it works.</p> <pre><code>mean = tf.cond(tf.size(result) &gt; 0, lambda: tf.reduce_mean(result), lambda: some_constant) </code></pre> <p>The idea ...
python|tensorflow|shapes|dimensions|tensor
1
249
51,743,033
How to set values in a data frame based on index
<p>Here's my data</p> <pre><code> customer_id feature_1 feature_2 feature_3 0 1 78 73 63 1 2 79 71 66 2 2 82 76 69 3 3 43 32 ...
<p><code>DataFrame.set_value</code> is deprecated (since pandas 0.21); use the <code>at</code> indexer instead:</p> <pre><code>df.at[3, 'target'] = 'bad' </code></pre> <p>On older pandas versions the equivalent call is</p> <pre><code>df.set_value(3, 'target', 'bad') </code></pre>
python|pandas|dataframe
2
250
41,918,795
Minimize a function of one variable in Tensorflow
<p>I am new to Tensorflow and was wondering whether it would be possible to minimize a function of one variable using Tensorflow.</p> <p>For example, can we use Tensorflow to minimize 2*x^2 - 5^x + 4 using an initial guess (say x = 1)?</p> <p>I am trying the following:</p> <pre><code>import tensorflow as tf import n...
<p>If you want to minimize a single parameter you could do the following (I've avoided using a placeholder since you are trying to train a parameter - placeholders are often used for hyper-parameters and input and aren't considered trainable parameters):</p> <pre><code>import tensorflow as tf x = tf.Variable(10.0, tr...
python|python-2.7|tensorflow
18
251
41,860,817
Hyperparameter optimization for Deep Learning Structures using Bayesian Optimization
<p>I have constructed a CLDNN (Convolutional, LSTM, Deep Neural Network) structure for raw signal classification task.</p> <p>Each training epoch runs for about 90 seconds and the hyperparameters seems to be very difficult to optimize.</p> <p>I have been research various ways to optimize the hyperparameters (e.g. ran...
<blockquote> <p>Although I am still not fully understanding the optimization algorithm, I feel like it will help me greatly.</p> </blockquote> <p>First up, let me briefly explain this part. Bayesian Optimization methods aim to deal with the exploration-exploitation trade-off in the <a href="https://en.wikipedia.org/wi...
optimization|machine-learning|tensorflow|deep-learning|bayesian
23
252
64,357,821
How to change column values to rows value based on condition
<p>df:</p> <pre><code>items M1 v1 v2 v3 A c1 56 52 25 A c2 66 63 85 B c1 29 76 36 B c2 14 24 63 </code></pre> <p>df_output:</p> <pre><code>items M1 C1 C2 A V1 56 66 A V2 52 63 A V3 25 ...
<p>You are looking to combine <code>stack</code> and <code>unstack</code>:</p> <pre><code>(df.set_index(['items','M1']) .unstack('M1') # unstack promotes M1 to columns .stack(level=0) # stack turns original columns to index level .rename_axis(columns=None,...
python|pandas
2
253
64,195,126
Order Pandas DataFrame by groups and Timestamp
<p>I have the below sample DataFrame</p> <pre><code> Timestamp Item Char Value 4 1/7/2020 1:22:22 AM B C.B 3.2 0 1/7/2020 1:23:23 AM A C.A 1.0 2 1/7/2020 1:23:23 AM A C.B 1.3 1 1/7/2020 1:23:24 AM A C.A 2.0 5 1/7/2020 1:23:29 AM B C.B 3.0 3 1/7/2020 1:25:23 AM B ...
<p>Let's <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> the column <code>Timestamp</code> on <code>Char</code> and <code>Item</code> and compute the <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated...
python|pandas
4
254
64,455,172
ValueError: could not convert string to float: 'W'
<p>I've been trying to build a shallow neural network using the pandas breast cancer data and I keep getting this error. I would greatly appreciate it if someone could tell me what's actually wrong and how to fix it.</p> <pre><code>File &quot;D:\Users\USUARIO\Desktop\una carpeta para los oasda proyectos\Ex_Files_Python_EssT\Exercise Files\...
<p>You didn't show us where the problem was occurring, but the error message matches this code:</p> <pre><code>def predict(W, b, X): WT = np.transpose([&quot;W&quot;]) np.array(WT, dtype=np.float32) ... </code></pre> <p>Of course that would produce this error. An array w...
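A minimal sketch of the fix the answer hints at: transpose the array variable itself rather than a list containing the string literal (the values here are hypothetical):

```python
import numpy as np

W = np.array([[0.1], [0.2]], dtype=np.float32)

# np.transpose(["W"]) would operate on the one-element string list ["W"],
# which cannot be cast to float32 -- hence the ValueError.
WT = np.transpose(W)  # transposes the (2, 1) array into shape (1, 2)
```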
python|pandas|numpy|neural-network
0
255
49,078,385
Tensorflow: Select one column index per row using a list of indices
<p>I saw questions <a href="https://stackoverflow.com/questions/39684415/tensorflow-getting-elements-of-every-row-for-specific-columns">39684415</a>, <a href="https://stackoverflow.com/questions/37026425/elegant-way-to-select-one-element-per-row-in-tensorflow">37026425</a>, and <a href="https://stackoverflow.com/questi...
<p>It's easy to implement. For example, suppose you have a tensor data [[1,2],[3,4]] and you want to get the first column. You can use <strong>tf.transpose(data, perm=[1, 0])</strong> and <strong>tf.gather_nd(data, [[0]])</strong></p> <pre><code>import tensorflow as tf import numpy as np data = [[1, 3], [2, 4]] a ...
tensorflow
0
256
59,025,803
What will happen if I try to use GPU delegate under android 8.1
<p>Here below is the system architecture for NNAPI. <a href="https://i.stack.imgur.com/NAzIF.png" rel="nofollow noreferrer">enter image description here</a></p> <p>The NNAPI is available on Android 8.1 (API level27) or higher. What will happen if I try to use GPU delegate under android 8.1?</p>
<p>Tensorflow's GPU delegate is not using NNAPI (see the <a href="https://www.tensorflow.org/lite/performance/delegates" rel="nofollow noreferrer">TFLite documentation</a>).</p> <p>A couple of corrections on Shree's answer.</p> <ul> <li>NNAPI will be delegating to GPU or any other available device even in Android 8.1...
android|tensorflow|gpu|tensorflow-lite|nnapi
1
257
58,716,298
How do I set up a custom input-pipeline for sequence classification for the huggingface transformer models?
<p>I want to use one of the models for sequence classification provided by huggingface. It seems they are providing a function called <code>glue_convert_examples_to_features()</code> for preparing the data so that it can be input into the models. </p> <p>However, it seems this conversion function only applies to the g...
<p>Huggingface added a <a href="https://huggingface.co/transformers/custom_datasets.html" rel="nofollow noreferrer">fine-tuning with custom datasets</a> guide that contains a lot of useful information. I was able to use the information in the <a href="https://huggingface.co/transformers/custom_datasets.html#sequence-cl...
python-3.x|tensorflow|machine-learning|nlp
1
258
70,237,135
Iterating over a multi-dimentional array
<p>I have array of shape (3,5,96,96), where channels= 3, number of frames = 5 and height and width = 96 I want to iterate over dimension 5 to get images with size (3,96,96). The code which I have tried is below.</p> <pre><code>b = frame.shape[1] for i in range(b): fr = frame[:,i,:,:] </code></pre> <p>But this is n...
<p>You could swap axes (using <a href="https://numpy.org/doc/stable/reference/generated/numpy.swapaxes.html#numpy.swapaxes" rel="nofollow noreferrer"><code>numpy.swapaxes(a, axis1, axis2)</code></a>) to move the frame dimension into first position:</p> <pre><code>import numpy as np m = np.zeros((3, 5, 96, 96)) n = np.swapa...
python|numpy-ndarray
0
259
70,224,360
Faster implementation of fractional encoding (similar to one-hot encoding)
<h1>Problem Statement</h1> <p>Create an efficient fractional encoding (similar to a one-hot encoding) for a ragged list of components and corresponding compositions.</p> <h2>Toy Example</h2> <p>Take a composite material with the following <code>class: ingredient</code> combinations:</p> <ul> <li><a href="https://www.to...
<p>A solution that initializes an array with zeros then updates the fields:</p> <pre><code>columns = sorted(list(set(sum(list(components), [])))) data = np.zeros((len(components), len(columns))) for i in range(data.shape[0]): for component, composition in zip(components[i], compositions[i]): j = columns.in...
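For larger inputs, a hedged vectorized alternative is to build a long-format frame and pivot it; the component names below are made up for illustration:

```python
import numpy as np
import pandas as pd

components = [["filler", "resin"], ["filler", "resin", "hardener"]]
compositions = [[0.4, 0.6], [0.5, 0.3, 0.2]]

# build a long-format frame, then pivot to one column per component
long = pd.DataFrame({
    "row": np.repeat(range(len(components)), [len(c) for c in components]),
    "component": np.concatenate(components),
    "composition": np.concatenate(compositions),
})
encoded = (long.pivot(index="row", columns="component", values="composition")
               .fillna(0.0))
```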
python|arrays|dataframe|pandas-groupby|one-hot-encoding
2
260
70,092,417
Create Folium map with discrete color fill
<p>I would like to fill in a Folium map with discrete color values. But every shape is returned with the same value (i.e. - the same color)</p> <pre><code>Name Fill_Color Geometry A orange .... B yellow .... C purple ...
<p>If you use geopandas, which contains the geometry information, it will be converted internally to GeoJSON when referenced in the style function, so the reference becomes <code>r['properties']['fill_color']</code>. Also, no looping is required and the entire data frame can be handled at once. I have modified <a hre...
python|pandas|folium
0
261
56,152,408
How to convert String to Numpy Datetime64 from a dataframe
<p>I'm dealing with a DataFrame that contains a column of times like "hh:mm:ss", and I need to convert those values to the NumPy <code>datetime64</code> type.</p> <pre><code>import pandas as pd data = [dict(voie="V3", Start="06:10", End='06:20'), dict(voie="V5", Start='06:26', End='06:29'), dict(vo...
<p>Just use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>pandas.to_datetime</code></a> for each column. For example:</p> <pre><code>df.End = pd.to_datetime(df.End) df.End 0 2019-05-15 06:20:00 1 2019-05-15 06:29:00 2 2019-05-15 06:30...
python|pandas|numpy|matplotlib
1
262
56,317,060
Disabling `@tf.function` decorators for debugging?
<p>In TensorFlow 2, the <a href="https://www.tensorflow.org/alpha/tutorials/eager/tf_function" rel="noreferrer"><code>@tf.function</code></a> decorator allows for Python functions to become TensorFlow graphs (more or less) and can lead to some performance improvements. However, when decorated this way, <a href="https:/...
<p>Use <a href="https://www.tensorflow.org/api_docs/python/tf/config/run_functions_eagerly" rel="nofollow noreferrer"><code>tf.config.run_functions_eagerly(True)</code></a>.</p>
python|tensorflow|tensorflow2.0
8
263
56,079,886
how to use previous row value as well as values in other column in same row to compute value of a column in pandas
<p>I have a dataframe <code>df</code>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': [1, 1, 1,2,2,2,2], 'B': [10, 0, 0,5,0,0,0], 'C': [1,1,2,2,3,3,3], 'D': [2,3,4,5,2,3,4]}) </code></pre> <p>which looks like:</p> <pre><code> A B C D 0 1 10 1 2...
<p>This should do it:</p> <pre><code>def f(g): g.B = (g.B.shift() + g.C + g.D).cumsum() return g df.B.replace(0, df.groupby('A').apply(f).B) </code></pre> <p>The result is:</p> <pre><code> A B C D 0 1 10 1 2 1 1 14 1 3 2 1 20 2 4 3 2 5 2 5 4 2 10 3 2 5 2 16 3 3 6 2 23 3 4 ...
python|pandas
4
264
56,141,142
How to pass "step" to ExponentialDecay in GradientTape
<p>I tried to use an optimizers.schedules.ExponentialDecay instance as the learning_rate for the Adam optimizer, but I don't know how to pass "step" to it when training the model in GradientTape.</p> <p>I use tensorflow-gpu-2.0-alpha0 and python3.6. And I read the doc <a href="https://tensorflow.google.cn/versions/r2.0/api_doc...
<p>Using an instance of <code>tf.keras.optimizers.schedules.ExponentialDecay()</code> might not work with <code>GradientTape</code>; it is more suitable for Keras's <code>model.fit()</code>. What I understand is that you need to reduce or schedule the learning rate after a certain number of iterations/steps, so there is a ...
python|tensorflow2.0
0
265
56,095,054
Cleaning up a column based on spelling? Pandas
<p>I've got two very important, user entered, information columns in my data frame. They are mostly cleaned up except for one issue: the spelling, and the way names are written differ. For example I have five entries for one name: "red rocks canyon", "redrcks", "redrock canyon", "red rocks canyons". This data set is t...
<p>I would look into doing <a href="https://en.wikipedia.org/wiki/Phonetic_algorithm" rel="nofollow noreferrer">phonetic string matching</a> here. The basic idea behind this approach is to obtain a phonetic encoding for each entered string, and then group spelling variations by their encoding. Then, you could choose th...
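As a sketch of the idea, here is a simplified Soundex encoding (it omits the full H/W separation rule of standard Soundex); grouping the question's variant spellings by this key clusters them together:

```python
def soundex(name):
    """Simplified Soundex: first letter + up to three digits.

    Omits the H/W separation rule of the full standard algorithm.
    """
    codes = {}
    for letters, digit in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                           ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in letters:
            codes[ch] = digit
    name = "".join(ch for ch in name.lower() if ch.isalpha())
    if not name:
        return ""
    out = [name[0].upper()]
    prev = codes.get(name[0], "")
    for ch in name[1:]:
        digit = codes.get(ch, "")
        if digit and digit != prev:
            out.append(digit)
        prev = digit  # vowels reset prev, so repeats across vowels are kept
    return ("".join(out) + "000")[:4]

# variant spellings from the question collapse to the same key
keys = {s: soundex(s) for s in ["red rocks canyon", "redrcks", "redrock canyon"]}
```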
python-3.x|pandas|pandas-groupby|sklearn-pandas
2
266
55,637,437
How create a simple animated graph with matplotlib from a dataframe
<p>Can someone help me correct the code below to visualize this data with animated matplotlib?</p> <p>The dataset for X and Y axis are describe below.</p> <pre><code>X- Range mydata.iloc[:,[4]].head(10) Min_pred 0 1.699189 1 0.439975 2 2.989244 3 2.892075 4 2.221990 5 3.45...
<p>Is this what you are trying to get?</p> <pre><code>x = np.arange(1999,2017) y = np.random.random(size=x.shape) fig = plt.figure(figsize=(4,3)) plt.xlim(1999, 2016) plt.ylim(np.min(y), np.max(y)) plt.xlabel('Year',fontsize=20) plt.ylabel('Y',fontsize=20) plt.title('Meteo Paris',fontsize=20) plt.tick_params(labelsiz...
python|numpy|matplotlib
2
267
55,594,537
Fillna not working when combined with groupby and mean
<p>The below code filters my Dataframe for 5 rows with Zambia as the Country Name.</p> <pre><code>df2.loc[df2['Country Name'] == 'Zambia'].head(5) Country Name Year CO2 262 Zambia 1960 NaN 526 Zambia 1961 NaN 790 Zambia 1962 NaN 1054 Zambia 1963 NaN 1318 Zambia 1964 0.94942...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> for return <code>Series</code> filled by aggregate values with same size like original <code>DataFrame</code>, so <code>fillna</code> working...
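A runnable sketch of the pattern, with made-up numbers standing in for the question's CO2 column:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Country Name": ["Zambia"] * 4,
    "CO2": [np.nan, np.nan, 2.0, 4.0],
})

# transform returns a Series the same length as df, aligned by index,
# so fillna can match each NaN with its group's mean
df["CO2"] = df["CO2"].fillna(df.groupby("Country Name")["CO2"].transform("mean"))
```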
python|python-3.x|pandas|dataframe
1
268
64,954,738
Getting error "AttributeError: 'numpy.ndarray' object has no attribute 'lower' " in word tokenizer
<p>I am trying to train a model to classify a multi-label data set by referring to <a href="https://stackabuse.com/python-for-nlp-multi-label-text-classification-with-keras/" rel="nofollow noreferrer">this</a> article. I am entirely new to this field and I am getting this error &quot;AttributeError: 'numpy.ndarray' object h...
<p>Is <em>text</em> a str variable? If it's not, maybe you could do</p> <pre><code>text = str(text).lower() </code></pre> <p>as long as its value is something that can be turned into a string.</p>
python|tensorflow|machine-learning|keras|multilabel-classification
1
269
64,924,224
Getting a view of a zarr array slice
<p>I would like to produce a zarr array pointing to <em>part</em> of a zarr array on disk, similar to how <code>sliced = np_arr[5]</code> gives me a view into <code>np_arr</code>, such that modifying the data in <code>sliced</code> modifies the data in <code>np_arr</code>. Example code:</p> <pre class="lang-py prettypr...
<p>The <a href="https://github.com/google/tensorstore" rel="noreferrer">TensorStore</a> library is specifically designed to do this --- all indexing operations produce lazy views:</p> <pre class="lang-py prettyprint-override"><code>import tensorstore as ts import numpy as np arr = ts.open({ 'driver': 'zarr', 'kvsto...
python|numpy|zarr
5
270
40,077,188
Pandas - Data Frame - Reshaping Values in Data Frame
<p>I am new to Pandas and have a data frame with a team's score in 2 separate columns. This is what I have.</p> <pre><code>Game_ID Teams Score 1 Team A 95 1 Team B 85 2 Team C 90 2 Team D 72 </code></pre> <p>This is where I would like to get to and then ideally to.</p> <pre><code>1 Team A 95 Te...
<p>You can try something as follows: Create a <code>row_id</code> within each group by the <code>Game_ID</code> and then unstack by the <code>row_id</code> which will transform your data to wide format:</p> <pre><code>import pandas as pd df['row_id'] = df.groupby('Game_ID').Game_ID.transform(lambda g: pd.Series(range(...
python|pandas|dataframe
4
271
40,196,986
Numpy arrays changing id
<p>What is happening here? It seems like the id locations of the array elements are not remaining steady. The <code>is</code> operator is returning False even though the ids are the same; then, after printing the arrays, the ids of the elements change. Any explanations?</p> <pre><code>import numpy as np a = np.arange(27) b = a[1:5] a[0] is b[...
<p>First test with a list:</p> <pre><code>In [1109]: a=[0,1,2,3,4] In [1112]: b=a[1:3] In [1113]: id(a[1]) Out[1113]: 139407616 In [1114]: id(b[0]) Out[1114]: 139407616 In [1115]: a[1] is b[0] Out[1115]: True </code></pre> <p>later I tried</p> <pre><code>In [1129]: id(1) Out[1129]: 139407616 </code></pre> <p>So t...
python|python-2.7|numpy
2
272
69,369,036
Tensorflow, possible to downweigh gradients for certain data items
<p>Say I have a multi output model with outputs y_0 and y_1. For some data examples I am confident that y_0 is correct, but know that y_1 may be a complete guess. My idea was to use a custom training loop and multiply by a calculated weight, but this does not seem to be working. Is there a way to do this through the ke...
<p>In the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#compile" rel="nofollow noreferrer">compile</a> method of the Keras model there is a <code>loss_weights</code> parameter for exactly that: you only need to implement the loss functions that take one or the other output and pass them as an array of losses to the ...
tensorflow|keras|gradient
0
273
69,631,906
Correlation of dataframe with itself
<p>I have a dataframe that looks like this:</p> <pre><code>import pandas as pd a=pd.DataFrame([[name1, name2, name3, name4],[text1, text2, text3, text4]], columns=(['names','texts'])) </code></pre> <p>I have implemented a function to perform a cosine similarity between the words in each text using <a hre...
<p><code>scipy.spatial.distance.cdist</code> is vectorized, so you can calculate all the vector representations for the texts and call <code>cdist</code> once:</p> <pre><code>from scipy.spatial.distance import cdist vectors = [np.mean([glove[word] if word in glove else 0 for word in preprocess(s1)], axis=0) for s1 in df['texts']] distance = 1 ...
python|pandas|correlation
0
274
69,413,841
Evaluating a data frame through another Indicator data frame
<p>I have a source dataframe <strong>input_df</strong>:</p> <pre> PatientID KPI_Key1 KPI_Key2 KPI_Key3 0 1 (C602+C603) C601 NaN 1 2 (C605+C606) C602 NaN 2 3 75 L239+C602 NaN...
<p>Finally I was able to solve it :</p> <pre><code>final_out_df = pd.DataFrame() for i in range(len(input_df)): for j in ['KPI_Key1','KPI_Key2','KPI_Key3']: exp = input_df[j].iloc[i] #checking for NaN values if exp == exp: temp_out_df=indicator_df.eval(re.sub(r'(\w+)', r'`\1`', exp)).reset...
python|pandas|date-range
0
275
69,641,146
Tensorflow urllib.error.URLError: <urlopen error Errno 60 Operation timed out>
<p>Recently, I tried to use ELMo in tensorflow, but I ran into some errors; if you can help me, I would really appreciate it.</p> <p>This is my test code:</p> <pre><code>import tensorflow as tf import tensorflow_hub as hub import numpy as np import urllib.request if __name__ == '__main__': elmo = hub.Module(...
<p>Which <code>Tensorflow</code> version are you using?</p> <p>You need to disable Tensorflow eager execution mode to run this code.</p> <p>Use the code below:</p> <pre><code>import tensorflow.compat.v1 as tf tf.disable_v2_behavior() </code></pre> <p>before the above code and remove <code>.numpy()</code> as <code>.numpy<...
python|tensorflow|url|nlp|elmo
0
276
41,069,771
Single column matrix and its transpose to create Symmetric matrix in python, numpy scipy
<p>Is there any existing function in numpy or scipy to do following operation?</p> <pre><code>z = [a, b] z*z.T (transpose of the z) = [[a**2, a*b] [b*a, b**2]] </code></pre> <p>Thank you!</p>
<p>You can use numpy's <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.outer.html" rel="nofollow noreferrer">outer</a> function:</p> <pre><code>np.outer([2,4],[2,4]) array([[ 4, 8], [ 8, 16]]) </code></pre>
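A quick check that `np.outer` matches the explicit column-vector times row-vector product from the question:

```python
import numpy as np

z = np.array([2, 4])

outer = np.outer(z, z)

# same as reshaping z to a column vector and multiplying by its transpose
col = z.reshape(-1, 1)
explicit = col @ col.T
```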
python|numpy|scipy
2
277
54,148,669
Counting rows back and forth based on time column
<p>I have a dataframe with user ids and two different times. <code>time1</code> is the same for one user, but <code>time2</code> is different. </p> <pre><code>test = pd.DataFrame({ 'id': [1,1,1,1,1,1,1,1,1,1,2,2,2,2,2], 'time1': ['2018-11-01 21:19:32', '2018-11-01 21:19:32', '2018-11-01 21:19:32','2018-11-01 2...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> with no parameter and also with <code>ascending=False</code>:</p> <pre><code>#necessary unique default RangeIndex test = test.reset_index(drop=True) #con...
python-3.x|pandas
2
278
53,908,646
Pandas dataframe to numpy array
<p>I am very new to Python and have very little experience. I've managed to get some code working by copying and pasting and substituting the data I have, but I've been looking up how to select data from a dataframe but can't make sense of the examples and substitute my own data in.</p> <p><strong>The overarching goal...
<p>To convert a whole <code>DataFrame</code> into a numpy array, use</p> <p><code>df = df.values</code></p> <p>(note that <code>values</code> is an attribute, not a method). If I understood you correctly, you want separate arrays for every trial though. This can be done like this: </p> <p><code>data = [df.iloc[:, [0, i]].values for i in range(1, 20)]</code></p> <p>which wi...
python|pandas|numpy|scipy
1
279
54,131,872
How to make vectorized computation instead of 'for' loops for all grid?
<p>I have a double for loop among all the grid, and I want to make it work faster. <code>r, vec1, vec2, theta</code> are the vectors of the same length <code>N</code>. <code>c</code> is a constant.</p> <pre><code>import numpy as np N = 30 x_coord, y_coord = 300, 300 m1 = np.zeros((x_coord, y_coord)) vec1, vec2 = np....
<p>We can use a trigonometric trick here -</p> <pre><code>cos(A + B) = cos A cos B − sin A sin B </code></pre> <p>This lets us leverage <code>matrix-multiplication</code> for a solution that would look something like this -</p> <pre><code># Get x and y as 1D arrays x = np.arange(x_coord) y = np.arange(y_coord) # G...
python|numpy|vectorization
3
280
53,872,063
Removing quote and hidden new line
<p>I am reading an excel file using pd.read_excel, and in one of the columns a few rows have quotes (") and hidden new lines. I want to remove both of them before doing some further transformation. The sample string is as follows</p> <pre><code>col1 col2 col3 IC201829 100234 "Valuation of GF , Francis ...
<p>You can do this in a single line with <code>replace()</code> (note: the sample string is bound to <code>s</code> rather than shadowing the builtin <code>str</code>):</p> <pre><code>import pandas as pd s = '''"Valuation of "GF , Francis Street D8.\nI number: 106698"''' df = pd.DataFrame({'Col3':[s]}) print (df) df = df.replace('\n',' ', regex=True).replace('"', '',regex=True) print (df) </code></pre> <p><strong>R...
python-3.x|pandas
1
281
52,664,433
Efficiently add column to Pandas DataFrame with values from another DataFrame
<p>I have a simple database consisting of 2 tables (say, Items and Users), where a column of the Users is their <strong>User_ID</strong>, a column of the Items is their <strong>Item_ID</strong> and another column of the Items is <strong>a foreign key to a User_ID</strong>, for instance:</p> <pre><code>Items ...
<p>I think it would be more straightforward if you used table <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merges</a>.</p> <pre><code>items.merge(users[['User_ID', 'Name']], left_on='Its_User_ID', right_on='User_ID', how='left') </code></pre> <...
python|pandas|performance|dataframe|series
2
282
52,771,770
Different results for IRR from numpy.irr and the Excel IRR function
<p>I have the following annual cash flows:</p> <pre><code>w=np.array([ -56501, -14918073, -1745198, -20887403, -9960686, -31076934, 0, 0, 11367846, 26736802, -2341940, 20853917, 22166416, 19214094, 23056582, -11227178, 18867100, 24947517, 28733869, 24707603, -1703039...
<p>For the given cash flows, the IRR is not unique; see <a href="https://en.wikipedia.org/wiki/Internal_rate_of_return#Multiple_IRRs" rel="nofollow noreferrer">Multiple IRRs</a>. Both the numpy and Excel values for <code>r</code> satisfy <code>NPV(r) = 0</code>, where NPV is the net present value.</p> <p>Here's a plo...
python|numpy|excel-formula
4
283
52,548,382
How to replace column values in one dataframe from other dataframe on condition?
<pre><code>import pandas as pd df = pd.DataFrame({"A":["foo", "foo", "foo", "bar","Panda", "Panda", "Zootopia", "Zootopia"],"B":[0,1,1,1,0,1,1,1]}) df1 = pd.DataFrame({"A":["foo", "foo", "foo", "bar","Panda", "Panda", "Zootopia", "Zootopia","Toy Story"]}) </code></pre> <p>If column <strong>A</strong> in df and df1 mat...
<p>Here is one solution (using <code>.loc</code> to avoid chained assignment):</p> <pre><code>num = len(df['A']) if all(df1['A'][:num] == df['A'][:num]): df1.loc[:num - 1, 'A'] = df['B'].values </code></pre> <p>Output for df1:</p> <pre><code> A 0 0 1 1 2 1 3 1 4 0 5 1 6 1 7 1 8 Toy Story </code></pre...
python|pandas
1
284
46,250,972
Split columns into MultiIndex with missing columns in pandas
<p>This is similar to the problem I asked <a href="https://stackoverflow.com/questions/46247302/pandas-split-columns-into-multilevel">here</a>. However, I found out that the data I am working with is not always consistent. For example, say:</p> <pre><code>import pandas as pd df = pd.DataFrame(pd.DataFrame([[1,2,3,4],[5,6...
<p>Create a <code>MultiIndex</code> and set <code>df.columns</code>.</p> <pre><code>idx = df.columns.str.split('_', expand=True) idx MultiIndex(levels=[['X', 'Y'], ['a', 'b', 'c']], labels=[[0, 1, 0, 1], [0, 2, 1, 0]]) df.columns = idx </code></pre> <p>Now, with the existing <code>MultiIndex</code>, creat...
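To handle the missing combinations the question mentions, one hedged follow-up sketch is to reindex against the full product of the two levels (the sample columns here are made up):

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4], [5, 6, 7, 8]],
                  columns=["X_a", "Y_c", "X_b", "Y_a"])

idx = df.columns.str.split("_", expand=True)
df.columns = idx

# reindex against every (letter, suffix) combination; missing ones become NaN
full = pd.MultiIndex.from_product(idx.levels)
out = df.reindex(columns=full)
```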
python|pandas|dataframe|multi-index
20
285
58,290,318
Create new Python DataFrame column based on conditions of multiple other columns
<p>I'm trying to create a new DataFrame column (Column C) based on the inputs of two other columns. The two criteria I have are: if either "Column A is > 0" OR "Column B contains the string "Apple"",* then Column C should have the value "Yes"; otherwise it should have the value "No".</p> <p>*Bonus points if answer is...
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer">Series.str.contains</a> with <code>case=False</code> to <strong>not case-sensitive</strong>:</p> <pre><code>df['Column_C']= np.where((df['Column_A']&gt;0) | (df['Column_B'].str.contains...
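A complete runnable sketch of that pattern on a made-up frame; `na=False` keeps missing strings from turning the mask into NaN:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Column_A": [5, -1, 0],
    "Column_B": ["banana", "Apple pie", "pear"],
})

# case=False makes the substring match case-insensitive
mask = (df["Column_A"] > 0) | df["Column_B"].str.contains("apple", case=False, na=False)
df["Column_C"] = np.where(mask, "Yes", "No")
```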
python|pandas
1
286
68,997,462
Cannot import name Label from pandas._typing
<p>I faced this issue while was trying to export a dataframe to a csv file. I cannot find any similar issue online on this issue. Any help would be highly appreciated.</p> <p>I am using pandas 1.3 with python 3.7.1.</p> <pre class="lang-py prettyprint-override"><code> ImportError Traceback...
<p>I downgraded to pandas 0.20.3 and the issue is now gone!</p>
pandas|dataframe
0
287
69,257,364
TensorBoard ValueError: Expected scalar shape, saw shape: (1,)
<p>When I use the callback function TensorBoard as the following:</p> <pre><code>skip_training = False tensorboad_cb = TensorBoard('logs') def train_model(model, callbacks_list): ''' Input: Model and callback list, Return: Model with best-checkpoint weights. ''' ## TYPE YOUR CODE...
<p>I have the same issue on tensorflow 2.6.2, but on tensorflow 2.4.1 it works well.</p> <p>Finally, I fixed it by commenting out the code:</p> <p><a href="https://i.stack.imgur.com/5go6N.png" rel="nofollow noreferrer">comment tensorboard code</a></p>
python|tensorflow|tensorboard
1
288
44,520,581
How can I create a model which starts with a single convolution layer and then its output is given to two different convolution layers
<p>Can I generate a model in keras which is not sequential, i.e. can I design a model with two chains of cascaded convolution layers whose starting input is a common convolution output?</p>
<p>Yes! You just need to use the <a href="https://keras.io/getting-started/functional-api-guide/#multi-input-and-multi-output-models" rel="nofollow noreferrer">Functional API</a>.</p> <p>e.g.</p> <pre><code>main_input = Input(shape=input_shape) conv_1 = Conv1D(32, 3)(main_input) conv_2 = Conv1D(64, 3)(conv_1) conv...
python|tensorflow|keras|keras-layer|keras-2
1
289
44,787,423
Fill zero values for combinations of unique multi-index values after groupby
<p>To explain my problem better, let's pretend I have a shop with 3 unique customers and my dataframe contains every purchase of my customers with weekday, name and paid price.</p> <pre><code> name price weekday 0 Paul 18.44 0 1 Micky 0.70 0 2 Sarah 0.59 0 3 Sarah 0.27 ...
<p>IIUC, let's use <code>unstack</code> and <code>fillna</code> then <code>stack</code>:</p> <pre><code>df_out = df.groupby(['name','weekday']).sum().unstack().fillna(0).stack() </code></pre> <p>Output:</p> <pre><code> price name weekday Micky 0 0.70 1 0.00 2 ...
python|pandas
2
290
44,569,021
Removing NaN from dictionary inside dictionary (Dict changes size during runtime error)
<p>Using to_dict() I come up with the following dictionary. I need to drop all nan values. My approach doesn't work because the dict changes size during iteration. Is there another way to accomplish this?</p> <pre><code>{'k': {'a': nan, 'b': 1.0, 'c': 1.0}, 'u': {'a': 1.0, 'b': nan, 'c': 1.0}, 'y': {'a': nan, 'b': 1.0, ...
<p>You can use double <code>dict comprehension</code> with filtering with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.notnull.html" rel="noreferrer"><code>pandas.notnull</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.isnull.html" rel="noreferrer"><code>pand...
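A sketch of that double dict comprehension, filtering with `pandas.notnull` as the answer suggests:

```python
import numpy as np
import pandas as pd

d = {
    "k": {"a": np.nan, "b": 1.0, "c": 1.0},
    "u": {"a": 1.0, "b": np.nan, "c": 1.0},
}

# rebuild each inner dict, keeping only non-null values
clean = {k: {ik: iv for ik, iv in v.items() if pd.notnull(iv)}
         for k, v in d.items()}
```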
python|pandas|dictionary
5
291
60,896,045
Pandas Shift Rows and Backfill (Time-Series Alignment)
<p>I have time-series customer data with running totals that look like this:</p> <pre><code> week1 | week2 | week3 | week4 | week5 user1 20 40 40 50 50 user2 0 10 20 30 40 user3 0 0 0 10 10 </code></pre> <p>I am looking for spending tr...
<p>You can do this quite compactly as:</p> <pre><code>df.iloc[:, 1:] = df.iloc[:, 1:]. \ apply(lambda row: row.shift(-np.argmax(row &gt; 0)), axis=1). \ ffill(axis=1) </code></pre> <p>but there is a lot going on in that 1 statement</p> <p><code>iloc[:, 1:]</code> selects all rows, and all but the first colum...
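Replaying that statement on the question's table (kept as a separate result frame here rather than assigned back in place):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'user': ['user1', 'user2', 'user3'],
                   'week1': [20, 0, 0],
                   'week2': [40, 10, 0],
                   'week3': [40, 20, 0],
                   'week4': [50, 30, 10],
                   'week5': [50, 40, 10]})

# np.argmax(row > 0) finds the position of the first non-zero week;
# shifting by its negative left-aligns every user's spending, and
# ffill repeats the final running total over the vacated trailing cells.
shifted = df.iloc[:, 1:].apply(
    lambda row: row.shift(-np.argmax(row > 0)), axis=1).ffill(axis=1)
print(shifted)
```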
python|pandas
2
292
61,007,840
fill dataframe column with list of tuples
<p>I have dataframe like below:</p> <pre><code>miles uid 12 235 13 234 14 233 15 236 </code></pre> <p>a list <strong>list1</strong> like below:</p> <pre><code>[(39.14973, -77.20692), (33.27569, -86.35877), (42.55214, -83.18532), (41.3278, -95.96396)] </code></pre> <p>The output datafram...
<p>you can use:</p> <pre><code>df['lat-long'] = my_list df['lat'] = [e[0] for e in my_list] df['long'] = [e[1] for e in my_list] </code></pre> <p>output:</p> <p><a href="https://i.stack.imgur.com/rzYxx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rzYxx.png" alt="enter image description here"></...
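In full, with the question's data (a `zip(*my_list)` transpose would work equally well for the two scalar columns):

```python
import pandas as pd

df = pd.DataFrame({'miles': [12, 13, 14, 15],
                   'uid': [235, 234, 233, 236]})
my_list = [(39.14973, -77.20692), (33.27569, -86.35877),
           (42.55214, -83.18532), (41.3278, -95.96396)]

df['lat-long'] = my_list                # column of (lat, long) tuples
df['lat'] = [e[0] for e in my_list]     # first element of each tuple
df['long'] = [e[1] for e in my_list]    # second element of each tuple
print(df)
```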
python|pandas
2
293
71,619,322
Iterate over a function and concatenate the results
<p>So let's say I have a function <code>func</code> which I want to apply over a list of 3 values. What I want to do is iterate over this list and concatenate the results, like this:</p> <pre><code>number1 = func(1) number2 = func(2) number3 = func(3) </code></pre> <p>And then</p> <pre><code>Results = pd.concat([number1, number2, number3]...
<p>You could use list-comprehension and loops like the other answers suggested, or, if <code>func</code> takes only one parameter, you could use <code>map</code>:</p> <pre><code>df = pd.concat(map(func, range(1, 4)), axis=1) </code></pre> <p>Example output:</p> <pre><code>&gt;&gt;&gt; def func(x): ... return pd.Data...
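With a hypothetical `func` (the question never shows its body, so this one just manufactures a one-column frame):

```python
import pandas as pd

def func(x):
    # Stand-in for the question's func: one column derived from x.
    return pd.DataFrame({f'col{x}': [x, x * 2, x * 3]})

# map lazily applies func to 1, 2, 3; concat stitches the resulting
# frames together column-wise.
results = pd.concat(map(func, range(1, 4)), axis=1)
print(results)
```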
python|pandas|loops|concatenation
1
294
71,653,696
Counting values greater than 0 in a given area (specific Rows * Columns) - Python, Excel, Pandas
<p>Based on the following data:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Participant</th> <th>Condition</th> <th>RT</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>0.10</td> </tr> <tr> <td>1</td> <td>1</td> <td></td> </tr> <tr> <td>1</td> <td>2</td> <td>0.48</td> </tr> <tr...
<p>Use a boolean mask and sum it:</p> <pre><code>N = sum((df['Participant'] == 1) &amp; (df['Condition'] == 1) &amp; (df['RT'].notna())) print(N) # Output 1 </code></pre> <p>Details:</p> <pre><code>m1 = df['Participant'] == 1 m2 = df['Condition'] == 1 m3 = df['RT'].notna() df[['m1', 'm2', 'm3']] = pd.concat([m1, m2, m...
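Reproduced on a fragment of the question's table (each `True` counts as 1 when the combined mask is summed):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Participant': [1, 1, 1, 2],
                   'Condition': [1, 1, 2, 1],
                   'RT': [0.10, np.nan, 0.48, 0.30]})

# Each comparison yields a boolean Series; & combines them row-wise,
# and summing the result counts the rows where all three hold.
N = sum((df['Participant'] == 1) & (df['Condition'] == 1) & df['RT'].notna())
print(N)
```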
python|excel|pandas|conditional-statements|counting
1
295
42,471,183
Specifying GPU devices fails when using the TensorFlow C++ API
<p>I trained my tf model in python:</p> <pre><code> with sv.managed_session(master='') as sess: with tf.device("/gpu:1"):#my system has 4 nvidia cards </code></pre> <p>and used the command line to export the model:</p> <pre><code> freeze_graph.py --clear_devices False </code></pre> <p>and during test phase,...
<p>Is it possible you're using a version of TensorFlow without GPU support enabled? If you're building a binary you may need to add additional BUILD rules from //tensorflow that enable GPU support. Also ensure you enabled GPU support when running configure.</p> <p><strong>EDIT</strong>: Can you file a bug on TF's gi...
python|c++|tensorflow|gpu
0
296
69,740,806
How could I create a column with matching values from different datasets with different lengths
<p>I want to create a new column in the dataset in which a ZipCode is assigned to a specific Region.</p> <p>There are in total 5 Regions. Every Region consists of an <code>x</code> amount of ZipCodes. I would like to use the two different datasets to create a new column.</p> <p>I tried some codes already, however, I fa...
<p>I believe you need to map the zipcode from dataframe 2 to the region column from the first dataframe. Assuming Postcode and ZipCode are same.</p> <p>First create a dictionary from df1 and then replace the zipcode values based on the dictionary values</p> <pre><code>zip_dict = dict(zip(df1.Postcode, df1.Regio)) df2.Z...
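A sketch with two made-up frames (the column names <code>Postcode</code>/<code>Regio</code> and <code>ZipCode</code> follow the answer's assumption that they refer to the same codes):

```python
import pandas as pd

df1 = pd.DataFrame({'Postcode': [1011, 2011, 3011],
                    'Regio': ['North', 'West', 'South']})
df2 = pd.DataFrame({'ZipCode': [2011, 1011, 3011]})

# Turn df1 into a Postcode -> Regio lookup, then map it over df2.
zip_dict = dict(zip(df1.Postcode, df1.Regio))
df2['Region'] = df2['ZipCode'].map(zip_dict)
print(df2)
```

`.map` leaves NaN wherever a zip code has no entry in the lookup, which makes unmatched codes easy to spot.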
python|pandas|numpy|dataset|matching
0
297
69,694,584
Python SQL Query Execution
<p>I am trying to run a SQL query to perform a lookup between tables, add a column, update the result into a new table in SQL, and then load the new table into a pandas dataframe.</p> <p>But when I execute it I get the following error:</p> <p>&quot;</p> <pre><code>File &quot;C:\Users\Sundar_ars\Desktop\Code\SQL_DB_Extract_1.py&quo...
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.read_sql_query.html#pandas-read-sql-query" rel="nofollow noreferrer"><code>read_sql()</code></a> assumes that your query returns rows - but your SQL query <em>modifies</em> rows (without returning any).</p> <p>If you want to fetch the content of <code>[Saf...
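The split can be sketched with an in-memory SQLite database (the table and column names here are invented): run the modifying statements through the connection itself, then hand only a row-returning SELECT to `read_sql_query`.

```python
import sqlite3

import pandas as pd

conn = sqlite3.connect(':memory:')

# Modifying statements go through the connection, not read_sql_query.
conn.execute('CREATE TABLE target (id INTEGER, val TEXT)')
conn.execute("INSERT INTO target VALUES (1, 'a'), (2, 'b')")
conn.commit()

# Only a SELECT that returns rows belongs in read_sql_query.
df = pd.read_sql_query('SELECT * FROM target ORDER BY id', conn)
print(df)
```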
python|sql|pandas
0
298
69,816,918
loss: nan when fitting training data in tensorflow keras regression network
<p>I am trying to do replicate a regression network from a book but no matter what I try I only get nan losses during the fitting. I have checked and this might happen because of:</p> <ul> <li>bad input data: my data is clean</li> <li>unscaled input data: I have tried with StandardScaler and MinMaxScaler but no dice</l...
<p>I have found the answer to my own question:</p> <p>As it turns out, Tensorflow doesn't work as of now in python 3.10. After downgrading my python version to 3.8 everything started working.</p>
tensorflow|machine-learning|keras|nan
0
299
50,480,517
How to take difference of several Timestamp series in a dataframe in Pandas?
<p>I want to obtain the timedelta interval between several timestamp columns in a dataframe. Also, several entries are NaN. </p> <p>Original DF:</p> <pre><code> 0 1 2 3 4 5 0 date1 date2 NaN NaN NaN NaN 1 date3 date4 date5 date6 date7 date8 </code></pre> <p>Desired Output:</p>...
<p>I think you can use if consecutive <code>NaN</code>s to end of rows:</p> <pre><code>df = pd.DataFrame([['2015-01-02','2015-01-03', np.nan, np.nan], ['2015-01-02','2015-01-05','2015-01-07','2015-01-12']]) print (df) 0 1 2 3 0 2015-01-02 2015-01-03 ...
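A compact variant of that setup: convert every column to datetime first, then take row-wise differences, letting the NaT entries simply propagate into the result.

```python
import pandas as pd

df = pd.DataFrame([['2015-01-02', '2015-01-03', None, None],
                   ['2015-01-02', '2015-01-05', '2015-01-07', '2015-01-12']])

# Parse each column to datetime (None becomes NaT), then diff across
# columns to get the interval between consecutive timestamps.
dates = df.apply(pd.to_datetime)
deltas = dates.diff(axis=1)
print(deltas)
```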
python|pandas|time-series|timedelta
1