| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
6,300
| 44,986,651
|
Delete elements from a data frame w.r.t. columns of another data frame
|
<p>I have a data frame say <code>df1</code> with MULTILEVEL INDEX:</p>
<pre><code>     A  B   C   D
0 0  0  1   2   3
  1  4  5   6   7
1 2  8  9  10  11
  3  2  3   4   5
</code></pre>
<p>and I have another data frame with 2 common columns in <code>df2</code> also with MULTILEVEL INDEX</p>
<pre><code>     X  B  C   Y
0 0  0  0  7   3
  1  4  5  6   7
1 2  8  2  3  11
  3  2  3  4   5
</code></pre>
<p>I need to remove the rows from <code>df1</code> where the values of columns <code>B</code> and <code>C</code> are the same as in <code>df2</code>, so I should be getting something like this:</p>
<pre><code>     A  B   C   D
0 0  0  1   2   3
0 2  8  9  10  11
</code></pre>
<p>I have tried to do this by getting the index of the common elements and then removing them via a list, but the resulting indices are in multi-level form and hard to work with.</p>
|
<p>You can do this in a one-liner using <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">pandas.DataFrame.iloc</a>, <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow noreferrer">numpy.where</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.logical_or.html" rel="nofollow noreferrer">numpy.logical_or</a>, which I find to be the simplest way:</p>
<pre><code>df1 = df1.iloc[np.where(np.logical_or(df1['B']!=df2['B'],df1['C']!=df2['C']))]
</code></pre>
<p>of course don't forget to:</p>
<pre><code>import numpy as np
</code></pre>
<p>output:</p>
<pre><code>     A  B   C   D
0 0  0  1   2   3
1 2  8  9  10  11
</code></pre>
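<p>As an alternative, here is a sketch of an anti-join via <code>merge</code> that avoids assuming the two frames share the same row order (which the one-liner above relies on):</p>
<pre><code># keep only df1 rows whose (B, C) pair does not occur in df2
merged = df1.merge(df2[['B', 'C']].drop_duplicates(), on=['B', 'C'],
                   how='left', indicator=True)
df1_filtered = df1[(merged['_merge'] == 'left_only').values]
</code></pre>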
<p>Hope this was helpful. If there are any questions or remarks please feel free to comment.</p>
|
python|pandas
| 1
|
6,301
| 45,126,962
|
Python Numpy array assignment casting int
|
<p>I'm fairly new with numpy.</p>
<p>As shown below, when I try to cast the numeric values from strings to integers, it doesn't seem to 'stick', as below:</p>
<pre><code>>>> import numpy as np
>>> a = np.array([['a','1','2'],['b','3','4']])
>>> a[:,1:3].astype(int)
array([[1, 2],
       [3, 4]])
>>> a[:,1:3] = a[:,1:3].astype(int)
>>> a
array([['a', '1', '2'],
       ['b', '3', '4']],
      dtype='<U1')
</code></pre>
<p>How can I convert the string values to ints in the array ?</p>
|
<p>You need to first change the <code>dtype</code> of the full array to <code>object</code> in order for it to contain both strings and integers:</p>
<pre><code>a = a.astype(object)
a[:,1:3] = a[:,1:3].astype(int)
print(a)
> [['a' 1 2]
['b' 3 4]]
</code></pre>
<p>Though note that better solutions may exist, for example using <a href="https://pandas.pydata.org/" rel="nofollow noreferrer"><code>pandas</code></a>, where each column can have its own type.</p>
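<p>A minimal sketch of that pandas alternative (the column names here are illustrative, not from the question):</p>
<pre><code>import pandas as pd

# each DataFrame column keeps its own dtype, so no object-casting is needed
df = pd.DataFrame({'label': ['a', 'b'], 'x': [1, 3], 'y': [2, 4]})
print(df.dtypes)  # label: object, x: int64, y: int64
</code></pre>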
|
python|numpy|int
| 2
|
6,302
| 44,955,185
|
Change every single element in Numpy object
|
<p>I have a Numpy object with random N*M elements, and I also have two numbers A and B.</p>
<p>Now I want to access every element in this N*M array and make a change: if the element is greater than 0, replace it with A, and if it is less than 0, replace it with B.</p>
<p>I know there is a naive way to implement this, namely accessing every single element with a for loop, but it is very slow.</p>
<p>Is there a faster, more idiomatic way to implement this?</p>
|
<p>Boolean masked assignment will change values in place:</p>
<pre><code>In [493]: arr = np.random.randint(-10,10,(5,7))

In [494]: arr
Out[494]:
array([[ -5,  -6,  -7,  -1,  -8,  -8, -10],
       [ -9,   1,  -3,  -9,   3,   8,  -1],
       [  6,  -7,   4,   0,  -4,   4,  -2],
       [ -3, -10,  -2,   7,  -4,   2,   2],
       [ -5,   5,  -1,  -7,   7,   5,  -7]])

In [495]: arr[arr>0] = 100

In [496]: arr[arr<0] = -50

In [497]: arr
Out[497]:
array([[-50, -50, -50, -50, -50, -50, -50],
       [-50, 100, -50, -50, 100, 100, -50],
       [100, -50, 100,   0, -50, 100, -50],
       [-50, -50, -50, 100, -50, 100, 100],
       [-50, 100, -50, -50, 100, 100, -50]])
</code></pre>
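<p>For reference, an equivalent out-of-place sketch with <code>np.where</code> (using the <code>A</code> and <code>B</code> from the question) would be:</p>
<pre><code># returns a new array instead of modifying arr in place
out = np.where(arr > 0, A, np.where(arr < 0, B, arr))
</code></pre>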
<p>I just gave a similar answer in <a href="https://stackoverflow.com/questions/44940806/python-numpy-iterate-for-different-conditions-without-using-a-loop">python numpy: iterate for different conditions without using a loop</a>.</p>
|
numpy
| 2
|
6,303
| 45,131,230
|
Reuse value of TensorFlow Variable between sessions without writing to disk
|
<p>In sklearn, I'm used to having a model that I can run <code>fit</code> and then <code>predict</code> on. However, with TensorFlow, I'm having trouble loading the learned parameters from <code>fit</code> when I'm calling <code>predict</code>. It boils down to me not knowing how to reuse the value of a variable between sessions. For example,</p>
<pre><code>import tensorflow as tf

x = tf.Variable(0.0)

# fit code
with tf.Session() as sess1:
    sess1.run(tf.global_variables_initializer())
    sess1.run(tf.assign(x, 1.0))  # at end of training, x = 1.0

# predict code
with tf.Session() as sess2:
    sess2.run(tf.global_variables_initializer())
    print(sess2.run(x))  # want this to be 1.0, but is 0.0
</code></pre>
<p>I can think of one workaround, but it seems really hacky, and would be annoying if there are several variables I want to reuse:</p>
<pre><code>import tensorflow as tf

x = tf.Variable(0.0)

# fit code
with tf.Session() as sess1:
    sess1.run(tf.global_variables_initializer())
    sess1.run(tf.assign(x, 1.0))  # at end of training, x = 1.0
    learned_x = sess1.run(x)  # remember value of learned x at end of session

# predict code
with tf.Session() as sess2:
    sess2.run(tf.global_variables_initializer())
    sess2.run(tf.assign(x, learned_x))
    print(sess2.run(x))  # prints 1.0
</code></pre>
<p>How do I reuse variables between sessions without writing to disk (i.e. using <code>tf.train.Saver</code>)? Is the workaround I wrote above the right way to do this?</p>
|
<p>To mimic sklearn's model, just wrap the <code>session</code> in a single class so that you can share it between methods, e.g.:</p>
<pre><code>class Model:
    def __init__(self):
        self.graph = self.build_graph()
        self.session = tf.Session()
        self.session.run(tf.global_variables_initializer())

    def build_graph(self):
        return {'x': tf.Variable(0.0)}

    def fit(self):
        self.session.run(tf.assign(self.graph['x'], 1.0))

    def predict(self):
        print(self.session.run(self.graph['x']))

    def close(self):
        tf.reset_default_graph()
        self.session.close()


m = Model()
m.fit()
m.predict()
m.close()
</code></pre>
<p>Make sure you close the <code>session</code> manually and handle exceptions accordingly.</p>
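<p>A small usage sketch of that cleanup advice, guaranteeing the session is released even when <code>fit</code> or <code>predict</code> raises:</p>
<pre><code>m = Model()
try:
    m.fit()
    m.predict()
finally:
    m.close()  # always release the session
</code></pre>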
|
python|tensorflow
| 2
|
6,304
| 56,951,455
|
How to replace NaN and NaT with None - pandas 0.24.1
|
<p>I need to replace all <code>NaN</code> and <code>NaT</code> in a <code>pandas.Series</code> with a <code>None</code>.</p>
<p>I tried this:</p>
<pre><code>def replaceMissing(ser):
    return ser.where(pd.notna(ser), None)
</code></pre>
<p>But it does not work:</p>
<pre><code>import pandas as pd
NaN = float('nan')
NaT = pd.NaT
floats1 = pd.Series((NaN, NaN, 2.71828, -2.71828))
floats2 = pd.Series((2.71828, -2.71828, 2.71828, -2.71828))
dates = pd.Series((NaT, NaT, pd.Timestamp("2019-07-09"), pd.Timestamp("2020-07-09")))
def replaceMissing(ser):
    return ser.where(pd.notna(ser), None)

print(pd.__version__)
print(80*"-")
print(replaceMissing(dates))
print(80*"-")
print(replaceMissing(floats1))
print(80*"-")
print(replaceMissing(floats2))
</code></pre>
<p>As you can see the <code>NaT</code> was not replaced:</p>
<pre><code>0.24.1
--------------------------------------------------------------------------------
0 NaT
1 NaT
2 2019-07-09
3 2020-07-09
dtype: datetime64[ns]
--------------------------------------------------------------------------------
0 None
1 None
2 2.71828
3 -2.71828
dtype: object
--------------------------------------------------------------------------------
0 2.71828
1 -2.71828
2 2.71828
3 -2.71828
dtype: float64
</code></pre>
<p>Then I tried this extra step:</p>
<pre><code>def replaceMissing(ser):
    ser = ser.where(pd.notna(ser), None)
    return ser.replace({pd.NaT: None})
</code></pre>
<p>But it still does not work. It brings back the <code>NaN</code>s for some reason:</p>
<pre><code>0.24.1
--------------------------------------------------------------------------------
0 None
1 None
2 2019-07-09 00:00:00
3 2020-07-09 00:00:00
dtype: object
--------------------------------------------------------------------------------
0 NaN
1 NaN
2 2.71828
3 -2.71828
dtype: float64
--------------------------------------------------------------------------------
0 2.71828
1 -2.71828
2 2.71828
3 -2.71828
dtype: float64
</code></pre>
<p>I also tried converting the series into <code>object</code>:</p>
<pre><code>def replaceMissing(ser):
    return ser.astype("object").where(pd.notna(ser), None)
</code></pre>
<p>But now the last series is also <code>object</code> even though it has no missing values:</p>
<pre><code>0.24.1
--------------------------------------------------------------------------------
0 None
1 None
2 2019-07-09 00:00:00
3 2020-07-09 00:00:00
dtype: object
--------------------------------------------------------------------------------
0 None
1 None
2 2.71828
3 -2.71828
dtype: object
--------------------------------------------------------------------------------
0 2.71828
1 -2.71828
2 2.71828
3 -2.71828
dtype: object
</code></pre>
<p>I would like it to remain <code>float64</code>. So I add <code>infer_objects</code>:</p>
<pre><code>def replaceMissing(ser):
    return ser.astype("object").where(pd.notna(ser), None).infer_objects()
</code></pre>
<p>But it brings back the <code>NaN</code>s again:</p>
<pre><code>0.24.1
--------------------------------------------------------------------------------
0 None
1 None
2 2019-07-09 00:00:00
3 2020-07-09 00:00:00
dtype: object
--------------------------------------------------------------------------------
0 NaN
1 NaN
2 2.71828
3 -2.71828
dtype: float64
--------------------------------------------------------------------------------
0 2.71828
1 -2.71828
2 2.71828
3 -2.71828
dtype: float64
</code></pre>
<p>I feel like there's got to be an easy way to do this. Does anyone know?</p>
|
<p>What works for me is changing the order of operations in your second solution, tested in <code>0.24.2</code>. Note the <code>dtype</code>s are changed to object because of the mixed types: <code>None</code>s with <code>float</code>s or <code>timestamp</code>s:</p>
<pre><code>def replaceMissing(ser):
    return ser.replace({pd.NaT: None}).where(pd.notna(ser), None)

print(pd.__version__)
print(80*"-")
print(replaceMissing(dates))
print(80*"-")
print(replaceMissing(dates).apply(type))
print(80*"-")
print(replaceMissing(floats1))
print(80*"-")
print(replaceMissing(floats1).apply(type))
print(80*"-")
print(replaceMissing(floats2))
</code></pre>
<hr>
<pre><code>0.24.2
--------------------------------------------------------------------------------
0 None
1 None
2 2019-07-09 00:00:00
3 2020-07-09 00:00:00
dtype: object
--------------------------------------------------------------------------------
0 <class 'NoneType'>
1 <class 'NoneType'>
2 <class 'pandas._libs.tslibs.timestamps.Timesta...
3 <class 'pandas._libs.tslibs.timestamps.Timesta...
dtype: object
--------------------------------------------------------------------------------
0 None
1 None
2 2.71828
3 -2.71828
dtype: object
--------------------------------------------------------------------------------
0 <class 'NoneType'>
1 <class 'NoneType'>
2 <class 'float'>
3 <class 'float'>
dtype: object
--------------------------------------------------------------------------------
0 2.71828
1 -2.71828
2 2.71828
3 -2.71828
dtype: float64
</code></pre>
|
python|pandas
| 2
|
6,305
| 45,851,263
|
pd.describe(include=[np.number]) returns 0.00
|
<p>I use <code>df_30v.describe(include=[np.number])</code> to give me a summary of the variables in my data frame. However the result has too many digits:</p>
<blockquote>
<p>count 235629.000000 235629.000000 235629.000000 119748.000000</p>
</blockquote>
<p>How can I get the below as a result? Thank you!</p>
<blockquote>
<p>count 235629.00 235629.00 235629.00 119748.00</p>
</blockquote>
|
<p>Call <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.set_option.html" rel="nofollow noreferrer"><code>pd.set_option('precision', 2)</code></a>:</p>
<pre><code>In [165]: pd.set_option('precision', 2)

In [167]: df = pd.DataFrame(np.random.uniform(0, 10**6, size=(100,5)))

In [168]: df.describe()
Out[168]:
                0          1          2          3          4
count      100.00     100.00     100.00     100.00     100.00
mean    440786.89  526477.58  457295.14  498070.00  481541.09
std     286118.94  264010.57  312539.39  310191.95  274682.03
min        677.71   11862.05    2934.92   13031.54   11728.73
25%     244739.83  316760.73  188148.99  207720.23  222285.78
50%     411391.98  527119.36  406672.95  496606.54  476422.05
75%     637488.49  741362.83  745412.65  778365.74  701966.74
max     993927.91  990323.15  998025.25  999628.94  998598.52
</code></pre>
<hr>
<p>Or, to change the precision temporarily for a block of code, <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.option_context.html" rel="nofollow noreferrer">use a context manager</a>:</p>
<pre><code>with pd.option_context('precision', 2):
    df = pd.DataFrame(np.random.uniform(0, 10**6, size=(100,5)))
    print(df.describe())
</code></pre>
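<p>Note that in more recent pandas versions this option is namespaced, so the equivalent calls are <code>pd.set_option('display.precision', 2)</code> and <code>pd.option_context('display.precision', 2)</code>.</p>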
|
python|pandas|dataframe|describe
| 0
|
6,306
| 45,996,727
|
numpy.random and Monte Carlo
|
<p>I wrote a Monte Carlo (MC) code in Python with a Fortran extension (compiled with f2py). As it is a stochastic integration, the algorithm relies heavily on random numbers, namely I use <code>~ 10^8 - 10^9</code> random numbers for a typical run. So far, I didn't really mind the 'quality' of the random numbers - this is, however, something that I want to check out. </p>
<p>My question is: does the Mersenne-Twister used by numpy suffice or are there better random number generators out there that one should (could) use? (better in the sense of runtime as well as quality of the generated sequence)</p>
<p>Any suggestions/experiences are most definitely welcome, thanks!</p>
|
<p>I do not think anyone can tell you if this algorithm suffices without knowing how the random numbers are being used.</p>
<p>What I would do is replace the numpy random numbers with a different generator; other modules that provide different algorithms are readily available.
If your simulation results are not affected by the choice of random number generator, that is already a good sign.</p>
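<p>As a concrete sketch of that suggestion, assuming a NumPy recent enough to provide <code>np.random.Generator</code> (the pi estimate is just a toy integrand, not the OP's simulation):</p>
<pre><code>import numpy as np

def mc_pi(rng, n=10**6):
    # toy Monte Carlo integration: estimate pi from uniform samples
    x, y = rng.random(n), rng.random(n)
    return 4.0 * np.mean(x**2 + y**2 < 1.0)

# run the same simulation under two different bit generators and compare
print(mc_pi(np.random.Generator(np.random.MT19937(0))))  # Mersenne-Twister
print(mc_pi(np.random.Generator(np.random.PCG64(0))))    # PCG64 alternative
</code></pre>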
|
python|numpy|random|montecarlo|mersenne-twister
| 2
|
6,307
| 45,783,080
|
Saving model in Tensorflow not working under GPU?
|
<p><strong>UPDATE: I've found out that the below code DOES work correctly when using tensorflow-cpu. The problem only persists when using tensorflow-gpu. How can I make it work?</strong></p>
<p>I cannot find the problem in my code - I am trying to save my variables, and then reload them, and they don't appear to load from the saved model.</p>
<p>I will note that they DO load if I do the saving and loading in the same python run (without the process ending and running the testing script). My problem is that this doesn't work when I train the model -> save it -> process ends -> run script again with testing flag -> model is loaded without error, but the results are as if it wasn't.</p>
<p>Code:</p>
<p>Run #1 </p>
<pre><code># creating LSTM model...
with tf.Session() as sess:
    saver = tf.train.Saver()
    # training...
    save_path = saver.save(sess, "./saved_models/model.ckpt")
    print("Model saved in file: %s" % save_path)
</code></pre>
<p>Run #2</p>
<pre><code># creating the same exact LSTM model...
with tf.Session() as sess:
    saver = tf.train.Saver()
    saver.restore(sess, "./saved_models/model.ckpt")
    print("Model restored.")
    # testing...
</code></pre>
<p>If I run these two snippets back to back, I get the desired output - the model is trained to predict a trivial sequence, and it predicts it properly during testing. If I run the two snippets separately, the model predicts the wrong sequence during testing.</p>
<p>Update: It was suggested that I try importing the MetaGraph, but that is not working either. Code:</p>
<p>Run #1</p>
<pre><code># creating model...
tf.add_to_collection('a', net.a)
# adding nodes ...
tf.add_to_collection('z', net.z)

with tf.Session() as sess:
    saver = tf.train.Saver()
    # training...
    save_path = saver.save(sess, "./saved_models/my-model")
    print("Model saved in file: %s" % save_path)
</code></pre>
<p>Run #2</p>
<pre><code>with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./saved_models/my-model.meta')
    new_saver.restore(sess, './saved_models/my-model')
    net.a = tf.get_collection('a')[0]
    # adding nodes ...
    net.z = tf.get_collection('z')[0]
    # testing...
</code></pre>
<p>The above code runs correctly - but the testset result shows it is not post-training (and again, if I run the two snippets in the same Python instance, it works correctly).</p>
<p>This should be fairly trivial and I just cannot get it to work. Any help is welcome. Specifically, I don't really need to save the entire graph either - just the variables (some of them inside the LSTM cell).</p>
|
<p>I've encountered the same problem, and I guess you use <code>tf.Variable()</code>, right?
Try changing it to <code>tf.get_variable()</code>. It worked for me :)</p>
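<p>A minimal sketch of the suggested change (TF1-style API; the variable name is illustrative):</p>
<pre><code># before:
# x = tf.Variable(0.0, name='x')
# after:
x = tf.get_variable('x', initializer=0.0)
</code></pre>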
|
python|tensorflow
| 1
|
6,308
| 46,161,119
|
Pandas - merge many rows into one
|
<p>with this:</p>
<pre><code>dataset = pd.read_csv('lyrics.csv', delimiter = '\t', quoting = 3)
</code></pre>
<p>I print my dataset in this fashion:</p>
<pre><code> lyrics,classification
0 "I should have known better with a girl like you
1 That I would love everything that you do
2 And I do, hey hey hey, and I do
3 Whoa, whoa, I
4 Never realized what I kiss could be
5 This could only happen to me
6 Can't you see, can't you see
7 That when I tell you that I love you, oh
8 You're gonna say you love me too, hoo, hoo, ho...
9 And when I ask you to be mine
10 You're gonna say you love me too
11 So, oh I never realized what I kiss could be
12 Whoa whoa I never realized what I kiss could be
13 You love me too
14 You love me too",0
</code></pre>
<p>but what I really need is to have everything between the <code>""</code> in a single row. How do I make this conversion in <code>pandas</code>?</p>
|
<h3>Solution that worked for OP (from comments):</h3>
<p>Fixing the problem at its source (in <code>read_csv</code>):</p>
<blockquote>
<p>@nbeuchat is probably right, just try</p>
<p><code>dataset = pd.read_csv('lyrics.csv', quoting = 2)</code></p>
<p>That should give you a dataframe with one row and two columns: lyrics (with embedded line returns in the string) and classification (0).</p>
</blockquote>
<h3>General solution for collapsing series of strings:</h3>
<p>You want to use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.cat.html" rel="nofollow noreferrer">pd.Series.str.cat</a>:</p>
<pre><code>import pandas as pd

dataset = pd.DataFrame({'lyrics': pd.Series(['happy birthday to you',
                                             'happy birthday to you',
                                             'happy birthday dear outkast',
                                             'happy birthday to you'])})

dataset['lyrics'].str.cat(sep=' / ')
# 'happy birthday to you / happy birthday to you / happy birthday dear outkast / happy birthday to you'
</code></pre>
<p>The default <code>sep</code> is <code>None</code>, which would give you <code>'happy birthday to youhappy birthday to youhappy ...'</code> so pick the <code>sep</code> value that works for you. Above I used slashes (padded with spaces) since that's what you typically see in quotations of songs and poems.</p>
<p>You can also try <code>print(dataset['lyrics'].str.cat(sep='\n'))</code> which maintains the line breaks but stores them all in one string instead of one string per line.</p>
|
python|pandas
| 1
|
6,309
| 45,726,485
|
Group by column in pandas dataframe and average arrays
|
<p>I have a movie dataframe with movie names, their respective genre, and vector representation (numpy arrays).</p>
<pre><code>ID  Year    Title                         Genre             Word Vector
1   2003.0  Dinosaur Planet               Documentary       [-0.55423898, -0.72544044, 0.33189204, -0.1720...
2   2004.0  Isle of Man TT 2004 Review    Sports & Fitness  [-0.373265237, -1.07549703, -0.469254494, -0.4...
3   1997.0  Character                     Foreign           [-1.57682264, -0.91265768, 2.43038678, -0.2114...
4   1994.0  Paula Abdul's Get Up & Dance  Sports & Fitness  [0.3096168, -0.57186663, 0.39008939, 0.2868615...
5   2004.0  The Rise and Fall of ECW      Sports & Fitness  [0.17175879, -2.38005066, -0.45771399, 1.32608...
</code></pre>
<p>I'd like to group by genre and get each genre's average vector representation (the component-wise average of each movie vector in the genre).</p>
<hr>
<p>I first tried:</p>
<pre><code>movie_df.groupby(['Genre']).mean()
</code></pre>
<p>But the built-in mean function isn't able to take the mean of numpy arrays. </p>
<p>I tried creating my own function to do so and then applying it to each group, but I'm not sure this is using <code>apply</code> correctly:</p>
<pre><code>def vector_average(group):
    series_to_array = np.array(group.tolist())
    return np.mean(series_to_array, axis=0)

movie_df.groupby(['Genre']).apply(vector_average)
</code></pre>
<p>Any pointers would be appreciated! </p>
|
<p>If I understand correctly, to get the component-wise averages you can simply apply <code>np.mean</code> to the <code>'Word Vector'</code> SeriesGroupBy explicitly. </p>
<pre><code>df.groupby('Genre')['Word Vector'].apply(np.mean)
</code></pre>
<hr>
<p><strong>Demo</strong></p>
<pre><code>>>> df = pd.DataFrame({'Title': list('ABCDEFGHIJ'),
                       'Genre': list('ABCBBDCDED'),
                       'Word Vector': [np.random.randint(0, 10, 10)
                                       for _ in range(len('ABCDEFGHIJ'))]})
>>> df
  Genre Title                     Word Vector
0     A     A  [3, 6, 8, 0, 4, 8, 1, 4, 0, 1]
1     B     B  [5, 4, 4, 4, 8, 7, 4, 3, 7, 2]
2     C     C  [1, 7, 6, 7, 3, 3, 8, 1, 8, 1]
3     B     D  [0, 4, 6, 7, 1, 5, 5, 0, 6, 7]
4     B     E  [8, 2, 1, 4, 1, 2, 0, 4, 9, 1]
5     D     F  [7, 9, 7, 8, 8, 7, 2, 9, 1, 3]
6     C     G  [0, 7, 1, 9, 6, 2, 1, 0, 3, 7]
7     D     H  [4, 7, 9, 4, 1, 5, 0, 3, 0, 6]
8     E     I  [5, 1, 5, 1, 8, 1, 1, 4, 5, 6]
9     D     J  [7, 9, 0, 1, 8, 3, 8, 8, 1, 0]
>>> df.groupby('Genre')['Word Vector'].apply(np.mean)
Genre
A    [3.0, 6.0, 8.0, 0.0, 4.0, 8.0, 1.0, 4.0, 0.0, ...
B    [4.33333333333, 3.33333333333, 3.66666666667, ...
C    [0.5, 7.0, 3.5, 8.0, 4.5, 2.5, 4.5, 0.5, 5.5, ...
D    [6.0, 8.33333333333, 5.33333333333, 4.33333333...
E    [5.0, 1.0, 5.0, 1.0, 8.0, 1.0, 1.0, 4.0, 5.0, ...
Name: Word Vector, dtype: object
</code></pre>
|
python|arrays|pandas|numpy|mean
| 11
|
6,310
| 35,678,910
|
Pandas GroupBy Index
|
<p>I have a dataframe with a column that I want to group by. Within each group, I want to perform a check to see if the first value is less than the second value times some scalar, e.g. (x < y * .5). If it is, the first value is set to True and all other values to False. Else, all values are False.</p>
<p>I have a sample data frame here:</p>
<pre><code>d = pd.DataFrame(np.array([[0, 0, 1, 1, 2, 2, 2],
                           [3, 4, 5, 6, 7, 8, 9],
                           [1.25, 10.1, 2.3, 2.4, 1.2, 5.5, 5.7]]).T,
                 columns=['a', 'b', 'c'])
</code></pre>
<p>I can get a stacked groupby to get the data that I want out of column <code>a</code>:</p>
<pre><code>g = d.groupby('a')['c'].nsmallest(2).groupby(level='a')
</code></pre>
<p>This results in three groups, each with 2 entries. By adding an <code>apply</code>, I can call a function to return a boolean mask:</p>
<pre><code>def func(group):
    if group.iloc[0] < group.iloc[1] * .5:
        return [True, False]
    else:
        return [False, False]

g = d.groupby('a')['c'].nsmallest(2).groupby(level='a').apply(func)
</code></pre>
<p>Unfortunately, this destroys the index into the original dataframe and removes the ability to handle cases where more than 2 elements are present.</p>
<p>Two questions:</p>
<ol>
<li><p>Is it possible to maintain the index in the original dataframe and update a column with the results of a groupby? This is made slightly different because the <code>.nsmallest</code> call results in a Series on the 'c' column.</p></li>
<li><p>Does a more elegant method exist to compute a boolean array for groups in a dataframe based on some custom criteria, e.g. this ratio test.</p></li>
</ol>
|
<p>Looks like <a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html" rel="nofollow"><code>transform</code></a> is what you need:</p>
<pre><code>>>> def func(group):
...     res = [False] * len(group)
...     if group.iloc[0] < group.iloc[1] * .5:
...         res[0] = True
...     return res
...
>>> d['res'] = d.groupby('a')['c'].transform(func).astype('bool')
>>> d
   a  b      c    res
0  0  3   1.25   True
1  0  4  10.10  False
2  1  5   2.30  False
3  1  6   2.40  False
4  2  7   1.20   True
5  2  8   5.50  False
6  2  9   5.70  False
</code></pre>
<p>From the documentation:</p>
<blockquote>
<p>The transform method returns an object that is indexed the same (same
size) as the one being grouped. Thus, the passed transform function
should return a result that is the same size as the group chunk. For
example, suppose we wished to standardize the data within each group</p>
</blockquote>
|
python|pandas
| 2
|
6,311
| 35,622,773
|
The size of an array created from np.random.normal
|
<p>I am using the numpy's <code>random.normal</code> routine to create a Gaussian with a given mean and standard deviation. </p>
<pre><code>array_a = an array of len(100)
gaussian = np.random.normal(loc=array_a,scale=0.1,size=len(2*array_a))
</code></pre>
<p>So I expect <code>gaussian</code> to have <code>mean=array_a</code> and <code>stddev=0.1</code>, and the size of the <code>gaussian</code> array to be 2 times that of <code>array_a</code>.
<strong>However the above returns an array with the same size as <code>array_a</code>!</strong> </p>
<p>How do I get the <code>len(gaussian)</code> to be <strong>2 times</strong> <code>len(array_a)</code> with the given <code>mean</code> and <code>standard deviation</code>? </p>
|
<p>You have to multiply <code>len(array_a) * 2</code> instead of <code>len(array_a * 2)</code>, and use <code>loc=array_a.mean()</code>.
Try:</p>
<pre><code>import numpy as np
array_a = np.arange(100)
gaussian = np.random.normal(loc=array_a.mean(), scale=0.1, size=2 * len(array_a))
</code></pre>
<p>Now <code>gaussian.size</code> is <code>200</code> and <code>gaussian.mean()</code> is equal to <code>array_a.mean()</code>.</p>
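<p>If the intent was instead to keep a per-element mean of <code>array_a</code> while doubling the sample count, a sketch relying on NumPy broadcasting <code>loc</code> against <code>size</code> would be:</p>
<pre><code># two draws per element of array_a, flattened to length 2 * len(array_a)
gaussian = np.random.normal(loc=array_a, scale=0.1,
                            size=(2, len(array_a))).ravel()
</code></pre>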
|
python|arrays|numpy|random|gaussian
| 1
|
6,312
| 28,709,810
|
How can I take a list and add elements in columns in intervals?
|
<p>Here's my problem:
This is for an introductory Python course, however I just cannot wrap my head around how to do this without using loops. I have a list of lists, with each list containing 12 float values corresponding to sunshine hours in a month. Each list of 12 months corresponds to a year (1929 - 2009).
Here is an example of the list:</p>
<pre><code>data = [
    [43.8, 60.5, 190.2, 144.7, 240.9, 210.3, 219.7, 176.3, 199.1, 109.2, 78.7, 67.0],
    [49.9, 54.3, 109.7, 102.0, 134.5, 211.2, 174.1, 207.5, 108.2, 113.5, 68.7, 23.3],
    ...]
</code></pre>
<p>Now, the task is to calculate mean sunshine hours per day in the winter. This is to be done by the following algorithm: Decade 1930-1939 would equal the hours from (Dec 1929 + Jan 1930 + Dec 1930 + Jan 1931...+ Jan 1939) / (20 numbers * 30 days in a month) = Mean winter sunshine hours per day.</p>
<p>Now I can do this using for loops, but the task is to do this using NO loops and instead using Numpy and array manipulation.</p>
<p>Here are the things that I have considered:</p>
<ul>
<li>Splitting the data into two arrays (one with the January column and one with the December column)</li>
<li>Adding those (though remember, there's an offset because Jan 1929 is unused as well as Dec 2009)</li>
<li>Splitting the addition array into decades and averaging them.</li>
</ul>
<p>However I'm very lost on how to go about this. So far I've split the data list into January and December arrays, but now I'm stuck.</p>
<p>Update: I've made an array with all the correct "winter" monthly hours (Dec+Jan) and now I just have to figure out how to find the mean of groups of 10 of them.</p>
<pre><code>dataarray = np.array(data)
December = dataarray[:,11]
January = dataarray[:,0]
JanDec = np.zeros(80)
JanDec[:] = January[1:] + December[:-1]
</code></pre>
<p>Any help is appreciated. Thanks!</p>
|
<p>To answer your updated question, to group the data into decades you can <code>reshape</code> your array and take the mean along the correct axis.</p>
<p>This assumes that the number of years you have is divisible by 10 (which it appears to be since you have an array of length 80).</p>
<p>So, as a small example, if you wanted to group <code>[3, 2, 5, 3, 2, 1]</code> into chunks of 2, you could write:</p>
<pre><code>>>> a = np.array([3, 2, 5, 3, 2, 1])
>>> a.reshape(-1, 2)
array([[3, 2],
       [5, 3],
       [2, 1]])
</code></pre>
<p>This gives you a 2D array - the groups you want to calculate the mean of are the rows. To take the mean across the rows you use <code>mean(axis=1)</code>, so you can write:</p>
<pre><code>>>> a.reshape(-1, 2).mean(axis=1)
array([2.5, 4. , 1.5])
</code></pre>
<p>Using this idea, you can quickly take the mean across decades in your data.</p>
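<p>Applied to the <code>JanDec</code> array of length 80 from the question, a sketch of the per-decade means would be:</p>
<pre><code>decade_means = JanDec.reshape(-1, 10).mean(axis=1)  # shape (8,), one mean per decade
</code></pre>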
|
python|arrays|list|numpy
| 1
|
6,313
| 51,003,769
|
How to apply scipy.stats.describe to each group?
|
<p>I would appreciate if you could let me know how to apply <code>scipy.stats.describe</code> to calculate summary statistics by group. My data (<code>TrainSet</code>) is like this:</p>
<pre><code>Financial Distress    x1     x2    x3
0                   1.28   0.02  0.87
0                   1.27   0.01  0.82
0                   1.05  -0.06  0.92
1                   1.11  -0.02  0.86
0                   1.06   0.11  0.81
0                   1.06   0.08  0.88
1                   0.87  -0.03  0.79
</code></pre>
<p>I want to compute the summary statistics by "Financial Distress". I mean something like this <a href="https://stackoverflow.com/questions/33575587/pandas-dataframe-how-to-apply-describe-to-each-group-and-add-to-new-columns">post</a> but via <code>scipy.stats.describe</code> because I need skewness and kurtosis for x1, x2, and x3 by group. However, my code doesn't provide the statistics by group.</p>
<pre><code>desc = dict()
for col in TrainSet.columns:
    if [TrainSet["Financial Distress"]==0]:
        desc[col] = describe(TrainSet[col]())
df = pd.DataFrame.from_dict(desc, orient='index')
df.to_csv("Descriptive Statistics3.csv")
</code></pre>
<p>In fact, I need something like this:</p>
<pre><code>Group                                            0                                               1
statistics          nobs       minmax  mean  variance  skewness  kurtosis  nobs       minmax  mean  variance  skewness  kurtosis
Financial Distress  2569       (0, 1)   0.0       0.0       4.9      22.1    50       (0, 1)   0.0       0.0       2.9      22.1
x1                  2569    (0.1, 38)   1.4       1.7      16.5     399.9    50  (-3.6, 3.8)   0.3       0.1       0.5      21.8
x2                  2569  (-0.2, 0.7)   0.1       0.0       1.0       1.8    50  (-0.3, 0.7)   0.1       0.0       0.9       1.2
x3                  2569   (0.1, 0.9)   0.6       0.0      -0.5      -0.2    50   (0.1, 0.9)   0.6       0.0      -0.6      -0.3
x4                  2569   (5.3, 6.3)   0.9       0.3       3.2      19.7    50     (-26, 38)  14.0      12.0      15.1      26.5
x5                  2569  (-0.2, 0.8)   0.2       0.0       0.8       1.4    50   (0.3, 0.9)   0.4       0.0       0.5      -0.3
</code></pre>
<p>Or</p>
<pre><code>      nobs          minmax    mean  variance      skewness  kurtosis
x1 0     5    (1.05, 1.28)   1.144   0.01433  4.073221e-01 -1.825477
   1     2    (0.87, 1.11)   0.990   0.02880  1.380350e-15 -2.000000
x2 0     5   (-0.06, 0.11)   0.032   0.00437 -1.992376e-01 -1.130951
   1     2  (-0.03, -0.02)  -0.025   0.00005  1.058791e-15 -2.000000
x3 0     5    (0.81, 0.92)   0.860   0.00205  1.084093e-01 -1.368531
   1     2    (0.79, 0.86)   0.825   0.00245  4.820432e-15 -2.000000
</code></pre>
<p>Thanks in advance,</p>
|
<p>If you wish to describe 3 series independently by group, it seems you'll need 3 dataframes. You can construct these dataframes and then concatenate them:</p>
<pre><code>from scipy.stats import describe

grouper = df.groupby('FinancialDistress')
variables = df.columns[1:]

res = pd.concat([pd.DataFrame(describe(g[x]) for _, g in grouper)
                   .reset_index().assign(cat=x).set_index(['cat', 'index'])
                 for x in variables], axis=0)
print(res)

           nobs          minmax   mean  variance      skewness  kurtosis
cat index
x1  0         5    (1.05, 1.28)  1.144   0.01433  4.073221e-01 -1.825477
    1         2    (0.87, 1.11)  0.990   0.02880  1.380350e-15 -2.000000
x2  0         5   (-0.06, 0.11)  0.032   0.00437 -1.992376e-01 -1.130951
    1         2  (-0.03, -0.02) -0.025   0.00005  1.058791e-15 -2.000000
x3  0         5    (0.81, 0.92)  0.860   0.00205  1.084093e-01 -1.368531
    1         2    (0.79, 0.86)  0.825   0.00245  4.820432e-15 -2.000000
</code></pre>
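<p>For comparison, a hedged pandas-only sketch of the skewness and kurtosis part (column name as in the snippet above; the other <code>describe</code> fields would still need scipy):</p>
<pre><code># 'skew' is a built-in groupby reducer; pd.Series.kurt is passed as a callable
stats = df.groupby('FinancialDistress')[['x1', 'x2', 'x3']].agg(
    ['count', 'mean', 'var', 'skew', pd.Series.kurt]
)
</code></pre>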
|
python|python-3.x|pandas|scipy|statistics
| 2
|
6,314
| 50,748,340
|
Improve program readability for logic over multiple pandas columns
|
<p>I need to apply some logic over multiple columns, but all I could do is write it out one column at a time (and that's not the Python way). </p>
<pre><code>import numpy as np
import pandas as pd
data = {
    'Ticker': ['S&P', 'Kospi', 'FTSE', 'DAX', 'Topix'],
    'P/E_Cur': [26, 21, 16, 14, 23],
    'P/E_lag_1yr': [22, 14, 28, 31, 18],
    'P/E_lag_2yr': [17, 11, 13, np.NaN, 10],
    'P/E_lag_3yr': [np.NaN, np.NaN, 12, 14, 15]
}

df = pd.DataFrame(data)
</code></pre>
<p>The conditions are: the current P/E is at a 4-year high, and the current value has increased over 10% in the past 3 years. If all of these conditions are met the flag will be 1, else 0. But if any field in a row is null then the flag should be null as well. All I could do was write out all of the conditions manually:</p>
<pre><code>c1 = df['P/E_Cur'].notnull()
c2 = df['P/E_lag_1yr'].notnull()
c3 = df['P/E_lag_2yr'].notnull()
c4 = df['P/E_lag_3yr'].notnull()
c5 = df['P/E_Cur']>df['P/E_lag_1yr']
c6 = df['P/E_Cur']>df['P/E_lag_2yr']
c7 = df['P/E_Cur']>df['P/E_lag_3yr']
c8 = (df['P/E_Cur']/df['P/E_lag_3yr']-1)>0.1
df['P/E_flag'] = np.where(c1&c2&c3&c4,np.where(c5&c6&c7&c8,1,0), np.NaN)
</code></pre>
<p>I want to write all this logic in a smarter, more Pythonic way.</p>
|
<p>Here's my attempt using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>pd.DataFrame.all</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html" rel="nofollow noreferrer"><code>pd.DataFrame.max</code></a>. Setting <code>axis=1</code> ensures we aggregate <em>over columns</em> or <em>by row</em>.</p>
<pre><code>mask1 = df[['P/E_Cur', 'P/E_lag_1yr', 'P/E_lag_2yr', 'P/E_lag_3yr']].notnull().all(1)
mask2 = df['P/E_Cur'] > df[['P/E_lag_1yr', 'P/E_lag_2yr', 'P/E_lag_3yr']].max(1)
mask3 = (df['P/E_Cur'] / df['P/E_lag_3yr'] - 1) > 0.1
df['P/E_flag'] = np.where(mask1, np.where(mask2 & mask3, 1, 0), np.nan)
</code></pre>
|
python|pandas|dataframe
| 3
|
6,315
| 33,503,806
|
Use Regex to extract file path and save it in python
|
<p>I have a text file which holds lots of file paths, <em>file.txt</em>:</p>
<pre><code>C:\data\AS\WO\AS_WOP_1PPPPPP20070506.bin
C:\data\AS\WO\AS_WOP_1PPPPPP20070606.bin
C:\data\AS\WO\AS_WOP_1PPPPPP20070708.bin
C:\data\AS\WO\AS_WOP_1PPPPPP20070808.bin
...
</code></pre>
<p>What I did with <em>Regex</em> to extract the date from path:</p>
<pre><code>import re

textfile = open('file.txt', 'r')
filetext = textfile.read()
textfile.close()

data = []
for line in filetext:
    matches = re.search("AS_[A-Z]{3}_(.{7})([0-9]{4})([0-9]{2})([0-9]{2})", line)
    data.append(line)
</code></pre>
<p>it does not give what I want.</p>
<p>My output should be like this:</p>
<pre><code>year month
2007 05
2007 06
2007 07
2007 08
</code></pre>
<p>and then save it as <strong>list of lists</strong>:</p>
<pre><code>[['2007', '5'], ['2007', '6'], ['2007', '7'], ['2007', '8']]
</code></pre>
<p><strong>or</strong> save it as a <strong>Pandas series</strong>.</p>
<p>Is there any way with <code>regex</code> to get what I want?</p>
|
<p>You can simplify your regex to this:</p>
<pre><code>/(....)(..)..\.bin$/
</code></pre>
<p>Group 1 will have the year while Group 2 will have the month. I assume that the format is pertaining throughout the file. </p>
<p>Now, <code>.</code> represents <em>any</em> character, <code>\.</code> represents a literal <code>.</code> ("dot"), and <code>$</code> means at the <em>end of the string</em>.
So I'm matching <code>.bin</code> at the end of the line, leaving out the day, and grouping only the year and month.</p>
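<p>A short sketch applying that pattern in Python (file name taken from the question):</p>
<pre><code>import re

pattern = re.compile(r'(....)(..)..\.bin$')

data = []
with open('file.txt') as fh:
    for line in fh:
        m = pattern.search(line.strip())
        if m:
            # group 1 = year, group 2 = month
            data.append([m.group(1), m.group(2)])

# data -> [['2007', '05'], ['2007', '06'], ['2007', '07'], ['2007', '08']]
</code></pre>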
|
python|regex|pandas
| 3
|
6,316
| 5,657,444
|
NumPy: load heterogenous columns of data from list of strings
|
<p>I'm working with array data stored in an ASCII file (similar to <a href="http://thread.gmane.org/gmane.comp.python.numeric.general/42342" rel="nofollow">this thread</a>). My file is at least 2M lines (158 MB), and is divided into multiple sections with different schemas. In my module to read the format, I want to read the whole file via <code>lines = open('myfile.txt', 'r').readlines()</code>, so I can index the positions of each section, then read each section that I need into NumPy data structures.</p>
<p>For example, one excerpt of a section is:</p>
<pre><code>>>> print lines[5:10]
[' 1 0.1000 0.300E-03 0.000E+00 0.300E-03 0.000E+00 0.000E+00 0.300E-03 0.100E-03\n',
' 2 0.1000 0.120E-02 0.000E+00 0.120E-02 0.000E+00 0.000E+00 0.120E-02 0.100E-03\n',
' 3 0.1000 0.100E-02 0.000E+00 0.100E-02 0.000E+00 0.000E+00 0.100E-02 0.100E-03\n',
' 4 0.1000 0.110E-02 0.000E+00 0.110E-02 0.000E+00 0.000E+00 0.110E-02 0.100E-03\n',
' 5 0.1000 0.700E-03 0.000E+00 0.700E-03 0.000E+00 0.000E+00 0.700E-03 0.100E-03\n']
</code></pre>
<p>Which has the schema <code>[int, float, float, float, float, float, float, float, float]</code>, and a later part will have a simpler <code>[int, float]</code> schema:</p>
<pre><code>>>> print lines[20:25]
[' 1 0.00000E+00\n',
' 2 0.43927E-07\n',
' 3 0.44006E-07\n',
' 4 0.44020E-07\n',
' 5 0.44039E-07\n']
</code></pre>
<p>How can I quickly load in different sections of the lines with NumPy? I see there is <code>np.loadtxt</code>, but it requires a file handle, and reads all the way to the end. I also see <code>np.from*</code> functions, but I'm not sure how to use them with my already read <code>lines</code>. Do I need to read the file twice?</p>
<p>With regards to the heterogeneous data types, I figure I can use a compound <code>dtype</code>, like <code>np.dtype([('col1', '<i2'), ('col2', 'f4'), ('col3', 'f4'), ('col4', 'f4'), ('col5', 'f4'), ('col6', 'f4'), ('col7', 'f4'), ('col8', 'f4'), ('col9', 'f4')])</code>, correct?</p>
|
<p><a href="http://docs.python.org/release/2.6.4/library/stringio.html" rel="nofollow"><code>StringIO</code></a> can make file-type objects from strings. So you could do</p>
<pre><code>from StringIO import StringIO
m = np.loadtxt(StringIO('\n'.join(lines[5:10])))
</code></pre>
<p>Or, since <code>np.genfromtxt</code> accepts any iterable of strings, you can pass a slice of <code>lines</code> to it directly; for the simpler two-column section that would be:</p>
<pre><code>m = np.genfromtxt(lines[20:25], dtype=np.dtype([('col1', '<i2'), ('col2', 'f4')]))
</code></pre>
|
python|load|numpy
| 3
|
6,317
| 66,425,898
|
How to not SELECT rows where certain columns are the same and one column is different?
|
<p>This seems like a simple thing I'm surprised I haven't done before, but I basically want to remove duplicates based on a few different columns, but only when a particular column is different. I have the option to do this either in SQL or pandas, though SQL would be preferable. So given the following query:</p>
<pre><code>SELECT fname, lname, order_date, product_id
FROM T_ORDERS
</code></pre>
<p>I want to remove any orders where fname, lname, and product_id are the same and order_date is different, keeping the row with the later order_date. Is there an easy way to do this in SQL?</p>
<p>If I must do it python/pandas or it would be much easier, I can do that as well.</p>
|
<p>It's not that easy with <code>SQL</code> AFAIK. You need a self-join or a window function one way or another (e.g. ranking rows with <code>ROW_NUMBER() OVER (PARTITION BY fname, lname, product_id ORDER BY order_date DESC)</code> and keeping rank 1).</p>
<p>For pandas, it's <code>drop_duplicates</code>:</p>
<pre><code>(df.sort_values('order_date', ascending=False)
.drop_duplicates(['fname', 'lname', 'product_id'])
)
</code></pre>
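<p>Note that <code>drop_duplicates</code> keeps the first occurrence by default, which after the descending sort is the row with the latest <code>order_date</code>.</p>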
|
sql|pandas|ssms
| 1
|
6,318
| 66,529,856
|
Pandas: Combine two data-frames with different shape based on one common column
|
<p>I have a <code>df</code> with columns:</p>
<pre><code>Student_id subject marks
1 English 70
1 math 90
1 science 60
1 social 80
2 English 90
2 math 50
2 science 70
2 social 40
</code></pre>
<p>I have another <code>df1</code> with columns</p>
<pre><code>Student_id Year_of_join column_with_info
1 2020 some_info1
1 2020 some_info2
1 2020 some_info3
2 2019 some_info4
2 2019 some_info5
</code></pre>
<p>I want to combine the two data frames above (.csv files) into something like <code>res_df</code> below:</p>
<pre><code>Student_id subject marks year_of_join column_with_info
1 English 70 2020 some_info1
1 math 90 2020 some_info2
1 science 60 2020 some_info3
1 social 80 NaN NaN
2 English 90 2019 some_info4
2 math 50 2019 some_info5
2 science 70 NaN NaN
2 social 40 NaN NaN
</code></pre>
<p><strong>Note:</strong>
I want to join the datasets based on <code>Student_id</code>s. Both have the same unique <code>Student_id</code>s, but the shapes of the two datasets differ.</p>
<p><strong>P.S:</strong> The resulting df <code>res_df</code> is just an example of how the data might look after combining the two data frames; it can also be like this:</p>
<pre><code>Student_id subject marks year_of_join column_with_info
1 English 70 NaN NaN
1 math 90 2020 some_info1
1 science 60 2020 some_info2
1 social 80 2020 some_info3
2 English 90 NaN NaN
2 math 50 NaN NaN
2 science 70 2019 some_info4
2 social 40 2019 some_info5
</code></pre>
<p>Thanks in advance for the help! Please help me to solve this.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>GroupBy.cumcount</code></a> to build a helper column, then merge with a left join:</p>
<pre><code>df['g'] = df.groupby('Student_id').cumcount()
df1['g'] = df1.groupby('Student_id').cumcount()
df = df.merge(df1, on=['Student_id','g'], how='left').drop('g', axis=1)
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
6,319
| 66,619,277
|
Fuzzymatcher returns NaN for best_match_score
|
<p>I'm observing odd behaviour while performing <code>fuzzy_left_join</code> from the <code>fuzzymatcher</code> library. Trying to join two df, the left one with 5217 records and the right one with 8734, only 71 records end up with a <code>best_match_score</code>, which seems really odd. To achieve better results I even removed all the numbers and left only alphabetical characters in the joining columns. In the merged table the id column from the right table is <code>NaN</code>, which is also a strange result.</p>
<p>left table - column for join "amazon_s3_name". First item - <code>limonig</code></p>
<pre><code>+------+---------+-------+-----------+------------------------------------+
| id | product | price | category | amazon_s3_name |
+------+---------+-------+-----------+------------------------------------+
| 1 | A | 1.49 | fruits | limonig |
| 8964 | B | 1.39 | beverages | studencajfuzelimonilimonetatrevaml |
| 9659 | C | 2.79 | beverages | studencajfuzelimonilimtreval |
+------+---------+-------+-----------+------------------------------------+
</code></pre>
<p>right table - column for join "amazon_s3_name" - last item - <code>limoni</code></p>
<pre><code>+------+----------------------------------------------------------------------------------------------------------------------------+--------------------------------------------+
| id | picture | amazon_s3_name |
+------+----------------------------------------------------------------------------------------------------------------------------+--------------------------------------------+
| 191 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/AhmadCajLimonIDjindjifil20X2G.jpg | ahmadcajlimonidjindjifilxg |
| 192 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/AhmadCajLimonIDjindjifil20X2G40g.jpg | ahmadcajlimonidjindjifilxgg |
| 204 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/Ahmadcajlimonidjindjifil20x2g40g00051265.jpg | ahmadcajlimonidjindjifilxgg |
| 1608 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/Cajstudenfuzetealimonilimonovatreva15lpet.jpg | cajstudenfuzetealimonilimonovatrevalpet |
| 4689 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/Lesieursalatensosslimonimaslinovomaslo.jpg | lesieursalatensosslimonimaslinovomaslo |
| 4690 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/Lesieursalatensosslimonimaslinovomaslo05l500ml01301150.jpg | lesieursalatensosslimonimaslinovomaslolml |
| 4723 | https://s3.eu-central-1.amazonaws.com/groceries.pictures/images/Limoni.jpg | limoni |
+------+----------------------------------------------------------------------------------------------------------------------------+--------------------------------------------+
</code></pre>
<p>merged table - as we can see in the merged table <code>best_match_score</code> is <code>NaN</code></p>
<pre><code>+----+------------------+-----------+------------+-------+----------+----------------------+------------+---------------------+-------------+----------------------+
| id | best_match_score | __id_left | __id_right | price | category | amazon_s3_name_left | image_left | amazon_s3_name_left | image_right | amazon_s3_name_right |
+----+------------------+-----------+------------+-------+----------+----------------------+------------+---------------------+-------------+----------------------+
| 0 | NaN | 0_left | None | 1.49 | Fruits | Limoni500g09700112 | NaN | limonig | NaN | NaN |
| 2 | NaN | 2_left | None | 1.69 | Bio | Morkovi1kgbr09700132 | NaN | morkovikgbr | NaN | NaN |
+----+------------------+-----------+------------+-------+----------+----------------------+------------+---------------------+-------------+----------------------+
</code></pre>
|
<p>You could give <a href="https://pypi.org/project/polyfuzz/" rel="nofollow noreferrer"><code>polyfuzz</code></a> a try. Use the examples' setup, for example using <code>TF-IDF</code> or <code>Bert</code>, then run:</p>
<pre><code>from polyfuzz import PolyFuzz

# `matchers` comes from the polyfuzz examples' setup (TF-IDF, Bert, ...)
model = PolyFuzz(matchers).match(df1["amazon_s3_name"].tolist(),
                                 df2["amazon_s3_name"].to_list())
df1['To'] = model.get_matches()['To']
</code></pre>
<p>then merge:</p>
<pre><code>df1.merge(df2, left_on='To', right_on='amazon_s3_name')
</code></pre>
|
python|python-3.x|pandas|fuzzy-search
| 1
|
6,320
| 16,064,308
|
Include labels for each data point in pandas plotting
|
<p>Assume we have a DataFrame with prices and volume(think finance).</p>
<p>What's the best way to label each price point with the volume of that price point?</p>
<pre><code>                  Price  Volume
2013-04-10 04:46   1300      19
2013-04-10 04:47   1305      20
2013-04-10 04:48   1302       6
2013-04-10 04:49   1301      10
</code></pre>
|
<p>Here is one possible implementation.</p>
<p>I have imported the following:</p>
<pre><code>import pandas as pd
import datetime as dt
import matplotlib.pyplot as plt
</code></pre>
<p>Now we can recreate the data</p>
<pre><code>ind = pd.date_range(start=dt.datetime(2013, 4, 10, 4, 46),
                    periods=4, freq='Min')
data = pd.DataFrame([[1200, 19], [1302, 20], [1302, 6], [1301, 10]],
                    index=ind, columns=['Price', 'Volume'])
</code></pre>
<p>Now I will define the <code>annotate_plot</code> function. The docstring should have enough information to figure out what it is doing.</p>
<pre><code>def annotate_plot(frame, plot_col, label_col, **kwargs):
    """
    Annotate the plot of a given DataFrame using one of its columns

    Should be called right after a DataFrame or series plot method,
    before telling matplotlib to show the plot.

    Parameters
    ----------
    frame : pandas.DataFrame
    plot_col : str
        The string identifying the column of frame that was plotted
    label_col : str
        The string identifying the column of frame to be used as label
    kwargs:
        Other key-word args that should be passed to plt.annotate

    Returns
    -------
    None

    Notes
    -----
    After calling this function you should call plt.show() to get the
    results. This function only adds the annotations, it doesn't show
    them.
    """
    import matplotlib.pyplot as plt  # Make sure we have pyplot as plt

    for label, x, y in zip(frame[label_col], frame.index, frame[plot_col]):
        plt.annotate(label, xy=(x, y), **kwargs)
</code></pre>
<p>This function can now be used to do a basic plot with labels</p>
<pre><code>data.Price.plot(marker='*')
annotate_plot(data, 'Price', 'Volume')
plt.show()
</code></pre>
<p>You can also pass arbitrary arguments through the annotate_plot function that go directly to plt.annotate(). Note that most of these arguments were taken from <a href="https://stackoverflow.com/questions/5147112/matplotlib-how-to-put-individual-tags-for-a-scatter-plot">this answer</a>.</p>
<pre><code>bbox = dict(boxstyle='round,pad=0.5', fc='green', alpha=0.3)
ha = 'right'
va = 'bottom'
arrowprops = dict(arrowstyle='->', connectionstyle='arc3,rad=0')
xytext = (-20, 20)
textcoords = 'offset points'
data.Price.plot(marker='*')
annotate_plot(data, 'Price', 'Volume', bbox=bbox, ha=ha, va=va,
xytext=xytext, textcoords=textcoords)
plt.show()
</code></pre>
|
python|pandas
| 2
|
6,321
| 57,373,034
|
Quickest way to assign cell values in Pandas
|
<p>I have a list of tuples:</p>
<pre><code>d = [("a", "x"), ("b", "y"), ("a", "y")]
</code></pre>
<p>and the <code>DataFrame</code>:</p>
<pre><code>     y    x
b  0.0  0.0
a  0.0  0.0
</code></pre>
<p>I would like to replace any <code>0s</code> with <code>1s</code> if the row and column labels correspond to a tuple in <code>d</code>, such that the new DataFrame is:</p>
<pre><code>     y    x
b  1.0  0.0
a  1.0  1.0
</code></pre>
<p>I am currently using:</p>
<pre><code>for i, j in d:
    df.loc[i, j] = 1.0
</code></pre>
<p>This seems to me to be the most "pythonic" approach, but for a <code>DataFrame</code> of shape 20000 * 20000 and a list of length 10000, this process literally takes forever. There must be a better way of accomplishing this. Any ideas?</p>
<p>Thanks</p>
|
<p><strong>Approach #1: No bad entries in <code>d</code></strong></p>
<p>Here's one NumPy based method -</p>
<pre><code>def assign_val(df, d, newval=1):
    # Get d-rows, cols as arrays for efficient usage later on
    di, dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])

    # Get col and index data
    i, c = df.index.values.astype(di.dtype), df.columns.values.astype(dc.dtype)

    # Locate row indexes from d back to df
    sidx_i = i.argsort()
    I = sidx_i[np.searchsorted(i, di, sorter=sidx_i)]

    # Locate column indexes from d back to df
    sidx_c = c.argsort()
    C = sidx_c[np.searchsorted(c, dc, sorter=sidx_c)]

    # Assign into array data with new values
    df.values[I, C] = newval
    # Use df.to_numpy(copy=False)[I,C] = newval on newer pandas versions
    return df
</code></pre>
<p>Sample run -</p>
<pre><code>In [21]: df = pd.DataFrame(np.zeros((2,2)), columns=['y','x'], index=['b','a'])

In [22]: d = [("a", "x"), ("b", "y"), ('a','y')]

In [23]: assign_val(df, d, newval=1)
Out[23]:
     y    x
b  1.0  0.0
a  1.0  1.0
</code></pre>
<p><strong>Approach #2: Generic one</strong></p>
<p>If there are any <em>bad</em> entries in <code>d</code>, we need to filter those out. So, a modified one for that generic case would be -</p>
<pre><code>def ssidx(i, di):
    sidx_i = i.argsort()
    idx_i = np.searchsorted(i, di, sorter=sidx_i)
    invalid_mask = idx_i == len(sidx_i)
    idx_i[invalid_mask] = 0
    I = sidx_i[idx_i]
    invalid_mask |= i[I] != di
    return I, invalid_mask

# Get d-rows, cols as arrays for efficient usage later on
di, dc = np.array([j[0] for j in d]), np.array([j[1] for j in d])

# Get col and index data
i, c = df.index.values.astype(di.dtype), df.columns.values.astype(dc.dtype)

# Locate row indexes from d back to df
I, badmask_I = ssidx(i, di)

# Locate column indexes from d back to df
C, badmask_C = ssidx(c, dc)

badmask = badmask_I | badmask_C
goodmask = ~badmask
df.values[I[goodmask], C[goodmask]] = newval
</code></pre>
|
python|pandas|numpy
| 2
|
6,322
| 57,432,437
|
Optimize iteration through numpy array when averaging adjacent values
|
<p>I have a function in Python that</p>
<ol>
<li>Iterates over a sorted distinct array of Floats</li>
<li>Gets the previous and next item</li>
<li>Finds out if they are within a certain range of each other</li>
<li>averages them out, and replaces the original values with the averaged value</li>
<li>rerun through that loop until there are no more changes</li>
<li>returns a distinct array</li>
</ol>
<p>The issue is that it is extremely slow. The array "a" could have 100k+ elements, and the function takes 7-10 minutes to complete.</p>
<p>I found that I needed to iterate over the array after the initial iteration because after averaging, sometimes the average values could be within range to be averaged again</p>
<p>I thought about breaking it into chunks and use multiprocessing, my concern is the end of one chunk, and the beginning of the next chunk would need to be averaged too.</p>
<pre class="lang-py prettyprint-override"><code>def reshape_arr(a, close):
"""Iterates through 'a' to find values +- 'close', and averages them, then returns a distinct array of values"""
flag = True
while flag:
array = a.sort_values().unique()
l = len(array)
flag = False
for i in range(l):
previous_item = next_item = None
if i > 0:
previous_item = array[i - 1]
if i < (l - 1):
next_item = array[i + 1]
if previous_item is not None:
if abs(array[i] - previous_item) < close:
average = (array[i] + previous_item) / 2
flag = True
#find matching values in a, and replace with the average
a.replace(previous_item, value=average, inplace=True)
a.replace(array[i], value=average, inplace=True)
if next_item is not None:
if abs(next_item - array[i]) < close:
flag = True
average = (array[i] + next_item) / 2
# find matching values in a, and replace with the average
a.replace(array[i], value=average, inplace=True)
a.replace(next_item, value=average, inplace=True)
return a.unique()
</code></pre>
<p><code>a</code> is a <code>pandas.Series</code> from a DataFrame of anything between 0 and 200k rows, and <code>close</code> is an int (100 for example).</p>
<p>It works, just very slowly.</p>
|
<p>First, if the length of the input array <code>a</code> is large and <code>close</code> is relatively small, your proposed algorithm may be numerically unstable.</p>
<p>That being said, here are some ideas that reduce the time complexity from <code>O(N^3)</code> to <code>O(N)</code> (for an approximate implementation) or <code>O(N^2)</code> (for an equivalent implementation). For <code>N=100</code>, this gives a speedup up to a factor of <code>6000</code> for some choices of <code>arr</code> and <code>close</code>.</p>
<p>Consider an input array <code>arr = [a,b,c,d]</code>, and suppose that <code>close > d - a</code>. In this case, the algorithm proceeds as follows:</p>
<pre><code>[a,b,c,d]
[(a+b)/2,(b+c)/2,(c+d)/2]
[(a+2b+c)/4,(b+2c+d)/4]
[(a+3b+3c+d)/8]
</code></pre>
<p>One can recognize that if <code>[x_1, x_2, ..., x_n]</code> is a maximal contiguous subarray of <code>arr</code> s.t. <code>x_i - x_{i-1} < close</code>, then <code>[x_1, x_2, ..., x_n]</code> eventually evaluates to <code>(sum_{k=1}^{n} x_k * c_{n-1,k-1}) / 2^(n-1)</code> where <code>c_{n,k}</code> is the binomial coefficient <code>n choose k</code>.</p>
<p>This gives an <code>O(N)</code> implementation as follows:</p>
<pre><code>import numpy as np
from scipy.stats import binom
from scipy.special import comb

def binom_mean(arr, scipy_cutoff=64):
    """
    Given an array arr, returns an average of arr
    weighted by binomial coefficients.
    """
    n = arr.shape[0]
    if arr.shape[0] == 1:
        return arr[0]
    # initializing a scipy binomial random variable can be slow
    # so, if short runs are likely, we can speed things up
    # by doing explicit computations
    elif n < scipy_cutoff:
        return np.average(arr, weights=comb(n-1, np.arange(n), exact=False))
    else:
        f = binom(n-1, 0.5).pmf
        return np.average(arr, weights=f(np.arange(n)))

def reshape_arr_binom(arr, close):
    d = np.ediff1d(arr, to_begin=0) < close
    close_chunks = np.split(arr, np.where(~d)[0])
    return np.fromiter(
        (binom_mean(c) for c in close_chunks),
        dtype=np.float
    )
</code></pre>
<p>The result is within <code>10e-15</code> of your implementation for <code>np.random.seed(0);N=1000;close=1/N;arr=np.random.rand(N)</code>. However, for large <code>N</code>, this may not be meaningful unless <code>close</code> is small. For the above parameter values, this is <code>270</code> times faster than the original code on my machine.</p>
<p>However, if we choose a modest value of <code>N = 100</code> and set <code>close</code> to a large value like <code>1</code>, the speedup is by a factor of <code>6000</code>. This is because for large values of <code>close</code>, the original implementation is <code>O(N^3)</code>; specifically, <code>a.replace</code> is potentially called <code>O(N^2)</code> times and has a cost <code>O(N)</code>. So, maximal speedup is achieved when contiguous elements are likely to be close.</p>
<hr>
<p>For the reference, here is an <code>O(N^2)</code> implementation that is equivalent to your code (I do not recommend using this in practice).</p>
<pre><code>import pandas as pd
import numpy as np

np.random.seed(0)

def custom_avg(arr, indices, close):
    new_indices = list()
    last = indices[-1]
    for i in indices:
        if arr[i] - arr[i-1] < close:
            new_indices.append(i)
            avg = (arr[i-1] + arr[i]) / 2
            arr[i-1] = avg
            if i != last and arr[i+1] - arr[i] >= close:
                arr[i] = avg
    return new_indices

def filter_indices(indices):
    new_indices = list()
    second_dups = list()
    # handle empty index case
    if not indices:
        return new_indices, second_dups
    for i, j in zip(indices, indices[1:]):
        if i + 1 == j:
            # arr[i] is guaranteed to be different from arr[i-1]
            new_indices.append(i)
        else:
            # arr[i+1] is guaranteed to be a duplicate of arr[i]
            second_dups.append(i)
    second_dups.append(indices[-1])
    return new_indices, second_dups

def reshape_arr_(arr, close):
    indices = range(1, len(arr))
    dup_mask = np.zeros(arr.shape, bool)
    while indices:
        indices, second_dups = filter_indices(custom_avg(arr, indices, close))
        # print(f"n_inds = {len(indices)};\tn_dups = {len(second_dups)}")
        dup_mask[second_dups] = True
    return np.unique(arr[~dup_mask])
<p>The basic ideas are the following:</p>
<p>First, consider two adjacent elements <code>(i,j)</code> with <code>j = i + 1</code>. If <code>arr[j] - arr[i] >= close</code> in the current iteration, then <code>arr[j] - arr[i] >= close</code> also holds <em>after</em> the current iteration. This is because <code>arr[i]</code> can only decrease and <code>arr[j]</code> can only increase. So, if the <code>(i,j)</code> pair is not averaged in the current iteration, it will not be averaged in any subsequent iteration, and we can avoid looking at <code>(i,j)</code> in the future.</p>
<p>Second, if <code>(i,j)</code> is averaged and <code>(i+1,j+1)</code> is not, we know that <code>arr[i]</code> is a duplicate of <code>arr[j]</code>. Also, the last modified element in each iteration is always a duplicate.</p>
<p>Based on these observations, we need to process fewer and fewer indices in each iteration. The worst case is still <code>O(N^2)</code>, which can be witnessed by setting <code>close = arr.max() - arr.min() + 1</code>.</p>
<p>Some benchmarks:</p>
<pre><code>from timeit import timeit
make_setup = """
from __main__ import np, pd, reshape_arr, reshape_arr_, reshape_arr_binom
np.random.seed(0)
arr = np.sort(np.unique(np.random.rand({N})))
close = {close}""".format
def benchmark(N, close):
np.random.seed(0)
setup = make_setup(N=N, close=close)
print('Original:')
print(timeit(
stmt='reshape_arr(pd.Series(arr.copy()), close)',
# setup='from __main__ import reshape_arr; import pandas as pd',
setup=setup,
number=1,
))
print('Quadratic:')
print(timeit(
stmt='reshape_arr_(arr.copy(), close)',
setup=setup,
number=10,
))
print('Binomial:')
print(timeit(
stmt='reshape_arr_binom(arr.copy(), close)',
setup=setup,
number=10,
))
if __name__ == '__main__':
print('N=10_000, close=1/N')
benchmark(10_000, 1/10_000)
print('N=100, close=1')
benchmark(100, 1)
# N=10_000, close=1/N
# Original:
# 14.855983458999999
# Quadratic:
# 0.35902471400000024
# Binomial:
# 0.7207887170000014
# N=100, close=1
# Original:
# 4.132993569
# Quadratic:
# 0.11140068399999947
# Binomial:
# 0.007650813999998007
</code></pre>
<p>The following table shows how the number of pairs we need to look at in the quadratic algorithm goes down each iteration.</p>
<pre><code>n_inds = 39967; n_dups = 23273
n_inds = 25304; n_dups = 14663
n_inds = 16032; n_dups = 9272
n_inds = 10204; n_dups = 5828
n_inds = 6503; n_dups = 3701
n_inds = 4156; n_dups = 2347
n_inds = 2675; n_dups = 1481
n_inds = 1747; n_dups = 928
n_inds = 1135; n_dups = 612
n_inds = 741; n_dups = 394
n_inds = 495; n_dups = 246
n_inds = 327; n_dups = 168
n_inds = 219; n_dups = 108
n_inds = 145; n_dups = 74
n_inds = 95; n_dups = 50
n_inds = 66; n_dups = 29
n_inds = 48; n_dups = 18
n_inds = 36; n_dups = 12
n_inds = 26; n_dups = 10
n_inds = 20; n_dups = 6
n_inds = 15; n_dups = 5
n_inds = 10; n_dups = 5
n_inds = 6; n_dups = 4
n_inds = 3; n_dups = 3
n_inds = 1; n_dups = 2
n_inds = 0; n_dups = 1
</code></pre>
|
python|numpy
| 2
|
6,323
| 24,230,233
|
Fit gaussian integral function to data
|
<p>I have a problem with finding a least-squares fit for a set of given data.
I know the data follows a function which is a convolution of a gaussian and a rectangle (x-ray through a broad slit). What I have done so far is take a look at the convolution integral and discover that it comes down to this:
<img src="https://i.stack.imgur.com/TaGWk.gif" alt="enter image description here">
The integration parameter a is the width of the slit (unknown and desired), with g(x-t) a gaussian function defined as
<img src="https://i.stack.imgur.com/fDl9p.gif" alt="enter image description here">
So basically the function to fit is an integral of a gaussian, with the integration borders given by the width parameter a. The integration is also carried out with a shift of x-t.</p>
<p>Here is a smaller part of the data and a handmade fit.</p>
<pre><code>from pylab import *
from scipy.optimize import curve_fit
from scipy.integrate import quad
</code></pre>
<pre><code># 1/10 of the Data to show the form.
xData = array([-0.1 , -0.09, -0.08, -0.07, -0.06, -0.05, -0.04, -0.03, -0.02,
-0.01, 0. , 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07,
0.08, 0.09, 0.1 ])
yData = array([ 18. , 22. , 22. , 34.000999, 54.002998,
152.022995, 398.15799 , 628.39502 , 884.781982, 848.719971,
854.72998 , 842.710022, 762.580994, 660.435974, 346.119995,
138.018997, 40.001999, 8. , 6. , 4. ,
6. ])
yerr = 0.1*yData # uncertainty of the data
plt.scatter(xData, yData)
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/3FGJU.png" alt="plot of the Data"></p>
<pre><code># functions
def gaus(x, *p):
""" gaussian with p = A, mu, sigma """
A, mu, sigma = p
return A/(sqrt(2*pi)*sigma)*numpy.exp(-(x-mu)**2/(2.*sigma**2))
def func(x,*p):
""" Convolution of gaussian and rectangle is a gaussian integral.
Parameters: A, mu, sigma, a"""
A, mu, sigma, a = p
return quad(lambda t: gaus(x-t,A,mu,sigma),-a,a)
vfunc = vectorize(func) # Probably this is a Problem but if I dont use it, func can only be evaluated at 1 point not an array
</code></pre>
<p>To see that func does indeed describe the data and my calculations are right, I played around with data and function and tried to match them.
I found the following to be feasible:</p>
<pre><code>p0=[850,0,0.01, 0.04] # will be used as starting values for fitting
sample = linspace(-0.1,0.1,200) # just to make the plot smooth
y, dy = vfunc(sample,*p0)
plt.plot(sample, y, label="Handmade Fit")
plt.scatter(xData, yData, label="Data")
plt.legend()
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/yqX4P.png" alt="Data and handmade fit">
The problem occurs, when I try to fit the data using the just obtained starting values:</p>
<pre><code>fp, Sfcov = curve_fit(vfunc, xData, yData, p0=p0, sigma=yerr)
yf = vfunc(xData, fp)
plt.plot(x, yf, label="Fit")
plt.show()
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-83-6d362c4b9204> in <module>()
----> 1 fp, Sfcov = curve_fit(vfunc, xData, yData, p0=p0, sigma=yerr)
2 yf = vfunc(xData,fp)
3 plt.plot(x,yf, label="Fit")
/usr/lib/python3/dist-packages/scipy/optimize/minpack.py in curve_fit(f, xdata, ydata, p0, sigma, **kw)
531 # Remove full_output from kw, otherwise we're passing it in twice.
532 return_full = kw.pop('full_output', False)
--> 533 res = leastsq(func, p0, args=args, full_output=1, **kw)
534 (popt, pcov, infodict, errmsg, ier) = res
535
/usr/lib/python3/dist-packages/scipy/optimize/minpack.py in leastsq(func, x0, args, Dfun, full_output, col_deriv, ftol, xtol, gtol, maxfev, epsfcn, factor, diag)
369 m = shape[0]
370 if n > m:
--> 371 raise TypeError('Improper input: N=%s must not exceed M=%s' % (n, m))
372 if epsfcn is None:
373 epsfcn = finfo(dtype).eps
TypeError: Improper input: N=4 must not exceed M=2
</code></pre>
<p>I think this means I have fewer data points than fit parameters. Well, let's look at it:</p>
<pre><code>print("Fit-Parameters: %i"%len(p0))
print("Datapoints: %i"%len(yData))
Fit-Parameters: 4
Datapoints: 21
</code></pre>
<p>And actually I have 210 data points.</p>
<p>As written above, I don't really understand why I need to use the vectorize function from numpy for the integral function (func <> vfunc), but not using it doesn't help either. In general one can pass a numpy array to a function, but it appears not to be working here. On the other hand, I might be overestimating the power of a least-squares fit here and it might not be usable in this case, but I would rather not use maximum likelihood here. In general I have never tried to fit an integral function to data, so this is new to me. Likely the problem is here. My knowledge of quad is limited and there might be a better way. Carrying out the integral analytically is not possible to my knowledge but clearly would be the ideal solution ;).</p>
<p>So any ideas where this error comes from?</p>
|
<p>You have two problems. One is that <code>quad</code> returns a tuple with the value and an estimate of the error, and the other is in how you are vectorizing. You don't want to vectorize over the parameter vector. <code>np.vectorize</code> is essentially a <code>for</code> loop under the hood, so there is no performance penalty in writing the loop yourself:</p>
<pre><code>def func(x, p):
""" Convolution of gaussian and rectangle is a gaussian integral.
Parameters: A, mu, sigma, a"""
A, mu, sigma, a = p
return quad(lambda t: gaus(x-t,A,mu,sigma),-a,a)[0]
def vfunc(x, *p):
evaluations = numpy.array([func(i, p) for i in x])
return evaluations
</code></pre>
<p>Note that I have taken the <code>*</code> in <code>func</code> away, but not from <code>gaus</code>. Also, I am selecting the first output of <code>quad</code>.</p>
<p>While this solves your problem, to fit a convolution you may consider going to Fourier space. The Fourier transform of a convolution is the product of the transforms of the functions, and this is going to simplify your life a lot. Furthermore, once in Fourier space you may consider applying a low-pass filter to reduce noise. 210 data points are enough to get good results.</p>
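<p>For instance, here is a minimal sketch of the FFT route via <code>scipy.signal.fftconvolve</code> (the grid and parameter values are illustrative, taken from the handmade fit above, not fitted values):</p>
<pre><code>import numpy as np
from scipy.signal import fftconvolve

x = np.linspace(-0.1, 0.1, 512)
dx = x[1] - x[0]
A, mu, sigma, a = 850., 0., 0.01, 0.04
gauss = A/(np.sqrt(2*np.pi)*sigma)*np.exp(-(x-mu)**2/(2.*sigma**2))
rect = np.where(np.abs(x) <= a, 1.0, 0.0)
# convolution theorem: one FFT-based convolution replaces the whole quad() loop
model = fftconvolve(gauss, rect, mode='same') * dx
</code></pre>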
<p>Also, if you need more powerful algorithms, you should consider iminuit, using ROOT's long proven Minuit.</p>
|
python|numpy|model-fitting
| 0
|
6,324
| 24,089,409
|
Create dataframe row with positive numbers and other with negative
|
<p>I have the following dataframe called <strong>Utilidad</strong></p>
<pre>
Argentina Bolivia Chile España Uruguay
2004 3 6 1 3 2
2005 5 1 4 1 5
</pre>
<p>And I calculate the difference between 2004 and 2005 using</p>
<pre>
Utilidad.ix['resta']=Utilidad.ix[2005]-Utilidad.ix[2004]
</pre>
<p>Now I'm trying to create two additional rows, one with the result of the difference when is positive and the other one with the negatives, something like this</p>
<pre>
Argentina Bolivia Chile España Uruguay
2004 3 6 1 3 2
2005 5 1 4 1 5
resta 2 -5 3 -2 3
positive 2 0 3 0 3
negative 0 -5 0 -2 0
</pre>
<p>The only I have managed to do is to have an additional column which tells me wheter "resta" is positive or not, using</p>
<pre>
Utilidad.ix['boleano'][Utilidad.ix['resta']>0]
</pre>
<p>Can someone help me to create this two additional rows?</p>
<p>Thanks</p>
|
<p><code>numpy.clip</code> will be handy here, or you can just calculate it directly:</p>
<pre><code>In [35]:
Utilidad.ix['positive']=np.clip(Utilidad.ix['resta'], 0, np.inf)
Utilidad.ix['negative']=np.clip(Utilidad.ix['resta'], -np.inf, 0)
#or
Utilidad.ix['positive']=(Utilidad.ix['resta']+Utilidad.ix['resta'].abs())/2
Utilidad.ix['negative']=(Utilidad.ix['resta']-Utilidad.ix['resta'].abs())/2
print Utilidad
Argentina Bolivia Chile España Uruguay
id
2004 3 6 1 3 2
2005 5 1 4 1 5
resta 2 -5 3 -2 3
positive 2 0 3 0 3
negative 0 -5 0 -2 0
[5 rows x 5 columns]
</code></pre>
<p>Some speed comparisons:</p>
<pre><code>%timeit (Utilidad.ix['resta']-Utilidad.ix['resta'].abs())/2
1000 loops, best of 3: 627 µs per loop
In [36]:
%timeit Utilidad.ix['positive'] = np.where(Utilidad.ix['resta'] > 0, Utilidad.ix['resta'], 0)
1000 loops, best of 3: 647 µs per loop
In [38]:
%timeit Utilidad.ix['positive']=np.clip(Utilidad.ix['resta'], 0, 100)
100 loops, best of 3: 2.6 ms per loop
In [45]:
%timeit Utilidad.ix['resta'].clip_upper(0)
1000 loops, best of 3: 1.32 ms per loop
</code></pre>
|
python|pandas
| 1
|
6,325
| 43,901,238
|
pandas keep rows with multiple delimiters
|
<p>One text file with multiple columns; for representation I am just showing 2 columns and 5 rows (the original df has ~400,000 rows):</p>
<pre><code>col0 col1
A1 info
A2 info1,info2
A3 info4,info1,info6
A4 info3,info10
A5 info7,info1,info2,info4,info9
</code></pre>
<p>What I would like to do: if there is a row where col1 has multiple elements, keep the first element and remove the rest. Expected output:</p>
<pre><code>col0 col1
A1 info
A2 info1
A3 info4
A4 info3
A5 info7
</code></pre>
<p>As a sanity check, is it possible to output the rows that were modified in a separate text file? Example:</p>
<p>file_with_rows_modified.txt will have</p>
<pre><code>col0 col1
A2 info1,info2
A3 info4,info1,info6
A4 info3,info10
A5 info7,info1,info2,info4,info9
</code></pre>
<p>edit: these are flat strings</p>
|
<p>You need</p>
<pre><code>df.col1 = df.col1.str.split(',').str[0]
col0 col1
0 A1 info
1 A2 info1
2 A3 info4
3 A4 info3
4 A5 info7
</code></pre>
<p>For your second question,</p>
<pre><code>df[df.col1.str.split(',').str.len() >1]
</code></pre>
<p>will return all the rows that need to be edited, so you can save the result into another df before modifying the dataframe.</p>
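<p>Putting both parts together, a small sketch (the output file name is taken from the question; note the modified rows are saved before the column is overwritten):</p>
<pre><code>mask = df.col1.str.split(',').str.len() > 1
df[mask].to_csv('file_with_rows_modified.txt', index=False)
df.col1 = df.col1.str.split(',').str[0]
</code></pre>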
|
python|pandas
| 3
|
6,326
| 43,768,426
|
numpy vectorization functions
|
<p>I want to use vectorization to do some computation on <code>numpy.ndarray</code>. Suppose I have the following vectorized function:</p>
<pre><code>import numpy as np
fun = lambda x:x[0]+x[1]
fun = np.vectorize(fun)
</code></pre>
<p>and the following <code>numpy.ndarray</code></p>
<pre><code> a = range(10)
b = range(10)
c = np.array([a,b])
</code></pre>
<p>When I apply </p>
<pre><code> result = fun(c)
</code></pre>
<p>I obtain the following error</p>
<pre><code> IndexError: invalid index to scalar variable.
</code></pre>
<p>Why is this the case and how should I fix it?</p>
|
<p><code>np.vectorize</code> feeds scalar values to your function. It iterates on the input arrays, broadcasting if needed, and feeds <code>func</code> scalars, not arrays or lists. It then collects the values in a new array of shape and dtype that it deduces.</p>
<p>For example:</p>
<pre><code>In [108]: fun = lambda x,y: x+y
...: fun = np.vectorize(fun)
In [110]: a=np.arange(10); b=np.arange(10)
In [111]: fun(a,b)
Out[111]: array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
</code></pre>
<p>It is not 'vectorize' in the sense of turning your function into fast compiled code. It's a convenience, saving you some work in setting up an iteration.</p>
<p>I'm sure your <code>fun</code> is just a example, but as written it is already 'vectorized'</p>
<pre><code>In [112]: (lambda x,y: x+y)(a,b)
Out[112]: array([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18])
</code></pre>
<p>Expressing your calculation with numpy primitives, without iteration, is the true 'vectorization'. That isn't always possible, but if you feel you must fall back on <code>np.vectorize</code> remember that</p>
<ul>
<li>it feeds scalars</li>
<li>it will iterate at Python speeds</li>
<li>use <code>otypes</code> if possible</li>
</ul>
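<p>For example, a small sketch of the last point applied to the arrays above (the <code>otypes</code> value is an assumption about the desired output type):</p>
<pre><code>import numpy as np

fun = np.vectorize(lambda x, y: x + y, otypes=[int])
c = np.array([range(10), range(10)])
# pass the two rows as separate scalar-yielding arguments
# instead of indexing into a sequence inside the lambda
result = fun(c[0], c[1])
</code></pre>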
|
python|function|numpy|vectorization
| 1
|
6,327
| 43,813,948
|
Pandas read_csv get rid of enclosing double quotes
|
<p>Here is my example:</p>
<p>I first create dataframe and save it to file</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col_1':[['a','b','s'], 23423]})
df.to_csv(r'C:\test.csv')
</code></pre>
<p>Then <code>df.col_1[0]</code> returns <code>['a','b','s']</code> a list</p>
<p>Later I read it from file:</p>
<pre><code>df_1 = pd.read_csv(r'C:\test.csv', quoting = 3, quotechar = '"')
</code></pre>
<p>Now <code>df_1['col_1'][0]</code> returns <code>"['a' 's']"</code> a string.</p>
<p>I would like to get list back. I am experimenting with different <code>read_csv</code> settings, but so far no luck</p>
|
<p>You're not going to get the list back without a bit of work</p>
<p>Use <a href="https://docs.python.org/2/library/ast.html" rel="nofollow noreferrer"><strong><code>literal_eval</code></strong></a> to convert the lists</p>
<pre><code>import ast
conv = dict(col_1=ast.literal_eval)
pd.read_csv(r'C:\test.csv', index_col=0, converters=conv).loc[0, 'col_1']
['a', 'b', 's']
</code></pre>
|
python|csv|pandas
| 6
|
6,328
| 43,817,349
|
TensorFlow: test_session and device placement
|
<p>I'm trying to use <code>tf.test.TestCase</code> and test specifically on both GPU and CPU. To this end, I'm using <code>self.test_session</code> and set <code>force_gpu</code> to either <code>True</code> or <code>False</code>. However, when running on a machine without a GPU, the behavior is different depending on whether <code>log_device_placement</code> is set to <code>True</code>.</p>
<pre><code>with self.test_session(force_gpu=True) as sess:
<add_ops>
sess.run()
</code></pre>
<p>does not report an error, even if no GPU is present, while</p>
<pre><code>with self.test_session(force_gpu=on_gpu,
config=tf.ConfigProto(log_device_placement=True)) as sess:
<add_ops>
sess.run()
</code></pre>
<p>does. Why is logging affecting the behavior?</p>
|
<p>The relevant piece of code is here:
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/test_util.py#L385" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/framework/test_util.py#L385</a></p>
<p>It is not that enabling logging affects the behavior; it is an unintended side effect of <em>how</em> you are enabling logging.</p>
<p>To enable logging you provided a <code>tf.ConfigProto()</code> where all fields but <code>log_device_placement</code> have their default values. Boolean fields of protocol buffers default to <code>False</code> if you do not give them a value explicitly. In particular, <code>allow_soft_placement</code> defaults to <code>False</code>.</p>
<p>Normally, <code>tf.test_session()</code> sets <code>allow_soft_placement</code> to <code>True</code> if you do not set <code>force_gpu</code> to be <code>True</code>. However if you provide your own <code>ConfigProto</code>, it will not override the value of <code>allow_soft_placement</code> (unless you set <code>force_gpu</code> to <code>True</code>).</p>
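<p>So if you want the logging without changing the placement behavior, set the flag yourself, e.g. (a sketch against the TF 1.x testing API):</p>
<pre><code>config = tf.ConfigProto(log_device_placement=True,
                        allow_soft_placement=True)
with self.test_session(config=config) as sess:
    # <add_ops> and sess.run() as before
    pass
</code></pre>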
<p>(Arguably the <code>test_session()</code> API could be improved, but that's probably a discussion better had on a GitHub issue.)</p>
<p>Hope this helps!</p>
|
python-3.x|tensorflow
| 0
|
6,329
| 73,046,713
|
Replacing all elements except NaN in Python
|
<p>I want to replace all elements in array <code>X</code> except <code>nan</code> with <code>10.0</code>. Is there a one-step way to do it? I present the expected output.</p>
<pre><code>import numpy as np
from numpy import nan
X = np.array([[3.25774286e+02, 3.22008654e+02, nan, 1.85356823e+02,
1.85356823e+02, 3.22008654e+02, nan, 3.22008654e+02]])
</code></pre>
<p>The expected output is</p>
<pre><code>X = array([[10.0, 10.0, nan, 10.0,
10.0, 10.0, nan, 10.0]])
</code></pre>
|
<p>You can get an array of <code>True</code>/<code>False</code> for the <code>nan</code> locations using <code>np.isnan</code>, invert it, and use it to replace all other values with 10.0:</p>
<pre><code>indices = np.isnan(X)
X[~indices] = 10.0
print(X) # [[10. 10. nan 10. 10. 10. nan 10.]]
</code></pre>
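<p>If you prefer a literal one-step version, <code>np.where</code> does the same thing (keep the value where it is <code>nan</code>, otherwise use 10.0):</p>
<pre><code>X = np.where(np.isnan(X), X, 10.0)
</code></pre>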
|
python|numpy
| 3
|
6,330
| 72,886,437
|
Plot Dynamic graph of live dataframe of multiple columns using Python
|
<p>I'm trying to plot a dynamic graph with live data.
Below is a sample dataframe to which, every 30 seconds, I want to add data that I'm collecting from another function.</p>
<p>Dynamic DataFrame</p>
<pre><code> TIME CE_15750 CE_15800 CE_15850 CE_15900 CE_15950 PE_16000 PE_16050 PE_16100 PE_16150 PE_16200 PE_16250
0 18:54 -5146 -58520 -22849 -78925 -20435 59348 10805 5877 -22 2182 519
1 18:55 -20435 -30990 37085 108 -4634 13528 27239 64451 46905 83803 815
</code></pre>
<p>I'm trying the code below to plot the graph, but it's not giving the expected output:</p>
<pre><code>def draw_method():
while True:
plt.cla()
temp_var = live_data()
final_dataframe = pd.DataFrame(temp_var)
final_dataframe.set_index('TIME').plot(figsize = (10,5),grid=True)
plt.gcf()
plt.show()
sleep(30) #30 seconds sleep to add updated data field in dataframe
</code></pre>
<p>Also, how can I give a single color to all columns starting with <strong>CE_</strong> and a single color to those starting with <strong>PE_</strong> in the graph? The graph I'm getting has a different color for each line.</p>
<p>Thanks so much for help</p>
|
<p>Try this, and tell me how it went. It's worth noting what my changes were: I highly doubt you want a new window displayed every 30 seconds, so I took <code>plt.show()</code> out of your function, and I used <code>FuncAnimation</code>, which will call your drawing function repeatedly and update the plot for you. Very useful.</p>
<pre><code>from matplotlib.animation import FuncAnimation
def draw_method(i):  # FuncAnimation passes the frame number as an argument
plt.cla()
temp_var = live_data()
final_dataframe = pd.DataFrame(temp_var)
    final_dataframe.set_index('TIME').plot(figsize=(10,5), grid=True, ax=plt.gca())  # draw into the animated figure instead of opening a new one
ani = FuncAnimation(plt.gcf(), draw_method, interval=1000)  # interval is in milliseconds; use 30000 for a 30-second refresh
plt.tight_layout()
plt.show()
temp_var = live_data()
final_dataframe = pd.DataFrame(temp_var)
</code></pre>
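<p>As for giving all <strong>CE_</strong> columns one color and all <strong>PE_</strong> columns another: one possible approach (an untested sketch; the color names are arbitrary) is to set each line's color from its label prefix inside <code>draw_method</code>, right after the plot call. pandas labels each line with its column name, so:</p>
<pre><code>ax = plt.gca()
for line in ax.get_lines():
    if line.get_label().startswith('CE_'):
        line.set_color('tab:blue')
    elif line.get_label().startswith('PE_'):
        line.set_color('tab:red')
</code></pre>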
|
python|pandas|dataframe|matplotlib|seaborn
| 0
|
6,331
| 73,096,947
|
How to get the maximum value of a group in the past
|
<p>In group 3, I want to get the max value of group 1</p>
<p>In group 5, I want to get the max value of group 3</p>
<p>Input:</p>
<pre><code>import pandas as pd
A=[20,13,15,25,24,13,14,19,13,11]
group=[1,1,2,2,2,3,3,4,4,5]
df=pd.DataFrame({'A':A,'group':group})
</code></pre>
<p>Expected Output</p>
<pre><code> A group g_max g-2_max
0 20 1 20
1 13 1 20
2 15 2 25
3 25 2 25
4 24 2 25
5 13 3 14 20
6 14 3 14 20
7 19 4 19 25
8 13 4 19 25
9 11 5 11 14
</code></pre>
|
<p>One way to go would be as follows:</p>
<pre><code>df['g_max'] = df.groupby('group')['A'].transform('max')
df['g-2_max'] = df.group.apply(lambda x: df.g_max[df.group == x-2].max())
print(df)
A group g_max g-2_max
0 20 1 20 NaN
1 13 1 20 NaN
2 15 2 25 NaN
3 25 2 25 NaN
4 24 2 25 NaN
5 13 3 14 20.0
6 14 3 14 20.0
7 19 4 19 25.0
8 13 4 19 25.0
9 11 5 11 14.0
</code></pre>
<hr />
<p>If the values in <code>group</code> are consecutive, another way to get <code>g-2_max</code> could be:</p>
<pre><code>s = df.groupby('group')['g_max'].max().shift(2)
s.name = 'g-2_max'
df = pd.merge(df, s, on='group')
</code></pre>
|
python|pandas|dataframe
| 1
|
6,332
| 73,125,638
|
Declaring Variables inside the Tensorflow GradientTape
|
<p>I have a model with a complex loss, computed per class of the model output.</p>
<p>As you can see below, I'm computing the loss with some custom loss function and assigning the value to a variable, as tensors are immutable in TensorFlow.</p>
<pre><code>def calc_loss(y_true, y_pred):
num_classes=10
pos_loss_class = tf.Variable(tf.zeros((1, num_classes), dtype=tf.dtypes.float32))
for idx in range(num_classes):
        pos_loss = SOME_LOSS_FUNC(y_true[:, idx], y_pred[:, idx])
pos_loss_class[:, idx].assign(pos_loss)
return tf.reduce_mean(pos_loss_class)
</code></pre>
<p>My code is simple:</p>
<pre><code>with tf.GradientTape() as tape:
output = model(input, training=True)
loss = calc_loss(targets, output)
grads = tape.gradient(loss, model.trainable_weights)
</code></pre>
<p>However, I receive None for all of the model's variables. From my understanding this is caused by the gradient being blocked by the variable's stateful assignment, as described here: <a href="https://www.tensorflow.org/guide/autodiff#4_took_gradients_through_a_stateful_object" rel="nofollow noreferrer">https://www.tensorflow.org/guide/autodiff#4_took_gradients_through_a_stateful_object</a></p>
<p>Any suggestions?</p>
<p>Here is the reproducible code, which is a toy example, but demonstrates the issue.</p>
<pre><code>y_true = tf.Variable(tf.random.normal((1, 2)), name='targets')
layer = tf.keras.layers.Dense(2, activation='relu')
x = tf.constant([[1., 2., 3.]])
with tf.GradientTape() as tape:
y_pred = layer(x)
loss_class = tf.Variable(tf.zeros((1,2)), dtype=tf.float32)
for idx in range(2):
loss = tf.abs(y_true[:, idx] - y_pred[:, idx])
loss_class[:, idx].assign(loss)
final_loss = tf.reduce_mean(loss_class)
grads = tape.gradient(final_loss, layer.trainable_weights)
</code></pre>
|
<p>My current guess is that the assign method blocks the gradient, as explained in the TensorFlow page you linked. Instead, try to use just a plain list:</p>
<pre><code>def calc_loss(y_true, y_pred):
num_classes=10
pos_loss_class = []
for idx in range(num_classes):
        pos_loss = SOME_LOSS_FUNC(y_true[:, idx], y_pred[:, idx])
pos_loss_class.append(pos_loss)
return tf.reduce_mean(pos_loss_class)
</code></pre>
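<p>As a side note, since the toy loss here is elementwise anyway, the loop (and the variable) can be skipped entirely inside the tape; the following is equivalent for this example:</p>
<pre><code>with tf.GradientTape() as tape:
    y_pred = layer(x)
    final_loss = tf.reduce_mean(tf.abs(y_true - y_pred))
grads = tape.gradient(final_loss, layer.trainable_weights)
</code></pre>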
|
tensorflow|gradienttape
| 0
|
6,333
| 3,986,345
|
How to find the local minima of a smooth multidimensional array in NumPy efficiently?
|
<p>Say I have an array in NumPy containing evaluations of a continuous differentiable function, and I want to find the local minima. There is no noise, so every point whose value is lower than the values of all its neighbors meets my criterion for a local minimum.</p>
<p>I have the following list comprehension which works for a two-dimensional array, ignoring potential minima on the boundaries:</p>
<pre><code>import numpy as N
def local_minima(array2d):
local_minima = [ index
for index in N.ndindex(array2d.shape)
if index[0] > 0
if index[1] > 0
if index[0] < array2d.shape[0] - 1
if index[1] < array2d.shape[1] - 1
if array2d[index] < array2d[index[0] - 1, index[1] - 1]
if array2d[index] < array2d[index[0] - 1, index[1]]
if array2d[index] < array2d[index[0] - 1, index[1] + 1]
if array2d[index] < array2d[index[0], index[1] - 1]
if array2d[index] < array2d[index[0], index[1] + 1]
if array2d[index] < array2d[index[0] + 1, index[1] - 1]
if array2d[index] < array2d[index[0] + 1, index[1]]
if array2d[index] < array2d[index[0] + 1, index[1] + 1]
]
return local_minima
</code></pre>
<p>However, this is quite slow. I would also like to get this to work for any number of dimensions. For example, is there an easy way to get all the neighbors of a point in an array of any dimensions? Or am I approaching this problem the wrong way altogether? Should I be using <code>numpy.gradient()</code> instead?</p>
|
<p>The location of the local minima can be found for an array of arbitrary dimension
using <a href="https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array/3689710#3689710">Ivan</a>'s <a href="https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array/3689710#3689710">detect_peaks function</a>, with minor modifications:</p>
<pre><code>import numpy as np
import scipy.ndimage.filters as filters
import scipy.ndimage.morphology as morphology
def detect_local_minima(arr):
# https://stackoverflow.com/questions/3684484/peak-detection-in-a-2d-array/3689710#3689710
"""
    Takes an array and detects the troughs using the local minimum filter.
    Returns a boolean mask of the troughs (i.e. 1 when
    the pixel's value is the neighborhood minimum, 0 otherwise)
"""
    # define a connected neighborhood
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.morphology.html#generate_binary_structure
neighborhood = morphology.generate_binary_structure(len(arr.shape),2)
# apply the local minimum filter; all locations of minimum value
# in their neighborhood are set to 1
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.filters.html#minimum_filter
local_min = (filters.minimum_filter(arr, footprint=neighborhood)==arr)
# local_min is a mask that contains the peaks we are
# looking for, but also the background.
# In order to isolate the peaks we must remove the background from the mask.
#
# we create the mask of the background
background = (arr==0)
#
# a little technicality: we must erode the background in order to
# successfully subtract it from local_min, otherwise a line will
# appear along the background border (artifact of the local minimum filter)
# http://www.scipy.org/doc/api_docs/SciPy.ndimage.morphology.html#binary_erosion
eroded_background = morphology.binary_erosion(
background, structure=neighborhood, border_value=1)
#
# we obtain the final mask, containing only peaks,
# by removing the background from the local_min mask
detected_minima = local_min ^ eroded_background
return np.where(detected_minima)
</code></pre>
<p>which you can use like this:</p>
<pre><code>arr=np.array([[[0,0,0,-1],[0,0,0,0],[0,0,0,0],[0,0,0,0],[-1,0,0,0]],
[[0,0,0,0],[0,-1,0,0],[0,0,0,0],[0,0,0,-1],[0,0,0,0]]])
local_minima_locations = detect_local_minima(arr)
print(arr)
# [[[ 0 0 0 -1]
# [ 0 0 0 0]
# [ 0 0 0 0]
# [ 0 0 0 0]
# [-1 0 0 0]]
# [[ 0 0 0 0]
# [ 0 -1 0 0]
# [ 0 0 0 0]
# [ 0 0 0 -1]
# [ 0 0 0 0]]]
</code></pre>
<p>This says the minima occur at indices [0,0,3], [0,4,0], [1,1,1] and [1,3,3]:</p>
<pre><code>print(local_minima_locations)
# (array([0, 0, 1, 1]), array([0, 4, 1, 3]), array([3, 0, 1, 3]))
print(arr[local_minima_locations])
# [-1 -1 -1 -1]
</code></pre>
|
python|numpy|discrete-mathematics|mathematical-optimization
| 20
|
6,334
| 70,433,762
|
Filter a cell Dataframe by cell based on a dynamic threshold
|
<p>I hope you are doing very well and have a good end of the year. First of all, excuse me for my English as I am not a native speaker.</p>
<p>My problem: given a DataFrame in Python (for example 30 rows and 6 columns), I am trying to filter cell by cell based on the average of the values in each row (for example: if a value is lower than the average of its row, I keep it, otherwise I replace it with 0). What makes this difficult for me is that the threshold is dynamic, so unfortunately I cannot apply the applymap method which I used in other cases.</p>
<pre class="lang-py prettyprint-override"><code> Data = {
'2021' : [12, 12, 14],
'2022' : [10, 20, 25],
'2023' : [100, 10, 35]}
df = pd.DataFrame.from_dict(Data, orient='index')
df['mean'] = df.mean(axis=1)
</code></pre>
<p>In this case I want to replace 14 in the first row, 20 and 25 in the second, and 100 in the last one, because they are higher than the average of the values of their rows.</p>
|
<p>If you need to cap values that exceed the row mean (<a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mean.html" rel="nofollow noreferrer"><code>DataFrame.mean</code></a>) at that mean, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.clip.html" rel="nofollow noreferrer"><code>DataFrame.clip</code></a>:</p>
<pre><code>df1 = df.clip(upper=df.mean(axis=1), axis=0)
print (df1)
0 1 2
2021 12.000000 12.000000 12.666667
2022 10.000000 18.333333 18.333333
2023 48.333333 10.000000 35.000000
</code></pre>
<p>If you need to replace them with <code>0</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html" rel="nofollow noreferrer"><code>DataFrame.mask</code></a>:</p>
<pre><code>df2 = df.mask(df.gt(df.mean(axis=1), axis=0), 0)
print (df2)
0 1 2
2021 12 12 0
2022 10 0 0
2023 0 10 35
</code></pre>
|
python|pandas|dataframe
| 1
|
6,335
| 70,561,070
|
Dataframe is showing as nill
|
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
import requests

baseurl = "https://www.amazon.com/"
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36", "Accept-language":"en-US, en;q=0.5'}

for x in range(1,75):
    main_page = requests.get(f'https://www.amazon.in/books-for-1-year-old/s?k=books+for+1+year+old&i=stripbooks&rh=n%3A1318073031%2Cp_n_age_range%3A1318384031&page={x}', headers = headers)
    soup = BeautifulSoup(main_page.content, 'lxml')
    booklist = soup.find_all('div', attrs={'class':'a-section a-spacing-none'})
    # print(booklist)
</code></pre>
<pre><code>for book in booklist:
reference = book.find_all('a',attrs = {'class': 'a-link-normal s-no-outline'}, href=True)
for item in reference:
link = baseurl+item['href']
#print(link)
webpage = requests.get(link, headers = headers)
soup2 = BeautifulSoup(webpage.content, "lxml")
title_parent = soup2.find('span', attrs = {'id': 'productTitle'})
if title_parent is not None:
title = title_parent.text
print(link)
print(title)
desc_parent1 = soup2.find('div', attrs = {'id': 'bookDescription_feature_div'})
desc_parent2 = soup2.find('div', attrs = {'id': 'iframeContent'})
if desc_parent1 is not None:
desc = desc_parent1.find('div').text
elif desc_parent1 is not None:
desc = desc_parent2.find('div').text
print(desc)
testlink = 'https://www.amazon.in/Wonder-House-Books/dp/9388369882/ref=sr_1_1_sspa?keywords=books+for+1+year+old&qid=1637501815&refinements=p_n_age_range%3A1318384031&s=books&sr=1-1-spons&psc=1&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUFUVDk0RFk2WEpFTTAmZW5jcnlwdGVkSWQ9QTA2MTk3MDg3WUpXR0E2RzJXSCZlbmNyeXB0ZWRBZElkPUEwNDY1NzE5M0xRQzE4TUhYUkQwSiZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU='
for book in booklist:
r = requests.get(testlink, headers=headers)
soup = BeautifulSoup(r.content, 'lxml')
details = soup.find('ul', attrs = {'class':'a-unordered-list a-nostyle a-vertical a-spacing-none detail-bullet-list'}).text.strip()
print(details)
book_data = {
'links': ['link'],
'title':['title'],
'description':['desc_parent1'],
'description2' : ['desc_parent2'] ,
'deatils': ['details'],
}
print(book_data)
df = pd.DataFrame(book_data)
print(df.head())
</code></pre>
<p>When I execute <code>print(book_data)</code>, I am getting the dictionary data as expected, but when it is transformed into a dataframe with pandas it shows null. Can someone help me out with this?</p>
|
<p>The scraping code seems to be working fine; the only problem is with the <code>book_data</code> dictionary variable. The variables link, title, etc. do not work the way you think they do, which is why you are getting a null dataframe: the dictionary only contains the literal strings, so the scraped values never make it in.</p>
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
import requests
baseurl = "https://www.amazon.com/"
headers = {'User-Agent':'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36", "Accept-language":"en-US, en;q=0.5'}
for x in range(1,75):
main_page = requests.get(f'https://www.amazon.in/books-for-1-year-old/s?k=books+for+1+year+old&i=stripbooks&rh=n%3A1318073031%2Cp_n_age_range%3A1318384031&page={x}', headers = headers)
soup = BeautifulSoup(main_page.content, 'lxml')
booklist = soup.find_all('div', attrs={'class':'a-section a-spacing-none'})
# print(booklist)
df = pd.DataFrame(columns=['links', 'title', 'description'])
for book in booklist:
reference = book.find_all('a',attrs = {'class': 'a-link-normal s-no-outline'}, href=True)
for item in reference:
link = baseurl+item['href']
# print(link)
webpage = requests.get(link, headers = headers)
soup2 = BeautifulSoup(webpage.content, "lxml")
title_parent = soup2.find('span', attrs = {'id': 'productTitle'})
if title_parent is not None:
title = title_parent.text
# print(link)
# print(title)
desc_parent1 = soup2.find('div', attrs = {'id': 'bookDescription_feature_div'})
desc_parent2 = soup2.find('div', attrs = {'id': 'iframeContent'})
if desc_parent1 is not None:
desc = desc_parent1.find('div').text
elif desc_parent1 is not None:
desc = desc_parent2.find('div').text
# print(desc)
df.loc[len(df.index)] = [link, title, desc]
print(df.head())
</code></pre>
|
python|pandas|web-scraping|beautifulsoup
| 0
|
6,336
| 30,418,489
|
How do I plug distance data into scipy's agglomerative clustering methods?
|
<p>So, I have a set of texts I'd like to do some clustering analysis on. I've computed the <a href="http://en.wikipedia.org/wiki/Normalized_compression_distance" rel="nofollow">Normalized Compression Distance</a> between every pair of texts, and now I basically have a complete graph with weighted edges that looks something like this:</p>
<pre><code>text1, text2, 0.539
text2, text3, 0.675
</code></pre>
<p>I'm having tremendous difficulty figuring out the best way to plug this data into scipy's hierarchical clustering methods. I can probably convert the distance data into a table like the one on <a href="http://home.deib.polimi.it/matteucc/Clustering/tutorial_html/hierarchical.html" rel="nofollow">this page</a>. How can I format this data so that it can easily be plugged into scipy's HAC code?</p>
|
<p>You're on the right track with converting the data into a table like the one on the linked page (a redundant distance matrix). According to the documentation, you should be able to pass that directly into <code>scipy.cluster.hierarchy.linkage</code> or a related function, such as <code>scipy.cluster.hierarchy.single</code> or <code>scipy.cluster.hierarchy.complete</code>. The related functions explicitly specify how distance between clusters should be calculated. <code>scipy.cluster.hierarchy.linkage</code> lets you specify whichever method you want, but defaults to single link (i.e. the distance between two clusters is the distance between their closest points). All of these methods will return a multidimensional array representing the agglomerative clustering. You can then use the rest of the <code>scipy.cluster.hierarchy</code> module to perform various actions on this clustering, such as visualizing or flattening it.</p>
<p>However, there's a catch. As of the time <a href="https://stackoverflow.com/questions/18952587/use-distance-matrix-in-scipy-cluster-hierarchy-linkage">this question</a> was written, you couldn't actually use a redundant distance matrix, despite the fact that the documentation says you can. Based on the fact that the <a href="https://github.com/scipy/scipy/issues/2614" rel="nofollow noreferrer">github issue</a> is still open, I don't think this has been resolved yet. As pointed out in the answers to the linked question, you can get around this issue by passing the complete distance matrix into the <code>scipy.spatial.distance.squareform</code> function, which will convert it into the format which is actually accepted (a flat array containing the upper-triangular portion of the distance matrix, called a condensed distance matrix). You can then pass the result to one of the <code>scipy.cluster.hierarchy</code> functions.</p>
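<p>A minimal sketch of that workaround (the distance values are made up, extending the two pairs in the question to a full symmetric matrix):</p>
<pre><code>import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster import hierarchy

# redundant distance matrix for text1..text3: symmetric with a zero diagonal
dist = np.array([[0.000, 0.539, 0.700],
                 [0.539, 0.000, 0.675],
                 [0.700, 0.675, 0.000]])
condensed = squareform(dist)             # flat upper-triangular form
Z = hierarchy.linkage(condensed, method='single')
hierarchy.dendrogram(Z)                  # optional: visualize the clustering
</code></pre>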
|
numpy|machine-learning|scipy|hierarchical-clustering
| 1
|
6,337
| 30,681,446
|
Python Pandas Dataframe pull column value/index down by one
|
<p>I am using a pandas DataFrame and I would like to pull one column's values/index down by one, so the resulting DataFrame length will be one less. Just like in my example image:</p>
<p><img src="https://i.stack.imgur.com/Xts9L.png" alt="DataFrame before -> after"></p>
<p>The new DataFrame should be <code>id</code> 2-5, but of course re-index after the manipulation to 1-4. There are more than just <code>name</code> and <code>place</code> rows.</p>
<p>How can I quickly manipulate the DataFrame like this?</p>
<p>Thank you very much.</p>
|
<p>You can <a href="http://pandas.pydata.org/pandas-docs/version/0.16.1/generated/pandas.Series.shift.html#pandas.Series.shift" rel="nofollow"><code>shift</code></a> the name column and then take a slice using <a href="http://pandas.pydata.org/pandas-docs/version/0.16.1/generated/pandas.DataFrame.iloc.html#pandas.DataFrame.iloc" rel="nofollow"><code>iloc</code></a>:</p>
<pre><code>In [55]:
df = pd.DataFrame({'id':np.arange(1,6), 'name':['john', 'bla', 'tim','walter','john'], 'place':['new york','miami','paris','rome','sydney']})
df
Out[55]:
id name place
0 1 john new york
1 2 bla miami
2 3 tim paris
3 4 walter rome
4 5 john sydney
In [56]:
df['name'] = df['name'].shift(-1)
df = df.iloc[:-1]
df
Out[56]:
id name place
0 1 bla new york
1 2 tim miami
2 3 walter paris
3 4 john rome
</code></pre>
<p>If your 'id' column is your index the above still works:</p>
<pre><code>In [62]:
df = pd.DataFrame({'name':['john', 'bla', 'tim','walter','john'], 'place':['new york','miami','paris','rome','sydney']},index=np.arange(1,6))
df.index.name = 'id'
df
Out[62]:
name place
id
1 john new york
2 bla miami
3 tim paris
4 walter rome
5 john sydney
In [63]:
df['name'] = df['name'].shift(-1)
df = df.iloc[:-1]
df
Out[63]:
name place
id
1 bla new york
2 tim miami
3 walter paris
4 john rome
</code></pre>
|
python|python-2.7|pandas
| 2
|
6,338
| 30,620,323
|
Merge two files in Python PANDAS?
|
<p>I have two files from which I need to fetch information for <code>data analysis</code>. I am using <strong><code>Python Pandas</code></strong> for this. Any help on how to do this will be appreciated.</p>
<p>I already know how to merge 2 files using plain Python - I am looking to achieve this in <code>PANDAS</code> specifically.</p>
<p>Once the 2 files are merged I need to get some analytical data out of the result. Both files have the same structure of data in <code>CSV</code> format.</p>
|
<p>I would suggest reading the csv files into dataframes and concatenating them this way:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

frames = [pd.read_csv('f1.csv'), pd.read_csv('f2.csv')]
result = pd.concat(frames, ignore_index=True)
</code></pre>
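<p>If you later have more than two files, the same pattern generalizes with <code>glob</code> (the path pattern here is an assumption):</p>
<pre class="lang-py prettyprint-override"><code>import glob
import pandas as pd

frames = [pd.read_csv(f) for f in glob.glob('*.csv')]
result = pd.concat(frames, ignore_index=True)
</code></pre>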
|
python|pandas|analytics
| 6
|
6,339
| 26,678,467
|
Export a Pandas dataframe as a table image
|
<p>Is it possible to export a Pandas dataframe as an image file? Something like <code>df.to_png()</code> or <code>df.to_table().savefig('table.png')</code>.</p>
<p>At the moment I export a dataframe using <code>df.to_csv()</code>. I then open this csv file in Excel to make the data look pretty and then copy / paste the Excel table into Powerpoint as an image. I see matplotlib has a <code>.table()</code> method, but I'm having trouble getting it to work with my df.</p>
<p>The data frame I'm using has 5 columns and 5 rows and each 'cell' is a number.</p>
|
<p>With some additional code, you can even make output look decent:</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import six
df = pd.DataFrame()
df['date'] = ['2016-04-01', '2016-04-02', '2016-04-03']
df['calories'] = [2200, 2100, 1500]
df['sleep hours'] = [2200, 2100, 1500]
df['gym'] = [True, False, False]
def render_mpl_table(data, col_width=3.0, row_height=0.625, font_size=14,
header_color='#40466e', row_colors=['#f1f1f2', 'w'], edge_color='w',
bbox=[0, 0, 1, 1], header_columns=0,
ax=None, **kwargs):
if ax is None:
size = (np.array(data.shape[::-1]) + np.array([0, 1])) * np.array([col_width, row_height])
fig, ax = plt.subplots(figsize=size)
ax.axis('off')
mpl_table = ax.table(cellText=data.values, bbox=bbox, colLabels=data.columns, **kwargs)
mpl_table.auto_set_font_size(False)
mpl_table.set_fontsize(font_size)
for k, cell in six.iteritems(mpl_table._cells):
cell.set_edgecolor(edge_color)
if k[0] == 0 or k[1] < header_columns:
cell.set_text_props(weight='bold', color='w')
cell.set_facecolor(header_color)
else:
cell.set_facecolor(row_colors[k[0]%len(row_colors) ])
return ax
render_mpl_table(df, header_columns=0, col_width=2.0)
</code></pre>
<p><a href="https://i.stack.imgur.com/N7VZE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N7VZE.png" alt="enter image description here"></a></p>
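<p>To get the PNG file the question asks for, grab the figure from the returned axes and save it:</p>
<pre><code>ax = render_mpl_table(df, header_columns=0, col_width=2.0)
ax.get_figure().savefig('table.png')
</code></pre>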
|
python|pandas
| 60
|
6,340
| 39,220,504
|
Applying an operation on multiple columns with a fixed column in pandas
|
<p>I have a dataframe as shown below. The last column shows the sum of values from all the columns i.e. <code>A</code>,<code>B</code>,<code>D</code>,<code>K</code> and <code>T</code>. Please note some of the columns have <code>NaN</code> as well.</p>
<pre><code>word1,A,B,D,K,T,sum
na,,63.0,,,870.0,933.0
sva,,1.0,,3.0,695.0,699.0
a,,102.0,,1.0,493.0,596.0
sa,2.0,487.0,,2.0,15.0,506.0
su,1.0,44.0,,136.0,214.0,395.0
waw,1.0,9.0,,34.0,296.0,340.0
</code></pre>
<p>How can I calculate the entropy for each row? i.e. I should find something like following</p>
<pre><code>df['A']/df['sum']*log(df['A']/df['sum']) + df['B']/df['sum']*log(df['B']/df['sum']) + ...... + df['T']/df['sum']*log(df['T']/df['sum'])
</code></pre>
<p>The condition is that whenever the value inside the <code>log</code> becomes <code>zero</code> or <code>NaN</code>, the whole value should be treated as zero (by definition, the log will return an error as log 0 is undefined).</p>
<p>I am aware of using a lambda operation applied to individual columns. Here I cannot think of a pure pandas solution where the fixed column <code>sum</code> is applied across the different columns <code>A</code>,<code>B</code>,<code>D</code> etc. I can, however, think of a simple loop-wise iteration over the CSV file with hard-coded column values.</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ix.html" rel="nofollow"><code>ix</code></a> for selecting columns from <code>A</code> to <code>T</code>, then divide by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.div.html" rel="nofollow"><code>div</code></a> with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.log.html" rel="nofollow"><code>numpy.log</code></a>. Last use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sum.html" rel="nofollow"><code>sum</code></a>:</p>
<pre><code>print (df['A']/df['sum']*np.log(df['A']/df['sum']))
0 NaN
1 NaN
2 NaN
3 -0.021871
4 -0.015136
5 -0.017144
dtype: float64
print (df.ix[:,'A':'T'].div(df['sum'],axis=0)*np.log(df.ix[:,'A':'T'].div(df['sum'],axis=0)))
A B D K T
0 NaN -0.181996 NaN NaN -0.065191
1 NaN -0.009370 NaN -0.023395 -0.005706
2 NaN -0.302110 NaN -0.010722 -0.156942
3 -0.021871 -0.036835 NaN -0.021871 -0.104303
4 -0.015136 -0.244472 NaN -0.367107 -0.332057
5 -0.017144 -0.096134 NaN -0.230259 -0.120651
print((df.ix[:,'A':'T'].div(df['sum'],axis=0)*np.log(df.ix[:,'A':'T'].div(df['sum'],axis=0)))
.sum(axis=1))
0 -0.247187
1 -0.038471
2 -0.469774
3 -0.184881
4 -0.958774
5 -0.464188
dtype: float64
</code></pre>
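<p>If you want the zero/<code>NaN</code> condition handled explicitly (pandas' <code>sum</code> already skips <code>NaN</code>, but this makes it visible, keeping the answer's <code>ix</code> style):</p>
<pre><code>p = df.ix[:,'A':'T'].div(df['sum'], axis=0)
print((p * np.log(p)).fillna(0).sum(axis=1))
</code></pre>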
|
python|pandas|dataframe|sum|multiple-columns
| 5
|
6,341
| 19,521,493
|
numpy.view gives valueerror
|
<p>Ran into this in the context of libtiff saving a file, but now I'm just confused. Can anyone tell me why these two are not equivalent?</p>
<pre><code>ar1 = zeros((1000,1000),dtype=uint16)
ar1 = ar1.view(dtype=uint8) # works
ar2 = zeros((1000,2000),dtype=uint16)
ar2 = ar2[:,1000:]
ar2 = ar2.view(dtype=uint8) # ValueError: new type not compatible with array.
</code></pre>
<p>Edit:
so this also works?</p>
<pre><code>ar2 = zeros((1000,2000),dtype=uint16)
ar2 = array(ar2[:,1000:])
ar2 = ar2.view(dtype=uint8)
</code></pre>
|
<h2>Summary</h2>
<p>In a nutshell, just move the view before the slicing.</p>
<p>Instead of:</p>
<pre><code>ar2 = zeros((1000,2000),dtype=uint16)
ar2 = ar2[:,1000:]
ar2 = ar2.view(dtype=uint8)
</code></pre>
<p>Do:</p>
<pre><code>ar2 = zeros((1000,2000),dtype=uint16)
ar2 = ar2.view(dtype=uint8) # ar2 is now a 1000x4000 array...
ar2 = ar2[:,2000:] # Note the 2000 instead of 1000!
</code></pre>
<p>What's happening is that the sliced array isn't contiguous (as @Craig noted) and <code>view</code> errs on the conservative side and doesn't try to re-interpret non-contiguous memory buffers. (It happens to be possible in this exact case, but in some cases it would result in a non-evenly-strided array, which numpy doesn't allow.)</p>
<hr>
<p>If you're not very familiar with <code>numpy</code>, it's possible that you're misunderstanding <code>view</code>, and you actually want <code>astype</code> instead.</p>
<hr>
<h2>What does <code>view</code> do?</h2>
<p>First off, let's take a detailed look at what <code>view</code> does. In this case, it re-interprets the memory buffer of a numpy array as a new datatype, if possible. That means that the <em>number of elements in the array</em> will often change when you use view. (You can also use it to view the array as a different subclass of <code>ndarray</code>, but we'll skip that part for now.)</p>
<p>You may already be aware of the following (your problem is a bit more subtle), but if not, here's an explanation.</p>
<p>As an example:</p>
<pre><code>In [1]: import numpy as np
In [2]: x = np.zeros(2, dtype=np.uint16)
In [3]: x
Out[3]: array([0, 0], dtype=uint16)
In [4]: x.view(np.uint8)
Out[4]: array([0, 0, 0, 0], dtype=uint8)
In [5]: x.view(np.uint32)
Out[5]: array([0], dtype=uint32)
</code></pre>
<p>If you want to make a copy of the array with the new datatype instead, use <code>astype</code>:</p>
<pre><code>In [6]: x
Out[6]: array([0, 0], dtype=uint16)
In [7]: x.astype(np.uint8)
Out[7]: array([0, 0], dtype=uint8)
In [8]: x.astype(np.uint32)
Out[8]: array([0, 0], dtype=uint32)
</code></pre>
<hr>
<p>Now let's take a look at what happens with when viewing a 2D array.</p>
<pre><code>In [9]: y = np.arange(4, dtype=np.uint16).reshape(2, 2)
In [10]: y
Out[10]:
array([[0, 1],
[2, 3]], dtype=uint16)
In [11]: y.view(np.uint8)
Out[12]:
array([[0, 0, 1, 0],
[2, 0, 3, 0]], dtype=uint8)
</code></pre>
<p>Notice that the shape of the array has changed, and that the changes have happened along the last axis (in this case, extra columns have been added).</p>
<p>At first glance it may appear that extra zeros have been added. It's <em>not</em> that extra zeros are being inserted, it's that the <code>uint16</code> representation of <code>2</code> is equivalent to two <code>uint8</code>s, one with a value of <code>2</code> and one with a value of <code>0</code>. Therefore, any <code>uint16</code> under 255 will result in the value and a zero, while any value over that will result in two smaller <code>uint8</code>s. As an example:</p>
<pre><code>In [13]: y * 100
Out[14]:
array([[ 0, 100],
[200, 300]], dtype=uint16)
In [15]: (y * 100).view(np.uint8)
Out[15]:
array([[ 0, 0, 100, 0],
[200, 0, 44, 1]], dtype=uint8)
</code></pre>
<hr>
<h2>What's happening behind the scenes</h2>
<p>Numpy arrays consist of a "raw" memory buffer that's interpreted through a shape, a dtype, and strides (and an offset, but let's ignore that for now). For more detail, there are several good overviews: <a href="http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html" rel="nofollow">the official documentation</a>, <a href="http://csc.ucdavis.edu/~chaos/courses/nlp/Software/NumPyBook.pdf" rel="nofollow">the numpy book</a>, or <a href="http://scipy-lectures.github.io/advanced/advanced_numpy/" rel="nofollow">scipy-lectures</a>.</p>
<p>This allows numpy to be very memory efficient and "slice and dice" the underlying memory buffer in many different ways without making a copy. </p>
<p>Strides tell numpy how many bytes to jump within the memory buffer to go one increment along a particular axis. </p>
<p>For example:</p>
<pre><code>In [17]: y
Out[17]:
array([[0, 1],
[2, 3]], dtype=uint16)
In [18]: y.strides
Out[18]: (4, 2)
</code></pre>
<p>So, to go one row deeper in the array, numpy needs to step forward 4 bytes in the memory buffer, while to go one column farther in the array, numpy needs to step 2 bytes. Transposing the array just amounts to reversing the strides (and shape, but in this case, <code>y</code> is 2x2):</p>
<pre><code>In [19]: y.T.strides
Out[19]: (2, 4)
</code></pre>
<p>When we view the array as <code>uint8</code>, the strides change. We still step forward 4 bytes per row, but only one byte per column:</p>
<pre><code>In [20]: y.view(np.uint8).strides
Out[20]: (4, 1)
</code></pre>
<p>However, numpy arrays have to have one stride length per dimension. This is what "evenly-strided" means. In other words, to move forward one row/column/whatever, numpy needs to be able to step the same amount through the underlying memory buffer each time. In other words, there's no way to tell numpy to step different amounts for each row/column/whatever.</p>
<p>For that reason, <code>view</code> takes a very conservative route. If the array isn't contiguous, and the view would change the shape and strides of the array, it doesn't try to handle it. As @Craig noted, it's because the slice of <code>y</code> isn't contiguous that <code>view</code> isn't working.</p>
<p>There are plenty of cases (yours is one) where the resulting array would be valid, but the <code>view</code> method doesn't try to be too smart about it. </p>
<p>To really play around with what's possible, you can use <code>numpy.lib.stride_tricks.as_strided</code> or directly use the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.interface.html#__array_interface__" rel="nofollow"><code>__array_interface__</code></a>. It's a good learning tool to experiment with it, but you have to really understand what you're doing to use it effectively.</p>
<p>Hopefully that helps a bit, anyway! Sorry for the long-winded answer!</p>
|
python|numpy
| 3
|
6,342
| 19,820,280
|
Offset date for a Pandas DataFrame date index
|
<p>Given a Pandas dataframe created as follows:</p>
<pre><code>dates = pd.date_range('20130101',periods=6)
df = pd.DataFrame(np.random.randn(6),index=dates,columns=list('A'))
A
2013-01-01 0.847528
2013-01-02 0.204139
2013-01-03 0.888526
2013-01-04 0.769775
2013-01-05 0.175165
2013-01-06 -1.564826
</code></pre>
<p>I want to add 15 days to the index.
This does not work:</p>
<pre><code>#from pandas.tseries.offsets import *
df.index+relativedelta(days=15)
#df.index + DateOffset(days=5)
TypeError: relativedelta(days=+15)
</code></pre>
<p>I seem to be incapable of doing anything right with indexes....</p>
|
<p>you can use <a href="http://pandas.pydata.org/pandas-docs/dev/timeseries.html#dateoffset-objects">DateOffset</a>:</p>
<pre><code>>>> df = pd.DataFrame(np.random.randn(6),index=dates,columns=list('A'))
>>> df.index = df.index + pd.DateOffset(days=15)
>>> df
A
2013-01-16 0.015282
2013-01-17 1.214255
2013-01-18 1.023534
2013-01-19 1.355001
2013-01-20 1.289749
2013-01-21 1.484291
</code></pre>
|
python|pandas
| 24
|
6,343
| 29,172,934
|
Difference in shapes of numpy array
|
<p>For the array:</p>
<pre><code>import numpy as np
arr2d = np.array([[1,2,3],[4,5,6],[7,8,9]])
>>> arr2d
array([[1, 2, 3],
[4, 5, 6],
[7, 8, 9]])
>>> arr2d[2].shape
(3,)
>>> arr2d[2:,:].shape
(1, 3)
</code></pre>
<p>Why do I get different shapes when both statements return the 3rd row? and shouldn't the result be (1,3) in both cases since we are returning a single row with 3 columns?</p>
|
<blockquote>
<p>Why do I get different shapes when both statements return the 3rd row?</p>
</blockquote>
<p>Because with the first operation you are indexing the rows, and selecting just ONE element, which -as mentioned in the <a href="http://docs.scipy.org/doc/numpy/user/basics.indexing.html#single-element-indexing" rel="nofollow">single-element indexing</a> paragraph of a multidimensional array- returns an array with a lower dimension (a 1D array). </p>
<p>In the 2nd example, you are using a <a href="http://wiki.scipy.org/Cookbook/Indexing#head-62e252898cdeb3a531ea2d76e5fbbafaeec11aca" rel="nofollow"><strong>slice</strong></a>, as evident from the colon. Slicing operations do not reduce the dimensions of an array. This is also logical, because imagine the array had not 3 but 4 rows. Then <code>arr2d[2:,:].shape</code> would be <code>(2,3)</code>. The developers of numpy made slicing operations consistent, and therefore slices never reduce the number of dimensions of the array.</p>
<blockquote>
<p>and shouldn't the result be (1,3) in both cases since we are returning a single row with 3 columns?</p>
</blockquote>
<p>No, just because of the previous reasons.</p>
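<p>A quick way to see the consistency argument in action:</p>
<pre><code>import numpy as np

a = np.arange(12).reshape(4, 3)
print(a[2].shape)       # (3,)   single-element indexing drops a dimension
print(a[2:3, :].shape)  # (1, 3) slicing keeps it
print(a[2:, :].shape)   # (2, 3) and generalizes when more rows follow
</code></pre>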
|
python|numpy
| 5
|
6,344
| 23,695,851
|
python - repeating numpy array without replicating data
|
<p>This question has been asked before, but the solution only works for 1D/2D arrays, and I need a more general answer.</p>
<p>How do you create a repeating array without replicating the data? This strikes me as something of general use, as it would help to vectorize python operations without the memory hit.</p>
<p>More specifically, I have a (y,x) array, which I want to tile multiple times to create a (z,y,x) array. I can do this with numpy.tile(array, (nz,1,1)), but I run out of memory. My specific case has x=1500, y=2000, z=700.</p>
|
<p>One simple trick is to use <code>np.broadcast_arrays</code> to broadcast your <code>(x, y)</code> against a <code>z</code>-long vector in the first dimension:</p>
<pre><code>import numpy as np
M = np.arange(1500*2000).reshape(1500, 2000)
z = np.zeros(700)
# broadcasting over the first dimension
_, M_broadcast = np.broadcast_arrays(z[:, None, None], M[None, ...])
print M_broadcast.shape, M_broadcast.flags.owndata
# (700, 1500, 2000), False
</code></pre>
<p>To generalize the <code>stride_tricks</code> method given for a 1D array in <a href="https://stackoverflow.com/a/5568169/1461210">this answer</a>, you just need to include the shape and stride length for each dimension of your output array:</p>
<pre><code>M_strided = np.lib.stride_tricks.as_strided(
M, # input array
(700, M.shape[0], M.shape[1]), # output dimensions
(0, M.strides[0], M.strides[1]) # stride length in bytes
)
</code></pre>
|
python|numpy|memory|large-data
| 5
|
6,345
| 29,761,266
|
Should Image.fromarray(pixels) and np.array(img) leave the data unchanged?
|
<p>I am trying to generate PNGs using the Image.fromarray() function from PIL but not getting the expected images.</p>
<pre><code>arr=np.random.randint(0,256,5*5)
arr.resize((5,5))
print arr
</code></pre>
<p>gives</p>
<pre><code>[[255 217 249 221 88]
[ 28 207 85 219 85]
[ 90 145 155 152 98]
[196 121 228 101 92]
[ 50 159 66 130 8]]
</code></pre>
<p>then</p>
<pre><code>img=Image.fromarray(arr,'L')
new_arr=np.array(img)
</code></pre>
<p>I would expect new_arr to be the same as arr but</p>
<pre><code>print new_arr
[[122 0 0 0 0]
[ 0 0 0 61 0]
[ 0 0 0 0 0]
[ 0 168 0 0 0]
[ 0 0 0 0 221]]
</code></pre>
|
<p>The problem is that <code>np.random.randint()</code> returns signed int, while the <code>'L'</code> option to <code>Image.fromarray()</code> tells it to interpret the array as 8-bit <strong>unsigned</strong> int (<a href="https://pillow.readthedocs.org/en/latest/handbook/concepts.html#concept-modes" rel="nofollow">PIL modes</a>). If you explicitly cast it to <code>uint8</code> it works:</p>
<pre><code>arr=np.random.randint(0,256,5*5)
arr.resize((5,5))
print arr
</code></pre>
<p>output:</p>
<pre><code>[[255 217 249 221 88]
[ 28 207 85 219 85]
[ 90 145 155 152 98]
[196 121 228 101 92]
[ 50 159 66 130 8]]
</code></pre>
<p>then</p>
<pre><code>img=Image.fromarray(arr.astype('uint8'),'L') # cast to uint8
new_arr=np.array(img)
print new_arr
</code></pre>
<p>output:</p>
<pre><code>[[255 217 249 221 88]
[ 28 207 85 219 85]
[ 90 145 155 152 98]
[196 121 228 101 92]
[ 50 159 66 130 8]]
</code></pre>
|
python|numpy|python-imaging-library
| 1
|
6,346
| 29,704,455
|
NumPy: finding N largest elements in a matrix
|
<p>Edited since my last question was a duplicate, but I'm struggling with this as well. I'm currently working with a matrix and can easily find the largest element with</p>
<pre><code>M[M != 1].max()
</code></pre>
<p>However, I'm interested in getting the N largest elements and can't find an easy way to do this with matrices. Is there an efficient solution?</p>
|
<p>Yes, there is a <code>where</code> function that takes a condition as one of its parameters; for example, to locate the positions of the smallest non-zero element:</p>
<pre><code>import numpy

minimum = M[M != 0].min()
print(numpy.where(M == minimum))
</code></pre>
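<p>For the N largest elements specifically, a sketch using <code>np.partition</code> (N = 3 is an illustrative value, not from the question):</p>
<pre><code>import numpy as np

N = 3
flat = np.asarray(M[M != 1]).ravel()     # same filter as in the question
largest = np.partition(flat, -N)[-N:]    # the N largest values, unordered
largest_sorted = np.sort(largest)[::-1]  # optionally sort them descending
</code></pre>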
|
python|numpy|matrix
| 0
|
6,347
| 62,274,904
|
How to generate an nd-array where values are greater than 1?
|
<p>Is it possible to generate random numbers in an nd-array such that the elements in the array are between 1 and 2? (The interval should be between 1 and some number greater than 1.) This is what I did.</p>
<pre><code>input_array = np.random.rand(3,10,10)
</code></pre>
<p>But the values in the nd-array are between 0 and 1.</p>
<p>Please let me know if that is possible. Any help and suggestions will be highly appreciated.</p>
|
<p>You can try scaling:</p>
<pre><code>min_val, max_val = 1, 2
input_array = np.random.rand(3,10,10) * (max_val-min_val) + min_val
</code></pre>
<p>or use <code>uniform</code>:</p>
<pre><code>input_array = np.random.uniform(min_val, max_val, (3,10,10))
</code></pre>
|
python-3.x|numpy|numpy-ndarray
| 0
|
6,348
| 62,160,650
|
CSV file to JSON using python
|
<p>I am currently trying to convert a csv with 4 different fields to a json body for making an api call. The current csv looks like this:</p>
<pre><code>firstname, lastname, email, login
Jake, Smith, jake.smith@example.com, jake.smith@example.com
John, Appleseed, john.appleseed@example.com, john.appleseed@example.com
</code></pre>
<p>I would like the json to look like this </p>
<pre><code>{"profile": {"firstName": "Jake", "lastName": "Smith", "email": "jake.smith@example.com", "login": "jake.smith@example.com"}}
</code></pre>
<pre><code>{"profile": {"firstName": "John", "lastName": "Appleseed", "email": "john.appleseed@example.com", "login": "john.appleseed@example.com"}}
</code></pre>
|
<p>Try this, not the best solution but works:</p>
<pre><code>import json
import pandas as pd

df = pd.read_csv('test.csv')
for i in range(0, df.shape[0]):
json_data = df.loc[[i]].to_json(orient='records')
json_data = json_data.strip('[]')
x = json.loads(json_data)
j = {'profile': x}
print(json.dumps(j))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>{"profile": {"firstname": "Jake", "lastname": "Smith", "email": "jake.smith@example.com", "login": " ake.smith@example.com"}}
{"profile": {"firstname": "John", "lastname": "Appleseed", "email": "john.appleseed@example.com", "login": "john.appleseed@example.com"}}
</code></pre>
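<p>A loop-free sketch with <code>to_dict</code>; note it assumes the header really has spaces after the commas (hence <code>skipinitialspace</code>) and renames the columns to the camelCase keys shown in the desired output:</p>
<pre><code>import json
import pandas as pd

df = pd.read_csv('test.csv', skipinitialspace=True)
df = df.rename(columns={'firstname': 'firstName', 'lastname': 'lastName'})
for record in df.to_dict(orient='records'):
    print(json.dumps({'profile': record}))
</code></pre>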
|
python|json|python-3.x|pandas|csv
| 1
|
6,349
| 51,466,308
|
Indices of multiple elements in a numpy array
|
<p>I have a numpy array and a list as follows</p>
<pre><code>y=np.array([[1],[2],[1],[3],[1],[3],[2],[2]])
x=[1,2,3]
</code></pre>
<p>I would like to return a tuple of arrays each of which contains the indices of each element of x in y.
i.e.</p>
<pre><code>(array([[0,2,4]]),array([[1,6,7]]),array([[3,5]]))
</code></pre>
<p>Is it possible to do this in a vectorized fashion (without any loops)?</p>
|
<p>One solution is to use <code>map</code>:</p>
<pre><code>y = y.reshape(1,len(y))
map(lambda k: np.where(y==k)[-1], x)
[array([0, 2, 4]),
array([1, 6, 7]),
array([3, 5])]
</code></pre>
<hr>
<p>Reasonable performance. For 100000 rows,</p>
<pre><code>%timeit list(map(lambda k: np.where(y==k), x))
3.1 ms ± 113 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
</code></pre>
|
python|numpy
| 1
|
6,350
| 48,789,430
|
Obtaining index based on conditions
|
<p>From the following code:</p>
<pre><code>aps1.Status.head(10)
Out[663]:
0 OK
1 OK
2 OK
3 OK
4 OK
5 OK
6 Fail
7 OK
8 Fail
9 OK
</code></pre>
<p>How to obtain the indexes for which Status is Fail? I tried:</p>
<pre><code> print (index for index,value in enumerate(aps1.Status) if value == "Fail"])
</code></pre>
<p>But I'm getting syntax error. Thanks</p>
|
<p>Remove the stray <code>']'</code> at the end, or better, add the matching <code>'['</code> so it becomes a list comprehension and actually prints the indices rather than a generator object:</p>
<pre><code>print([index for index, value in enumerate(aps1.Status) if value == "Fail"])
</code></pre>
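<p>A more pandas-idiomatic alternative is to index the index itself with a boolean mask:</p>
<pre><code>print(aps1.index[aps1.Status == "Fail"].tolist())  # [6, 8]
</code></pre>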
|
python|pandas
| 2
|
6,351
| 48,667,218
|
How to get the N nearest entries to the median in a Pandas series?
|
<p>For a Pandas Series:</p>
<pre><code>ser = pd.Series([i**2 for i in range(9)])
print(ser)
0 0
1 1
2 4
3 9
4 16
5 25
6 36
7 49
8 64
dtype: int64
</code></pre>
<p>The median can be grabbed with <code>ser.median()</code>, which returns <code>16</code>. How can the <em>N</em> entries around the median be grabbed? Something like:</p>
<pre><code>print(ser.get_median_entries(3)) # N == 3; not real functionality
3 9
4 16
5 25
dtype: int64
</code></pre>
|
<p>You can find the abs difference between each value and the median and use <code>sort_values()</code>: </p>
<pre><code>ser[abs(ser - ser.median()).sort_values()[0:3].index]
#4 16
#3 9
#5 25
#dtype: int64
</code></pre>
<p>If you want it as a function, where <code>n</code> is an input variable:</p>
<pre><code>def get_n_closest_to_median(ser, n):
return ser[abs(ser - ser.median()).sort_values()[0:n].index]
print(get_n_closest_to_median(ser, 3))
#4 16
#3 9
#5 25
#dtype: int64
</code></pre>
<p>You will probably have to add some error checking on the bounds.</p>
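<p>On reasonably recent pandas, <code>nsmallest</code> can also do the sorting and slicing in one step, e.g.:</p>
<pre><code>ser[(ser - ser.median()).abs().nsmallest(3).index]
</code></pre>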
|
python|pandas|series|median
| 3
|
6,352
| 48,524,402
|
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()?
|
<p>I'm a noob with pandas, and recently I got that 'ValueError' when trying to modify the columns following some rules, as below:</p>
<pre><code>csv_input = pd.read_csv(fn, error_bad_lines=False)
if csv_input['ip.src'] == '192.168.1.100':
csv_input['flow_dir'] = 1
csv_input['ip.src'] = 1
csv_input['ip.dst'] = 0
else:
if csv_input['ip.dst'] == '192.168.1.100':
csv_input['flow_dir'] = 0
csv_input['ip.src'] = 0
csv_input['ip.dst'] = 1
</code></pre>
<p>I was searching about this error and I guess that it's because the 'if' statement and the '==' operator, but I don't know how to fix this.</p>
<p>Thanks!</p>
|
<p>So Andrew L's comment is correct, but I'm going to expand on it a bit for your benefit.</p>
<p>When you call, e.g.</p>
<pre><code>csv_input['ip.dst'] == '192.168.1.100'
</code></pre>
<p>What this returns is a Series, with the same index as csv_input, but all the values in that series are boolean, and represent whether the value in <code>csv_input['ip.dst']</code> for that row is equal to <code>'192.168.1.100'</code>.</p>
<p>So, when you call </p>
<pre><code> if csv_input['ip.dst'] == '192.168.1.100':
</code></pre>
<p>You're asking whether that <em>Series</em> evaluates to True or False. Hopefully that explains what it meant by <code>The truth value of a Series is ambiguous.</code>, it's a Series, it can't be boiled down to a boolean.</p>
<p>Now, what it <em>looks like</em> you're trying to do is set the values in the <code>flow_dir</code>,<code>ip.src</code> & <code>ip.dst</code> columns, based on the value in the <code>ip.src</code> column.</p>
<p>The correct way to do this is would be with <code>.loc[]</code>, something like this:</p>
<pre><code>#equivalent to first if statement
csv_input.loc[
    csv_input['ip.src'] == '192.168.1.100',
('ip.src','ip.dst','flow_dir')
] = (1,0,1)
#equivalent to second if statement
csv_input.loc[
    csv_input['ip.dst'] == '192.168.1.100',
('ip.src','ip.dst','flow_dir')
] = (0,1,0)
</code></pre>
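<p>If you have several mutually exclusive conditions, <code>np.select</code> is another option; a sketch for the <code>flow_dir</code> column only (the <code>-1</code> default is an arbitrary sentinel):</p>
<pre><code>import numpy as np

conditions = [csv_input['ip.src'] == '192.168.1.100',
              csv_input['ip.dst'] == '192.168.1.100']
csv_input['flow_dir'] = np.select(conditions, [1, 0], default=-1)
</code></pre>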
|
python|pandas|valueerror
| 0
|
6,353
| 48,751,140
|
Pandas: select rows where two columns are different
|
<p>Suppose I have a dataframe as below</p>
<pre><code>a b c
1 1 45
0 2 74
2 2 54
1 4 44
</code></pre>
<p>Now I want the rows where column a and b are not same. So the expected outpu is</p>
<pre><code>a b c
0 2 74
1 4 44
</code></pre>
<p>How can I do this?</p>
|
<p>I am a fan of readability; use <code>query</code>:</p>
<pre><code>df.query('a != b')
</code></pre>
<p>Output:</p>
<pre><code> a b c
1 0 2 74
3 1 4 44
</code></pre>
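<p>Plain boolean indexing works just as well if you prefer it:</p>
<pre><code>df[df['a'] != df['b']]
</code></pre>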
|
python|pandas
| 21
|
6,354
| 48,715,401
|
set limits to numpy polyfit
|
<p>I have two arrays with some data. In particular, the y array contains percentages that cannot exceed the value y = 100.
The y values satisfy the condition y < 100, but if I make a fit, the resulting curve exceeds y = 100, as shown in the figure below.</p>
<p>Is there any way to make a curve fit that does not exceed y = 100?</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import numpy.polynomial.polynomial as poly
x = [0.25,0.75,1.25,1.75,2.15,2.75,3.15,3.75,4.15,4.75,5.15,5.75]
y = [ 100.,100.,90.,69.23076923,47.36842105,39.13043478,
35.71428571,26.31578947,22.22222222,18.86792453,
11.76470588,9.43396226]
coefs = poly.polyfit(x, y, 3)
ffit = poly.polyval(x, coefs)
plt.plot(x, ffit)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/4zljK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4zljK.png" alt="enter image description here"></a></p>
|
<p>You can pass the <code>polyfit</code> function a list of degrees that you want to fit, which means that you can leave out certain degrees (for example the constant value). With a bit of manipulation you can get what you want.</p>
<p>Assuming that you want your fit function to reach 100 at your minimum x value (0.25), you can subtract that 100 from all y-values, subtract 0.25 from all x values and then fit a polynomial such that only the coefficients for the first, second and third degree terms are fit parameters, but not the zeroth (or constant) term. Then, after the fitting you can set that constant term to 100 and compute the new, fitted values. I adjusted your example code to illustrate what I mean:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
import numpy.polynomial.polynomial as poly
fig, ax = plt.subplots()
x = np.array([0.25,0.75,1.25,1.75,2.15,2.75,3.15,3.75,4.15,4.75,5.15,5.75])
y = np.array([ 100.,100.,90.,69.23076923,47.36842105,39.13043478,
35.71428571,26.31578947,22.22222222,18.86792453,
11.76470588,9.43396226])
x_new = np.linspace(x[0],x[-1], 100)
##the original way to fit
coefs = poly.polyfit(x, y, 3)
ffit = poly.polyval(x_new, coefs)
##the adjusted way to fit
coefs2 = poly.polyfit(x-0.25, y-100, [1,2,3])
coefs2[0] = 100
ffit2 = poly.polyval(x_new-0.25,coefs2)
ax.plot(x_new, ffit, label = 'without constraints')
ax.plot(x_new, ffit2, label = 'with constraints')
ax.plot(x, y, 'ro', label = 'data')
ax.legend()
plt.show()
</code></pre>
<p>The result looks like this:</p>
<p><a href="https://i.stack.imgur.com/ms2fk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ms2fk.png" alt="result of the above code"></a></p>
<p>Hope this helps.</p>
|
python|numpy|matplotlib|curve-fitting
| 2
|
6,355
| 71,073,257
|
How can I use a loop to show the data distribution in a dataset
|
<p>I am a beginner.<br />
How can I use a for loop to print all of them?</p>
<pre><code>['MSZoning', 'Street', 'LotShape', 'LandContour', 'Utilities',
'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2',
'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st',
'Exterior2nd', 'MasVnrType', 'ExterQual', 'ExterCond', 'Foundation',
'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2',
'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual',
'Functional', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond',
'PavedDrive', 'SaleType', 'SaleCondition']
</code></pre>
<p>Like I did here:</p>
<pre class="lang-py prettyprint-override"><code>pd.concat([df['MSZoning'].value_counts()/df.shape[0] * 100,
df3['MSZoning'].value_counts()/df3.shape[0] * 100], axis=1,
keys=['MSZoning_org','MSZoning_clean'])
</code></pre>
<p>output:-</p>
<p>![output][https://i.stack.imgur.com/trH9m.jpg]</p>
|
<pre><code>xs = ['MSZoning', 'Street', 'LotShape', 'LandContour', 'Utilities',
'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2',
'BldgType', 'HouseStyle', 'RoofStyle', 'RoofMatl', 'Exterior1st',
'Exterior2nd', 'MasVnrType', 'ExterQual', 'ExterCond', 'Foundation',
'BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2',
'Heating', 'HeatingQC', 'CentralAir', 'Electrical', 'KitchenQual',
'Functional', 'GarageType', 'GarageFinish', 'GarageQual', 'GarageCond',
'PavedDrive', 'SaleType', 'SaleCondition']
def cat_var_dist(var):
return pd.concat([
df[var].value_counts()/df.shape[0] * 100,
df3[var].value_counts()/df3.shape[0] * 100,
],
axis=1,
keys=[f'{var}_org',f'{var}_clean'],
)
for x in xs:
print(cat_var_dist(x))
</code></pre>
|
python|pandas|loops|for-loop|data-science
| 1
|
6,356
| 70,968,601
|
How to groupby year and unstack years into columns in pandas?
|
<p>I have a pandas time series <code>ser</code></p>
<pre><code>ser
>>>
date x
2018-01-01 0.912
2018-01-02 0.704
...
2021-02-01 1.285
</code></pre>
<p>and I want to take a cumulative sum by year and make each year into a column as such, and the date index should now be just dates in year (e.g. Jan 01, Jan 02,... the formatting of Month and Day doesn't matter)</p>
<pre><code>date 2018_x 2019_x 2020_x 2021_x 2022_x
Jan-01 0.912 ... ... ... ...
Jan-02 1.616 ... ... ... ...
...
</code></pre>
<p>I know how to groupby and take a cumulative sum, but then I want to do some sort of unstacking operation to get the years into columns</p>
<pre><code>ser.groupby(ser.index.year).cumsum()
# what do I do next?
</code></pre>
<p>The standard pandas <code>unstack()</code> operation doesn't work here.</p>
<p>Can anyone please advise how to do this?</p>
|
<p>First you can aggregate <code>sum</code> per <code>MM-DD</code> with years and then reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>Series.unstack</code></a>:</p>
<pre><code>df = ser.groupby([ser.index.strftime('%m-%d'), ser.index.year]).sum().unstack(fill_value=0).cumsum()
print (df)
date 2018 2021
date
01-01 0.912 0.000
01-02 1.616 0.000
02-01 1.616 1.285
</code></pre>
<p>Or if no duplicated datetimes create <code>MultiIndex</code> without <code>groupby</code>:</p>
<pre><code>ser.index = [ser.index.strftime('%m-%d'), ser.index.year]
df = ser.unstack(fill_value=0).cumsum()
print (df)
date 2018 2021
date
01-01 0.912 0.000
01-02 1.616 0.000
02-01 1.616 1.285
</code></pre>
|
pandas|group-by|cumsum
| 1
|
6,357
| 70,790,473
|
pytorch lightning epoch_end/validation_epoch_end
|
<p>Could anybody breakdown the code and explain it to me? The part that needs help is indicated with the "#This part". I would greatly appreciate any help thanks</p>
<pre><code>def validation_epoch_end(self, outputs):
batch_losses = [x["val_loss"]for x in outputs] #This part
epoch_loss = torch.stack(batch_losses).mean()
batch_accs = [x["val_acc"]for x in outputs] #This part
epoch_acc = torch.stack(batch_accs).mean()
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format( epoch,result['val_loss'], result['val_acc'])) #This part
</code></pre>
|
<p>Based on the structure, I assume you are using <code>pytorch_lightning</code>.</p>
<p><code>validation_epoch_end()</code> collects the outputs from <code>validation_step()</code>, so it receives a <code>list</code> of <code>dict</code>s whose length equals the number of batches in your validation dataloader. The first two <code>#This part</code> lines are therefore just unwrapping the per-batch results from your validation set.</p>
<p><code>epoch_end()</code> catches the result <code>{'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}</code> returned by <code>validation_epoch_end()</code>.</p>
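<p>For context, a minimal sketch of a <code>validation_step()</code> that would produce such outputs (the loss and accuracy computations here are illustrative, not taken from your model):</p>
<pre><code>import torch.nn.functional as F

def validation_step(self, batch, batch_idx):
    images, labels = batch
    out = self(images)                   # forward pass
    loss = F.cross_entropy(out, labels)
    acc = (out.argmax(dim=1) == labels).float().mean()
    return {'val_loss': loss, 'val_acc': acc}
</code></pre>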
|
neural-network|pytorch|pytorch-lightning
| 1
|
6,358
| 51,957,712
|
Can an xlwings UDF return a list of numpy arrays?
|
<p>I am trying to write an <em>xlwings</em> user-defined function (UDF) which returns a list of <em>numpy</em> arrays in Excel VBA. Is this possible?</p>
<p>Whenever I try, I get this error in VBA:</p>
<p><a href="https://i.stack.imgur.com/E4nGb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E4nGb.png" alt="image of my error"></a></p>
<p>In words, that's:</p>
<blockquote>
<p>Run-time error '-2147467259 (80004005)':</p>
<p>Unexpected Python Error: TypeError: Internal error - the buffer length is not the sequence length!</p>
</blockquote>
|
<p>Probably the simplest way to return multiple Numpy arrays is to combine them into a single 2D array in Python and return that.</p>
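<p>A minimal sketch of that idea (<code>combined</code> is a made-up UDF name, and the arrays are assumed to have equal length):</p>
<pre><code>import numpy as np
import xlwings as xw

@xw.func
@xw.ret(expand='table')  # let xlwings resize the output range
def combined():
    a = np.arange(4.0)
    b = np.arange(4.0) ** 2
    # one 2D array instead of a list of arrays
    return np.column_stack((a, b))
</code></pre>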
|
python|excel|vba|numpy|xlwings
| 1
|
6,359
| 42,073,239
|
tf.get_collection to extract variables of one scope
|
<p>I have <code>n</code> (e.g: n=3) scopes and <code>x</code> (e.g: x=4) no of Variables defined in each scope.
The scopes are:</p>
<pre><code>model/generator_0
model/generator_1
model/generator_2
</code></pre>
<p>Once I compute the loss, I want to extract and provide all the variables from only one of the scope based on a criteria during run-time. Hence the index of the scope <code>idx</code> that I select is an argmin tensor cast into int32</p>
<pre><code><tf.Tensor 'model/Cast:0' shape=() dtype=int32>
</code></pre>
<p>I have already tried:</p>
<pre><code>train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, 'model/generator_'+tf.cast(idx, tf.string))
</code></pre>
<p>which obviously did not work.
Is there any way to get all the <code>x</code> Variables belonging to that particular scope using idx to pass into the optimizer.</p>
<p>Thanks in advance!</p>
<p>Vignesh Srinivasan</p>
|
<p>You can do something like this in TF 1.0 rc1 or later:</p>
<pre><code>v = tf.Variable(tf.ones(()))
loss = tf.identity(v)
with tf.variable_scope('adamoptim') as vs:
optim = tf.train.AdamOptimizer(learning_rate=0.1).minimize(loss)
optim_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope=vs.name)
print([v.name for v in optim_vars]) #=> prints lists of vars created
</code></pre>
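<p>Since collection lookups happen at graph-construction time, you can also gather the variable lists for all scopes up front in Python and then choose among them; a sketch assuming the 3 generator scopes from the question:</p>
<pre><code>gen_vars = [tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                              scope='model/generator_%d' % i)
            for i in range(3)]
</code></pre>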
|
python|tensorflow
| 4
|
6,360
| 41,830,190
|
python - Fill in missing dates with respect to a specific attribute in pandas
|
<p>My data looks like below:</p>
<pre><code>id, date, target
1,2016-10-24,22
1,2016-10-25,31
1,2016-10-27,44
1,2016-10-28,12
2,2016-10-21,22
2,2016-10-22,31
2,2016-10-25,44
2,2016-10-27,12
</code></pre>
<p>I want to fill in missing dates among id.
For example, the date range of id=1 is 2016-10-24 ~ 2016-10-28, and 2016-10-26 is missing. Moreover, the date range of id=2 is 2016-10-21 ~ 2016-10-27, and 2016-10-23, 2016-10-24 and 2016-10-26 are missing.
I want to fill in the missing dates and fill in the target value as 0.</p>
<p>Therefore, I want my data to be as below:</p>
<pre><code>id, date, target
1,2016-10-24,22
1,2016-10-25,31
1,2016-10-26,0
1,2016-10-27,44
1,2016-10-28,12
2,2016-10-21,22
2,2016-10-22,31
2,2016-10-23,0
2,2016-10-24,0
2,2016-10-25,44
2,2016-10-26,0
2,2016-10-27,12
</code></pre>
<p>Can somebody help me?</p>
<p>Thanks in advance.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="noreferrer"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html" rel="noreferrer"><code>resample</code></a> - then is problem <code>fillna</code> - so need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.asfreq.html" rel="noreferrer"><code>asfreq</code></a> first:</p>
<pre><code>#if necessary convert to datetime
df.date = pd.to_datetime(df.date)
df = df.set_index('date')
df = df.groupby('id').resample('d')['target'].asfreq().fillna(0).astype(int).reset_index()
print (df)
id date target
0 1 2016-10-24 22
1 1 2016-10-25 31
2 1 2016-10-26 0
3 1 2016-10-27 44
4 1 2016-10-28 12
5 2 2016-10-21 22
6 2 2016-10-22 31
7 2 2016-10-23 0
8 2 2016-10-24 0
9 2 2016-10-25 44
10 2 2016-10-26 0
11 2 2016-10-27 12
</code></pre>
|
python|pandas
| 6
|
6,361
| 64,334,422
|
What is wrong with this Numpy/Pandas code to construct new boolean column based on the values in two other boolean columns?
|
<p>I have the following data set:</p>
<p>Beginning Data Set:</p>
<pre><code>ObjectID,Date,Price,Vol,Mx
101,2017-01-01,,145,203
101,2017-01-02,,155,163
101,2017-01-03,67.0,140,234
101,2017-01-04,78.0,130,182
101,2017-01-05,58.0,178,202
101,2017-01-06,53.0,134,204
101,2017-01-07,52.0,134,183
101,2017-01-08,62.0,148,176
101,2017-01-09,42.0,152,193
101,2017-01-10,80.0,137,150
</code></pre>
<p>I first create two new columns of boolean values called VolPrice and Check based on the values in my starting data set. I then want to create a third additional column called DoubleCheck, where the value of this column should be True if either VolPrice OR Check is equal to True; otherwise the value of DoubleCheck should be False. Initially I got the following error:</p>
<p>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</p>
<p>but then I added .any() after each column within my statement to construct the DoubleCheck column. However this isn't working either because it is providing 'True' values throughout the DoubleCheck column even when there should be false values as shown below.</p>
<p>Code:</p>
<pre><code>import pandas as pd
import numpy as np
Observations = pd.read_csv("C:\\Users\\Observations.csv", parse_dates=['Date'], index_col=['ObjectID', 'Date'])
Observations['VolPrice'] = np.where((Observations['Price']<Observations['Vol']) & (Observations['Vol']<Observations['Mx']), True, False)
Observations['Check'] = np.where(Observations['Vol']<Observations['Price'], True, False)
Observations['DoubleCheck'] = np.where((Observations['Check'].any()==True) or (Observations['VolPrice'].any()==True), True, False)
print(Observations)
</code></pre>
<p>Current Result:</p>
<pre><code>ObjectID,Date,Price,Vol,Mx,VolPrice,Check,DoubleCheck
101,2017-01-01,,145,203,False,False,True
101,2017-01-02,,155,163,False,False,True
101,2017-01-03,67.0,140,234,True,False,True
101,2017-01-04,78.0,130,182,True,False,True
101,2017-01-05,58.0,178,202,True,False,True
101,2017-01-06,53.0,134,204,True,False,True
101,2017-01-07,52.0,134,183,True,False,True
101,2017-01-08,62.0,148,176,True,False,True
101,2017-01-09,42.0,152,193,True,False,True
101,2017-01-10,80.0,137,150,True,False,True
</code></pre>
<p>Desired Result:</p>
<pre><code>ObjectID,Date,Price,Vol,Mx,VolPrice,Check,DoubleCheck
101,2017-01-01,,145,203,False,False,False
101,2017-01-02,,155,163,False,False,False
101,2017-01-03,67.0,140,234,True,False,True
101,2017-01-04,78.0,130,182,True,False,True
101,2017-01-05,58.0,178,202,True,False,True
101,2017-01-06,53.0,134,204,True,False,True
101,2017-01-07,52.0,134,183,True,False,True
101,2017-01-08,62.0,148,176,True,False,True
101,2017-01-09,42.0,152,193,True,False,True
101,2017-01-10,80.0,137,150,True,False,True
</code></pre>
|
<p>Use <code>|</code> for bitwise <code>OR</code>, working same like <code>&</code> for bitwise <code>AND</code>:</p>
<pre><code>Observations['DoubleCheck'] = Observations['Check'] | Observations['VolPrice']
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a> with both columns:</p>
<pre><code>Observations['DoubleCheck'] = Observations[['Check','VolPrice']].any(axis=1)
</code></pre>
<p>All together is possible without <code>np.where</code>:</p>
<pre><code>Observations['VolPrice'] = (Observations['Price']<Observations['Vol']) & (Observations['Vol']<Observations['Mx'])
Observations['Check'] = Observations['Vol']<Observations['Price']
Observations['DoubleCheck'] = Observations['Check'] | Observations['VolPrice']
</code></pre>
|
python|pandas|numpy
| 1
|
6,362
| 64,397,362
|
how to create a new numpy array by masking another numpy array with a single assignment
|
<p>Supposing A is an NP array.</p>
<p>If I do this:</p>
<pre><code>B = np.copy(A)
B[B !=0] = 1
</code></pre>
<p>or</p>
<pre><code>A[A != 0]=1
B=np.copy(A)
</code></pre>
<p>I get B as the masked version of A, i.e. the desired output. However if I try the assignment like this:</p>
<pre><code>B= A[A !=0]=1
</code></pre>
<p>B becomes an integer for reasons I don't understand. Why does this happen and is there a way or performance reason to perform this operation as a single assignment?</p>
|
<p>I must preface this by saying that any attempt at doing so massively decreases readability. Just use two lines if you want other people (or yourself) to ever be able to work with that code again. This answer is only supposed to demonstrate what <em>could</em> be done, not what <em>should</em> be done.</p>
<p>The expression <code>A=B=x</code> assigns <code>x</code> to both <code>A</code> and <code>B</code>. If you really want to squeeze everything onto one line you could try something like</p>
<pre><code>import numpy as np
a = np.arange(5)
(b:=a.copy())[a!=0]=1
</code></pre>
<p>The <code>:=</code> (walrus) operator actually evaluates to the assigned value, unlike the assignment (<code>=</code>) operator. (Note that <code>A=B=x</code> works because it is basically a shorthand for <code>t=x; A=t; B=t</code>, but <code>A=(B=x)</code> will not work as the assignment does not evaluate to anything. You could write <code>A=(B:=x)</code> though.) Then <code>a</code> remains unchanged, which corresponds to your first version, so</p>
<pre><code>>>> b
array([0, 1, 1, 1, 1])
>>> a
array([0, 1, 2, 3, 4])
</code></pre>
|
python|arrays|numpy|masking
| 2
|
6,363
| 64,494,143
|
Pandas to_csv is removing commas
|
<p>I have a column in my pandas dataframe as a list and when I write the file to csv, it is removing commas inside the list.</p>
<p>code to replicate</p>
<pre><code>import numpy as np
def to_vector(probs, num_classes):
vec = np.zeros(num_classes)
for i in probs:
vec[i] = 1
return vec
import pandas as pd
l1 = [[[1,5]],[[2,4]]]
num = 10
a = pd.DataFrame(l1, columns=['dep'])
a['Y_dept'] = a["dep"].apply(lambda x: to_vector(x, num))
a.to_csv('a_temp.csv', index=False)
</code></pre>
<p>But when I read the same file, the commas inside the Y_dept column are missing</p>
<pre><code>b = pd.read_csv('a_temp.csv')
b.head()
dep Y_dept
0 [1, 5] [0. 1. 0. 0. 0. 1. 0. 0. 0. 0.]
1 [2, 4] [0. 0. 1. 0. 1. 0. 0. 0. 0. 0.]
</code></pre>
<p>Expected Output:</p>
<pre><code> dep Y_dept
0 [1, 5] [0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, ...
1 [2, 4] [0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, ...
</code></pre>
<p>quoting=csv.QUOTE_ALL is not working.
version: pandas==0.25.3</p>
|
<p>If you convert the numpy array to a list, you will get the desired result. By default, a numpy array won't be displayed using commas. The representation of the data inside the computer does not use or need commas; they are simply there for display.</p>
<pre><code>import numpy as np
import pandas as pd
def to_vector(probs, num_classes):
vec = np.zeros(num_classes)
for i in probs:
vec[i] = 1
return list(vec)
l1 = [[[1,5]],[[2,4]]]
num = 10
a = pd.DataFrame(l1, columns=['dep'])
a['Y_dept'] = a["dep"].apply(lambda x: to_vector(x, num))
a.to_csv('a_temp.csv', index=False)
</code></pre>
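<p>Keep in mind that CSV stores the column as plain text either way; to get real lists back when reading, you can parse them, for example with <code>ast.literal_eval</code>:</p>
<pre><code>import ast
import pandas as pd

b = pd.read_csv('a_temp.csv', converters={'Y_dept': ast.literal_eval})
</code></pre>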
|
python|pandas|export-to-csv
| 3
|
6,364
| 64,555,242
|
Personalize pandas boxplot with colors
|
<p>I've been trying to make a boxplot of some gender data that I divided into two separate dataframes, one for male and one for female.
I managed to make the graph basically how I wanted it, but now I would like to make it look better. I'd like to make it look like a seaborn graph, but I wasn't able to find a way to do this using the seaborn library. I tried some ideas I found for coloring the pandas boxplot, but nothing worked.</p>
<p>Is there a way to color these graphs? Or is there a way to make these side-by-side boxplots with seaborn?</p>
<pre><code>dados_generos = dados_sem_zeros[["NU_NOTA_CN","NU_NOTA_CH","NU_NOTA_MT","NU_NOTA_LC","NU_NOTA_REDACAO", "TP_SEXO"]]
sexo_f = dados_generos[dados_generos["TP_SEXO"].str.contains("F")]
sexo_m = dados_generos[dados_generos["TP_SEXO"].str.contains("M")]
labels = ["CN", "CH", "MT", "LC", "REDAÇÃO"]
fig, (ax, ax2) = plt.subplots(figsize = (10,7), ncols=2, sharey=True)
#Setting axis titles
ax.set_xlabel('Provas')
ax2.set_xlabel('Provas')
ax.set_ylabel('Notas')
#Making plots
chart1 = sexo_f[provas].boxplot(ax=ax)
chart2 = sexo_m[provas].boxplot(ax=ax2)
#Setting axis labels
chart1.set_xticklabels(labels,rotation=45)
chart2.set_xticklabels(labels,rotation=45)
plt.show()
</code></pre>
<p>This is the result I have:</p>
<p><a href="https://i.stack.imgur.com/vylLz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vylLz.png" alt="Side-by-side boxplot usind pandas library" /></a></p>
<p>This is the link to the data I'm using:
<a href="https://github.com/KarolDuarte/dados_generos/blob/main/dados_generos.csv" rel="nofollow noreferrer">https://github.com/KarolDuarte/dados_generos/blob/main/dados_generos.csv</a></p>
|
<p>Since seaborn works best with long-form data, let's try melting the data and using <code>sns</code>.</p>
<pre><code># melting the data
plot_data = df.melt('TP_SEXO')
fig, axes = plt.subplots(figsize = (10,7), ncols=2, sharey=True)
for ax, (gender, data) in zip(axes, plot_data.groupby('TP_SEXO')) :
sns.boxplot(x='variable',y='value',data=data, ax=ax)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/2Q89w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Q89w.png" alt="enter image description here" /></a></p>
|
python|pandas|seaborn|boxplot
| 0
|
6,365
| 64,420,532
|
How to load and process dataset with million columns with Pandas or Pandas-like library?
|
<p>I usually find questions and discussions about loading datasets with several million rows into Python using Dask or the Pandas chunk size, but my problem is a bit different. I have millions of columns/features and only a few thousand records. I found that the data loading time (from CSV) with such a dataset is absurdly slow and consumes a lot of memory. I have done some benchmarks, and sometimes pandas is even faster than dask!</p>
<p>I tested the case in which I have 1 million rows and 300 columns, I can load it in memory easily, but if I have 300 rows and 1 million columns, then pandas consumes all 64GB RAM and dies.</p>
<p>How can I handle such dataset?</p>
<p>Thank you very much.</p>
|
<p>This was my idea in the comments if you were to use pandas: read the columns in chunks using <code>usecols</code> dynamically. I said <code>iloc</code> in the comments, but that would still require reading the entire file first, so what I meant was <code>usecols</code>. You can just adjust <code>i</code> and the number in the range. This is untested, so treat it as a sketch:</p>
<pre><code>import pandas as pd

chunks = []
i = 10
for _ in range(1, 4):
    cols = list(range(i - 10, i))
    print(cols)
    # read only this block of columns, transposed so they become rows
    chunks.append(pd.read_csv(f, usecols=cols).T)
    i += 10
df = pd.concat(chunks).T  # stack the blocks and transpose back

[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
[10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29]
</code></pre>
<p>As you can see you can essentially "chunk" by columns using this technique.</p>
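<p>If you will read this data more than once, it may also be worth converting it a single time to a column-oriented format such as Parquet, which can load arbitrary column subsets without scanning the whole file (a sketch; this requires pyarrow or fastparquet, and the column names are hypothetical):</p>
<pre><code>df.to_parquet('wide.parquet')
subset = pd.read_parquet('wide.parquet', columns=['col_0', 'col_1'])
</code></pre>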
|
python|pandas
| 2
|
6,366
| 47,878,659
|
Using TensorFlow Audio Recognition Model on iOS
|
<p>I'm trying to use the TensorFlow audio recognition model (<code>my_frozen_graph.pb</code>, generated here: <a href="https://www.tensorflow.org/tutorials/audio_recognition" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/audio_recognition</a>) on iOS. </p>
<p>But the iOS code <code>NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb");</code> in the TensorFlow Mobile's <code>tf_simple_example</code> project outputs this error message: <code>Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'</code>. </p>
<p>Anyone knows how I can fix this? Thanks!</p>
|
<p>I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not have that op type, so you should build it yourself from the latest source.</p>
<p>From <a href="https://www.tensorflow.org/mobile/ios_build#building_the_tensorflow_ios_libraries_from_source" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>While CocoaPods is the quickest and easiest way of getting started, you
sometimes need more flexibility to determine which parts of TensorFlow
your app should be shipped with. For such cases, you can build the iOS
libraries from the sources. <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios#building-the-tensorflow-ios-libraries-from-source" rel="nofollow noreferrer">This guide</a> contains detailed instructions
on how to do that.</p>
</blockquote>
<p>This might also be helpful: <a href="https://github.com/tensorflow/tensorflow/pull/14421" rel="nofollow noreferrer">[iOS] Add optional Selective Registration of Ops #14421</a></p>
<blockquote>
<p><strong>Optimization</strong></p>
<p>The <code>build_all_ios.sh</code> script can take optional
command-line arguments to selectively register only for the operators
used in your graph.</p>
<p><code>tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb</code></p>
<p>Please note this
is an aggressive optimization of the operators and the resulting
library may not work with other graphs but will reduce the size of the
final library.</p>
</blockquote>
<p>After the build is done you can check <code>/tensorflow/tensorflow/core/framework/ops_to_register.h</code> for operations that were registered. (autogenerated during build with -g flag)</p>
|
ios|tensorflow
| 2
|
6,367
| 47,947,310
|
How to specify model directory in Floydhub?
|
<p>I am new to Floydhub. I am trying to run the code from <a href="https://github.com/dennybritz/chatbot-retrieval/" rel="nofollow noreferrer">this github repository</a> and the corresponding tutorial.</p>
<p>For the training, I successfully used this command:</p>
<pre><code> floyd run --gpu --env tensorflow-1.2 --data janinanu/dataset
/data/2:tut_train 'python udc_train.py'
</code></pre>
<p>I adjusted this line in the training file to work in Floydhub:</p>
<pre><code>tf.flags.DEFINE_string("input_dir", "/tut_train", "Directory containing
input data files 'train.tfrecords' and 'validation.tfrecords'")
</code></pre>
<p>As said, this worked without problems for the training.</p>
<p>Now for the testing, I do not really find any details on how to specify the model directory in which the output of the training gets stored. I mounted the output from training with model_dir as mount point. I assumed that the correct command should look something like this:</p>
<pre><code>floyd run --cpu --env tensorflow-1.2 --data janinanu/datasets
/data/2:tut_test --data janinanu/projects/retrieval-based-dialogue-system-
on-ubuntu-corpus/18/output:model_dir 'python udc_test.py
--model_dir=?'
</code></pre>
<p>I have no idea what to put in the <code>--model_dir=?</code></p>
<p>Correspondingly, I assumed that I have to adjust some lines in the test file:</p>
<pre><code>tf.flags.DEFINE_string("test_file", "/tut_test/test.tfrecords", "Path of
test data in TFRecords format")
tf.flags.DEFINE_string("model_dir", "/model_dir", "Directory to load model
checkpoints from")
</code></pre>
<p>...as well as in the train file (not sure about that though...):</p>
<pre><code>tf.flags.DEFINE_string("input_dir", "/tut_train", "Directory containing
input data files 'train.tfrecords' and 'validation.tfrecords'")
tf.flags.DEFINE_string("model_dir", "/model_dir", "Directory to store
model checkpoints (defaults to /model_dir)")
</code></pre>
<p>When I use e.g. <code>--model_dir=/model_dir</code> and the code with the above adjustments, I get this error:</p>
<pre><code>2017-12-22 12:17:49,048 INFO - return func(*args, **kwargs)
2017-12-22 12:17:49,048 INFO - File "/usr/local/lib/python3.5/site-
packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py",
line 543, in evaluate
2017-12-22 12:17:49,048 INFO - log_progress=log_progress)
2017-12-22 12:17:49,049 INFO - File "/usr/local/lib/python3.5/site-
packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py",
line 816, in _evaluate_model
2017-12-22 12:17:49,049 INFO - % self._model_dir)
2017-12-22 12:17:49,049 INFO -
tensorflow.contrib.learn.python.learn.estimators._sklearn.NotFittedError:
Couldn't find trained model at /model_dir
</code></pre>
<p>Which doesn't come as a surprise. </p>
<p>Can anyone give me some clarification on how to feed the training output into the test run?</p>
<p>I will also post this question in the Floydhub Forum.</p>
<p>Thanks!!</p>
<p>.</p>
|
<p>You can mount the output of any job just like you mount data. In your example:</p>
<p><code>--data janinanu/projects/retrieval-based-dialogue-system-
on-ubuntu-corpus/18/output:model_dir</code></p>
<p>should mount the entire output directory from run 18 to <code>/model_dir</code> of the new job.</p>
<p>You can confirm this by viewing the job page (select the "data" tab to see what datasets are mounted at which paths).</p>
<p>In your case, can you confirm whether the test is looking for the correct model filename?</p>
<p>I will also respond to this in the FloydHub forum.</p>
|
tensorflow|nlp|deep-learning
| 2
|
6,368
| 47,894,387
|
How to correlate an Ordinal Categorical column in pandas?
|
<p>I have a DataFrame <code>df</code> with a non-numerical column <code>CatColumn</code>.</p>
<pre><code> A B CatColumn
0 381.1396 7.343921 Medium
1 481.3268 6.786945 Medium
2 263.3766 7.628746 High
3 177.2400 5.225647 Medium-High
</code></pre>
<p>I want to include <code>CatColumn</code> in the correlation analysis with other columns in the Dataframe. I tried <code>DataFrame.corr</code> but it does not include columns with nominal values in the correlation analysis.</p>
|
<p>I am going to <strong>strongly</strong> disagree with the other comments.</p>
<p>They miss the main point of correlation: How much does variable 1 increase or decrease as variable 2 increases or decreases. So in the very first place, order of the ordinal variable must be preserved during factorization/encoding. If you alter the order of variables, correlation will change completely. If you are building a tree-based method, this is a non-issue but for a correlation analysis, special attention must be paid to preservation of order in an ordinal variable.</p>
<p>Let me make my argument reproducible. A and B are numeric, C is ordinal categorical in the following table, which is intentionally slightly altered from the one in the question.</p>
<pre><code>from io import StringIO
import pandas as pd

rawText = StringIO("""
A B C
0 100.1396 1.343921 Medium
1 105.3268 1.786945 Medium
2 200.3766 9.628746 High
3 150.2400 4.225647 Medium-High
""")
myData = pd.read_csv(rawText, sep = "\s+")
</code></pre>
<p>Notice: As C moves from Medium to Medium-High to High, both A and B increase monotonically. Hence we should see strong correlations between tuples (C,A) and (C,B). Let's reproduce the two proposed answers:</p>
<pre><code>In[226]: myData.assign(C=myData.C.astype('category').cat.codes).corr()
Out[226]:
A B C
A 1.000000 0.986493 -0.438466
B 0.986493 1.000000 -0.579650
C -0.438466 -0.579650 1.000000
</code></pre>
<p>Wait... What? Negative correlations? How come? Something is definitely not right. So what is going on?</p>
<p>What is going on is that C is factorized according to the alphanumerical sorting of its values. [High, Medium, Medium-High] are assigned [0, 1, 2], therefore the ordering is altered: 0 < 1 < 2 implies High < Medium < Medium-High, which is not true. Hence we accidentally calculated the response of A and B as C goes from High to Medium to Medium-High. The correct answer must preserve ordering, and assign [2, 0, 1] to [High, Medium, Medium-High]. Here is how:</p>
<pre><code>In[227]: myData['C'] = myData['C'].astype('category')
myData['C'].cat.categories = [2,0,1]
myData['C'] = myData['C'].astype('float')
myData.corr()
Out[227]:
A B C
A 1.000000 0.986493 0.998874
B 0.986493 1.000000 0.982982
C 0.998874 0.982982 1.000000
</code></pre>
<p>Much better!</p>
<p>Note1: If you want to treat your variable as a nominal variable, you can look at things like contingency tables, Cramer's V and the like; or group the continuous variable by the nominal categories etc. I don't think it would be right, though.</p>
<p>Note2: If you had another category called Low, my answer could be criticized due to the fact that I assigned equally spaced numbers to unequally spaced categories. You could make the argument that one should assign [2, 1, 1.5, 0] to [High, Medium, Medium-High, Low], which would be valid. I believe this is what people call the art part of data science.</p>
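<p>Note3: since the variable is ordinal anyway, a rank-based measure such as Spearman correlation is arguably more defensible than Pearson here, and it sidesteps the spacing issue from Note2 entirely:</p>
<pre><code>myData.corr(method='spearman')
</code></pre>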
|
python|pandas|scikit-learn|correlation|categorical-data
| 32
|
6,369
| 48,927,671
|
assigning the value to a user depending on the cluster he comes from
|
<p>I have two dataframes, one with the customers who prefer songs, and my other dataframe consists of users and their cluster.</p>
<p>DATA 1:</p>
<pre><code>user song
A 11
A 22
B 99
B 11
C 11
D 44
C 66
E 66
D 33
E 55
F 11
F 77
</code></pre>
<p>DATA 2:</p>
<pre><code>user cluster
A 1
B 2
C 3
D 1
E 2
F 3
</code></pre>
<p>Using above data sets, I was able to achieve what all songs are listened by users of that cluster.</p>
<pre><code>cluster songs
1 [11, 22, 33, 44]
2 [11, 99, 66, 55]
3 [11,66,88,77]
</code></pre>
<p>I need to assign the songs of a particular cluster to each user of that cluster who has not listened to them yet.
In my expected output, A belongs to cluster 1 and has not yet listened to songs 33 and 44, so my output should be like below. The same goes for B, which belongs to cluster 2: B has not listened to songs 66 and 55, so the output for B looks like below.</p>
<p>EXPECTED OUTPUT : </p>
<pre><code> user song
A [33, 44]
B [66,55]
C [77]
D [11,22]
E [11,99]
F [66]
</code></pre>
|
<p>Use sets for comparison.</p>
<p><strong>Setup</strong></p>
<pre><code>df1
# user song
# 0 A 11
# 1 A 22
# 2 B 99
# 3 B 11
# 4 C 11
# 5 D 44
# 6 C 66
# 7 E 66
# 8 D 33
# 9 E 55
# 10 F 11
# 11 F 77
df2
# user cluster
# 0 A 1
# 1 B 2
# 2 C 3
# 3 D 1
# 4 E 2
# 5 F 3
df3
# cluster songs
# 0 1 [11, 22, 33, 44]
# 1 2 [11, 99, 66, 55]
# 2 3 [11, 66, 88, 77]
</code></pre>
<p><strong>Calculation</strong></p>
<pre><code>df = df1.groupby('user')['song'].apply(set)\
.reset_index().rename(columns={'song': 'heard'})
df['all'] = df['user'].map(df2.set_index('user')['cluster'])\
.map(df3.set_index('cluster')['songs'])\
.map(set)
df['not heard'] = df.apply(lambda row: row['all'] - row['heard'], axis=1)
</code></pre>
<p><strong>Result</strong></p>
<pre><code> user heard all not heard
0 A {11, 22} {33, 11, 44, 22} {33, 44}
1 B {11, 99} {99, 66, 11, 55} {66, 55}
2 C {66, 11} {88, 66, 11, 77} {88, 77}
3 D {33, 44} {33, 11, 44, 22} {11, 22}
4 E {66, 55} {99, 66, 11, 55} {11, 99}
5 F {11, 77} {88, 66, 11, 77} {88, 66}
</code></pre>
<p>Extract any columns you need; conversion to list is trivial, i.e. <code>df[col] = df[col].map(list)</code>.</p>
<p><strong>Explanation</strong></p>
<p>There are 3 steps:</p>
<ol>
<li>Convert lists to sets and aggregate heard songs by user to sets.</li>
<li>Perform mappings to put all data in one table.</li>
<li>Add a column which calculates the difference between 2 sets.</li>
</ol>
|
python|pandas|pandas-groupby
| 1
|
6,370
| 49,302,095
|
Plot dataframe then add vertical lines; how get custom legend text for all?
|
<p>I can plot a dataframe (2 "Y" values) and add vertical lines (2) to the plot, and I can specify custom legend text for either the Y values OR the vertical lines, but not both at the same time.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]}
df = pd.DataFrame(d)
ax = df.plot(x='x', y=['y1'], linestyle='-', color='b')
df.plot(x='x', y=['y2'], linestyle='--', color='y', ax=ax)
ax.legend(labels=['y1custom', 'y2custom'])
plt.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom')
plt.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom')
plt.legend() # <---- comment out....or not....for different effects
plt.show()
</code></pre>
<p>A key line in the code is "plt.legend()". With it in the code, I get this (note legend has dataframe column labels "y1" and "y2" instead of my desired custom labels):</p>
<p><a href="https://i.stack.imgur.com/tdRYf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tdRYf.png" alt="with plt.legend() call"></a></p>
<p>With "plt.legend()" removed, I get this (legend has my custom labels for the dataframe values only, legend for vertical lines does not even appear!):</p>
<p><a href="https://i.stack.imgur.com/wGD2t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wGD2t.png" alt="without plt.legend() call"></a></p>
<p>How can I get the best of both worlds, specifically the following (in whatever order) for my legend?:</p>
<pre><code>y1custom
y2custom
vline1.5custom
vline3.5custom
</code></pre>
<p>Sure I could rename the columns of the dataframe first, but...ugh! There must be a better way.</p>
|
<p>Each call to <code>legend()</code> overwrites the initially created legend. So you need to create one single legend with all the desired labels in. </p>
<p>This means you can get the current labels via <code>ax.get_legend_handles_labels()</code> and replace those you do not like with something else. Then specify the new list of labels when calling <code>legend()</code>.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
d = {'x' : [1., 2., 3., 4.], 'y1' : [8., 6., 4., 2.], 'y2' : [-4., 13., 2.2, -1.1]}
df = pd.DataFrame(d)
ax = df.plot(x='x', y=['y1'], linestyle='-', color='b')
df.plot(x='x', y=['y2'], linestyle='--', color='y', ax=ax)
ax.axvline(x=1.5, color='r', linestyle='--', label='vline1.5custom')
ax.axvline(x=3.5, color='k', linestyle='--', label='vline3.5custom')
h,labels = ax.get_legend_handles_labels()
labels[:2] = ['y1custom','y2custom']
ax.legend(labels=labels)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Jvwbo.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Jvwbo.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib
| 6
|
6,371
| 48,997,373
|
output logical series based on multiple conditions pandas
|
<p>I'd like to create a new series of logical values based on the evaluation of multiple conditions.</p>
<p>For example</p>
<pre><code>> df = pd.DataFrame({'id':[40, 20, 50, 5, 80], 'value': ['a', 'd', 'g', 'g', 'g']})
> df[(df.id > 40) & (df.value.isin(['a', 'g']))]
id value
2 50 g
4 80 g
</code></pre>
<p>However I would like to return a logical series i.e.</p>
<pre><code>False
False
True
False
True
</code></pre>
<p>I'd like to do this with pandas methods if possible.</p>
|
<p>Just use:</p>
<pre><code>s = (df.id > 40) & (df.value.isin(['a', 'g']))
print(s)
</code></pre>
<p>Output:</p>
<pre><code>0 False
1 False
2 True
3 False
4 True
dtype: bool
</code></pre>
|
python|pandas
| 1
|
6,372
| 49,099,440
|
Print string list vertically
|
<p>I have a dataframe as below.</p>
<pre><code>df = pd.DataFrame({'Title': ['x','y','z','aa'], 'Result': [2, 5, 11, 16]})
</code></pre>
<p>I want to return a text string only including those which are more than 10. </p>
<p>example of the result i want is below</p>
<pre><code>From the results in df, the below returned greater than 10:
z
aa
</code></pre>
<p>I have tried the below, but it didn't give the output that i was looking for. It gave an array in the same line. Not as the above. </p>
<pre><code>df2 = df[df['Result']>= 10]
df3 = df2['Title'].values
print ('From the results in df, the below returned greater than 10:\n\t%s' (df3))
</code></pre>
|
<p>change</p>
<pre><code>print ('From the results in df, the below returned greater than 10:\n\t%s' (df3))
</code></pre>
<p>to</p>
<pre><code>print ('From the results in df, the below returned greater than 10:')
for n in df3:
print('\t' + str(n))
</code></pre>
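<p>Or build the whole string in one go with <code>join</code>:</p>
<pre><code>print('From the results in df, the below returned greater than 10:\n\t' + '\n\t'.join(df3))
</code></pre>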
|
python|numpy
| 2
|
6,373
| 58,889,216
|
Rolling average and sum by days over timestamp in Pyspark
|
<p>I have a PySpark dataframe where the timestamp is in units of days. Following is an example of the dataframe (let's call it <code>df</code>):</p>
<pre><code>+-----+-----+----------+-----+
| name| type| timestamp|score|
+-----+-----+----------+-----+
|name1|type1|2012-01-10| 11|
|name1|type1|2012-01-11| 14|
|name1|type1|2012-01-12| 2|
|name1|type3|2012-01-12| 3|
|name1|type3|2012-01-11| 55|
|name1|type1|2012-01-13| 10|
|name1|type2|2012-01-14| 11|
|name1|type2|2012-01-15| 14|
|name2|type2|2012-01-10| 2|
|name2|type2|2012-01-11| 3|
|name2|type2|2012-01-12| 55|
|name2|type1|2012-01-10| 10|
|name2|type1|2012-01-13| 55|
|name2|type1|2012-01-14| 10|
+-----+-----+----------+-----+
</code></pre>
<p>In this dataframe, I want to <strong>average over, and take the sum of, scores for different names over a rolling time window</strong> of three days. Meaning, for any given day in the data frame, find the sum of scores on that day, the day before, and two days before, for <code>name1</code>; do this for all days of <code>name1</code>, and repeat the same exercise for all other <code>names</code>, <em>viz.</em> <code>name2</code> etc. <strong>How can I do this?</strong></p>
<p>I took a look at <a href="https://stackoverflow.com/questions/45806194/pyspark-rolling-average-using-timeseries-data">this</a> post, and tried the following</p>
<pre><code>from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window
days = lambda i: i*1
w_rolling = Window.orderBy(F.col("timestamp").cast("long")).rangeBetween(-days(3), 0)
df_agg = df.withColumn("rolling_average", F.avg("score").over(w_rolling)).withColumn(
"rolling_sum", F.sum("score").over(w_rolling)
)
df_agg.show()
+-----+-----+----------+-----+------------------+-----------+
| name| type| timestamp|score| rolling_average|rolling_sum|
+-----+-----+----------+-----+------------------+-----------+
|name1|type1|2012-01-10| 11|18.214285714285715| 255|
|name1|type1|2012-01-11| 14|18.214285714285715| 255|
|name1|type1|2012-01-12| 2|18.214285714285715| 255|
|name1|type3|2012-01-12| 3|18.214285714285715| 255|
|name1|type3|2012-01-11| 55|18.214285714285715| 255|
|name1|type1|2012-01-13| 10|18.214285714285715| 255|
|name1|type2|2012-01-14| 11|18.214285714285715| 255|
|name1|type2|2012-01-15| 14|18.214285714285715| 255|
|name2|type2|2012-01-10| 2|18.214285714285715| 255|
|name2|type2|2012-01-11| 3|18.214285714285715| 255|
|name2|type2|2012-01-12| 55|18.214285714285715| 255|
|name2|type1|2012-01-10| 10|18.214285714285715| 255|
|name2|type1|2012-01-13| 55|18.214285714285715| 255|
|name2|type1|2012-01-14| 10|18.214285714285715| 255|
+-----+-----+----------+-----+------------------+-----------+
</code></pre>
<p>As you see, I always get the same rolling average and rolling sum which is nothing but the average and sum of the column <code>score</code> for all days. This is not what I want.</p>
<p>You can create the above-mentioned dataframe using the following code snippet:</p>
<pre><code>
df_Stats = Row("name", "type", "timestamp", "score")
df_stat1 = df_Stats("name1", "type1", "2012-01-10", 11)
df_stat2 = df_Stats("name1", "type1", "2012-01-11", 14)
df_stat3 = df_Stats("name1", "type1", "2012-01-12", 2)
df_stat4 = df_Stats("name1", "type3", "2012-01-12", 3)
df_stat5 = df_Stats("name1", "type3", "2012-01-11", 55)
df_stat6 = df_Stats("name1", "type1", "2012-01-13", 10)
df_stat7 = df_Stats("name1", "type2", "2012-01-14", 11)
df_stat8 = df_Stats("name1", "type2", "2012-01-15", 14)
df_stat9 = df_Stats("name2", "type2", "2012-01-10", 2)
df_stat10 = df_Stats("name2", "type2", "2012-01-11", 3)
df_stat11 = df_Stats("name2", "type2", "2012-01-12", 55)
df_stat12 = df_Stats("name2", "type1", "2012-01-10", 10)
df_stat13 = df_Stats("name2", "type1", "2012-01-13", 55)
df_stat14 = df_Stats("name2", "type1", "2012-01-14", 10)
df_stat_lst = [
df_stat1,
df_stat2,
df_stat3,
df_stat4,
df_stat5,
df_stat6,
df_stat7,
df_stat8,
df_stat9,
df_stat10,
df_stat11,
df_stat12,
df_stat13,
df_stat14
]
df = spark.createDataFrame(df_stat_lst)
</code></pre>
|
<p>You can use below code to calculate the sum and average of score over last 3 days including current day.</p>
<pre><code># Considering the dataframe already created using code provided in question
df = df.withColumn('unix_time', F.unix_timestamp('timestamp', 'yyyy-MM-dd'))
winSpec = Window.partitionBy('name').orderBy('unix_time').rangeBetween(-2*86400, 0)
df = df.withColumn('rolling_sum', F.sum('score').over(winSpec))
df = df.withColumn('rolling_avg', F.avg('score').over(winSpec))
df.orderBy('name', 'timestamp').show(20, False)
+-----+-----+----------+-----+----------+-----------+------------------+
|name |type |timestamp |score|unix_time |rolling_sum|rolling_avg |
+-----+-----+----------+-----+----------+-----------+------------------+
|name1|type1|2012-01-10|11 |1326153600|11 |11.0 |
|name1|type3|2012-01-11|55 |1326240000|80 |26.666666666666668|
|name1|type1|2012-01-11|14 |1326240000|80 |26.666666666666668|
|name1|type1|2012-01-12|2 |1326326400|85 |17.0 |
|name1|type3|2012-01-12|3 |1326326400|85 |17.0 |
|name1|type1|2012-01-13|10 |1326412800|84 |16.8 |
|name1|type2|2012-01-14|11 |1326499200|26 |6.5 |
|name1|type2|2012-01-15|14 |1326585600|35 |11.666666666666666|
|name2|type1|2012-01-10|10 |1326153600|12 |6.0 |
|name2|type2|2012-01-10|2 |1326153600|12 |6.0 |
+-----+-----+----------+-----+----------+-----------+------------------+
</code></pre>
|
python|pandas|pyspark|pyspark-sql|pyspark-dataframes
| 3
|
6,374
| 58,909,204
|
Proper way to extract embedding weights for CBOW model?
|
<p>I'm currently trying to implement the CBOW model and managed to get the training and testing working, but am facing some confusion as to the "proper" way to finally extract the weights from the model to use as our word embeddings.</p>
<h2>Model</h2>
<pre><code>import torch
import torch.nn as nn

class CBOW(nn.Module):
    def __init__(self, config, vocab):
        super(CBOW, self).__init__()  # needed before registering submodules
        self.config = config # Basic config file to hold arguments.
self.vocab = vocab
self.vocab_size = len(self.vocab.token2idx)
self.window_size = self.config.window_size
self.embed = nn.Embedding(num_embeddings=self.vocab_size, embedding_dim=self.config.embed_dim)
self.linear = nn.Linear(in_features=self.config.embed_dim, out_features=self.vocab_size)
def forward(self, x):
x = self.embed(x)
x = torch.mean(x, dim=0) # Average out the embedding values.
x = self.linear(x)
return x
</code></pre>
<h2>Main process</h2>
<p>After I run my model through a Solver with the training and testing data, I basically told the <code>train</code> and <code>test</code> functions to also return the model that's used. Then I assigned the embedding weights to a separate variable and used those as the word embeddings.</p>
<p>Training and testing was conducted using cross entropy loss, and each training and testing sample is of the form <code>([context words], target word)</code>.</p>
<pre><code>def run(solver, config, vocabulary):
for epoch in range(config.num_epochs):
loss_train, model_train = solver.train()
loss_test, model_test = solver.test()
embeddings = model_train.embed.weight
</code></pre>
<p>I'm not sure if this is the correct way of going about extracting and using the embeddings. Is there usually another way to do this? Thanks in advance.</p>
|
<p>Yes, <code>model_train.embed.weight</code> will give you a torch tensor that stores the embedding weights. Note however, that this tensor also contains the latest gradients. If you don't want/need them, <code>model_train.embed.weight.data</code> will give you the weights only.</p>
<p>A more generic option is to call <code>model_train.embed.parameters()</code>. This will give you a generator of all the weight tensors of the layer. In general, there are multiple weight tensors in a layer and <code>weight</code> will give you only one of them. <code>Embedding</code> happens to have only one, so here it doesn't matter which option you use.</p>
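<p>For downstream use, a short sketch of detaching the weights and looking up one word (the token <code>'hello'</code> is just a placeholder):</p>
<pre><code>embeddings = model_train.embed.weight.data.cpu().numpy()
hello_vec = embeddings[vocab.token2idx['hello']]  # row index == token index
</code></pre>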
|
deep-learning|nlp|pytorch
| 1
|
6,375
| 58,960,475
|
Pandas - Apply logic to every column in DataFrame
|
<p>I have Dataframe that has 50 columns. I am trying to apply a certain logic to every column</p>
<p>Logic I am trying to apply is </p>
<pre><code>df[1] = df[1].str.split("'",expand=True)
</code></pre>
<p>The above logic works well for the column with <code>index 1</code>; how could I extend this to every column in the DataFrame?</p>
|
<p>Actually it is neater to use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>.apply()</code></a>:</p>
<pre><code>def custom_split(col):
    # `col` is an entire column (a pandas Series), so .str applies row-wise
    return col.str.split("'", expand=True)

df.apply(custom_split)
</code></pre>
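<p>If, as your example assignment <code>df[1] = df[1].str.split("'", expand=True)</code> suggests, you only want to keep the first piece of each split (an assumption on my part), a sketch that avoids producing one DataFrame per column:</p>
<pre><code>df = df.apply(lambda col: col.str.split("'").str[0])
</code></pre>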
|
python|pandas
| 2
|
6,376
| 58,763,079
|
Subtracting booleans obtained from numpy variables results in TypeError
|
<p>I have trouble understanding this weird behaviour while using numpy variable-</p>
<pre><code>import numpy as np
from operator import lt,gt
val = lt(np.float64(0.8514),0) - gt(np.float(0.8514),0)
</code></pre>
<p>This fails with the following error- </p>
<pre><code>TypeErrorTraceback (most recent call last)
<ipython-input-37-ddc655dbbe89> in <module>()
1 from operator import lt,gt
----> 2 val = lt(np.float64(0.8514),0) - gt(np.float(0.8514),0)
TypeError: numpy boolean subtract, the `-` operator, is deprecated, use the bitwise_xor, the `^` operator, or the logical_xor function instead.
</code></pre>
<p>This should not happen as the output of lt and gt is a boolean variable. The following snippets both work without a hitch-</p>
<pre><code>from operator import lt,gt
import numpy as np
val = True - False
val = lt(float(np.float64(0.8514)),0) - gt(float(np.float(0.8514)),0)
</code></pre>
<p>I don't understand what the issue is when the input is a numpy variable. The above code was executed in Python 2.</p>
|
<p>As the error message indicates, the <code>-</code> operator is deprecated. Just use the <code>^</code> operator for logical operations instead.</p>
<pre><code>import numpy as np
from operator import lt, gt
exp1 = lt(np.float64(0.8514), 0)
exp2 = gt(np.float64(0.8514), 0)
val = exp1 ^ exp2
print(val) # True
</code></pre>
<p>I don't get the error message in my Python 3 environment, where <code>val = exp1 - exp2</code> also works — though note this really depends on the installed numpy version (numpy deprecated and later forbade boolean <code>-</code>) rather than on Python 2 vs 3 itself. Still, you may consider moving to Python 3.</p>
<p>If you, for some reason, don't want to perform logical operations, you can cast <code>exp1</code> and <code>exp2</code> to <code>float</code> or <code>int</code>:</p>
<pre><code>import numpy as np
from operator import lt, gt
exp1 = lt(np.float64(0.8514), 0)
exp2 = gt(np.float64(0.8514), 0)
val = int(exp1) - int(exp2)
print(val) # -1
val = float(exp1) - float(exp2)
print(val) # -1.0
</code></pre>
|
python|numpy
| 1
|
6,377
| 70,168,024
|
How to transpose two particular column and keep first row in python?
|
<p>I have the data frame as follows:</p>
<pre><code> df = pd.DataFrame({
'ID': [12, 12, 15, 15, 16, 17, 17],
'Name': ['A', 'A', 'B', 'B', 'C', 'D', 'D'],
'Date':['2019-12-20' ,'2018-12-20' ,'2017-12-20' , '2016-12-20', '2015-12-20', '2014-12-20', '2013-12-20'],
'Color':['Black', 'Blue', 'Red' , 'Yellow' , 'White' , 'Sky' , 'Green']
})
</code></pre>
<p>or data table:</p>
<pre><code>
ID Name Date Color
0 12 A 2019-12-20 Black
1 12 A 2018-12-20 Blue
2 15 B 2017-12-20 Red
3 15 B 2016-12-20 Yellow
4 16 C 2015-12-20 White
5 17 D 2014-12-20 Sky
6 17 D 2013-12-20 Green
</code></pre>
<p>My desired result would be the table below. How could I get that?</p>
<pre><code>
ID Name Date Color Date_ Color_
0 12 A 2019-12-20 Black 2018-12-20 Blue
1 15 B 2017-12-20 Red 2016-12-20 Yellow
2 16 C 2015-12-20 White 2015-12-20 White
3 17 D 2014-12-20 Sky 2013-12-20 Green
</code></pre>
<p>I need your help, thanks in advance!</p>
|
<p>Use virtual group keys to send each row of a group into its own set of columns. The rest is just formatting.</p>
<pre><code># Identify target column for each row
out = df.assign(col=df.groupby('Name').cumcount().astype(str)) \
.pivot(index=['ID', 'Name'], columns='col', values=['Date', 'Color']) \
.ffill(axis=1)
# Sort columns according your output
out = out.sort_index(level=[1, 0], axis=1, ascending=[True, False])
# Flat the multiindex column
out.columns = out.columns.to_flat_index().str.join('_')
# Reset index
out = out.reset_index()
</code></pre>
<p>Output:</p>
<pre><code>>>> out
ID Name Date_0 Color_0 Date_1 Color_1
0 12 A 2019-12-20 Black 2018-12-20 Blue
1 15 B 2017-12-20 Red 2016-12-20 Yellow
2 16 C 2015-12-20 White 2015-12-20 White
3 17 D 2014-12-20 Sky 2013-12-20 Green
</code></pre>
<p>After <code>pivot</code>, your dataframe looks like:</p>
<pre><code>>>> df.assign(col=df.groupby('Name').cumcount().astype(str)) \
.pivot(index=['ID', 'Name'], columns='col', values=['Date', 'Color']) \
.ffill(axis=1)
Date Color
col 0 1 0 1
ID Name
12 A 2019-12-20 2018-12-20 Black Blue
15 B 2017-12-20 2016-12-20 Red Yellow
16 C 2015-12-20 2015-12-20 White White
17 D 2014-12-20 2013-12-20 Sky Green
</code></pre>
|
python|pandas|multiple-columns|rows|swap
| 2
|
6,378
| 70,215,756
|
When using geopandas explore() method is there a way to restrict the boundaries of the resulting map?
|
<p>I am trying to show an interactive heatmap of the United States using explore(). But it shows the entire world. Is there any way to restrict it to only the United States?</p>
|
<p>You can pass the same parameters as if you were using <strong>folium</strong> directly. For example, center on the USA geometry centroid:</p>
<pre><code>gdf = gpd.read_file(gpd.datasets.get_path("naturalearth_lowres"))
gdf.explore(
"pop_est",
cmap="Blues",
location=gdf.loc[gdf["iso_a3"].eq("USA"), "geometry"]
.apply(lambda g: [g.centroid.xy[1][0], g.centroid.xy[0][0]])
.values[0],
zoom_start=3,
control_scale=True,
)
</code></pre>
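<p>Another simple option — if you only need the United States layer on the map at all — is a sketch that filters the GeoDataFrame first; <code>explore()</code> then fits the initial view to that layer's bounds:</p>
<pre><code>usa = gdf[gdf["iso_a3"] == "USA"]
usa.explore("pop_est", cmap="Blues")
</code></pre>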
|
python|pandas|geopandas
| 0
|
6,379
| 70,120,519
|
Python version mismatch: module was compiled for Python 3.6, but the interpreter version is incompatible: 3.9.8
|
<p>In order to install the newest <code>tensorflow</code> (2.7.0), I updated my <code>python3</code> version from <code>3.6.6</code> to <code>3.9.8</code>. Here is how I did it inside my <strong>Docker</strong> container:</p>
<pre><code>Download the Python-3.9.8.tgz file
1. tar -xf Python-3.9.8.tgz
2. cd Python-3.9.8 && ./configure --enable-optimizations
3. make -j 12
4. make altinstall
</code></pre>
<p>And my <code>python3 --version</code> is <code>Python 3.9.8</code>. However, when I try to load the newest <code>tf</code> with <code>import tensorflow.compat.v1 as tf</code>, I get this error:</p>
<pre><code> File "/workspaces/model/task.py", line 120, in new_model_test
import model_api
File "/lfs/biomind/model_tmp/19bddfc44e8211ecbe172d8a58f5e38e/wmh_v2/model_api.py", line 3, in <module>
import tensorflow.compat.v1 as tf
File "/usr/local/lib/python3.6/site-packages/tensorflow/__init__.py", line 99, in <module>
from tensorflow_core import *
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/__init__.py", line 28, in <module>
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import
File "/usr/local/lib/python3.6/site-packages/tensorflow/__init__.py", line 50, in __getattr__
module = self._load()
File "/usr/local/lib/python3.6/site-packages/tensorflow/__init__.py", line 44, in _load
module = _importlib.import_module(self.__name__)
File "/usr/local/lib/python3.9/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/__init__.py", line 49, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow.py", line 74, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow.py", line 58, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/pywrap_tensorflow_internal.py", line 2453, in <module>
from tensorflow.python.util import deprecation
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/util/deprecation.py", line 25, in <module>
from tensorflow.python.platform import tf_logging as logging
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/platform/tf_logging.py", line 38, in <module>
from tensorflow.python.util.tf_export import tf_export
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/util/tf_export.py", line 48, in <module>
from tensorflow.python.util import tf_decorator
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/util/tf_decorator.py", line 64, in <module>
from tensorflow.python.util import tf_stack
File "/usr/local/lib/python3.6/site-packages/tensorflow_core/python/util/tf_stack.py", line 29, in <module>
from tensorflow.python import _tf_stack
ImportError: Python version mismatch: module was compiled for Python 3.6, but the interpreter version is incompatible: 3.9.8 (main, Nov 25 2021, 21:54:13)
[GCC 7.5.0].
</code></pre>
<p>Is there a way to change the compiled version of <code>Python</code>, or did I take a wrong step somewhere? Thanks in advance.</p>
|
<p>For my question, the issue starts from <code>File "/usr/local/lib/python3.6/site-packages/tensorflow/__init__.py"</code>: though I installed <code>Python3.9</code>, Python was still searching the old <code>python3.6</code> site-packages as well. The solution is simple: delete <code>/usr/local/lib/python3.6/</code> (and reinstall tensorflow with the 3.9 interpreter's pip afterwards, since the 3.6-compiled packages are gone).</p>
|
python-3.x|tensorflow|tensorflow2.0|cpython
| 0
|
6,380
| 55,712,905
|
dataframe parameters: Why are the changes on my df local?
|
<p>I have a function that is supposed to apply some transformations and calculations to a df; the code runs, but once it's done the df remains the same as before.</p>
<p>EDIT: The problem is solved when I remove the line 2 (pd.merge), but I'd like to understand why.</p>
<pre class="lang-py prettyprint-override"><code>def Tilt_contribution(ptf, data_factors, data_universe, Sample_size_adjustment:bool):
ptf['ActiveWeight'] = ptf['PortfolioWeight']- ptf['BenchmarkWeight']
ptf = pd.merge(ptf, data_factors, on = ['ISIN','SecurityName'], how = 'left' )
sample_size_adj = 1 #initialisation
if Sample_size_adjustment == True:
sample_size_adj = Sample_size_adj(ptf)
for column in factor_list:
#Weighted std
universe_wght_std = Universe_weighted_std(data_universe, column)
#Benchmark weighted average
benchmk_wght_avg = Benchmark_weighted_avg(ptf, column)
#Tilt Contrib
col_index = ptf.columns.get_loc(column)
ptf.insert(col_index+1, column + ' Tilt Contribution', 0)
for idx in ptf.index:
tilt_cont = (ptf.at[idx, column] - benchmk_wght_avg)*ptf.at[idx, 'ActiveWeight']/(sample_size_adj * universe_wght_std)
if math.isnan(tilt_cont):
tilt_cont=0
ptf.at[idx, column + ' Tilt Contribution'] = tilt_cont
global Tilt
Tilt = ptf.sum()
</code></pre>
|
<p>You are setting the global var <code>Tilt</code> with the sum of <code>ptf</code>, but you never return the modified <code>ptf</code> DataFrame. The key line is <code>ptf = pd.merge(...)</code>: <code>merge</code> returns a <em>new</em> DataFrame and rebinds the local name <code>ptf</code> to it, so everything after that line modifies the local copy, not the DataFrame the caller passed in. Before the merge, operations like <code>ptf['ActiveWeight'] = ...</code> mutate the original object in place — which is why removing the merge line makes your changes "stick".</p>
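<p>A minimal sketch of the fix — return the new frame and rebind the caller's name:</p>
<pre><code>def Tilt_contribution(ptf, data_factors, data_universe, Sample_size_adjustment: bool):
    ptf = pd.merge(ptf, data_factors, on=['ISIN', 'SecurityName'], how='left')
    # ... all the calculations on the merged frame ...
    return ptf

ptf = Tilt_contribution(ptf, data_factors, data_universe, True)  # reassign at the call site
</code></pre>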
|
python|pandas
| 0
|
6,381
| 55,777,588
|
Size mismatch for DNN for the MNIST dataset in pytorch
|
<p>I have to find a way to create a neural network model and train it on the MNIST dataset. I need there to be 5 layers, with 100 neurons each. However, when I try to set this up I get an error that there is a size mismatch. Can you please help? I am hoping that I can train on the model below:</p>
<pre><code>class Mnist_DNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(784, 100)
        self.layer2 = nn.Linear(100, 100)
        self.layer3 = nn.Linear(100, 100)
        self.layer4 = nn.Linear(100, 100)
        self.layer5 = nn.Linear(100, 10)

    def forward(self, xb):
        xb = xb.view(-1, 1, 28, 28)
        xb = F.relu(self.layer1(xb))
        xb = F.relu(self.layer2(xb))
        xb = F.relu(self.layer3(xb))
        xb = F.relu(self.layer4(xb))
        xb = F.relu(self.layer5(xb))
        return self.layer5(xb)
</code></pre>
|
<p>You set up your layers to take a batch of 1D vectors of dim 784 (= 28*28). However, in your <code>forward</code> function you <code>view</code> the input as a batch of 2D matrices of size 28*28.<br>
Try viewing the input as a batch of 1D signals:</p>
<pre><code>xb = xb.view(-1, 784)
</code></pre>
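<p>Note also that your <code>forward</code> applies <code>layer5</code> twice — once inside the ReLU and once more at the <code>return</code>. A corrected sketch of the whole method:</p>
<pre><code>def forward(self, xb):
    xb = xb.view(-1, 784)          # flatten to a batch of 784-dim vectors
    xb = F.relu(self.layer1(xb))
    xb = F.relu(self.layer2(xb))
    xb = F.relu(self.layer3(xb))
    xb = F.relu(self.layer4(xb))
    return self.layer5(xb)         # raw logits, e.g. for cross-entropy loss
</code></pre>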
|
neural-network|pytorch|mnist
| 0
|
6,382
| 64,732,158
|
boxplot not show the plots
|
<p>Following this <a href="https://towardsdatascience.com/a-step-by-step-introduction-to-pca-c0d78e26a0dd" rel="nofollow noreferrer">tutorial</a>, I used the first few statements in order to show the distribution of iris data as below</p>
<pre><code>from sklearn.datasets import load_iris
from pandas import DataFrame
import numpy as np
iris = load_iris()
colors = ["blue", "red", "green"]
df = DataFrame(
data=np.c_[iris["data"], iris["target"]], columns=iris["feature_names"] + ["target"]
)
print (df)
df.boxplot(by="target", layout=(2, 2), figsize=(10, 10))
</code></pre>
<p>Problem is that I don't see the boxplot output although the <code>df</code> is not empty.</p>
<pre><code>$ python3 pca_iris.py
sepal length (cm) sepal width (cm) petal length (cm) petal width (cm) target
0 5.1 3.5 1.4 0.2 0.0
1 4.9 3.0 1.4 0.2 0.0
2 4.7 3.2 1.3 0.2 0.0
3 4.6 3.1 1.5 0.2 0.0
4 5.0 3.6 1.4 0.2 0.0
.. ... ... ... ... ...
145 6.7 3.0 5.2 2.3 2.0
146 6.3 2.5 5.0 1.9 2.0
147 6.5 3.0 5.2 2.0 2.0
148 6.2 3.4 5.4 2.3 2.0
149 5.9 3.0 5.1 1.8 2.0
[150 rows x 5 columns]
$
</code></pre>
<p>How can I debug more to see what is the problem with boxplot?</p>
|
<p>I'm using a Jupyter notebook and the same code shows me the plots. If you are running it as a plain Python script, I think you have to call</p>
<pre><code>import matplotlib.pyplot as plt

plt.show()
</code></pre>
<p>after the <code>boxplot</code> call to see the plots.</p>
|
python-3.x|pandas|dataframe
| 1
|
6,383
| 64,911,356
|
How to optimally update cells based on previous cell value / How to elegantly spread values of cell to other cells?
|
<p>I have a "large" DataFrame table with index being country codes (alpha-3) and columns being years (1900 to 2000) imported via a pd.read_csv(...) [as I understand, these are actually string so I need to pass it as '1945' for example].</p>
<p>The values are 0,1,2,3.
I need to "spread" these values until the next non-0 for each row.</p>
<ul>
<li>example : 0 0 1 0 0 3 0 0 2 1</li>
<li>becomes: 0 0 1 1 1 3 3 3 2 1</li>
</ul>
<p>I understand that I should not use iterations (current implementation is something like this, as you can see, using 2 loops is not optimal, I guess I could get rid of one by using apply(row) )</p>
<pre><code>def spread_values(df):
    for idx in df.index:
        previous_v = 0
        for t_year in range(min_year, max_year):
            current_v = df.loc[idx, str(t_year)]
            if current_v == 0 and previous_v != 0:
                df.loc[idx, str(t_year)] = previous_v
            else:
                previous_v = current_v
</code></pre>
<p>However, I am told I should use the <code>apply()</code> function, vectorisation, or a list comprehension, because this is not optimal.</p>
<p>The apply function however, regardless of the axis, does not allow to dynamically get the index/column (which I need to conditionally update the cell), and I think the core issue I can't make the vec or list options work is because I do <strong>not</strong> have a <strong>finite set</strong> of column names but rather a wide range (all examples I see use a handful of named columns...)</p>
<p>What would be the more <strong>optimal</strong> / more <strong>elegant</strong> solution here?</p>
<p>OR are DataFrames not suited for my data <strong>at all</strong>? what should I use instead?</p>
|
<p>You can use <code>df.replace(to_replace=0, method='ffill')</code>. This will fill all zeros in your dataframe (except for zeros occurring at the start of a column) with the previous non-zero value, column-wise.</p>
<p>If you want to do it <code>rowwise</code>, unfortunately the <code>.replace()</code> function does not accept an <code>axis</code> argument. But you can <code>transpose</code> your <code>dataframe</code>, replace the zeros and <code>transpose</code> it again: <code>df.T.replace(0, method='ffill').T</code></p>
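<p>A quick check on the example row from the question (column-wise here, since there is only one column):</p>
<pre><code>import pandas as pd

s = pd.DataFrame({'x': [0, 0, 1, 0, 0, 3, 0, 0, 2, 1]})
print(s.replace(to_replace=0, method='ffill')['x'].tolist())
# [0, 0, 1, 1, 1, 3, 3, 3, 2, 1]
</code></pre>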
|
python|pandas
| 1
|
6,384
| 65,045,427
|
Callbacks in tensorflow 2.3
|
<p>I was writing my own callback to stop training based on some custom condition. EarlyStopping has this to stop the training once condition is met:</p>
<pre><code>self.model.stop_training = True
</code></pre>
<p>e.g. from <a href="https://www.tensorflow.org/guide/keras/custom_callback" rel="nofollow noreferrer">https://www.tensorflow.org/guide/keras/custom_callback</a></p>
<pre><code>class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    """Stop training when the loss is at its min, i.e. the loss stops decreasing.

    Arguments:
        patience: Number of epochs to wait after min has been hit. After this
        number of no improvement, training stops.
    """

    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights to store the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epoch it has waited when loss is no longer minimum.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best as infinity.
        self.best = np.Inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("loss")
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            # Record the best weights if current results is better (less).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
</code></pre>
<p>The thing is, it doesn't work for tensorflow 2.2 and 2.3. Any idea for a workaround? How else can one stop the training of a model in tf 2.3?</p>
|
<p>I copied your code and added a few print statements to see what is going on. I also changed the loss being monitored from training loss to validation loss, because training loss tends to keep decreasing over many epochs while validation loss tends to level out faster. Better to monitor validation loss for early stopping and for saving weights than to use training loss. Your code runs fine and does stop training if the loss does not reduce after <code>patience</code> epochs. Make sure you have the code below:</p>
<pre><code>patience=3 # set patience value
callbacks=[EarlyStoppingAtMinLoss(patience)]
# in model.fit include callbacks=callbacks
</code></pre>
<p>Here is your code modified with print statements so you can see what is going on</p>
<pre><code>class EarlyStoppingAtMinLoss(keras.callbacks.Callback):
    def __init__(self, patience=0):
        super(EarlyStoppingAtMinLoss, self).__init__()
        self.patience = patience
        # best_weights to store the weights at which the minimum loss occurs.
        self.best_weights = None

    def on_train_begin(self, logs=None):
        # The number of epoch it has waited when loss is no longer minimum.
        self.wait = 0
        # The epoch the training stops at.
        self.stopped_epoch = 0
        # Initialize the best as infinity.
        self.best = np.Inf

    def on_epoch_end(self, epoch, logs=None):
        current = logs.get("val_loss")
        print('epoch = ', epoch + 1, ' loss= ', current, ' best_loss = ', self.best, ' wait = ', self.wait)
        if np.less(current, self.best):
            self.best = current
            self.wait = 0
            print(' loss improved setting wait to zero and saving weights')
            # Record the best weights if current results is better (less).
            self.best_weights = self.model.get_weights()
        else:
            self.wait += 1
            print(' for epoch ', epoch + 1, ' loss did not improve setting wait to ', self.wait)
            if self.wait >= self.patience:
                self.stopped_epoch = epoch
                self.model.stop_training = True
                print("Restoring model weights from the end of the best epoch.")
                self.model.set_weights(self.best_weights)

    def on_train_end(self, logs=None):
        if self.stopped_epoch > 0:
            print("Epoch %05d: early stopping" % (self.stopped_epoch + 1))
</code></pre>
<p>I copied your new code and ran it. Apparently tensorflow does not evaluate model.stop_training during batches. So even though model.stop_training gets set to True in on_train_batch_end it continues processing the batches until all batches for the epoch are completed. Then at the end of the epoch tensorflow evaluates model.stop_training and training does stop.</p>
|
tensorflow|keras|callback|early-stopping
| 1
|
6,385
| 64,678,082
|
Getting same result for different CSV files
|
<p>DESCRIPTION: I have a piece of Python code, and this code takes a CSV file as input and produces a .player file as output. I've four different CSV files, hence, after running the code four times (taking each CSV file one by one), I've four .player files.</p>
<p>REPOSITORY: <a href="https://github.com/divkrsh/gridlab-d" rel="nofollow noreferrer">https://github.com/divkrsh/gridlab-d</a></p>
<p>DATA: The data in the CSV files are put through this code to produce a .player file as output in the range of 0 to 1. So, the code is supposed to read the second column of the CSV files and create a player file in the range of 0 to 1.</p>
<p>RUN:</p>
<pre><code>pip install -r requirements.txt
python player_adjuster.py Load1.csv
python player_adjuster.py Load2.csv
python player_adjuster.py Load3.csv
python player_adjuster.py Load4.csv
</code></pre>
<pre><code>PS C:\Users\JOHN\Documents\PYTHON\GRIDLAB-D> python player_adjuster.py Load1.csv
.csv
> Enter starttime:
Accepted format is 'YYYY-MM-DD HH:mm:ss'
? 2020-08-01 00:00:00
> Simulation Interval:
Example acceptable values:
1h, 10s, 5m, i.e. any other integer value followed by h,d,s or m
? 15m
> Player file name (dont provide extension.
It will automatically have *.player extension
? Load1
PS C:\Users\JOHN\Documents\PYTHON\GRIDLAB-D>
</code></pre>
<p><a href="https://i.stack.imgur.com/TVPUd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TVPUd.png" alt="output_directory" /></a></p>
<p>PROBLEM: The contents of all four .player files are the same. However, they should be different.</p>
<p>WHAT I NEED: Which part of the code is doing this? How can I correct this (i.e., receive different output for different CSV files)?</p>
|
<p>Analyzing the repository we can see:</p>
<p><code>x = np.arange(rows_to_make)</code></p>
<p><code>x = preprocessing.minmax_scale(x, feature_range=(0, rows_to_make), axis=0, copy=True)</code></p>
<p><code>y_new = preprocessing.minmax_scale(x, feature_range=(0, 1), axis=0, copy=True)</code></p>
<p><code>x</code> is the same for every CSV file (it depends on user input only), so the fault is definitely in the third line.</p>
<p>My guess is that the last line I mentioned here (line 87 in the repository) should be something like <code>y_new = preprocessing.minmax_scale(y_new, feature_range=(0, 1), axis=0, copy=True)</code></p>
|
python|python-3.x|pandas|numpy|csv
| 2
|
6,386
| 64,793,458
|
fastest polynomial evaluation in python
|
<p>I'm working on an old project of building Newton Basins and I'm trying to make it as fast as possible. The first thing I'm trying to speed up is how to evaluate a polynomial function at a given complex point <code>x0</code>. I thought of 4 different ways of doing this and tested them with <code>timeit</code>. The code I used is the following:</p>
<pre><code>import timeit
import numpy as np
import random
from numpy import polyval
class Test(object):
    re = random.randint(-40000, 40000)/10000
    im = random.randint(-40000, 40000)/10000
    x0 = complex(re, im)
    coefs = np.array([48,8,4,-10,2,-3,1])
    flip_coefs = np.flip(coefs)

    def solve0():
        y = np.array([Test.coefs[i]*(Test.x0**i) for i in range(len(Test.coefs))]).sum()
        return y

    def solve1():
        y = 0
        for i in range(len(Test.coefs)):
            y += Test.coefs[i]*(Test.x0**i)
        return y

    def solve2():
        y = np.dot(Test.coefs,Test.x0**np.arange(len(Test.coefs)))
        return y

    def solve3():
        y = polyval(Test.flip_coefs, Test.x0)
        return y


Test.solve0()

if __name__ == '__main__':
    print(timeit.timeit('Test.solve0()', setup="from __main__ import Test", number=10000))
    print(timeit.timeit('Test.solve1()', setup="from __main__ import Test", number=10000))
    print(timeit.timeit('Test.solve2()', setup="from __main__ import Test", number=10000))
    print(timeit.timeit('Test.solve3()', setup="from __main__ import Test", number=10000))
</code></pre>
<p>the thing is that I was pretty sure that <code>numpy.polyval()</code> would be the fastest, but it seems that, on Linux, <code>np.dot(coefs,x**np.arange(len(coefs)))</code> is more than twice as fast, regardless of the value of <code>x0</code> (I don't know if it is the same in Windows and MacOS). This is an output example I've got:</p>
<pre><code>0.1735437790002834
0.12607222800033924
0.0313361469998199
0.0796813930001008
</code></pre>
<p>This seems quite strange since <code>numpy.polyval()</code> was specifically built for evaluating polynomials. So, my questions are: Is there something I'm missing here (maybe related to the coefficients I chose)? Are there faster ways of evaluating polynomials?</p>
|
<p>Examining the <a href="https://github.com/numpy/numpy/blob/v1.19.0/numpy/lib/polynomial.py#L665-L735" rel="nofollow noreferrer">source code</a> of numpy's <code>polyval()</code> function, you'll observe that it is a pure-Python function. Numpy uses Horner's method for polynomial evaluation (and facilitates the evaluation of multiple points concurrently, though that case doesn't apply here).</p>
<p>To answer your questions about evaluation cost, I believe <a href="https://en.wikipedia.org/wiki/Horner%27s_method" rel="nofollow noreferrer">Horner's method</a> is optimal from the algorithmic perspective: it evaluates a polynomial of degree <em>n</em> with only <em>n</em> multiplications and <em>n</em> additions, whereas your fastest method <code>solve2()</code> first has to build all the powers <code>x**np.arange(n)</code>, which costs extra multiplications. In terms of raw speed, I suspect the performance advantage of your approach over <code>polyval()</code> comes from the fact that the power and dot-product operations run in compiled C code inside numpy, while <code>polyval()</code>'s Horner loop executes in Python.</p>
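<p>For reference, a minimal pure-Python sketch of Horner's method (coefficients ordered highest degree first, matching your <code>flip_coefs</code>):</p>
<pre><code>def horner(coefs, x):
    """Evaluate a polynomial at x via Horner's method (n multiplies, n adds).

    `coefs` runs from the highest-degree term down, so
    horner([1, -3, 2], x) computes x**2 - 3*x + 2.
    """
    result = 0
    for c in coefs:
        result = result * x + c
    return result
</code></pre>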
<p>Addendum: For a comprehensive overview of the topic, I cannot recommend enough Knuth's <a href="https://doc.lagout.org/science/0_Computer%20Science/2_Algorithms/The%20Art%20of%20Computer%20Programming%20%28vol.%202_%20Seminumerical%20Algorithms%29%20%283rd%20ed.%29%20%5BKnuth%201997-11-14%5D.pdf" rel="nofollow noreferrer">The Art of Computer Programming. Vol. 2: Seminumerical Algorithms (3rd ed.)</a>, of which section 4.6.4 on page 485 refers to this situation. I believe Horner's method is optimal for a single polynomial evaluation, however if this polynomial is to be evaluated many times, then potentially further speedups can be gained if one allows preconditioning.</p>
|
python|algorithm|performance|numpy
| 3
|
6,387
| 64,841,988
|
How to apply distance IoU loss?
|
<p>I'm currently training custom dataset using this repository: <a href="https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch" rel="nofollow noreferrer">https://github.com/zylo117/Yet-Another-EfficientDet-Pytorch</a>.</p>
<p>The training results are not satisfactory to me, so I'm going to change the regression loss from smooth L1 loss to distance IoU loss.</p>
<p>The code for the regression loss in this repo is below:</p>
<pre><code> anchor_widths_pi = anchor_widths[positive_indices]
anchor_heights_pi = anchor_heights[positive_indices]
anchor_ctr_x_pi = anchor_ctr_x[positive_indices]
anchor_ctr_y_pi = anchor_ctr_y[positive_indices]
gt_widths = assigned_annotations[:, 2] - assigned_annotations[:, 0]
gt_heights = assigned_annotations[:, 3] - assigned_annotations[:, 1]
gt_ctr_x = assigned_annotations[:, 0] + 0.5 * gt_widths
gt_ctr_y = assigned_annotations[:, 1] + 0.5 * gt_heights
# efficientdet style
gt_widths = torch.clamp(gt_widths, min=1)
gt_heights = torch.clamp(gt_heights, min=1)
targets_dx = (gt_ctr_x - anchor_ctr_x_pi) / anchor_widths_pi
targets_dy = (gt_ctr_y - anchor_ctr_y_pi) / anchor_heights_pi
targets_dw = torch.log(gt_widths / anchor_widths_pi)
targets_dh = torch.log(gt_heights / anchor_heights_pi)
targets = torch.stack((targets_dy, targets_dx, targets_dh, targets_dw))
targets = targets.t()
# L1 loss
regression_diff = torch.abs(targets - regression[positive_indices, :])
regression_loss = torch.where(
torch.le(regression_diff, 1.0 / 9.0),
0.5 * 9.0 * torch.pow(regression_diff, 2),
            regression_diff - 0.5 / 9.0)
</code></pre>
<p>The code that i'm using as distance IoU is below:</p>
<pre><code>rows = bboxes1.shape[0]
cols = bboxes2.shape[0]
dious = torch.zeros((rows, cols))
if rows * cols == 0:
    return dious
exchange = False
bboxes1 = bboxes1.index_select(1, torch.LongTensor([1, 0, 3, 2]).to('cuda'))
if bboxes1.shape[0] > bboxes2.shape[0]:
    bboxes1, bboxes2 = bboxes2, bboxes1
    dious = torch.zeros((cols, rows))
    exchange = True
w1 = bboxes1[:, 2] - bboxes1[:, 0]
h1 = bboxes1[:, 3] - bboxes1[:, 1]
w2 = bboxes2[:, 2] - bboxes2[:, 0]
h2 = bboxes2[:, 3] - bboxes2[:, 1]
area1 = w1 * h1
area2 = w2 * h2
center_x1 = (bboxes1[:, 2] + bboxes1[:, 0]) / 2
center_y1 = (bboxes1[:, 3] + bboxes1[:, 1]) / 2
center_x2 = (bboxes2[:, 2] + bboxes2[:, 0]) / 2
center_y2 = (bboxes2[:, 3] + bboxes2[:, 1]) / 2
inter_max_xy = torch.min(bboxes1[:, 2:], bboxes2[:, 2:])
inter_min_xy = torch.max(bboxes1[:, :2], bboxes2[:, :2])
out_max_xy = torch.max(bboxes1[:, 2:], bboxes2[:, 2:])
out_min_xy = torch.min(bboxes1[:, :2], bboxes2[:, :2])
inter = torch.clamp((inter_max_xy - inter_min_xy), min=0)
inter_area = inter[:, 0] * inter[:, 1]
inter_diag = (center_x2 - center_x1)**2 + (center_y2 - center_y1)**2
outer = torch.clamp((out_max_xy - out_min_xy), min=0)
outer_diag = (outer[:, 0] ** 2) + (outer[:, 1] ** 2)
union = area1 + area2 - inter_area
dious = inter_area / union - (inter_diag) / outer_diag
dious = torch.clamp(dious, min=-1.0, max=1.0)
if exchange:
    dious = dious.T
loss = 1 - dious
return loss
</code></pre>
<p>The question is here:</p>
<ol>
<li><p>Should I apply the distance IoU loss for one target bbox to all pred bbox?
For example, there is 2 annotated bboxes, and 1000 predicted bboxes.
Should I calculate losses for twice like each annotated bbox vs 1000 predicted bboxes?</p>
</li>
<li><p>Should I change the predicted bbox into real coordinates for calculation?</p>
</li>
</ol>
|
<ol>
<li><p>I've seen this done a couple of ways, but typically the methods work by assigning the boxes. Calculating the 1000x2 array of IoU, you can assign each prediction box a ground-truth target and threshold the IoU for good/bad predictions as seen in RetinaNet, or assign each ground-truth target the best prediction box as seen in older YOLO. Either way, the loss is applied only to the assigned box pairs, not to each combination, so each assigned prediction focuses on a single target (a minimal sketch follows the list).</p>
</li>
<li><p>DIOU is invariant to scale</p>
</li>
</ol>
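<p>A minimal sketch of the RetinaNet-style assignment, answering question 1 (assuming <code>ious</code> is the <code>[num_preds, num_gt]</code> pairwise IoU matrix, e.g. from <code>torchvision.ops.box_iou</code>, and <code>diou_loss</code> is your function above; the names are illustrative):</p>
<pre><code>import torch
from torchvision.ops import box_iou

ious = box_iou(pred_boxes, gt_boxes)  # [num_preds, num_gt]

best_iou, best_gt = ious.max(dim=1)   # best ground-truth box per prediction
positive = best_iou >= 0.5            # keep only well-matched predictions

# regression loss only on the assigned pairs, not every combination
loss = diou_loss(pred_boxes[positive], gt_boxes[best_gt[positive]]).mean()
</code></pre>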
|
python|pytorch|object-detection|loss-function|bounding-box
| 1
|
6,388
| 40,026,441
|
Joining Dataframes on DatetimeIndex by Seconds and Minutes for NaNs
|
<p>I'm looking for a good way to align dataframes that each have a timestamp which "includes" seconds, without losing data. Specifically, my problem looks as follows:</p>
<p>Here <code>df1</code> is my "main" dataframe.</p>
<pre><code>ind1 = pd.date_range("20120101", "20120102",freq='S')[1:20]
data1 = np.random.randn(len(ind1))
df1 = pd.DataFrame(data1, index=ind1)
</code></pre>
<p>Eg. df1 could look like:</p>
<pre><code> 0
2012-01-01 00:00:01 2.738425
2012-01-01 00:00:02 -0.323905
2012-01-01 00:00:03 1.861855
2012-01-01 00:00:04 0.480284
2012-01-01 00:00:05 0.340270
2012-01-01 00:00:06 -1.139052
2012-01-01 00:00:07 -0.203018
2012-01-01 00:00:08 -0.398599
2012-01-01 00:00:09 -0.568802
2012-01-01 00:00:10 -1.539783
2012-01-01 00:00:11 -1.778668
2012-01-01 00:00:12 -1.488097
2012-01-01 00:00:13 0.889712
2012-01-01 00:00:14 -0.620267
2012-01-01 00:00:15 0.075169
2012-01-01 00:00:16 -0.091302
2012-01-01 00:00:17 -1.035364
2012-01-01 00:00:18 -0.459013
2012-01-01 00:00:19 -2.177190
</code></pre>
<p>In addition I have another dataframe, say df2:</p>
<pre><code>ind21 = pd.date_range("20120101", "20120102",freq='S')[2:7]
ind22 = pd.date_range("20120101", "20120102",freq='S')[12:19]
data2 = np.random.randn(len(ind21+ind22))
df2 = pd.DataFrame(data2, index=ind21+ind22)
</code></pre>
<p>df2 looks like (note the non-periodic timestamps):</p>
<pre><code> 0
2012-01-01 00:00:02 -1.877779
2012-01-01 00:00:03 1.772659
2012-01-01 00:00:04 0.037251
2012-01-01 00:00:05 -1.195782
2012-01-01 00:00:06 -0.145339
2012-01-01 00:00:12 -0.220673
2012-01-01 00:00:13 -0.581469
2012-01-01 00:00:14 -0.520756
2012-01-01 00:00:15 -0.562677
2012-01-01 00:00:16 0.109325
2012-01-01 00:00:17 -0.195091
2012-01-01 00:00:18 0.838294
</code></pre>
<p>Now, I join both to df and get:</p>
<pre><code>df = df1.join(df2, lsuffix='A')
0A 0
2012-01-01 00:00:01 2.738425 NaN
2012-01-01 00:00:02 -0.323905 -1.877779
2012-01-01 00:00:03 1.861855 1.772659
2012-01-01 00:00:04 0.480284 0.037251
2012-01-01 00:00:05 0.340270 -1.195782
2012-01-01 00:00:06 -1.139052 -0.145339
2012-01-01 00:00:07 -0.203018 NaN
2012-01-01 00:00:08 -0.398599 NaN
2012-01-01 00:00:09 -0.568802 NaN
2012-01-01 00:00:10 -1.539783 NaN
2012-01-01 00:00:11 -1.778668 NaN
2012-01-01 00:00:12 -1.488097 -0.220673
2012-01-01 00:00:13 0.889712 -0.581469
2012-01-01 00:00:14 -0.620267 -0.520756
2012-01-01 00:00:15 0.075169 -0.562677
2012-01-01 00:00:16 -0.091302 0.109325
2012-01-01 00:00:17 -1.035364 -0.195091
2012-01-01 00:00:18 -0.459013 0.838294
2012-01-01 00:00:19 -2.177190 NaN
</code></pre>
<p>This is fine, however, I would like to replace the NaN values in column 0 with the "minute level" value of df2. So only in cases where I don't have an exact match on the "seconds level", I would like to go back to the minute level. This could be a simple average over all values for that specific minute (here: 2012-01-01 00:00:00).</p>
<p>Thx for any help!</p>
|
<p>Use the DatetimeIndex attribute <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.minute.html" rel="nofollow"><code>.minute</code></a> to perform the grouping, then fill the missing values with the mean across each group (every minute):</p>
<pre><code>df['0'] = df.groupby(df.index.minute)['0'].transform(lambda x: x.fillna(x.mean()))
</code></pre>
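<p>Note that <code>df.index.minute</code> is the minute-of-hour, so if the data spans several hours this would lump e.g. 00:01:30 and 01:01:30 into one group. A sketch that is safe for longer spans — group by the timestamp floored to the minute instead:</p>
<pre><code>df['0'] = df.groupby(df.index.floor('T'))['0'].transform(lambda x: x.fillna(x.mean()))
</code></pre>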
|
python|pandas|datetimeindex|datetime64
| 0
|
6,389
| 44,077,331
|
Pandas iterate max value of a variable length slice in a series
|
<p>Let's assume i have a Pandas DataFrame as follows: </p>
<pre><code>import pandas as pd
idx = ['2003-01-02', '2003-01-03', '2003-01-06', '2003-01-07',
'2003-01-08', '2003-01-09', '2003-01-10', '2003-01-13',
'2003-01-14', '2003-01-15', '2003-01-16', '2003-01-17',
'2003-01-21', '2003-01-22', '2003-01-23', '2003-01-24',
'2003-01-27']
a = pd.DataFrame([1,2,0,0,1,2,3,0,0,0,1,2,3,4,5,0,1],
columns = ['original'], index = pd.to_datetime(idx))
</code></pre>
<p>I am trying to get the max for each slice of that DataFrame between two zeros.
In that example I would get: </p>
<pre><code>a['result'] = [0,2,0,0,0,0,3,0,0,0,0,0,0,0,5,0,1]
</code></pre>
<p>that is: </p>
<pre><code> original result
2003-01-02 1 0
2003-01-03 2 2
2003-01-06 0 0
2003-01-07 0 0
2003-01-08 1 0
2003-01-09 2 0
2003-01-10 3 3
2003-01-13 0 0
2003-01-14 0 0
2003-01-15 0 0
2003-01-16 1 0
2003-01-17 2 0
2003-01-21 3 0
2003-01-22 4 0
2003-01-23 5 5
2003-01-24 0 0
2003-01-27 1 1
</code></pre>
|
<ul>
<li>find zeros</li>
<li><code>cumsum</code> to make groups</li>
<li><code>mask</code> the zeros into their own group <code>-1</code></li>
<li>find the max location in each group <code>idxmax</code></li>
<li>get rid of the one for group <code>-1</code>, that was for zeros anyway</li>
<li>get <code>a.original</code> for found max locations, reindex and fill with zeros</li>
</ul>
<hr>
<pre><code>m = a.original.eq(0)
g = a.original.groupby(m.cumsum().mask(m, -1))
i = g.idxmax().drop(-1)
a.assign(result=a.loc[i, 'original'].reindex(a.index, fill_value=0))
original result
2003-01-02 1 0
2003-01-03 2 2
2003-01-06 0 0
2003-01-07 0 0
2003-01-08 1 0
2003-01-09 2 0
2003-01-10 3 3
2003-01-13 0 0
2003-01-14 0 0
2003-01-15 0 0
2003-01-16 1 0
2003-01-17 2 0
2003-01-21 3 0
2003-01-22 4 0
2003-01-23 5 5
2003-01-24 0 0
2003-01-27 1 1
</code></pre>
|
pandas|python-3.5
| 4
|
6,390
| 44,165,605
|
Pandas add calculated row to bottom of dataframe
|
<p>Below is a small sample of a dataframe I have, and I want to add a calculated row to the bottom of it:</p>
<pre><code>sch q1 q2 q3
acc Yes Yes No
acc Yes No No
acc Yes No No
acc Yes Yes Yes
</code></pre>
<p>I want to add a row at the bottom that will give me the percentage of values that are 'Yes' for each column, so that it would look like below. </p>
<pre><code>sch q1 q2 q3
acc Yes Yes No
acc Yes No No
acc Yes No No
acc Yes Yes Yes
acc 1.00 0.5 0.25
</code></pre>
<p>Any help would be greatly appreciated.</p>
|
<p>I see your lambda and raise a pure pandas solution:</p>
<pre><code>df.append(df.eq('Yes').mean(), ignore_index=True)
</code></pre>
<p>You don't specify what should happen to the <code>sch</code> column, so I ignored it. In my current solution this column will get the value <code>0</code>.</p>
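<p>For newer pandas (2.0+), where <code>DataFrame.append</code> was removed, an equivalent sketch with <code>concat</code>:</p>
<pre><code>import pandas as pd

row = df.eq('Yes').mean()
df_out = pd.concat([df, row.to_frame().T], ignore_index=True)
</code></pre>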
|
python|pandas
| 3
|
6,391
| 69,559,617
|
Open certain amount of files by glob
|
<p>I'm trying to use <code>glob</code> to open the Excel files in one folder and then <code>concat</code> them into one file, but it takes quite a long time to open all the files and concat them like that (each file contains around 20,000 rows).</p>
<p>So I would like to ask: is there any way to open only a certain number of files using glob? E.g. the 30 most recent files. Or is there another way to do it?</p>
<p>Thanks and best regards</p>
|
<blockquote>
<p>Or is there another way to make it</p>
</blockquote>
<p>I generally deal with this by using the os method <code>listdir</code> to list all available files in a given directory (<em>e.g.</em> <code>path_to_files</code>), then open them using the pandas <code>read_csv</code> or <code>read_excel</code> method and append them to a <code>list_of_dataframes</code> to concatenate:</p>
<pre><code>import os
import pandas as pd
from pathlib import Path

path_to_files = Path('...')  # the path to the folder containing your Excel files

list_of_dataframes = []
for myfile in os.listdir(path_to_files):
    pathtomyfile = path_to_files / myfile
    list_of_dataframes.append(pd.read_csv(pathtomyfile))  # or pd.read_excel for .xlsx

df = pd.concat(list_of_dataframes)
</code></pre>
<p>The number of files to load can be specified by indexing, <em>e.g.</em> for the last 30 files:</p>
<p><code>for myfile in os.listdir(path_to_files)[-30:]</code></p>
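<p>Note that <code>os.listdir</code> returns entries in arbitrary order, so "the last 30" are not necessarily the most recent files. A sketch that sorts by modification time first:</p>
<pre><code>files = sorted(os.listdir(path_to_files),
               key=lambda f: os.path.getmtime(path_to_files / f))
for myfile in files[-30:]:
    ...
</code></pre>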
|
python|pandas
| 1
|
6,392
| 69,543,371
|
Filter rows based on multiple columns entries
|
<p>I have a dataframe which contains millions of entries and looks something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Chr</th>
<th style="text-align: center;">Start</th>
<th style="text-align: right;">Alt</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">21651521</td>
<td style="text-align: right;">A</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">41681521</td>
<td style="text-align: right;">T</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">41681521</td>
<td style="text-align: right;">T</td>
</tr>
<tr>
<td style="text-align: left;">...</td>
<td style="text-align: center;">...</td>
<td style="text-align: right;">...</td>
</tr>
<tr>
<td style="text-align: left;">X</td>
<td style="text-align: center;">423565</td>
<td style="text-align: right;">T</td>
</tr>
</tbody>
</table>
</div>
<p>I am currently trying to count the number of rows that match several conditions at the same time, i.e. <code>Chr==1</code>, <code>Start==41681521</code> and <code>Alt==T</code>.
Right now I am using this syntax, which works fine, but seems unpythonic and is also rather slow I think.</p>
<pre class="lang-py prettyprint-override"><code>num_occurrence = sum((df["Chr"] == chrom) &
(df["Start"] == int(position)) &
(df["Alt"] == allele))
</code></pre>
<p>Does anyone have an approach which is more suitable than mine?
Any help is much appreciated!</p>
<p>Cheers!</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> + <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.sum.html" rel="nofollow noreferrer"><code>Series.sum</code></a>:</p>
<pre><code>res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
</code></pre>
<p>For example:</p>
<pre><code>import pandas as pd
# toy data
df = pd.DataFrame(data=[[1, 21651521, "A"], [1, 41681521, "T"], [1, 41681521, "T"]], columns=["Chr", "Start", "Alt"])
chrom, position, allele = 1, "21651521", "A"
res = (df[["Chr", "Start", "Alt"]] == [chrom, int(position), allele]).all(1).sum()
print(res)
</code></pre>
<p><strong>Output</strong></p>
<pre><code>1
</code></pre>
|
python|pandas
| 1
|
6,393
| 40,819,246
|
python pandas - retrieve index timestamp value at cummax()
|
<p>I am retrieving the cummax() value of the following dataframe,</p>
<pre><code> exit_price trend netgain high low MFE_pr
exit_time
2000-02-01 01:00:00 1400.25 -1 1.00 1401.50 1400.25 1400.25
2000-02-01 01:30:00 1400.75 -1 0.50 1401.00 1399.50 1399.50
2000-02-01 02:00:00 1400.00 -1 1.25 1401.00 1399.75 1399.50
2000-02-01 02:30:00 1399.25 -1 2.00 1399.75 1399.25 1399.25
2000-02-01 03:00:00 1399.50 -1 1.75 1400.00 1399.50 1399.25
2000-02-01 03:30:00 1398.25 -1 3.00 1399.25 1398.25 1398.25
2000-02-01 04:00:00 1398.75 -1 2.50 1399.00 1398.25 1398.25
2000-02-01 04:30:00 1400.00 -1 1.25 1400.25 1399.00 1398.25
2000-02-01 05:00:00 1400.25 -1 1.00 1400.50 1399.25 1398.25
2000-02-01 05:30:00 1400.50 -1 0.75 1400.75 1399.50 1398.25
</code></pre>
<p>with the following formula</p>
<pre><code>trade ['MFE_pr'] = np.nan
trade ['MFE_pr'] = trade ['MFE_pr'].where(trade ['trend']<0, trade.high.cummax())
trade ['MFE_pr'] = trade ['MFE_pr'].where(trade ['trend']>0, trade.low.cummin())
</code></pre>
<p>Is there a way to retrieve, for each row, the timestamp of the row that the <code>cummax()</code> value is taken from — something similar to <code>.idxmax()</code> but for <code>cummax()</code>?</p>
|
<p>This is probably what you are looking for.</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame({'a': [1, 2, 1, 3, 2, 5, 4, 3, 5]},
index=pd.DatetimeIndex(start=
datetime.datetime.fromtimestamp(0),
periods=9, freq='D'))
df['cummax'] = df.a.cummax()
df['timestamp'] = df.index
df = df.merge(df.groupby('cummax')[['timestamp']].first().reset_index(), on='cummax')
df.rename(columns={'timestamp_y': 'max_timestamp'}, inplace=True)
df.index=df.timestamp_x.values
del df['timestamp_x']
print(df)
a cummax max_timestamp
1970-01-01 03:00:00 1 1 1970-01-01 03:00:00
1970-01-02 03:00:00 2 2 1970-01-02 03:00:00
1970-01-03 03:00:00 1 2 1970-01-02 03:00:00
1970-01-04 03:00:00 3 3 1970-01-04 03:00:00
1970-01-05 03:00:00 2 3 1970-01-04 03:00:00
1970-01-06 03:00:00 5 5 1970-01-06 03:00:00
1970-01-07 03:00:00 4 5 1970-01-06 03:00:00
1970-01-08 03:00:00 3 5 1970-01-06 03:00:00
1970-01-09 03:00:00 5 5 1970-01-06 03:00:00
</code></pre>
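<p>A more compact, vectorized alternative sketch: record the timestamp each time the running max changes, then forward-fill (this matches the "first occurrence" behaviour of the merge above):</p>
<pre><code>s = df['a'].cummax()
stamps = pd.Series(df.index, index=df.index)
df['max_timestamp'] = stamps.where(s.ne(s.shift())).ffill()
</code></pre>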
|
python|pandas
| 2
|
6,394
| 41,216,528
|
Importing sklearn error
|
<p>So I have been trying to install numpy, scipy and sklearn for a course I am taking. After many issues and numerous attempts, I installed pycharm and used their built in package manager to get numpy and scipy. I also installed sklearn but when I import it in my code i get the following error:</p>
<pre><code>Traceback (most recent call last):
Python Shell, prompt 1, line 3
File "C:\Users\Berges\AppData\Local\Programs\Python\Python35\Lib\site-packages\sklearn\__init__.py", line 57, in <module>
from .base import clone
File "C:\Users\Berges\AppData\Local\Programs\Python\Python35\Lib\site-packages\sklearn\base.py", line 12, in <module>
from .utils.fixes import signature
File "C:\Users\Berges\AppData\Local\Programs\Python\Python35\Lib\site-packages\sklearn\utils\__init__.py", line 11, in <module>
from .validation import (as_float_array,
File "C:\Users\Berges\AppData\Local\Programs\Python\Python35\Lib\site-packages\sklearn\utils\validation.py", line 18, in <module>
from ..utils.fixes import signature
File "C:\Users\Berges\AppData\Local\Programs\Python\Python35\Lib\site-packages\sklearn\utils\fixes.py", line 406, in <module>
if np_version < (1, 12, 0):
builtins.TypeError: unorderable types: str() < int()
</code></pre>
<p>(I am using Python 3.5.2, and when I run <code>python3</code> in bash I can import sklearn just fine — but that seems to be using Python 3.4.3.)</p>
<p>UPDATE:</p>
<p>I installed Anaconda and attempted to run the following code from Wing IDE and Atom:</p>
<pre><code>import numpy as np
X = np.array([[-1,-1],[-2,-1],[-3.-2],[1,1],[2,1],[3,2]])
Y = np.array([1,1,1,2,2,2])
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X,Y)
print(clf.predict([[-0.8,-1]]))
</code></pre>
<p>I then get the following error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Berges\Downloads\test.py", line 6, in <module>
clf.fit(X,Y)
File "C:\Users\Berges\Anaconda3\lib\site-packages\sklearn\naive_bayes.py", line 173, in fit
X, y = check_X_y(X, y)
File "C:\Users\Berges\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 510, in check_X_y
ensure_min_features, warn_on_dtype, estimator)
File "C:\Users\Berges\Anaconda3\lib\site-packages\sklearn\utils\validation.py", line 373, in check_array
array = np.array(array, dtype=dtype, order=order, copy=copy)
ValueError: setting an array element with a sequence.
</code></pre>
|
<p>If you just want to get something up and running for a course on Windows, then I suggest you install the Anaconda package manager. It works like a breeze on Windows, is very easy to install, and contains all the necessary packages (you don't have to worry about version mismatches).</p>
<p>After you install Anaconda, change the PyCharm interpreter location to the Anaconda-installed Python interpreter.</p>
<p>Link: <a href="https://www.continuum.io/downloads" rel="nofollow noreferrer">https://www.continuum.io/downloads</a><br>
List of available packages: <a href="https://docs.continuum.io/anaconda/pkg-docs" rel="nofollow noreferrer">https://docs.continuum.io/anaconda/pkg-docs</a></p>
<p>As for the <code>ValueError</code> in your update: note the typo in <code>X</code> — <code>[-3.-2]</code> (a period instead of a comma) evaluates to the scalar <code>-5.0</code>, which makes the array ragged and triggers "setting an array element with a sequence". Write <code>[-3,-2]</code> instead.</p>
|
python|numpy|scipy|scikit-learn|pycharm
| 0
|
6,395
| 53,970,350
|
ValueError contains new labels when trying to label encode in python
|
<p>I have a dataset which requires label encoding. I am using sklearn's label encoder for the same.</p>
<p>Here is the reproducible code for the problem:</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import LabelEncoder
data11 = pd.DataFrame({'Transaction_Type': ['Mortgage', 'Credit reporting', 'Consumer Loan', 'Mortgage'],
'Complaint_reason': ['Incorrect Info', 'False Statement', 'Using a Debit Card', 'Payoff process'],
'Company_response': ['Response1', 'Response2', 'Response3', 'Response1'],
'Consumer_disputes': ['Yes', 'No', 'No', 'Yes'],
'Complaint_Status': ['Processing','Closed', 'Awaiting Response', 'Closed']
})
le = LabelEncoder()
data11['Transaction_Type'] = le.fit_transform(data11['Transaction_Type'])
data11['Complaint_reason'] = le.transform(data11['Complaint_reason'])
data11['Company_response'] = le.fit_transform(data11['Company_response'])
data11['Consumer_disputes'] = le.transform(data11['Consumer_disputes'])
data11['Complaint_Status'] = le.transform(data11['Complaint_Status'])
</code></pre>
<p>The desired output should be something like:</p>
<pre><code>({'Transaction_Type': ['1', '2', '3', '1'],
'Complaint_reason': ['1', '2', '3', '4'],
'Company_response': ['1', '2', '3', '1'],
'Consumer_disputes': ['1', '2', '2', '1'],
'Complaint_Status': ['1','2', '3', '2']
})
</code></pre>
<p>The problem is when I try to encode the columns:
'Transaction_Type' and 'Company_response' get encoded successfully but the columns 'Complaint_reason', 'Consumer_disputes' and 'Complaint_Status' throw errors. </p>
<p>For 'Complaint_reason':</p>
<pre><code>File "C:/Users/Ashu/untitled0.py", line 26, in <module>
data11['Complaint_reason'] = le.transform(data11['Complaint_reason'])
ValueError: y contains new labels: ['APR or interest rate' 'Account opening, closing, or management'
'Account terms and changes' ...
"Was approved for a loan, but didn't receive the money"
'Written notification about debt' 'Wrong amount charged or received']
</code></pre>
<p>and similarly for 'Consumer_disputes':</p>
<pre><code> File "<ipython-input-117-9625bd78b740>", line 1, in <module>
data11['Consumer_disputes'] = le.transform(data11['Consumer_disputes'].astype(str))
ValueError: y contains new labels: ['No' 'Yes']
</code></pre>
<p>and similarly for 'Complaint_Status':</p>
<pre><code> File "<ipython-input-119-5cd289c72e45>", line 1, in <module>
data11['Complaint_Status'] = le.transform(data11['Complaint_Status'])
ValueError: y contains new labels: ['Closed' 'Closed with explanation' 'Closed with monetary relief'
'Closed with non-monetary relief' 'Untimely response']
</code></pre>
<p>These all are categorical variables with fixed inputs in forms of sentences. Following is the data slice image:</p>
<p><a href="https://i.stack.imgur.com/EekQU.png" rel="nofollow noreferrer">Categorical Data Label Encoding</a></p>
<p>There are a couple of questions on this on SO but none have been answered successfully.</p>
|
<p>You are missing <strong>fit_transform()</strong> and that's why you are getting error.</p>
<p><strong>sklearn.preprocessing.LabelEncoder</strong> -> Encode labels with value between 0 and n_classes-1 (from official docs)</p>
<p>Still if you want to encode your classes between 1 and n_classes, you just need to add 1.</p>
<pre><code>data11['Transaction_Type'] = le.fit_transform(data11['Transaction_Type'])
data11['Transaction_Type']
</code></pre>
<p>Output:</p>
<pre><code>0 2
1 1
2 0
3 2
Name: Transaction_Type, dtype: int64
</code></pre>
<p><strong>Notice</strong> here that LabelEncoder() does its encoding in alphabetical order: it gives a label of 0 to <strong>Consumer Loan</strong>, which comes first in alphabetical order. Similarly, it gives a label of 2 to <strong>Mortgage</strong>, which comes last in that order.</p>
<p>Now, you have two ways to encode it, either accept the default output of LabelEncoder like this, </p>
<pre><code>data11['Transaction_Type'] = le.fit_transform(data11['Transaction_Type'])
data11['Complaint_reason'] = le.fit_transform(data11['Complaint_reason'])
data11['Company_response'] = le.fit_transform(data11['Company_response'])
data11['Consumer_disputes'] = le.fit_transform(data11['Consumer_disputes'])
data11['Complaint_Status'] = le.fit_transform(data11['Complaint_Status'])
</code></pre>
<p>Output: </p>
<pre><code> Transaction_Type Complaint_reason Company_response Consumer_disputes Complaint_Status
0 2 1 0 1 2
1 1 0 1 0 1
2 0 3 2 0 0
3 2 2 0 1 1
</code></pre>
<p>OR, if you want labels between 1 and n_classes, just shift the encoded values by 1. (Don't feed <code>sort_values()</code> into <code>fit_transform</code> as a shortcut: <code>fit_transform</code> returns a plain array, so sorting the input first scrambles the labels relative to the row order when you assign them back.)</p>
<pre><code>data11['Transaction_Type'] = le.fit_transform(data11['Transaction_Type']) + 1
data11['Complaint_reason'] = le.fit_transform(data11['Complaint_reason']) + 1
data11['Company_response'] = le.fit_transform(data11['Company_response']) + 1
data11['Consumer_disputes'] = le.fit_transform(data11['Consumer_disputes']) + 1
data11['Complaint_Status'] = le.fit_transform(data11['Complaint_Status']) + 1
</code></pre>
<p>Output:</p>
<pre><code>  Transaction_Type  Complaint_reason  Company_response  Consumer_disputes  Complaint_Status
0                 3                 2                 1                  2                 3
1                 2                 1                 2                  1                 2
2                 1                 4                 3                  1                 1
3                 3                 3                 1                  2                 2
</code></pre>
|
python|pandas|dataframe|encoding
| 1
|
6,396
| 54,116,967
|
What is an efficient way to make rows by every set of two data elements?
|
<p><strong>Objective</strong>: to take pairs from data and create a new labeled dataframe with the appropriate rows</p>
<pre><code>data = [2618926, -1, 2955664, 2978, 2959058, -1, 3038766, 4470, 3044420, -1]
column = ['Date','Value']
</code></pre>
<p>I need to create a dataframe from the variable 'Data' and display in the following format:</p>
<pre><code>Date Value
2618926 -1
2955664 2978
2959058 -1
2028766 4470
3044420 -1
</code></pre>
|
<p>I'll use my favourite <code>zip</code> and <code>iter</code> recipe:</p>
<pre><code>it = iter(data)
pd.DataFrame(list(zip(it, it)), columns=column)
# Or, let pandas exhaust the iterator for you.
# pd.DataFrame.from_records(zip(it, it), columns=column)
Date Value
0 2618926 -1
1 2955664 2978
2 2959058 -1
3 3038766 4470
4 3044420 -1
</code></pre>
<p><code>zip(it, it)</code> pairs up consecutive elements drawn from the same iterator: (0th, 1st), (2nd, 3rd), and so on.</p>
<hr>
<p>Another option is using <code>np.reshape</code>:</p>
<pre><code>pd.DataFrame(np.reshape(data, (-1, 2)), columns=column)
Date Value
0 2618926 -1
1 2955664 2978
2 2959058 -1
3 3038766 4470
4 3044420 -1
</code></pre>
|
python|python-3.x|pandas|list|dataframe
| 4
|
6,397
| 53,953,121
|
Dataframe summary math based on condition from another dataframe?
|
<p>I have what amounts to 3D data but can't install the Pandas recommended <a href="http://xarray.pydata.org/en/stable/" rel="nofollow noreferrer">xarray package</a>.</p>
<h3>df_values</h3>
<pre><code> | a b c
-----------------
0 | 5 9 2
1 | 6 9 5
2 | 1 6 8
</code></pre>
<h3>df_condition</h3>
<pre><code> | a b c
-----------------
0 | y y y
1 | y n y
2 | n n y
</code></pre>
<p>I know I can get the average of all values in <code>df_values</code> like this.</p>
<pre><code>df_values.stack().mean()
</code></pre>
<p><br>
Question... <br>
What is the simplest way to find the <code>average of df_values</code> where <code>df_condition == "y"</code>?</p>
|
<p>IIUC, Boolean mask (with <code>df = df_values</code> and <code>c = df_condition</code>):</p>
<pre><code>df[c.eq('y')].mean().mean()
6.5
</code></pre>
<p>Or you may want the overall mean of all the kept values:</p>
<pre><code>df[c.eq('y')].sum().sum()/c.eq('y').sum().sum()
5.833333333333333
</code></pre>
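<p>Since the question already uses <code>stack()</code>, the overall mean can also be written more directly — the mask turns non-<code>y</code> cells into NaN, and <code>stack()</code> drops them:</p>
<pre><code>df[c.eq('y')].stack().mean()
5.833333333333333
</code></pre>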
|
python|pandas|dataframe
| 1
|
6,398
| 54,117,856
|
How to create a loss-function for an unsupervised-learning model, where the ouput resembles the direct input for a game agent?
|
<p>I'm trying to setup a deep neuronal network, which predicts the next move for a game agent to navigate a world. To control the game agent it takes two float inputs. The first one controls the speed (0.0 = stop/do not move, 1.0 = max. speed). The second controls the steering (-1.0 = turn left, 0.0 = straight, +1.0 = turn right). </p>
<p>I designed the network so the it has two output neurons one for the speed (it has a sigmoid activation applied) and on for the steering (has a tanh activation). The actual input I want to feed the network is the pixel data and some game state values. </p>
<p>To train the network I would simply run a whole game (about 2000frames/samples). When the game is over I want to train the model. Here is where I struggle, how would my loss-function look like? While playing I collect all actions/ouputs from the network, the game state and rewards per frame/sample. When the game is done I also got the information if the agent won or lost.</p>
<p>Edit:</p>
<p>This post <a href="http://karpathy.github.io/2016/05/31/rl/" rel="nofollow noreferrer">http://karpathy.github.io/2016/05/31/rl/</a> got me inspired. Maybe I could use the discounted (move, turn) value-pairs, multiply them by (-1) if game agent lost and (+1) if it won. Now I can use these values as gradients to update the networks weights?</p>
<p>It would be nice if someone could help me out here.</p>
<p>All the best,
Tobs.</p>
|
<p>The problem you are describing belongs to <code>reinforcement-learning</code>, where an agent interacts with the environment and collects data: the game state, the action it took, and the reward/score it got at the end. Now there are many approaches.</p>
<p>The one you are describing is the <code>policy-gradient</code> method. The objective is <code>E[\sum r]</code>, where <code>r</code> is the score, and it has to be maximized. Its gradient is <code>A*grad(log(p_theta))</code>, where <code>A</code> is the advantage function, i.e. <code>+1/-1</code> for winning/losing, and <code>p_theta</code> is the probability of choosing the action, parameterized by <code>theta</code> (the neural network). If the agent won, the gradient update favors that policy because of the <code>+1</code>, and vice versa.</p>
<p>Note: There are many methods to design <code>A</code>, in this case <code>+1/-1</code> is chosen.</p>
<p>You can read about it in more detail <a href="https://medium.com/@jonathan_hui/rl-policy-gradients-explained-9b13b688b146" rel="nofollow noreferrer">here</a>.</p>
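<p>A minimal sketch of that loss for one finished game (PyTorch-style; <code>collected_log_probs</code> and <code>won</code> are assumed to be gathered during play):</p>
<pre><code>import torch

# log-probabilities of the actions the agent actually took, one per frame
log_probs = torch.stack(collected_log_probs)   # shape [num_frames]

# advantage A: +1 for a won game, -1 for a lost one
advantage = 1.0 if won else -1.0

# maximizing E[sum r] == minimizing -A * sum(log p_theta)
loss = -(advantage * log_probs).sum()
loss.backward()
</code></pre>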
|
tensorflow|deep-learning|unsupervised-learning|loss
| 1
|
6,399
| 54,063,768
|
Selecting rows with the highest value based on 1 column in the dataframe
|
<p>I have a dataframe with about 20k rows, with headings X, Y, Z, I, R, G, B (yes, it's a point cloud).</p>
<p>I would like to create numerous sub-dataframes by sorting the data according to column X and grouping it into chunks of 100 rows.
Subsequently I would like to sort each sub-dataframe according to column Y and break it down further into chunks of 50 rows.
The end result should be a group of sub-dataframes of 50 rows each, and I would like to pick out all the rows with the highest Z value in each sub-dataframe and write them to a CSV file.</p>
<p>I have gotten this far with my code, but I am not sure how to continue.</p>
<pre><code>import pandas as pd
headings = ['x', 'y', 'z']
data = pd.read_table('file.csv', sep=',', skiprows=[0], names=headings)
points = data.sort_values(by=['x'])
</code></pre>
|
<p>Considering a dummy dataframe of 1000 rows,</p>
<pre><code>df.head() # first 5 rows
X Y Z I R G B
0 6 6 0 3 7 0 2
1 0 8 3 6 5 9 7
2 8 9 7 3 0 4 5
3 9 6 8 5 1 0 0
4 9 0 3 0 9 2 9
</code></pre>
<p>Sort by <code>X</code>, split into 100-row chunks, sort each chunk by <code>Y</code>, split again into 50-row chunks, and keep the rows with the highest <code>Z</code> value <em>within each chunk</em>:</p>
<pre><code>import numpy as np
import pandas as pd

df = df.sort_values('X')

# list of 100-row dataframes
dfs_X = np.split(df, len(df) // 100)

results = pd.DataFrame()
for idx, df_x in enumerate(dfs_X):
    df_x = df_x.sort_values('Y')
    dfs_Y = np.split(df_x, len(df_x) // 50)
    for df_y in dfs_Y:
        # rows with the highest Z value inside this 50-row chunk
        rows = df_y[df_y['Z'] == df_y['Z'].max()]
        results = results.append(rows)

results.head()
</code></pre>
</code></pre>
<p><code>results</code> will contain, for every 50-row chunk, the rows that attain that chunk's highest <code>Z</code> value.</p>
<p>Output: First 5 rows</p>
<pre><code> X Y Z I R G B
541 0 0 9 0 3 6 2
610 0 2 9 3 0 7 6
133 0 4 9 3 3 9 9
731 0 5 9 5 1 0 2
629 0 5 9 0 9 7 7
</code></pre>
<p>Now, write this dataframe to <code>csv</code> using <code>df.to_csv()</code>.</p>
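<p>If you prefer to avoid the explicit loops, a more idiomatic sketch with positional group keys (same assumptions: the row count divides evenly into the chunk sizes; note that <code>idxmax</code> keeps only the first row per chunk in case of ties, and <code>'result.csv'</code> is an illustrative filename):</p>
<pre><code>import numpy as np

df = df.sort_values('X')
df = df.groupby(np.arange(len(df)) // 100, group_keys=False).apply(lambda g: g.sort_values('Y'))
results = df.loc[df.groupby(np.arange(len(df)) // 50)['Z'].idxmax()]
results.to_csv('result.csv', index=False)
</code></pre>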
|
python|pandas|csv
| 0
|