2,800
67,268,152
Pytorch: create a mask that is larger than the n-th quantile of each 2D tensor in a batch
<p>I have a <code>torch.Tensor</code> of shape <code>(2, 2, 2)</code> (can be bigger), where the values are normalized within range <code>[0, 1]</code>.</p> <p>Now I am given a positive integer <code>K</code>, which tells me that I need to create a mask where, for each 2D tensor inside the batch, values are 1 where they are larger than the <code>1/K</code> quantile of all the values, and 0 elsewhere. The returned mask also has shape <code>(2, 2, 2)</code>.</p> <p>For example, if I have a batch like this:</p> <pre class="lang-py prettyprint-override"><code>tensor([[[1., 3.], [2., 4.]], [[5., 7.], [9., 8.]]]) </code></pre> <p>and let <code>K=2</code>, it means that I must mask the values where they are greater than 50% of all the values inside each 2D tensor.</p> <p>In the example, the 0.5 quantiles are <code>2.5</code> and <code>7.5</code>, so this is the desired output:</p> <pre class="lang-py prettyprint-override"><code>tensor([[[0, 1], [0, 1]], [[0, 0], [1, 1]]]) </code></pre> <p>I tried:</p> <pre class="lang-py prettyprint-override"><code>a = torch.tensor([[[0, 1], [0, 1]], [[0, 0], [1, 1]]]) quantile = torch.tensor([torch.quantile(x, 1/K) for x in a]) torch.where(a &gt; val, 1, 0) </code></pre> <p>But this is the result:</p> <pre class="lang-py prettyprint-override"><code>tensor([[[0, 0], [0, 0]], [[1, 0], [1, 1]]]) </code></pre>
<pre><code>t = torch.tensor([[[1., 3.], [2., 4.]], [[5., 7.], [9., 8.]]]) t_flat = torch.reshape(t, (t.shape[0], -1)) quants = torch.quantile(t_flat, 1/K, dim=1) quants = torch.reshape(quants, (quants.shape[0], 1, 1)) res = torch.where(t &gt; quants, 1, 0) </code></pre> <p>and after this <code>res</code> is:</p> <pre><code>tensor([[[0, 1], [0, 1]], [[0, 0], [1, 1]]]) </code></pre> <p>which is what you wanted.</p>
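<p>For reference, a more compact variant of the same idea (a sketch assuming the same <code>t</code> and <code>K</code> as above), using <code>flatten</code> and broadcasting instead of explicit reshapes:</p> <pre><code>import torch

K = 2
t = torch.tensor([[[1., 3.], [2., 4.]], [[5., 7.], [9., 8.]]])
# per-2D-tensor quantile, then broadcast the comparison back to (N, H, W)
q = torch.quantile(t.flatten(start_dim=1), 1/K, dim=1)
mask = (t &gt; q.view(-1, 1, 1)).int()
</code></pre>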
python|pytorch
1
2,801
34,643,500
iterate through all dataframe columns
<p>I want to compare all rows of 2 given dataframes.</p> <p>How can I optimize the following code to dynamically iterate through all columns of the given pandas dataframes?</p> <pre class="lang-python prettyprint-override"><code>df1,df2 = pd.read_csv(...) for index2, row2 in df2.iterrows(): for index1, row1 in df1.iterrows(): if row1[0]==row2[0]: i = i+1 if row1[1]==row2[1]: i = i+1 if row1[2]==row2[2]: i = i+1 if row1[3]==row2[3]: i = i+1 print("# same values: "+str(i)) i = 0 </code></pre>
<p>IIUC you need to check whether a whole row of one dataframe is equal to the corresponding row of the other. You could compare the two dataframes for equality, then use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html?highlight=all#pandas.DataFrame.all" rel="nofollow"><code>all</code></a> method with <code>axis=1</code> to check rows, and then sum the result:</p> <pre><code>df1 = pd.DataFrame({'a': [1, 2, 3, 4, 5], 'b': [2, 3, 4, 5, 6]}) df2 = pd.DataFrame({'a': [1, 5, 3, 7, 5], 'b': [2, 3, 8, 5, 6]}) In [1531]: df1 == df2 Out[1531]: a b 0 True True 1 False True 2 True False 3 False True 4 True True In [1532]: (df1 == df2).all(axis=1) Out[1532]: 0 True 1 False 2 False 3 False 4 True dtype: bool In [1533]: (df1 == df2).all(axis=1).sum() Out[1533]: 2 result = (df1 == df2).all(axis=1).sum() In [1535]: print("# same values: "+str(result)) # same values: 2 </code></pre>
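<p>If instead you want the per-row count of matching cells (which is what the original loop prints), sum the comparison along <code>axis=1</code>; a sketch with the same <code>df1</code>/<code>df2</code>:</p> <pre><code># count matching cells for each aligned pair of rows
same_per_row = (df1 == df2).sum(axis=1)
for n in same_per_row:
    print("# same values: " + str(n))
</code></pre>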
python|pandas|dataframe
2
2,802
60,130,278
How to find all text values in a dataframe and put them into a list?
<p>I have a dataframe which contains both numbers and string values. I am struggling to find an elegant way to extract all strings into a list in order to then replace them with NaN. Could you please help me?</p> <p>Actually, I don't understand the best way to iterate through all values of a pandas dataframe. The only thing I can do is convert the dataframe to a list, and this seems rather clumsy to me.</p>
<p>You can iterate through a column like this:</p> <pre><code>import numpy as np df['column'] = df['column'].apply(lambda x: np.nan if isinstance(x, str) else x) </code></pre> <p>Three things are happening here:</p> <ul> <li>.apply() lets you apply a function to a dataframe or one of its columns</li> <li>the lambda is called once for every value in the column</li> <li>x is the cell value, which you can check with isinstance to see whether it is a string</li> </ul> <p>If you want to do this over all columns one by one, I'd modify the same solution as below (although it's not the most efficient):</p> <pre><code>for column in df.columns: df[column] = df[column].apply(lambda x: np.nan if isinstance(x, str) else x) </code></pre> <p>Let me know if this helps!</p>
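<p>A vectorized alternative, assuming every column is meant to be numeric, is to coerce the whole dataframe with <code>pd.to_numeric</code>, which turns anything non-numeric into NaN:</p> <pre><code>import pandas as pd

# values that cannot be parsed as numbers (including strings) become NaN
df = df.apply(pd.to_numeric, errors='coerce')
</code></pre>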
python|pandas
1
2,803
59,905,738
Looping over dataframe and selecting rows based on substring in Pandas
<p>I have a dataframe with 5 columns, one of which is 'TABLE_NAME'. That column has values such as:</p> <pre><code>A_value1 B_value1 B_value2 A_value150 </code></pre> <p>I want to print those that start with 'A_' only.</p> <p>I tried this but it's returning the following:</p> <pre><code>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). </code></pre> <p>Here is the code:</p> <pre><code>value = 'A_' for index, row in df.iterrows(): if df.loc[df['TABLE_NAME'].str.contains(value)]: print('y') else: print('n') </code></pre>
<p>The ValueError occurs because <code>df.loc[df['TABLE_NAME'].str.contains(value)]</code> returns a DataFrame, and a DataFrame has no single truth value to test in an <code>if</code>. You can use str.startswith instead:</p> <pre><code>df.loc[df.TABLE_NAME.str.startswith('A_')] </code></pre> <p>If you want to use your for loop, you can do:</p> <pre><code>value = 'A_' for index, row in df.iterrows(): if row['TABLE_NAME'].startswith(value): print('y') else: print('n') </code></pre>
python|string|pandas
1
2,804
49,847,771
How to iterate over a dataframe object list?
<p>I have a dataset with a lot of <code>int</code>, <code>float</code> and <code>object</code> variables. I've used the code below to extract only the names of the <code>object</code> variables into a <code>list</code>.</p> <pre><code>objects = df.dtypes[df.dtypes == "object"].index objects = list(objects) </code></pre> <p>And now I want to plot all these variables against another variable <code>Y</code>. I'm trying to do something like that but it's not working. See the code below:</p> <pre><code>import matplotlib.pyplot as plt import seaborn as sns i = 0 for i in objects: plt.figure(figsize=(15,8)) sns.boxplot(df.objects[i], df.Y) i = i+1 </code></pre> <p>I'm new to <code>Python</code> and I don't know exactly what I'm doing wrong.</p>
<p>I finally found an answer. The code below does what I was trying to do. It plots two boxplots against variable <code>Y</code>, one at a time.</p> <pre><code>objects = ['A', 'B'] for obj in objects: plt.figure(figsize=(15,8)) sns.boxplot(df[obj], df.Y) </code></pre>
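<p>Note that recent seaborn releases require the data arguments to be passed by keyword, so (a sketch, depending on your seaborn version) the loop becomes:</p> <pre><code>import matplotlib.pyplot as plt
import seaborn as sns

for obj in objects:
    plt.figure(figsize=(15, 8))
    sns.boxplot(x=df[obj], y=df['Y'])
</code></pre>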
python|pandas|dataframe|matplotlib
0
2,805
50,217,764
In Python, how to sort a dataframe containing accents?
<p>I use sort_values to sort a dataframe. The dataframe contains UTF-8 characters with accents. Here is an example:</p> <pre><code>&gt;&gt;&gt; df = pd.DataFrame ( [ ['i'],['e'],['a'],['é'] ] ) &gt;&gt;&gt; df.sort_values(by=[0]) 0 2 a 1 e 0 i 3 é </code></pre> <p>As you can see, the "é" with an accent is at the end instead of being after the "e" without an accent.</p> <p>Note that the real dataframe has several columns!</p>
<p>This is one way. The simplest solution, as suggested by @JonClements:</p> <pre><code>df = df.iloc[df[0].str.normalize('NFKD').argsort()] </code></pre> <p>An alternative, long-winded solution, normalization code <a href="https://stackoverflow.com/a/37926512/9209546">courtesy of @EdChum</a>:</p> <pre><code>df = pd.DataFrame([['i'],['e'],['a'],['é']]) df = df.iloc[df[0].str.normalize('NFKD').argsort()] # remove accents df[1] = df[0].str.normalize('NFKD')\ .str.encode('ascii', errors='ignore')\ .str.decode('utf-8') # sort by new column, then drop df = df.sort_values(1, ascending=True)\ .drop(1, axis=1) print(df) 0 2 a 1 e 3 é 0 i </code></pre> <hr>
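<p>Another option, assuming a suitable UTF-8 locale is installed on the machine, is to sort with locale-aware collation keys:</p> <pre><code>import locale

# assumes a French UTF-8 locale is available on the system
locale.setlocale(locale.LC_COLLATE, 'fr_FR.UTF-8')
df = df.loc[df[0].map(locale.strxfrm).sort_values().index]
</code></pre>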
python|string|pandas|sorting|dataframe
3
2,806
63,767,469
Pandas GroupBy and Sum python
<p>Hello everyone, I'm trying to group data by Date and then sum the second column, but I'm not getting the information I need.</p> <p>This is my data:</p> <pre><code>|Day |Messages|Codes | |----------|--------|-------| |2020-08-25|647 |34234 | |2020-08-25|6,396 |3425645| |2020-08-25|16,615 |64564 | |2020-08-26|16,066 |45654 | |2020-08-26|4,716 |343234 | |2020-08-26|748 |35455 | |2020-08-28|571 |343423 | |2020-08-28|14 |3423 | |2020-08-28|1 |SDAS2 | </code></pre> <p>The output that I need is like this:</p> <pre><code>|Day |Messages|Codes | |----------|--------|-------| |2020-08-25|23658 |34234 | | | |3425645| | | |64564 | |2020-08-26|21530 |45654 | | | |343234 | | | |35455 | |2020-08-28|586 |343423 | | | |3423 | | | |SDAS2 | </code></pre> <p>First I need to group by Day and then sum the Messages column. I tried the following lines but they don't work as I expect :c</p> <pre><code>#1 read = pd.read_csv('data.csv') read.groupby(['Day']) read.groupby(['Messages']).sum() read.to_html('test.html',index=False) #2 read = pd.read_csv('data.csv') group_day = read.groupby(['Day','Messages']).sum() group_day.to_html('test.html') #3 read = pd.read_csv('data.csv') read.groupby('Day')[['Messages','ShortCode']].sum() read.to_html('test.html',index=False) </code></pre>
<p>This might get you close to what you need:</p> <p><code>read.groupby(['Day','Messages','ShortCode']).size().reset_index().set_index(['Day','Messages'])</code></p>
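<p>If the goal is the exact table above (Messages summed per Day, Codes listed per row), here is one sketch, assuming <code>data.csv</code> has the columns Day, Messages and Codes, and that Messages is read as strings with thousands separators:</p> <pre><code>import pandas as pd

read = pd.read_csv('data.csv')
# strip the thousands separators so the column can be summed as integers
read['Messages'] = read['Messages'].str.replace(',', '').astype(int)
# replace each row's Messages with its day's total
read['Messages'] = read.groupby('Day')['Messages'].transform('sum')
# a two-level index makes to_html merge the repeated Day/Messages cells
read.set_index(['Day', 'Messages']).to_html('test.html')
</code></pre>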
python|pandas|dataframe
0
2,807
64,166,439
How to retain the colors of a PNG image when converting back from an array
<p>Whenever I convert a PNG image to a np.array and then convert it back to a PNG I lose all the colors of the image. I would like to be able to retain the colors of the original PNG when I am converting it back from a np.array.</p> <p>Original PNG Image</p> <p><a href="https://i.stack.imgur.com/gsmur.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gsmur.png" alt="enter image description here" /></a></p> <p>My code:</p> <pre><code>from PIL import Image im = Image.open('2007_000129.png') im = np.array(im) #augmenting image im[0,0] = 1 im = Image.fromarray(im, mode = 'P') </code></pre> <p>Outputs a black and white version of the image</p> <p><a href="https://i.stack.imgur.com/zWopI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zWopI.png" alt="enter image description here" /></a></p> <p>I also tried using <code>getpalette</code> and <code>putpalette</code>, but this does not work; <code>putpalette</code> just returns a NoneType object.</p> <pre><code>im = Image.open('2007_000129.png') pat = im.getpalette() im = np.array(im) im[0,0] = 1 im = Image.fromarray(im, mode = 'P') im= im.putpalette(pat) </code></pre>
<p>Your image uses single-channel color with a palette (PIL's 'P' mode). Try the code below. You can also read more about this subject at <a href="https://stackoverflow.com/questions/52307290/what-is-the-difference-between-images-in-p-and-l-mode-in-pil">What is the difference between images in &#39;P&#39; and &#39;L&#39; mode in PIL?</a></p> <pre><code>from PIL import Image import numpy as np im = Image.open('gsmur.png') rgb = im.convert('RGB') np_rgb = np.array(rgb) p = im.convert('P') np_p = np.array(p) im = Image.fromarray(np_p, mode = 'P') im.show() im2 = Image.fromarray(np_rgb) im2.show() </code></pre>
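<p>For completeness: the <code>putpalette</code> attempt in the question fails only because <code>putpalette</code> modifies the image in place and returns <code>None</code>, so assigning its result throws the image away. A sketch that keeps the palette route:</p> <pre><code>from PIL import Image
import numpy as np

im = Image.open('2007_000129.png')
pal = im.getpalette()           # save the palette before converting
arr = np.array(im)
arr[0, 0] = 1                   # augment the index array
out = Image.fromarray(arr, mode='P')
out.putpalette(pal)             # in place; do not assign the result
out.save('augmented.png')
</code></pre>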
python|numpy|python-imaging-library|png|color-palette
2
2,808
64,002,589
Importing numpy works from the macOS Terminal's interactive python but not when launched by a python script
<p>My goal is to be able to run NumPy through simple scripts. Being new at this, simple is not so simple. From the Terminal running python, NumPy works just fine. However, I can not import it from a script. The numpy sample runs from python with the following result.</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.arange(15).reshape(3, 5) &gt;&gt;&gt; a array([[ 0, 1, 2, 3, 4], [ 5, 6, 7, 8, 9], [10, 11, 12, 13, 14]]) </code></pre> <p>However, from my script, runNumPy.py</p> <pre><code>#!/usr/bin/sh/env python3.6 print(&quot;Hello world! from runNumPy.py in python called by Terminal&quot;) import sys, os print(&quot;Current Working Directory: &quot;, os.getcwd()) import numpy as np </code></pre> <p>I get</p> <pre><code>&gt;&gt;&gt; a = np.arange(15).reshape(3, 5) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'np' is not defined </code></pre> <p>I have tried it with &quot;import numpy as np&quot; and without &quot;as np.&quot;</p> <p>My Terminal script is</p> <pre><code>#!/bin/sh echo &quot;Hello, world! Starts helloWorld.sh Terminal script.&quot; source opt/anaconda3/etc/profile.d/conda.sh conda activate bioETE cd opt; cd anaconda3; cd envs; cd bioETE; cd lib; cd python3.6; cd site-packages python ./runNumPyS.py #This runs runNumPyS.py from the terminal python ./runNumPy.py #This runs the runNumPy.py from the terminal python </code></pre> <p>Its output is</p> <pre><code>Hello, world! Starts helloWorld.sh Terminal script. Hello world: from helloWorld.py in python script called by Terminal Hello world! from runNumPy.py in python called by Terminal Current Working Directory: home/opt/anaconda3/envs/bioETE/lib/python3.6/site-packages Python 3.6.12 |Anaconda, Inc.| (default, Sep 8 2020, 17:50:39) [GCC Clang 10.0.0 ] on darwin Type &quot;help&quot;, &quot;copyright&quot;, &quot;credits&quot; or &quot;license&quot; for more information. &gt;&gt;&gt; </code></pre> <p>As stated before, my goal is to begin NumPy at the python prompt &gt;&gt;&gt; without using another import statement. This shell script works fine. It calls the python scripts.</p> <p>In the output above, the first &quot;Hello World&quot; was to show the shell script was working before I went any further. The other two, &quot;Hello World&quot;, were to see if the python scripts were called.</p> <p>You can see from the python script output that the python print commands and the sys and os imports worked OK. The os.getcwd() call seems to tell me I am in the correct directory..?</p> <p>The runNumPy.py script is</p> <pre><code>#!/usr/bin/sh/env python3.6 print(&quot;Hello world! from runNumPy.py in python called by Terminal&quot;) import sys, os print(&quot;Current Working Directory: &quot;, os.getcwd()) import numpy as np </code></pre> <p>At the python prompt, I get the following error.</p> <pre><code>&gt;&gt;&gt; a = np.arange(15).reshape(3, 5) Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; NameError: name 'np' is not defined </code></pre> <p>Clearly, numpy has not been imported. Again, at this same prompt, if I type in the &quot;import numpy as np&quot; first, NumPy works just fine.</p> <p>NumPy's module numpy is located at</p> <pre><code>home/opt/anaconda3/envs/bioETE/lib/python3.6/site-packages/numpy </code></pre> <p>I've tried placing the python script, runNumPy.py, in two places.</p> <pre><code>1)home/opt/anaconda3/envs/bioETE/lib/python3.6/site-packages 2)home/opt/anaconda3/envs/bioETE/lib/python3.6/ </code></pre> <p>Neither place seems to work. I'm stuck for the moment. Any and all helpful suggestions or a solution or two will be appreciated.</p>
<p>You write:</p> <blockquote> <p>As stated before my goal is to begin NumPy at the python prompt &gt;&gt;&gt; without using another import statement. This shell script works fine. It calls the python scripts.</p> </blockquote> <p>This is actually not a good goal. You should try to get out of the habit of running python from the interactive prompt. It's hard to develop significant programs that way. If you really want to run Python this way, you should look into using Jupyter Notebook or JupyterLab, both of which are included in your Python anaconda release.</p> <p>The reason you don't want to just hack a bunch of stuff into your environment is that you won't be able to create scripts or programs that others can easily use. This is also the reason that you don't want to build a dependency on your current directory into your program. You should rely on <code>site-packages</code> being in <code>sys.path</code>; you should not be changing your current directory so that <code>site-packages</code> is your current directory.</p> <p>Does this make sense?</p> <p>However, if you really want to muck with the command line environment, you will find it useful to read this document:</p> <ul> <li><a href="https://docs.python.org/3/using/cmdline.html" rel="nofollow noreferrer">https://docs.python.org/3/using/cmdline.html</a></li> </ul> <p>In particular, set the environment variable <code>PYTHONSTARTUP</code> to the path of a file containing the python commands that you want executed before python starts up in interactive mode.</p>
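<p>For example (a sketch; the file name is arbitrary):</p> <pre><code># ~/.pythonstartup.py -- hypothetical file name, any path works
# in your shell profile: export PYTHONSTARTUP=~/.pythonstartup.py
import numpy as np
print("numpy preloaded as np")
</code></pre> <p>The file is executed only for interactive sessions, so plain scripts are unaffected.</p>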
python|macos|numpy|terminal
0
2,809
46,669,135
Launch a model when the session is closed - Tensorflow
<p>I built a neural network with two hidden layers. When I launch the session I save it with:</p> <pre><code>saver.save(sess, "model.ckpt") </code></pre> <p>If I remain in the same session and I launch this code:</p> <pre><code>restorer=tf.train.Saver() with tf.Session() as sess: restorer.restore(sess,"./prova") new_graph = tf.train.import_meta_graph('prova.meta') new_graph.restore(sess, 'prova.ckpt') feed={ pred1.inputs:test_data, pred1.is_training:False } test_predict=sess.run(pred1.predicted,feed_dict=feed) </code></pre> <p>I can launch the model for the test.</p> <p>The question is: is there a way to launch the model after the session is closed? In particular, I save my training result in a .ckpt; can I re-launch the model at another time?</p>
<p>You can't run the model outside of <code>tf.Session</code>. The quote from <a href="https://www.tensorflow.org/api_docs/python/tf/Session" rel="nofollow noreferrer">the documentation</a>:</p> <blockquote> <p>A Session object encapsulates the environment in which Operation objects are executed, and Tensor objects are evaluated.</p> </blockquote> <p>But you can easily open and close the sessions many times, use an existing graph or load a previously saved graph, and use it in a <em>new session</em>. Here's a slightly modified <a href="https://www.tensorflow.org/programmers_guide/saved_model" rel="nofollow noreferrer">example from the doc</a>:</p> <pre><code>import tensorflow as tf v1 = tf.get_variable("v1", shape=[3], initializer = tf.zeros_initializer) v2 = tf.get_variable("v2", shape=[5], initializer = tf.zeros_initializer) inc_v1 = v1.assign(v1+1) dec_v2 = v2.assign(v2-1) init_op = tf.global_variables_initializer() saver = tf.train.Saver() with tf.Session() as sess: sess.run(init_op) inc_v1.op.run() dec_v2.op.run() save_path = saver.save(sess, "/tmp/model.ckpt") print("Model saved in file: %s" % save_path) with tf.Session() as sess: saver.restore(sess, "/tmp/model.ckpt") print("Model restored.") print("v1 : %s" % v1.eval()) print("v2 : %s" % v2.eval()) </code></pre> <p>Between these two sessions you can't evaluate <code>v1</code> and <code>v2</code>, but you can right after starting a new session.</p>
python-3.x|tensorflow|neural-network|save
1
2,810
46,802,122
Obtaining Indexes for maximum points in a 2D numpy array
<p>I know this seems to be somewhat of a common question, but none of the answers currently seem to help my situation. I have a 2D numpy array which stores a spectrogram of a song. I want to identify the peaks using numpy's where function (I know people have other solutions for peak finding, but that's not what I'm looking for). </p> <p>When I used this on my 2D array, I'm under the impression that it returns an array of x coordinates, and an array of y coordinates. Except almost all my x coordinates, except the last few, are all 5. The y coordinates seem like they would work except they go to high.</p> <p>Here is an example of the output:</p> <pre><code>Coefficient of Variation = 0.310873 Skew = 33.2851477504 Signal to Noise Ratio = 3.21674642281 Peak threshold Scaler = 23.5 Peak Amplitude threshold = 7.30551834404 [5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 5 6 6 6] [ 259 283 324 388 389 412 424 449 453 501 1357 1422 1458 1459 1482 1483 1486 1487 1535 1809 1874 1938 1939 1976 1999 2003 2068 2069 2084 2085 2100 2101 2102 2116 2117 2118 2133 2134 2149 2150 2165 2166 2181 2182 2197 2198 2199 2213 2214 2215 2229 2230 2231 2246 2247 2262 2263 2278 2279 2294 2295 2296 2326 2350 2366 2367 2379 2391 2415 2431 2443 2455 2456 2480 2496 2508 2520 2544 2556 2557 2568 2569 2843 3101 3126 3142 3154 3166 3190 3206 3207 3218 3219 3231 3255 3271 3283 3295 3296 3319 3320 3331 3332 3344 3356 3400 3412 3424 3449 3465 3477 3489 3513 3514 3529 3530 3541 3542 3554 3578 3590 3602 3614 4119 4127 4135 4159 4175 4176 4187 4188 4200 4224 4240 4252 4264 4265 4288 4289 4304 4305 4317 4329 4353 4365 4377 4389 4390 4393 4418 4434 4446 4458 4482 4498 4499 4510 4511 4523 4547 4563 4575 4587 4588 4611 4612 4623 4624 4636 4648 4652 4676 4692 4704 4716 4741 4757 4769 4781 4805 4806 4821 4822 4833 4834 424 1974 1976] Total Time: 0.853456020355 seconds Time to find peaks: 0.0450880527496 seconds Number of x coords: 188 Number of y coords: 188 Number of amplitudes: 188 </code></pre> <p>and my code looks like:</p> <pre><code>peaksx, peaksy = numpy.where(arr2D &gt; (arr2Dcoefvar*threshold)) amplitudes = arr2D[peaksx,peaksy] print(peaksx) print(peaksy) </code></pre> <p>Here you can see I want to get the coordinates for any point which value (z value really) is above 7.3055...</p> <p>The Shape of the arr2D is: (2049, 5037)</p> <p>Am I not using the where function correctly? From what I've read it seems like I am, but the values are complete wrong.</p> <p>Example Picture of it plotting incorrectly: <a href="https://i.stack.imgur.com/E39Sc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E39Sc.png" alt="bad maxima"></a></p> <p>Example Picture of a good plot: <a href="https://i.stack.imgur.com/UCysg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UCysg.png" alt="good maxima"></a></p> <p>Thanks a bunch!</p>
<p>To answer this question for others who are curious: it was an issue with index order versus matplotlib's. <code>numpy.where</code> returns one index array per axis, so for a 2D array the first array indexes the rows (the y axis when plotting) and the second indexes the columns (the x axis), much like matrix notation lists the height before the width. Therefore the code:</p> <pre><code>peaksx, peaksy = numpy.where(arr2D &gt; (arr2Dcoefvar*threshold)) </code></pre> <p>should be</p> <pre><code>peaksy, peaksx = numpy.where(arr2D &gt; (arr2Dcoefvar*threshold)) </code></pre> <p>and then the plot will come out correct! :)</p>
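<p>A tiny demonstration of the unpacking order:</p> <pre><code>import numpy as np

arr = np.array([[0, 9],
                [0, 0]])
rows, cols = np.where(arr &gt; 5)
print(rows, cols)  # [0] [1]: the 9 sits in row 0, column 1
</code></pre>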
python|arrays|numpy|multidimensional-array|spectrogram
0
2,811
38,858,177
Pandas - Sort by group membership numbers
<p>When faced with large numbers of groups, any graph you might make is apt to be useless due to having too many lines and an unreadable legend. In these cases, being able to find the groups that have the most and least information in them is very useful. However, while <code>x.size()</code> tells you the group membership counts (after having used <code>groupby</code>), there is no way I can find to re-sort the dataframe using this information, so that you can then loop over only the first x groups to graph them.</p>
<p>You can use <code>transform</code> to get the counts and sort on that column:</p> <pre><code>df = pd.DataFrame({'A': list('aabababc'), 'B': np.arange(8)}) df Out: A B 0 a 0 1 a 1 2 b 2 3 a 3 4 b 4 5 a 5 6 b 6 7 c 7 </code></pre> <hr> <pre><code>df['counts'] = df.groupby('A').transform('count') df Out: A B counts 0 a 0 4 1 a 1 4 2 b 2 3 3 a 3 4 4 b 4 3 5 a 5 4 6 b 6 3 7 c 7 1 </code></pre> <p>Now you can sort by <code>counts</code>:</p> <pre><code>df.sort_values('counts') Out: A B counts 7 c 7 1 2 b 2 3 4 b 4 3 6 b 6 3 0 a 0 4 1 a 1 4 3 a 3 4 5 a 5 4 </code></pre> <p>In one line:</p> <pre><code>df.assign(counts = df.groupby('A').transform('count')).sort_values('counts') </code></pre>
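<p>On newer pandas versions the same thing can be written a bit more directly (a sketch with the same <code>df</code>):</p> <pre><code># count rows per group without the intermediate DataFrame transform
df['counts'] = df.groupby('A')['A'].transform('size')
df = df.sort_values('counts')
</code></pre>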
python|pandas
3
2,812
63,239,708
Aggregate time series data to make a scatter plot
<p>I want to make a time series scatter plot for my time series data, where my data has categorical columns which need to be aggregated by group to make the plotting data first; then I make the scatter plot with either <code>seaborn</code> or <code>matplotlib</code>. My data is product sales price time series data, and I want to see each product owner's price trend on different market thresholds over time. I tried using <code>pandas.pivot_table</code> and <code>groupby</code> to shape the plotting data, but couldn't get the desired plot.</p> <p><strong>reproducible data</strong>:</p> <p>here is <a href="https://jmp.sh/v/BYVkqSjSFP8LZnVvYnKa" rel="nofollow noreferrer">example product data</a> that I used; I want to see each dealer's price trend for each protein type with respect to <code>threshold</code>.</p> <p><strong>my attempt</strong></p> <p>here is my current attempt to aggregate my data into plotting data, but it is not giving me the right plot. I suspect my way of aggregating the plotting data is not correct. Can anyone point out how to make this right to get the desired plot?</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sn mydf = pd.read_csv('foo.csv') mydf=mydf.drop(mydf.columns[0], axis=1) mydf['expected_price'] = mydf['price']*76/mydf['threshold'] g = mydf.groupby(['dealer','protein_type']) newdf= g.apply(lambda x: pd.Series([np.average(x['threshold'])])).unstack() </code></pre> <p>but the above attempt is not working, because I want plotting data for each dealer's market purchase price for each different <code>protein_type</code> and <code>threshold</code> along the daily time series. I don't know the best way of dealing with this time series. Can anyone suggest or correct me on how to get this right?</p> <p>I also tried <code>pandas/pivot_table</code> to aggregate my data, but it still does not produce the plotting data.</p> <pre><code>pv_df= pd.pivot_table(mydf, index=['date'], columns=['dealer', 'protein_type', 'threshold'],values=['price']) pv_df= pv_df.fillna(0) pv_df.groupby(['dealer', 'protein_type', 'threshold'])['price'].unstack().reset_index() </code></pre> <p>but the above attempt is still not working. Also, in my data the dates are not continuous, so I assume I could make a monthly time series line chart.</p> <p><strong>my attempt for making the plot</strong>:</p> <p>here is my attempt at making the plot:</p> <pre><code>def scatterplot(x_data, y_data, x_label, y_label, title): fig, ax = plt.subplots() ax.scatter(x_data, y_data, s = 30, color = '#539caf', alpha = 0.75) ax.set_title(title) ax.set_xlabel(x_label) ax.set_ylabel(y_label) fig.autofmt_xdate() </code></pre> <p><strong>desired output</strong>:</p> <p>I want either a line chart or a scatter plot where the x-axis shows the monthly time series while the y-axis shows the price of each different <code>protein_type</code> at each different <code>threshold</code> value for each different dealer. Here is an example of the possible line chart I want to have:</p> <p><a href="https://i.stack.imgur.com/qL56n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qL56n.png" alt="example line chart" /></a></p>
<h2>Updated with <code>threshold</code></h2> <h3>Option 1</h3> <ul> <li>This option was implemented after seeing the results of <strong>Option 1</strong>. <ul> <li>There is a lot of unexplained information in the plots and they do not clearly present the data</li> </ul> </li> <li>To clearly present the data, each plot should contain only 3 dimensions of data (e.g. <code>date</code>, <code>values</code> and <code>cats</code>) for one <code>dealer</code>, one <code>threshold</code>, and one <code>protein_type</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns from datetime import timedelta # read the data in and parse the date column and set threshold as a str df = pd.read_csv('data/so_data/2020-08-03 63239708/mydf.csv', parse_dates=['date']) # calculate expected price df['expected_price'] = df.price*76/df.threshold # set threshold as a category df.threshold = df.threshold.astype('category') # set the index df = df.set_index(['date', 'dealer', 'protein_type', 'threshold']) # form the dataframe into a long form dfl = df.drop(columns=['destination', 'quantity']).stack().reset_index().rename(columns={'level_4': 'cats', 0: 'values'}) # plot for pt in dfl.protein_type.unique(): for t in dfl.threshold.unique(): data = dfl[(dfl.protein_type == pt) &amp; (dfl.threshold == t)] if not data.empty: utc = len(data.threshold.unique()) f, axes = plt.subplots(nrows=utc, ncols= 2, figsize=(20, 4), squeeze=False) for j in range(utc): for i, d in enumerate(dfl.dealer.unique()): data_d = data[data.dealer == d].sort_values(['cats', 'date']).reset_index(drop=True) p = sns.scatterplot('date', 'values', data=data_d, hue='cats', ax=axes[j, i]) if not data_d.empty: p.set_title(f'{d}\nThreshold: {t}\n{pt}') p.set_xlim(data_d.date.min() - timedelta(days=60), data_d.date.max() + timedelta(days=60)) else: p.set_title(f'{d}: No Data Available\nThreshold: {t}\n{pt}') plt.show() </code></pre> <h3>First four plots</h3> <p><a href="https://i.stack.imgur.com/woor6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/woor6.png" alt="enter image description here" /></a></p> <h3>Option 2</h3> <ul> <li>This results in 4 separate figures with <code>threshold</code> as a <code>category</code> type.</li> <li><code>threshold</code> must first be left as an <code>int</code> for the <code>expected_price</code> calculation, and then converted.</li> <li>Note that my data does not have the extra unnamed column, so that will still need to be dropped, which is not shown in the following code.</li> </ul> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # read the data in and parse the date column and set threshold as a str df = pd.read_csv('data/so_data/2020-08-03 63239708/mydf.csv', parse_dates=['date']) # calculate expected price df['expected_price'] = df.price*76/df.threshold # set threshold as a category df.threshold = df.threshold.astype('category') # set the index df = df.set_index(['date', 'dealer', 'protein_type', 'threshold']) # form the dataframe into a long form dfl = df.drop(columns=['destination', 'quantity']).stack().reset_index().rename(columns={'level_4': 'cats', 0: 'values'}) # plot four plots with threshold for d in dfl.dealer.unique(): for pt in dfl.protein_type.unique(): plt.figure(figsize=(13, 7)) data = dfl[(dfl.protein_type == pt) &amp; (dfl.dealer == d)] sns.lineplot('date', 'values', data=data, hue='threshold', style='cats') plt.yscale('log') 
plt.title(f'{d}: {pt}') plt.legend(bbox_to_anchor=(1.04,0.5), loc=&quot;center left&quot;, borderaxespad=0) </code></pre> <h2><a href="https://i.stack.imgur.com/37fBP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/37fBP.png" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/OVR1T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OVR1T.png" alt="enter image description here" /></a></h2> <h2>Original without <code>threshold</code> as a category</h2> <ul> <li>I don't understand what you're doing with the following: <ul> <li><code>newdf= g.apply(lambda x: pd.Series([np.average(x['threshold'])])).unstack()</code></li> <li>I don't think this is integral to the main issue, which is plotting the data</li> </ul> </li> <li>First, the dataframe needs to be converted to a long format and <code>'destination'</code> needs to be dropped</li> <li>There are too many dimensions to plot on a single figure <ul> <li><code>x='date'</code>, <code>y='values'</code>, <code>hue='cats'</code>, <code>style='dealer'</code></li> <li><code>'protein_type'</code> needs to have a separate figure</li> <li>However, the data overlaps too much to be readable with <code>'dealer'</code> included, so 4 plots are required.</li> </ul> </li> </ul> <h2>DataFrame Setup:</h2> <ul> <li>Note that my data does not have the extra unnamed column, so that will still need to be dropped, which is not shown in the following code.</li> <li>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>pandas.DataFrame.stack</code></a> to convert the dataframe to a long form</li> </ul> <h3>Option 1:</h3> <pre class="lang-py prettyprint-override"><code>import pandas as pd import matplotlib.pyplot as plt import seaborn as sns # read the data in df = pd.read_csv('data/so_data/2020-08-03 63239708/mydf.csv', parse_dates=['date']) # your calculation df['expected_price'] = df['price']*76/df['threshold'] # set the index df = df.set_index(['date', 'dealer', 'protein_type']) # form the dataframe into a long form dfl = df.drop(columns=['destination']).stack().reset_index().rename(columns={'level_3': 'cats', 0: 'values'}) # display(dfl.head()) date dealer protein_type cats values 0 2001-12-22 Alpha Food Corps chicken threshold 50.00 1 2001-12-22 Alpha Food Corps chicken quantity 39037.00 2 2001-12-22 Alpha Food Corps chicken price 0.50 3 2001-12-22 Alpha Food Corps chicken expected_price 0.76 4 2001-12-27 Alpha Food Corps beef threshold 85.00 </code></pre> <h3>Option 2: Rolling Mean</h3> <ul> <li><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>pandas.DataFrame.groupby</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html" rel="nofollow noreferrer"><code>pandas.DataFrame.rolling</code></a> <code>mean</code> and then <code>.stack</code>.</li> </ul> <pre class="lang-py prettyprint-override"><code>df = pd.read_csv('data/so_data/2020-08-03 63239708/mydf.csv', parse_dates=['date']) df['expected_price'] = df['price']*76/df['threshold'] df = df.set_index('date') # groupby aggregate rolling mean and stack dfl = df.groupby(['dealer', 'protein_type'])[['expected_price', 'price']].rolling(7).mean().stack().reset_index().rename(columns={'level_3': 'cats', 0: 'values'}) </code></pre> <h2>Option 1: Two Plots</h2> <ul> <li>The <code>'dealer'</code> data is too similar to be differentiated (price collusion anyone?)</li> </ul>
<pre class="lang-py prettyprint-override"><code>for pt in dfl.protein_type.unique(): plt.figure(figsize=(9, 5)) data = dfl[dfl.protein_type == pt] sns.lineplot('date', 'values', data=data, hue='cats', style='dealer') plt.xlim(datetime(2001, 11, 1), datetime(2004, 8, 1)) plt.yscale('log') plt.title(pt) plt.legend(bbox_to_anchor=(1.04,0.5), loc=&quot;center left&quot;, borderaxespad=0) </code></pre> <p><a href="https://i.stack.imgur.com/aM8QW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aM8QW.png" alt="enter image description here" /></a></p> <ul> <li>Even with only <code>'price'</code> and <code>'expected_price'</code>, <code>'dealer'</code> can't be determined.</li> </ul> <p><a href="https://i.stack.imgur.com/2Qdag.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2Qdag.png" alt="enter image description here" /></a></p> <h2>Option 2: Four Plots</h2> <h3><a href="https://seaborn.pydata.org/generated/seaborn.FacetGrid.html#seaborn-facetgrid" rel="nofollow noreferrer"><code>seaborn.FacetGrid</code></a></h3> <pre class="lang-py prettyprint-override"><code>g = sns.FacetGrid(data=dfl, col='dealer', row='protein_type', hue='cats', height=5, aspect=1.5) g.map(sns.lineplot, 'date', 'values').add_legend() plt.yscale('log') g.set_xticklabels(rotation=90) </code></pre> <p><a href="https://i.stack.imgur.com/3JySU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3JySU.png" alt="enter image description here" /></a></p> <ul> <li>Plot of data from rolling mean</li> </ul> <p><a href="https://i.stack.imgur.com/hMYs1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hMYs1.png" alt="enter image description here" /></a></p> <h3>Nested Loop</h3> <ul> <li>This will produce one column of 4 figures, selected first for <code>dealer</code> and then <code>protein_type</code>.</li> <li>Optionally, swap the order of <code>dealer</code> and <code>protein</code></li> </ul> <pre class="lang-py prettyprint-override"><code>for d in dfl.dealer.unique(): for pt in dfl.protein_type.unique(): plt.figure(figsize=(10, 5)) data = dfl[(dfl.protein_type == pt) &amp; (dfl.dealer == d)] sns.lineplot('date', 'values', data=data, hue='cats') plt.xlim(datetime(2001, 11, 1), datetime(2004, 8, 1)) plt.yscale('log') plt.title(f'{d}: {pt}') plt.legend(bbox_to_anchor=(1.04,0.5), loc=&quot;center left&quot;, borderaxespad=0) </code></pre> <h2>CSV Sample:</h2> <pre class="lang-py prettyprint-override"><code>date,dealer,threshold,quantity,price,protein_type,destination 2001-12-22,Alpha Food Corps,50,39037,0.5,chicken,UK 2001-12-27,Alpha Food Corps,85,35432,1.8,beef,UK 2001-12-29,Alpha Food Corps,50,32142,0.5,chicken,UK 2001-12-30,Alpha Food Corps,85,34516,1.8,beef,UK 2002-01-02,Alpha Food Corps,85,39930,1.8,beef,UK 2002-01-04,Alpha Food Corps,85,40709,1.8,beef,UK 2002-01-08,Alpha Food Corps,94,37641,2.2,beef,UK 2002-01-08,Alpha Food Corps,85,37545,1.8,beef,UK 2002-01-08,Alpha Food Corps,85,37564,1.8,beef,UK 2002-01-08,Alpha Food Corps,85,37607,1.8,beef,UK 2002-01-08,Alpha Food Corps,85,41706,1.8,beef,UK 2002-01-08,Alpha Food Corps,90,41628,2.1,beef,UK 2002-01-08,Alpha Food Corps,65,35720,0.9,chicken,UK 2002-01-09,Alpha Food Corps,94,1581,2.2,beef,UK 2002-01-09,Alpha Food Corps,85,11426,1.8,beef,UK 2002-01-09,Alpha Food Corps,85,37489,1.8,beef,UK 2002-01-09,Alpha Food Corps,90,15630,2.1,beef,UK 2002-01-09,Alpha Food Corps,80,3136,1.6,beef,UK 2002-01-10,Alpha Food Corps,85,41919,1.8,beef,UK 2002-01-10,Alpha Food Corps,90,39932,2.1,beef,UK 2002-01-10,Alpha
Food Corps,90,41665,2.1,beef,UK 2002-01-10,Alpha Food Corps,90,41860,2.1,beef,UK 2002-01-10,Alpha Food Corps,65,39879,0.9,chicken,UK 2002-01-10,Alpha Food Corps,65,39884,0.9,chicken,UK 2002-01-11,Alpha Food Corps,90,37613,2.1,beef,UK 2002-01-12,Alpha Food Corps,90,41855,2.1,beef,UK 2002-01-13,Alpha Food Corps,90,37585,2.1,beef,UK 2002-01-15,Alpha Food Corps,85,41618,1.8,beef,UK 2002-01-15,Alpha Food Corps,85,41721,1.8,beef,UK 2002-01-15,Alpha Food Corps,85,41869,1.8,beef,UK 2002-01-15,Alpha Food Corps,85,41990,1.8,beef,UK 2002-01-15,Alpha Food Corps,90,41744,2.1,beef,UK 2002-01-15,Alpha Food Corps,90,41936,2.1,beef,UK 2002-01-15,Alpha Food Corps,65,41684,1.0,chicken,UK 2002-01-15,Alpha Food Corps,65,41776,1.0,chicken,UK 2002-01-16,Alpha Food Corps,94,35891,2.2,beef,UK 2002-01-16,Alpha Food Corps,85,39985,1.8,beef,UK 2002-01-16,Alpha Food Corps,85,41754,1.8,beef,UK 2002-01-16,Alpha Food Corps,85,41811,1.8,beef,UK 2002-01-16,Alpha Food Corps,90,39838,2.1,beef,UK 2002-01-16,Alpha Food Corps,80,3244,1.7,beef,UK 2002-01-17,Alpha Food Corps,94,22245,2.2,beef,UK 2002-01-17,Alpha Food Corps,85,5186,1.8,beef,UK 2002-01-17,Alpha Food Corps,90,2016,2.1,beef,UK 2002-01-17,Alpha Food Corps,90,40875,2.1,beef,UK 2002-01-17,Alpha Food Corps,65,41440,1.0,chicken,UK 2002-01-18,Alpha Food Corps,94,12525,2.2,beef,UK 2002-01-18,Alpha Food Corps,94,31325,2.2,beef,UK 2002-01-18,Alpha Food Corps,85,15486,1.8,beef,UK 2002-01-18,Alpha Food Corps,85,29992,1.8,beef,UK 2002-01-18,Alpha Food Corps,85,39938,1.8,beef,UK 2002-01-18,Alpha Food Corps,85,41777,1.8,beef,UK 2002-01-18,Alpha Food Corps,90,9475,2.1,beef,UK 2002-01-18,Alpha Food Corps,90,9960,2.1,beef,UK 2002-01-18,Alpha Food Corps,90,41676,2.1,beef,UK 2002-01-18,Alpha Food Corps,90,41816,2.1,beef,UK 2002-01-18,Alpha Food Corps,90,42036,2.1,beef,UK 2002-01-18,Alpha Food Corps,65,41673,1.0,chicken,UK 2002-01-19,Alpha Food Corps,85,19961,1.8,beef,UK 2002-01-19,Alpha Food Corps,90,19955,2.1,beef,UK 2002-01-19,Alpha Food Corps,90,40437,2.1,beef,UK 2002-01-19,Alpha Food Corps,65,41574,1.0,chicken,UK 2002-01-19,Alpha Food Corps,65,41700,1.0,chicken,UK 2002-01-20,Alpha Food Corps,94,23278,2.2,beef,UK 2002-01-20,Alpha Food Corps,85,9230,1.8,beef,UK 2002-01-20,Alpha Food Corps,85,38842,1.8,beef,UK 2002-01-20,Alpha Food Corps,90,9173,2.1,beef,UK 2002-01-20,Alpha Food Corps,90,38608,2.1,beef,UK 2002-01-20,Alpha Food Corps,50,39191,0.8,chicken,UK 2002-01-22,Alpha Food Corps,94,41741,2.2,beef,UK 2002-01-22,Alpha Food Corps,85,39879,1.8,beef,UK 2002-01-22,Alpha Food Corps,85,41683,1.8,beef,UK 2002-01-22,Alpha Food Corps,85,41958,1.8,beef,UK 2002-01-22,Alpha Food Corps,90,41833,2.1,beef,UK 2002-01-23,Alpha Food Corps,94,20294,2.2,beef,UK 2002-01-23,Alpha Food Corps,85,15553,1.8,beef,UK 2002-01-23,Alpha Food Corps,85,40753,1.8,beef,UK 2002-01-23,Alpha Food Corps,85,41740,1.8,beef,UK 2002-01-23,Alpha Food Corps,90,1892,2.1,beef,UK 2002-01-23,Alpha Food Corps,90,39850,2.1,beef,UK 2002-01-23,Alpha Food Corps,80,3231,1.7,beef,UK 2002-01-23,Alpha Food Corps,65,41415,1.1,chicken,UK 2002-01-24,Alpha Food Corps,90,35473,2.1,beef,UK 2002-01-24,Alpha Food Corps,90,41824,2.1,beef,UK 2002-01-24,Alpha Food Corps,65,41721,1.1,chicken,UK 2002-01-25,Alpha Food Corps,85,19983,1.8,beef,UK 2002-01-25,Alpha Food Corps,85,35823,1.8,beef,UK 2002-01-25,Alpha Food Corps,90,19949,2.1,beef,UK 2002-01-25,Alpha Food Corps,90,41800,2.1,beef,UK 2002-01-25,Alpha Food Corps,65,40990,1.1,chicken,UK 2002-01-26,Alpha Food Corps,90,39938,2.1,beef,UK 2002-01-26,Alpha Food Corps,90,40641,2.1,beef,UK 2002-01-26,Alpha 
Food Corps,90,41550,2.1,beef,UK 2002-01-27,Alpha Food Corps,94,16589,2.2,beef,UK 2002-01-27,Alpha Food Corps,85,11669,1.8,beef,UK 2002-01-27,Alpha Food Corps,90,24982,2.1,beef,UK 2002-01-27,Alpha Food Corps,65,29819,1.1,chicken,UK 2002-01-29,Alpha Food Corps,94,37516,2.2,beef,UK 2002-01-29,Alpha Food Corps,85,37378,1.8,beef,UK 2002-01-29,Alpha Food Corps,85,37535,1.8,beef,UK 2002-01-29,Alpha Food Corps,85,40174,1.8,beef,UK 2002-01-29,Alpha Food Corps,90,37831,2.1,beef,UK 2002-01-30,Alpha Food Corps,94,34435,2.2,beef,UK 2002-01-30,Alpha Food Corps,94,39640,2.2,beef,UK 2002-01-30,Alpha Food Corps,85,1619,1.8,beef,UK 2002-01-30,Alpha Food Corps,85,3058,1.8,beef,UK 2002-01-30,Alpha Food Corps,85,20929,1.8,beef,UK 2002-01-30,Alpha Food Corps,90,3641,2.1,beef,UK 2002-01-30,Alpha Food Corps,90,20974,2.1,beef,UK 2002-01-30,Alpha Food Corps,90,31160,2.1,beef,UK 2002-01-30,Alpha Food Corps,92,38189,2.3,beef,UK 2002-01-31,Alpha Food Corps,94,8804,2.2,beef,UK 2002-01-31,Alpha Food Corps,85,17398,1.8,beef,UK 2002-01-31,Alpha Food Corps,90,13963,2.1,beef,UK 2002-01-31,Alpha Food Corps,90,37673,2.1,beef,UK 2002-01-31,Alpha Food Corps,90,40330,2.1,beef,UK 2002-01-31,Alpha Food Corps,90,40511,2.2,beef,UK 2002-01-31,Alpha Food Corps,80,38290,1.9,beef,UK 2002-01-31,Alpha Food Corps,92,37193,2.3,beef,UK 2002-02-01,Alpha Food Corps,94,5011,2.2,beef,UK 2002-02-01,Alpha Food Corps,85,18783,1.8,beef,UK 2002-02-01,Alpha Food Corps,85,41827,1.8,beef,UK 2002-02-01,Alpha Food Corps,90,16394,2.1,beef,UK 2002-02-01,Alpha Food Corps,90,23013,2.1,beef,UK 2002-02-01,Alpha Food Corps,90,39923,2.1,beef,UK 2002-02-01,Alpha Food Corps,90,41417,2.1,beef,UK 2002-02-01,Alpha Food Corps,80,15592,1.7,beef,UK 2002-02-01,Alpha Food Corps,80,38364,1.9,beef,UK 2002-02-01,Alpha Food Corps,92,37605,2.3,beef,UK 2002-02-01,Alpha Food Corps,92,39234,2.3,beef,UK 2002-02-02,Alpha Food Corps,90,34578,2.1,beef,UK 2002-02-02,Alpha Food Corps,90,41661,2.1,beef,UK 2002-02-02,Alpha Food Corps,80,3157,1.7,beef,UK 2002-02-02,Alpha Food Corps,65,41272,1.2,chicken,UK 2002-02-02,Alpha Food Corps,65,41503,1.2,chicken,UK 2002-02-02,Alpha Food Corps,92,36207,2.3,beef,UK 2002-02-05,Alpha Food Corps,94,41559,2.2,beef,UK 2002-02-05,Alpha Food Corps,85,41549,1.8,beef,UK 2002-02-05,Alpha Food Corps,85,41753,1.8,beef,UK 2002-02-05,Alpha Food Corps,85,41908,1.8,beef,UK 2002-02-05,Alpha Food Corps,90,39813,2.1,beef,UK 2002-02-05,Alpha Food Corps,90,41526,2.1,beef,UK 2002-02-05,German Food Corps,80,36031,1.9,beef,UK 2002-02-05,German Food Corps,50,38538,0.9,chicken,UK 2002-02-05,Alpha Food Corps,50,38772,0.9,chicken,UK 2002-02-05,German Food Corps,50,39099,0.9,chicken,UK 2002-02-05,German Food Corps,50,39132,0.9,chicken,UK 2002-02-05,German Food Corps,50,39207,0.9,chicken,UK 2002-02-06,Alpha Food Corps,85,41947,1.8,beef,UK 2002-02-06,German Food Corps,80,37287,1.9,beef,UK 2002-02-06,Alpha Food Corps,89,43201,2.1,beef,UK 2002-02-06,German Food Corps,50,38553,0.9,chicken,UK 2002-02-06,German Food Corps,50,38837,0.9,chicken,UK 2002-02-06,Alpha Food Corps,50,38985,0.9,chicken,UK 2002-02-06,German Food Corps,65,40386,1.4,chicken,UK 2002-02-06,Alpha Food Corps,65,41851,1.2,chicken,UK 2002-02-06,Alpha Food Corps,92,38405,2.3,beef,UK 2002-02-06,German Food Corps,73,37731,1.5,chicken,UK 2002-02-07,Alpha Food Corps,85,41097,1.9,beef,UK 2002-02-07,Alpha Food Corps,90,39582,2.1,beef,UK 2002-02-07,German Food Corps,65,38832,1.4,chicken,UK 2002-02-07,German Food Corps,50,39269,0.9,chicken,UK 2002-02-07,German Food Corps,50,40129,0.9,chicken,UK 2002-02-07,German Food 
Corps,50,41124,0.8,chicken,UK 2002-02-07,German Food Corps,65,41739,1.2,chicken,UK 2002-02-08,Alpha Food Corps,85,20034,1.8,beef,UK 2002-02-08,German Food Corps,85,33503,1.9,beef,UK 2002-02-08,German Food Corps,85,40780,1.9,beef,UK 2002-02-08,Alpha Food Corps,90,19913,2.1,beef,UK 2002-02-08,Alpha Food Corps,90,36682,2.1,beef,UK 2002-02-08,Alpha Food Corps,90,41624,2.1,beef,UK 2002-02-08,German Food Corps,65,37503,1.4,chicken,UK 2002-02-08,German Food Corps,50,38973,0.9,chicken,UK 2002-02-08,German Food Corps,50,39069,0.9,chicken,UK 2002-02-08,German Food Corps,50,40697,0.9,chicken,UK 2002-02-08,German Food Corps,92,36103,2.3,beef,UK 2002-02-08,Alpha Food Corps,92,38278,2.3,beef,UK 2002-02-09,Alpha Food Corps,90,39842,2.1,beef,UK 2002-02-09,Alpha Food Corps,90,16553,2.3,beef,UK 2002-02-09,Alpha Food Corps,80,18739,1.9,beef,UK 2002-02-09,German Food Corps,80,36349,1.9,beef,UK 2002-02-09,German Food Corps,65,35238,1.4,chicken,UK 2002-02-09,German Food Corps,50,38391,0.9,chicken,UK 2002-02-09,Alpha Food Corps,50,38819,0.9,chicken,UK 2002-02-09,German Food Corps,50,41691,0.9,chicken,UK 2002-02-09,Alpha Food Corps,92,40245,2.3,beef,UK 2002-02-09,German Food Corps,73,37323,1.5,chicken,UK 2002-02-09,German Food Corps,90,40312,2.2,beef,UK 2002-02-10,Alpha Food Corps,90,42108,2.1,beef,UK 2002-02-10,German Food Corps,65,37831,1.4,chicken,UK 2002-02-11,Alpha Food Corps,50,38591,0.9,chicken,UK 2002-02-12,Alpha Food Corps,94,41559,2.3,beef,UK 2002-02-12,Alpha Food Corps,85,40968,1.8,beef,UK 2002-02-12,Alpha Food Corps,85,41985,1.8,beef,UK 2002-02-12,German Food Corps,50,38931,0.9,chicken,UK 2002-02-12,German Food Corps,50,38986,0.9,chicken,UK 2002-02-12,German Food Corps,92,39684,2.3,beef,UK 2002-02-12,German Food Corps,73,36619,1.5,chicken,UK 2002-02-13,Alpha Food Corps,85,41291,1.8,beef,UK 2002-02-13,Alpha Food Corps,85,41892,1.8,beef,UK </code></pre>
python|pandas|matplotlib|time-series|seaborn
7
2,813
62,987,940
How to write into Excel with the header following exactly the dictionary keys?
<p>I have a list of 20,000+ dictionaries, each with 59 keys and values. I need to export the list into Excel. Below is my script using pandas to write to Excel, but the problem is that the headers do not follow the order of the keys in the dictionaries. Below is just part of the list.</p> <pre><code>new_d=[{'file': '1_2', 'name': 'paul', 'role': 'engineer',.....}, {'file': '1_2', 'name': 'smith', 'role': 'engineer',....}, {'file': '1_2', 'name': 'mei', 'role': 'engineer',.....} . . . ] </code></pre> <p>Here is my code to export the list to Excel using pandas</p> <pre><code>ccf_df=pd.DataFrame(a) writer=pd.ExcelWriter('file.xlsx',engine='xlsxwriter') ccf_df.to_excel(writer,sheet_name='FCC') </code></pre> <p>Unfortunately, the output in the Excel file does not follow the order of the keys in the dictionary.</p> <p>It was supposed to be <code>file|name|role|actor|time|.....</code> but the output is <code>actor|file|time|role|name|.....</code></p>
<p>You can force the pandas Excel writer to keep the columns in the order you want by passing <code>columns=[list of columns]</code>:</p> <pre><code>ccf_df.to_excel(writer, sheet_name='CCF', columns=[list of columns]) </code></pre>
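<p>For example (a sketch using the <code>new_d</code> list from the question), the original key order can be taken from the first dictionary:</p> <pre><code>import pandas as pd

cols = list(new_d[0].keys())   # dicts preserve insertion order on Python 3.7+
ccf_df = pd.DataFrame(new_d)
writer = pd.ExcelWriter('file.xlsx', engine='xlsxwriter')
ccf_df.to_excel(writer, sheet_name='FCC', columns=cols)
writer.save()
</code></pre>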
python|excel|pandas|dictionary
1
2,814
63,169,314
Group dataframe by time slots in Pandas python
<p>I'm working with a dataset that comes from data sent by underground sensor stations, which provide an estimate of the flow of cars going through them. My data are grouped by hour for each sensor over the same period of time; this is what the df looks like:</p> <p><a href="https://i.stack.imgur.com/YbfsW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YbfsW.png" alt="enter image description here" /></a></p> <p>I would like to find trends in the flow over various time slots (like morning, afternoon, evening, night).</p> <p>My question is:</p> <p>Is there a way to group the data for each station_id into time slots? For example, group the data of each station from 00:00 to 06:00, from 06:00 to 12:00 and so on, and for every subgroup calculate the mean of the flow value.</p> <p>Concerning the time, I'm interested in keeping only the day and the month for each time slot.</p> <p>I've read the datetime documentation and tried some methods, but without success.</p> <p>I'll appreciate anyone who replies and helps me with any tip.</p>
<p>Create the bins and group by them:</p> <pre><code>df = pd.read_csv('readings_by_hour.csv') df['time'] = pd.to_datetime(df['time']) df['time_bins'] = df['time'].dt.floor('6h') df.groupby(['station_id', 'time_bins'])['flow'].mean() </code></pre>
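<p>To also label the slots and keep only the day and month, a sketch assuming the <code>station_id</code>, <code>time</code> and <code>flow</code> columns shown in the screenshot:</p> <pre><code>labels = {0: 'night', 6: 'morning', 12: 'afternoon', 18: 'evening'}
df['slot'] = df['time'].dt.floor('6h').dt.hour.map(labels)
df['month_day'] = df['time'].dt.strftime('%m-%d')
df.groupby(['station_id', 'month_day', 'slot'])['flow'].mean()
</code></pre>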
python|pandas|datetime
1
2,815
63,080,812
Sum for ids in python
<p>I have the below dataframe called &quot;df&quot; and am calculating the sum of the last amounts by unique id:</p> <pre><code>import pandas as pd from dateutil import parser from datetime import datetime, timedelta df= {'Date':['2019-01-11 10:23:45','2019-01-09 10:23:45', '2019-01-11 10:27:45', '2019-01-11 10:25:45', '2019-01-11 10:30:45', '2019-01-11 10:35:45', '2019-02-09 10:25:45'], 'Fruit id':['100','200','300','100','100', '100','200'], 'X':[200,400,330,100,300,200,500], } df= pd.DataFrame(df) df[&quot;Date&quot;] = pd.to_datetime(df['Date']) </code></pre>
<p><code>pivot_table</code> could be useful here:</p> <pre><code>import numpy as np df.sort_values(by='Date', inplace=True) newdf = pd.pivot_table(df, columns='Fruit id', index='Date', aggfunc=np.sum, values='Amount').rolling('30min', closed='left').sum().sort_index() newdf['Fruit id'] = df['Fruit id'].values df['count_ncc_amt'] = newdf.apply(lambda row: row[row['Fruit id']], axis=1).values print(df) Date Fruit id NCC Amount Sys count_ncc_amt 1 2019-01-09 10:23:45 200 100 400 0 NaN 0 2019-01-11 10:23:45 100 100 200 1 NaN 3 2019-01-11 10:25:45 100 100 100 0 200.0 2 2019-01-11 10:27:45 300 200 330 1 NaN 4 2019-01-11 10:30:45 100 100 300 1 300.0 5 2019-01-11 10:35:45 100 100 200 0 600.0 6 2019-02-09 10:25:45 200 100 500 1 NaN </code></pre>
python|pandas
1
2,816
63,284,825
LSTM model is giving me 99% R-squared even if my training data set is 5% of the overall set
<p>I'm using a LSTM model to perform time series forecasting. I have a weird issue where my R-squared is basically always 99% even if my training data set is 5% of my total data set! I plot the graph between the predicted values and the test data and it looks almost identical. How is this even possible?</p> <p>My data is like so after normalization</p> <pre><code>date 0 1 2 3 4 5 6 7 8 9 0 2019-01-01 00:00:01+00:00 0.000000 0.000000 0.000 1.000 0.000 0.500000 0.079178 0.076970 0.079109 0.077500 1 2019-01-01 00:00:02+00:00 0.000000 0.000000 0.000 1.000 0.000 0.500000 0.079178 0.076970 0.079109 0.077500 2 2019-01-01 00:00:07+00:00 0.000025 0.000103 0.000 0.492 0.508 0.738780 0.079178 0.076970 0.079109 0.077500 3 2019-01-01 00:00:07+00:00 0.000000 0.000002 0.000 1.000 0.000 0.500000 0.079178 0.076970 0.079109 0.077500 4 2019-01-01 00:00:08+00:00 0.000000 0.000000 0.000 0.000 1.000 0.711130 0.079178 0.076970 0.079109 0.077500 ... ... ... ... ... ... ... ... ... ... ... ... 116022 2020-07-28 08:39:59+00:00 0.000000 0.000000 0.000 0.844 0.156 0.786466 0.781738 0.782749 0.781928 0.782748 116023 2020-07-28 08:44:57+00:00 0.000000 0.000000 0.000 1.000 0.000 0.500000 0.781738 0.782749 0.781928 0.782748 116024 2020-07-28 08:47:59+00:00 0.000000 0.000000 0.244 0.756 0.000 0.279403 0.781738 0.782749 0.781928 0.782748 116025 2020-07-28 09:15:26+00:00 0.000000 0.000000 0.000 0.735 0.265 0.965187 0.781738 0.782749 0.781928 0.782748 116026 2020-07-28 09:15:40+00:00 0.000000 0.000000 0.000 0.784 0.216 0.755760 0.781738 0.782749 0.781928 0.782748 </code></pre> <pre><code>from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error from keras.models import Sequential from keras.layers import Dense, LSTM, Dropout from keras.optimizers import Adam model = Sequential() model.add(LSTM(64, input_shape=x_train.shape[1:3], return_sequences=False)) model.add(Dense(1)) model.compile(loss='mse', optimizer='adam', metrics=['mse']) history = model.fit(x_train, y_train, epochs=1, batch_size=1, verbose=1) train_pred = model.predict(x_train) y_pred = model.predict(x_test) print('R2 Score: ', r2_score(y_test, y_pred)) print('MAE: ', mean_absolute_error(y_test, y_pred)) </code></pre> <p>Results</p> <pre><code>R2 Score: 0.9959650143133337 MAE: 0.008859985819425287 </code></pre>
<p>Mathematically, the <a href="https://en.wikipedia.org/wiki/Coefficient_of_determination" rel="nofollow noreferrer">R-Squared</a>'s purpose is to estimate the fraction of the variance in your target that is explained by your model's independent features.</p> <p>The formula is: R² = 1 - (SSres / SStot),</p> <p>where SStot stands for the total sum of squares and SSres stands for the residual sum of squares.</p> <p>As SSres and SStot are both sums aggregated over the same n records of your dataset, the number of records you have (the training dataset in your case) can change the R-Squared metric somewhat but shouldn't change it dramatically: the dependence on n largely cancels out of the ratio. It is safe to say that R-Squared as a metric does not reflect anything about the amount of data you have.</p> <p>As for the 99% result you are dealing with in your model: it probably just means that your independent features have a pretty high predictive value for your dependent variable. I would check whether any of the X variables has a direct connection to the y variable (for example, an arithmetic combination that contains y's value). I would also try to get a sense of the standard deviation of every feature I include in the model; a small standard deviation may decrease SSres and therefore lead to a high R-Squared metric.</p> <p>Most importantly: R-Squared is not accuracy! The two metrics have very little to do with each other mathematically.</p>
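<p>A quick sanity check (a hypothetical sketch, not your data) that R-Squared is essentially unaffected by the number of records: repeating every point 10 times leaves it unchanged, because both SSres and SStot scale by the same factor:</p> <pre><code>import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.9])
print(r2_score(y_true, y_pred))                             # 0.986
print(r2_score(np.tile(y_true, 10), np.tile(y_pred, 10)))   # 0.986, identical
</code></pre>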
python|tensorflow|machine-learning|keras|lstm
0
2,817
67,916,079
Tensorflow ImportError: cannot import name 'model_lib_v2' from 'object_detection'
<p>Today I was working on a sign language detector using deep learning with <code>Tensorflow</code>,</p> <p>following the tutorial named <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html" rel="nofollow noreferrer">Training Custom Object Detector</a>, but as soon as I get to the training stage it gives me this error:</p> <pre><code>Traceback (most recent call last): File &quot;model_main_tf2.py&quot;, line 32, in &lt;module&gt; from object_detection import model_lib_v2 ImportError: cannot import name 'model_lib_v2' from 'object_detection' (C:\Python\379\lib\site-packages\object_detection\__init__.py) </code></pre> <p>See also the image below <a href="https://i.stack.imgur.com/8mJzU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8mJzU.png" alt="Image" /></a></p> <p>The tutorial says to put <code>model_main_tf2.py</code> in the <code>training_demo</code> folder, but then <code>model_main_tf2.py</code> can't import the classes because they are all local classes; take a look <a href="https://i.stack.imgur.com/mmwC9.png" rel="nofollow noreferrer">here</a></p> <p><strong>I am new to this so please help</strong><br /> <strong>If you want any other info, <strong>please ask</strong></strong></p> <p><em><strong>Thanks!!</strong></em></p>
<p>Your error looks like this: <code>ImportError: cannot import name 'model_lib_v2' from 'object_detection' (C:\Python\379\lib\site-packages\object_detection\__init__.py)</code>.</p> <p>Clearly, Python is unable to find <strong>model_lib_v2.py</strong> at &quot;C:\Python\379\lib\site-packages\object_detection&quot;, since that particular file is located in another directory (as shown in the image you attached).</p> <p>You need to check your working directory and set it to the path shown in <strong>your attached <a href="https://i.stack.imgur.com/mmwC9.png" rel="nofollow noreferrer">image</a></strong>.</p>
python|python-3.x|deep-learning|tensorflow2.0
3
2,818
31,760,427
Save result of multiplication to existing array
<p>Consider the following code:</p> <pre><code>a = numpy.array([1,2,3,4]) b = numpy.array([5,6,7,8]) # here a new array (b*2) will be created and name 'a' will be assigned to it a = b * 2 </code></pre> <p>So, can numpy write the result of <code>b*2</code> directly to memory already allocated for <code>a</code>, without allocating a new array? </p>
<p>Yes this is possible - you need to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.multiply.html#numpy.multiply" rel="nofollow"><code>np.multiply</code></a> with its <a href="http://docs.scipy.org/doc/numpy/reference/ufuncs.html#optional-keyword-arguments" rel="nofollow"><code>out</code></a> parameter:</p> <pre><code>np.multiply(b, 2, out=a)
</code></pre> <p>The array <code>a</code> is now filled with the result of <code>b * 2</code> (and no new memory was allocated to hold the output of the function).</p> <p>All of NumPy's ufuncs have the <code>out</code> parameter, which is especially handy when working with large arrays; it helps to keep memory consumption to a minimum by allowing arrays to be reused. The only caveat is that the array has to be the correct size/shape to hold the output of the function.</p>
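<p>A quick way to convince yourself that no new array is created (a small sketch):</p> <pre><code>import numpy as np

a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])

before = a            # second reference to the same buffer
np.multiply(b, 2, out=a)

print(a)              # [10 12 14 16]
print(before is a)    # True -- still the same object, nothing was reallocated
</code></pre>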
python|arrays|numpy|multiplication
3
2,819
32,053,203
Ewma in pandas but over rolling weekly data.
<p>I'm trying to calculate the ewma in pandas in a "rolling weekly" way. For example, let's say today is Tuesday. Then today's ewma would be calculated using only Tuesdays' data (this Tuesday, the previous Tuesday, the one before, and so on). Tomorrow we would have to do the same thing, but with Wednesdays, and so forth. After doing this, if I want a "rolling weekly ewma" that includes every day of the week, I need to combine each vector that was generated: the weekly ewma of only Mondays, the weekly ewma of only Tuesdays, the weekly ewma of only Wednesdays, then Thursdays and then Fridays.<br> This combined vector (of each day) is the "rolling weekly ewma" I'm talking about.</p> <p>Doesn't pandas have a built-in way of doing this? Currently this is how I do it:</p> <pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame(np.random.randn(1000,1),index=pd.date_range(pd.datetime(1995,3,30), freq='B', periods=1000),columns =['PX_LAST'] )

lista1 = ['mon','tue','wed','thu','fri']
lista4 = ['W-MON','W-TUE','W-WED','W-THU','W-FRI']

for x,y in zip(lista1,lista4):
    r = "{0} = pd.ewma(df.PX_LAST.resample('{1}'),span = 10)".format(x,y)
    exec r

comb2 = mon.combine_first(tue)
for y in lista1[1:6]:
    w = "comb2 =comb2.combine_first({0})".format(y)
    exec w

df['emaw'] = comb2
</code></pre>
<p>There are probably multiple ways to do this, but the way I'd do it is with a reduce.</p> <p>Your resampled EWMA calls can be done with this list comprehension to give a list of DataFrames: </p> <pre><code>ewmas = [pd.ewma(df[['PX_LAST']].resample(w), span=10) for w in lista4]
</code></pre> <p>and then we want to mash them all together, so we can do:</p> <pre><code>ewma_frame = reduce(pd.DataFrame.combine_first, ewmas)
</code></pre> <p>and lastly join them back to the original frame with:</p> <pre><code>df.merge(ewma_frame, left_index=True, right_index=True)
</code></pre> <p>As just a one-liner, it's:</p> <pre><code>df.merge(reduce(pd.DataFrame.combine_first, [pd.ewma(df[['PX_LAST']].resample(w), span=10) for w in lista4]), left_index=True, right_index=True)
</code></pre> <p>which, if you run it after your code, appears to give the same value as your original method (with a different column heading that you can just rename).</p>
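<p>One small note: on Python 3, <code>reduce</code> is no longer a built-in and has to be imported first:</p> <pre><code>from functools import reduce
</code></pre>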
python|pandas
1
2,820
31,715,082
How to drop rows which has elements equal to a specific value
<p>I have a file which looks like</p> <pre><code>1618246950 0.000 0.000 0.003 0.000 0.000 0.000 0 0 -1 -1 -1 -1 -1 -1 -1 -1 9 0
1618387251 0.000 0.000 0.000 0.000 0.021 0.012 0 0 -1 -1 -1 -1 -1 -1 -1 -1 0 0
1618436689 0.000 0.000 0.000 0.000 0.000 0.000 1 0 -1 -1 -1 -1 -1 -1 -1 -1 9 0
1618494414 0.005 0.002 0.001 0.000 0.002 0.005 0 0 -1 -1 -1 -1 -1 -1 -1 -1 1 0
1618499491 0.000 0.000 0.001 0.000 0.000 0.000 0 0 -1 -1 -1 -1 -1 -1 -1 -1 0 0
</code></pre> <p>I want to drop the rows which contain elements equal to minus one.</p> <p>The following code can drop a row if column 1 equals -1, but any of the columns could contain a -1:</p> <pre><code>df=df[df[1] != -1.0 ]
</code></pre> <p>or</p> <pre><code>for i in range(1, 16):
    df=df[df[i] != -1.0 ]
</code></pre> <p>15 is the index of the last column.</p> <p>So is there a pandas-style way to do this?</p>
<p>How about this (using <code>~</code> to invert the boolean mask):</p> <pre><code>df = df[~df.applymap(lambda x: x == -1).any(axis=1)]
</code></pre>
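<p>Equivalently, and a bit shorter, you can compare the whole frame at once; a sketch with the same semantics:</p> <pre><code>df = df[(df != -1).all(axis=1)]
</code></pre>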
pandas
2
2,821
41,453,892
Python Pandas: Function doesn't work when used with apply()
<p>The following function:</p> <pre><code>def func(x):
    for k in x['slices']:
        for j in k:
            print(x['low'].iloc[j])
</code></pre> <p>applied in the following manner works:</p> <pre><code>func(test)
</code></pre> <p>but applied as follows doesn't:</p> <pre><code>test.apply(func, axis=1)
</code></pre> <p>Would you be able to determine why?</p> <hr> <p>EDIT: I used the print only for debugging purposes; the function used to be:</p> <pre><code>def func(x):
    result=[]
    for k in x:
        for j in k:
            result.append(x['low'].iloc[j])
    return result
</code></pre> <p>which also didn't work.</p> <p>Below are the elements to reconstruct the data (the dictionary has to be defined before the DataFrame):</p> <pre><code>dict = {'low': {0: 1207.25, 1: 1207.5, 2: 1205.75, 3: 1206.0, 4: 1201.0, 5: 1202.75, 6: 1203.75},
        'slices': {0: [slice(1, 2, None)],
                   1: [slice(1, 3, None), slice(2, 3, None)],
                   2: [slice(1, 4, None), slice(2, 4, None), slice(3, 4, None)],
                   3: [slice(1, 5, None), slice(2, 5, None), slice(3, 5, None), slice(4, 5, None)],
                   4: [slice(1, 6, None), slice(2, 6, None), slice(3, 6, None), slice(4, 6, None), slice(5, 6, None)],
                   5: [slice(1, 7, None), slice(2, 7, None), slice(3, 7, None), slice(4, 7, None), slice(5, 7, None), slice(6, 7, None)],
                   6: [slice(1, 8, None), slice(2, 8, None), slice(3, 8, None), slice(4, 8, None), slice(5, 8, None), slice(6, 8, None), slice(7, 8, None)]}}

df = pd.DataFrame(dict, columns=["low", "slices"])
</code></pre>
<p>define your function this way</p> <pre><code>def fun(slices): return [df.low.loc[s].tolist() for s in slices] </code></pre> <p>And apply over the slices column</p> <pre><code>df['slices_low'] = df.slices.apply(fun) df </code></pre> <p><a href="https://i.stack.imgur.com/hAGXG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hAGXG.png" alt="enter image description here"></a></p>
python|pandas
2
2,822
61,264,739
How can I alternate tf.Session.run in TensorFlow2.0?
<p>Hi, I've started learning machine learning with TensorFlow. I learned the code below and realized that it doesn't work anymore.</p> <pre><code>sess = tf.Session()
print(sess.run(hello))
print(sess.run([a, b, c]))
sess.close()
</code></pre> <p>I would be grateful if someone could help me change this code so it works. Since I'm not a native English speaker and this is my first question on Stack Overflow, I apologize for any awkward wording.</p>
<p>In Tensorflow 2.0, you can use <code>tf.compat.v1.Session()</code> instead of <code>tf.Session()</code>.</p> <p>Please refer to the code for TF 1.X below:</p> <pre><code>%tensorflow_version 1.x
import tensorflow as tf
print(tf.__version__)
with tf.Session() as sess:
    output = tf.constant("Hello, World")
    print(sess.run(output).decode())
    sess.close()
</code></pre> <p>Output:</p> <pre><code>1.15.2
Hello, World
</code></pre> <p>Please refer to the code for TF 2.X below:</p> <pre><code>%tensorflow_version 2.x
import tensorflow as tf
print(tf.__version__)
with tf.compat.v1.Session() as sess:
    output = tf.constant("Hello, World")
    print(sess.run(output).decode())
    sess.close()
</code></pre> <p>Output:</p> <pre><code>2.2.0-rc3
Hello, World
</code></pre>
tensorflow
1
2,823
61,400,852
Find matching rows in a numpy matrix of 3
<p>Given a cube of mxmxm, I need to know the rows, in the 6 faces that the smallest value in their row is greater than a given n.</p>
<p>To obtain the various faces:</p> <pre><code>faces = np.array([ x[ 0, :, :], x[-1, :, :], x[ :, 0, :], x[ :, -1, :], x[ :, :, 0], x[ :, :, -1], ]) </code></pre> <p>Now collapse the last dimension axis:</p> <pre><code># No information on orientation provided by OP so always pick axis=-1 row_mins = np.min(faces, axis=-1) </code></pre> <p>And then keep only the rows that satisfy the condition:</p> <pre><code>valid_rows = faces[row_mins &gt; n] print(valid_rows) </code></pre>
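<p>A quick end-to-end sketch with made-up inputs (here m=4, and the threshold n is arbitrary):</p> <pre><code>import numpy as np

x = np.random.rand(4, 4, 4)   # the m x m x m cube
n = 0.2

faces = np.array([
    x[ 0, :, :], x[-1, :, :],
    x[ :, 0, :], x[ :, -1, :],
    x[ :, :, 0], x[ :, :, -1],
])
row_mins = np.min(faces, axis=-1)
print(faces[row_mins &gt; n])    # every face row whose smallest value exceeds n
</code></pre>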
python-3.x|numpy|cube
-1
2,824
61,208,808
How to use pandas DataFrames with sklearn?
<p>The goal of my project is to predict the accuracy level of some textual descriptions.</p> <p>I made the vectors with FASTTEXT.</p> <p>TSV output:</p> <pre><code>0 1:0.0033524514 2:-0.021896651 3:0.05087798 4:0.0072637126 ... 1 1:0.003118149 2:-0.015105667 3:0.040879637 4:0.000539902 ... </code></pre> <p>Resources are labelled as Good (1) or Bad (0).</p> <p>To check the accuracy I used scikit-learn and SVM.</p> <p>Following <a href="https://www.datacamp.com/community/tutorials/svm-classification-scikit-learn-python" rel="nofollow noreferrer">this</a> tutorial I made this script:</p> <pre class="lang-py prettyprint-override"><code> import pandas as pd from sklearn.model_selection import train_test_split from sklearn import svm from sklearn import metrics import numpy as np import matplotlib.pyplot as plt r_filenameTSV = 'TSV/A19784.tsv' tsv_read = pd.read_csv(r_filenameTSV, sep='\t',names=["vector"]) df = pd.DataFrame(tsv_read) df = pd.DataFrame(df.vector.str.split(' ',1).tolist(), columns = ['label','vector']) print ("Features:" , df.vector) print ("Labels:" , df.label) X_train, X_test, y_train, y_test = train_test_split(df.vector, df.label, test_size=0.2,random_state=0) #Create a svm Classifier clf = svm.SVC(kernel='linear') #Train the model using the training sets clf.fit (str((X_train, y_train))) #Predict the response for test dataset y_pred = clf.predict(X_test) print("Accuracy:",metrics.accuracy_score(y_test, y_pred)) </code></pre> <p>The first time I tried to run the script I got this error on line 28:</p> <pre><code>ValueError: could not convert string to float: </code></pre> <p>So I changed from</p> <pre><code>clf.fit (X_train, y_train) </code></pre> <p>to</p> <pre><code> clf.fit (str((X_train, y_train))) </code></pre> <p>Then, on the same line, I got this error</p> <pre><code>TypeError: fit() missing 1 required positional argument: 'y' </code></pre> <p>Suggestions how to solve this issue?</p> <p>kind regards and thanks for your time.</p>
<p>As mentioned in the comments below your question, your features and your label are presumably strings. However, sklearn requires them to be numeric (sklearn is normally used with numpy arrays). If this is the case, you have to convert the elements of your dataframe from strings to numeric values.</p> <p>Looking at your code I assume that each element of your feature column is a list of strings and each element of your label column is a single string. Here is an example of how such a dataframe can be converted to contain numeric values.</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'features': [['5', '4.2'], ['3', '7.9'], ['2', '9']],
                   'label': ['1', '0', '0']})
print(type(df.features[0][0]))
print(type(df.label[0]))

def convert_to_float(collection):
    floats = [float(el) for el in collection]
    return np.array(floats)

df_numeric = pd.concat([df["features"].apply(convert_to_float),
                        pd.to_numeric(df["label"])],
                       axis=1)

print(type(df_numeric.features[0][0]))
print(type(df_numeric.label[0]))
</code></pre> <p><strong>However</strong>, the described dataframe format is not the format sklearn models expect pandas dataframes to have. As far as I know, sklearn models expect each feature to be stored in a separate column, as is the case here:</p> <pre><code>from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

feature_df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=["feature_1", "feature_2"])
label_df = pd.DataFrame(np.array([[1], [0], [0]]), columns=["label"])
df = pd.concat([feature_df, label_df], axis=1)

X_train, X_test, y_train, y_test = train_test_split(df.drop(["label"], axis=1), df["label"], test_size=1 / 3)

clf = SVC(kernel='linear')
clf.fit(X_train, y_train)
clf.predict(X_test)
</code></pre> <p>That is, after converting your dataframe so that it only contains numeric values, you'd have to create a separate column for each element in the lists of your feature column. You could do so like this:</p> <pre><code>arr = np.concatenate(df_numeric.features.to_numpy()).reshape(df_numeric.shape)
df_sklearn_compatible = pd.concat([pd.DataFrame(arr, columns=["feature_1", "feature_2"]),
                                   df["label"]],
                                  axis=1)
</code></pre>
python|pandas|scikit-learn
1
2,825
68,605,678
How to color bars based on a separate pandas column
<p>I need to plot a bar chart and apply a color according to the &quot;Attribute&quot; column of my dataframe.</p> <p><a href="https://i.stack.imgur.com/VBDqr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VBDqr.png" alt="enter image description here" /></a></p> <p>x axis = Shares<br /> y axis = Price</p> <pre><code>fig, ax = plt.subplots()
ax.barh(df['Share'],df['Price'], align='center')
ax.set_xlabel('Shares')
ax.set_ylabel('Price')
ax.set_title('Bar Chart &amp; Colors')
plt.show()
</code></pre> <p>Thanks for your help!</p>
<ul> <li>There are two easy ways to plot the bars with separate colors for <code>'Attribute'</code> <ol> <li>Transform the dataframe with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>.pivot</code></a> and then plot with <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><code>pandas.DataFrame.plot</code></a> and specify <code>kind='barh'</code> for a horizontal bar plot <ul> <li>The index will be the x-axis if using <code>kind='bar'</code>, and will be the y-axis if using <code>kind='barh'</code></li> <li>The columns of the transformed dataframe will each be plotted with a separate color.</li> <li><code>pandas</code> uses <code>matplotlib</code> as the default plotting backend.</li> </ul> </li> <li>Use <a href="https://seaborn.pydata.org/generated/seaborn.barplot.html" rel="nofollow noreferrer"><code>seaborn.barplot</code></a> with <code>hue='Attribute'</code> and <code>orient='h'</code>. This option works with the dataframe in a long format, as shown in the OP. <ul> <li><code>seaborn</code> is a high-level API for <code>matplotlib</code></li> </ul> </li> </ol> </li> <li>Tested with <code>pandas 1.3.0</code>, <code>seaborn 0.11.1</code>, and <code>matplotlib 3.4.2</code></li> </ul> <h2>Imports and DataFrame</h2> <pre class="lang-py prettyprint-override"><code>import pandas as pd import seaborn as sns # test dataframe data = {'Price': [110, 105, 119, 102, 111, 117, 110, 110], 'Share': [110, -50, 22, 79, 29, -2, 130, 140], 'Attribute': ['A', 'B', 'C', 'D', 'A', 'B', 'B', 'C']} df = pd.DataFrame(data) Price Share Attribute 0 110 110 A 1 105 -50 B 2 119 22 C 3 102 79 D 4 111 29 A 5 117 -2 B 6 110 130 B 7 110 140 C </code></pre> <h2><code>pandas.DataFrame.plot</code></h2> <pre class="lang-py prettyprint-override"><code># transform the dataframe with .pivot dfp = df.pivot(index='Price', columns='Attribute', values='Share') Attribute A B C D Price 102 NaN NaN NaN 79.0 105 NaN -50.0 NaN NaN 110 110.0 130.0 140.0 NaN 111 29.0 NaN NaN NaN 117 NaN -2.0 NaN NaN 119 NaN NaN 22.0 NaN # plot ax = dfp.plot(kind='barh', title='Bar Chart of Colors', figsize=(6, 4)) ax.set(xlabel='Shares') ax.legend(title='Attribute', bbox_to_anchor=(1, 1), loc='upper left') ax.grid(axis='x') </code></pre> <p><a href="https://i.stack.imgur.com/fo7r0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fo7r0.png" alt="enter image description here" /></a></p> <ul> <li>with <code>stacked=True</code></li> </ul> <pre class="lang-py prettyprint-override"><code>ax = dfp.plot(kind='barh', stacked=True, title='Bar Chart of Colors', figsize=(6, 4)) </code></pre> <p><a href="https://i.stack.imgur.com/sq5vn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sq5vn.png" alt="enter image description here" /></a></p> <h2><code>seaborn.barplot</code></h2> <ul> <li>Note the order of the y-axis values are reversed compared to the previous plot</li> </ul> <pre class="lang-py prettyprint-override"><code>ax = sns.barplot(data=df, x='Share', y='Price', hue='Attribute', orient='h') ax.set(xlabel='Shares', title='Bar Chart of Colors') ax.legend(title='Attribute', bbox_to_anchor=(1, 1), loc='upper left') ax.grid(axis='x') </code></pre> <p><a href="https://i.stack.imgur.com/77pJD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/77pJD.png" alt="enter image description here" /></a></p>
python|pandas|matplotlib|seaborn|bar-chart
1
2,826
68,761,888
Compare and match values from two df and multiple columns
<p>I've got two dataframes with data about popular stores and districts where they are located. Each store is kind of a chain and may have more than one district location id (for example &quot;Store1&quot; has several stores in different places).</p> <p>First df has info about top-5 most popular stores and district ids separated by semicolon, for example:</p> <pre><code>store_name district_id Store1 | 1;2;3;4;5 Store2 | 1;2 Store3 | 3 Store4 | 4;7;10;15 Store5 | 12;15; </code></pre> <p>Second df has only two columns with ALL districts in city and each row is unique district id and it's name.</p> <pre><code> district_id district_name 1 | District1 2 | District2 3 | District3 4 | District4 5 | District5 6 | District6 7 | District7 8 | District8 9 | District9 10 | District10 etc. </code></pre> <p>The goal is to create columns in df1 for every store in top-5 and match every district id number to district name.</p> <p>So, firstly I splitted df1 into form like this:</p> <pre><code>store_name district_id 0 1 2 3 4 5 Store1 | 1 | 2 | 3 | 4 | 5 Store2 | 1 | 2 | | | Store3 | 3 | | | | Store4 | 4 | 7 | 10| 15| Store5 | 12 | 15| </code></pre> <p>But now I'm stucked and don't know how to match each value from df1 to df2 and get district names for each id. Empty cells is None, because columns were created by maximum values for each store.</p> <p>I would like to get df like this:</p> <pre><code>store_name district_name district_name2 district_name3 district_name4 district_name5 Store1 | District1 | District2 | District3 | District4 | District5 Store2 | District1 | District2 | | | Store3 | District3 | | | | Store4 | District4 | District7 | District10 | District15 | Store5 | District12 | District15 | | | </code></pre> <p>Thanks in advance!</p>
<p>You can <code>stack</code> the first dataframe, convert it to float type, <code>map</code> the column from the second dataframe, then <code>unstack</code> and finally <code>add_prefix</code>:</p> <pre class="lang-py prettyprint-override"><code>df1.stack().astype(float).map(df2['district_name']).unstack().add_prefix('district_name')
</code></pre> <p><strong>OUTPUT:</strong></p> <pre class="lang-py prettyprint-override"><code>           district_name0 district_name1  ... district_name3 district_name4
store_name                                ...
Store1          District1      District2  ...      District4      District5
Store2          District1      District2  ...            NaN            NaN
Store3          District3            NaN  ...            NaN            NaN
Store4          District4      District7  ...            NaN            NaN
Store5                NaN            NaN  ...            NaN            NaN
</code></pre> <p>The dataframes used for the above code:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df1
             0   1    2    3    4
store_name
Store1       1   2    3    4    5
Store2       1   2  NaN  NaN  NaN
Store3       3 NaN  NaN  NaN  NaN
Store4       4   7   10   15  NaN
Store5      12  15  NaN  NaN  NaN

&gt;&gt;&gt; df2
            district_name
district_id
1               District1
2               District2
3               District3
4               District4
5               District5
6               District6
7               District7
8               District8
9               District9
10             District10
</code></pre>
python|pandas|dataframe
1
2,827
68,781,636
Finding a vector that is orthogonal to n columns of a matrix
<p>Given a matrix <code>B</code> with shape <code>(M, N)</code>, where <code>M &gt; N</code>. How to find a vector <code>v</code> (with shape of <code>M</code>) that is perpendicular to all columns in <code>B</code>.</p> <p>I tried using Numpy <code>numpy.linalg.lstsq</code> method to solve : <code>Bx = 0</code>. <code>0</code> here is a vector with <code>M</code> zeros.</p> <p>It returns a vector of zeros with <code>(N,)</code> shape.</p>
<p>You can use the sympy library, like this (the data has to be wrapped in a <code>Matrix</code> before calling <code>nullspace</code>):</p> <pre><code>from sympy import Matrix

B = Matrix([[2, 3, 5],
            [-4, 2, 3],
            [0, 0, 0]])
V = B.nullspace()[0]
</code></pre> <p>or, to find the whole nullspace:</p> <pre><code>N = B.nullspace()
</code></pre>
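<p>If you'd rather stay in NumPy: since the question wants a vector orthogonal to all <em>columns</em> of <code>B</code> (i.e. the orthogonal complement of the column space), one standard approach is the SVD. A sketch, assuming <code>M &gt; N</code> so that the complement is non-trivial:</p> <pre><code>import numpy as np

B = np.random.rand(5, 3)           # M=5 &gt; N=3
u, s, vh = np.linalg.svd(B)        # full SVD: u is (M, M)
v = u[:, -1]                       # trailing left singular vectors span the complement

print(np.allclose(B.T @ v, 0))     # True: v is perpendicular to every column of B
</code></pre>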
python-3.x|numpy|linear-algebra
0
2,828
65,509,486
Return boolean from cv2.inRange() if color is present in mask
<p>I am making a mask using cv2.inRange(), which accepts an image, a lower bound, and an upper bound. The mask is working, but I want to return something like True/False or print('color present') if the color between the ranges is present. Here is some sample code:</p> <pre><code> from cv2 import cv2 import numpy as np img = cv2.imread('test_img.jpg', cv2.COLOR_RGB2HSV) lower_range = np.array([0, 0, 0]) upper_range = np.array([100, 100, 100]) mask = cv2.inRange(img_hsv, lower_range, upper_range) #this is where I need an IF THEN statement. IF the image contains the hsv color between the hsv color range THEN print('color present') </code></pre>
<p>Since you picked the color, you can count the non-zero pixels in the mask result, or use the relative area of those pixels.</p> <p>Here is full code and an example.</p> <p>Source image from <a href="https://commons.wikimedia.org/wiki/File:RGB_color_model.svg" rel="nofollow noreferrer">wikipedia</a>. I used the 500px PNG version for this example, so OpenCV shows the transparency as black pixels.</p> <p><a href="https://i.stack.imgur.com/10E8q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/10E8q.png" alt="RGB colors" /></a></p> <p>Then the full HSV histogram:</p> <p><a href="https://i.stack.imgur.com/6gjYy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6gjYy.png" alt="enter image description here" /></a></p> <p>And code:</p> <pre><code>import cv2
import numpy as np

fra = cv2.imread('images/rgb.png')
height, width, channels = fra.shape

hsv = cv2.cvtColor(fra, cv2.COLOR_BGR2HSV_FULL)
mask = cv2.inRange(hsv, np.array([50, 0, 0]), np.array([115, 255, 255]))
ones = cv2.countNonZero(mask)
percent_color = (ones/(height*width)) * 100
print(&quot;Non Zeros Pixels:{:d} and Area Percentage:{:.2f}&quot;.format(ones,percent_color))

cv2.imshow(&quot;mask&quot;, mask)
cv2.waitKey(0)
cv2.destroyAllWindows()

# Blue Color [130, 0, 0] - [200, 255, 255]
# Non Zeros Pixels:39357 and Area Percentage:15.43

# Green Color [50, 0, 0] - [115, 255, 255]
# Non Zeros Pixels:38962 and Area Percentage:15.28
</code></pre> <p>Blue and green segmented. <a href="https://i.stack.imgur.com/cUh3k.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cUh3k.png" alt="enter image description here" /></a></p> <p>Note that the results (shown as comments at the end of the code above) have nearly the same percentage and non-zero pixel count for both segmentations, indicating this is a reliable way to measure color area.</p> <p>Good luck!</p>
python|numpy|opencv|computer-vision
1
2,829
65,728,814
return missing rows by searching by ID in two different dataframes
<p>I have the following two dataframes.</p> <p>current_df:</p> <pre><code>speacial_id   name   count   date
123           al     4       01-01-2020
456           james  4       01-01-2021
789           joe    5       01-02-2021
111           will   2       01-09-2020
222           hal    1       02-10-2009
</code></pre> <p>previous_df:</p> <pre><code>speacial_id   name   count   date         alert
123           al     4       01-01-2020   True
456           james  4       01-01-2021   False
789           joe    5       01-02-2021   True
</code></pre> <p>I want to find the rows whose <code>special_id</code> exists in only one of the two data frames, and merge the values onto previous_df, with the output as:</p> <pre><code>speacial_id   name   count   date         alert
123           al     4       01-01-2020   True
456           james  4       01-01-2021   False
789           joe    5       01-02-2021   True
111           will   1       01-09-2020   NaN
222           hal    2       02-10-2009   NaN
</code></pre> <p>Please take note of the alert column and the NaN values added.</p> <p>What I've tried:</p> <pre><code>new_df = current_df[~current_df.isin(
    previous_df.to_dict('list')).all(1)].copy()
</code></pre> <p>Unfortunately, this picks up a difference in any of the columns; I only want to notice the changes if there are special_ids missing between the two data frames.</p> <p>Any advice is greatly appreciated.</p>
<p>I'm pretty sure <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>pandas.DataFrame.merge</code></a> is what you need:</p> <pre><code>current_df.merge(previous_df, how = &quot;outer&quot;)
</code></pre>
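<p>If the two frames can disagree on columns other than the ID, and you want the match to be driven by <code>speacial_id</code> alone, you can pull in just the <code>alert</code> column with a left merge. A sketch, using the frames from the question, that reproduces the desired output:</p> <pre><code>out = current_df.merge(previous_df[['speacial_id', 'alert']],
                       on='speacial_id', how='left')
</code></pre>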
python|python-3.x|pandas|dataframe|series
0
2,830
65,647,979
Keras, Sequential Neural Network Model
<p>Here is the code for the Keras model, which gives a TypeError:</p> <pre><code>model=keras.Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0,5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0,5))
model.add(Dense(len(train_y[0]), activation='softmax'))

sgd= SGD(lr=0.01, decay=1e-6, momentum=0.9,nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer= sgd, metrics= ['accuracy'])

model.fit(np.array(train_x),np.array(train_y),epochs=200,batch_size=5, verbose=1)
model.save(&quot;chatbot.model&quot;)
print(&quot;training is done&quot;)
</code></pre> <p>I'm creating a chatbot using the Keras Sequential model and encountered the TypeError while training the neural network, exactly on the <code>model.fit</code> line.</p> <p>Note: intents are the messages I have given in dictionary format; here is an example: <code>{&quot;intents&quot;:[{&quot;tag&quot;: &quot;welcome&quot;, &quot;patterns&quot;:[&quot;Hi&quot;,&quot;Hello&quot;],&quot;responses&quot;:[&quot;Hello&quot;,&quot;Hi&quot;]}]}</code></p> <p>I have imported nltk, WordNetLemmatizer (from nltk.stem), numpy, pickle, random, tensorflow, keras, Sequential and Model (from keras.models), Dense, Activation and Dropout (from keras.layers), and SGD (from keras.optimizers).</p> <pre><code>TypeError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function  *
        return step_function(self, iterator)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function  **
        outputs = model.distribute_strategy.run(run_step, args=(data,))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step  **
        outputs = model.train_step(data)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step
        y_pred = self(x, training=True)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/sequential.py:375 call
        return super(Sequential, self).call(inputs, training=training, mask=mask)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:425 call
        inputs, training=training, mask=mask)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/functional.py:560 _run_internal_graph
        outputs = node.layer(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:1012 __call__
        outputs = call_fn(inputs, *args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:231 call
        lambda: array_ops.identity(inputs))
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/utils/control_flow_util.py:115 smart_cond
        pred, true_fn=true_fn, false_fn=false_fn, name=name)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/smart_cond.py:54 smart_cond
        return true_fn()
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:226 dropped_inputs
        noise_shape=self._get_noise_shape(inputs),
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/layers/core.py:215 _get_noise_shape
        for i, value in enumerate(self.noise_shape):

    TypeError: 'int' object is not iterable
</code></pre>
<p>@Andrey is correct. The culprit is <code>Dropout(0,5)</code>: Keras' <code>Dropout</code> takes <code>(rate, noise_shape=None, seed=None)</code>, so the comma passes <code>5</code> as <code>noise_shape</code>, an int that Keras then tries to iterate, which raises <code>TypeError: 'int' object is not iterable</code>. It should be <code>Dropout(0.5)</code>. There are other minor things, but that's the reason for the error.</p> <p>Here is the fixed code -</p> <pre><code>from tensorflow import keras
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.optimizers import SGD
import numpy as np

train_x = np.random.random((100,8))
train_y = np.random.random((100,4))

model=keras.Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation='softmax'))

sgd= SGD(lr=0.01, decay=1e-6, momentum=0.9,nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer= sgd, metrics= ['accuracy'])

model.fit(np.array(train_x),np.array(train_y),epochs=3,batch_size=5, verbose=1)
print(&quot;training is done&quot;)
print(model.summary())
</code></pre> <pre><code>Epoch 1/3
20/20 [==============================] - 0s 820us/step - loss: 2.6962 - accuracy: 0.0820
Epoch 2/3
20/20 [==============================] - 0s 807us/step - loss: 2.9444 - accuracy: 0.3395
Epoch 3/3
20/20 [==============================] - 0s 741us/step - loss: 336196611196720054272.0000 - accuracy: 0.2951
training is done
Model: &quot;sequential_24&quot;
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_66 (Dense)             (None, 128)               1152
_________________________________________________________________
dropout_8 (Dropout)          (None, 128)               0
_________________________________________________________________
dense_67 (Dense)             (None, 64)                8256
_________________________________________________________________
dropout_9 (Dropout)          (None, 64)                0
_________________________________________________________________
dense_68 (Dense)             (None, 4)                 260
=================================================================
Total params: 9,668
Trainable params: 9,668
Non-trainable params: 0
_________________________________________________________________
</code></pre>
tensorflow|keras|typeerror|sequential
0
2,831
20,901,968
numpy.rint not working as expected
<p>I am trying to find the cause of this result:</p> <pre><code>import numpy

result1 = numpy.rint(1.5)
result2 = numpy.rint(6.5)
print result1, result2
</code></pre> <p>The output:</p> <pre><code>result1-&gt; 2
result2-&gt; 6
</code></pre> <p>This is odd: <em>result1</em> is correct but <em>result2</em> is not (it has to be 7, because <em>rint</em> rounds any float to <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.rint.html" rel="nofollow">the <em>nearest</em> integer</a>).</p> <p>Any idea? (THANKS!)</p>
<p>From <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.around.html#numpy.around" rel="nofollow">numpy's documentation on <code>numpy.around</code></a>, which is equivalent to <code>numpy.round</code> and supposedly also relevant for <code>numpy.rint</code>:</p> <blockquote> <p>For values exactly halfway between rounded decimal values, Numpy rounds to the nearest even value. Thus 1.5 and 2.5 round to 2.0, -0.5 and 0.5 round to 0.0, etc. Results may also be surprising due to the inexact representation of decimal fractions in the IEEE floating point standard [R9] and errors introduced when scaling by powers of ten.</p> </blockquote> <p>Also relevant: while large numbers might suffer representation errors, small half-integers are exactly representable in binary floating point; in particular, <code>1.5</code> and <code>6.5</code> are exactly representable in standard single-precision floats. Without a preference for either odd, even, lower or upper integers (or any other scheme) the behaviour would be undefined here.</p> <p>As @wim points out in the comments, the behaviour of Python's built-in <code>round</code> is different: it rounds away from zero, i.e. it prefers upper integers for positive inputs and lower integers for negative inputs (see <a href="http://docs.python.org/2/library/functions.html#round" rel="nofollow">http://docs.python.org/2/library/functions.html#round</a>).</p>
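<p>A quick demonstration of the round-half-to-even behaviour, next to Python 2's built-in <code>round</code> (which rounds halves away from zero):</p> <pre><code>import numpy as np

print(np.rint([0.5, 1.5, 2.5, 3.5, 6.5]))   # [ 0.  2.  2.  4.  6.]
print(round(6.5))                           # 7.0 in Python 2 (away from zero)
</code></pre>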
python|math|numpy|scipy
7
2,832
63,515,549
What is df.values[:,1:]?
<pre><code>from sklearn.preprocessing import StandardScaler
X = df.values[:,1:]
X = np.nan_to_num(X)
Clus_dataSet = StandardScaler().fit_transform(X)
Clus_dataSet
</code></pre> <p><strong>Does anyone understand what this code means?</strong></p> <p><a href="https://i.stack.imgur.com/ymqFR.png" rel="nofollow noreferrer">Here is the screenshot!!</a></p>
<ul> <li><p><code>df</code> is a DataFrame with several columns, and apparently the target values are in the first column.</p> </li> <li><p><code>df.values</code> returns a numpy array with the underlying data of the DataFrame, without any index or column names.</p> </li> <li><p><code>[:, 1:]</code> is a slice of that array that returns all rows and every column starting from the second column (the first column has index 0).</p> </li> </ul>
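<p>A tiny sketch to make the slicing concrete (made-up data):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'target': [0, 1],
                   'f1': [1.5, np.nan],
                   'f2': [3.0, 4.0]})

X = df.values[:, 1:]        # drop the first column, keep the rest
print(X)                    # [[1.5  3.]
                            #  [nan  4.]]
print(np.nan_to_num(X))     # NaN replaced by 0.0, as in the snippet above
</code></pre>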
python|dataframe|sklearn-pandas
4
2,833
63,342,230
read csv without row enumeration column and sorting with custom key
<p>I have a TSV file which I want to read, sort by a specific column and write back.<br /> Two problems I ran into are:</p> <ul> <li>providing a custom key results in an error (the backtrace is shown at the end of the post)</li> <li>without providing a custom key, the sorting works, but when I write the dataframe back to a TSV file, there is an extra column that annotates the row numbering, which I want to remove.</li> </ul> <p>I have tried to use the <code>pandas</code> module in the following manner:</p> <pre><code>import re
import pandas as pd


def natural_sort_key(s, _nsre=re.compile('([0-9]+)')):
    return [int(text) if text.isdigit() else text.lower() for text in _nsre.split(s)]


def main(path):
    with open(path, 'r') as f:
        df = pd.read_csv(path, delimiter='\t')
    a = df.sort_values('#mm10.kgXref.geneSymbol', key=natural_sort_key, na_position='first')
    a.to_csv('mouse_conversion_by_gene_symbol', sep='\t')


if __name__ == '__main__':
    main('mouse_conversion')
</code></pre> <p>The backtrace I am getting after providing a custom key is:</p> <pre><code>Traceback (most recent call last):
  File &quot;sortTables.py&quot;, line 22, in &lt;module&gt;
    main('mouse_conversion')
  File &quot;sortTables.py&quot;, line 12, in main
    a = df.sort_values('#mm10.kgXref.geneSymbol', key=natural_sort_key, na_position='first')
  File &quot;/home/eliran/miniconda/envs/newenv/lib/python3.7/site-packages/pandas/core/frame.py&quot;, line 5297, in sort_values
    k, kind=kind, ascending=ascending, na_position=na_position, key=key
  File &quot;/home/eliran/miniconda/envs/newenv/lib/python3.7/site-packages/pandas/core/sorting.py&quot;, line 287, in nargsort
    items = ensure_key_mapped(items, key)
  File &quot;/home/eliran/miniconda/envs/newenv/lib/python3.7/site-packages/pandas/core/sorting.py&quot;, line 420, in ensure_key_mapped
    result = key(values.copy())
  File &quot;sortTables.py&quot;, line 6, in natural_sort_key
    return [int(text) if text.isdigit() else text.lower() for text in _nsre.split(s)]
TypeError: expected string or bytes-like object
</code></pre> <p>As for the second problem, here's an example:<br /> for this input:</p> <pre><code>#mm10.kgXref.geneSymbol   mm10.kgXref.refseq   mm10.knownToEnsembl.name
Rp1                       NM_011283            ENSMUST00000027032.5
Gm37483                                        ENSMUST00000194382.1
Sox17                     NM_011441            ENSMUST00000027035.9
</code></pre> <p>I am getting this output:</p> <pre><code>    #mm10.kgXref.geneSymbol   mm10.kgXref.refseq   mm10.knownToEnsembl.name
19  Rp1                       NM_011283            ENSMUST00000027032.5
21  Gm37483                                        ENSMUST00000194382.1
29  Sox17                     NM_011441            ENSMUST00000027035.9
</code></pre> <p>And I would like to delete the column with the row enumeration.<br /> Would appreciate some insight on both problems.</p> <p>EDIT: found an answer in the docs about the enumeration problem. If that's relevant for somebody else, simply use this:</p> <pre><code>a.to_csv('mouse_conversion_by_gene_symbol', sep='\t', index=False)
</code></pre> <p>instead of the original line.</p> <p>EDIT 2: after implementing the suggested solution I was able to sort the dataframe by the first and last column.<br /> When I try to sort the dataframe by the second column I get the exact same backtrace as above.<br /> The only logical difference I see is that the second column includes <code>NaN</code> values and the other columns don't.<br /> How can I modify the code to solve this problem?</p>
<p>According to the docs, the key function should take and return a Series (BTW, <code>pd.read_csv</code> does not need <code>with open</code>), so try this:</p> <pre><code>import re
import pandas as pd


def natural_sort_key(S, _nsre=re.compile('([0-9]+)')):
    return pd.Series([[int(text) if text.isdigit() else text.lower() for text in _nsre.split(s)]
                      for s in S.values])


def main(path):
    df = pd.read_csv(path, delimiter='\t')
    a = df.sort_values('#mm10.kgXref.geneSymbol', key=natural_sort_key, na_position='first')
    a.to_csv('mouse_conversion_by_gene_symbol', sep='\t')


if __name__ == '__main__':
    main('mouse_conversion')
</code></pre>
python|pandas|csv
1
2,834
24,581,931
How can I convert a vector containing entries [[[int int]] ...] into a vector containing entries [[int int] ...] in python/numpy?
<p>I have data in a numpy vector that looks like this:</p> <pre><code> [[[1119   15]]

 [[1125   27]]

 [[1129   43]]

 [[1131   62]]

 [[1131   87]]

 [[1141  234]]
 ...]
</code></pre> <p>These are supposed to be a set of points that I can use to represent a curve, but instead each point [int, int] seems to be encapsulated inside another vector, i.e. I have [[1 1]] instead of [1 1].</p> <p>This data was given to me by the opencv function <code>cv2.approxPolyDP</code> after I fed it a 'contour', and I need to work with it. I think the function basically has given me what it thinks is a set of curves, but here each curve only contains one point [int int], which doesn't really make sense. A curve with one point is not a curve, it's a point.</p> <p>Is there any way to convert [[int int]] to [int int] in this case?</p>
<p>Look at the shape of this array. It is probably <code>(n, 1, 2)</code>.</p> <p><code>reshape</code> it to <code>(n, 2)</code>. <code>x.reshape(-1, 2)</code> is a handy shortcut, saving you the work of determining <code>n</code>. <code>squeeze</code> also gets rid of the singleton dimension.</p>
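<p>In code, with the shapes printed out (a small sketch using a few of the points from the question):</p> <pre><code>import numpy as np

pts = np.array([[[1119, 15]], [[1125, 27]], [[1129, 43]]])
print(pts.shape)                  # (3, 1, 2)

print(pts.reshape(-1, 2))         # [[1119   15]
                                  #  [1125   27]
                                  #  [1129   43]]
print(pts.squeeze(axis=1).shape)  # (3, 2)
</code></pre>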
python|opencv|numpy
3
2,835
24,743,753
Test if an array is broadcastable to a shape?
<p>What is the best way to test whether an array can be broadcast to a given shape?</p> <p>The "pythonic" approach of <code>try</code>ing doesn't work for my case, because the intent is to have lazy evaluation of the operation.</p> <p>I'm asking how to implement <code>is_broadcastable</code> below:</p> <pre><code>&gt;&gt;&gt; x = np.ones([2,2,2]) &gt;&gt;&gt; y = np.ones([2,2]) &gt;&gt;&gt; is_broadcastable(x,y) True &gt;&gt;&gt; y = np.ones([2,3]) &gt;&gt;&gt; is_broadcastable(x,y) False </code></pre> <p>or better yet:</p> <pre><code>&gt;&gt;&gt; is_broadcastable(x.shape, y.shape) </code></pre>
<p>I really think you guys are overthinking this, why not just keep it simple?</p> <pre><code>def is_broadcastable(shp1, shp2):
    for a, b in zip(shp1[::-1], shp2[::-1]):
        if a == 1 or b == 1 or a == b:
            pass
        else:
            return False
    return True
</code></pre>
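<p>For what it's worth, quick checks against the examples in the question:</p> <pre><code>&gt;&gt;&gt; is_broadcastable((2, 2, 2), (2, 2))
True
&gt;&gt;&gt; is_broadcastable((2, 2, 2), (2, 3))
False
</code></pre> <p>And on newer NumPy (1.20+) you can lean on the library instead; a sketch with the same semantics:</p> <pre><code>import numpy as np

def is_broadcastable(shp1, shp2):
    try:
        np.broadcast_shapes(shp1, shp2)   # raises ValueError if incompatible
        return True
    except ValueError:
        return False
</code></pre>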
python|arrays|numpy|multidimensional-array
9
2,836
24,488,927
Variable amount of dimensions in slice
<p>I have a multidimensional array called <code>resultsten</code>, with the following shape:</p> <pre><code>print np.shape(resultsten)
(3, 3, 6, 10, 1, 9)
</code></pre> <p>On some occasions, I use a part of this array in a program called <code>cleanup</code>, which then further tears this array apart into <code>x</code>, <code>y</code>, and <code>z</code> arrays:</p> <pre><code>x,y,z = cleanup(resultsten[0,:,:,:,:,:])

def cleanup(resultsmat):
    x = resultsmat[:,:,:,:,2]
    y = resultsmat[:,:,:,:,1]
    z = resultsmat[:,:,:,:,4]
    return x,y,z
</code></pre> <p>However, it might also occur that I do not want to put the entire matrix of <code>resultsten</code> into my program <code>cleanup</code>, thus:</p> <pre><code>x,y,z = cleanup(resultsten[0,0,:,:,:,:])
</code></pre> <p>This, of course, gives an error, as the indices given to <code>cleanup</code> do not match the indices expected. I was wondering if it is possible to have a variable number of dimensions included in your slice.</p> <p><strong>I would like to know a command that takes all the entries for every dimension, up until the last dimension, where it only takes one index.</strong></p> <p>I've seen that it is possible to do this for all dimensions except the first, e.g.</p> <pre><code>resultsten[1,:,:,:,:,:]
</code></pre> <p>gives the same result as:</p> <pre><code>resultsten[1,:]
</code></pre> <p>I tried this:</p> <pre><code>resultsten[:,1]
</code></pre> <p>but it does not give the required result; Python interprets it like this:</p> <pre><code>resultsten[:,1,:,:,:,:]
</code></pre> <p>MWE:</p> <pre><code>def cleanup(resultsmat):
    x = resultsmat[:,:,:,0,2]
    y = resultsmat[:,:,:,0,1]
    z = resultsmat[:,:,:,0,4]
    return x,y,z

resultsten=np.arange(3*3*6*10*1*9).reshape(3,3,6,10,1,9)

x0,y0,z0 = cleanup(resultsten[0,:,:,:,:,:]) #works
x0,y0,z0 = cleanup(resultsten[0,0,:,:,:,:]) #does not work
</code></pre>
<p>I would use a list of slice objects (note the conversion to a tuple at indexing time; indexing with a plain list is deprecated in newer NumPy):</p> <pre><code>import numpy as np

A = np.arange(2*3*4*5).reshape(2,3,4,5)

# [:] &lt;-&gt; slice(None, None, None)
sliceList = [slice(None, None, None)]*(len(A.shape)-1)
a, b, c, d, e = [A[tuple(sliceList + [i])] for i in range(A.shape[-1])]
</code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; A[:,:,:,0]
array([[[  0,   5,  10,  15],
        [ 20,  25,  30,  35],
        [ 40,  45,  50,  55]],

       [[ 60,  65,  70,  75],
        [ 80,  85,  90,  95],
        [100, 105, 110, 115]]])
&gt;&gt;&gt; a
array([[[  0,   5,  10,  15],
        [ 20,  25,  30,  35],
        [ 40,  45,  50,  55]],

       [[ 60,  65,  70,  75],
        [ 80,  85,  90,  95],
        [100, 105, 110, 115]]])
</code></pre>
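<p>Worth noting: for the specific pattern in the question (a full slice in every dimension except the last), NumPy's <code>Ellipsis</code> already does this without building slice lists:</p> <pre><code>a, b, c, d, e = (A[..., i] for i in range(A.shape[-1]))
# A[..., 0] is equivalent to A[:, :, :, 0] for this 4-D array,
# and keeps working unchanged if A gains or loses dimensions
</code></pre>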
python|arrays|numpy
0
2,837
30,004,737
Create a "wrapped" ndarray from a given array
<p>I'm trying to create a 2D array from an array by using a rolled given array as the rows of the 2D array of a specified row dimension. For example:</p> <pre><code>r = np.array([1,2,3,4]) </code></pre> <p>want a matrix of 3 rows (using r) as</p> <pre><code>[[2,3,4,1], [1,2,3,4], [4,1,2,3]] </code></pre> <p>I think I have an idea by defining a function using numpy.roll and a for-loop but I'm trying to avoid that as my 2D array is going to be very large. I would like to have the ability to roll backwards if possible.</p> <p>Is there a way using numpy functions that can do this instead? Any suggestions on this are appreciated.</p>
<p>If using <a href="http://docs.scipy.org/doc/scipy/reference/" rel="nofollow">scipy</a> is an option, you can use <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.circulant.html" rel="nofollow"><code>scipy.linalg.circulant</code></a>. You will still have to tweak the argument to <code>circulant</code> to get exactly what you want, since <code>circulant</code> simply makes the given one-dimensional argument the first column of a square <a href="http://en.wikipedia.org/wiki/Circulant_matrix" rel="nofollow">circulant matrix</a>.</p> <p>For example:</p> <pre><code>In [25]: from scipy.linalg import circulant In [26]: r = np.array([1,2,3,4]) </code></pre> <p>Here's what <code>circulant</code> gives:</p> <pre><code>In [27]: circulant(r) Out[27]: array([[1, 4, 3, 2], [2, 1, 4, 3], [3, 2, 1, 4], [4, 3, 2, 1]]) </code></pre> <p>With some help from <code>np.roll()</code>, you can get your desired array:</p> <pre><code>In [28]: circulant(np.roll(r, -1)).T[:-1] Out[28]: array([[2, 3, 4, 1], [1, 2, 3, 4], [4, 1, 2, 3]]) </code></pre> <p>Or:</p> <pre><code>In [29]: circulant(np.roll(r[::-1], -1))[1:] Out[29]: array([[2, 3, 4, 1], [1, 2, 3, 4], [4, 1, 2, 3]]) </code></pre>
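<p>For completeness, a pure-NumPy alternative using fancy indexing, in case pulling in scipy is undesirable. The shifts here are chosen to match the example, and can be extended to any number of rows or rolled in either direction:</p> <pre><code>import numpy as np

r = np.array([1, 2, 3, 4])
shifts = np.array([-1, 0, 1])                        # one np.roll-style shift per row
idx = (np.arange(len(r)) - shifts[:, None]) % len(r)

print(r[idx])
# [[2 3 4 1]
#  [1 2 3 4]
#  [4 1 2 3]]
</code></pre>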
python|arrays|numpy
2
2,838
53,788,828
How to predict a label in MultiClass classification model in pytorch?
<p>I am currently working on my mini-project, where I predict movie genres based on their posters. In the dataset that I have, each movie can have from 1 to 3 genres, therefore each instance can belong to multiple classes. I have a total of 15 classes (15 genres). So now I am facing the problem of how to make predictions for this particular setup using pytorch.</p> <p>In the pytorch CIFAR tutorial, each instance can have only one class (for example, if an image is a car it should belong to the class of cars) and there are 10 classes in total. In that case, model training is defined in the following way (copying the code snippet from the pytorch website):</p> <pre><code>import torch.optim as optim

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
</code></pre> <p>Question 1 (for the training part): What would you suggest to use as the loss function? I was thinking about BCEWithLogitsLoss() but I am not sure how good it will be.</p> <p>Then the accuracy of the predictions on the test set is defined in the following way, for the entire network:</p> <pre><code>correct = 0
total = 0
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs.data, 1)
        total += labels.size(0)
        correct += (predicted == labels).sum().item()

print('Accuracy of the network on the 10000 test images: %d %%' % (
    100 * correct / total))
</code></pre> <p>and for each class:</p> <pre><code>class_correct = list(0. for i in range(10))
class_total = list(0. for i in range(10))
with torch.no_grad():
    for data in testloader:
        images, labels = data
        outputs = net(images)
        _, predicted = torch.max(outputs, 1)
        c = (predicted == labels).squeeze()
        for i in range(4):
            label = labels[i]
            class_correct[label] += c[i].item()
            class_total[label] += 1


for i in range(10):
    print('Accuracy of %5s : %2d %%' % (
        classes[i], 100 * class_correct[i] / class_total[i]))
</code></pre> <p>where the output is as follows:</p> <pre><code>Accuracy of plane : 36 %
Accuracy of   car : 40 %
Accuracy of  bird : 30 %
Accuracy of   cat : 19 %
Accuracy of  deer : 28 %
Accuracy of   dog : 17 %
Accuracy of  frog : 34 %
Accuracy of horse : 43 %
Accuracy of  ship : 57 %
Accuracy of truck : 35 %
</code></pre> <p>Now here is question 2: How can I determine the accuracy so it would look like the following?</p> <p>For example:</p> <pre><code>The Matrix (1999)             ['Action: 91%', 'Drama: 25%', 'Adventure: 13%']
The Others (2001)             ['Drama: 76%', 'Horror: 65%', 'Action: 41%']
Alien: Resurrection (1997)    ['Horror: 67%', 'Action: 64%', 'Drama: 43%']
The Martian (2015)            ['Drama: 95%', 'Adventure: 81%']
</code></pre> <p>Considering that not every movie has 3 genres; sometimes it is 2 and sometimes 1.
So as I see it, I should find the 3, 2 or 1 maximum values of my output list, which is a list over the 15 genres. So, for example, if</p> <p>my predicted genres are [Movie, Adventure] then</p> <p>some_kind_of_function(outputs) should give me an output of</p> <p>[1 0 0 0 0 0 0 0 0 0 0 1 0 0 0],</p> <p>which I can compare afterwards with the ground truth. I don't think torch.max will work in this case, because it gives only one max value from the [weights array], so:</p> <p>What's the best way to implement it?</p> <p>Thank you in advance, I appreciate any help or suggestions :)</p>
<ol> <li>You're right: you're looking to perform binary classification (is poster X a drama movie or not? Is it an action movie or not?) for each poster-genre pair. <code>BinaryCrossEntropy(WithLogits)</code> is the way to go.</li> <li>Regarding the best metric to evaluate your resulting algorithm, it's up to you and what <em>you</em> are looking for. But you may want to investigate ideas like <a href="https://en.wikipedia.org/wiki/Precision_and_recall" rel="nofollow noreferrer">precision and recall</a> or the <a href="https://en.wikipedia.org/wiki/F1_score" rel="nofollow noreferrer">f1 score</a>. <em>Personally</em>, I would probably pick the top 3 predictions for each poster (since 3 is the maximum number of genres assigned to a poster) and check whether the expected ones show up with high probability and whether the unexpected ones (in the case of a movie with only 2 "ground truth" genres) end up in the last places, with significantly less probability assigned.</li> </ol>
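<p>Regarding the <code>some_kind_of_function(outputs)</code> from the question: with a <code>BCEWithLogitsLoss</code>-trained model, a common sketch is to push the logits through a sigmoid and threshold them. The 0.5 threshold is an assumption you may want to tune, and <code>net</code>/<code>images</code> are stand-ins for your own model and batch:</p> <pre><code>import torch

logits = net(images)                    # shape (batch, 15), raw scores
probs = torch.sigmoid(logits)           # independent per-genre probabilities
preds = (probs &gt; 0.5).int()             # multi-hot vector like [1, 0, ..., 1, 0]

values, indices = torch.topk(probs, k=3, dim=1)   # or: report the 3 most likely genres
</code></pre>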
conv-neural-network|pytorch|multilabel-classification|multiclass-classification
1
2,839
53,723,217
Is there a version of TensorFlow not compiled for AVX instructions?
<p>I'm trying to get TensorFlow up on my Chromebook, not the best place, I know, but I just want to get a feel for it. I haven't done much work in the Python dev environment, or in any dev environment for that matter, so bear with me. After figuring out pip, I installed TensorFlow and tried to import it, receiving this error:</p> <pre><code>Python 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow as tf 2018-12-11 06:09:54.960546: F tensorflow/core/platform/cpu_feature_guard.cc:37] The TensorFlow library was compiled to use AVX instructions, but these aren't available on your machine. Aborted (core dumped) </code></pre> <p>After some research, I have discovered that my processor (an Intel Celeron N2840 (Bay Trail-M Architecture)) does not support AVX instructions, so I was wondering if there was a way to use a version compiled for some other instruction set. Cog tells me I can use MMX and various SSEs (whatever the hell that means).</p> <p>P.S. This is sort of a duplicate of <a href="https://stackoverflow.com/questions/53590486/tensorflow-error-using-avx-instructions-on-linux-while-working-on-windows-on-the">TensorFlow error using AVX instructions on Linux while working on Windows on the same machine</a> but not entirely. Plus I can't comment because I don't have 50 reputation.</p> <p>P.P.S. I looked at <a href="https://stackoverflow.com/questions/41293077/how-to-compile-tensorflow-with-sse4-2-and-avx-instructions?rq=1">How to compile Tensorflow with SSE4.2 and AVX instructions?</a> and got scared</p>
<p>A best-practices approach suggested by <a href="https://stackoverflow.com/users/224132/peter-cordes">peter-cordes</a> is to see what gcc makes of your CPU's capabilities by issuing the following:</p> <pre><code>gcc -O3 -fverbose-asm -march=native -xc /dev/null -S -o- | less
</code></pre> <p>This command will provide information (all of it) about your CPU's capabilities from the point of view of gcc, which is going to do the build, so gcc's view is what matters.</p> <p>When does this come up? When a program offers to tailor itself to your CPU. Dang. What do I know about my CPU? Well, the line above will tell you all you need to know.</p> <p>That said, generally, developers promoting CPU-specific builds will state or suggest a list of things that go faster/better/stronger if your CPU has *, and the above will give you *. Read carefully what you see. If you don't have a feature, you don't want it enabled, i.e.</p> <pre><code>-mno-avx (or whatever you don't want; in my case it was avx)
</code></pre> <p>A good overview of installing a CPU-tuned build on older CPUs is provided by <a href="https://tech.amikelive.com/node-882/how-to-build-and-install-the-latest-tensorflow-without-cuda-gpu-and-with-optimized-cpu-performance-on-ubuntu/" rel="nofollow noreferrer">Mikael Fernandez Simalango</a> for Ubuntu 16.04 LTS. It assumes a python2.7 environment but translates easily to python3. The heart of the matter is extracting which CPU instruction extensions are available on your particular CPU (to be used in addition to -march=native) via /proc/cpuinfo. Note, though, that the snippet only accepts a fixed list of flags, so it may be better to actually read through the gcc output above and reflect:</p> <pre><code>grep flags -m1 /proc/cpuinfo | cut -d ":" -f 2 | tr '[:upper:]' '[:lower:]' | { read FLAGS; OPT="-march=native"; for flag in $FLAGS; do case "$flag" in "sse4_1" | "sse4_2" | "ssse3" | "fma" | "cx16" | "popcnt" | "avx" | "avx2") OPT+=" -m$flag";; esac; done; MODOPT=${OPT//_/\.}; echo "$MODOPT"; }
</code></pre> <p>Running this on my old box outputs:</p> <pre><code>-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt
</code></pre> <p>It gets part of the way there. What is not obvious is how to say 'not this' and 'not that', which for old CPUs would most likely be -mno-avx.</p> <p>For an old CPU, the right -march matters, and <a href="https://unix.stackexchange.com/questions/230634/how-to-find-out-intel-architecture-family-from-command-line">Nephanth</a> very usefully addresses this:</p> <pre><code>gcc -march=native -Q --help=target|grep march
</code></pre> <p>produces</p> <pre><code>-march= westmere
</code></pre> <p>which means my response to the ./compile question should be, or might be (note the quotes around 'westmere', which also appear in the gcc docs, so presumably they are there for a reason):</p> <pre><code>-march='westmere' -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx
</code></pre> <p>but this is probably much better (see the discussion below):</p> <pre><code>-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -mno-avx
</code></pre> <p>The -mno-avx is an option for gcc, and results, after many hours, in</p> <pre><code>Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import tensorflow as tf
&gt;&gt;&gt;
&gt;&gt;&gt; tf.__version__
'2.0.0-alpha0'
</code></pre> <p>which looks like success.</p> <p>Restated: in either order, find out which instructions are (and are not) supported by your CPU, and state those explicitly.</p>
python|tensorflow|avx
5
2,840
15,806,414
Storing multidimensional arrays in pandas DataFrame columns
<p>I'm hoping to use pandas as the main Trace (series of points in parameter space from MCMC) object. </p> <p>I have a list of dicts of string->array which I would like to store in pandas. The keys in the dicts are always the same, and for each key the shape of the numpy array is always the same, but the shape may be different for different keys and could have a different number of dimensions. </p> <p>I had been using <code>self.append(dict_list, ignore_index = True)</code> which seems to work well for 1d values, but for nd>1 values pandas stores the values as objects which doesn't allow for nice plotting and other nice things. Any suggestions on how to get better behavior?</p> <p><strong>Sample data</strong></p> <pre><code>point = {'x': array(-0.47652306228698005), 'y': array([[-0.41809043], [ 0.48407823]])} points = 10 * [ point] </code></pre> <p>I'd like to be able to do something like </p> <pre><code>df = DataFrame(points) </code></pre> <p>or </p> <pre><code>df = DataFrame() df.append(points, ignore_index=True) </code></pre> <p>and have </p> <pre><code>&gt;&gt; df['x'][1].shape () &gt;&gt; df['y'][1].shape (2,1) </code></pre>
<p>The relatively-new library <em>xray</em>[1] has <code>Dataset</code> and <code>DataArray</code> structures that do exactly what you ask.</p> <p>Here it is my take on your problem, written as an <em>IPython</em> session:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; import xray &gt;&gt;&gt; ## Prepare data: &gt;&gt;&gt; # &gt;&gt;&gt; point = {'x': np.array(-0.47652306228698005), ... 'y': np.array([[-0.41809043], ... [ 0.48407823]])} &gt;&gt;&gt; points = 10 * [point] &gt;&gt;&gt; ## Convert to Xray DataArrays: &gt;&gt;&gt; # &gt;&gt;&gt; list_x = [p['x'] for p in points] &gt;&gt;&gt; list_y = [p['y'] for p in points] &gt;&gt;&gt; da_x = xray.DataArray(list_x, [('x', range(len(list_x)))]) &gt;&gt;&gt; da_y = xray.DataArray(list_y, [ ... ('x', range(len(list_y))), ... ('y0', range(2)), ... ('y1', [0]), ... ]) </code></pre> <p>These are the two <code>DataArray</code> instances we built so far:</p> <pre><code>&gt;&gt;&gt; print(da_x) &lt;xray.DataArray (x: 10)&gt; array([-0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306, -0.47652306]) Coordinates: * x (x) int32 0 1 2 3 4 5 6 7 8 9 &gt;&gt;&gt; print(da_y.T) ## Transposed, to save lines. &lt;xray.DataArray (y1: 1, y0: 2, x: 10)&gt; array([[[-0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043, -0.41809043], [ 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823, 0.48407823]]]) Coordinates: * x (x) int32 0 1 2 3 4 5 6 7 8 9 * y0 (y0) int32 0 1 * y1 (y1) int32 0 </code></pre> <p>We can now merge these two <code>DataArray</code> on their common <code>x</code> dimension into a <code>DataSet</code>:</p> <pre><code>&gt;&gt;&gt; ds = xray.Dataset({'X':da_x, 'Y':da_y}) &gt;&gt;&gt; print(ds) &lt;xray.Dataset&gt; Dimensions: (x: 10, y0: 2, y1: 1) Coordinates: * x (x) int32 0 1 2 3 4 5 6 7 8 9 * y0 (y0) int32 0 1 * y1 (y1) int32 0 Data variables: X (x) float64 -0.4765 -0.4765 -0.4765 -0.4765 -0.4765 -0.4765 -0.4765 ... Y (x, y0, y1) float64 -0.4181 0.4841 -0.4181 0.4841 -0.4181 0.4841 -0.4181 ... </code></pre> <p>And we can finally access and aggregate data the way you wanted:</p> <pre><code>&gt;&gt;&gt; ds['X'].sum() &lt;xray.DataArray 'X' ()&gt; array(-4.765230622869801) &gt;&gt;&gt; ds['Y'].sum() &lt;xray.DataArray 'Y' ()&gt; array(0.659878) &gt;&gt;&gt; ds['Y'].sum(axis=1) &lt;xray.DataArray 'Y' (x: 10, y1: 1)&gt; array([[ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878], [ 0.0659878]]) Coordinates: * x (x) int32 0 1 2 3 4 5 6 7 8 9 * y1 (y1) int32 0 &gt;&gt;&gt; np.all(ds['Y'].sum(axis=1) == ds['Y'].sum(dim='y0')) True &gt;&gt;&gt;&gt; ds['X'].sum(dim='y0') Traceback (most recent call last): ValueError: 'y0' not found in array dimensions ('x',) </code></pre> <p>[1] A library for handling N-dimensional data with labels, like pandas does for 2D: <a href="http://xray.readthedocs.org/en/stable/data-structures.html#dataset" rel="noreferrer">http://xray.readthedocs.org/en/stable/data-structures.html#dataset</a></p>
python|pandas
12
2,841
72,047,477
Neural network to predict air flow from coordinate and fan speed
<p>I'm trying to get a neural network to predict air velocity in a container. The input to the neural network is a coordinate (x, y) and the fan speed (in percent), from which it should approximate the velocities (u, v) at that point. The grid looks like <a href="https://i.stack.imgur.com/B4qpQ.png" rel="nofollow noreferrer">this (ignore the red)</a>, where each green point is a coordinate. There are 10007 points.</p> <p>I have used COMSOL to simulate the model with various fan speed scenarios (60%, 70%, 80%, 90%, 100%) and saved the data points for the above grid.</p> <p>For each scenario, I have 10007 coordinates (x, y), the fan speed, and 10007 data points (u, v).</p> <p>For a fan speed of 100%, the u and v velocities look like <a href="https://i.stack.imgur.com/zhAhv.png" rel="nofollow noreferrer">this</a>; the x and y labels are the positions.</p> <p>Here I would like to point out that most of the velocities are close to 0 and only a small portion of them are higher.</p> <p>To sum it up:</p> <ul> <li><p>3 inputs: x, y and fan speed</p> </li> <li><p>x and y identical for all scenarios (same points), only difference is the fan speed</p> </li> <li><p>10007*5 data points (5 scenarios)</p> </li> <li><p>2 outputs: u and v</p> </li> </ul> <p>I have tried a fully connected neural network with 5 hidden layers of 40 neurons each, an mse loss function, and the Adam optimizer. For this particular case I tried a batch size of 10007, so one iteration covers a whole scenario, but I also tried 10 and got roughly the same <a href="https://i.stack.imgur.com/UsVka.png" rel="nofollow noreferrer">results</a>.</p> <p>It seems like it's trying to minimize the loss by making the prediction equal to the majority of the data, which is very small, and neglecting the higher velocities, if that makes sense.</p> <p>So, the questions:</p> <ul> <li><p>How to make the neural network put higher focus on the coordinates with higher velocities?</p> </li> <li><p>What would be the best NN architecture for this kind of operation?</p> </li> <li><p>What would your guess be on the depth and width of the neural network?</p> </li> <li><p>Should I do any kind of scaling/standardization? If so, which?</p> </li> </ul> <p>Any input would be appreciated.</p> <p>Code:</p> <pre><code>import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.models import Sequential, Model, load_model
from tensorflow.keras.layers import Activation, Dense, BatchNormalization, InputLayer, Lambda
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.initializers import glorot_normal
from tensorflow.math import square, sqrt

N = 40
layers = [N,N,N,N,N]

def NN(layers):

    X = tf.keras.layers.Input(shape=(1), name = &quot;X&quot;)
    Y = tf.keras.layers.Input(shape=(1), name = &quot;Y&quot;)
    XY = tf.keras.layers.Concatenate(name = &quot;XY_Merging&quot;)([X, Y])

    Inflow = tf.keras.layers.Input(shape=(1), name = &quot;Inflow&quot;)

    concat = tf.keras.layers.Concatenate(name = &quot;Merging&quot;)([XY, Inflow])

    x = concat
    for i in layers:
        x = Dense(units=i, activation=&quot;tanh&quot;)(x)

    Output = Dense(units=2, kernel_initializer=&quot;glorot_normal&quot;, name = &quot;Output&quot;)(x)

    model = Model(inputs=[X, Y, Inflow], outputs=Output)

    return model

model = NN(layers)

model.compile(optimizer=Adam(learning_rate=0.0000001), loss='mse', metrics=['mae'])
model.summary()

model.fit({&quot;X&quot;:y_train, &quot;Y&quot;:x_train, &quot;Inflow&quot;:Inflow_train}, Y_train, batch_size=10007, epochs=1000, verbose=1, shuffle = False)
</code></pre> <p><a href="https://i.stack.imgur.com/vJDFT.png" rel="nofollow noreferrer">Model Summary</a></p>
<p>One thing that might help in general is to set up your CFD model grid so that you have a much higher grid density in the areas where you have meaningful results, and a sparser grid where you expect the results to be close to zero. If most of your data (all those 0s) provides no useful information to the ML model, it will be hard to get good results.</p> <p>This might also be a case where a simpler regression model works better than a neural network. Have you tried that?</p>
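<p>As a minimal sketch of that second suggestion (the array names and shapes here are assumptions, not from the post), a quick multi-output regression baseline could look like this:</p> <pre><code>import numpy as np
from sklearn.ensemble import RandomForestRegressor

# X: columns (x, y, fan_speed); Y: columns (u, v) -- placeholders for the 10007*5 samples
X = np.random.rand(50035, 3)
Y = np.random.rand(50035, 2)

model = RandomForestRegressor(n_estimators=100)
model.fit(X, Y)               # multi-output regression works out of the box
pred = model.predict(X[:5])   # predicted (u, v) for the first few points
</code></pre> <p>If even such a baseline captures the high-velocity regions better than the network, that is a strong hint the problem lies in the data distribution rather than the architecture.</p>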
python|tensorflow|neural-network
0
2,842
71,865,761
Why the rank function is not working when I set axis=1?
<p>I have this code:</p> <pre><code>y = pd.DataFrame({'num':[10,12,13,11,14]})
out = (y.join(y['num'].quantile([0.25,0.5,0.75,1])
             .set_axis([f'{i}Q' for i in range(1,5)], axis=0)
             .to_frame().T
             .pipe(lambda x: x.loc[x.index.repeat(len(y))])
             .reset_index(drop=True))
        .assign(Rank=y['num'].rank(method='first'))
      )
</code></pre> <p>The code is working as it is, but it is not returning what I want. I was trying to rank <code>num</code> considering only its own row, so</p> <pre><code>10 is rank 1 because 10 &lt;= 1Q value
12 is rank 2 **(not 3)** because 2Q &lt;= 12 &lt; 3Q value
13 is rank 3 **(not 4)** because 3Q &lt;= 13 &lt; 4Q value
11 is rank 1 **(not 2)** because 1Q &lt;= 11 &lt; 2Q value
14 is rank 4 **(not 5)** because 14 &gt;= 4Q
</code></pre> <p>I tried to change this line:</p> <pre><code>.assign(Rank=y['num'].rank(method='first'))
</code></pre> <p>to:</p> <pre><code> .assign(Rank=y['num'].rank(axis=1,method='first'))
</code></pre> <p>But it didn't work.</p> <p>What am I missing here?</p>
<p>Building on what you already have here:</p> <pre><code>y = y.join(y['num'].quantile([0.25,0.5,0.75,1])
          .set_axis([f'{i}Q' for i in range(1,5)], axis=0)
          .to_frame().T
          .pipe(lambda x: x.loc[x.index.repeat(len(y))])
          .reset_index(drop=True))
</code></pre> <p>we could add the <code>Rank</code> column as follows. The idea is to compare the <code>num</code> column with the quantile columns and take the first column name whose quantile value is greater than or equal to the <code>num</code> value. As it happens, each quantile column name already carries its rank number, so we use those to assign values:</p> <pre><code>y['Rank'] = (y.drop(columns='num').ge(y['num'], axis=0)
              .pipe(lambda x: x*x.columns).replace('', pd.NA)
              .bfill(axis=1)['1Q'].str[0].astype(int))
</code></pre> <p>Output:</p> <pre><code>   num    1Q    2Q    3Q    4Q  Rank
0   10  11.0  12.0  13.0  14.0     1
1   12  11.0  12.0  13.0  14.0     2
2   13  11.0  12.0  13.0  14.0     3
3   11  11.0  12.0  13.0  14.0     1
4   14  11.0  12.0  13.0  14.0     4
</code></pre>
python|python-3.x|pandas|dataframe|jupyter-notebook
1
2,843
72,059,571
how can I create a single box plot?
<p>dataset: <a href="https://github.com/rashida048/Datasets/blob/master/StudentsPerformance.csv" rel="nofollow noreferrer">https://github.com/rashida048/Datasets/blob/master/StudentsPerformance.csv</a></p> <pre><code>from bokeh.models import Range1d #used to set x and y limits
#p.y_range=Range1d(120, 230)

def box_plot(df, vals, label, ylabel=None,xlabel=None,title=None):

    # Group Data frame
    df_gb = df.groupby(label)
    # Get the categories
    cats = list(df_gb.groups.keys())

    # Compute quartiles for each group
    q1 = df_gb[vals].quantile(q=0.25)
    q2 = df_gb[vals].quantile(q=0.5)
    q3 = df_gb[vals].quantile(q=0.75)

    # Compute interquartile region and upper and lower bounds for outliers
    iqr = q3 - q1
    upper_cutoff = q3 + 1.5*iqr
    lower_cutoff = q1 - 1.5*iqr

    # Find the outliers for each category
    def outliers(group):
        cat = group.name
        outlier_inds = (group[vals] &gt; upper_cutoff[cat]) \
                     | (group[vals] &lt; lower_cutoff[cat])
        return group[vals][outlier_inds]

    # Apply outlier finder
    out = df_gb.apply(outliers).dropna()

    # Points of outliers for plotting
    outx = []
    outy = []
    for cat in cats:
        # only add outliers if they exist
        if cat in out and not out[cat].empty:
            for value in out[cat]:
                outx.append(cat)
                outy.append(value)

    # If outliers, shrink whiskers to smallest and largest non-outlier
    qmin = df_gb[vals].min()
    qmax = df_gb[vals].max()
    upper = [min([x,y]) for (x,y) in zip(qmax, upper_cutoff)]
    lower = [max([x,y]) for (x,y) in zip(qmin, lower_cutoff)]

    cats = [str(i) for i in cats]

    # Build figure
    p = figure(sizing_mode='stretch_width', x_range=cats,height=300,toolbar_location=None)
    p.xgrid.grid_line_color = None
    p.ygrid.grid_line_width = 2
    p.yaxis.axis_label = ylabel
    p.xaxis.axis_label = xlabel
    p.title=title
    p.y_range.start=0
    p.title.align = 'center'

    # stems
    p.segment(cats, upper, cats, q3, line_width=2, line_color=&quot;black&quot;)
    p.segment(cats, lower, cats, q1, line_width=2, line_color=&quot;black&quot;)

    # boxes
    p.rect(cats, (q3 + q1)/2, 0.5, q3 - q1, fill_color=['#a50f15', '#de2d26', '#fb6a4a', '#fcae91', '#fee5d9'],
           alpha=0.7, line_width=2, line_color=&quot;black&quot;)

    # median (almost-0 height rects simpler than segments)
    p.rect(cats, q2, 0.5, 0.01, line_color=&quot;black&quot;, line_width=2)

    # whiskers (almost-0 height rects simpler than segments)
    p.rect(cats, lower, 0.2, 0.01, line_color=&quot;black&quot;)
    p.rect(cats, upper, 0.2, 0.01, line_color=&quot;black&quot;)

    # outliers
    p.circle(outx, outy, size=6, color=&quot;black&quot;)

    return p

p = box_plot(df, 'Total', 'race/ethnicity', ylabel='Total spread',xlabel='',title='BoxPlot')
show(p)
</code></pre> <p><a href="https://i.stack.imgur.com/xzkOU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xzkOU.png" alt="Boxplot" /></a></p> <p>Hi there, with the code and dataset above I am able to produce a boxplot, provided I pass in categorical variables. However, I am unable to produce anything when I try to make a boxplot for a single column, for example just checking the spread of the math scores. I tried to do</p> <pre><code>cats = df['math score']
</code></pre> <p>but it didn't work. Any suggestions?</p>
<p>I am not sure if it is best to implement both in one function, but if this is your goal, one solution is to add a few <code>if-else</code> conditions.</p> <p>Here is a description of the changes:</p> <p>First give <code>label</code> a default.</p> <pre><code># old
# def box_plot(df, vals, label, ylabel=None,xlabel=None,title=None):

# new
def box_plot(df, vals, label=None, ylabel=None,xlabel=None,title=None):
</code></pre> <p>Then add an <code>if-else</code> part for the groupby section.</p> <pre><code># old
# # Group Data frame
# df_gb = df.groupby(label)
# # Get the categories
# cats = list(df_gb.groups.keys())

# new
if label is not None:
    # Group Data frame
    df_gb = df.groupby(label)
    # Get the categories
    cats = list(df_gb.groups.keys())
else:
    df_gb = df[[vals]]
    cats = [vals]
</code></pre> <p>Now the calculation for the outliers is a bit different, because we don't have to loop over a number of columns. Only one column is left.</p> <pre><code>if label is not None:
    out = df_gb.apply(outliers).dropna()
else:
    out = df[(df[vals] &gt; upper_cutoff) | (df[vals] &lt; lower_cutoff)]
</code></pre> <p>The upper and lower parts are now <code>float</code>s and not <code>list</code>s.</p> <pre><code>if label is not None:
    upper = [min([x,y]) for (x,y) in zip(qmax, upper_cutoff)]
    lower = [max([x,y]) for (x,y) in zip(qmin, lower_cutoff)]
else:
    upper = min(qmax, upper_cutoff)
    lower = max(qmin, lower_cutoff)
</code></pre> <p>I also added (changed) the lines below, to avoid a warning.</p> <pre><code>colors = ['#a50f15', '#de2d26', '#fb6a4a', '#fcae91', '#fee5d9'][:len(cats)]
p.rect(cats, (q3 + q1)/2, 0.5, q3 - q1, fill_color=colors,
       alpha=0.7, line_width=2, line_color=&quot;black&quot;)
</code></pre> <p>With these changes the output for</p> <pre><code>p = box_plot(df, 'math score', 'race/ethnicity', ylabel='Total spread',xlabel='',title='BoxPlot')
</code></pre> <p>is still the same, but</p> <pre><code>p = box_plot(df, 'math score', ylabel='Total spread',xlabel='',title='BoxPlot')
</code></pre> <p>now gives us a boxplot.</p> <p><a href="https://i.stack.imgur.com/kpXws.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kpXws.png" alt="box plot for &quot;math score&quot;" /></a></p>
python|pandas|bokeh|boxplot
1
2,844
19,096,047
Can not install pandas on windows 64 bit
<p>Trying to install pandas on a new Windows 64-bit system. I did so using:</p> <pre><code>pip install pandas
</code></pre> <p>The installation aborts with an error when trying to install pytz (from pandas):</p> <blockquote> <p>Could not find a version that satisfies the requirement pytz (from pandas)</p> </blockquote> <p>Trying to import pandas fails with the error:</p> <blockquote> <p>Cannot import name hashtable</p> </blockquote> <p>How can I overcome it? Or should I just install the 32-bit version of pandas?</p>
<p>If you're running CPython you can try to install it using the Windows binaries available here: <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#pandas" rel="nofollow">http://www.lfd.uci.edu/~gohlke/pythonlibs/#pandas</a></p> <p>You can find several binaries there, for different Python versions and for x64/win32. It also has links to its dependencies (like pytz) if you don't have them all. My complete Windows Python environment was built based on that.</p>
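<p>For completeness, once a binary wheel has been downloaded from that page, it can be installed with pip; the filenames below are only placeholders, so use the ones matching your Python version and architecture:</p> <pre><code>pip install C:\Downloads\pandas-0.12.0-cp27-none-win_amd64.whl
pip install C:\Downloads\pytz-2013b-py2.py3-none-any.whl
</code></pre>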
python|windows|pandas|pip
0
2,845
22,233,094
Create a Pandas DataFrame from series without duplicating their names?
<p>Is it possible to create a DataFrame from a list of series without duplicating their names?</p> <p>For example, creating the same DataFrame as:</p> <pre><code>&gt;&gt;&gt; pd.DataFrame({
    "foo": data["foo"],
    "bar": other_data["bar"]
})
</code></pre> <p>But without needing to explicitly name the columns?</p>
<p>Try <code>pandas.concat</code>, which takes a list of items to combine as its argument:</p> <pre><code>df1 = pd.DataFrame(np.random.randn(100, 4), columns=list('abcd'))
df2 = pd.DataFrame(np.random.randn(100, 3), columns=list('xyz'))

df3 = pd.concat([df1['a'], df2['y']], axis=1)
</code></pre> <p>Note that you need to use <code>axis=1</code> to stack things together side by side and <code>axis=0</code> (which is the default) to combine them one-over-the-other.</p>
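<p>Applied to the frames from the question (assuming <code>data</code> and <code>other_data</code> are the objects shown there), the series carry their own names, so no explicit column names are needed:</p> <pre><code>df = pd.concat([data['foo'], other_data['bar']], axis=1)
# df.columns is now Index(['foo', 'bar'], dtype='object')
</code></pre>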
python|pandas
3
2,846
55,230,516
tensorflow more metrics with custom estimator
<p>I created a custom estimator that uses <code>binary_classification_head()</code> under the hood. Everything works well, but the problem is with the visible metrics. I'm using logging at level <code>tf.logging.set_verbosity(tf.logging.INFO)</code> and TensorBoard, but I only see the loss value. I added this code, but it doesn't help.</p> <pre><code>def my_accuracy(labels, predictions):
    return {'accuracy': tf.metrics.accuracy(labels, predictions['logistic'])}

classifier = tf.contrib.estimator.add_metrics(classifier, my_accuracy)
</code></pre> <p>Do you know another way to add metrics?</p>
<p>You need to place the relevant metrics functions inside your <code>model_fn</code>.</p> <p>For example:</p> <pre><code>tf.summary.image('input_image', input_image, max_outputs)
for v in tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES):
    tf.summary.histogram(v.name, v)
</code></pre> <p>Metrics that include an <code>update_op</code>, like accuracy or f1 score, need to be fed to <code>eval_metric_ops</code>. Indexing is used because they return two values: the metric value and the update operation.</p> <pre><code>f1 = tf.contrib.metrics.f1_score(labels, predictions, num_thresholds)
accuracy = tf.metrics.accuracy(labels, predictions)
tf.summary.scalar('accuracy', accuracy[1])

eval_metric_ops = {
     'f1_score': f1,
     'accuracy': accuracy
     }

return tf.estimator.EstimatorSpec(mode=mode,
                                  loss=loss,
                                  train_op=train_op,
                                  eval_metric_ops=eval_metric_ops,
                                  )
</code></pre> <p>The <code>eval_metric_ops</code> dict can be fed in both train mode and eval mode.</p> <p>In case you're using a canned estimator, you can just use <code>add_metrics</code>.</p> <p>Edit: As per the official documentation, you can use <code>binary_classification_head</code> with a canned estimator or inside a <code>model_fn</code> function which returns an estimator spec. See:</p> <pre><code>my_head = tf.contrib.estimator.binary_classification_head()
my_estimator = tf.estimator.DNNEstimator(
    head=my_head,
    hidden_units=...,
    feature_columns=...)
</code></pre> <p>In this case you should be able to add metrics even without the <code>add_metrics</code> function.</p>
tensorflow|metrics|tensorboard|tensorflow-estimator
0
2,847
55,419,362
ModuleNotFoundError: No module named 'mport pandas as pd\r'
<blockquote> <p>ModuleNotFoundError: No module named 'mport pandas as pd\r'</p> </blockquote> <p>But there is no line like 'mport pandas as pd\r' anywhere in the source code.</p> <p>This is the code; there is nothing like 'mport' here, not even in the other file that is imported by the code.</p> <pre><code>{
import cv2
import numpy as np
import pandas as pd
import nltk
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import pairwise_distances
import pickle
from utils import display_img

data = pd.read_pickle('pickles/dataclean.py')

stop_words = set(nltk.corpus.stopwords.words('english'))
def nlp_preprocessing(total_text, index, column):
    if type(total_text) is not int:
        string = ""
        for words in total_text.split():
            # remove the special chars in review like '"#$@!%^&amp;*()_+-~?&gt;
</code></pre> <p>dataclean.py</p> <pre><code>{import pandas as pd
# loading the data using pandas' read_json file.
data = pd.read_json('data/tops_fashion.json')

data = data.loc[~data['formatted_price'].isnull()]#this will remove data with no price
data =data.loc[~data['color'].isnull()]#remove data with no color
#print(sum(data.duplicated('title')))#tell about duplicates
from remove_duplicate import remove_dup1,remove_dup2
data=remove_dup1(data)#removes adjacent sorted same title
data=remove_dup2(data)#this will take time approx half hour
data.to_pickle('pickels/dataclean')}
</code></pre> <p>There is nothing in dataclean.py related to 'mport' either. I searched on Google, but no details are available about this error. Generally a 'mport' kind of error points to a syntax error, but there is no such error here; instead, Python tried to search for it as a module.</p> <p>This code is part of a product recommendation system.</p> <p>Expected result: should run smoothly. Actual result:</p> <pre><code>Traceback (most recent call last):
  File "recom.py", line 11, in &lt;module&gt;
    data = pd.read_pickle('pickles/dataclean.py')
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\io\pickle.py", line 180, in read_pickle
    return try_read(path, encoding='latin1')
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\io\pickle.py", line 175, in try_read
    lambda f: pc.load(f, encoding=encoding, compat=True))
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\io\pickle.py", line 149, in read_wrapper
    return func(f)
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\io\pickle.py", line 175, in &lt;lambda&gt;
    lambda f: pc.load(f, encoding=encoding, compat=True))
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\compat\pickle_compat.py", line 212, in load
    return up.load()
  File "C:\Users\DELL\Anaconda3\lib\pickle.py", line 1050, in load
    dispatch[key[0]](self)
  File "C:\Users\DELL\Anaconda3\lib\pickle.py", line 1309, in load_inst
    klass = self.find_class(module, name)
  File "C:\Users\DELL\Anaconda3\lib\site-packages\pandas\compat\pickle_compat.py", line 135, in find_class
    return super(Unpickler, self).find_class(module, name)
  File "C:\Users\DELL\Anaconda3\lib\pickle.py", line 1388, in find_class
    __import__(module, level=0)
ModuleNotFoundError: No module named 'mport pandas as pd\r'}
</code></pre>
<blockquote> <pre><code> File "recom.py", line 11, in &lt;module&gt;
    data = pd.read_pickle('pickles/dataclean.py')

//dataclean.py
{import pandas as pd
</code></pre> </blockquote> <p>You are trying to load a Python file as a pickle. Python and pickle are two completely different formats, so this is never going to work. I don't know what you are trying to do, or who suggested that you put Python code into curly braces to boot, but this is one of the craziest things I've ever seen people try.</p>
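<p>For reference, a minimal sketch of the intended round trip (the file names here are illustrative):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]})
df.to_pickle('pickles/dataclean.pkl')           # write an actual pickle file
data = pd.read_pickle('pickles/dataclean.pkl')  # read the pickle back, not the .py script
</code></pre> <p>Note also that your code writes to <code>pickels/dataclean</code> but reads <code>pickles/dataclean.py</code>; the two paths have to match.</p>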
python|pandas|machine-learning
1
2,848
9,924,135
fast Cartesian to Polar to Cartesian in Python
<p>I want to transform 2d arrays/images in Python to polar coordinates, process them, and subsequently transform them back to cartesian. The following is the result from the ImageJ <a href="http://rsbweb.nih.gov/ij/plugins/polar-transformer.html" rel="noreferrer">Polar Transformer</a> plugin (used on the concentric circles of the sample code):</p> <p><img src="https://i.stack.imgur.com/t38pW.jpg" alt="enter image description here"></p> <p>The number and dimensions of the images are quite large, so I was checking whether openCV has a fast and simple way to do this.</p> <p>I read about <code>cv.CartToPolar</code> and <code>cv.PolarToCart</code> but I failed to use them. I understand <code>LogPolar</code> better, where the input and output are arrays, and where you can set the center, interpolation, and inversion (i.e. <code>CV_WARP_INVERSE_MAP</code>). Is there a way to use CartToPolar/PolarToCart in a similar fashion?</p> <pre><code>import numpy as np
import cv

#sample 2D array that features concentric circles
circlesArr = np.ndarray((512,512),dtype=np.float32)
for i in range(10,600,10): cv.Circle(circlesArr,(256,256),i-10,np.random.randint(60,500),thickness=4)

#logpolar
lp = np.ndarray((512,512),dtype=np.float32)
cv.LogPolar(circlesArr,lp,(256,256),100,cv.CV_WARP_FILL_OUTLIERS)

#logpolar Inverse
lpinv = np.ndarray((512,512),dtype=np.float32)
cv.LogPolar(lp,lpinv,(256,256),100, cv.CV_WARP_INVERSE_MAP + cv.CV_WARP_FILL_OUTLIERS)

#display images
from scipy.misc import toimage
toimage(lp, mode="L").show()
toimage(lpinv, mode="L").show()
</code></pre> <p>This is for a tomography (CT) workflow where ring artifacts can be filtered out more easily if they appear as lines.</p>
<p>Recent versions of OpenCV support the function <code>cv2.linearPolar</code>. Alternatively, here is a solution that does not involve the use of OpenCV:</p> <pre><code>import numpy as np

def polar2cart(r, theta, center):

    x = r * np.cos(theta) + center[0]
    y = r * np.sin(theta) + center[1]
    return x, y

def img2polar(img, center, final_radius, initial_radius = None, phase_width = 3000):

    if initial_radius is None:
        initial_radius = 0

    theta , R = np.meshgrid(np.linspace(0, 2*np.pi, phase_width),
                            np.arange(initial_radius, final_radius))

    Xcart, Ycart = polar2cart(R, theta, center)

    Xcart = Xcart.astype(int)
    Ycart = Ycart.astype(int)

    if img.ndim ==3:
        polar_img = img[Ycart,Xcart,:]
        polar_img = np.reshape(polar_img,(final_radius-initial_radius,phase_width,3))
    else:
        polar_img = img[Ycart,Xcart]
        polar_img = np.reshape(polar_img,(final_radius-initial_radius,phase_width))

    return polar_img
</code></pre>
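<p>For the OpenCV route mentioned above, a minimal sketch of <code>cv2.linearPolar</code> on a synthetic image (with flags analogous to the ones in the question) would be:</p> <pre><code>import cv2
import numpy as np

img = np.zeros((512, 512), dtype=np.float32)
center = (256, 256)
max_radius = 256.0

# forward transform: cartesian to polar
polar = cv2.linearPolar(img, center, max_radius, cv2.WARP_FILL_OUTLIERS)

# inverse transform: polar back to cartesian
back = cv2.linearPolar(polar, center, max_radius,
                       cv2.WARP_INVERSE_MAP + cv2.WARP_FILL_OUTLIERS)
</code></pre>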
python|image-processing|opencv|numpy
5
2,849
56,821,137
How to format an If Statements with multiple conditionals inside a function
<p>I'm working on a function that evaluates two conditions from a dataframe and, when it encounters a NaN, passes a prearranged return value (based on the inputs) back to the dataframe. The first condition is a check whether the value of one column is NaN (obviously), followed by a check of another column to see which key id has been assigned (1, 2, 3, etc.). The eventual goal is to use the <code>.apply</code> method with the function to fill the NaN values in the original dataframe, leaving the existing values (if present) alone. What's getting me hung up is that this is the first time I've written anything like this to call within a dataframe, and I'm having issues with assignment within the control flow.</p> <p>This is using Python 3.6. I've tried playing around with multiple forms of the below, but everything consistently gives me the same type error as it tries to apply the function to the dataframe. This is not the actual dataframe, but one I made quickly to give you a gist of the issue I'm running into.</p> <pre><code>import pandas as pd
import numpy as np

frame = {'key' : [1,2,3,4,5],
         'height' : [70, 68, 74, 67, 72],
         'age' : [29,45,'N/A',51,34]}
frame = pd.DataFrame(frame)
frame.replace('N/A',np.nan)

def age (x):
    if (x['age'].isnull()) &amp; (x['key'] == 3):
        return x.replace(np.nan, 40)
    else:
        return x

result = frame.apply(age)
</code></pre> <p><a href="https://i.stack.imgur.com/5nz9v.png" rel="nofollow noreferrer">Here's a snapshot of the dataframe that I would like to amend</a></p> <p>So far I've tried amending the function in every way I can think of to get it to iterate over the dataframe. Obviously something is off in the function; ideally, the result would have updated the NaN value with 40.</p>
<p>Your problem can be addressed as shown below, if you really want to go with the custom function and <code>apply</code>.</p> <pre><code>import pandas as pd
import numpy as np
import math

frame = {'key' : [1,2,3,4,5],
         'height' : [70, 68, 74, 67, 72],
         'age' : [29,45,'N/A',51,34]}
frame = pd.DataFrame(frame)
frame = frame.replace('N/A',np.nan)

# function modified to compare the numpy float value with nan; the math library is used here
def age(row):
    if (math.isnan(row['age'])) &amp; (row['key'] == 3):
        return row.replace(np.nan, 40)
    else:
        return row

result = frame.apply(age, axis=1)  # here axis=1 passes a single row at a time to the function
</code></pre> <p>input dataframe:</p> <pre><code>key height  age
1   70     29.0
2   68     45.0
3   74     NaN
4   67     51.0
5   72     34.0
</code></pre> <p>result dataframe:</p> <pre><code>key  height  age
1.0  70.0   29.0
2.0  68.0   45.0
3.0  74.0   40.0
4.0  67.0   51.0
5.0  72.0   34.0
</code></pre> <p>I hope this helps; you can modify the function and the column dtypes as per your requirements.</p>
pandas|if-statement|conditional-statements
0
2,850
56,605,509
Pandas substring using another column as the index
<p>I'm trying to use one column containing the start index to subselect a string column.</p> <pre><code>df = pd.DataFrame({'string': ['abcdef', 'bcdefg'], 'start_index': [3, 5]})
expected = pd.Series(['def', 'g'])
</code></pre> <p>I know that you can take a substring with the following:</p> <pre><code>df['string'].str[3:]
</code></pre> <p>However, in my case, the start index may vary, so I tried:</p> <pre><code>df['string'].str[df['start_index']:]
</code></pre> <p>But it returns NaNs.</p> <p>EDIT: What if I don't want to use a loop / list comprehension; i.e. a vectorized method is preferred.</p> <p>EDIT2: In this small test case, it seems like the list comprehension is faster.</p> <pre><code>from itertools import islice

%timeit df.apply(lambda x: ''.join(islice(x.string, x.start_index, None)), 1)
%timeit pd.Series([x[y:] for x , y in zip(df.string,df.start_index) ])

631 µs ± 1.96 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
101 µs ± 233 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<p>Using for loop with <code>zip</code> of two columns , why we are using for loop here, you can check the <a href="https://stackoverflow.com/questions/54028199/for-loops-with-pandas-when-should-i-care">link</a> </p> <pre><code>[x[y:] for x , y in zip(df.string,df.start_index) ] Out[328]: ['def', 'g'] </code></pre>
python|string|pandas|substring
1
2,851
56,739,059
PySpark - map with lambda function
<p>I'm facing an issue when mixing python map and lambda functions on a Spark environment.</p> <p>Given df1, my source dataframe:</p> <pre><code>Animals | Food | Home ---------------------------------- Monkey | Banana | Jungle Dog | Meat | Garden Cat | Fish | House Elephant | Banana | Jungle Lion | Meat | Desert </code></pre> <p>I want to create another dataframe df2. It will contain two columns with a row per column of df1 (3 in my example). The first column would contain the name of df1 columns. The second column would contain an array of elements with the most occurrences (n=3 in the example below) and the count.</p> <pre><code>Column | Content ----------------------------------------------------------- Animals | [("Cat", 1), ("Dog", 1), ("Elephant", 1)] Food | [("Banana", 2), ("Meat", 2), ("Fish", 1)] Home | [("Jungle", 2), ("Desert", 1), ("Garden", 1)] </code></pre> <p>I tried to do it with python list, map and lambda functions but I had conflicts with PySpark functions:</p> <pre><code>def transform(df1): # Number of entry to keep per row n = 3 # Add a column for the count of occurence df1 = df1.withColumn("future_occurences", F.lit(1)) df2 = df1.withColumn("Content", F.array( F.create_map( lambda x: (x, [ str(row[x]) for row in df1.groupBy(x).agg( F.sum("future_occurences").alias("occurences") ).orderBy( F.desc("occurences") ).select(x).limit(n).collect() ] ), df1.columns ) ) ) return df2 </code></pre> <p>The error is:</p> <pre><code>TypeError: Invalid argument, not a string or column: &lt;function &lt;lambda&gt; at 0x7fc844430410&gt; of type &lt;type 'function'&gt;. For column literals, use 'lit', 'array', 'struct' or 'create_map' function. </code></pre> <p>Any idea how to fix it?</p> <p>Thanks a lot!</p>
<p>Here is one possible solution, in which the <code>Content</code> column will be an array of <code>StructType</code> with two named fields: <code>Content</code> and <code>count</code>.</p> <pre class="lang-python prettyprint-override"><code>from pyspark.sql.functions import col, collect_list, desc, lit, struct from functools import reduce def transform(df, n): return reduce( lambda a, b: a.unionAll(b), ( df.groupBy(c).count()\ .orderBy(desc("count"), c)\ .limit(n)\ .withColumn("Column", lit(c))\ .groupBy("Column")\ .agg( collect_list( struct( col(c).cast("string").alias("Content"), "count") ).alias("Content") ) for c in df.columns ) ) </code></pre> <p>This function will iterate through each of the columns in the input DataFrame, <code>df</code>, and count the occurrence of each value. Then we <code>orderBy</code> the count (descending) and the column value it self (alphabetically) and keep only the first <code>n</code> rows (<code>limit(n)</code>). </p> <p>Next, collect the values into an array of structs and finally <code>union</code> together the results for each column. Since the <code>union</code> requires each DataFrame to have the same schema, you will need to cast the column value to a string.</p> <pre class="lang-python prettyprint-override"><code>n = 3 df1 = transform(df, n) df1.show(truncate=False) #+-------+------------------------------------+ #|Column |Content | #+-------+------------------------------------+ #|Animals|[[Cat,1], [Dog,1], [Elephant,1]] | #|Food |[[Banana,2], [Meat,2], [Fish,1]] | #|Home |[[Jungle,2], [Desert,1], [Garden,1]]| #+-------+------------------------------------+ </code></pre> <p>This isn't <em>exactly</em> the same output that you asked for, but it will probably be sufficient for your needs. (Spark doesn't have tuples in the way you described.) Here's the new schema:</p> <pre class="lang-python prettyprint-override"><code>df1.printSchema() #root # |-- Column: string (nullable = false) # |-- Content: array (nullable = true) # | |-- element: struct (containsNull = true) # | | |-- Content: string (nullable = true) # | | |-- count: long (nullable = false) </code></pre>
python|pandas|apache-spark|lambda|pyspark
3
2,852
56,533,560
pandas dataframe with list elements: split, pad
<p>I have a pandas dataframe (NROWS x 1) where each row is a list, such as</p> <pre><code>    y
0   [[aa, bb], 0000001]
1   [[uz, mk], 0000011]
</code></pre> <p>I want to flatten the list and split it into (in this case three) columns like so:</p> <pre><code>    1   2   3
0   aa  bb  0000001
1   uz  mk  0000011
</code></pre> <p>Further, different rows have unequal lengths:</p> <pre><code>    y
0   [[aa, bb], 0000001]
1   [[mk], 0000011]
</code></pre> <p>What I really want to end up with is to detect the max length over all rows and pad the rest with the empty string ''. In this example,</p> <pre><code>    1   2   3
0   aa  bb  0000001
1   ''  mk  0000011
</code></pre> <p>I've toyed around with doing .values.tolist() but it doesn't do what I need.</p> <p><strong>Edit-</strong> the answers below are super neat and much appreciated. I'm editing to include a solution for a similar but simpler problem, for completeness.</p> <p>Read data, use the trim() fn from <a href="https://stackoverflow.com/questions/40950310/strip-trim-all-strings-of-a-dataframe">Strip / trim all strings of a dataframe</a> to make sure there is no left/right whitespace</p> <pre><code>df = pd.read_csv('data.csv',sep=',',dtype=str)
df = trim_all_columns(df)
</code></pre> <p>Keep categorical/nominal ID and CODE columns, remove all NA</p> <pre><code>df.dropna(subset=['dg_cd'] , inplace=True) # drop rows where dg_cd is NaN
df2 = df[['id','dg_cd']]
</code></pre> <p>Turn CODE into sentences by ID, keeping all repeated instances</p> <pre><code>x = df2.groupby('id').apply(lambda x: x['dg_cd'].values.tolist()).apply(pd.Series).replace(np.nan, '', regex=True)
</code></pre> <p>The reason for doing all that is that it feeds into a k-modes cluster search, <a href="https://pypi.org/project/kmodes/" rel="nofollow noreferrer">https://pypi.org/project/kmodes/</a>. NA is not an acceptable input, but empty strings '' allow rows of the same length without introducing spurious similarity. For example,</p> <pre><code>km = KModes(n_clusters=4, init='Cao', n_init=1, verbose=1)
clusters = km.fit_predict( x )
</code></pre>
<h3>Setup</h3> <pre><code>df = pd.DataFrame(dict(y=[ [['aa', 'bb'], '0000001'], [['uz', 'mk'], '0000011'], [['mk'], '0000111'] ])) df y 0 [[aa, bb], 0000001] 1 [[uz, mk], 0000011] 2 [[mk], 0000111] </code></pre> <hr> <h3><code>flatten</code></h3> <p>From <a href="https://stackoverflow.com/a/49641953/2336654">@wim</a></p> <pre><code>def flatten(x): try: it = iter(x) except TypeError: yield x return if isinstance(x, str): yield x return for elem in it: yield from flatten(elem) d = dict(zip(df.index, [dict(enumerate([*flatten(x)][::-1])) for x in df.y])) d = pd.DataFrame.from_dict(d, 'index').fillna('') d.iloc[:, ::-1].rename(columns=lambda x: d.shape[1] - x) 1 2 3 0 aa bb 0000001 1 uz mk 0000011 2 mk 0000111 </code></pre>
python|pandas
4
2,853
26,363,156
Can I classify elements of a df.column and create a column with the output without iteration (Python-Pandas-Np)?
<p>Given this dataframe,</p> <pre><code>A = pd.DataFrame([[1, 5, 2], [2, 4, 4], [3, 3, 1], [4, 2, 2], [5, 1, 4]],
                 columns=['A', 'B', 'C'], index=[1, 2, 3, 4, 5])
</code></pre> <p>I would like to classify the elements of column 'A' according to their size, and create a new column with the output like this:</p> <pre><code>In [26]:
A['Size'] = ""

for index, row in A.iterrows():
    if row['A'] &gt;= 4:
        A.loc[index, 'Size'] = 'Big'
    if 2.5 &lt; row['A'] &lt; 4:
        A.loc[index, 'Size'] = 'Medium'
    if 0 &lt; row['A'] &lt; 2.4:
        A.loc[index, 'Size'] = 'Small'
</code></pre> <p>The output would be:</p> <pre><code>Out[27]:
    A   B   C    Size
1   1   5   2   Small
2   2   4   4   Small
3   3   3   1  Medium
4   4   2   2     Big
5   5   1   4     Big
</code></pre> <p>Imagine that you have a lot of columns and different parameters for the same categories; is there a more efficient way to do this?</p> <p>Thanks</p>
<p>You can use <code>loc</code> as a boolean mask to assign just to the rows that meet the criteria, even for such a small df it is faster, for a larger df it will be significantly faster:</p> <pre><code>In [60]: %%timeit A['Size'] = "" for index, row in A.iterrows(): if row['A'] &gt;= 4: A.loc[index, 'Size'] = 'Big' if 2.5 &lt; row['A'] &lt; 4: A.loc[index, 'Size'] = 'Medium' if 0 &lt; row['A'] &lt; 2.4: A.loc[index, 'Size'] = 'Small' 100 loops, best of 3: 2.31 ms per loop In [62]: %%timeit A.loc[A['A'] &gt;=4, 'Size'] = 'Big' A.loc[(A['A'] &gt;= 2.5) &amp; (A['A'] &lt; 4), 'Size'] = 'Medium' A.loc[A['A'] &lt; 2.4, 'Size'] = 'Small' 100 loops, best of 3: 1.95 ms per loop </code></pre> <p>Additionally you could use 3 <code>np.where</code> conditions which is even faster:</p> <pre><code>In [64]: %%timeit A['Size'] = np.where(A['A'] &lt; 2.4, 'Small', np.where((A['A'] &gt;= 2.5) &amp; (A['A'] &lt; 4), 'Medium', np.where(A['A'] &gt;=4, 'Big',''))) 1000 loops, best of 3: 828 µs per loop </code></pre> <p><strong>Update</strong></p> <p>Interestingly for a 50,000 row dataframe, the <code>loc</code> method outperforms the nested <code>np.where</code> method: I get 4.24 ms versus 12.1 ms.</p>
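<p>As a side note (not part of the timings above), pure threshold binning like this can also be expressed with <code>pd.cut</code>, which scales to many categories without nesting. Note this sketch uses contiguous bin edges, whereas the original conditions leave a small gap between 2.4 and 2.5:</p> <pre><code>A['Size'] = pd.cut(A['A'], bins=[0, 2.5, 4, np.inf], right=False,
                   labels=['Small', 'Medium', 'Big'])
</code></pre>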
python|numpy|pandas|iteration
1
2,854
67,083,987
Lemmatize df column
<p>I am trying to lemmatize content in a df but the function I wrote isn't working. Prior to trying to lemmatize, the data in the column looked like this.</p> <p><a href="https://i.stack.imgur.com/hCfCP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hCfCP.png" alt="enter image description here" /></a></p> <p>Then I ran the following code:</p> <pre><code>import nltk
nltk.download('wordnet')
from nltk.stem import WordNetLemmatizer

# Init the Wordnet Lemmatizer
lemmatizer = WordNetLemmatizer()

def lemmatize_text(text):
    lemmatizer = WordNetLemmatizer()
    return [lemmatizer.lemmatize(w) for w in text]

df['content'] = df[&quot;content&quot;].apply(lemmatize_text)
print(df.content)
</code></pre> <p>Now the content column looks like this:</p> <p><a href="https://i.stack.imgur.com/xiVsl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xiVsl.png" alt="enter image description here" /></a></p> <p>I'm not sure what I did wrong, but I am just trying to lemmatize the data in the content column. Any help would be greatly appreciated.</p>
<p>You are lemmatizing each character instead of each word. Your function should look like this instead:</p> <pre><code>def lemmatize_text(text):
    lemmatizer = WordNetLemmatizer()
    return ' '.join([lemmatizer.lemmatize(w) for w in text.split(' ')])
</code></pre>
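<p>A variant sketch that tokenizes properly instead of splitting on single spaces (assuming the NLTK <code>punkt</code> tokenizer data has been downloaded) would be:</p> <pre><code>from nltk.tokenize import word_tokenize

def lemmatize_text(text):
    lemmatizer = WordNetLemmatizer()
    return ' '.join(lemmatizer.lemmatize(w) for w in word_tokenize(text))
</code></pre>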
python-3.x|pandas|nltk|stemming|lemmatization
1
2,855
67,117,157
Is there a way to make the name of a dataframe a variable that is defined by the user?
<p>The user would enter in the date. I would then like the name of the dataframe to be table_date. I get the following error when I try running the code. SyntaxError: cannot assign to f-string expression</p> <pre><code>date = &quot;199101&quot; data = {'Start Date': ['1', '2', '3'], 'End Date': ['2', '3','33'], 'Days Between': ['3', '3', '33' ], 'Weeks Between': ['7', '8', '4'], 'Months Between': ['.5', '.6', '.2'] } &quot;table_&quot;f&quot;{date}&quot; = pd.DataFrame(data, columns = ['Start Date','End Date','Days Between', 'Weeks Between', 'Months Between']) </code></pre>
<p>To answer your question:</p> <pre><code>&gt;&gt;&gt; globals()[&quot;table_&quot;f&quot;{date}&quot;] = pd.DataFrame(...)
</code></pre> <p>creates the variable <code>table_199101</code> in the global namespace, but I must advise you that this is really not good practice!</p>
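<p>The usual alternative is to keep the frames in a dictionary keyed by the user's input, which avoids touching the global namespace; a minimal sketch:</p> <pre><code>tables = {}
tables[f&quot;table_{date}&quot;] = pd.DataFrame(data)

tables[&quot;table_199101&quot;]  # retrieve it later by name
</code></pre>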
pandas|string|dataframe|variables
0
2,856
66,886,064
How to use Raise a value error when calculating derivatives using numpy
<p>I need to write a function which calculates the following math equation and rounds the answer to 2 decimal places: <code>z = π * e^(x²) / (4y)</code>.</p> <p>There are the following constraints: The input variables x and y are single values (that is, not a list/array). If a division by zero occurs, raise a ValueError. Output should be rounded to 2 decimal places. (hint) You can calculate e^x using the NumPy function np.exp(x).</p> <p>I have done the following but still fail the ValueError tests (ValueError Inputs: [0.5, 0] def test_question_3_ValueError(test_input)):</p> <pre><code>def custom_function(x, y):
    # your code here
    a = (np.pi*np.exp(x**2))
    b = 4*y
    z = a / b
    if b &lt; 0:
        raise ValueError(&quot;Div by zero&quot;)
    return round(z, 2)
</code></pre>
<p>First thing: I believe the division by 0 will only happen if y = 0, since b = 4*y and b can only be 0 if y = 0. Thus, you should change the if statement to check <code>b == 0</code>.</p> <p>Another thing: the if statement should come before calculating the z value, because you want to raise the error before any computation has started, thus stopping everything that follows.</p>
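<p>Putting both points together, a corrected sketch of the function could look like this:</p> <pre><code>import numpy as np

def custom_function(x, y):
    if y == 0:  # b = 4*y is zero exactly when y is zero
        raise ValueError(&quot;Div by zero&quot;)
    z = (np.pi * np.exp(x**2)) / (4 * y)
    return round(z, 2)
</code></pre>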
python|numpy|jupyter
0
2,857
67,138,037
"TextInputSequence must be str” error on Hugging Face Transformers
<p>I'm very new to Hugging Face. I've come across this error "<strong>TextInputSequence must be str</strong>" in a notebook which is helping me a lot to practice with various Hugging Face models. <strong>The boilerplate code in the notebook is throwing this error (I guess) due to some changes in Hugging Face's API</strong> or something. So I was wondering if someone could suggest some changes that I can make to the code to resolve the error.</p> <p><em><strong>The error can easily be reproduced by just running all the cells of the notebook.</strong></em></p> <p><strong>Link</strong>: <a href="https://colab.research.google.com/drive/1K9H753cX0tD0lsoXvyHsDhrTtbnzq1bL?usp=sharing&amp;authuser=1#scrollTo=d5YvAzA5QJER" rel="nofollow noreferrer">Colab Notebook</a></p> <h3>This is the line that is throwing the error:<img src="https://i.stack.imgur.com/SKUT5.png" alt="the code throwing error2" /></h3> <h3>Here is the error:<a href="https://i.stack.imgur.com/miF9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/miF9L.png" alt="error trace" /></a></h3>
<p>This is an issue with the data: it contains <code>None</code> values or other non-string data types where strings are expected.</p>
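<p>A minimal sketch of the kind of cleaning that usually resolves it, assuming the texts sit in a pandas column named <code>text</code> (a hypothetical name; adapt it to the notebook's actual column):</p> <pre><code>df = df[df[&quot;text&quot;].notna()]          # drop None/NaN rows
df[&quot;text&quot;] = df[&quot;text&quot;].astype(str)  # force everything to str before tokenizing
</code></pre>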
deep-learning|nlp|pytorch|huggingface-transformers|huggingface-tokenizers
1
2,858
67,110,549
pybind11 - Identify and remove memory leak in C++ wrapper
<p>I have a simple C++ function that I've attempted to wrap with <code>pybind11</code> (the <code>ehvi3d_sliceupdate</code> function from the <a href="https://moda.liacs.nl/index.php?page=code" rel="nofollow noreferrer">KMAC library</a>). It's deep in a loop and gets called a few hundred thousand to a million times in my Python module. Unfortunately, it seems to be leaking memory (12+GB after ~700k calls) and I'm not sure what the cause might be. The wrapper that I've compiled looks like this:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;pybind11/pybind11.h&gt; #include &lt;pybind11/numpy.h&gt; #include &lt;iostream&gt; #include &quot;helper.h&quot; #include &quot;ehvi_calculations.h&quot; #include &quot;ehvi_sliceupdate.h&quot; namespace py = pybind11; // Copied from main.cc //Checks if p dominates P. Removes points dominated by p from P and return the number of points removed. int checkdominance(deque&lt;individual*&gt; &amp; P, individual* p){ int nr = 0; for (int i=P.size()-1;i&gt;=0;i--){ if (p-&gt;f[0] &gt;= P[i]-&gt;f[0] &amp;&amp; p-&gt;f[1] &gt;= P[i]-&gt;f[1] &amp;&amp; p-&gt;f[2] &gt;= P[i]-&gt;f[2]){ cerr &lt;&lt; &quot;Individual &quot; &lt;&lt; (i+1) &lt;&lt; &quot; is dominated or the same as another point; removing.&quot; &lt;&lt; endl; P.erase(P.begin()+i); nr++; } } return nr; } // Wrap the ehvi3d_sliceupdate function - not sure how to pass straight in double wrap_ehvi3d_sliceupdate(py::array_t&lt;double&gt; y_par, py::array_t&lt;double&gt; ref_point, py::array_t&lt;double&gt; mean_vector, py::array_t&lt;double&gt; std_dev) { deque&lt;individual*&gt; nd_samples; // Get y_par and feed by individual via numpy direct access // https://pybind11.readthedocs.io/en/stable/advanced/pycpp/numpy.html auto yp = y_par.unchecked&lt;2&gt;(); // y_par must have ndim = 2 for (py::ssize_t i = 0; i &lt; yp.shape(0); i++) { individual * tempvidual = new individual; tempvidual-&gt;f[0] = yp(i, 0); tempvidual-&gt;f[1] = yp(i, 1); tempvidual-&gt;f[2] = yp(i, 2); // cerr &lt;&lt; i &lt;&lt; &quot;: &quot; &lt;&lt; yp(i, 0) &lt;&lt; &quot; &quot; &lt;&lt; yp(i, 1) &lt;&lt; &quot; &quot; &lt;&lt; yp(i, 2) &lt;&lt; endl; checkdominance(nd_samples, tempvidual); nd_samples.push_back(tempvidual); } // Marshall ref_point, mean_vector, and std_dev into an array // (might be better ways to do this..) auto rp = ref_point.unchecked&lt;1&gt;(); // ref_point must have ndim = 1, len 3 double r [] = {rp(0), rp(1), rp(2)}; auto mv = mean_vector.unchecked&lt;1&gt;(); // mean_vector must have ndim = 1, len 3 double mu [] = {mv(0), mv(1), mv(2)}; auto sd = std_dev.unchecked&lt;1&gt;(); // std_dev must have ndim = 1, len 3 double s [] = {sd(0), sd(1), sd(2)}; double hvi = ehvi3d_sliceupdate(nd_samples, r, mu, s); return hvi; } PYBIND11_MODULE(kmac, m) { // module docstring m.doc() = &quot;EHVI using KMAC&quot;; // definie EHVI slice update function m.def(&quot;ehvi3d_sliceupdate&quot;, &amp;wrap_ehvi3d_sliceupdate, &quot;O(n^3) slice-update scheme for calculating the EHVI.&quot;); } </code></pre> <p>There's probably a much easier way to wrap this, as I just cobbled together bits I found from the pybind11 docs and here on SO. I'm not that familiar with C++, so I might have committed some other heinous coding errors creating arrays or passing pointers using my limited knowledge. Am I creating something which needs to be cleaned up each time? 
At first, I thought I might need to contain my numpy arrays as in <a href="https://stackoverflow.com/questions/44659924/returning-numpy-arrays-via-pybind11/44682603#44682603">this</a> and <a href="https://stackoverflow.com/questions/45124695/pybind11-return-numpy-array-of-objects">this</a> previous post, but I'm only returning a double, so there's no numpy array to handle on the python side.</p> <hr /> <p><strong>EDIT</strong></p> <p>I've tried changing the <code>tempvidual</code> block to use heap memory (I believe it's called?), as I read that it would clear itself up, by using:</p> <pre class="lang-cpp prettyprint-override"><code> individual tempvidual; tempvidual.f[0] = yp(i, 0); tempvidual.f[1] = yp(i, 1); tempvidual.f[2] = yp(i, 2); checkdominance(nd_samples, &amp;tempvidual); nd_samples.push_back(&amp;tempvidual); </code></pre> <p>and before returning <code>hvi</code> at the end, I tried adding <code>nd_samples.clear();</code> to clear the <code>deque</code> before returning to python, but I'm still getting an increase in memory per call to the wrapper. Is there anything else left to clean up?</p> <hr /> <p><strong>EDIT 2</strong></p> <p>So it turns out part of the problem was the library itself, which leaked around &gt; 4kb per call according to <code>valgrind</code>. Thanks (and big shout out) to the amazing help of @ajum on the <a href="https://gitter.im/pybind/Lobby" rel="nofollow noreferrer">pybind11 gitter</a> who practically walked me through refactoring most of the code to use <code>shared_ptr</code> and <code>make_shared</code> instead of raw pointers fixing all of the leaks in the library. This has also required a small update to the wrapper, see below. Unfortunately, even using the leak-free (I think) library and updated wrapper, I'm getting a report of:</p> <pre><code>==1932812== LEAK SUMMARY: ==1932812== definitely lost: 676 bytes in 1 blocks ==1932812== indirectly lost: 0 bytes in 0 blocks ==1932812== possibly lost: 145,291 bytes in 80 blocks ==1932812== still reachable: 1,725,888 bytes in 1,013 blocks </code></pre> <p>which is less than before, but I don't know what's causing it.</p> <p>Edited sections of the wrapper:</p> <pre><code>// Copied from main.cc //Checks if p dominates P. Removes points dominated by p from P and return the number of points removed. 
int checkdominance(deque&lt;shared_ptr&lt;individual&gt;&gt; &amp; P, shared_ptr&lt;individual&gt; p){ int nr = 0; for (int i=P.size()-1;i&gt;=0;i--){ if (p-&gt;f[0] &gt;= P[i]-&gt;f[0] &amp;&amp; p-&gt;f[1] &gt;= P[i]-&gt;f[1] &amp;&amp; p-&gt;f[2] &gt;= P[i]-&gt;f[2]){ cerr &lt;&lt; &quot;Individual &quot; &lt;&lt; (i+1) &lt;&lt; &quot; is dominated or the same as another point; removing.&quot; &lt;&lt; endl; P.erase(P.begin()+i); nr++; } } return nr; } // Wrap the ehvi3d_sliceupdate function - not sure how to pass straight in double wrap_ehvi3d_sliceupdate(py::array_t&lt;double&gt; y_par, py::array_t&lt;double&gt; ref_point, py::array_t&lt;double&gt; mean_vector, py::array_t&lt;double&gt; std_dev) { // deque&lt;individual*&gt; nd_samples; deque&lt;shared_ptr&lt;individual&gt;&gt; nd_samples; // Get y_par and feed by individual via numpy direct access // https://pybind11.readthedocs.io/en/stable/advanced/pycpp/numpy.html auto yp = y_par.unchecked&lt;2&gt;(); // y_par must have ndim = 2 for (py::ssize_t i = 0; i &lt; yp.shape(0); i++) { auto tempvidual = make_shared&lt;individual&gt;(); // individual * tempvidual = new individual; tempvidual-&gt;f[0] = yp(i, 0); tempvidual-&gt;f[1] = yp(i, 1); tempvidual-&gt;f[2] = yp(i, 2); // cerr &lt;&lt; i &lt;&lt; &quot;: &quot; &lt;&lt; yp(i, 0) &lt;&lt; &quot; &quot; &lt;&lt; yp(i, 1) &lt;&lt; &quot; &quot; &lt;&lt; yp(i, 2) &lt;&lt; endl; // cerr &lt;&lt; i &lt;&lt; &quot;: &quot; &lt;&lt; tempvidual-&gt;f[0] &lt;&lt; &quot; &quot; &lt;&lt; tempvidual-&gt;f[1] &lt;&lt; &quot; &quot; &lt;&lt; tempvidual-&gt;f[2] &lt;&lt; endl; checkdominance(nd_samples, tempvidual); nd_samples.push_back(tempvidual); } // Marshall ref_point, mean_vector, and std_dev into an array // (might be better ways to do this..) auto rp = ref_point.unchecked&lt;1&gt;(); // ref_point must have ndim = 1, len 3 double r [] = {rp(0), rp(1), rp(2)}; auto mv = mean_vector.unchecked&lt;1&gt;(); // mean_vector must have ndim = 1, len 3 double mu [] = {mv(0), mv(1), mv(2)}; auto sd = std_dev.unchecked&lt;1&gt;(); // std_dev must have ndim = 1, len 3 double s [] = {sd(0), sd(1), sd(2)}; double hvi = ehvi3d_sliceupdate(nd_samples, r, mu, s); return hvi; } </code></pre> <p>In the output of running <code>valgrind</code> on the python test script, I couldn't identify what the issue was. 
An excerpt of the output with the <code>definitely lost</code> block looks like this:</p> <pre><code>==1932812== 676 bytes in 1 blocks are definitely lost in loss record 212 of 485
==1932812==    at 0x4C30F0B: malloc (vg_replace_malloc.c:307)
==1932812==    by 0x2D595F: _PyMem_RawWcsdup (obmalloc.c:592)
==1932812==    by 0x166786: _PyCoreConfig_Copy.cold (main.c:2535)
==1932812==    by 0x34C4C7: _Py_InitializeCore (pylifecycle.c:850)
==1932812==    by 0x34CCB3: pymain_init (main.c:3041)
==1932812==    by 0x3503EB: pymain_main (main.c:3063)
==1932812==    by 0x35085B: _Py_UnixMain (main.c:3103)
==1932812==    by 0x5A137B2: (below main) (in /usr/lib64/libc-2.28.so)
==1932812==
==1932812== 688 bytes in 1 blocks are possibly lost in loss record 214 of 485
==1932812==    at 0x4C33419: realloc (vg_replace_malloc.c:834)
==1932812==    by 0x21E8F8: _PyObject_GC_Resize (gcmodule.c:1758)
==1932812==    by 0x2345DA: UnknownInlinedFun (frameobject.c:726)
==1932812==    by 0x2345DA: UnknownInlinedFun (call.c:272)
==1932812==    by 0x2345DA: _PyFunction_FastCallKeywords (call.c:408)
==1932812==    by 0x2979C7: call_function (ceval.c:4616)
==1932812==    by 0x2BE4AB: _PyEval_EvalFrameDefault (ceval.c:3124)
==1932812==    by 0x233E93: UnknownInlinedFun (ceval.c:547)
==1932812==    by 0x233E93: UnknownInlinedFun (call.c:283)
==1932812==    by 0x233E93: _PyFunction_FastCallKeywords (call.c:408)
==1932812==    by 0x2979C7: call_function (ceval.c:4616)
==1932812==    by 0x2BE4AB: _PyEval_EvalFrameDefault (ceval.c:3124)
==1932812==    by 0x233E93: UnknownInlinedFun (ceval.c:547)
==1932812==    by 0x233E93: UnknownInlinedFun (call.c:283)
==1932812==    by 0x233E93: _PyFunction_FastCallKeywords (call.c:408)
==1932812==
==1932812== 1,056 bytes in 2 blocks are possibly lost in loss record 350 of 485
==1932812==    at 0x4C30F0B: malloc (vg_replace_malloc.c:307)
==1932812==    by 0x221130: UnknownInlinedFun (obmalloc.c:520)
==1932812==    by 0x221130: UnknownInlinedFun (obmalloc.c:1584)
==1932812==    by 0x221130: UnknownInlinedFun (obmalloc.c:1576)
==1932812==    by 0x221130: UnknownInlinedFun (obmalloc.c:633)
==1932812==    by 0x221130: UnknownInlinedFun (gcmodule.c:1693)
==1932812==    by 0x221130: UnknownInlinedFun (gcmodule.c:1715)
==1932812==    by 0x221130: _PyObject_GC_NewVar (gcmodule.c:1744)
==1932812==    by 0x2344F2: UnknownInlinedFun (frameobject.c:713)
==1932812==    by 0x2344F2: UnknownInlinedFun (call.c:272)
==1932812==    by 0x2344F2: _PyFunction_FastCallKeywords (call.c:408)
==1932812==    by 0x2979C7: call_function (ceval.c:4616)
==1932812==    by 0x2BE4AB: _PyEval_EvalFrameDefault (ceval.c:3124)
==1932812==    by 0x206EAC: UnknownInlinedFun (ceval.c:547)
==1932812==    by 0x206EAC: UnknownInlinedFun (call.c:283)
==1932812==    by 0x206EAC: _PyFunction_FastCallDict (call.c:322)
==1932812==    by 0x20F1BA: UnknownInlinedFun (call.c:98)
==1932812==    by 0x20F1BA: object_vacall (call.c:1200)
==1932812==    by 0x28E2E6: _PyObject_CallMethodIdObjArgs (call.c:1250)
==1932812==    by 0x1FC4A6: UnknownInlinedFun (import.c:1652)
==1932812==    by 0x1FC4A6: PyImport_ImportModuleLevelObject (import.c:1764)
==1932812==    by 0x2C069F: UnknownInlinedFun (ceval.c:4770)
==1932812==    by 0x2C069F: _PyEval_EvalFrameDefault (ceval.c:2600)
==1932812==    by 0x205AF1: UnknownInlinedFun (ceval.c:547)
==1932812==    by 0x205AF1: _PyEval_EvalCodeWithName (ceval.c:3930)
==1932812==    by 0x206D08: PyEval_EvalCodeEx (ceval.c:3959)
</code></pre> <p>Is this due to pybind11 itself or to the way I called it?</p> <p>P.S. Not sure if it's SO style to add edits or replace the original question with the (long) update. Thanks!</p>
<p>It turns out removing the use of <code>new</code> where possible (and adding a <code>delete</code> where it wasn't) plus replacing all the raw pointers with <code>make_shared</code> and <code>shared_ptr</code> in the base library and the wrapper actually fixed the issue. It seems using these over raw pointers will automatically release the memory once the variable falls out of scope (a knowledgeable C++ user can correct me in the comments.)</p> <p>This is probably basic/obvious for C++ coders, but for non-C++ users/beginners (and for my records, if I forget), the fix was:</p> <pre><code>//Change declarations like these:
// vector&lt;mus*&gt; pdf;
vector&lt;shared_ptr&lt;mus&gt;&gt; pdf;

// mus * tempmus = new mus;
auto tempmus = make_shared&lt;mus&gt;();

// newind = new specialind;
auto newind = make_shared&lt;specialind&gt;();

// deque&lt;specialind*&gt; Px, Py, Pz;
deque&lt;shared_ptr&lt;specialind&gt;&gt; Px, Py, Pz;

// Replace function signatures and headers like this
// int checkdominance(deque&lt;individual*&gt; &amp; P, individual* p);
int checkdominance(deque&lt;shared_ptr&lt;individual&gt;&gt; &amp; P, shared_ptr&lt;individual&gt; p);

// Parts of structs like this
struct specialind{
  // individual *point;
  std::shared_ptr&lt;individual&gt; point;
};

// Couldn't figure out how to change this one to remove new, as it was needed in a later scope...
Pstruct = new thingy[n*n];
// ...
delete [] Pstruct;  // Added this at the end, once the array was no longer needed.
</code></pre> <p>In doing so, I initially got a lot of segmentation faults. I could track down the lines that caused them by using this <a href="https://stackoverflow.com/questions/3718998/fixing-segmentation-faults-in-c">SO post</a>.</p> <p>Although calling <code>valgrind --leak-check=full --track-origins=yes python test.py</code> resulted in the <strong>EDIT 2</strong> leak message, where <code>test.py</code> is just a simple loop (plus input <code>numpy</code> ndarrays):</p> <pre class="lang-py prettyprint-override"><code>while True:
    hvi = kmac.ehvi3d_sliceupdate(dat, ref_point, mean_vector, std_dev)
</code></pre> <p>-- it actually looks like the memory consumption is stable and is not growing anymore. (I'm not sure why there are spurious messages from <code>valgrind</code> but they don't seem to affect the memory noticeably during the run.) Now I can run <code>python test.py</code> for a few minutes and it stays stable at around 15 MB.</p> <p>Thanks to the folks at <code>pybind11</code> and Adam Thompson for taking me through the basics.</p>
python|c++|numpy|pybind11
2
2,859
67,107,199
Is there a function can choose data from a specified csv file if there are conflicts while combining two csv files?
<p>I have two csv files, and I want to combine them into one csv file. Assume that the two csv files are A.csv and B.csv; I already know that there are some conflicts between them. For example, there are two columns, ID and name. In A.csv, ID &quot;12345&quot; has name &quot;Jack&quot;; in B.csv, ID &quot;12345&quot; has name &quot;Tom&quot;. So there are conflicts where the same ID has different names. Now I want to keep ID &quot;12345&quot;, choose the name from A.csv, and abandon the name from B.csv. How can I do that?</p> <p>Here is some code I have tried; it can combine the two csv files but cannot deal with the conflicts, or more precisely, it cannot choose a definite value from A.csv:</p> <pre><code>import pandas as pd
import glob

def merge(csv_list, outputfile):
    for input_file in csv_list:
        f = open(input_file, 'r', encoding='utf-8')
        data = pd.read_csv(f, error_bad_lines=False)
        data.to_csv(outputfile, mode='a', index=False)
    print('Combine Completed')

def distinct(file):
    df = pd.read_csv(file, header=None)
    datalist = df.drop_duplicates()
    datalist.to_csv('result_new_month01.csv', index = False, header = False)
    print('Distinct Completed')

if __name__ == '__main__':
    csv_list = glob.glob('*.csv')
    output_csv_path = 'result.csv'
    print(csv_list)
    merge(csv_list, output_csv_path)
    distinct(output_csv_path)
</code></pre> <p>P.S. English is not my native language. Please excuse my syntax errors.</p>
<p>If you want to keep one DataFrame's values over the other's, <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concatenate</code></a> them and keep only the first duplicate of each ID in the output. This means the preferred values should come from the first element of the sequence you pass to <code>concat</code>, as shown below.</p> <pre><code>df = pd.DataFrame({&quot;ID&quot;: [&quot;12345&quot;, &quot;4567&quot;, &quot;897&quot;],
                   &quot;name&quot;: [&quot;Jack&quot;, &quot;Tom&quot;, &quot;Frank&quot;]})

df1 = pd.DataFrame({&quot;ID&quot;: [&quot;12345&quot;, &quot;333&quot;, &quot;897&quot;],
                    &quot;name&quot;: [&quot;Tom&quot;, &quot;Sam&quot;, &quot;Rob&quot;]})

pd.concat([df, df1]).drop_duplicates(&quot;ID&quot;, keep=&quot;first&quot;).reset_index(drop=True)

      ID   name
0  12345   Jack
1   4567    Tom
2    897  Frank
3    333    Sam
</code></pre>
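<p>Adapted to the files in the question (A.csv listed first so its names win on conflicts; the filenames are as given, the column layout is assumed):</p> <pre><code>import pandas as pd

df_a = pd.read_csv('A.csv')
df_b = pd.read_csv('B.csv')

merged = (pd.concat([df_a, df_b])
            .drop_duplicates('ID', keep='first')   # keeps the A.csv row for any duplicated ID
            .reset_index(drop=True))
merged.to_csv('result.csv', index=False)
</code></pre>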
python|pandas|csv
0
2,860
66,928,773
How to find the relative time between two datetime columns?
<p>I have two columns of format:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: center;">A</th> <th style="text-align: center;">B</th> </tr> </thead> <tbody> <tr> <td style="text-align: center;">31-12-2010:10.06</td> <td style="text-align: center;">05-01-2011:15.12</td> </tr> </tbody> </table> </div> <p>And using Python, I want to obtain the relative time between them (<code>A - B</code>).</p> <p>However, I get the error:</p> <pre><code>TypeError: unsupported operand type(s) for -: 'str' and 'str' </code></pre> <p>I tried to do: <code>int(A) - int(B)</code> but it didn't work either:</p> <pre><code>TypeError: cannot convert the series to &lt;class 'int'&gt; </code></pre> <p>Can anyone please tell me how to do it?</p>
<p>You are getting the error because you are trying to apply the subtraction operator to two objects of <code>str</code> data type. You need to convert them to datetime objects first; only then can you do mathematical operations.</p> <p>You can try this (it runs against the example input given at the bottom):</p> <pre><code>pd.to_datetime(df['a'], format = '%d-%m-%Y:%H.%M')
</code></pre> <p>If you are wondering about %d, %m, etc., they can be understood easily by reading <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">this</a></p> <p>To convert more columns to datetime at once, you can use apply like below:</p> <pre><code>df = df.apply(lambda x: pd.to_datetime(x, format='%d-%m-%Y:%H.%M'))
</code></pre> <p>Once you have these columns as datetime, you can subtract them using:</p> <pre><code>df['a'] - df['b']
</code></pre> <p>If you want the difference in days, you can do this:</p> <pre><code> (df['b'] - df['a']).dt.days
</code></pre> <p><strong>Input data:</strong></p> <pre><code>import pandas as pd
df = pd.DataFrame({&quot;a&quot;: [&quot;31-12-2010:10.06&quot;], &quot;b&quot;: [&quot;05-01-2011:15.12&quot;]})
</code></pre>
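<p>Putting it together on the example data (the outputs shown as comments come from running this; treat the exact formatting as indicative):</p> <pre><code>df = df.apply(lambda x: pd.to_datetime(x, format='%d-%m-%Y:%H.%M'))

# elementwise timedelta between the columns
print(df['b'] - df['a'])            # 0   5 days 05:06:00

# whole days only
print((df['b'] - df['a']).dt.days)  # 0    5
</code></pre>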
python|python-3.x|pandas|datetime
1
2,861
47,387,555
Numpy equivalent of Tensorflow's embedding_lookup function
<p>What would be a NumPy equivalent code to Tensorflow's <code>embedding_lookup</code> function?</p> <p>In particular, what would be the NumPy equivalent of the last line of the following code block?</p> <pre><code>words = tf.placeholder(tf.int64, name='words') ... embedding = tf.nn.embedding_lookup(embedding_params, words[:, i]) </code></pre> <p>I'm not really sure about what <code>embedding_lookup</code> actually does.</p>
<p><img src="https://www.tensorflow.org/images/Gather.png" alt="tf.gather"></p> <p><a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/nn/embedding_lookup" rel="nofollow noreferrer">tf.nn.embedding_lookup</a> works basically like <a href="https://www.tensorflow.org/api_docs/python/tf/gather" rel="nofollow noreferrer">tf.gather</a> (see the picture above). It simply gets the i-th slice of params (where "i" is an element of indices) and concatenates the slices in the order given by indices.</p>
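<p>In NumPy the same gather is just fancy indexing. A minimal sketch (the array names and sizes here are illustrative, not from the question):</p> <pre><code>import numpy as np

embedding_params = np.random.rand(1000, 64)  # vocab_size x embedding_dim
word_ids = np.array([3, 17, 42])

# integer-array indexing gathers the rows given by word_ids,
# exactly like tf.nn.embedding_lookup / tf.gather on axis 0
embeddings = embedding_params[word_ids]      # shape (3, 64)
</code></pre>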
python|numpy|tensorflow
0
2,862
47,272,763
Effective scraping from web into (pandas) DataFrame that preserves the intended format
<p>Goal: scrape a page and convert it to a DataFrame, preserving the intended format (Python 3).</p> <p>The data seems to be in csv format and is located here: '<a href="https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data" rel="nofollow noreferrer">https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data</a>'. I tried three approaches, but they all fail.</p> <p>Approach 1: <code>pandas.read_csv(url)</code> --&gt; the dataframe format is all garbled. E.g.:</p> <pre><code>import pandas as pd
url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'
df = pd.read_csv(url, sep=',')
df.head()
</code></pre> <p>Output:</p> <pre><code>    18.0   8   307.0      130.0      3504.      12.0   70  1   "chevrolet chevelle malibu"
0   15.0   8   350.0      165.0      3693.      11...                                     
1   18.0   8   318.0      150.0      3436.      11...                                     
2   16.0   8   304.0      150.0      3433.      12...                                     
3   17.0   8   302.0      140.0      3449.      10...                                     
4   15.0   8   429.0      198.0      4341.      10...                                     
</code></pre> <p>Approach 2: <code>pandas.read_html</code> --&gt; <code>ValueError: No tables found</code>.</p> <p>Full trace:</p> <pre><code>---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
&lt;ipython-input-44-07c4c7f7c45c&gt; in &lt;module&gt;()
----&gt; 1 df = pd.read_html(url)
      2 df.head(10)

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/html.py in read_html(io, match, flavor, header, index_col, skiprows, attrs, parse_dates, tupleize_cols, thousands, encoding, decimal, converters, na_values, keep_default_na)
    904                   thousands=thousands, attrs=attrs, encoding=encoding,
    905                   decimal=decimal, converters=converters, na_values=na_values,
--&gt; 906                   keep_default_na=keep_default_na)

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/io/html.py in _parse(flavor, io, match, attrs, encoding, **kwargs)
    741             break
    742     else:
--&gt; 743         raise_with_traceback(retained)
    744 
    745     ret = []

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/compat/__init__.py in raise_with_traceback(exc, traceback)
    342         if traceback == Ellipsis:
    343             _, _, traceback = sys.exc_info()
--&gt; 344         raise exc.with_traceback(traceback)
    345 else:
    346     # this version of raise is a syntax error in Python 3

ValueError: No tables found
</code></pre> <p>Approach 3: <code>BeautifulSoup</code> to <code>pandas</code> --&gt; <code>KeyError: 0</code></p> <pre><code>from urllib.request import urlopen
from bs4 import BeautifulSoup

page = urlopen(url)
soup = BeautifulSoup(page, 'html.parser')
</code></pre> <p>Full trace:</p> <pre><code>---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
&lt;ipython-input-45-a2a52b487623&gt; in &lt;module&gt;()
      3 page = urlopen(url)
      4 soup = BeautifulSoup(page, 'html.parser')
----&gt; 5 df = pd.DataFrame(soup)

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/pandas/core/frame.py in __init__(self, data, index, columns, dtype, copy)
    335             else:
    336                 try:
--&gt; 337                     arr = np.array(data, dtype=dtype, copy=copy)
    338                 except (ValueError, TypeError) as e:
    339                     exc = TypeError('DataFrame constructor called with '

/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/bs4/element.py in __getitem__(self, key)
   1009         """tag[key] returns the value of the 'key' attribute for the tag,
   1010         and throws an exception if it's not there."""
-&gt; 1011         return self.attrs[key]
   1012 
   1013     def __iter__(self):

KeyError: 0
</code></pre>
<p>The file is actually fixed-width formatted rather than comma-separated, so <code>pd.read_fwf</code> parses it correctly:</p> <pre><code>In [33]: url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data'

In [34]: df = pd.read_fwf(url, header=None)

In [35]: df
Out[35]: 
        0  1      2      3       4     5   6   7                            8
0    18.0  8  307.0  130.0  3504.0  12.0  70   1  "chevrolet chevelle malibu"
1    15.0  8  350.0  165.0  3693.0  11.5  70   1          "buick skylark 320"
2    18.0  8  318.0  150.0  3436.0  11.0  70   1         "plymouth satellite"
3    16.0  8  304.0  150.0  3433.0  12.0  70   1              "amc rebel sst"
4    17.0  8  302.0  140.0  3449.0  10.5  70   1                "ford torino"
5    15.0  8  429.0  198.0  4341.0  10.0  70   1           "ford galaxie 500"
6    14.0  8  454.0  220.0  4354.0   9.0  70   1           "chevrolet impala"
..    ... ..    ...    ...     ...   ...  ..  ..                          ...
391  36.0  4  135.0  84.00  2370.0  13.0  82   1           "dodge charger 2.2"
392  27.0  4  151.0  90.00  2950.0  17.3  82   1            "chevrolet camaro"
393  27.0  4  140.0  86.00  2790.0  15.6  82   1             "ford mustang gl"
394  44.0  4   97.0  52.00  2130.0  24.6  82   2                   "vw pickup"
395  32.0  4  135.0  84.00  2295.0  11.6  82   1               "dodge rampage"
396  28.0  4  120.0  79.00  2625.0  18.6  82   1                 "ford ranger"
397  31.0  4  119.0  82.00  2720.0  19.4  82   1                  "chevy s-10"

[398 rows x 9 columns]
</code></pre>
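<p>If named columns are wanted, the attribute names from the UCI documentation of this dataset can be supplied (assuming the usual auto-mpg schema):</p> <pre><code>cols = ['mpg', 'cylinders', 'displacement', 'horsepower', 'weight',
        'acceleration', 'model_year', 'origin', 'car_name']

# header=None because the file has no header row
df = pd.read_fwf(url, header=None, names=cols)
</code></pre>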
python-3.x|pandas|web-scraping|beautifulsoup
1
2,863
47,439,234
Merge dataframes without duplicating rows in python pandas
<p>I'd like to combine two dataframes using their similar column 'A':</p> <pre><code>&gt;&gt;&gt; df1 A B 0 I 1 1 I 2 2 II 3 &gt;&gt;&gt; df2 A C 0 I 4 1 II 5 2 III 6 </code></pre> <p>To do so I tried using:</p> <blockquote> <p>merged = pd.merge(df1, df2, on='A', how='outer')</p> </blockquote> <p>Which returned:</p> <pre><code>&gt;&gt;&gt; merged A B C 0 I 1.0 4 1 I 2.0 4 2 II 3.0 5 3 III NaN 6 </code></pre> <p>However, since df2 only contained one value for A == 'I', I do not want this value to be duplicated in the merged dataframe. Instead I would like the following output:</p> <pre><code>&gt;&gt;&gt; merged A B C 0 I 1.0 4 1 I 2.0 NaN 2 II 3.0 5 3 III NaN 6 </code></pre> <p>What is the best way to do this? I am new to python and still slightly confused with all the join/merge/concatenate/append operations.</p>
<p>Let's create a helper column <code>g</code> with <code>cumcount</code>, which numbers the repeated keys within each group so that each (A, g) pair is unique; then merge on the common columns and drop the helper:</p> <pre><code>df1['g']=df1.groupby('A').cumcount()
df2['g']=df2.groupby('A').cumcount()

df1.merge(df2,how='outer').drop('g', axis=1)
Out[62]: 
     A    B    C
0    I  1.0  4.0
1    I  2.0  NaN
2   II  3.0  5.0
3  III  NaN  6.0
</code></pre>
python|pandas|dataframe|merge
8
2,864
47,458,521
Divide matrix into square 2x2 submatrices - maxpooling fprop
<p>I'm trying to implement fprop for a MaxPooling layer in Conv Networks with no overlapping and 2x2 pooling regions. To do so, I need to split my input matrix into matrices of size 2x2 so that I can extract the maximum. I am then creating a mask which I can use later on in <code>bprop</code>. To carry out the splitting, I split my input matrix first vertically and then horizontally, then find the maximum using <code>vsplit</code>, <code>hsplit</code> and <code>amax</code> respectively. However, this keeps crashing with index-out-of-bounds exceptions, and I am not sure where the error is. Is there a simpler way to split the 24 x 24 input matrix into 144 2x2 matrices so that I can obtain the maximum?</p> <p>I am doing the following to do so: </p> <pre><code>for i in range(inputs.shape[0]):
    for j in range(inputs.shape[1]):
        for k in range(inputs.shape[2] // 2):
            for h in range(inputs.shape[3] // 2):
                outputs[i,j,k,h] = np.amax(np.hsplit(np.vsplit(inputs[i,j], inputs.shape[2] // 2)[k], inputs.shape[1] // 2)[h])
                max_ind = np.argmax(np.hsplit(np.vsplit(inputs[i,j], inputs.shape[2] // 2)[k], inputs.shape[1] // 2)[h])
                max_ind_y = max_ind // inputs.shape[2]
                if (max_ind_y == 0):
                    max_ind_x = max_ind
                else:
                    max_ind_x = max_ind % inputs.shape[3]
                self.mask[i,j,max_ind_y + 2 * k, max_ind_x + 2 * h] = outputs[i,j,k,h]
</code></pre> <p>EDIT: </p> <p>This is the output produced by reshape: </p> <p><a href="https://i.stack.imgur.com/zze40.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zze40.png" alt="enter image description here"></a></p> <p>What I would like instead is </p> <pre><code>[0 1 4 5]
[2 3 6 7]
</code></pre> <p>and so on...</p>
<p>This is implemented as <a href="http://scikit-image.org/docs/dev/api/skimage.util.html#skimage.util.view_as_blocks" rel="nofollow noreferrer"><code>view_as_blocks</code></a> in <code>skimage.util</code>:</p> <pre><code>import skimage.util

blocks = skimage.util.view_as_blocks(a, (2, 2))
maxs = blocks.max(axis=(2, 3))
</code></pre>
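<p>A small runnable sketch of the 2x2 pooling, with an illustrative input array:</p> <pre><code>import numpy as np
from skimage.util import view_as_blocks

a = np.arange(16).reshape(4, 4).astype(float)

blocks = view_as_blocks(a, (2, 2))   # shape (2, 2, 2, 2): a grid of 2x2 tiles
maxs = blocks.max(axis=(2, 3))       # max over each tile -&gt; shape (2, 2)

print(maxs)
# [[ 5.  7.]
#  [13. 15.]]
</code></pre>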
python|numpy|matrix|machine-learning|deep-learning
2
2,865
68,263,681
How do I remove unknown, extra data values from a large file?
<p>I am working on a Python, TensorFlow, image classification model, and in my training images, I have 12,611 images, but in my training labels, I have 12,613. (Each image has a number as the title, and this number corresponds to the same number in a CSV file with the accompanying information for that image.)</p> <p>From here, what I need to do is simply remove those 2 extra data points for which I don't have pictures. How can I write code to help with this?</p> <p>(If the code tells me which data points are the extras, I can manually remove them from the CSV file.)</p> <p>Thanks for the help.</p>
<p>Well, it's very straightforward; you can try something like this (as I don't know exactly how and where you have saved your images, you might have to update the code to match your use case):</p> <pre class="lang-py prettyprint-override"><code>import os

import pandas as pd

dir_path = r'/path/to/folder/of/images'
csv_path = r'/path/to/csv/file'

images = []

# Collect the numeric labels of all image files
for filename in os.listdir(dir_path):
    images.append(int(filename.split('.')[0]))

# Read CSV
df = pd.read_csv(csv_path)

# Print which labels are extra
for i in df['&lt;COLUMN_NAME&gt;'].tolist():
    if i not in images:
        print(i)
</code></pre>
python|image|csv|tensorflow|image-classification
0
2,866
68,276,507
Find a subset of columns based on another dataframe?
<p>I'm collecting heart rate data across time for multiple subjects. Different events occur during the course of the data collection, so the start of each event is recorded elsewhere. Each event would have started at a slightly different time for each subject. I would like to bridge the information between the two data frames so that I can know the mean heart rate of the different subjects during each chunk of time marked as an event. How can I get the mean heart rates between certain time points that are marked as events in another data frame? For instance, how can I find the mean heart rate between event 2 and event 3?</p> <pre><code>import pandas as pd
import numpy as np

#example
example_g = [[&quot;4/20/21 4:20&quot;, 302, 0, 1, 2, 3, 4, 5],
        [&quot;2/17/21 9:20&quot;,135, 1, 1.4, 1.8, 2, 8, 10],
        [&quot;2/17/21 9:20&quot;, 111, 4, 5, 5.1, 5.2, 5.3, 5.4]]
example_g_table = pd.DataFrame(example_g,columns=['Date_Time','CID', 0, 1, 2, 3, 4, 5])

#Example Timestamps
example_s = [[&quot;4/20/21 4:20&quot;,302,0, 2, 3],
        [&quot;2/17/21 9:20&quot;,135,0, 1, 4 ],
        [&quot;2/17/21 9:20&quot;,111,3, 4, 5 ]]
example_s_table = pd.DataFrame(example_s,columns=['Date_Time','CID', &quot;event_1&quot;, &quot;event_2&quot;, &quot;event_3&quot;])

desired_outcome = [[&quot;4/20/21 4:20&quot;,302,2.5],
                   [&quot;2/17/21 9:20&quot;,135, 3.3 ],
                   [&quot;2/17/21 9:20&quot;,111, 5.35 ]]
desired_outcome_table = pd.DataFrame(desired_outcome,columns=['Date_Time','CID', &quot;Average of data between Event 2 and Event 3&quot;])
</code></pre>
<p>I was able to put together a function that I think works for this, but it assumes that the columns don't change order or get added. If the df shape changes, this would need to be updated.</p> <p>First, I merged your <code>example_g_table</code> and <code>example_s_table</code> together:</p> <pre><code>df = pd.merge(left=example_g_table,right=example_s_table,on=['Date_Time','CID'],how='left')

      Date_Time  CID  0    1    2    3    4     5  event_1  event_2  event_3
0  4/20/21 4:20  302  0  1.0  2.0  3.0  4.0   5.0        0        2        3
1  2/17/21 9:20  135  1  1.4  1.8  2.0  8.0  10.0        0        1        4
2  2/17/21 9:20  111  4  5.0  5.1  5.2  5.3   5.4        3        4        5
</code></pre> <p>Now we use a new function that pulls out the values of <code>event_2</code> and <code>event_3</code> and returns the average of the columns between them. We will later run <code>df.apply</code> on this, so it will take in just a row at a time, as a series (I think, anyway).</p> <pre><code>def func(df):
    event_2 = df['event_2']
    event_3 = df['event_3']
    start = int(event_2 + 2)  # the time columns 0..5 begin at positional index 2, so event value k maps to position k + 2
    end = int(event_3 + 2)    # same as above
    total = sum(df.iloc[start:end+1]) # this line is the key. It takes the sum of the column values in the range start to end
    avg = total/(end-start+1) # (end-start+1) gets the count of things in our range
    return avg
</code></pre> <p>Last, we run <code>df.apply</code> on this to get our new column.</p> <pre><code>df['avg'] = df.apply(func,axis=1)
df

      Date_Time  CID  0    1    2    3    4     5  event_1  event_2  event_3   avg
0  4/20/21 4:20  302  0  1.0  2.0  3.0  4.0   5.0        0        2        3  2.50
1  2/17/21 9:20  135  1  1.4  1.8  2.0  8.0  10.0        0        1        4  3.30
2  2/17/21 9:20  111  4  5.0  5.1  5.2  5.3   5.4        3        4        5  5.35
</code></pre>
python|pandas
1
2,867
68,231,586
numpy roots() returns false roots
<p>I'm trying to use numpy to find the roots of some polynomials, but I am getting some erroneous results:</p> <pre><code>&gt;&gt; poly = np.polynomial.Polynomial([4.383930e+00, 2.277144e+14, -7.008406e+25, -4.258004e+16]) &gt;&gt; roots = poly.roots() &gt;&gt; roots array([-1.64593692e+09, -1.91391398e-14, 3.26830022e-12]) &gt;&gt; poly(roots) array([-3.74803539e+23, -7.99360578e-15, -1.89182003e-13]) </code></pre> <p>What is up with the false root <code>-1.64593692e+09</code> which results in <code>-3.74803539e+23</code>? This is clearly not a root.</p> <p>Is this the result of floating-point errors? or something else?..</p> <p>And more importantly;</p> <blockquote> <p><strong>Is there a way to get around it?</strong></p> </blockquote> <p>..perhaps something I can tweak, or a different function I can use?. Any help is much appreciated.</p> <p>I found <a href="https://stackoverflow.com/questions/64288070/strange-roots-using-numpy-roots">this</a> and <a href="https://stackoverflow.com/questions/45318988/numpy-throws-incorrect-roots">this</a> previous question which seemed to be related, but after reading them and the answers/comments I don't think that they are the same problem.</p>
<p>The root appears to be real:</p> <pre><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2e9, 1000, 10000)
plt.plot(x, poly(x))
</code></pre> <p><a href="https://i.stack.imgur.com/6K2ss.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6K2ss.png" alt="enter image description here" /></a></p> <p>The problem is that the scale of the data is very large. -3e23 is tiny compared to, say, 6e43. The discrepancy is caused by roundoff error. Third order polynomials have an analytical solution, but it's not going to be numerically stable when your domain is on the order of 1e9.</p> <p>You can try to use the <code>domain</code> and <code>window</code> parameters to attempt to introduce some numerical stability. For example, a common choice of domain is something that envelops your entire dataset. You would have to adjust the coefficients to compensate, since those values are usually used for fitting data.</p>
python|numpy|optimization|linear-algebra
2
2,868
59,226,174
PANDAS - converting a column with lists as values to dummy variables
<p>I'm working with a dataset of airbnb listings. One of the columns is called amenities, and contains all of the amenities that listing has to offer. A few examples:</p> <pre><code>[Internet, Wifi, Paid parking off premises]
[Internet, Wifi, Kitchen]
[Wifi, Smoking allowed, Heating]
</code></pre> <p>I would like to replace this column with several binary columns, one for each kind of amenity. One of them, for example, will be:</p> <pre><code>wifi --&gt; 0,0,0,1,1,0,1,1,0,1,0,1
</code></pre> <p>I found a way to achieve this with for loops:</p> <pre><code>all_amenities = []
for row in amenities:
    all_amenities += row

all_amenities = set(all_amenities)

for col in all_amenities:
    df[col] = 0

for i,amenities_of_listing in enumerate(amenities):
    for amenity in amenities_of_listing:
        df.loc[i,amenity] = 1
</code></pre> <p>but this is taking forever to run - can someone here think of a more efficient way to do this?</p>
<p>I believe you need <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MultiLabelBinarizer.html" rel="nofollow noreferrer"><code>MultiLabelBinarizer</code></a>, which works nicely even for a large <code>DataFrame</code>:</p> <pre><code>print (df)
                                      amenisities
0     [Internet, Wifi, Paid parking off premises]
1                       [Internet, Wifi, Kitchen]
2                [Wifi, Smoking allowed, Heating]

from sklearn.preprocessing import MultiLabelBinarizer

mlb = MultiLabelBinarizer()
df1 = pd.DataFrame(mlb.fit_transform(df['amenisities']),columns=mlb.classes_)
print (df1)
   Heating  Internet  Kitchen  Paid parking off premises  Smoking allowed  \
0        0         1        0                          1                0   
1        0         1        1                          0                0   
2        1         0        0                          0                1   

   Wifi  
0     1  
1     1  
2     1  
</code></pre>
python|pandas|data-processing
2
2,869
59,361,779
Keras model returning AttributeError: 'str' object has no attribute 'ndim'
<p>I am trying to build a simple Keras model but am getting an AttributeError for some unknown reason. All of the datatypes I am feeding to the model are float64. Code is as follows:</p> <p>Defining features and target:</p> <pre><code>X = rated_df[["content_found", "domain_found","title_found", "url_found", "CPC","Competition","number_of_results","search_vol"]]

y = "Position"
</code></pre> <p>Model as follows:</p> <pre><code>from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(12, input_dim=8, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
</code></pre> <p>Then the fitting of the model, which causes the error:</p> <pre><code>model.fit(X, y, epochs=150, batch_size=10)
</code></pre> <p>and the error is</p> <pre><code>AttributeError: 'str' object has no attribute 'ndim'
</code></pre> <p>A picture of the data is below and as mentioned contains all float64 datatypes: </p> <p><a href="https://i.stack.imgur.com/g9RvE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/g9RvE.png" alt="enter image description here"></a></p> <p>If anyone has any advice it would be much appreciated, thanks!</p>
<p>The problem is that you are defining <code>y</code> to be the string <code>"Position"</code> rather than the column itself.</p> <p>You likely want</p> <pre><code>y = rated_df["Position"]
</code></pre>
python|pandas|keras
2
2,870
59,163,503
How to efficiently restrict tensorflow model output?
<p>I have a model, e.g. </p> <pre><code>model = keras.Sequential([
      keras.layers.Reshape(target_shape=(10,10,1),input_shape=(100,)),
      keras.layers.Convolution2DTranspose(1, 3, activation='relu')
])
</code></pre> <p>After it's trained, I would like to compute only a subset of the outputs, e.g. </p> <pre><code>out = model(x)[:,3,5]
</code></pre> <p>Is there a way to do this efficiently so that I'm not computing all of the outputs? Ideally, I'd like to define a new model that takes x and the output indices and only computes them, e.g.</p> <pre><code>out = new_model(x,out_indices)
</code></pre>
<p>You can do the following.</p> <p>This is your first model. Note that I removed the <code>Reshape</code> layer and directly specified the <code>input_shape</code> for the <code>Convolution2DTranspose</code> layer.</p> <pre><code># imports assumed by the snippets below (tf.keras shown here)
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
  layers.Convolution2DTranspose(1, 3, activation='relu', input_shape=(10,10,1))
  ])

model.compile(loss='mean_squared_error', optimizer='adam')
model.summary()
</code></pre> <p>This is probably the bit you're interested in. You get a <code>(None, 12, 12, 1)</code> output from the previous model. Here, you are passing a batch of 4-dimensional indices (one element for each dimension of the previous model's output).</p> <pre><code>inp = layers.Input(shape=(4,), dtype='int32')
out = layers.Lambda(lambda x: tf.gather_nd(model.output, x))(inp)

model2 = models.Model(inputs=[inp, model.input], outputs=out)
model2.compile(loss='mean_squared_error', optimizer='adam')
model2.summary()
</code></pre> <p>Now you can get the values of any indices you pass to the model.</p> <pre><code>x = np.random.normal(size=(1,10, 10, 1))
ind = np.array([[0,0,0,0],[0,1,1,0]], dtype='int32')
y = model2.predict([ind, x])
</code></pre>
tensorflow|keras|sparse-matrix
0
2,871
14,008,307
Use Boost-Python to calculate derivative of function defined in python
<p>I want to write a Boost-Python program to take a symbolic python function from user and evaluate its derivative in my program.</p> <p>For example the User provide a python file (Function.py) which defines a function like F = sin(x)*cos(x).</p> <p>Then I want to have access to F'(x) (derivative of F(x)) using symbolic differentiation ability of Sympy. I don't want to use numerical differentiation.</p> <p>Is there a way to make such a function F'(x) accessible in the C++ using Boost-Python.</p>
<p>Here is some code that should help you get started.</p> <p>main.cpp:</p> <pre><code>#include &lt;boost/python.hpp&gt; #include &lt;iostream&gt; using namespace boost::python; int main(void) { Py_Initialize(); object main_module = import("__main__"); object main_namespace = main_module.attr("__dict__"); exec("from __future__ import division\n" "from sympy import *\n" "x = symbols('x')\n" "f = symbols('f', cls=Function)\n" "f = cos(x) * sin(x)\n" "f1 = lambda u: diff(f).subs(x, u);\n", main_namespace); exec("result = f1(1.0)", main_namespace); double res = extract&lt;double&gt;(main_namespace["result"]); std::cout &lt;&lt; "Out: " &lt;&lt; res &lt;&lt; std::endl; return 0; } </code></pre> <p>Compile command, replace with your path and compiler:</p> <pre><code>$ clang++ -I"/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/Headers/" -L"/usr/local/Cellar/python/2.7.3/Frameworks/Python.framework/Versions/Current/lib/" -lpython2.7 main.cpp </code></pre> <p>It compiles but does not work for me right now. Hope it helped.</p>
c++|python|numpy|boost-python|sympy
4
2,872
44,931,689
How to disable printing reports after each epoch in Keras?
<p>After each epoch I get a printout like the one below:</p> <pre><code>Train on 102 samples, validate on 26 samples
Epoch 1/1
Epoch 00000: val_acc did not improve
102/102 [==============================] - 3s - loss: 0.4934 - acc: 0.8997 - val_loss: 0.4984 - val_acc: 0.9231
</code></pre> <p>I am not using built-in epochs, so I would like to disable these printouts and print something myself.</p> <p>How can I do that?</p> <p>I am using the tensorflow backend, if it matters.</p>
<p>Pass <code>verbose=0</code> to the <code>fit</code> method of your model.</p>
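<p>For example (assuming <code>x_train</code>/<code>y_train</code> are your training data; the other arguments are illustrative):</p> <pre><code>history = model.fit(x_train, y_train,
                    epochs=10,
                    batch_size=32,
                    verbose=0)  # 0 = silent, 1 = progress bar, 2 = one line per epoch
</code></pre>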
python|tensorflow|keras
53
2,873
57,020,855
How to make a pandas column out of filenames?
<p>I have N images. I want to create a pandas DataFrame and put all the image filenames in a column. How do I do it? I need a column with the header "filename", containing values like a.jpg and b.jpg.</p>
<p>Make a list and append the filenames to it:</p> <pre><code>array = []
</code></pre> <p>Then save these filenames into a pandas.DataFrame with:</p> <pre><code>df = pd.DataFrame(array, columns=["filename"])
</code></pre>
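<p>A fuller sketch that collects the filenames with <code>glob</code> (the directory and extension here are placeholders):</p> <pre><code>import glob
import os

import pandas as pd

# gather image paths and keep just the file names
paths = glob.glob('images/*.jpg')
filenames = [os.path.basename(p) for p in paths]

df = pd.DataFrame(filenames, columns=["filename"])
print(df.head())
</code></pre>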
python|pandas|glob
0
2,874
45,889,276
Does SavedModelBundle loader support GCS path as export directory
<p>Currently I am using a saved_model file stored on my local disk to read an inference graph and use it in servers. Unfortunately, giving a GCS path doesn't work for the SavedModelBundle.load API.</p> <p>I tried providing a GCS path for the file, but it did not work.</p> <p>Is this even supported? If not, how can I achieve this using the SavedModelBundle API? I have some production servers running on Google Cloud from which I want to serve some TensorFlow graphs.</p>
<p>A <a href="https://github.com/tensorflow/serving/commit/f8cc9fd0d36ab6830340875d26aa7870369afe9e#diff-bf0c841686ee859a4e04283ff50ea0ac" rel="nofollow noreferrer">recent commit</a> inadvertently broke the ability to load files from GCS. This has been <a href="https://github.com/tensorflow/serving/commit/c6ace3fed3a0ec7cec6b7267cd86b8ed3a034a50" rel="nofollow noreferrer">fixed</a> and is available in github.</p>
machine-learning|tensorflow|google-cloud-platform|google-cloud-storage|tensorflow-serving
2
2,875
23,111,990
Pandas DataFrame stored list as string: How to convert back to list
<p>I have an <em>n</em>-by-<em>m</em> Pandas DataFrame <code>df</code> defined as follows. (I know this is not the best way to do it. It makes sense for what I'm trying to do in my actual code, but that would be TMI for this post so just take my word that this approach works in my particular scenario.)</p> <pre><code>&gt;&gt;&gt; df = DataFrame(columns=['col1'])
&gt;&gt;&gt; df.append(Series([None]), ignore_index=True)
&gt;&gt;&gt; df
Empty DataFrame
Columns: [col1]
Index: []
</code></pre> <p>I stored lists in the cells of this DataFrame as follows.</p> <pre><code>&gt;&gt;&gt; df['col1'][0] = [1.23, 2.34]
&gt;&gt;&gt; df
     col1
0  [1, 2]
</code></pre> <p>For some reason, the DataFrame stored this list as a string instead of a list.</p> <pre><code>&gt;&gt;&gt; df['col1'][0]
'[1.23, 2.34]'
</code></pre> <p>I have 2 questions for you.</p> <ol> <li><strong>Why does the DataFrame store a list as a string and is there a way around this behavior?</strong></li> <li><strong>If not, then is there a Pythonic way to convert this string into a list?</strong></li> </ol> <hr> <p><strong>Update</strong></p> <p>The DataFrame I was using had been saved and loaded from a CSV format. <em>This format, rather than the DataFrame itself, converted the list from a string to a literal.</em></p>
<p>As you pointed out, this can commonly happen when saving and loading pandas DataFrames as <code>.csv</code> files, which is a text format.</p> <p>In your case this happened because list objects have a string representation, allowing them to be stored as <code>.csv</code> files. Loading the <code>.csv</code> will then yield that string representation.</p> <p>If you want to store the actual objects, you should use <code>DataFrame.to_pickle()</code> (note: objects must be picklable!).</p> <p>To answer your second question, you can convert it back with <a href="https://docs.python.org/3.4/library/ast.html#ast.literal_eval" rel="noreferrer"><code>ast.literal_eval</code></a>:</p> <pre><code>&gt;&gt;&gt; from ast import literal_eval &gt;&gt;&gt; literal_eval('[1.23, 2.34]') [1.23, 2.34] </code></pre>
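<p>If a whole column needs converting back after a CSV round trip, the same idea applies per element:</p> <pre><code>from ast import literal_eval

# each cell holds a string like '[1.23, 2.34]'; parse it back into a list
df['col1'] = df['col1'].apply(literal_eval)
</code></pre>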
python|string|list|pandas|dataframe
126
2,876
35,368,645
pandas - change df.index from float64 to unicode or string
<p>I want to change a dataframes' index (rows) from float64 to string or unicode. </p> <p>I thought this would work but apparently not:</p> <pre><code>#check type type(df.index) 'pandas.core.index.Float64Index' #change type to unicode if not isinstance(df.index, unicode): df.index = df.index.astype(unicode) </code></pre> <p>error message:</p> <pre><code>TypeError: Setting &lt;class 'pandas.core.index.Float64Index'&gt; dtype to anything other than float64 or object is not supported </code></pre>
<p>You can do it that way:</p> <pre><code># for Python 2
df.index = df.index.map(unicode) 

# for Python 3 (the unicode type does not exist and is replaced by str)
df.index = df.index.map(str)
</code></pre> <p>As for why you would proceed differently from when you'd convert from int to float, that's a peculiarity of numpy (the library on which pandas is based).</p> <p>Every numpy array has a <em>dtype</em>, which is basically the <strong>machine</strong> type of its elements: in that manner, <strong>numpy deals directly with native types</strong>, not with Python objects, which explains how it is so fast. So when you are changing the dtype from int64 to float64, numpy will cast each element in the C code.</p> <p>There's also a special dtype: <em>object</em>, which will basically provide a pointer toward a Python object.</p> <p>If you want strings, you thus have to use the <em>object</em> dtype. But using <code>.astype(object)</code> would not give you the answer you were looking for: it would instead create an index with <em>object</em> dtype, but put Python float objects inside.</p> <p>Here, by using map, we convert the index to strings with the appropriate function: numpy gets the string objects and understands that the index has to have an <em>object</em> dtype, because that's the only dtype that can accommodate strings.</p>
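<p>A quick check of the conversion (Python 3 shown, with an illustrative frame):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'a': [10, 20]}, index=[0.5, 1.5])
df.index = df.index.map(str)

print(df.index)
# Index(['0.5', '1.5'], dtype='object')
</code></pre>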
python|pandas|indexing|dataframe|rows
128
2,877
28,853,687
cython: create ndarray object without allocating memory for data
<p>In cython, how do I create an ndarray object with defined properties without allocating memory for its contents?</p> <p>My problem is that I want to call a function that requires an ndarray, but my data is in a pure C array. Due to some restrictions I cannot switch to using an ndarray directly.</p> <p>Code segment to illustrate my intention:</p> <pre><code>cdef:
    ndarray[npy_uint64] tmp_buffer
    uint64_t * my_buffer

tmp_buffer = np.empty(my_buffer_size, dtype='uint64')

my_buffer = &lt;uint64_t *&gt; malloc(my_buffer_size * sizeof(uint64_t))
(... do something with my_buffer that cannot be done with a ndarray ...)

tmp_buffer.data = my_buffer
some_func(tmp_buffer)
</code></pre> <p>This seems inefficient since for <code>tmp_buffer</code> memory is allocated and zero-filled which will never be used. How do I avoid this?</p>
<p>Efficiency aside, does this sort of assignment compile?</p> <p><code>np.empty</code> does not zero fill. <code>np.zeros</code> does that, and even that is done 'on the fly'. </p> <p><a href="https://stackoverflow.com/q/27464039">Why the performance difference between numpy.zeros and numpy.zeros_like?</a> explores how <code>empty</code>, <code>zeros</code> and <code>zeros_like</code> are implemented.</p> <hr> <p>I'm just a beginner with <code>cython</code>, but I have to use:</p> <pre><code>tmp_buffer.data = &lt;char *&gt;my_buffer </code></pre> <p>How about going the other way, making <code>my_buffer</code> the allocated <code>data</code> of <code>tmp_buffer</code>?</p> <pre><code>array1 = np.empty(bsize, dtype=int) cdef int *data data = &lt;int *&gt; array1.data for i in range(bsize): data[i] = bsize-data[i] </code></pre> <hr> <p><a href="http://gael-varoquaux.info/programming/cython-example-of-exposing-c-computed-arrays-in-python-without-data-copies.html" rel="nofollow noreferrer">http://gael-varoquaux.info/programming/cython-example-of-exposing-c-computed-arrays-in-python-without-data-copies.html</a> suggests using <code>np.PyArray_SimpleNewFromData</code> to create an array from an existing data buffer. </p> <p>Regarding memoryviews <a href="http://docs.cython.org/src/userguide/memoryviews.html" rel="nofollow noreferrer">http://docs.cython.org/src/userguide/memoryviews.html</a></p>
numpy|cython
3
2,878
50,930,849
Python pandas dataframe with duplicate values
<p>I am looking to index the following pandas dataframe with the following sample values. The dataframe has a lot of duplicates.</p> <pre><code>ID      AccountName
83      CHRISTIAN UNIVERSITY
83      CHRISTIAN UNIVERSITY
83      CHRISTIAN UNIVERSITY
83      CHRISTIAN UNIVERSITY
104     UNIVERSITY
104     UNIVERSITY
1740    ELECTRIC CORPORATIO
1740    ELECTRIC CORPORATIO
1740    ELECTRIC CORPORATIO
1740    ELECTRIC CORPORATIO
...
</code></pre> <p>The resulting dataframe should be the following.</p> <pre><code> ID     index  AccountName
83      1      CHRISTIAN UNIVERSITY
83      1      CHRISTIAN UNIVERSITY
83      1      CHRISTIAN UNIVERSITY
83      1      CHRISTIAN UNIVERSITY
104     2      UNIVERSITY
104     2      UNIVERSITY
1740    3      ELECTRIC CORPORATIO
1740    3      ELECTRIC CORPORATIO
1740    3      ELECTRIC CORPORATIO
1740    3      ELECTRIC CORPORATIO
...
</code></pre> <p>Does anyone have a fast and efficient way of doing this?</p>
<p>Assuming that you want an increasing index for each new ID, I'd do:</p> <pre><code>In [43]: df["number"] = df.ID.rank(method='dense').astype(int) In [44]: df Out[44]: ID AccountName number 0 83 CHRISTIAN UNIVERSITY 1 1 83 CHRISTIAN UNIVERSITY 1 2 83 CHRISTIAN UNIVERSITY 1 3 83 CHRISTIAN UNIVERSITY 1 4 104 UNIVERSITY 2 5 104 UNIVERSITY 2 6 1740 ELECTRIC CORPORATIO 3 7 1740 ELECTRIC CORPORATIO 3 8 1740 ELECTRIC CORPORATIO 3 9 1740 ELECTRIC CORPORATIO 3 </code></pre> <p>which will give the lowest ID the number 1, and the second lowest 2, etc., independent of the order they actually appear in the frame (e.g. if you put ELECTRIC_CORPORATIO second, it'll still get #3 because 1740 is the third number.)</p> <p>There are other ways if you can be guaranteed that your clusters are contiguous, e.g.</p> <pre><code>(~df["ID"].duplicated()).cumsum() </code></pre> <p>but that's much less reliable in general than mapping a unique ID to a unique number, IMHO.</p> <p>Also, I've used "number" here as the column name rather than "index", because that causes confusion between the frame's index and your column named "index".</p>
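<p>Another common idiom, which numbers the IDs in order of first appearance rather than by sorted value, is <code>pd.factorize</code>:</p> <pre><code># factorize codes start at 0, so add 1 to match the numbering above
df["number"] = pd.factorize(df["ID"])[0] + 1
</code></pre>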
python|pandas|dataframe
4
2,879
50,692,749
How to multiply two columns from two different CSV files and return the result in the first CSV file using Pandas
<p>I have two CSV files: the first contains a Price column and the second contains a Quantity column. I am trying to multiply these two columns and save the result in a new column in the first CSV.</p> <p>First.csv</p> <pre><code>Code    Description                                 Unit    Price
110101  STATIONARY BICYCLE INDOOR USE               SET     120.25
110106  TREADMILL EXERCISE MACHINE, ELEC. AC110V    SET     950.22
110107  TREADMILL EXERCISE MACHINE, ELEC. AC220V    SET     1000
110110  EXERCISER ROWING INDOOR USE                 SET     450
110120  BARBELL SET                                 SET     100
</code></pre> <p>Second.csv</p> <pre><code>Code    Quantity
110106  210
110107  220
110110  230
110120  240
110122  250
</code></pre> <p>And the expected output is </p> <p>First.csv</p> <pre><code>Code    Description                                 Unit    Price   Total
110101  STATIONARY BICYCLE INDOOR USE               SET     120.25  25252.5
110106  TREADMILL EXERCISE MACHINE, ELEC. AC110V    SET     150.22  33048.4
110107  TREADMILL EXERCISE MACHINE, ELEC. AC220V    SET     100     23000
110110  EXERCISER ROWING INDOOR USE                 SET     40      9600
110120  BARBELL SET                                 SET     100     25000
</code></pre> <p>So far I am only able to read the files:</p> <pre><code>import pandas as pd

df = pd.read_csv("QuoteCSV.csv", parse_dates=True)
print(df)
df1=pd.read_csv("itemcode.csv",index_col="Price", parse_dates=True)
print(df1)
</code></pre> <p>Updated:</p> <pre><code> import pandas as pd

a = pd.read_csv("itemcode.csv")
b = pd.read_csv("QuoteCSV.csv")
b = b.dropna(axis=1)
merged = a.merge(b, on='Code')
merged.to_csv("result.csv", index=False)
c = pd.read_csv("result.csv")
c['Total'] = c['Price'] * c['Quantity']
</code></pre> <p>But it does not return any result.</p>
<p>Use <code>map</code> to look up each <code>Code</code>'s <code>Quantity</code> from the second frame:</p> <pre><code>First.assign(
    Total=First.Price * First.Code.map(dict(zip(Second.Code, Second.Quantity))))
</code></pre>
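<p>A sketch of how this could look with the files from the question (column names assumed from the samples shown; swap the file names if Price and Quantity live in the other file):</p> <pre><code>import pandas as pd

a = pd.read_csv('itemcode.csv')   # has Code and Price
b = pd.read_csv('QuoteCSV.csv')   # has Code and Quantity

result = a.assign(
    Total=a['Price'] * a['Code'].map(dict(zip(b['Code'], b['Quantity']))))
result.to_csv('result.csv', index=False)
</code></pre>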
python|pandas|csv|multiple-columns|multiplication
0
2,880
50,690,865
Overfitting Issue On a variation of ZF-net
<p>I am training a CNN on the imagenet-2012 dataset, but the model keeps overfitting (validation error rate: top1: 49%, top5: 25%; training error rate: top1: 25%, top5: 8%; trained on a GTX1080ti after 600k training steps, about 5 days). The architecture is based on ZF-net but adds batch norm:</p> <pre><code>    x_input = feed_key['input']
    bn_training = tf.placeholder(dtype=tf.bool, shape=(), name='bn_training')
    with tf.name_scope('ZF_conv1'):
        w_conv1 = tf.get_variable(name='conv1_kernel', shape=[7, 7, 3, 96], dtype=tf.float32)
        h_conv1 = tf.nn.conv2d(x_input, w_conv1, strides=[1, 2, 2, 1], padding='SAME')
    with tf.name_scope('ZF_bn1'):
        bn1 = tf.layers.batch_normalization(h_conv1, training=bn_training)
    with tf.name_scope('ZF_relu1'):
        h_active1 = tf.nn.relu(bn1)
    with tf.name_scope('ZF_pool1'):
        h_pool1 = tf.nn.max_pool(h_active1, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')

    with tf.name_scope('ZF_conv2'):
        w_conv2 = tf.get_variable(name='conv2_kernel', shape=[5, 5, 96, 256], dtype=tf.float32)
        h_conv2 = tf.nn.conv2d(h_pool1, w_conv2, strides=[1, 2, 2, 1], padding='SAME')
    with tf.name_scope('ZF_bn2'):
        bn2 = tf.layers.batch_normalization(h_conv2, training=bn_training)
    with tf.name_scope('ZF_relu2'):
        h_active2 = tf.nn.relu(bn2)
    with tf.name_scope('ZF_pool2'):
        h_pool2 = tf.nn.max_pool(h_active2, ksize=[1, 3, 3, 1], strides=[1, 2, 2, 1], padding='SAME')

    with tf.name_scope('ZF_conv3'):
        w_conv3 = tf.get_variable(name='conv3_kernel', shape=[3, 3, 256, 384], dtype=tf.float32)
        h_conv3 = tf.nn.conv2d(h_pool2, w_conv3, strides=[1, 1, 1, 1], padding='SAME')
    with tf.name_scope('ZF_bn3'):
        bn3 = tf.layers.batch_normalization(h_conv3, training=bn_training)
    with tf.name_scope('ZF_relu3'):
        h_active3 = tf.nn.relu(bn3)

    with tf.name_scope('ZF_conv4'):
        w_conv4 = tf.get_variable(name='conv4_kernel', shape=[3, 3, 384, 384], dtype=tf.float32)
        h_conv4 = tf.nn.conv2d(h_active3, w_conv4, strides=[1, 1, 1, 1], padding='SAME')
    with tf.name_scope('ZF_bn4'):
        bn4 = tf.layers.batch_normalization(h_conv4, training=bn_training)
    with tf.name_scope('ZF_relu4'):
        h_active3 = tf.nn.relu(bn4)

    with tf.name_scope('ZF_conv5'):
        w_conv5 = tf.get_variable(name='conv5_kernel', shape=[3, 3, 384, 256], dtype=tf.float32)
        h_conv5 = tf.nn.conv2d(h_active3, w_conv5, strides=[1, 1, 1, 1], padding='SAME')
    with tf.name_scope('ZF_bn5'):
        bn5 = tf.layers.batch_normalization(h_conv5, training=bn_training)
    with tf.name_scope('ZF_relu5'):
        h_active5 = tf.nn.relu(bn5)
    feed_key['bn_training'] = bn_training
</code></pre> <p>Followed by two FC layers:</p> <pre><code>fc1 = tf.layers.dense(low_out_flat, units=4096, activation=tf.nn.relu, name='zffc1')
keep_prob1 = tf.placeholder(tf.float32, name='keep_prob1')
dropout1 = tf.nn.dropout(fc1, keep_prob1, name='zfdrop1')

fc2 = tf.layers.dense(dropout1, units=4096, activation=tf.nn.relu, name='zffc2')
keep_prob2 = tf.placeholder(tf.float32, name='keep_prob2')
dropout2 = tf.nn.dropout(fc2, keep_prob2, name='zfdrop2')

feed_key['keep_prob1'] = keep_prob1
feed_key['keep_prob2'] = keep_prob2
</code></pre> <p>Finally compute the cross entropy:</p> <pre><code>    gt_labels = tf.placeholder(dtype=tf.int64, shape=[None])
    logits = tf.layers.dense(model.last_layer, units=1000, name='imagenet_logits')
    with tf.name_scope('imagenet_cross_entropy'):
        entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=gt_labels, logits=logits)
    with tf.name_scope('imagenet_loss'):
        loss = tf.reduce_mean(entropy)
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = training_methods.optimizer.minimize(loss=loss, global_step=training_methods.global_step)
</code></pre> <p>For the data preprocessing at training time, I followed the paper: <a href="https://arxiv.org/abs/1311.2901" rel="nofollow noreferrer">Visualizing and Understanding Convolutional Networks</a></p> <blockquote> <p>Each RGB image was preprocessed by resizing the smallest dimension to 256, cropping the center 256x256 region, subtracting the per-pixel mean (across all images) and then using 10 different sub-crops of size 224x224 (corners + center with(out) horizontal flips).</p> </blockquote> <p>(Remark: I calculated the image mean across the whole training dataset by first resizing the images to 224*224, not 256*256, so I subtract the image mean after sub-cropping the image to 224*224. I thought this was not a problem.) When testing, I just resize the image to 224*224 (is that a problem?)</p> <p>Optimizer: Adam with initial learning rate 0.001 and epsilon 0.1; the dropout rate was set to 0.5. Lastly, I use <code>tf.variance_scaling_initializer()</code> to initialize all weights.</p> <p>The ZF-net paper reported a testing error rate of top1: 36.7%, top5: 15.3%, so this is way off from my result, but I can't find where it is wrong.</p>
<p>The thing is, when I evaluated the model I just resized the image to 224*224, which is different from the training data processing (cropping the image to 224*224). So the probability distribution of the validation set was somewhat different from that of the training set (different manifolds). Originally I thought it was not a big deal and the change in distribution would not be dramatic. After I revised that, there was a significant drop in the validation error.</p>
python|tensorflow|machine-learning|deep-learning
0
2,881
66,354,570
Sort image as NP array
<p>I'm trying to sort an image by luminosity using NumPy, which I'm new to. I've managed to create a random image and sort it.</p> <pre><code>def create_image(output, width, height, arr): array = np.zeros([height, width, 3], dtype=np.uint8) numOfSwatches = len(arr) swatchWidth = int(width/ numOfSwatches) for i in range (0, numOfSwatches): m = i * swatchWidth r = (i+1) * swatchWidth array[:, m:r] = arr[i] img = Image.fromarray(array) img.save(output) </code></pre> <p>Which creates this image: <a href="https://i.stack.imgur.com/ewr0M.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ewr0M.png" alt="random colour stripes" /></a></p> <p>So far so good. Only now I want to switch from creating random images to loading them and <em>then</em> sorting them.</p> <pre><code>#!/usr/bin/python3 import numpy as np from PIL import Image # -------------------------------------------------------------- def load_image( infilename ) : img = Image.open( infilename ) img.load() data = np.asarray( img, dtype = &quot;int32&quot; ) return data # -------------------------------------------------------------- def lum (r,g,b): return math.sqrt( .241 * r + .691 * g + .068 * b ) myImageFile = &quot;random_colours.png&quot; imageNP = load_image(myImageFile) imageNP.sort(key=lambda rgb: lum(*rgb) ) </code></pre> <p>The image should look like this: <a href="https://i.stack.imgur.com/p8fVQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p8fVQ.png" alt="stripe image sorted by luminosity" /></a></p> <p>The error I get is <code>TypeError: 'key' is an invalid keyword argument for this function</code> I may have created the NP array incorrectly as it worked when it was a random NP array.</p>
<p>I have never used PIL, but the following approach hopefully works (I'm not sure, as I can't reproduce your exact examples), and of course there might be more efficient ways to do it. I'm using your functions, having changed the <code>math.sqrt</code> function to <code>np.sqrt</code> in the <code>lum</code> function - as it is better for vector calculations. By the way, I believe this won't work with an <code>int32</code> type array (as in your <code>load_image</code> function).</p> <p>The key part is Numpy's <a href="https://numpy.org/devdocs/reference/generated/numpy.argsort.html#numpy-argsort" rel="nofollow noreferrer"><code>argsort</code></a> function (last line), which gives the indices that would sort the given array; this is applied to a row of the luminosity array (exploiting symmetry) and later used as an indexer of <code>img_array</code>.</p> <pre><code># Create random image
np.random.seed(4)
img = create_image('test.png', 75, 75, np.random.random((25,3))*255)

# Convert to Numpy array and calculate luminosity
img_array = np.array(img, dtype = np.uint8)
luminosity = lum(img_array[...,0], img_array[...,1], img_array[...,2])

# Sort by luminosity and convert to image again
img_sorted = Image.fromarray(img_array[:,luminosity[0].argsort()])
</code></pre> <p>The original picture:<br /> <a href="https://i.stack.imgur.com/bI0Pk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bI0Pk.png" alt="The original picture" /></a></p> <p>And the luminosity-sorted one:<br /> <a href="https://i.stack.imgur.com/JiolB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JiolB.png" alt="And the luminosity-sorted one" /></a></p>
python|numpy|colors|python-imaging-library
1
2,882
66,558,347
Binarization using pd.cut
<p>I am a newbie to the world of ML. I am trying to learn preprocessing.</p> <p>I have outcome data with the values 0, 1, 2, 3, 4.</p> <p>0 corresponds to no disease, while 1 to 4 correspond to different types of disease.</p> <p>I wish to binarize them into two groups: 0 for &quot;no disease&quot; and 1-4 for &quot;with disease&quot;.</p> <p>My code:</p> <pre><code>binarize_outcome['Outcome']=pd.cut(outcome_variable['Outcome'], bins=[0,1,4], labels=[&quot;no heart disease&quot;,&quot;heart diseases&quot;])
binarize_outcome
</code></pre> <p>The output:</p> <pre><code>0                   NaN
1      no heart disease
2      no heart disease
3                   NaN
4                   NaN
             ...       
299    no heart disease
300    no heart disease
301    no heart disease
302                 NaN
Outcome 0 NaN 1 heart disease... Name: Outcome, Length: 304, dtype: object
</code></pre> <p>As you can see, it is not the output I am expecting, because my code is labeling the 0s as NaN and the rest are incorrectly labeled.</p> <p>Hope you can help me figure out this part.</p> <p>Thanks in advance, Art</p>
<p>Your condition is binary so you can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.where.html" rel="nofollow noreferrer"><code>np.where</code></a> from <code>numpy</code>:</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; df Type 0 2 1 2 2 3 3 0 4 2 .. ... 95 2 96 4 97 0 98 0 99 1 [100 rows x 2 columns] &gt;&gt;&gt; df[&quot;Outcome&quot;] = np.where(df == 0, &quot;no heart disease&quot;, &quot;heart disease&quot;) &gt;&gt;&gt; df Type Outcome 0 2 heart disease 1 2 heart disease 2 3 heart disease 3 0 no heart disease 4 2 heart disease .. ... ... 95 2 heart disease 96 4 heart disease 97 0 no heart disease 98 0 no heart disease 99 1 heart disease [100 rows x 2 columns] </code></pre> <p>Or with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pd.cut</code></a> from <code>pandas</code>:</p> <pre><code>&gt;&gt;&gt; df[&quot;Outcome&quot;] = pd.cut(df[&quot;Type&quot;], [0, 0.9999999, 4], labels=[&quot;no heart disease&quot;, &quot;heart disease&quot;], include_lowest=True) &gt;&gt;&gt; df Type Outcome 0 2 heart disease 1 2 heart disease 2 3 heart disease 3 0 no heart disease 4 2 heart disease .. ... ... 95 2 heart disease 96 4 heart disease 97 0 no heart disease 98 0 no heart disease 99 1 heart disease [100 rows x 2 columns] </code></pre> <p>Same result with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.IntervalIndex.from_breaks.html" rel="nofollow noreferrer"><code>pd.IntervalIndex.from_breaks</code></a>:</p> <pre><code>&gt;&gt;&gt; interval = pd.IntervalIndex.from_breaks([0, 1, 5], closed=&quot;left&quot;) &gt;&gt;&gt; df[&quot;Outcome&quot;] = pd.cut(df[&quot;Type&quot;], interval, include_lowest=True) \ .cat.rename_categories([&quot;no heart disease&quot;, &quot;heart disease&quot;]) </code></pre>
python|pandas
0
2,883
66,683,077
Python function that evaluates each row by a list
<p>I am using Python to clean address data and standardize abbreviations, etc. so that it can be compared against other address data. I finally have 2 dataframes in Pandas. I would like to compare each row in the first df, named <code>df</code>, against a list of addresses taken from a second df of similar structure, <code>second_df</code>. If the address from <code>df</code> is on the list, then I would like to create a column to note this, maybe a boolean, but best case the string 'found'. I have used <code>isin</code> and it did not work.</p> <p>For example, suppose my data looks like the sample data below. I would like to compare each row in <code>df['concat']</code> to the entire list <code>address_list</code> to see if the address in the <code>df['concat']</code> column appears in the <code>second_df</code> list.</p> <pre><code>read = pd.read_excel('fullfilepath.xlsx')
second_df = pd.read_excel('anotherfilepath.xlsx')

df = read[['column1','column2', 'concat']]
address_list = second_df.concat.tolist()
</code></pre>
<p>EDIT based on <em>tdy</em>'s comment, as my original answer didn't have the value for the False option in the <code>where</code> statement.</p> <p>Try something like this:</p> <pre><code>import numpy as np

df[&quot;isFound&quot;] = np.where(df['concat'].isin(second_df[&quot;concat&quot;]), &quot;found&quot;, &quot;notfound&quot;)
</code></pre> <p>This should be exactly what you need.</p>
python|pandas
0
2,884
66,427,448
Tkinter canvas image bug
<p>I am rewriting my application in oop style and ran into an unexpected problem. The palette image is distorted. This has never happened before.</p> <p><a href="https://i.stack.imgur.com/fCzPN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fCzPN.jpg" alt="enter image description here" /></a></p> <p>The class container</p> <pre><code>class MainApp(Frame): def __init__(self, master, path): self.master = master Frame.__init__(self, self.master) self.configure_main() coloring = Coloring(self.master, path) coloring.grid(row=1, column=1) </code></pre> <p>The class from which the color selector instance is instantiated</p> <pre><code>class Coloring(Frame): def __init__(self, parent, path): self.parent = parent Frame.__init__(self, self.parent) ... self.create_widgets() self.draw_widgets() def change_custom_color(self, *args): try: self.selector_frame.destroy() except AttributeError: pass self.selector_frame = ColorSelector(self.parent.master, args[1], self) self.selector_frame.grid(row=1, column=0) </code></pre> <p>The color selector class</p> <pre><code>class ColorSelector(Frame): def __init__(self, parent, btn_idx, coloring_obj): self.parent = parent Frame.__init__(self, self.parent) self.btn_idx = btn_idx self.palette_img_np = cv2.imread('resources/palette.png') self.palette_img_tk = cv2pil_images(self.palette_img_np) self.coloring_obj = coloring_obj self.create_widgets() self.draw_widgets() def create_widgets(self): self.palette = Canvas(self, width=253, height=253) self.palette.create_image(128, 3, anchor='n', image=self.palette_img_tk) self.palette.create_oval(5, 3, 251, 251, outline='black', width=4) self.cursor_obj_id = self.palette.create_oval(81, 81, 71, 71, fill='green', outline='white') self.palette.bind(&quot;&lt;B1-Motion&gt;&quot;, lambda event, arg=self.btn_idx: self.cursor_move(event, arg)) self.slider_explanation = Label(self, text='Color saturation:') self.enchance_var = IntVar(value=1.0) self.enhance_slider = Scale( self, from_=0.1, to=1.0, orient=HORIZONTAL, command=lambda event, arg=self.btn_idx: self.change_enhance(event, arg), resolution=0.0001, variable=self.enchance_var, length=200 ) self.ok_btn = Button(self, text='OK', command=self.destroy) def draw_widgets(self): self.palette.pack(padx=15) self.slider_explanation.pack() self.enhance_slider.pack() self.ok_btn.pack(pady=10) </code></pre> <p>Screenshot without bug <a href="https://i.stack.imgur.com/d1vHg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d1vHg.jpg" alt="enter image description here" /></a></p> <p>Thank you in advance.</p>
<p>The problem might be with the array conversion or something similar; it is recommended to load and use images with <code>PIL</code> itself, which is much easier. As a workaround, you can save the image with <code>cv2.imwrite()</code> and then open the new file with <code>PIL</code>. Something like:</p> <pre class="lang-py prettyprint-override"><code># All the other processes...
path = 'img1.png'
cv2.imwrite(path, img_array)  # img_array stands for your processed NumPy array
img = PIL.ImageTk.PhotoImage(Image.open(path))
</code></pre> <p>and use <code>img</code> as the image and so on.</p> <p>This might be some bug with your array; it is not reproducible for me and the conversion works perfectly. Anyway, here is a function that I would use:</p> <pre><code>import cv2
from PIL import Image, ImageTk

def cv2pil(array):
    img = cv2.cvtColor(array,cv2.COLOR_BGR2RGB) # Also try COLOR_BGR2RGBA for png?
    pil_img = ImageTk.PhotoImage(Image.fromarray(img))
    return pil_img

img = cv2.imread('capture.png')
pil = cv2pil(img)
</code></pre> <p>Except for the color mode switching I don't see any other distortion.</p>
python|numpy|opencv|tkinter|tkinter-canvas
1
2,885
16,223,483
Forced conversion of non-numeric numpy arrays with NAN replacement
<p>Consider the array</p> <p><code>x = np.array(['1', '2', 'a'])</code></p> <p>Trying to convert to a float array raises an exception</p> <pre><code>x.astype(np.float)
ValueError: could not convert string to float: a
</code></pre> <p>Does numpy provide any efficient way to coerce this into a numeric array, replacing non-numeric values with something like NAN?</p> <p>Alternatively, is there an efficient numpy function equivalent to <code>np.isnan</code>, but which also tests for non-numeric elements like letters?</p>
<p>You can convert an array of strings into an array of floats (with NaNs) using <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer"><code>np.genfromtxt</code></a>:</p> <pre><code>In [83]: np.set_printoptions(precision=3, suppress=True) In [84]: np.genfromtxt(np.array(['1','2','3.14','1e-3','b','nan','inf','-inf'])) Out[84]: array([ 1. , 2. , 3.14 , 0.001, nan, nan, inf, -inf]) </code></pre> <hr> <p>Here is a way to identify "numeric" strings:</p> <pre><code>In [34]: x Out[34]: array(['1', '2', 'a'], dtype='|S1') In [35]: x.astype('unicode') Out[35]: array([u'1', u'2', u'a'], dtype='&lt;U1') In [36]: np.char.isnumeric(x.astype('unicode')) Out[36]: array([ True, True, False], dtype=bool) </code></pre> <p>Note that "numeric" means a unicode that contains only digit characters -- that is, characters that have the Unicode numeric value property. It does <strong>not</strong> include the decimal point. So <code>u'1.3'</code> is not considered "numeric".</p>
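<p>If pandas is available, another common approach is <code>pd.to_numeric</code> with <code>errors='coerce'</code>, which turns anything unparseable into NaN:</p> <pre><code>import numpy as np
import pandas as pd

x = np.array(['1', '2', 'a'])
result = pd.to_numeric(pd.Series(x), errors='coerce').to_numpy()
# array([ 1.,  2., nan])
</code></pre>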
python|numpy|type-conversion|nan|coercion
15
2,886
16,099,488
Elementwise multiplication of several arrays in Python Numpy
<p>Coding some Quantum Mechanics routines, I have discovered a curious behavior of Python's NumPy. When I use NumPy's multiply with more than two arrays, I get faulty results. In the code below, i have to write:</p> <pre><code>f = np.multiply(rowH,colH) A[row][col]=np.sum(np.multiply(f,w)) </code></pre> <p>which produces the correct result. However, my initial formulation was this:</p> <pre><code>A[row][col]=np.sum(np.multiply(rowH, colH, w)) </code></pre> <p>which does not produce an error message, but the wrong result. Where is my fault in thinking that I could give three arrays to numpy's multiply routine?</p> <p>Here is the full code:</p> <pre><code>from numpy.polynomial.hermite import Hermite, hermgauss import numpy as np import matplotlib.pyplot as plt dim = 3 x,w = hermgauss(dim) A = np.zeros((dim, dim)) #build matrix for row in range(0, dim): rowH = Hermite.basis(row)(x) for col in range(0, dim): colH = Hermite.basis(col)(x) #gaussian quadrature in vectorized form f = np.multiply(rowH,colH) A[row][col]=np.sum(np.multiply(f,w)) print(A) </code></pre> <p><strong>::NOTE::</strong> this code only runs with <strong>NumPy 1.7.0</strong> and higher!</p>
<p>Your fault is in not reading <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.multiply.html" rel="noreferrer">the documentation</a>:</p> <blockquote> <p><code>numpy.multiply(x1, x2[, out])</code></p> </blockquote> <p><code>multiply</code> takes exactly two input arrays. The optional third argument is an output array which can be used to store the result. (If it isn't provided, a new array is created and returned.) When you passed three arrays, the third array was overwritten with the product of the first two.</p>
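<p>For completeness, two common ways to multiply more than two arrays elementwise (a small sketch with made-up data):</p> <pre><code>import numpy as np
from functools import reduce

a = np.array([1., 2., 3.])
b = np.array([4., 5., 6.])
w = np.array([7., 8., 9.])

r1 = a * b * w                        # chained operators
r2 = reduce(np.multiply, [a, b, w])   # fold multiply over a list
np.allclose(r1, r2)                   # True
</code></pre> <p>So the line from the question could simply be written as <code>A[row][col] = np.sum(rowH * colH * w)</code>.</p>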
python|numpy|multiplication
17
2,887
57,311,750
Create a dataframe with duplicate entries
<p>I run a SQL query, and then with <code>data = pd.read_sql(query, connection)</code> I get the following table,</p> <pre><code> ID      ITEM   TYPE_USER  Count
 711757  item1  type1      1
 711757  item2  type1      1
 711757  item3  type1      1
 711794  item1  type2      1
 711794  item2  type2      1
 711541  item2  type3      1
 .       .      .          .
 .       .      .          .
</code></pre> <p>But I need to create the following dataframe</p> <pre><code> ID      item1  item2  item3  TYPE_USER
 711757  1      1      1      type1
 711794  1      1      0      type2
 711541  0      1      0      type3
</code></pre> <p>So, my idea was to use</p> <pre><code>data.pivot(index='ID', columns='ITEM', values='Count')
</code></pre> <p>But this gives me the following dataframe</p> <pre><code>   ID      item1  item2  item3
0  711757  1      1      1
1  711794  1      1      0
2  711541  0      1      0
</code></pre> <p>At this point, I don't know how to join the column 'TYPE_USER'; any idea will be appreciated! Thanks!</p>
<pre><code>pd.pivot_table(df, index=['ID','TYPE_USER'], columns='ITEM', values='Count').fillna(0).reset_index() </code></pre> <p>result</p> <pre><code>ITEM ID TYPE_USER item1 item2 item3 0 711541 type3 0.0 1.0 0.0 1 711757 type1 1.0 1.0 1.0 2 711794 type2 1.0 1.0 0.0 </code></pre>
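<p>Two small caveats, in case they matter for your data: <code>pivot_table</code> aggregates with the mean by default, and <code>fillna</code> leaves you with floats. A variant that sums duplicate <code>(ID, ITEM)</code> pairs and keeps integer counts (assuming <code>Count</code> is integral) would be:</p> <pre><code>result = (pd.pivot_table(df, index=['ID', 'TYPE_USER'], columns='ITEM',
                         values='Count', aggfunc='sum', fill_value=0)
            .reset_index())
</code></pre>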
python|pandas|dataframe
2
2,888
57,438,215
Is there a way to set all my GPUs to NOT be XLA so I can train with multiple gpus rather than just one?
<p>I would like to train keras models using multiple GPUs. My understanding is that you cannot currently train multiple gpus using XLA. The issue is I can't figure out how to turn off XLA. Every GPU is listed as an xla gpu.</p> <p>For reference, I am using 3 RTX2070s on the latest Ubuntu desktop. nvidia-smi does indeed show all 3 gpus. </p> <p>I have tried uninstalling and reinstalling <code>tensorflow-gpu</code>. That does not help. </p> <p>from </p> <pre><code>keras.utils.training_utils import multi_gpu_model model = multi_gpu_model(model,gpus=3) </code></pre> <p>ValueError:</p> <pre><code> To call `multi_gpu_model` with `gpus=3`, we expect the following devices to be available: ['/cpu:0', '/gpu:0', '/gpu:1', '/gpu:2']. However this machine only has: ['/cpu:0', '/xla_cpu:0', '/xla_gpu:0', '/xla_gpu:1', '/xla_gpu:2']. Try reducing `gpus`. </code></pre> <p>EDIT: I am using <code>tensorflow-gpu</code> and actually I've just confirmed it isn't even using one gpu. I confirmed this by cranking up the batch size to 10,000 and saw no change to nvidia-smi but I did see changes to the cpu/memory usage via htop. </p> <p>EDIT2: </p> <pre><code>tf.test.gpu_device_name() </code></pre> <p>prints just an empty string</p> <p>whereas</p> <pre><code> from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) prints all of my devices... [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 7781250607362587360 , name: "/device:XLA_CPU:0" device_type: "XLA_CPU" memory_limit: 17179869184 locality { } incarnation: 12317810384332135154 physical_device_desc: "device: XLA_CPU device" , name: "/device:XLA_GPU:0" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 1761593194774305176 physical_device_desc: "device: XLA_GPU device" , name: "/device:XLA_GPU:1" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 11323027499711415341 physical_device_desc: "device: XLA_GPU device" , name: "/device:XLA_GPU:2" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 3573490477127930095 physical_device_desc: "device: XLA_GPU device" ] </code></pre>
<p>I have faced this problem too.</p> <p>Sometimes I fixed it by reinstalling the tensorflow-gpu package:</p> <pre><code>pip uninstall tensorflow-gpu
pip install tensorflow-gpu
</code></pre> <p>However, sometimes these commands didn't work. In that case the following one, surprisingly, did:</p> <pre><code>conda install -c anaconda tensorflow-gpu
</code></pre>
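<p>After reinstalling you can verify that the plain (non-XLA) devices are back. A quick sanity check for TF 1.x, which the <code>multi_gpu_model</code> import suggests you are on:</p> <pre><code>import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_gpu_available())  # should print True
print([d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU'])
# expect /device:GPU:0 .. /device:GPU:2 rather than only XLA_GPU entries
</code></pre>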
tensorflow|keras|gpu|nvidia
0
2,889
43,531,329
Group by timestamp a single CSV file - Pandas
<p>i have a almost endless horizontal csv where the variables are spreaded across the header and i have many repeated timestamps which results in a scenario like this:</p> <pre><code>+------------+------------+------------+------------+ | Timestamp | Variable1 | Variable2 | .... | +------------+------------+------------+------------+ | 2017/02/12 | 20 | | | | 2017/02/13 | 20 | | | | 2017/02/14 | 30 | | | | 2017/02/12 | | 5 | | | 2017/02/13 | | 2 | | | 2017/02/14 | | 10 | | | ... | | | | +------------+------------+------------+------------+ </code></pre> <p>I'm trying to concatenate by the timestamp in order to get a result like this:</p> <pre><code>+------------+------------+------------+------------+ | Timestamp | Variable1 | Variable2 | .... | +------------+------------+------------+------------+ | 2017/02/12 | 20 | 5 | | | 2017/02/13 | 20 | 2 | | | 2017/02/14 | 30 | 10 | | +------------+------------+------------+------------+ </code></pre> <p>Im relatively new in pandas but i feel this can be done with ease with multiple dataframes but im having a little doubt grouping a single dataframe. Can anyone give me a hand? Thank you very much!</p>
<p>You can group by timestamp and sum the values per column:</p> <pre><code>df.groupby('Timestamp')[['Variable1', 'Variable2']].sum().reset_index()
</code></pre> <p>You get</p> <pre><code>    Timestamp  Variable1  Variable2
0  2017/02/12         20          5
1  2017/02/13         20          2
2  2017/02/14         30         10
</code></pre> <p>EDIT: More generic thanks to @piRSquared</p> <pre><code>df.set_index('Timestamp').groupby(level=0).sum().reset_index()
</code></pre>
python-3.x|pandas
4
2,890
43,832,311
How to plot by category over time
<p>I have two columns, categorical and year, that I am trying to plot. I am trying to take the sum total of each categorical per year to create a multi-class time series plot.</p> <pre><code>ax = data[data.categorical=="cat1"]["categorical"].plot(label='cat1') data[data.categorical=="cat2"]["categorical"].plot(ax=ax, label='cat3') data[data.categorical=="cat3"]["categorical"].plot(ax=ax, label='cat3') plt.xlabel("Year") plt.ylabel("Number per category") sns.despine() </code></pre> <p>But am getting an error stating no numeric data to plot. I am looking for something similar to the above, perhaps with <code>data[data.categorical=="cat3"]["categorical"].lambda x : (1 for x in data.categorical)</code> </p> <p>I will use the following lists as examples.</p> <pre><code>categorical = ["cat1","cat1","cat2","cat3","cat2","cat1","cat3","cat2","cat1","cat3","cat3","cat3","cat2","cat1","cat2","cat3","cat2","cat2","cat3","cat1","cat1","cat1","cat3"] year = [2013,2014,2013,2015,2014,2014,2013,2014,2014,2015,2015,2013,2014,2014,2013,2014,2015,2015,2015,2013,2014,2015,2013] </code></pre> <p>My goal is to obtain something similar to the following picture <a href="https://i.stack.imgur.com/ovcQH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ovcQH.png" alt="enter image description here"></a></p>
<p>I'm hesitant to call this a "solution", as it's basically just a summary of basic Pandas functionality, which is explained in the same documentation where you found the time series plot you've placed in your post. But seeing as there's some confusion around <code>groupby</code> and plotting, a demo may help clear things up. </p> <p>We can use two calls to <code>groupby()</code>.<br> The first <code>groupby()</code> gets a count of category appearances per year, using the <code>count</code> aggregation.<br> The second <code>groupby()</code> is used to plot the time series for each category.</p> <p>To start, generate a sample data frame:</p> <pre><code>import pandas as pd categorical = ["cat1","cat1","cat2","cat3","cat2","cat1","cat3","cat2", "cat1","cat3","cat3","cat3","cat2","cat1","cat2","cat3", "cat2","cat2","cat3","cat1","cat1","cat1","cat3"] year = [2013,2014,2013,2015,2014,2014,2013,2014,2014,2015,2015,2013, 2014,2014,2013,2014,2015,2015,2015,2013,2014,2015,2013] df = pd.DataFrame({'categorical':categorical, 'year':year}) categorical year 0 cat1 2013 1 cat1 2014 ... 21 cat1 2015 22 cat3 2013 </code></pre> <p>Now get counts per category, per year:</p> <pre><code># reset_index() gives a column for counting, after groupby uses year and category ctdf = (df.reset_index() .groupby(['year','categorical'], as_index=False) .count() # rename isn't strictly necessary here, it's just for readability .rename(columns={'index':'ct'}) ) year categorical ct 0 2013 cat1 2 1 2013 cat2 2 2 2013 cat3 3 3 2014 cat1 5 4 2014 cat2 3 5 2014 cat3 1 6 2015 cat1 1 7 2015 cat2 2 8 2015 cat3 4 </code></pre> <p>Finally, plot time series for each category, keyed by color:</p> <pre><code>from matplotlib import pyplot as plt fig, ax = plt.subplots() # key gives the group name (i.e. category), data gives the actual values for key, data in ctdf.groupby('categorical'): data.plot(x='year', y='ct', ax=ax, label=key) </code></pre> <p><a href="https://i.stack.imgur.com/akBU7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/akBU7.png" alt="time series plot by category"></a></p>
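<p>As an aside, the two <code>groupby</code> calls can be collapsed into one <code>crosstab</code>, which counts the year/category pairs and plots one line per column (same <code>df</code> as above):</p> <pre><code>counts = pd.crosstab(df['year'], df['categorical'])  # years as rows, categories as columns
ax = counts.plot()                                   # one time series per category
ax.set_ylabel('count')
</code></pre>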
python|pandas|matplotlib
5
2,891
73,056,640
Drop row with bad data in a Pandas DataFrame
<p>I have at least one row with potentially bad data. I'd like to identify rows with bad data and entirely drop the row. Here's a pattern I have observed in a fairly large dataframe - <code>50k x 200</code></p> <pre><code>import pandas as pd df = pd.DataFrame({ 'name': ['12 x st', '0.5555', 'y'], 'val': [1, 0.5555, 2], 'col': ['t', 'u', '0.5555'], 'z': [2500, 2000, 1800] }) </code></pre> <p>The column type is <code>dtype('O')</code>. I'd like to remove the row with <code>0.5555</code>. Note that in columns <code>name</code> and <code>col</code> this is of type <code>str</code>. Not all columns have the bad value, but when it does it's in at least in a few columns.</p> <p>The value is numeric and could be anything of type <code>float</code>.</p> <p>Expected output is to completely drop the row with bad data that clearly doesn't match the format of the column.</p>
<p>If the valid values are only going to be letter characters, you could do something as simple as this filter, which checks if all of the characters in each value are alphabetic.</p> <pre class="lang-py prettyprint-override"><code>df = df[df['name'].str.isalpha()]
</code></pre> <pre><code>  name  val     col     z
2    y  2.0  0.5555  1800
</code></pre> <p>and do the same for column 'col'.</p> <p>However, if there's the chance that a valid value contains <em>both</em> numbers and letters (something like &quot;I bought 5 bananas&quot;), the above wouldn't work, nor would swapping <code>isalpha()</code> for <code>isalnum()</code>, since <code>isalnum()</code> returns True only when every character is alphabetic or numeric, so strings containing spaces or punctuation would be dropped as well.</p> <p>My approach in that situation would be a custom validator function that tries to cast the value to a float and returns False if the cast succeeds.</p> <pre class="lang-py prettyprint-override"><code>def is_string(s):
    try:
        float(s)
    except ValueError:
        return True
    else:
        return False

df = df[df['name'].apply(is_string)]
</code></pre> <pre><code>      name  val     col     z
0  12 x st  1.0       t  2500
2        y  2.0  0.5555  1800
</code></pre> <p>Then, to apply this to all string columns, you could do:</p> <pre class="lang-py prettyprint-override"><code>for col in df.select_dtypes('O'):
    df = df[df[col].apply(is_string)]
</code></pre> <pre><code>      name  val col     z
0  12 x st  1.0   t  2500
</code></pre> <p>A more concise way to express this might be:</p> <pre class="lang-py prettyprint-override"><code>df = df.apply(lambda x: x[x.apply(is_string)] if x.dtypes == 'O' else x, axis=0)
df = df.dropna(how='any', axis=0)
</code></pre>
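<p>If the frame is large, a vectorized variant of the same idea (a sketch, letting <code>pd.to_numeric</code> do the float test per column) would be:</p> <pre><code>obj_cols = df.select_dtypes('O')
# True where a supposedly-text cell parses as a number
bad = obj_cols.apply(lambda s: pd.to_numeric(s, errors='coerce')).notna().any(axis=1)
df_clean = df[~bad]
</code></pre> <p>On the sample data this keeps only the first row, matching the loop above.</p>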
python|pandas
1
2,892
73,012,710
How iterate over a df to know the most frequent item in each month
<p>I have the following Pandas DF:</p> <pre><code>visit_date|house_id
----------+---------
2017-12-27|892815605
2018-01-03|892807836
2018-01-03|892815815
2018-01-03|892812970
2018-01-03|892803143
2018-01-03|892815463
2018-01-03|892816168
2018-01-03|892814475
2018-01-03|892813594
2018-01-03|892813557
2018-01-03|892809834
2018-01-03|892809834
2018-01-03|892803143
2018-01-03|892803143
2018-01-03|892800500
2018-01-03|892806236
2018-01-03|892810789
2018-01-03|892797487
2018-01-03|892815182
2018-01-03|892814514
2018-01-03|892778046
2018-01-03|892809386
2018-01-03|892816048
2018-01-03|892816048
2018-01-03|892816078
2018-01-03|892810643
</code></pre> <p><strong>I need to know the most visited house (house_id) in each month (month).</strong></p> <p>How do I do that? I did a groupby:</p> <pre><code>df_1.groupby(by=['house_id', 'month']).count().reset_index().sort_values(by=['month'], ascending=True, ignore_index=True)
</code></pre> <p>But that didn't give me what I need. So I tried to do it for each month separately:</p> <pre><code>df_1[df_1['month']==1].groupby(by=['house_id']).count().reset_index().sort_values(by=['month'], ascending=True, ignore_index=True).tail(1)
df_1[df_1['month']==2].groupby(by=['house_id']).count().reset_index().sort_values(by=['month'], ascending=True, ignore_index=True).tail(1)
</code></pre> <p>and so on...</p> <p>But I think there is a cleverer way to do that, and I don't know it. Is it possible to iterate? How do I iterate to get the most visited house in each month ({1:'January', ... 12:'December'})? Thanks a lot</p>
<p>Adopted from a <a href="https://stackoverflow.com/questions/35364601/group-by-and-find-top-n-value-counts-pandas">similar answer</a></p> <pre><code>df = pd.DataFrame({'visit_date': ['2017-12-27', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03', '2018-01-03'], 'house_id': [892815605, 892807836, 892815815, 892812970, 892803143, 892815463, 892816168, 892814475, 892813594, 892813557, 892809834, 892809834, 892803143, 892803143, 892800500, 892806236, 892810789, 892797487, 892815182, 892814514, 892778046, 892809386, 892816048, 892816048, 892816078, 892810643]}) df['month'] = pd.to_datetime(df['visit_date']).dt.month df = df.groupby(['month','house_id']).size().groupby(level=0).nlargest(1).droplevel(1).reset_index(name='count') print(df) </code></pre> <p>Output</p> <pre><code> month house_id count 0 1 892803143 3 1 12 892815605 1 </code></pre>
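<p>A slightly shorter alternative that skips the double <code>groupby</code> (same idea: the mode per month, though without the count column):</p> <pre><code>top = (df.groupby('month')['house_id']
         .agg(lambda s: s.value_counts().idxmax())  # most frequent house per month
         .reset_index())
</code></pre>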
python|pandas|loops
1
2,893
72,950,881
Running an external function within Pandas Dataframe to speed up processing loops
<p>Good Day Peeps,</p> <p>I currently have 2 data frames, &quot;Locations&quot; and &quot;Pokestops&quot;, both containing a list of coordinates. The goal with these 2 data frames is to cluster points from &quot;Pokestops&quot; that are within 70m of the points in &quot;Locations&quot;.</p> <p>I have created a &quot;Brute Force&quot; clustering script.</p> <p>The process is as follows:</p> <ol> <li>Calculate which &quot;Pokestops&quot; are within 70m of each point in &quot;Locations&quot;.</li> <li>Add all nearby Pokestops to Locations[&quot;Pokestops&quot;], as a list/array of their index values, e.g. ([0, 4, 22])</li> <li>If no Pokestops are near a point in &quot;Locations&quot;, remove that line from the Locations df</li> </ol> <pre class="lang-py prettyprint-override"><code>for i in range(len(locations)-1, -1, -1):
    array = []
    for f in range(0, len(pokestops)):
        if geopy.distance.geodesic(locations.iloc[i, 2], pokestops.iloc[f, 2]).m &lt;= 70:
            array.append(f)
    if len(array) &lt;= 0:
        locations.drop([i], inplace=True)
    else:
        locations.iat[i, 3] = array

locations[&quot;Length&quot;] = locations[&quot;Pokestops&quot;].map(len)
</code></pre> <p>This results in:</p> <pre class="lang-py prettyprint-override"><code>         Lat       Long                    Coordinates Pokestops  Length
2 -33.916432  18.426188          -33.916432,18.4261883       [1]       1
3 -33.916432  18.426287         -33.916432,18.42628745       [1]       1
4 -33.916432  18.426387          -33.916432,18.4263866       [1]       1
5 -33.916432  18.426486         -33.916432,18.42648575    [0, 1]       2
6 -33.916432  18.426585          -33.916432,18.4265849    [0, 1]       2
7 -33.916432  18.426684  -33.916432,18.426684050000002    [0, 1]       2
</code></pre> <ol start="4"> <li>Sort by most to least number of pokestops within 70m.</li> </ol> <pre class="lang-py prettyprint-override"><code>locations.sort_values(&quot;Length&quot;, ascending=False, inplace=True)
</code></pre> <p>This results in:</p> <pre class="lang-py prettyprint-override"><code>           Lat       Long                             Coordinates     Pokestops  Length
136 -33.915441  18.426585           -33.91544050000003,18.4265849  [1, 2, 3, 4]       4
149 -33.915341  18.426585          -33.915341350000034,18.4265849  [1, 2, 3, 4]       4
110 -33.915639  18.426585          -33.915638800000025,18.4265849  [1, 2, 3, 4]       4
111 -33.915639  18.426684  -33.915638800000025,18.426684050000002  [1, 2, 3, 4]       4
</code></pre> <ol start="5"> <li>Remove all index values listed in Locations[0, &quot;Pokestops&quot;] from all other rows Locations[1:, &quot;Pokestops&quot;]</li> </ol> <pre class="lang-py prettyprint-override"><code>stops = list(locations['Pokestops'])
seen = list(locations.iloc[0, 3])
stops_filtered = [seen]

for xx in stops[1:]:
    xx = [x for x in xx if x not in seen]
    stops_filtered.append(xx)

locations['Pokestops'] = stops_filtered
</code></pre> <p>This results in:</p> <pre class="lang-py prettyprint-override"><code>           Lat       Long                             Coordinates     Pokestops  Length
136 -33.915441  18.426585           -33.91544050000003,18.4265849  [1, 2, 3, 4]       4
149 -33.915341  18.426585          -33.915341350000034,18.4265849            []       4
110 -33.915639  18.426585          -33.915638800000025,18.4265849            []       4
111 -33.915639  18.426684  -33.915638800000025,18.426684050000002            []       4
</code></pre> <ol start="6"> <li>Remove all empty rows in Locations[&quot;Pokestops&quot;]</li> </ol> <pre class="lang-py prettyprint-override"><code>locations = locations[locations['Pokestops'].map(len)&gt;0]
</code></pre> <p>This results in:</p> <pre class="lang-py prettyprint-override"><code>           Lat       Long                             Coordinates     Pokestops  Length
136 -33.915441  18.426585           -33.91544050000003,18.4265849  [1, 2, 3, 4]       4
176 -33.915143  18.426684   -33.91514305000004,18.426684050000002           [5]       3
180 -33.915143  18.427081   -33.91514305000004,18.427080650000004           [5]       3
179 -33.915143  18.426982   -33.91514305000004,18.426981500000004           [5]       3
</code></pre> <ol start="7"> <li>Add Locations[0, &quot;Coordinates&quot;] to an array that can be saved to .txt later, which will form our final list of &quot;Clustered&quot; coordinates.</li> </ol> <pre class="lang-py prettyprint-override"><code>clusters = np.append(clusters, locations.iloc[0 , 0:2])
</code></pre> <p>This results in:</p> <pre class="lang-py prettyprint-override"><code>           Lat       Long                             Coordinates Pokestops  Length
176 -33.915143  18.426684   -33.91514305000004,18.426684050000002       [5]       3
180 -33.915143  18.427081   -33.91514305000004,18.427080650000004       [5]       3
179 -33.915143  18.426982   -33.91514305000004,18.426981500000004       [5]       3
64  -33.916035  18.427180    -33.91603540000001,18.427179800000005      [0]       3
</code></pre> <ol start="8"> <li>Repeat the process from 4-7 until the Locations df is empty.</li> </ol> <p>This all results in an array containing the coordinates of all points from the Locations dataframe that have Pokestops within 70m, sorted from largest to smallest cluster.</p> <p>Now for the actual question.</p> <p>The method I am using in steps 1-3 results in needing to loop a few million times for a small-medium dataset.</p> <p>I believe I can achieve faster times by migrating away from using the &quot;for&quot; loops and allowing Pandas to calculate the distances between the two tables &quot;directly&quot; using the geopy.distance.geodesic function.</p> <p>I am just unsure how to even approach this...</p> <ul> <li>How do I get it to iterate through rows without using a for loop?</li> <li>How do I maintain using my &quot;lists/arrays&quot; in my locations[&quot;Pokestops&quot;] column?</li> <li>Will it even be faster?</li> </ul> <p>I know there is a library called GeoPandas, but this requires conda, and will mean I need to step away from being able to use my arrays/lists in the column Locations[&quot;Pokestops&quot;]. (I also have no knowledge of how to use GeoPandas, to be fair.)</p> <p>I know very broad questions like this are generally shunned, but I am fully self-taught in Python, trying to achieve what is most likely too complicated a script for my level.</p> <p>I've made it this far, I just need this last step to make it more efficient. The script is fully working and provides the required results; it simply takes too long to run due to the nested for loops.</p> <p>Any advice/ideas are greatly appreciated, and keep in mind my knowledge of python/Pandas is somewhat limited and I do not know all the functions/terminology.</p> <h1><strong>EDIT #1:</strong></h1> <p>Thank you @Finn, although this solution has caused me to significantly alter my main body, this is working as intended.</p> <p>With the new matrix, I am filtering everything &gt; 0.07 to be NaN.</p> <pre class="lang-py prettyprint-override"><code>           Lat       Long  Count   0         1    2         3         4
82  -33.904620  18.402612      5 NaN       NaN  NaN  0.052401       NaN
75  -33.904620  18.400183      5 NaN       NaN  NaN       NaN  0.053687
120 -33.903579  18.401224      5 NaN       NaN  NaN       NaN       NaN
68  -33.904967  18.402612      5 NaN  0.044402  NaN  0.015147       NaN
147 -33.902885  18.400877      5 NaN       NaN  NaN       NaN       NaN
89  -33.904273  18.400183      5 NaN       NaN  NaN       NaN       NaN
182 -33.901844  18.398448      4 NaN       NaN  NaN       NaN       NaN
54  -33.905314  18.402612      4 NaN  0.020793  NaN  0.026215       NaN
183 -33.901844  18.398795      4 NaN       NaN  NaN       NaN       NaN
184 -33.901844  18.399142      4 NaN       NaN  NaN       NaN       NaN
</code></pre> <p>The problem I face now is step 5 in my original post.</p> <p>Can you advise how I would go about removing all columns that do NOT contain NaN in the 1st row?</p> <p>The only info I can find is about removing columns if ANY value in any row is not NaN. I have tried every combination of <code>.dropna()</code> I could find online.</p>
<p>The <code>apply</code> function might be helpful. It applies the specified function along an axis of the DataFrame (and you can control the parameters). Check the documentation (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html</a>) for further details.</p> <blockquote> <p>Nested loops become chaotic to maintain once the solution hides in multiple layers. When playing with datasets it is far better to avoid explicit loops and reach for <code>apply</code> or vectorized operations, especially when quick solutions are expected.</p> </blockquote>
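<p>For what it's worth, here is a loop-free sketch of the distance step using a plain NumPy haversine matrix. Haversine only approximates geopy's geodesic distance (the difference is negligible at a 70 m threshold), and the snippet assumes both frames expose <code>Lat</code>/<code>Long</code> columns:</p> <pre><code>import numpy as np

def haversine_matrix(a, b):
    # a: (n, 2) and b: (m, 2) arrays of (lat, lon) in degrees -&gt; (n, m) metres
    a, b = np.radians(a), np.radians(b)
    dlat = a[:, None, 0] - b[None, :, 0]
    dlon = a[:, None, 1] - b[None, :, 1]
    h = np.sin(dlat / 2)**2 + np.cos(a[:, None, 0]) * np.cos(b[None, :, 0]) * np.sin(dlon / 2)**2
    return 2 * 6371000 * np.arcsin(np.sqrt(h))

dist = haversine_matrix(locations[['Lat', 'Long']].to_numpy(),
                        pokestops[['Lat', 'Long']].to_numpy())
locations['Pokestops'] = [list(np.flatnonzero(row &lt;= 70)) for row in dist]
locations = locations[locations['Pokestops'].map(len) &gt; 0]
</code></pre>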
python|arrays|pandas|dataframe|for-loop
0
2,894
73,091,127
Use list items as column seperators pd.read_fwf
<p>I have text files containing tables which I want to put into a dataframe. Per file the column headers are the same, but the width is different depending on the content (because they contain names of different lengths, for example).</p> <p>So far I have managed to get the index of the first character of each header, so I know where the column separations should be. As all files (about 100) have different widths, I want to be able to insert this list in order to create a dataframe with the correct column widths.</p> <p>This is what the first two rows look like:</p> <pre><code>!Column1        !Column2     !Column3       !Column4     !Column5
Company with a  $1,000,000   Yes            Jack, Hank   X
name
Company with.   $2,000       No             Rita, Hannah X
another name
</code></pre> <p>What I tried so far:</p> <p>(1)</p> <p>pandas <code>pd.read_fwf('file.txt', colspecs=[(from, to), ...])</code>. This does work, but with colspecs I have to put in the (from, to) indexes for each column. Not only would this be burdensome manually, but some files have 12 columns while others have 10.</p> <p>(2)</p> <p>pandas <code>pd.read_fwf('file.txt', widths=list)</code>. Here I can easily insert the list with the locations of the column separations, but it does not seem to create a separation at those indexes. I do not exactly understand what it does.</p> <p>Question:</p> <p>I currently have a list of indexes of all the exclamation marks:</p> <p>list = [0, 17, 30, 45, 58]</p> <p>How can I use this list to separate the columns and convert the .txt file into a DataFrame?</p> <p>Any other way to solve this issue is more than welcome!</p>
<p>So what you can do is standardize the spacing with regex.</p> <pre><code>import re
string = &quot;something  something  something more&quot;
results = re.sub(r&quot;(\W+)&quot;, &quot;|&quot;, string)
results
</code></pre> <p>That returns</p> <pre><code>'something|something|something|more'
</code></pre> <p>If you have standardized the delimiters, you can load the file with fwf or just read_csv. Be aware, though, that <code>\W+</code> also matches characters such as <code>$</code>, <code>,</code> and <code>.</code> inside your fields, so for data like yours the fixed-width approach below is safer.</p> <h2>EDIT</h2> <p>In order to derive the spans of the header fields that are delimited with an exclamation mark <code>!</code> you can use the <code>re</code> library too. The logic of the pattern is that each sequence has to start with <code>!</code> and is then followed by any number of non-<code>!</code> characters; the next group inherently starts at the next <code>!</code>. In code it would look something like this:</p> <pre><code>example_txt = &quot;&quot;&quot;!Column1        !Column2     !Column3       !Column4     !Column5
Company with a  $1,000,000   Yes            Jack, Hank   X
name
Company with.   $2,000       No             Rita, Hannah X
another name&quot;&quot;&quot;

first_line = example_txt.split(&quot;\n&quot;)[0]

import re
indexes = []
p = re.compile(r&quot;![^!]*&quot;)
for m in p.finditer(first_line):
    indexes.append(m.span())
print(indexes)
</code></pre> <p>Which returns</p> <pre><code>[(0, 16), (16, 29), (29, 44), (44, 57), (57, 83)]
</code></pre> <p>This should bring you close to what you need for the <code>fwf</code> method of pandas. Note that indexing in Python starts at <code>0</code> and that the end index of a span is exclusive. So if you index with <code>[0:16]</code> you get the 0th to 15th elements (not including the 16th), returning 16 elements in total. The spans can therefore be applied directly.</p>
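<p>Those spans can then be fed straight into <code>read_fwf</code>. A short sketch, assuming every data row lines up with the header's field widths:</p> <pre><code>import pandas as pd

df = pd.read_fwf('file.txt', colspecs=indexes)        # indexes from the snippet above
df.columns = [c.lstrip('!').strip() for c in df.columns]
</code></pre>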
python|pandas|dataframe|text|read.fwf
1
2,895
70,404,478
Get column index of max value in pandas row
<p>I want to find not just the max value in a dataframe row, but also the specific column that has that value. If there are multiple columns with the value, then either returning the list of all columns, or just one, are both fine.</p> <p>In this case, I'm specifically concerned with doing this for a single given row, but if there is a solution that can apply to a dataframe, that would be great as well.</p> <p>Below is a rough idea of what I mean. <code>row.max()</code> returns the max value, but my desired function <code>row.max_col()</code> returns the name of the column that has the max value.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import pandas as pd &gt;&gt;&gt; df = pd.DataFrame({&quot;A&quot;: [1,2,3], &quot;B&quot;: [4,5,6]}) &gt;&gt;&gt; row = df.iloc[0] &gt;&gt;&gt; row.max() 4 &gt;&gt;&gt; row.max_col() Index(['B'], dtype='object') </code></pre> <p>My current approach is this:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; row.index[row.eq(row.max())] Index(['B'], dtype='object') </code></pre> <p>I'm not familiar with how pandas optimizes everything so I apologize if I'm wrong here, but I assume that <code>row.index[row.eq(...)]</code> grows in linear time proportional to the number of columns. I'm working with a small number of columns, so it shouldn't be a huge issue, but I'm curious if there is a way to get the column name the same way that I can use <code>.max()</code> without having to do the extra work afterwards to look for equal values.</p>
<p>Assume that the source DataFrame contains:</p> <pre><code> A B 0 1 4 1 7 5 2 3 6 3 9 8 </code></pre> <p>Then, to find the column name holding the max value in <strong>each</strong> row (not only row <em>0</em>), run:</p> <pre><code>result = df.apply('idxmax', axis=1) </code></pre> <p>The result is:</p> <pre><code>0 B 1 A 2 B 3 A dtype: object </code></pre> <p>But if you want to get the <strong>integer</strong> index of the column holding the max value, change the above code to:</p> <pre><code>result = df.columns.get_indexer(df.apply('idxmax', axis=1)) </code></pre> <p>This time the result is:</p> <pre><code>array([1, 0, 1, 0], dtype=int64) </code></pre>
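<p>And for the single-row case from the question, <code>idxmax</code> works directly on the Series:</p> <pre><code>row = df.iloc[0]
row.idxmax()                      # 'B' (the column label of the max)
df.columns.get_loc(row.idxmax())  # 1  (its integer position)
</code></pre> <p>This avoids the <code>row.eq(row.max())</code> scan, though it returns only the first of several tied columns.</p>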
python|pandas|dataframe
1
2,896
70,472,524
Finding The Row Of A Pandas Dataframe When Searching With A Variable In a Column PROBLEM
<p>I have a csv file that I imported to a pandas df. Let's say it is something like this</p> <pre><code>#     A      B  C   D
# 0  foo    one  0   0
# 1  bar    one  1   2
# 2  foo    two  2   4
# 3  bar  three  3   6
# 4  foo    two  4   8
# 5  bar    two  5  10
# 6  foo    one  6  12
# 7  foo  three  7  14
</code></pre> <p>When I use &quot;foo&quot; to search there is no problem and the statement below works as expected.</p> <pre><code>print(df.loc[df['A'] == 'foo'])
</code></pre> <p>BUT when I use a variable, e.g. <code>variablex</code>, instead of &quot;foo&quot; to search in column A, it doesn't return anything.</p> <pre><code>print(df.loc[df['A'] == variablex])
print(df.loc[df['A'] == 'variablex'])
</code></pre> <p>How can I solve this problem? Thanks a lot for your help.</p>
<p>You mean like this?:</p> <pre><code>df = pd.DataFrame([ (&quot;foo&quot;,&quot;one&quot;,0,0), (&quot;bar&quot;,&quot;one&quot;,1,2), (&quot;foo&quot;,&quot;two&quot;,2,4), (&quot;bar&quot;,&quot;three&quot;,3,6), (&quot;foo&quot;,&quot;two&quot;,4,8), (&quot;bar&quot;,&quot;two&quot;,5,10), (&quot;foo&quot;,&quot;one&quot;,6,12), (&quot;foo&quot;,&quot;three&quot;,7,14), ], columns = [&quot;A&quot;,&quot;B&quot;,&quot;C&quot;,&quot;D&quot;]) variablex = &quot;foo&quot; print(df.loc[df[&quot;A&quot;] == variablex]) </code></pre> <p>Output:</p> <pre><code> A B C D 0 foo one 0 0 2 foo two 2 4 4 foo two 4 8 6 foo one 6 12 7 foo three 7 14 </code></pre> <p>I'm not sure I see what the problem is, so not sure if we've answered your question.</p>
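<p>If the comparison with a variable still returns nothing, the usual culprit is stray whitespace or case differences in the CSV values. A hedged normalization, assuming column <code>A</code> holds strings:</p> <pre><code>mask = df['A'].str.strip().str.casefold() == variablex.strip().casefold()
print(df.loc[mask])
</code></pre> <p>Also note that <code>'variablex'</code> in quotes, as in the second line of the question, searches for that literal text, not for the variable's value.</p>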
python|pandas|dataframe
1
2,897
70,683,832
Get one elements inside <tb> with Python
<p>I'm new to Python and I'm trying to make a web scraper to get the name and the ip of Minecraft servers.</p> <p>The problem is that I was able to get the value of the <code>&lt;td&gt;</code>, but the ip of the server, for example, is in a div inside the <code>&lt;td&gt;</code>. I'm using pandas and lxml.html.</p> <p>example:</p> <pre><code>&lt;tr&gt;
  &lt;td class=&quot;server-rank visible-sm visible-md visible-lg&quot;&gt;
    &lt;p&gt;&lt;a href=&quot;#1.akumamc.net&quot;&gt;&lt;span class=&quot;badge&quot;&gt;#1&lt;/span&gt;&lt;/a&gt;&lt;/p&gt;
  &lt;/td&gt;
  &lt;td class=&quot;server-name&quot; align=&quot;center&quot;&gt;
    &lt;div class=&quot;server-ip input-group&quot;&gt;
      &lt;p&gt; this is de ip of the server &lt;p&gt;
      -I WANT TO GET HERE-
    &lt;/div&gt;
  &lt;/td&gt;
&lt;/tr&gt;
</code></pre> <p>I don't know how to get to the div inside the td. I have this script that I took from a page; it works perfectly for the other things, but not for getting to the inside.</p> <pre><code>from numpy import tile
import requests
import lxml.html as lh
import pandas as pd
import re

#https://www.servidoresminecraft.info/1.8/
url='https://topminecraftservers.org/version/1.8.8'
#Create a handle, page, to handle the contents of the website
page = requests.get(url)
#Store the contents of the website under doc
doc = lh.fromstring(page.content)
#Parse data that are stored between &lt;tr&gt;..&lt;/tr&gt; of HTML
tr_elements = doc.xpath('//tr')
#Check the length of the first rows
[len(T) for T in tr_elements[:5]]
tr_elements = doc.xpath('//tr')
#Create empty list
col=[]
i=0
#For each row, store each first element (header) and an empty list
for t in tr_elements[0]:
    i+=1
    name=t.text_content()
    print ('%d:&quot;%s&quot;'%(i,name))
    col.append((name,[]))

#Since our first row is the header, data is stored on the second row onwards
for j in range(1,len(tr_elements)):
    #T is our j'th row
    T=tr_elements[j]

    #If row is not of size 3, the //tr data is not from our table
    if len(T)!=3:
        break

    #i is the index of our column
    i=0

    #Iterate through each element of the row
    for t in T.iterchildren():
        data=t.text_content()
        #Check if row is empty
        if i&gt;0:
            #Convert any numerical value to integers
            try:
                if i==2 and j == 1:
                    print(2)
                data=int(data)
            except:
                pass
        #Append the data to the empty list of the i'th column
        col[i][1].append(data)
        #Increment i for the next column
        i+=1

[len(C) for (title,C) in col]
Dict={title:column for (title,column) in col}
df=pd.DataFrame(Dict)
print(df.head())
</code></pre> <p>I just want to get an output that shows a table with the name of the server and the ip</p> <pre><code>Name      ip
server1   xxx.xxx.x.x
server2   xxx.xxx.x.x
</code></pre> <p>Any help??</p>
<p>If I understand you correctly, this should get you what you're looking for:</p> <pre><code>servers = [] cols = [&quot;Name&quot;, &quot;ip&quot;] for s in doc.xpath(&quot;//td[@class='server-name']&quot;): s_ip = s.xpath(&quot;.//div[@class='server-ip input-group']//span[@class='form-control text-justify']/text()&quot;)[0] s_name = s.xpath('.//h4/a/span/text()')[0] servers.append([s_name,s_ip]) pd.DataFrame(servers, columns = cols) </code></pre> <p>Output:</p> <pre><code> Name ip 0 AkumaMC akumamc.net 1 BattleAsya 1.8-1.16 play.battleasya.com 2 Caraotacraft network PRISON caraotacraft.top 3 FlameSquad 87.121.54.214:25568 4 LunixCraft lunixcraft.dk </code></pre> <p>etc.</p>
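<p>If some rows lack an IP or name node, the bare <code>[0]</code> will raise an <code>IndexError</code>; a slightly more defensive variant of the loop body:</p> <pre><code>for s in doc.xpath(&quot;//td[@class='server-name']&quot;):
    ip_nodes = s.xpath(&quot;.//div[@class='server-ip input-group']//span[@class='form-control text-justify']/text()&quot;)
    name_nodes = s.xpath('.//h4/a/span/text()')
    if ip_nodes and name_nodes:
        servers.append([name_nodes[0].strip(), ip_nodes[0].strip()])
</code></pre>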
python|html|pandas|web-scraping
1
2,898
42,816,124
What is the relationship between steps and epochs in TensorFlow?
<p>I am going through TensorFlow <a href="https://www.tensorflow.org/get_started/get_started" rel="nofollow noreferrer">get started tutorial</a>. In the <code>tf.contrib.learn</code> example, these are two lines of code:</p> <pre><code>input_fn = tf.contrib.learn.io.numpy_input_fn({"x":x}, y, batch_size=4, num_epochs=1000) estimator.fit(input_fn=input_fn, steps=1000) </code></pre> <p>I am wondering what is the difference between argument <code>steps</code> in the call to <code>fit</code> function and <code>num_epochs</code> in the <code>numpy_input_fn</code> call. Shouldn't there be just one argument? How are they connected? </p> <p>I have found that code is somehow taking the <code>min</code> of these two as the number of steps in the toy example of the tutorial.</p> <p>At least, one of the two parameters either <code>num_epochs</code> or <code>steps</code> has to be redundant. We can calculate one from the other. Is there a way I can know how many steps (number of times parameters get updated) my algorithm actually took? </p> <p>I am curious about which one takes precedence. And does it depend on some other parameters?</p>
<p><strong>TL;DR</strong>: An epoch is when your model goes through your whole training data once. A step is when your model trains on a single batch (or a single sample if you send samples one by one). Training for 5 epochs on 1000 samples with 10 samples per batch will take 500 steps.</p> <p>The <code>contrib.learn.io</code> module is not documented very well, but it seems that the <code>numpy_input_fn()</code> function takes some numpy arrays and batches them together as input for a classifier. So, the number of epochs probably means "how many times to go through the input data I have before stopping". In this case, they feed two arrays of length 4 in 4-element batches, so it will just mean that the input function will do this at most 1000 times before raising an "out of data" exception. The steps argument in the estimator <code>fit()</code> function is how many times the estimator should run the training loop. This particular example is somewhat perverse, so let me make up another one to make things a bit clearer (hopefully).</p> <p>Let's say you have two numpy arrays (samples and labels) that you want to train on. They are 100 elements each. You want your training to take batches with 10 samples per batch. So after 10 batches you will go through all of your training data. That is one epoch. If you set your input generator to 10 epochs, it will go through your training set 10 times before stopping, that is, it will generate at most 100 batches.</p> <p>Again, the io module is not documented, but considering how other input-related APIs in tensorflow work, it should be possible to make it generate data for an unlimited number of epochs, so the only thing controlling the length of training is going to be the steps. This gives you some extra flexibility on how you want your training to progress. You can go a number of epochs at a time or a number of steps at a time or both or whatever.</p>
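<p>The arithmetic in numbers, for the example above:</p> <pre><code>num_samples = 100
batch_size = 10
num_epochs = 10

steps_per_epoch = num_samples // batch_size  # 10 batches per pass over the data
max_steps = steps_per_epoch * num_epochs     # 100 batches in total

# fit(..., steps=30) would stop after 3 epochs; steps=200 would hit the
# input function's 100-batch limit first and stop at step 100.
</code></pre>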
tensorflow
44
2,899
27,127,539
numpy loading file error
<p>I tried to load <em>.npy</em> file created by <em>numpy</em>:</p> <pre><code>import numpy as np F = np.load('file.npy') </code></pre> <p>And <em>numpy</em> raises this error:</p> <blockquote> <p>C:\Miniconda3\lib\site-packages\numpy\lib\npyio.py in load(file, mmap_mode)</p> <pre><code>379 N = len(format.MAGIC_PREFIX) 380 magic = fid.read(N) </code></pre> <p>--> 381 fid.seek(-N, 1) # back-up</p> <pre><code>382 if magic.startswith(_ZIP_PREFIX): 383 # zip-file (assume .npz) </code></pre> <p>OSError: [Errno 22] Invalid argument</p> </blockquote> <p>Could anyone explain me what its mean? How can I recover my file?</p>
<p>You are using a file object that does not support the <code>seek</code> method. Note that the <code>file</code> parameter of <code>numpy.load</code> <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.load.html" rel="nofollow">must support the <code>seek</code> method</a>. My guess is that you are perhaps operating on a file object that corresponds to another file object that has already been opened elsewhere and remains open:</p> <pre><code>&gt;&gt;&gt; f = open('test.npy', 'wb') # file remains open after this line &gt;&gt;&gt; np.load('test.npy') # numpy now wants to use the same file # but cannot apply `seek` to the file opened elsewhere Traceback (most recent call last): File "&lt;pyshell#114&gt;", line 1, in &lt;module&gt; np.load('test.npy') File "C:\Python27\lib\site-packages\numpy\lib\npyio.py", line 370, in load fid.seek(-N, 1) # back-up IOError: [Errno 22] Invalid argument </code></pre> <p>Note that I receive the same error as you did. If you have an open file object, you will want to close it before using <code>np.load</code> and before you use <code>np.save</code> to save your file object.</p>
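<p>So the fix is simply to make sure the earlier handle is closed (for example via a context manager) before loading:</p> <pre><code>import numpy as np

with open('test.npy', 'wb') as f:   # closed automatically on exit
    np.save(f, np.arange(3))

data = np.load('test.npy')          # seek now works fine
</code></pre>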
python|numpy
2