| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
6,700
| 72,869,211
|
Mapping set in dataframe column to another dataframe / dict
|
<p>I have a newbie question - what I want to do is concatenate 2 columns holding names separated by commas and create a set (to drop duplicates). Then, for each dataframe row, I would have a set of names, and for each of these names I want to look something up in a dict / df called 'countries'.</p>
<pre><code>countries = pd.DataFrame(
[["LOW", ["A", "D"]],
["MEDIUM", ["B", "E"]],
["HIGH", ["C", "F"]],], columns=['Risk_level', 'Country_code'])
my_df = pd.DataFrame([['A', 'B,A'],['B', 'C,C']], columns=['id_1', 'id_2'])
</code></pre>
<p>What I tried is:</p>
<pre><code>my_df["country_codes"] = (my_df["id_2"] + "," + my_df["id_1"]).astype('str')
my_df["country_codes"] = my_df["country_codes"].str.split(",").apply(set)
my_df['country_risks'] = my_df['country_codes'].map(
countries.explode('Country_code').set_index('Country_code')['Risk_level'])
</code></pre>
<p>What I want to get is to have for each row a list of risks, so:</p>
<pre><code>my_df['country_risks'] = pd.Series([["LOW", "MEDIUM"], ["MEDIUM", "HIGH"]])
| id_1 | id_2 | country_risk |
|------|------|----------------|
| A | B,A | Low, Medium |
| B | C,C | Medium, High |
</code></pre>
<p>I know it's a mess but I tried to explain it as clearly as I could. I'm stuck in the middle on how to map each element of the set to the dict/dataframe for each row. I could iterate over the df but I'd rather not. Any thoughts?</p>
|
<p>Try this:</p>
<pre><code>import pandas as pd
countries = pd.DataFrame(
[["LOW", ["A", "D"]],
["MEDIUM", ["B", "E"]],
["HIGH", ["C", "F"]],], columns=['Risk_level', 'Country_code'])
my_df = pd.DataFrame([['A', 'B,A'],['B', 'C,C']], columns=['id_1', 'id_2'])
my_df["country_codes"] = (my_df["id_2"] + "," + my_df["id_1"]).astype('str')
my_df["country_codes"] = my_df["country_codes"].str.split(",").apply(set)
dfe=my_df.explode('country_codes')
dfe['country_risk'] = dfe['country_codes'].map(countries.explode('Country_code').set_index('Country_code')['Risk_level'])
my_df['country_risk'] = dfe.groupby(level=0)['country_risk'].agg(list)
my_df
</code></pre>
<p>Output:</p>
<pre><code> id_1 id_2 country_codes country_risk
0 A B,A {A, B} [LOW, MEDIUM]
1 B C,C {B, C} [MEDIUM, HIGH]
</code></pre>
<p>You can explode <code>my_df</code>, map the codes, group by the original index to aggregate the risks into lists, and then assign the result back to the original dataframe.</p>
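<p>For completeness, a minimal alternative sketch (my own, not part of the answer above) that avoids exploding <code>my_df</code> by building a flat lookup Series once:</p>
<pre><code>code_to_risk = countries.explode('Country_code').set_index('Country_code')['Risk_level']
my_df['country_risk'] = my_df['country_codes'].apply(lambda codes: [code_to_risk[c] for c in codes])
</code></pre>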
|
python|pandas|dataframe
| 0
|
6,701
| 10,346,336
|
List of lists into numpy array
|
<p>How do I convert a simple list of lists into a numpy array? The rows are individual sublists and each row contains the elements in the sublist.</p>
|
<p>If your list of lists contains lists with varying numbers of elements, then the answer of Ignacio Vazquez-Abrams will not work. Instead, there are at least 3 options:</p>
<p>1) Make an array of arrays:</p>
<pre><code>x=[[1,2],[1,2,3],[1]]
y=numpy.array([numpy.array(xi) for xi in x])
type(y)
>>><type 'numpy.ndarray'>
type(y[0])
>>><type 'numpy.ndarray'>
</code></pre>
<p>2) Make an array of lists:</p>
<pre><code>x=[[1,2],[1,2,3],[1]]
y=numpy.array(x)
type(y)
>>><type 'numpy.ndarray'>
type(y[0])
>>><type 'list'>
</code></pre>
<p>3) First make the lists equal in length:</p>
<pre><code>x=[[1,2],[1,2,3],[1]]
length = max(map(len, x))
y=numpy.array([xi+[None]*(length-len(xi)) for xi in x])
y
>>>array([[1, 2, None],
>>> [1, 2, 3],
>>> [1, None, None]], dtype=object)
</code></pre>
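<p>A note for newer NumPy versions: since NumPy 1.24, options 1 and 2 raise a <code>ValueError</code> for ragged input unless you pass <code>dtype=object</code> explicitly:</p>
<pre><code>x=[[1,2],[1,2,3],[1]]
y=numpy.array([numpy.array(xi) for xi in x], dtype=object)  # explicit dtype required on NumPy >= 1.24
</code></pre>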
|
python|list|numpy
| 271
|
6,702
| 3,692,401
|
creating a masked array from text fields
|
<p>The <a href="http://docs.scipy.org/doc/numpy/reference/maskedarray.generic.html" rel="nofollow noreferrer">numpy documentation</a> shows an example of masking existing values with <code>ma.masked</code> a posteriori (after array creation), or creating a masked array from a list of what seem to be valid data types (integers if <code>dtype=int</code>). I am trying to read in data from a file (which requires some text manipulation), but at some point I will have a list of lists (or tuples) containing strings from which I want to make a numeric (float) array. </p>
<p>An example of the data might be <code>textdata='1\t2\t3\n4\t\t6'</code> (typical flat text format after cleaning).</p>
<p>One problem I have is that missing values may be encoded as '', which, when I try to convert to float using the dtype argument, gives me </p>
<pre><code>ValueError: setting an array element with a sequence.
</code></pre>
<p>So I've created this function</p>
<pre><code>def makemaskedarray(X,missing='',fillvalue='-999.',dtype=float):
arr = lambda x: x==missing and fillvalue or x
mask = lambda x: x==missing and 1 or 0
triple = dict(zip(('data','mask','dtype'),
zip(*[(map(arr,x),map(mask,x)) for x in X])+
[dtype]))
return ma.array(**triple)
</code></pre>
<p>which seems to do the trick:</p>
<pre><code>>>> makemaskedarray([('1','2','3'),('4','','6')])
masked_array(data =
[[1.0 2.0 3.0]
[4.0 -- 6.0]],
mask =
[[False False False]
[False True False]],
fill_value = 1e+20)
</code></pre>
<p>Is this the way to do it? Or is there a built-in function?</p>
|
<p>The way you're doing it is fine. (though you could definitely make it a bit more readable by avoiding building the temporary "<code>triple</code>" dict, just to expand it a step later, i.m.o.)</p>
<p>The built-in way is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.genfromtxt.html" rel="nofollow noreferrer"><code>numpy.genfromtxt</code></a>. Depending on the amount of pre-processing you need to do to your text file, it may or may not do what you need. However, as a basic example: (Using StringIO to simulate a file...)</p>
<pre><code>from StringIO import StringIO
import numpy as np
txt_data = """
1\t2\t3
4\t\t6
7\t8\t9"""
infile = StringIO(txt_data)
data = np.genfromtxt(infile, usemask=True, delimiter='\t')
</code></pre>
<p>Which yields:</p>
<pre><code>masked_array(data =
[[1.0 2.0 3.0]
[4.0 -- 6.0]
[7.0 8.0 9.0]],
mask =
[[False False False]
[False True False]
[False False False]],
fill_value = 1e+20)
</code></pre>
<p>One word of caution: If you do use tabs as your delimiter and an empty string as your missing value marker, you'll have issues with missing values at the start of a line. (<code>genfromtxt</code> essentially calls <code>line.strip().split(delimiter)</code>). You'd be better off using something like <code>"xxx"</code> as a marker for missing values, if you can.</p>
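<p>For reference, a sketch of the same approach in Python 3, where <code>io.StringIO</code> replaces <code>StringIO.StringIO</code>:</p>
<pre><code>from io import StringIO
import numpy as np

txt_data = "1\t2\t3\n4\t\t6\n7\t8\t9"
data = np.genfromtxt(StringIO(txt_data), usemask=True, delimiter='\t')
</code></pre>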
|
python|numpy|scipy
| 1
|
6,703
| 70,733,352
|
Python: Pandas dataframe and for loop - separate row variable outside of loop body
|
<p>I have some table data (based on some pandas dataframe) in following form:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Index</th>
<th style="text-align: center;">Name</th>
<th style="text-align: center;">Region 1</th>
<th style="text-align: center;">...</th>
<th style="text-align: right;">Region n</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">Index data</td>
<td style="text-align: center;">Name data</td>
<td style="text-align: center;">Region 1 data</td>
<td style="text-align: center;">Region 1 data</td>
<td style="text-align: right;">Region 1 data</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to loop through the data rows and separate, for each row, the data of column <strong>Name</strong> into some string variable and the data of the columns <strong>Region i</strong> for all 1≤i≤n into some kind of array or list.</p>
<p>The way I know is as follows:</p>
<pre><code>for index, row in data.iterrows():
name = row.values[0]
regions = row.filter(regex = '^Region').values
body of loop
</code></pre>
<p>In the body of the for loop I never need the variable <strong>row</strong> again, only name and regions. So for me the code feels a little bit overloaded.</p>
<p>My question now is:</p>
<p>Is there some way to make this a little simpler, maybe some for loop of the kind:</p>
<pre><code>for index, name, regions in data():
body of loop
</code></pre>
|
<p>First of all, when using pandas it is better to avoid for-loops as much as possible: pandas methods are faster, and there is one for almost anything you would otherwise do with a for loop.</p>
<p>For your case, you can define what you want to do in a function and pass it to the <code>apply()</code> method of pandas data frames. For example:</p>
<pre><code>def body_for_loop(row):
name = row["Name"]
regions = row.filter(regex = '^Region').values
# body of loop
</code></pre>
<p>Now when you want to use it, you will just call:</p>
<pre><code>df.apply(body_for_loop, axis=1)
</code></pre>
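<p>If you do still want an explicit loop in exactly the shape you describe, a minimal sketch (assuming the <code>Name</code> and <code>Region</code>-prefixed column names shown in the question) is:</p>
<pre><code>region_cols = [c for c in df.columns if c.startswith('Region')]
for index, name, regions in zip(df.index, df['Name'], df[region_cols].to_numpy()):
    ...  # body of loop
</code></pre>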
|
python|pandas|dataframe|for-loop
| 1
|
6,704
| 70,390,824
|
Create a new column based on a condition
|
<p>I have a dataframe df and need to create a new column, which is the product of <code>price</code> with <code>metric</code> (an int calculated before).</p>
<pre><code>df['cost'] = df['price'] * metric if (df['status'] == 'online')
df['cost'] = 0 if df['status'] == 'offline'
</code></pre>
|
<p>We can leverage the point that <code>True</code> is 1 and <code>False</code> is 0 when used in multiplication.</p>
<pre><code>3 * True -> 3
3 * False -> 0
</code></pre>
<p>We have to check if values are equal to <code>online</code> in the <code>status</code> column.</p>
<pre><code>df['cost'] = df['price'] * df['status'].eq('online') * metric
</code></pre>
<p>Wherever <code>status</code> is <code>offline</code>, the <code>cost</code> value is 0.</p>
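<p>An equivalent sketch using <code>numpy.where</code>, if you prefer an explicit condition/value pair:</p>
<pre><code>import numpy as np
df['cost'] = np.where(df['status'].eq('online'), df['price'] * metric, 0)
</code></pre>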
<hr />
<p>The above solution relies on the fact that you want to set <code>offline</code> values to 0. If you want to set <code>offline</code> values to, say, <code>999</code>, then we can use <code>Series.where</code> here.</p>
<pre><code>df['cost'] = df['price'].mul(metric).where(df['status'].eq('online'), 999)
</code></pre>
<p>Now, every <code>offline</code> value is set to <code>999</code>.</p>
<p>Useful links:</p>
<ul>
<li><a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.where.html" rel="nofollow noreferrer"><code>Series.where</code></a></li>
<li><a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>Series.eq</code></a></li>
<li><a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.mul.html" rel="nofollow noreferrer"><code>Series.mul</code></a></li>
<li><a href="https://stackoverflow.com/q/13332162/12416453">Multiplying boolean with float</a></li>
</ul>
|
python|pandas|dataframe
| 5
|
6,705
| 70,515,648
|
Reading and saving 12bit Raw bayer image using OpenCV / Python
|
<p>I'm trying to read and save a 12bit Raw file using Python and openCV. The code I'm using saves an image but the saved image is garbled / distorted.</p>
<p>The image is from a FLIR Oryx Camera 23MP (5320x4600) 12Bit with BayerRG12P pixelformat, which should be RG GB bayer pattern.</p>
<pre><code>import cv2
import numpy as np
width = 5320
height = 4600
with open("aaa12packed.Raw", "rb") as rawimg:
img = np.fromfile(rawimg, np.dtype('u1'), width * height).reshape(height, width)
colimg = cv2.cvtColor(img, cv2.COLOR_BAYER_GR2RGB)
cv2.imwrite("test.jpeg", colimg)
</code></pre>
<p>I uploaded two raw files that I'm using to test the debayer/demoisaic. You can download them using the link below:</p>
<p>"RG12P.Raw" (this is the 12bit regular) and "RG12packed.Raw" (this is the 12bit packed)</p>
<p><a href="https://snaprecordings.wetransfer.com/downloads/128f9692472c4f5b9e00553c67d8396820211229054950/dce2f1513758ad319f4df46910d57f5e20211229054950/0b26b1?utm_campaign=WT_email_tracking&utm_content=general&utm_medium=download_button&utm_source=notify_recipient_email" rel="nofollow noreferrer">Raw files download</a></p>
<p><strong>My end goal</strong> is to use openCV to process folders containing RAW image sequences and process/convert them to another high resolution format like DPX 10bit or equivalent.</p>
<p>I'm very new to this - any help is appreciated.</p>
<p><strong>EDIT#1: Adding screenshot with information from FLIR manual about the 12bits formats used.</strong></p>
<p><a href="https://i.stack.imgur.com/6lyEz.png" rel="nofollow noreferrer">Link to 12 bit channel image formats (packed and regular)</a></p>
|
<p>We may reuse my answer from the following <a href="https://stackoverflow.com/questions/68039594/find-bayer-pattern-format-from-a-byte-file">post</a>.</p>
<p>OpenCV doesn't support DPX 10bit image format.<br />
You may use Tiff format or 16 bits PNG, but you may need to scale the pixels by 16 (placing the 12 bits in the upper part of the 16 bits).</p>
<hr />
<p>Code sample:</p>
<pre><code>import cv2
import numpy as np
width = 5320
height = 4600
with open("RG12P.Raw", "rb") as rawimg:
# Read the packed 12bits as bytes - each 3 bytes applies 2 pixels
data = np.fromfile(rawimg, np.uint8, width * height * 3//2)
data = data.astype(np.uint16) # Cast the data to uint16 type.
result = np.zeros(data.size*2//3, np.uint16) # Initialize matrix for storing the pixels.
# 12 bits packing: ######## ######## ########
# | 8bits| | 4 | 4 | 8 |
# | lsb | |msb|lsb | msb |
# <-----------><----------->
# 12 bits 12 bits
result[0::2] = ((data[1::3] & 15) << 8) | data[0::3]
result[1::2] = (data[1::3] >> 4) | (data[2::3] << 4)
bayer_im = np.reshape(result, (height, width))
# Apply Demosacing (COLOR_BAYER_BG2BGR gives the best result out of the 4 combinations).
bgr = cv2.cvtColor(bayer_im, cv2.COLOR_BAYER_BG2BGR) # The result is BGR format with 16 bits per pixel and 12 bits range [0, 2^12-1].
# Show image for testing (multiply by 16 because imshow requires full uint16 range [0, 2^16-1]).
cv2.imshow('bgr', cv2.resize(bgr*16, [width//10, height//10]))
cv2.waitKey()
cv2.destroyAllWindows()
# Convert to uint8 before saving as JPEG (not part of the conversion).
colimg = np.round(bgr.astype(float) * (255/4095)).astype(np.uint8)
cv2.imwrite("test.jpeg", colimg)
</code></pre>
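<p>As a quick sanity check of the unpacking (my own example, not from the camera data): the 3-byte group <code>0xAB, 0xCD, 0xEF</code> should decode to the 12-bit pixels <code>0xDAB</code> and <code>0xEFC</code>:</p>
<pre><code>import numpy as np
data = np.array([0xAB, 0xCD, 0xEF], np.uint8).astype(np.uint16)
p0 = ((data[1] & 15) << 8) | data[0]  # 0xDAB: low byte plus low nibble of the middle byte
p1 = (data[1] >> 4) | (data[2] << 4)  # 0xEFC: high nibble of the middle byte plus high byte
</code></pre>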
<hr />
<p>Result:<br />
<a href="https://i.stack.imgur.com/7YeEJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7YeEJ.jpg" alt="enter image description here" /></a></p>
|
python|numpy|opencv
| 3
|
6,706
| 42,725,474
|
Numpy vectorize multidimensional function (or, building feature planes for a neural network)
|
<p>Say we have N color (RGB) images of size 100x100 stored in A[N][100][100][3].
So: </p>
<pre><code>Channel 0 = R
Channel 1 = G
Channel 2 = B
</code></pre>
<p>What is the most efficient way of building some other channels using numpy? For example, let's define:</p>
<pre><code>Channel 3 = R + G * 0.5
Channel 4 = If B > 128 Then 1 Else 0
Channel 5 = If R == 100 Then 1 Else 0
Channel 6 = If (R + G) > B Then 1 Else 0
</code></pre>
<p>In other words, we would like to get A[N][100][100][7] with the extra 4 channels built using the above rules for each pixel.</p>
<p>It seems there is no general method to vectorize such operations in numpy, but I think there should be one for the simple case here. Moreover, what will be the fastest method when N is large (>10000)?</p>
|
<p>There are comparatively straight-forward ways, for example:</p>
<pre><code>rgb = np.random.random((1,2,2,3))
r,g,b = np.transpose(rgb, (3,0,1,2))
np.r_["-1, 4, 0", rgb, r+g*0.5, b>128, r==100, (r+g)>b]
# array([[[[ 0.64715017, 0.45204962, 0.28497451, 0.87317498, 0. , 0. , 1. ],
# [ 0.51238478, 0.62095329, 0.9339249 , 0.82286142, 0. , 0. , 1. ]],
# [[ 0.29647208, 0.81635033, 0.76079918, 0.70464724, 0. , 0. , 1. ],
# [ 0.3307639 , 0.1878836 , 0.04642399, 0.4247057 , 0. , 0. , 1. ]]]])
</code></pre>
<p>The <code>r_</code> concatenation operator is a bit cryptic: if a string of 3 ints is passed as the first argument, they mean the concatenation axis, the depth to promote operands to, and the axis to align unpadded dimensions to.</p>
<p>One could probably save a bit of peak memory by preallocating and computing the intermediates sequentially.</p>
<p>Speed-wise I don't see any obvious improvements over the above.</p>
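<p>If the <code>np.r_</code> string is too cryptic, here is a more explicit sketch of the same construction with <code>np.stack</code>/<code>np.concatenate</code>:</p>
<pre><code>r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
extra = np.stack([r + g*0.5,
                  (b > 128).astype(rgb.dtype),
                  (r == 100).astype(rgb.dtype),
                  ((r + g) > b).astype(rgb.dtype)], axis=-1)
out = np.concatenate([rgb, extra], axis=-1)  # shape (N, 100, 100, 7)
</code></pre>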
|
python|numpy|multidimensional-array|computer-vision
| 0
|
6,707
| 42,678,439
|
ImportError: libnvidia-fatbinaryloader.so.375.39: cannot open shared object file: No such file or directory
|
<p>I'm using Ubuntu 16.04, Cuda 8.0 and cudann-v5.1. I uninstalled Tensorflow-CPU version and reinstalled tensorflow-GPU enabled. Followed the instructions given here: <a href="https://alliseesolutions.wordpress.com/2016/09/08/install-gpu-tensorflow-from-sources-w-ubuntu-16-04-and-cuda-8-0-rc/" rel="noreferrer">https://alliseesolutions.wordpress.com/2016/09/08/install-gpu-tensorflow-from-sources-w-ubuntu-16-04-and-cuda-8-0-rc/</a></p>
<p>However, when I try to load tensorflow, I get the following error:</p>
<pre><code>>>> import tensorflow as tf
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/tensorflow/__init__.py", line 24, in <module>
from tensorflow.python import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/__init__.py", line 51, in <module>
from tensorflow.python import pywrap_tensorflow
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 56, in <module>
raise ImportError(msg)
ImportError: Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
_mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
ImportError: libnvidia-fatbinaryloader.so.375.39: cannot open shared object file: No such file or directory
Failed to load the native TensorFlow runtime.
</code></pre>
|
<p>I encountered this issue as well, there were two issues that needed to be resolved.</p>
<ol>
<li><p>I added <code>/usr/lib/nvidia-375</code> to my <code>LD_LIBRARY_PATH</code> environment variable. You can verify that the file <code>libnvidia-fatbinaryloader.so.375.39</code> lives in that directory. If not, find where it does live and add that path. It's not clear to me why this wasn't picked up properly in compiling the sources.</p></li>
<li><p>Next I encountered the error:</p>
<pre><code>libstdc++.so.6: version `CXXABI_1.3.8' not found
</code></pre></li>
</ol>
<p>If you encounter that, it's because you have a newer version of gcc than the libstdc++ available in anaconda or your python installation. For me that meant adding the directory containing the newer <code>libstdc++.so.6</code> to <code>LD_LIBRARY_PATH</code>: <code>/usr/lib/x86_64-linux-gnu</code></p>
<p>I also had to rename the old <code>libstdc++.so.6</code> at the path shown in the error message. I couldn't find a way to convince python not to look in the default path without just renaming the file. There may be a cleaner way to do this, but that worked for me.</p>
<p>There were a lot of hidden gotchas in the installation.</p>
|
python-2.7|tensorflow
| 14
|
6,708
| 25,244,582
|
Python pandas groupby with cumsum and percentage
|
<p>Given the following dataframe df:</p>
<pre><code> app platform uuid minutes
0 1 0 a696ccf9-22cb-428b-adee-95c9a97a4581 67
1 2 0 8e17a2eb-f0ee-49ae-b8c2-c9f9926aa56d 1
2 2 1 40AD6CD1-4A7B-48DD-8815-1829C093A95C 13
3 1 0 26c1022a-7a8e-42a2-b7cc-bea6bffa7a6f 2
4 2 0 34271596-eebb-4423-b890-dc3761ed37ca 8
5 3 1 C57D0F52-B565-4322-85D2-C2798F7CA6FF 16
6 2 0 245501ec2e39cb782bab1fb02d7813b7 1
7 3 1 DE6E4714-5A3C-4C80-BD81-EAACB2364DF0 30
8 3 0 f88eb774-fdf3-4d1d-a91d-0b4ab95cf36e 10
9 2 0 9c08c860-7a6d-4810-a5c3-f3af2a3fcf66 470
10 3 1 19fdaedfd0dbdaf6a7a6b49619f11a19 3
11 3 1 AAF1CFF7-4564-4C79-B2D8-F0AAF9C9971B 58
12 2 0 4eb1024b-c293-42a4-95a2-31b20c3b524b 24
13 3 1 8E0B0BE3-8553-4F38-9837-6C907E01F84C 7
14 3 1 E8B2849C-F050-4DCD-B311-5D57015466AE 465
15 2 0 ec7fedb6-b118-424a-babe-b8ffad579685 266
16 1 0 7e302dcb-ceaf-406c-a9e5-66933d921064 184
17 2 0 f786528ded200c9f553dd3a5e9e9bb2d 10
18 3 1 1E291633-AF27-4DFB-8DA4-4A5B63F175CF 13
19 2 0 953a525c-97e0-4c2f-90e0-dfebde3ec20d 2408
</code></pre>
<p>I'll group it:</p>
<pre><code>y=df.groupby(['app','platform','uuid']).sum().reset_index().sort(['app','platform','minutes'],ascending=[1,1,0]).set_index(['app','platform','uuid'])
minutes
app platform uuid
1 0 7e302dcb-ceaf-406c-a9e5-66933d921064 184
a696ccf9-22cb-428b-adee-95c9a97a4581 67
26c1022a-7a8e-42a2-b7cc-bea6bffa7a6f 2
2 0 953a525c-97e0-4c2f-90e0-dfebde3ec20d 2408
9c08c860-7a6d-4810-a5c3-f3af2a3fcf66 470
ec7fedb6-b118-424a-babe-b8ffad579685 266
4eb1024b-c293-42a4-95a2-31b20c3b524b 24
f786528ded200c9f553dd3a5e9e9bb2d 10
34271596-eebb-4423-b890-dc3761ed37ca 8
245501ec2e39cb782bab1fb02d7813b7 1
8e17a2eb-f0ee-49ae-b8c2-c9f9926aa56d 1
1 40AD6CD1-4A7B-48DD-8815-1829C093A95C 13
3 0 f88eb774-fdf3-4d1d-a91d-0b4ab95cf36e 10
1 E8B2849C-F050-4DCD-B311-5D57015466AE 465
AAF1CFF7-4564-4C79-B2D8-F0AAF9C9971B 58
DE6E4714-5A3C-4C80-BD81-EAACB2364DF0 30
C57D0F52-B565-4322-85D2-C2798F7CA6FF 16
1E291633-AF27-4DFB-8DA4-4A5B63F175CF 13
8E0B0BE3-8553-4F38-9837-6C907E01F84C 7
19fdaedfd0dbdaf6a7a6b49619f11a19 3
</code></pre>
<p>That gives me the minutes per uuid in descending order.</p>
<p>Now, I will take the cumulative sum of minutes per app/platform:</p>
<pre><code>y.groupby(level=[0,1]).cumsum()
app platform uuid
1 0 7e302dcb-ceaf-406c-a9e5-66933d921064 184
a696ccf9-22cb-428b-adee-95c9a97a4581 251
26c1022a-7a8e-42a2-b7cc-bea6bffa7a6f 253
2 0 953a525c-97e0-4c2f-90e0-dfebde3ec20d 2408
9c08c860-7a6d-4810-a5c3-f3af2a3fcf66 2878
ec7fedb6-b118-424a-babe-b8ffad579685 3144
4eb1024b-c293-42a4-95a2-31b20c3b524b 3168
f786528ded200c9f553dd3a5e9e9bb2d 3178
34271596-eebb-4423-b890-dc3761ed37ca 3186
245501ec2e39cb782bab1fb02d7813b7 3187
8e17a2eb-f0ee-49ae-b8c2-c9f9926aa56d 3188
1 40AD6CD1-4A7B-48DD-8815-1829C093A95C 13
3 0 f88eb774-fdf3-4d1d-a91d-0b4ab95cf36e 10
1 E8B2849C-F050-4DCD-B311-5D57015466AE 465
AAF1CFF7-4564-4C79-B2D8-F0AAF9C9971B 523
DE6E4714-5A3C-4C80-BD81-EAACB2364DF0 553
C57D0F52-B565-4322-85D2-C2798F7CA6FF 569
1E291633-AF27-4DFB-8DA4-4A5B63F175CF 582
8E0B0BE3-8553-4F38-9837-6C907E01F84C 589
19fdaedfd0dbdaf6a7a6b49619f11a19 592
</code></pre>
<p>My question is: how can I get the percent against the total cumulative sum, per group, i.e., something like this:</p>
<pre><code>app platform uuid
1 0 7e302dcb-ceaf-406c-a9e5-66933d921064 184 0.26
a696ccf9-22cb-428b-adee-95c9a97a4581 251 0.36
26c1022a-7a8e-42a2-b7cc-bea6bffa7a6f 253 0.36
...
...
...
</code></pre>
|
<p>It's not clear how you came up with 0.26 and 0.36 in your desired output - but assuming those are just dummy numbers, to get a running % of total for each group, you could do this:</p>
<pre><code>y['cumsum'] = y.groupby(level=[0,1]).cumsum()
y['running_pct'] = y.groupby(level=[0,1])['cumsum'].transform(lambda x: x / x.iloc[-1])
</code></pre>
<p>This should give the right output.</p>
<pre><code>In [398]: y['running_pct'].head()
Out[398]:
app platform uuid
1 0 7e302dcb-ceaf-406c-a9e5-66933d921064 0.727273
a696ccf9-22cb-428b-adee-95c9a97a4581 0.992095
26c1022a-7a8e-42a2-b7cc-bea6bffa7a6f 1.000000
2 0 953a525c-97e0-4c2f-90e0-dfebde3ec20d 0.755332
9c08c860-7a6d-4810-a5c3-f3af2a3fcf66 0.902760
Name: running_pct, dtype: float64
</code></pre>
<p>EDIT:</p>
<p>Per the comments, if you're looking to wring out a little more performance, this will be faster as of version 0.14.1</p>
<pre><code>y['cumsum'] = y.groupby(level=[0,1])['minutes'].transform('cumsum')
y['running_pct'] = y['cumsum'] / y.groupby(level=[0,1])['minutes'].transform('sum')
</code></pre>
<p>And as @Jeff notes, in 0.15.0 this may be faster yet.</p>
<pre><code>y['running_pct'] = y['cumsum'] / y.groupby(level=[0,1])['minutes'].transform('last')
</code></pre>
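<p>On recent pandas versions, a sketch with the same logic that skips the intermediate <code>cumsum</code> column entirely:</p>
<pre><code>grp = y.groupby(level=[0,1])['minutes']
y['running_pct'] = grp.cumsum() / grp.transform('sum')
</code></pre>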
|
python|pandas|grouping|percentage
| 3
|
6,709
| 39,371,467
|
numpy.loadtxt returns string repr of bytestring instead of string
|
<p>I'm having trouble reading a data file containing mixed strings and floats with numpy.loadtxt in Python 3. Python 2 works fine, but I want my code to work in Py3.</p>
<p>A simplified example:</p>
<pre><code>import numpy as n
strings = ['str1', 'str2']
parsed = n.loadtxt(strings, dtype='str')
print('Result:', parsed)
</code></pre>
<p>which, when executed, gives different results for Py2 and Py3.</p>
<pre><code>$> python2 mwe.py
Result: ['str1' 'str2']
$> python3 mwe.py
Result: ["b'str1'" "b'str2'"]
</code></pre>
<p>Python 2 gives strings as expected, Python 3 gives strings containing the string representation of bytestrings.</p>
<p>How can I get plain strings from this mess in Python3?</p>
|
<p><code>loadtxt</code> has passed your input string through an <code>asbytes</code> function before parsing (it normally reads files as bytestrings). But how it converts those back to unicode does look buggy.</p>
<p><code>genfromtxt</code> appears to handle this better</p>
<pre><code>In [241]: np.genfromtxt([b'str1', b'str2'], dtype='str')
Out[241]:
array(['str1', 'str2'],
dtype='<U4')
</code></pre>
<p>But it complains if you don't give it bytestrings:</p>
<pre><code>In [242]: np.genfromtxt(['str1', 'str2'], dtype='str')
TypeError: Can't convert 'bytes' object to str implicitly
</code></pre>
<p>Loading as <code>S4</code> and converting to unicode after is another option:</p>
<pre><code>In [244]: np.genfromtxt([b'str1', b'str2'], dtype='S4').astype('str')
Out[244]:
array(['str1', 'str2'],
dtype='<U4')
In [245]: np.loadtxt([b'str1', b'str2'], dtype='S4').astype('str')
Out[245]:
array(['str1', 'str2'],
dtype='<U4')
In [246]: np.loadtxt(['str1', 'str2'], dtype='S4').astype('str')
Out[246]:
array(['str1', 'str2'],
dtype='<U4')
</code></pre>
<p>Another workaround is with the <code>converters</code> argument:</p>
<pre><code>In [250]: np.loadtxt(['str1', 'str2'], dtype='str',converters={0:lambda x: x.decode()})
Out[250]:
array(['str1', 'str2'],
dtype='<U4')
</code></pre>
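<p>On NumPy 1.14 or later, <code>loadtxt</code> and <code>genfromtxt</code> also accept an <code>encoding</code> argument, which avoids the bytestring round-trip entirely:</p>
<pre><code>parsed = np.loadtxt(['str1', 'str2'], dtype=str, encoding='utf-8')
# array(['str1', 'str2'], dtype='<U4')
</code></pre>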
|
python-3.x|numpy
| 2
|
6,710
| 39,378,510
|
Iterating across multiple columns in Pandas DF and slicing dynamically
|
<p><strong>TLDR:</strong> How to iterate across all options of multiple columns in a pandas dataframe without specifying the columns or their values explicitly?</p>
<p><strong>Long Version:</strong> I have a pandas dataframe that looks like this, only it has a lot more features or drug dose combinations than are listed here. Instead of just 3 types of features, it could have something like 70...:</p>
<pre><code>> dosage_df
First Score Last Score A_dose B_dose C_dose
22 28 1 40 130
55 11 2 40 130
15 72 3 40 130
42 67 1 90 130
90 74 2 90 130
87 89 3 90 130
14 43 1 40 700
12 61 2 40 700
41 5 3 40 700
</code></pre>
<p>Along with my data frame, I also have a python dictionary with the relevant ranges for each feature. The keys are the feature names, and the values are the lists of values each feature can take:</p>
<pre><code>> dict_of_dose_ranges = {'A_dose': [1, 2, 3], 'B_dose': [40, 90], 'C_dose': [130,700]}
</code></pre>
<p>For my purposes, I need to generate a particular combination (say A_dose = 1, B_dose = 90, and C_dose = 700), and based on those settings take the relevant slice out of my dataframe, and do relevant calculations from that smaller subset, and save the results somewhere.</p>
<p>I need to do this for ALL possible combinations of ALL of my features (far more than the 3 which are here, and which will be variable in the future). </p>
<p>In this case, I could easily pop this into SkLearn's Parameter grid, generate the options:</p>
<pre><code>> from sklearn.grid_search import ParameterGrid
> all_options = list(ParameterGrid(dict_of_dose_ranges))
> all_options
</code></pre>
<p>and get:</p>
<pre><code>[{'A_dose': 1, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 1, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 2, 'B_dose': 90, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 40, 'C_dose': 700},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 130},
{'A_dose': 3, 'B_dose': 90, 'C_dose': 700}]
</code></pre>
<p><strong>This is where I run into problems:</strong></p>
<p><strong>Problem #1)</strong> I can now iterate across <code>all_options</code>, but I'm not sure how to now SELECT out of my <code>dosage_df</code> from each of the dictionary options (i.e. {'A_dose': 1, 'B_dose': 40, 'C_dose': 130}) WITHOUT doing it explicitly. </p>
<p>In the past, I could do something like:</p>
<pre><code>dosage_df[(dosage_df.A_dose == 1) & (dosage_df.B_dose == 40) & (dosage_df.C_dose == 130)]
First Score Last Score A_dose B_dose C_dose
0 22 28 140 130
</code></pre>
<p>But now I'm not sure what to put inside the brackets to slice it dynamically...</p>
<pre><code>dosage_df[?????]
</code></pre>
<p><strong>Problem #2)</strong> When I actually enter in my full dictionary of features with their respective ranges, I get an error because it deems it as having too many options... </p>
<pre><code>from sklearn.grid_search import ParameterGrid
all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
all_options
---------------------------------------------------------------------------
OverflowError Traceback (most recent call last)
<ipython-input-138-7b73d5e248f5> in <module>()
1 from sklearn.grid_search import ParameterGrid
----> 2 all_options = list(ParameterGrid(dictionary_of_features_and_ranges))
3 all_options
OverflowError: long int too large to convert to int
</code></pre>
<p>I tried a number of alternate approaches including using double while loops, a <a href="https://stackoverflow.com/questions/23986892/python-recursive-iteration-exceeding-limit-for-tree-implementation">tree / recursion method from here</a>, another <a href="https://stackoverflow.com/questions/13109274/python-recursion-permutations">recursion method from here</a>, but it wasn't coming together.... Any help is much appreciated. </p>
|
<p>You can use <a href="https://docs.python.org/3/library/itertools.html#itertools.product" rel="nofollow"><code>itertools.product</code></a> to generate all possible dosage combinations, and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow"><code>DataFrame.query</code></a> to do the selection. Since the combinations are consumed lazily inside the loop rather than materialized with <code>list(...)</code>, this also sidesteps the <code>OverflowError</code> from Problem #2:</p>
<pre><code>from itertools import product
for dosage_comb in product(*dict_of_dose_ranges.values()):
dosage_items = zip(dict_of_dose_ranges.keys(), dosage_comb)
query_str = ' & '.join('{} == {}'.format(*x) for x in dosage_items)
sub_df = dosage_df.query(query_str)
# Do Stuff...
</code></pre>
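<p>For illustration, on the first pass of the loop this builds <code>query_str == 'A_dose == 1 & B_dose == 40 & C_dose == 130'</code> (key order follows the dict), which pulls out exactly one slice of <code>dosage_df</code>.</p>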
|
python|pandas|machine-learning|scikit-learn|grid-search
| 2
|
6,711
| 38,970,028
|
Pandas count occurrence within column on condition being satisfied
|
<p>I am trying to do a count by grouping; see the input and output below.</p>
<p>input:</p>
<pre><code>df = pd.DataFrame()
df['col1'] = ['a','a','a','a','b','b','b']
df['col2'] = [4,4,5,5,6,7,8]
df['col3'] = [1,1,1,1,1,1,1]
</code></pre>
<p>output:</p>
<pre><code> col4
0 2
1 2
2 2
3 2
4 1
5 1
6 1
</code></pre>
<p>Tried playing around with groupby and count, by doing:</p>
<pre><code>s = df.groupby(['col1','col2'])['col3'].sum()
</code></pre>
<p>and the output I got was </p>
<pre><code>a 4 2
5 2
b 6 1
7 1
8 1
</code></pre>
<p>How do I add it just as a column on the main df?</p>
<p>Thanks vm!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow"><code>transform</code></a> <code>len</code> or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow"><code>size</code></a>:</p>
<pre><code>df['count'] = df.groupby(['col1','col2'])['col3'].transform(len)
print (df)
col1 col2 col3 count
0 a 4 1 2
1 a 4 1 2
2 a 5 1 2
3 a 5 1 2
4 b 6 1 1
5 b 7 1 1
6 b 8 1 1
</code></pre>
<hr>
<pre><code>df['count'] = df.groupby(['col1','col2'])['col3'].transform('size')
print (df)
col1 col2 col3 count
0 a 4 1 2
1 a 4 1 2
2 a 5 1 2
3 a 5 1 2
4 b 6 1 1
5 b 7 1 1
6 b 8 1 1
</code></pre>
<p>But column <code>col3</code> is not necessary, you can use <code>col1</code> or <code>col2</code>:</p>
<pre><code>df = pd.DataFrame()
df['col1'] = ['a','a','a','a','b','b','b']
df['col2'] = [4,4,5,5,6,7,8]
df['count'] = df.groupby(['col1','col2'])['col1'].transform(len)
df['count1'] = df.groupby(['col1','col2'])['col2'].transform(len)
print (df)
col1 col2 count count1
0 a 4 2 2
1 a 4 2 2
2 a 5 2 2
3 a 5 2 2
4 b 6 1 1
5 b 7 1 1
6 b 8 1 1
</code></pre>
|
pandas|dataframe|count|group-by|size
| 2
|
6,712
| 28,933,596
|
Reconstruct a 2D array from a string
|
<p>In Python, I have a string (<strong>b</strong> in the following example) converted from a 2D array (<strong>a</strong>). How can I reconstruct the 2D array from the string? </p>
<p>I guess I am using the wrong function "numpy.fromstring" since <strong>c</strong> here is a 1D array. </p>
<pre><code>import numpy
a = numpy.array([[1,2],[3,4]], dtype='float32')
b = a.tostring()
c = numpy.fromstring(b, dtype='float32')
</code></pre>
|
<p>Another approach which saves the shape of the array is to use <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.savetxt.html" rel="nofollow"><code>np.savetxt</code></a> and <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html" rel="nofollow"><code>np.loadtxt</code></a>. These functions expect files as input, so we'll be using <a href="https://docs.python.org/2/library/stringio.html" rel="nofollow"><code>StringIO</code></a> to make a file-like string object:</p>
<pre><code>>>> import numpy
>>> from StringIO import StringIO
>>> a = numpy.array([[1,2],[3,4]], dtype='float32')
>>> io = StringIO()
>>> numpy.savetxt(io, a)
>>> s = io.getvalue()
>>> s
'1.000000000000000000e+00 2.000000000000000000e+00\n3.000000000000000000e+00 4.000000000000000000e+00\n'
</code></pre>
<p>We can recover the array from this string by using <code>np.loadtxt</code>, wrapping it in another <code>StringIO</code>:</p>
<pre><code>>>> numpy.loadtxt(StringIO(s))
array([[ 1., 2.],
[ 3., 4.]])
</code></pre>
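<p>If you already know the shape, the round trip is simpler still. Note also that modern NumPy deprecates <code>tostring</code>/<code>fromstring</code> in favor of <code>tobytes</code>/<code>frombuffer</code> (a sketch):</p>
<pre><code>import numpy
a = numpy.array([[1, 2], [3, 4]], dtype='float32')
b = a.tobytes()
# frombuffer returns a read-only view of b; use .copy() if you need to modify it
c = numpy.frombuffer(b, dtype='float32').reshape(a.shape)
</code></pre>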
|
python|numpy
| 0
|
6,713
| 33,929,624
|
Create a dataframe from a list
|
<p>I've got a learner that returns a list of values corresponding to dates. </p>
<p>I need the function to return a dataframe for plotting purposes. I've got the dataframe created, but now I need to populate the dataframe with the values from the list. Here is my code:</p>
<pre><code>learner.addEvidence(x,y_values.values)
y_prediction_list = learner.query(x) # this yields a plain old python list
y_prediction_df = pd.DataFrame(index=dates,columns="Y-Prediction")
y_prediction_df = ??
return y_prediction_df
</code></pre>
|
<p>You can simply create the dataframe with:</p>
<pre><code>y_prediction_df=pd.DataFrame({"Y-Prediction":y_prediction_list},index=dates)
</code></pre>
|
list|pandas|dataframe
| 2
|
6,714
| 23,841,130
|
Creating a list of numpy.ndarray of unequal length in Cython
|
<p>I now have python code to create a list of ndarrays, and these arrays are not of equal length. The code snippet looks like this:</p>
<pre><code>import numpy as np
from mymodule import list_size, array_length # list_size and array_length are two lists of ints, and the len(array_length) == list_size
ndarray_list = []
for i in range(list_size):
ndarray_list.append(np.zeros(array_length[i]))
</code></pre>
<p>Now, I need to convert this to Cython, but do not know how. I tried to create a 2-d dynamically allocated array, like this:</p>
<pre><code>import numpy as np
cimport numpy as np
from mymodule import list_size, array_length
cdef int i
ndarray_list = <double **>malloc(list_size * sizeof(double*))
for i in range(list_size):
ndarray_list[i] = <double *>malloc(array_length[i] * sizeof(double))
</code></pre>
<p>However, this method only creates a double pointer in ndarray_list[i]. I cannot pass it to other functions that require some of the ndarray methods.</p>
<p>What should I do?</p>
|
<p>In order to pass the C <code>double*</code> buffer to a function that requires a <code>numpy.ndarray</code> you can create a temporary buffer and assign to its memory address the address of the <code>double*</code> array.</p>
<p>This <code>malloc()</code>-based solution is orders of magnitude faster than the other answer based on NumPy buffers. Note how to <code>free()</code> the inner arrays to avoid a memory leak.</p>
<pre><code>import numpy as np
cimport numpy as np
from cython cimport view
from libc.stdlib cimport malloc, free
cdef int i, j
cdef double test
list_size = 10
ndarray_list = <double **>malloc(list_size * sizeof(double*))
array_length = <int *>malloc(list_size * sizeof(int))
for i in range(list_size):
array_length[i] = i+1
ndarray_list[i] = <double *>malloc(array_length[i] * sizeof(double))
for j in range(array_length[i]):
ndarray_list[i][j] = j
for i in range(list_size):
for j in range(array_length[i]):
test = ndarray_list[i][j]
cdef view.array buff
for i in range(list_size):
buff = <double[:array_length[i]]>ndarray_list[i]
print np.sum(buff)
#...
for i in range(list_size):
free(ndarray_list[i])
free(ndarray_list)
free(array_length)
</code></pre>
|
python|arrays|object|numpy|cython
| 4
|
6,715
| 29,805,372
|
Date parse error in Python pandas while reading file
|
<p>Follow on question to: <a href="https://stackoverflow.com/questions/29804236/python-pandas-for-reading-in-file-with-date/29805004#29805004">Python pandas for reading in file with date</a></p>
<p>I am not able to parse the date on the dataframe below. The code is as follows:</p>
<pre><code>df = pandas.read_csv(file_name, skiprows = 2, index_col='datetime',
parse_dates={'datetime': [0,1,2]}, delim_whitespace=True,
date_parser=lambda x: pandas.datetime.strptime(x, '%Y %m %d'))
</code></pre>
<hr>
<pre><code> OTH-000.opc
XKN1= 0.500000E-01
Y M D PRCP VWC1
2006 1 1 0.0 0.17608E+00
2006 1 2 6.0 0.21377E+00
2006 1 3 0.1 0.22291E+00
2006 1 4 3.0 0.23460E+00
2006 1 5 6.7 0.26076E+00
</code></pre>
<p>I get an error saying: lambda () takes exactly 1 argument (3 given)</p>
<p>Based on @EdChum's comment below, if I use this code:</p>
<pre><code>df = pandas.read_csv(file_name, skiprows = 2, index_col='datetime', parse_dates={'datetime': [0,1,2]}, delim_whitespace=True)
</code></pre>
<p>df.index results in an object and not a datetime series</p>
<pre><code>df.index
Index([u'2006 1 1',u'2006 1 2'....,u'nan nan nan'],dtype='object')
</code></pre>
<p>Finally the file is available here:</p>
<p><a href="https://www.dropbox.com/s/0xgk2w4ed9mi4lx/test.txt?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/0xgk2w4ed9mi4lx/test.txt?dl=0</a></p>
|
<p>OK, I see the problem: your file had extraneous blank lines at the end. Unfortunately this messes up the parser, since it's looking for whitespace, and it caused the df to look like the following:</p>
<pre><code>Out[25]:
PRCP VWC1
datetime
2006 1 1 0.0 0.17608
2006 1 2 6.0 0.21377
2006 1 3 0.1 0.22291
2006 1 4 3.0 0.23460
2006 1 5 6.7 0.26076
nan nan nan NaN NaN
</code></pre>
<p>When I remove the blank lines it imports and parses the dates fine:</p>
<pre><code>Out[26]:
PRCP VWC1
datetime
2006-01-01 0.0 0.17608
2006-01-02 6.0 0.21377
2006-01-03 0.1 0.22291
2006-01-04 3.0 0.23460
2006-01-05 6.7 0.26076
</code></pre>
<p>and the index is now a datetimeindex as desired:</p>
<pre><code>In [27]:
df.index
Out[27]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2006-01-01, ..., 2006-01-05]
Length: 5, Freq: None, Timezone: None
</code></pre>
|
python|pandas
| 1
|
6,716
| 62,341,749
|
std.constant' op requires attribute's type to match op's return type
|
<p>I'm trying to convert a keras model, which I trained and fine-tuned following the quantization <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training_example" rel="nofollow noreferrer">aware training tutorial</a> on the official website, to an int tflite model. I am able to follow their steps until I have to convert the model to tflite format. Then it gives me this output:</p>
<pre><code>Traceback (most recent call last):
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 185, in toco_convert_protos
enable_mlir_converter)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/wrap_toco.py", line 38, in wrapped_toco_convert
enable_mlir_converter)
Exception: /home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: error: 'std.constant' op requires attribute's type ('tensor<48x64xf32>') to match op's return type ('tensor<*xf32>')
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py:1194:1: note: called from
dtype=self._compute_dtype_object)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py:162:1: note: called from
outputs = self.layer.call(inputs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:302:1: note: called from
return func(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:507:1: note: called from
outputs = node.layer(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:385:1: note: called from
inputs, training=training, mask=mask)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/saving/saving_utils.py:132:1: note: called from
outputs = model(inputs, training=False)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py:600:1: note: called from
return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: note: see current operation: %cst_8 = "std.constant"() {value = dense<"0x38211AB .. A3E"> : tensor<48x64xf32>} : () -> tensor<*xf32>
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/student1/kvantizacija/tensorflow_example.py", line 58, in <module>
tflite_model_quant = converter.convert()
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 778, in convert
self).convert(graph_def, input_tensors, output_tensors)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/lite.py", line 595, in convert
**converter_kwargs)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 560, in toco_convert_impl
enable_mlir_converter=enable_mlir_converter)
File "/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/lite/python/convert.py", line 188, in toco_convert_protos
raise ConverterError(str(e))
tensorflow.lite.python.convert.ConverterError: /home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: error: 'std.constant' op requires attribute's type ('tensor<48x64xf32>') to match op's return type ('tensor<*xf32>')
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/core.py:1194:1: note: called from
dtype=self._compute_dtype_object)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow_model_optimization/python/core/quantization/keras/quantize_wrapper.py:162:1: note: called from
outputs = self.layer.call(inputs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/autograph/impl/api.py:302:1: note: called from
return func(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:507:1: note: called from
outputs = node.layer(*args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/functional.py:385:1: note: called from
inputs, training=training, mask=mask)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/engine/base_layer.py:961:1: note: called from
outputs = call_fn(inputs, *args, **kwargs)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/saving/saving_utils.py:132:1: note: called from
outputs = model(inputs, training=False)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/eager/def_function.py:600:1: note: called from
return weak_wrapped_fn().__wrapped__(*args, **kwds)
^
/home/student1/venv_kvant/lib/python3.6/site-packages/tensorflow/python/keras/layers/ops/core.py:56:1: note: see current operation: %cst_8 = "std.constant"() {value = dense<"0x38211ABEE ... 6D3DE88D49BE40211A3E"> : tensor<48x64xf32>} : () -> tensor<*xf32>
outputs = standard_ops.tensordot(inputs, kernel, [[rank - 1], [0]])
^
Process finished with exit code 1
</code></pre>
<p>If I remove the flag for optimization it gives me a tflite model, but not the int8 model I need. I can successfully post-training quantize the same model I fine-tune with quant aware training, but for some reason, when I wrap the model in Quantize wrappers and try to convert it, it doesn't work. I'm using the latest nightly version and tried running the script both with and without access to a GPU.</p>
<p>If you need any more information, feel free to ask. One more thing: the model is 4 CNN + Max_pool blocks with some Dense layers at the end. If needed I can provide a visualization of the model. </p>
<p>PS. Here is the summary:</p>
<pre><code>__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input (InputLayer) [(None, 48, 48, 3)] 0
__________________________________________________________________________________________________
quantize_layer (QuantizeLayer) (None, 48, 48, 3) 3 input[0][0]
__________________________________________________________________________________________________
quant_conv_1 (QuantizeWrapper) (None, 46, 46, 16) 483 quantize_layer[0][0]
__________________________________________________________________________________________________
quant_relu_1 (QuantizeWrapper) (None, 46, 46, 16) 3 quant_conv_1[0][0]
__________________________________________________________________________________________________
quant_pool_1 (QuantizeWrapper) (None, 22, 22, 16) 1 quant_relu_1[0][0]
__________________________________________________________________________________________________
quant_conv_2 (QuantizeWrapper) (None, 20, 20, 32) 4707 quant_pool_1[0][0]
__________________________________________________________________________________________________
quant_relu_2 (QuantizeWrapper) (None, 20, 20, 32) 3 quant_conv_2[0][0]
__________________________________________________________________________________________________
quant_pool_2 (QuantizeWrapper) (None, 9, 9, 32) 1 quant_relu_2[0][0]
__________________________________________________________________________________________________
quant_conv_3 (QuantizeWrapper) (None, 7, 7, 32) 9315 quant_pool_2[0][0]
__________________________________________________________________________________________________
quant_relu_3 (QuantizeWrapper) (None, 7, 7, 32) 3 quant_conv_3[0][0]
__________________________________________________________________________________________________
quant_pool_3 (QuantizeWrapper) (None, 3, 3, 32) 1 quant_relu_3[0][0]
__________________________________________________________________________________________________
quant_conv_4 (QuantizeWrapper) (None, 2, 2, 64) 8387 quant_pool_3[0][0]
__________________________________________________________________________________________________
quant_pool_4 (QuantizeWrapper) (None, 1, 1, 64) 1 quant_conv_4[0][0]
__________________________________________________________________________________________________
quant_relu_4 (QuantizeWrapper) (None, 1, 1, 64) 3 quant_pool_4[0][0]
__________________________________________________________________________________________________
quant_fc_yaw (QuantizeWrapper) (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_fc_pitch (QuantizeWrapper (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_fc_roll (QuantizeWrapper) (None, 1, 1, 48) 3125 quant_relu_4[0][0]
__________________________________________________________________________________________________
quant_relu_yaw (QuantizeWrapper (None, 1, 1, 48) 3 quant_fc_yaw[0][0]
__________________________________________________________________________________________________
quant_relu_pitch (QuantizeWrapp (None, 1, 1, 48) 3 quant_fc_pitch[0][0]
__________________________________________________________________________________________________
quant_relu_roll (QuantizeWrappe (None, 1, 1, 48) 3 quant_fc_roll[0][0]
__________________________________________________________________________________________________
quant_flatten_yaw (QuantizeWrap (None, 48) 1 quant_relu_yaw[0][0]
__________________________________________________________________________________________________
quant_flatten_pitch (QuantizeWr (None, 48) 1 quant_relu_pitch[0][0]
__________________________________________________________________________________________________
quant_flatten_roll (QuantizeWra (None, 48) 1 quant_relu_roll[0][0]
__________________________________________________________________________________________________
quant_output_yaw (QuantizeWrapp (None, 61) 2994 quant_flatten_yaw[0][0]
__________________________________________________________________________________________________
quant_output_pitch (QuantizeWra (None, 61) 2994 quant_flatten_pitch[0][0]
__________________________________________________________________________________________________
quant_output_roll (QuantizeWrap (None, 61) 2994 quant_flatten_roll[0][0]
__________________________________________________________________________________________________
quant_yaw (QuantizeWrapper) (None, 1) 54 quant_flatten_yaw[0][0]
__________________________________________________________________________________________________
quant_pitch (QuantizeWrapper) (None, 1) 54 quant_flatten_pitch[0][0]
__________________________________________________________________________________________________
quant_roll (QuantizeWrapper) (None, 1) 54 quant_flatten_roll[0][0]
==================================================================================================
Total params: 41,442
Trainable params: 41,066
Non-trainable params: 376
</code></pre>
|
<p>Greetings, this is closed because I found another way to resolve the problem. The problem was a sequence of layers, Dense -> Flatten -> Dense, which caused this error. For the time being, the solution I'm using is to swap the Flatten layer and the first Dense layer. If anybody knows how to resolve it with the original sequence, please let me know. </p>
|
python|tensorflow|quantization|tensorflow-lite|quantization-aware-training
| 0
|
6,717
| 62,156,914
|
Can I create a dataframe from a few 1d arrays as columns?
|
<p>Is it possible to create a dataframe from a few 1d arrays and place them as columns?
If I create a dataframe from one 1d array, everything is OK:</p>
<pre><code>arr1 = np.array([11, 12, 13, 14, 15])
arr1_arr2_df = pd.DataFrame(data=arr1, index=None, columns=None)
arr1_arr2_df
Out:
0
0 11
1 12
2 13
3 14
4 15
</code></pre>
<p>But if I make a dataframe from 2 arrays, they are placed as rows:</p>
<pre><code>arr1 = np.array([11, 12, 13, 14, 15])
arr2 = np.array([21, 22, 23, 24, 25])
arr1_arr2_df = pd.DataFrame(data=(arr1,arr2), index=None, columns=None)
arr1_arr2_df
Out:
0 1 2 3 4
0 11 12 13 14 15
1 21 22 23 24 25
</code></pre>
<p>I know that I can achieve it by using transpose:</p>
<pre><code>arr1_arr2_df = arr1_arr2_df.transpose()
arr1_arr2_df
Out:
0 1
0 11 21
1 12 22
2 13 23
3 14 24
4 15 25
</code></pre>
<p>But is it possible to get it from the start?</p>
|
<p>You can use a dictionary:</p>
<pre><code>arr1_arr2_df = pd.DataFrame(data={0:arr1,1:arr2})
</code></pre>
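<p>Another option (a sketch, assuming numpy is imported as <code>np</code> as in the question) is to stack the arrays column-wise first:</p>
<pre><code>arr1_arr2_df = pd.DataFrame(np.column_stack([arr1, arr2]))
</code></pre>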
|
python-3.x|pandas|numpy|dataframe
| 3
|
6,718
| 62,393,882
|
Unable to access list from csv file using pandas
|
<p>I have the below content in my csv file, from which I am trying to read the last column using pandas. After successfully fetching the last column x2, I am unable to access its elements column-wise: if I try to index the x2 column, I get rows, but I want columns.</p>
<p><strong>CSV File:</strong></p>
<pre><code>symbol,close,low,high,x0,x1,x2
ACC,-1.41,1241.5,1270.0,-1.41,"[1221241.5, 1270, -1.41]","[1241.5, 1270, -1.41]"
ADANIPORTS,-1.61,336.85,346.85,-1.61,"[336.85, 346.85, -1.61]","[336.85, 346.85, -1.61]"
ADANITRANS,3.45,202.8,211.2,3.45,"[202.8, 211.2, 3.45]","[202.8, 211.2, 3.45]"
</code></pre>
<p><strong>Code</strong></p>
<pre><code>import pandas as pd
df = pd.read_csv("tickerdb.csv", index_col=0)
print((df.iloc[:, -1]))
</code></pre>
<p><strong>Output</strong></p>
<pre><code>symbol
ACC [1241.5, 1270, -1.41]
ADANIPORTS [336.85, 346.85, -1.61]
ADANITRANS [202.8, 211.2, 3.45]
</code></pre>
<p>I tried accessing the column from the list, but I am getting rows instead.</p>
<pre><code>print((df.iloc[:, -1][1]))
</code></pre>
<p><strong>New Output:</strong></p>
<pre><code>[336.85, 346.85, -1.61]
</code></pre>
<p><strong>But the expected output is a column from the lists, not a row:</strong></p>
<pre><code>1270
346.85
211.2
</code></pre>
<p>A second solution, which I would also be fine with, would be if I could somehow get </p>
<pre><code>#Current output from last column using df.iloc
ACC [1241.5, 1270, -1.41]
ADANIPORTS [336.85, 346.85, -1.61]
ADANITRANS [202.8, 211.2, 3.45]
#If I can get like below for x2 column is also fine for me.
symbol low high change
ACC 1241.5 1270 -1.41
ADANIPORTS 336.85 346.85 -1.61
ADANITRANS 202.8 211.2 3.45
</code></pre>
<p>Any of above two solutions would be good for me. Thanks in advance for the help.</p>
|
<p>Try this:</p>
<pre><code>from ast import literal_eval

# x2 holds string representations of lists; literal_eval turns each one back into a real list
df2 = pd.DataFrame(df.x2.apply(literal_eval).tolist(), columns=['low', 'high', 'change'])
# assumes 'symbol' is a regular column (i.e. the CSV was read without index_col)
df2.insert(0, column='symbol', value=df.symbol)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> symbol low high change
0 ACC 1241.50 1270.00 -1.41
1 ADANIPORTS 336.85 346.85 -1.61
2 ADANITRANS 202.80 211.20 3.45
</code></pre>
|
python|pandas|csv|data-analysis
| 0
|
6,719
| 51,386,851
|
Tensorflow operation on non-zero vectors
|
<p>I have spent about two hours on this, but could not find the solution. The closest thing to what I need is probably this <a href="https://www.tensorflow.org/api_docs/python/tf/boolean_mask" rel="nofollow noreferrer">boolean mask</a>, but I am still missing the next step.</p>
<p>My neural network wasn't learning, so I started looking at every step it performs, and sure enough I found a problem: due to sparsity in my input layer, too many bias terms get propagated throughout. The unique part of my setup, though, is that the last <code>time</code> matrices will be zero matrices. Let me show you; I will first show a screenshot of my notebook and will then present the code.</p>
<p>screenshot:</p>
<p><a href="https://i.stack.imgur.com/E7sfU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/E7sfU.png" alt="enter image description here"></a></p>
<p>I do not want bias terms added to where the whole <code>time</code> is a zeros matrix. I thought I could perhaps perform an op on the boolean mask filtered matrix?</p>
<p>Here is the code:</p>
<pre><code>import tensorflow as tf
import numpy as np
dim = 4
# batch x time x events x dim
tensor = np.random.rand(1, 3, 4, dim)
zeros_last_time = np.zeros((4, dim))
tensor[0][2] = zeros_last_time
dtype = tf.float64
input_layer = tf.placeholder(dtype, shape=(None, None, 4, dim))
# These are supposed to perform operations on the non-zero times
Wn = tf.Variable(
tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01),
name="Wn")
bn = tf.Variable(tf.truncated_normal(dtype=dtype, shape=(1,), mean=0,
stddev=0.01), name="bn")
# this is the op I want to be performed only on non-zero times
op = tf.einsum('bted,d->bte', input_layer, Wn) + bn
s = tf.Session()
glob_vars = tf.global_variables_initializer()
s.run(glob_vars)
# first let's see what the bias term is
s.run(bn, feed_dict={input_layer: tensor})
s.run(op, feed_dict={input_layer: tensor})
</code></pre>
<p>EDIT: So I believe <code>tf.where</code> is what I need.</p>
|
<p>I managed to get the right bias, but then noticed that the dimensions are messed up. So this is only a partial answer:</p>
<pre><code>import tensorflow as tf
import numpy as np
dim = 4
# batch x time x events x dim
tensor = np.random.rand(1, 3, 4, dim)
zeros_last_time = np.zeros((4, dim))
tensor[0][2] = zeros_last_time
dtype = tf.float64
input_layer = tf.placeholder(dtype, shape=(None, None, 4, dim))
# These are supposed to perform operations on the non-zero times
Wn = tf.Variable(
tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01),
name="Wn")
bn = tf.Variable(
tf.truncated_normal(dtype=dtype, shape=(1,), mean=0, stddev=0.01),
name="bn")
zeros = tf.equal(input_layer, tf.cast(tf.zeros(tf.shape(input_layer)[2:]),
tf.float64))
# bias
where_ = tf.where(zeros, tf.zeros(tf.shape(input_layer)),
tf.ones(tf.shape(input_layer)))
bias = bn * tf.cast(where_, tf.float64)
op = tf.einsum('bted,d->bte', input_layer, Wn) + bias  # will fail: bias is 4-D (b,t,e,d) while the einsum result is 3-D (b,t,e)
print(bias)
s = tf.Session()
glob_vars = tf.global_variables_initializer()
s.run(glob_vars)
feed_dict = {input_layer: tensor}
s.run(bias, feed_dict)
</code></pre>
<p>and these two lines for the bias do the job:</p>
<pre><code>biases = tf.slice(biases, [0, 0, 0, 0], [1, 3, 1, 4])
squeezed_biases = tf.squeeze(biases)
</code></pre>
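<p>For completeness, a cleaner sketch of the same idea (my own suggestion, with the question's shapes assumed): build a per-time-step mask that is 0 wherever the whole time slice is zero, and scale the scalar bias by it so the shapes line up with the einsum output:</p>
<pre><code>import tensorflow as tf
import numpy as np

dim = 4
tensor = np.random.rand(1, 3, 4, dim)
tensor[0][2] = np.zeros((4, dim))  # last time step is all zeros

dtype = tf.float64
input_layer = tf.placeholder(dtype, shape=(None, None, 4, dim))
Wn = tf.Variable(tf.truncated_normal(dtype=dtype, shape=(dim,), mean=0, stddev=0.01))
bn = tf.Variable(tf.truncated_normal(dtype=dtype, shape=(1,), mean=0, stddev=0.01))

# mask has shape (batch, time): 1 where the time slice has any non-zero entry
mask = tf.cast(tf.reduce_any(tf.not_equal(input_layer, 0), axis=[2, 3]), dtype)
# broadcasting the mask over the event axis keeps the bias off the zero times
op = tf.einsum('bted,d->bte', input_layer, Wn) + bn * mask[:, :, tf.newaxis]

with tf.Session() as s:
    s.run(tf.global_variables_initializer())
    print(s.run(op, feed_dict={input_layer: tensor}))
</code></pre>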
|
python|tensorflow
| 0
|
6,720
| 51,342,213
|
How to dynamically hide glyphs and legend items with Bokeh
|
<p>I am trying to implement checkboxes in bokeh where each checkbox should show/hide the line associated with it. I'm aware it's possible to achieve this with legends, but I want this effect to happen in two plots at the same time. Also, the legend should update as well. In the example below the checkboxes appear, but do nothing. I am clearly not grasping how to update the dataframe used as source. Thanks for any help.</p>
<pre><code>from bokeh.io import show, curdoc
from bokeh.models import HoverTool, ColumnDataSource, Legend
from bokeh.plotting import figure
from bokeh.palettes import Category10
from bokeh.models.widgets import CheckboxGroup
from bokeh.layouts import row
import pandas as pd
def update(atrr, old, new):
lines_to_plot = [checkbox_group.labels[i] for i in checkbox_group.active]
cols = ['x']
for label in lines_to_plot:
cols += [label + 'y']
cols += [label]
newdf = df0[cols]
source.data.update(ColumnDataSource(newdf))
df0 = pd.DataFrame({'x': [1, 2, 3], 'Ay' : [1, 5, 3], 'A': [0.2, 0.1, 0.2], 'By' : [2, 4, 3], 'B':[0.1, 0.3, 0.2]})
columns = ['A', 'B']
checkbox_group = CheckboxGroup(labels=columns, active=[0, 1])
tools_to_show = 'box_zoom,save,hover,reset'
p = figure(plot_height =300, plot_width = 1200,
toolbar_location='above',
tools=tools_to_show)
legend_it = []
color = Category10[10]
columns = ['A', 'B']
source = ColumnDataSource(df0)
for i, col in enumerate(columns):
c = p.line('x', col, source=source, name=col, color=color[i])
legend_it.append((col, [c]))
legend = Legend(items=legend_it, location=(5,114))#(0, -60))
p.add_layout(legend, 'right')
hover = p.select(dict(type=HoverTool))
hover.tooltips = [("Name","$name"), ("Aux", "@$name")]
hover.mode = 'mouse'
layout = row(p,checkbox_group)
checkbox_group.on_change('active', update)
curdoc().add_root(layout)
</code></pre>
|
<p>You will have to manage <code>LegendItem</code> objects manually. Here is a complete example:</p>
<pre><code>import numpy as np
from bokeh.io import curdoc
from bokeh.layouts import row
from bokeh.palettes import Viridis3
from bokeh.plotting import figure
from bokeh.models import CheckboxGroup, Legend, LegendItem
p = figure()
props = dict(line_width=4, line_alpha=0.7)
x = np.linspace(0, 4 * np.pi, 100)
l0 = p.line(x, np.sin(x), color=Viridis3[0], **props)
l1 = p.line(x, 4 * np.cos(x), color=Viridis3[1], **props)
l2 = p.line(x, np.tan(x), color=Viridis3[2], **props)
legend_items = [LegendItem(label="Line %d" % i, renderers=[r]) for i, r in enumerate([l0, l1, l2])]
p.add_layout(Legend(items=legend_items))
checkbox = CheckboxGroup(labels=["Line 0", "Line 1", "Line 2"], active=[0, 1, 2], width=100)
def update(attr, old, new):
l0.visible = 0 in checkbox.active
l1.visible = 1 in checkbox.active
l2.visible = 2 in checkbox.active
p.legend.items = [legend_items[i] for i in checkbox.active]
checkbox.on_change('active', update)
layout = row(checkbox, p)
curdoc().add_root(layout)
</code></pre>
|
python|pandas|bokeh
| 2
|
6,721
| 51,277,748
|
Keras AttributeError: 'Tensor' object has no attribute 'log'
|
<p>I am getting the error 'Tensor' object has no attribute 'log' in code where I build a Keras network with a custom loss function. I think I somehow need to get rid of np.log, but I am not sure how.</p>
<pre><code>import numpy as np
</code></pre>
<p>Custom Function</p>
<pre><code>def rmsle(y_pred,y_test):
return np.sqrt(np.mean((np.log(1+y_pred) - np.log(1+y_test))**2))
</code></pre>
<p>My network </p>
<pre><code>def base_model():
model = Sequential()
model.add(Dense(50, input_dim=X_train.shape[1], init='normal', activation='sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(1, init='normal'))
sgd = SGD(lr=0.01, momentum=0.8, decay=0.1, nesterov=False)
model.compile(loss=rmsle, optimizer = sgd)# )'adam') #
return model
keras = KerasRegressor(build_fn=base_model, nb_epoch=80, batch_size=1,verbose=1)
keras.fit(X_train ,y_train)
</code></pre>
<p>When i check the error msg in detail, it shows that</p>
<pre><code>424 """
425 # score_array has ndim >= 2
--> 426 score_array = fn(y_true, y_pred)
427 if mask is not None:
428 # Cast the mask to floatX to avoid float64 upcasting in theano
2 #return np.sqrt(np.mean(np.square( np.log( (np.exp(a)) + 1 ) - np.log((np.exp(b))+1) )))
----> 4 return np.sqrt(np.mean((np.log(1+y_pred) - np.log(1+y_test))**2))
2 #return np.sqrt(np.mean(np.square( np.log( (np.exp(a)) + 1 ) - np.log((np.exp(b))+1) )))
</code></pre>
|
<p>You must use valid tensor operations from your backend (i.e. from <a href="https://keras.io/backend/" rel="nofollow noreferrer">keras.backend</a>) in order to define a custom loss function. For example, your loss function could be defined as follows:</p>
<pre><code>import keras.backend as K
def rmsle(y_test, y_pred):
return K.sqrt(K.mean(K.square(K.log(1 + y_pred) - K.log(1 + y_test))))
</code></pre>
<p><strong>NOTE:</strong> Keras expects the first argument of a loss function to be the ground truth (here <code>y_test</code>).</p>
|
python|python-3.x|tensorflow|machine-learning|keras
| 1
|
6,722
| 51,520,279
|
Tensorflow dynamic/static shapes: Can not convert a int into a Tensor or Operation
|
<p>In this code, I'm getting the dynamic and static shapes of an input tensor. The problem is that although my NumPy-generated array should be considered a tensor, it is not! Any help will be appreciated!</p>
<pre><code>import tensorflow as tf
import numpy as np
def get_shape(tensor):
"""
Return the static shape of a tensor only when available
"""
static_shape = tensor.shape.as_list()
dynamic_shape = tf.unstack(tf.shape(tensor))
dim = [s[1] if s[0] is None else s[0] for s in zip(static_shape, dynamic_shape)]
return dim
a = tf.placeholder(dtype=tf.float32, shape=[None, 128])
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
x = np.random.normal(loc=0.5, scale=0.3, size=[150, 128])
shapes = get_shape(a)
print(sess.run(shapes, feed_dict={a: x}))
</code></pre>
|
<p>Just change the line</p>
<pre><code>dim = [s[1] if s[0] is None else s[0] for s in zip(static_shape, dynamic_shape)]
</code></pre>
<p>to</p>
<pre><code> dim = [s[1] if s[0] is None else tf.constant(s[0]) for s in zip(static_shape, dynamic_shape)]
</code></pre>
<p>The thing is that <code>s[0]</code> in this case is a plain <code>int</code>, because it comes from the static shape, but <code>sess.run</code> needs a valid TensorFlow tensor or operation. Using <code>tf.constant(s[0])</code> instead of <code>s[0]</code> solves the problem.</p>
|
python|tensorflow
| 1
|
6,723
| 51,193,969
|
filter by two conditions after a group by
|
<p>I want to filter out the ids that have both SMS and phone in the <code>type</code> column where <code>login_method</code> equals <code>resend</code>.</p>
<pre><code>df
id type login_method
1 SMS resend
1 SMS complete
2 phone resend
2 SMS resend
2 SMS start
3 phone resend
3 phone start
3 phone complete
3 SMS nice
</code></pre>
<p>expected result</p>
<pre><code>df
id type login_method
1 SMS resend
1 SMS complete
3 phone resend
3 phone start
3 phone complete
3 SMS nice
</code></pre>
<p>In this case only id 2 has both phone and SMS where <code>login_method</code> equals <code>resend</code>, so it is dropped.</p>
|
<p>Use:</p>
<pre><code>v = ['SMS','phone']
#first keep only the values in the list
df = df[df['type'].isin(v)]
#find ids that do NOT have all values in v with login_method == resend
m1 = df['login_method'] == 'resend'
s = df[m1].drop_duplicates(['id','type']).groupby('id')['type'].nunique() != len(v)
#filtering by ids
df1 = df[df['id'].isin(s.index[s])]
print (df1)
id type login_method
0 1 SMS resend
1 1 SMS complete
4 3 phone resend
5 3 phone start
6 3 phone complete
7 3 SMS nice
</code></pre>
|
python|pandas
| 1
|
6,724
| 51,254,498
|
How to test whether pandas.Series contains only certain type (e.g. int)?
|
<p>I want to test whether a pandas.Series() contains ONLY integers. None of the things below work. I would prefer solutions that use <code>isinstance()</code>. </p>
<pre><code>import pandas as pd
import numpy
print(isinstance(pd.Series([1, 2]).dtype, numpy.int64))
print(isinstance(pd.Series([1, 2]).dtype.type, numpy.int64))
print(pd.Series([1, 2]).dtype)
print(isinstance(pd.Series([1, 2]).dtype.type, int64))
# False
# False
# int64
# NameError: name 'int64' is not defined
</code></pre>
<p>I assume this question must have been addressed already, though I don't find it when I search for it.</p>
|
<p>If you know the series has only one data type, you could just do
<code>print(s.dtype == 'int64')</code></p>
<p>When it contains multiple data types, the data type of the series would be "object" in which case you might want to check if every element is int:</p>
<pre><code>s = pd.Series([1,'5'])
s.apply(isinstance,args = [int])
>> 0 True
1 False
dtype: bool
s.apply(isinstance,args = [int]).all()
>> False
</code></pre>
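<p>As a side note (my addition, not part of the original answer), newer pandas versions also ship a helper for the dtype check that reads a little cleaner than comparing strings:</p>
<pre><code>import pandas as pd
from pandas.api.types import is_integer_dtype

print(is_integer_dtype(pd.Series([1, 2])))    # True
print(is_integer_dtype(pd.Series([1, '5'])))  # False (object dtype)
</code></pre>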
|
python|pandas
| 6
|
6,725
| 48,116,509
|
Capturing multiple groups in a regex does not return any result
|
<p>I have a python function </p>
<pre><code>def regex(series, regex):
series = series.str.extract(regex)
series1 = series.dropna()
return (series1)
</code></pre>
<p>The aim is to match the regex with the pattern below:</p>
<ul>
<li><p>anything with 'no' followed by (group of words) or a 'not' should not be matched. Below is the regex used in a python function:</p>
<p><code>result = regex(df['col'],r'(^(?!.*\bno\b.*\b(text|sample text )\b)(?!.*\b(text|sample text)\b .*not).+$)')</code></p></li>
</ul>
<p>I do not get any results (just an empty data frame) when applying the regex in a function, <a href="https://i.stack.imgur.com/ENMhn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ENMhn.png" alt="enter image description here"></a></p>
<p>but testing the regex in this link works well <a href="https://regex101.com/r/Epq0Ns/21" rel="nofollow noreferrer">https://regex101.com/r/Epq0Ns/21</a></p>
|
<h2>Code</h2>
<p>For simplicity sake, you can actually just use lists and list comprehension to build simple regular expression patterns.</p>
<h3>Usage</h3>
<p><a href="https://ideone.com/uvmoaU" rel="nofollow noreferrer">See code in use here</a></p>
<pre><code>import re
negations = ["no", "not"]
words = ["text", "sample text", "text book", "notebook"]
sentences = [
"first sentence with no and sample text",
"second with a text but also a not",
"third has a no, a text and a not",
"fourth alone is what is neeeded with just text",
"keep putting line here no"
]
# compile the patterns once, outside the loop
negationsRegex = re.compile(r"\b(?:" + "|".join([re.escape(n) for n in negations]) + r")\b")
wordsRegex = re.compile(r"\b(?:" + "|".join([re.escape(w) for w in words]) + r")\b")

for sentence in sentences:
    if not (negationsRegex.search(sentence) and wordsRegex.search(sentence)):
        print(sentence)
</code></pre>
<p><strong>Above code outputs</strong>:</p>
<pre><code>fourth alone is what is neeeded with just text
keep putting line here no
</code></pre>
<hr>
<h2>Explanation</h2>
<p>The code compiles a joined alternation of regex-escaped words, ensuring word boundaries are set. The resulting regular expressions (given the lists <code>negations</code> and <code>words</code>) are as follows:</p>
<pre><code>\b(?:no|not)\b
\b(?:text|sample text|text book|notebook)\b
</code></pre>
<p>The <code>if</code> statement then checks to see if both generated patterns (the negation regex and word regex) match the sentence. If both expressions don't match (one or both don't match), then the string is returned.</p>
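<p>Since the question works on a pandas column, a hedged sketch of applying the same two patterns with <code>str.contains</code> (the column name <code>col</code> is assumed) could look like this:</p>
<pre><code>import re
import pandas as pd

negations_regex = re.compile(r"\b(?:no|not)\b")
words_regex = re.compile(r"\b(?:text|sample text|text book|notebook)\b")

df = pd.DataFrame({'col': ["first sentence with no and sample text",
                           "fourth alone is what is neeeded with just text"]})
# keep rows that do NOT contain both a negation and a target word
mask = df['col'].str.contains(negations_regex) & df['col'].str.contains(words_regex)
print(df[~mask])
</code></pre>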
|
python|regex|pandas|text-mining
| 1
|
6,726
| 70,883,956
|
How can I sample every other value in a grid while ensuring it alternates each row to create an offset?
|
<p>I would like to take a grid of evenly spaced points and sample every other value while ensuring that each row is offset from the one before. I have been able to make this work fairly easily when the number of <code>x</code> points in the grid is odd, but not when it is even.</p>
<p>As an Example the original grid looks like:</p>
<pre><code>import pandas as pd
import numpy as np
x = np.array(range(1, 6))
y = np.array(range(1, 6))
df = pd.DataFrame(np.array(np.meshgrid(x, y, )).T.reshape(-1, 2), columns=['x', 'y'])  # a list, not a set, keeps column order fixed
df.plot.scatter('x', 'y')
</code></pre>
<p><a href="https://i.stack.imgur.com/zQOJo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zQOJo.png" alt="enter image description here" /></a></p>
<p>I have used</p>
<pre><code>df_2 = df[::2]
df_2.plot.scatter('x', 'y')
</code></pre>
<p><a href="https://i.stack.imgur.com/U2jIL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U2jIL.png" alt="enter image description here" /></a></p>
<p>But I cannot figure out how to make it work when the number of <code>x</code> values is even.</p>
<p>For background, I am new to python (coming from R) and I am trying to sample geospatial data at even longitudinal intervals and offset for every change in latitude for even spatial coverage. It is hard to guarantee that each point grid will start with an odd number of <code>x</code> values as they are already random samples of spatial data.</p>
|
<p>We can create a new map/boolean column to say whether the point should be included in the plot. The pattern for filling that column is adding <code>x</code> and <code>y</code> values for a point and taking the result's modulus 2 and comparing that to <code>0</code>.</p>
<p>Then when we plot we restrict the DF to the new map/boolean column.</p>
<pre><code>import pandas as pd
import numpy as np
x = np.array(range(1, 6))
y = np.array(range(1, 6))
df = pd.DataFrame(np.array(np.meshgrid(x, y, )).T.reshape(-1, 2), columns=['x', 'y'])
df['includepoint'] = (df.y + df.x) % 2 == 0
df[df.includepoint].plot.scatter('x', 'y')
</code></pre>
<p><a href="https://i.stack.imgur.com/LmCwF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LmCwF.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy
| 2
|
6,727
| 71,078,186
|
Prevent exploding loss function multi-step multi-variate/output forecast ConvLSTM
|
<p>I have a problem that I currently fail to solve. During training, my loss function explodes and becomes inf or NaN, because the MSE over all errors gets huge when the predictions (at the beginning of training) are poor. That is the normal, intended behavior. But how do I train a ConvLSTM, and with which loss function, so that it can learn a multi-step multi-variate output?</p>
<p>E.g., I try to map (32, None, 200, 12) to predict (32, None, 30, 12). 32 is the batch size, None is the number of samples (>3000), 200 is the number of input time steps, each 12 features wide; the output is 30 time steps, 12 features wide.</p>
<p>My ConvLSTM model:</p>
<pre><code> input = Input(shape=(None, input_shape[1]))
conv1d_1 = Conv1D(filters=64, kernel_size=3, activation=LeakyReLU())(input)
conv1d_2 = Conv1D(filters=32, kernel_size=3, activation=LeakyReLU())(conv1d_1)
dropout = Dropout(0.3)(conv1d_2)
lstm_1 = LSTM(32, activation=LeakyReLU())(dropout)
dense_1 = Dense(forecasting_horizon * input_shape[1], activation=LeakyReLU())(lstm_1)
output = Reshape((forecasting_horizon, input_shape[1]))(dense_1)
model = Model(inputs=input, outputs=output)
</code></pre>
<p>My ds generation:</p>
<pre><code> ds_inputs = tf.keras.utils.timeseries_dataset_from_array(df[:-forecast_horizon], None, sequence_length=window_size, sequence_stride=1,
shuffle=False, batch_size=None)
ds_targets = tf.keras.utils.timeseries_dataset_from_array(df[forecast_horizon:], None, sequence_length=forecast_horizon, sequence_stride=1,
shuffle=False, batch_size=None)
ds_inputs = ds_inputs.batch(batch_size, drop_remainder=True)
ds_targets = ds_targets.batch(batch_size, drop_remainder=True)
ds = tf.data.Dataset.zip((ds_inputs, ds_targets))
ds = ds.shuffle(buffer_size=(len(ds)))
</code></pre>
<p>Besides MSE, I already tried MeanAbsoluteError, MeanSquaredLogarithmicError, MeanAbsolutePercentageError, CosineSimilarity. Where the last, produce non-sense. MSLE works best but does not favor large errors and therefore the MSE (used as metric has an incredible variation during training). Additionally, after a while, the Network becomes stale and gets no better loss (my explanation is that the difference in loss becomes too minor on the logarithmic scale and therefore the weights cannot be well adjusted).</p>
|
<p>I can partially answer my own question. One issue is that I used ReLU/LeakyReLU, which can lead to the exploding-gradient problem because the RNN/LSTM layer applies the same weights over time, so values keep adding up; the activations have no chance to pull the values back down (ReLU's minimum is 0). With tanh as the activation it is possible to have negative values, which also allows the internal weights to shrink and greatly reduces the chance of exploding weights/predictions within the network. After some tests, the LSTM layer stays numerically stable.</p>
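<p>As a minimal illustration of that change (my own sketch; layer sizes and names are assumed from the question), the recurrent layer keeps its bounded tanh default while the convolutions may keep LeakyReLU:</p>
<pre><code>from tensorflow.keras.layers import Input, Conv1D, Dropout, LSTM, Dense, Reshape, LeakyReLU
from tensorflow.keras.models import Model

n_features = 12           # feature width from the question
forecasting_horizon = 30  # output horizon from the question

inp = Input(shape=(None, n_features))
x = Conv1D(64, 3, activation=LeakyReLU())(inp)
x = Conv1D(32, 3, activation=LeakyReLU())(x)
x = Dropout(0.3)(x)
# tanh (the LSTM default) bounds recurrent activations to [-1, 1], so the
# repeated application of the same weights over time cannot blow values up
x = LSTM(32, activation='tanh')(x)
x = Dense(forecasting_horizon * n_features)(x)
out = Reshape((forecasting_horizon, n_features))(x)

model = Model(inp, out)
model.compile(loss='mse', optimizer='adam')
</code></pre>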
|
tensorflow|tensorflow2.0|tensorflow-datasets
| 1
|
6,728
| 71,037,635
|
How to generate predictions from new data using trained tensorflow network?
|
<p>I want to train Google's <a href="https://github.com/tensorflow/models/tree/master/research/audioset/vggish" rel="nofollow noreferrer">VGGish</a> network (<a href="https://doi.org/10.1109/ICASSP.2017.7952132" rel="nofollow noreferrer">Hershey et al. 2017</a>) from scratch to predict classes specific to my own audio files.</p>
<p>For this I am using the <a href="https://github.com/tensorflow/models/blob/master/research/audioset/vggish/vggish_train_demo.py" rel="nofollow noreferrer">vggish_train_demo.py</a> script available on their GitHub repo, which uses TensorFlow. I've been able to modify the script to extract mel-spectrogram features from my own audio by changing the <code>_get_examples_batch()</code> function, and then train the model on the output of this function. This runs to completion and prints the loss at each epoch.</p>
<p>However, I've been unable to figure out how to get this trained model to generate predictions from new data. Can this be done with changes to the vggish_train_demo.py script?</p>
|
<p>For anyone who stumbles across this in the future, I wrote this script, which does the job. You must save log-mel specs for train and test data in the arrays X_train, y_train, X_test, y_test. X_train/X_test are arrays of (n, 96, 64) features, and y_train/y_test are arrays of shape (n, _NUM_CLASSES) for two classes, where n is the number of 0.96s audio segments and _NUM_CLASSES is the number of classes used.</p>
<p>See the function definition statement for more info and the vggish github in my original post:</p>
<pre><code>### Run the network and save the predictions and accuracy at each epoch
### Train NN, output results
r"""This uses the VGGish model definition within a larger model which adds two
layers on top, and then trains this larger model.
We input log-mel spectrograms (X_train) calculated above with associated labels
(y_train), and feed the batches into the model. Once the model is trained, it
is then executed on the test log-mel spectrograms (X_test), and the accuracy is
ouput, alongside a .csv file with the predictions for each 0.96s chunk and their
true class."""
def main(X):
with tf.Graph().as_default(), tf.Session() as sess:
# Define VGGish.
embeddings = vggish_slim.define_vggish_slim(training=FLAGS.train_vggish)
# Define a shallow classification model and associated training ops on top
# of VGGish.
with tf.variable_scope('mymodel'):
# Add a fully connected layer with 100 units. Add an activation function
# to the embeddings since they are pre-activation.
num_units = 100
fc = slim.fully_connected(tf.nn.relu(embeddings), num_units)
# Add a classifier layer at the end, consisting of parallel logistic
# classifiers, one per class. This allows for multi-class tasks.
      logits = slim.fully_connected(
          fc, _NUM_CLASSES, activation_fn=None, scope='logits')
      # Keep the raw (pre-sigmoid) logits for the loss below; the sigmoid
      # output is the per-class probability used for predictions.
      prediction = tf.sigmoid(logits, name='prediction')
# Add training ops.
with tf.variable_scope('train'):
global_step = tf.train.create_global_step()
      # Labels are assumed to be fed as batch multi-hot vectors, with
# a 1 in the position of each positive class label, and 0 elsewhere.
labels_input = tf.placeholder(
tf.float32, shape=(None, _NUM_CLASSES), name='labels')
# Cross-entropy label loss.
xent = tf.nn.sigmoid_cross_entropy_with_logits(
logits=logits, labels=labels_input, name='xent')
loss = tf.reduce_mean(xent, name='loss_op')
tf.summary.scalar('loss', loss)
# We use the same optimizer and hyperparameters as used to train VGGish.
optimizer = tf.train.AdamOptimizer(
learning_rate=vggish_params.LEARNING_RATE,
epsilon=vggish_params.ADAM_EPSILON)
train_op = optimizer.minimize(loss, global_step=global_step)
# Initialize all variables in the model, and then load the pre-trained
# VGGish checkpoint.
sess.run(tf.global_variables_initializer())
vggish_slim.load_vggish_slim_checkpoint(sess, FLAGS.checkpoint)
# The training loop.
features_input = sess.graph.get_tensor_by_name(
vggish_params.INPUT_TENSOR_NAME)
accuracy_scores = []
for epoch in range(num_epochs):#FLAGS.num_batches):
epoch_loss = 0
i=0
while i < len(X_train):
start = i
end = i+batch_size
batch_x = np.array(X_train[start:end])
batch_y = np.array(y_train[start:end])
_, c = sess.run([train_op, loss], feed_dict={features_input: batch_x, labels_input: batch_y})
epoch_loss += c
i+=batch_size
#print no. of epochs and loss
print('Epoch', epoch+1, 'completed out of', num_epochs,', loss:',epoch_loss) #FLAGS.num_batches,', loss:',epoch_loss)
#If these lines are left here, it will evaluate on the test data every iteration and print accuracy
#note this adds a small computational cost
correct = tf.equal(tf.argmax(logits, 1), tf.argmax(labels_input, 1)) #This line returns the max value of each array, which we want to be the same (think the prediction/logits is value given to each class with the highest value being the best match)
accuracy = tf.reduce_mean(tf.cast(correct, 'float')) #changes correct to type: float
accuracy1 = accuracy.eval({features_input:X_test, labels_input:y_test})
accuracy_scores.append(accuracy1)
print('Accuracy:', accuracy1)#TF is smart so just knows to feed it through the model without us seeming to tell it to.
#Save predictions for test data
      predictions_sigm = prediction.eval(feed_dict={features_input: X_test})  # per-class sigmoid probabilities
#print(predictions_sigm) #shows table of predictions, meaningless if saving at each epoch
test_preds = pd.DataFrame(predictions_sigm, columns = col_names) #converts predictions to df
true_class = np.argmax(y_test, axis = 1) #This saves the true class
test_preds['True class'] = true_class #This adds true class to the df
#Saves csv file of table of predictions for test data. NB. header will not save when using np.text for some reason
np.savetxt("/content/drive/MyDrive/..."+"Epoch_"+str(epoch+1)+"_Accuracy_"+str(accuracy1), test_preds.values, delimiter=",")
if __name__ == '__main__':
tf.app.run()
#'An exception has occurred, use %tb to see the full traceback.' error will occur; fear not, this just means it's finished (perhaps because it has exited the tensorflow session?)
</code></pre>
|
python|tensorflow|audio|deep-learning|neural-network
| 1
|
6,729
| 70,821,382
|
Pandas - Add items to dataframe
|
<p>I am trying to add row items to the dataframe, and I am not able to update it.
What I tried until now is commented out, as it doesn't do what I need.</p>
<p>I simply want to download the JSON file and store it in a dataframe with those given columns. It seems I am not able to extract the child components from the JSON file and store them in a brand-new dataframe.</p>
<p>Please find bellow my code:</p>
<pre><code>import requests, json, urllib
import pandas as pd
url = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
data = pd.read_json(url)
headers = []
df = pd.DataFrame()
for key, item in data['vulnerabilities'].items():
for k in item.keys():
headers.append(k)
col = list(set(headers))
new_df = pd.DataFrame(columns=col)
for item in data['vulnerabilities'].items():
print(item[1])
# new_df['product'] = item[1]['product']
# new_df['vendorProject'] = item[1]['vendorProject']
# new_df['dueDate'] = item[1]['dueDate']
# new_df['shortDescription'] = item[1]['shortDescription']
# new_df['dateAdded'] = item[1]['dateAdded']
# new_df['vulnerabilityName'] = item[1]['vulnerabilityName']
# new_df['cveID'] = item[1]['cveID']
# new_df.append(item[1], ignore_index = True)
new_df
</code></pre>
<p>In the end, my df is still blank.
<a href="https://i.stack.imgur.com/9KVUM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9KVUM.png" alt="enter image description here" /></a></p>
|
<p>It worked with this:</p>
<pre><code>
import requests, json, urllib
import pandas as pd
url = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
data = pd.read_json(url)
headers = []
df = pd.DataFrame()
for key, item in data['vulnerabilities'].items():
for k in item.keys():
headers.append(k)
col = list(set(headers))
new_df = pd.DataFrame(columns=col)
for item in data['vulnerabilities'].items():
    new_df.loc[len(new_df.index)] = item[1]  # <=== this line does the trick
new_df.head()
</code></pre>
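<p>A shorter route (my suggestion, not part of the original fix) is <code>pd.json_normalize</code>, which expands a column of dicts into columns in one call:</p>
<pre><code>import pandas as pd

url = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
data = pd.read_json(url)
# each entry of 'vulnerabilities' is a dict, so normalize expands it to columns
new_df = pd.json_normalize(data['vulnerabilities'])
new_df.head()
</code></pre>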
|
python-3.x|pandas|dataframe|jupyter-notebook
| 0
|
6,730
| 51,883,453
|
How to check if two tensors are equal
|
<p>Given two tensors of any rank, how can I tell whether they are the same? Do I have to write my own solution, or is there any built-in implementation of this comparison?</p>
|
<p>To check if two tensors are equal, one can use <code>tf.equal</code>. But it returns a tensor, the result of an element-wise comparison, whose elements are either 1 or 0. Therefore the sum of that tensor equals the number of elements exactly when both tensors are equal.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code> const a = tf.tensor([1, 2, 3, 4], [2, 2]);
const b = tf.tensor([1, 2, 3, 4], [2, 2]);
const c = a.equal(b).sum().dataSync()[0]
console.log(c)
c === a.shape.reduce((a,b) => a *= b) ? console.log("true") : console.log("false")</code></pre>
<pre class="snippet-code-html lang-html prettyprint-override"><code><html>
<head>
<!-- Load TensorFlow.js -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/tensorflow/0.12.4/tf.js"> </script>
</head>
<body>
</body>
</html></code></pre>
</div>
</div>
</p>
|
tensorflow.js
| 2
|
6,731
| 51,856,485
|
Finetuning a tensorflow object detection pretrained model
|
<p>I'm working on a real-time object detector with tensorflow and opencv.</p>
<p>I've used different SSD and Faster-RCNN based frozen inference graphs and they almost never fail. </p>
<p>The video stream comes from an IR camera fixed to a wall, with a background that almost never changes. There are some misdetections at particular hours of the day (e.g. when the light changes in the afternoon) that occur in the background area or on small objects too close to the camera.</p>
<p>So to fix these little mistakes I wanted to fine-tune the model with images from the same background.</p>
<p>Since the background is always the same, how do I approach retraining the model with 1000 misdetection pics that are almost all the same?</p>
|
<p>In case of variations in the background lighting, it might be possible to use Background Subtraction</p>
<p><a href="https://docs.opencv.org/3.4.1/d1/dc5/tutorial_background_subtraction.html" rel="nofollow noreferrer">https://docs.opencv.org/3.4.1/d1/dc5/tutorial_background_subtraction.html</a>
, while dynamically updating it as shown here:</p>
<p><a href="https://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/" rel="nofollow noreferrer">https://www.pyimagesearch.com/2015/06/01/home-surveillance-and-motion-detection-with-the-raspberry-pi-python-and-opencv/</a></p>
<p>Thank you.</p>
|
python|c++|opencv|tensorflow|object-detection
| 0
|
6,732
| 51,952,446
|
Pandas, how to take the pd.Dataframe as a argument in functions
|
<p>To simplify the code, instead of writing two similar blocks, I decided to use a function that takes DataFrames as arguments, like the following:</p>
<pre><code>import numpy
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error
train_one = split_one
test_one = split_two
train_two = split_two
test_two = split_one
def train_predict(train_arg, predict_arg):
knn = KNeighborsRegressor()
knn.fit(train_arg['accommodates'], train_arg['price'])
predict = knn.predict(predict_arg['accommodates'])
rmse = numpy.sqrt(mean_squared_error(predict_arg['price'], predict))
return rmse
iteration_one_rmse = train_predict(train_one, test_one)
iteration_two_rmse = train_predict(train_two, test_two)
avg_rmse = numpy.mean([iteration_one_rmse, iteration_two_rmse])
</code></pre>
<p>Maybe the arguments in the function definition are inappropriate; however, I cannot figure it out. Thanks for any hints.
The error is:</p>
<pre><code>ValueErrorTraceback (most recent call last)
<ipython-input-1-f3f78fcf6758> in <module>()
14 return rmse
15
---> 16 iteration_one_rmse = train_predict(train_one, test_one)
17 iteration_two_rmse = train_predict(train_two, test_two)
18
<ipython-input-1-f3f78fcf6758> in train_predict(train_arg, predict_arg)
9 def train_predict(train_arg, predict_arg):
10 knn = KNeighborsRegressor()
---> 11 knn.fit(train_arg['accommodates'], train_arg['price'])
12 predict = knn.predict(predict_arg['accommodates'])
13 rmse = numpy.sqrt(mean_squared_error(predict_arg['price'], predict))
/dataquest/system/env/python3/lib/python3.4/site-packages/sklearn/neighbors/base.py in fit(self, X, y)
739 """
740 if not isinstance(X, (KDTree, BallTree)):
--> 741 X, y = check_X_y(X, y, "csr", multi_output=True)
742 self._y = y
743 return self._fit(X)
/dataquest/system/env/python3/lib/python3.4/site-packages/sklearn/utils/validation.py in check_X_y(X, y, accept_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, multi_output, ensure_min_samples, ensure_min_features, y_numeric, warn_on_dtype, estimator)
529 y = y.astype(np.float64)
530
--> 531 check_consistent_length(X, y)
532
533 return X, y
/dataquest/system/env/python3/lib/python3.4/site-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
179 if len(uniques) > 1:
180 raise ValueError("Found input variables with inconsistent numbers of"
--> 181 " samples: %r" % [int(l) for l in lengths])
182
183
ValueError: Found input variables with inconsistent numbers of samples: [1, 1862]
</code></pre>
|
<p>If <code>train_arg</code> is a dataframe, then <code>train_arg['accommodates']</code> is a Series, whereas <code>train_arg[['accommodates']]</code> is a DataFrame (containing only one column).</p>
<p>Since the data used in fit and predict is supposed to have multiple columns, these functions work on a <code>pandas.DataFrame</code> but not on a <code>pandas.Series</code>.</p>
<p>To prevent this error from happening, make sure your data (the first argument of fit, and the only argument of predict) is of <code>pandas.DataFrame</code> or <code>numpy.ndarray</code> type.</p>
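<p>Concretely, a hedged version of the fixed function (assuming the column names from the question):</p>
<pre><code>import numpy
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def train_predict(train_arg, predict_arg):
    knn = KNeighborsRegressor()
    # double brackets select a one-column DataFrame (2-D), as fit/predict expect
    knn.fit(train_arg[['accommodates']], train_arg['price'])
    predictions = knn.predict(predict_arg[['accommodates']])
    return numpy.sqrt(mean_squared_error(predict_arg['price'], predictions))
</code></pre>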
|
python|pandas|scikit-learn
| 2
|
6,733
| 51,753,549
|
Access row elements as compound list from two dataframes
|
<p>I have a dataframe as below, df </p>
<pre><code> 1 2 3 4 5 6 7 8 9 10
0 C1 S1 S3
1 C2 S4 S1 S2
2 C3 S3 S5 S1 S3
3 C4 S2 S4 S5 S2 S1 S4 S5 S6 S9
4 C5 S1 S5
</code></pre>
<p>and another dataframe df1 </p>
<pre><code> 1 2 3 4 5 6 7 8 9 10
0 S1 1 17 6 67 0 89 0 4 7
1 S2 4 17 6 67 7 0 0 0 0
2 S3 6 89 0 4 17 6 67 0 1
3 S4 0 2 8 67 7 0 0 6 7
4 S5 23 4 9 2 3 4 5 6 0
</code></pre>
<p>In the end, I want to access rows of df which in turn contain values in df1, and my final data should look like below. I am aware of <code>df.iloc</code>, but I can't append to a list.</p>
<pre><code>C1 = [S1 appended with S3]
C1 = [ 1 17 6 67 0 89 0 4 7 6 89 0 4 17 6 67 0 1]
</code></pre>
<p>Similarly,
C2 = [S4 S1 S2], etc.</p>
|
<p><strong><em>Option 1</em></strong></p>
<p>You can get your template lists using <code>apply</code>:</p>
<pre><code>tmp = df.drop('0', 1).set_index('1').apply(lambda x: list(x.dropna()), 1)
1
C1 [S1, S3]
C2 [S4, S1, S2]
C3 [S3, S5, S1]
C4 [S2, S4, S5]
C5 [S1, S5]
</code></pre>
<p>Prepare <code>df2</code>:</p>
<pre><code>df2 = df2.drop('0', 1).set_index('1')
</code></pre>
<p>Then using <code>loc</code> to explode the lists:</p>
<pre><code>tmp.apply(lambda x: np.array([df2.loc[i] for i in x]).ravel())
1
C1 [1, 17, 6, 67, 0, 89, 0, 4, 7, 6, 89, 0, 4, 17...
C2 [0, 2, 8, 67, 7, 0, 0, 6, 7, 1, 17, 6, 67, 0, ...
C3 [6, 89, 0, 4, 17, 6, 67, 0, 1, 23, 4, 9, 2, 3,...
C4 [4, 17, 6, 67, 7, 0, 0, 0, 0, 0, 2, 8, 67, 7, ...
C5 [1, 17, 6, 67, 0, 89, 0, 4, 7, 23, 4, 9, 2, 3,...
</code></pre>
<p>Or slightly faster using a list comprehension:</p>
<pre><code>pd.Series([np.array([df2.loc[i] for i in x]).ravel() for x in tmp], index=tmp.index)
</code></pre>
<p><strong><em>Option 2</em></strong></p>
<pre><code>df2 = df2.drop('0', 1).set_index('1')
dct = df2.apply(list, 1).to_dict()
tmp = df.drop('0', 1).set_index('1')
tmp.applymap(dct.get).apply(lambda x: [val for pair in x[x.notnull()] for val in pair], 1)
1
C1 [1, 17, 6, 67, 0, 89, 0, 4, 7, 6, 89, 0, 4, 17...
C2 [0, 2, 8, 67, 7, 0, 0, 6, 7, 1, 17, 6, 67, 0, ...
C3 [6, 89, 0, 4, 17, 6, 67, 0, 1, 23, 4, 9, 2, 3,...
C4 [4, 17, 6, 67, 7, 0, 0, 0, 0, 0, 2, 8, 67, 7, ...
C5 [1, 17, 6, 67, 0, 89, 0, 4, 7, 23, 4, 9, 2, 3,...
dtype: object
</code></pre>
|
python|pandas|dataframe
| 2
|
6,734
| 41,983,943
|
tf.parse_example.features for Multi-label photo classification
|
<p>I am trying to apply the tutorial code from cloudml-samples/flowers/ to a set of photos with multiple labels. The environment is Google Cloud Shell. I "preprocess"ed all training and evaluation sets, and ran into an error when I started the training task.</p>
<p>I called trainer.task through python and it returned the error messages below. Please let me know if the log info from running the gcloud beta ml command (only some generic information), or anything else, would be helpful.</p>
<p>Thanks in advance for taking a look.</p>
<pre><code>python -m trainer.task \
--output_path gs://yelp_restaurant_photo_classification/yelp_restaurant_photo_classification/training \
--eval_data_paths gs://yelp_restaurant_photo_classification/yelp_restaurant_photo_classification/preproc/eval* \
--train_data_paths gs://yelp_restaurant_photo_classification/yelp_restaurant_photo_classification/preproc/train*
INFO:root:Original job data: {}
INFO:root:setting eval batch size to 100
INFO:tensorflow:global_step/sec: 0
INFO:tensorflow:global_step/sec: 0
W tensorflow/core/framework/op_kernel.cc:968] Invalid argument: Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors.InvalidArgumentError'>, Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExample/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
Caused by op u'inputs/ParseExample/ParseExample', defined at:
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 559, in <module>
tf.app.run()
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 322, in main
run(model, argv)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 453, in run
dispatch(args, model, cluster, task)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 494, in dispatch
Trainer(args, model, cluster, task).run_training()
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 193, in run_training
self.args.batch_size)
File "trainer/model.py", line 278, in build_train_graph
return self.build_graph(data_paths, batch_size, GraphMod.TRAIN)
File "trainer/model.py", line 232, in build_graph
parsed = tf.parse_example(tensors.examples, features=feature_map)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 307, in parse_example
dense_types, dense_defaults, dense_shapes, name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 405, in _parse_example_raw
name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_parsing_ops.py", line 165, in _parse_example
dense_shapes=dense_shapes, name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExample/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors.InvalidArgumentError'>, Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExample/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
Caused by op u'inputs/ParseExample/ParseExample', defined at:
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 559, in <module>
tf.app.run()
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 322, in main
run(model, argv)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 453, in run
dispatch(args, model, cluster, task)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 494, in dispatch
Trainer(args, model, cluster, task).run_training()
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 193, in run_training
self.args.batch_size)
File "trainer/model.py", line 278, in build_train_graph
return self.build_graph(data_paths, batch_size, GraphMod.TRAIN)
File "trainer/model.py", line 232, in build_graph
parsed = tf.parse_example(tensors.examples, features=feature_map)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 307, in parse_example
dense_types, dense_defaults, dense_shapes, name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 405, in _parse_example_raw
name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_parsing_ops.py", line 165, in _parse_example
dense_shapes=dense_shapes, name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExample/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
Traceback (most recent call last):
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 559, in <module>
tf.app.run()
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 322, in main
run(model, argv)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 453, in run
dispatch(args, model, cluster, task)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 494, in dispatch
Trainer(args, model, cluster, task).run_training()
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 245, in run_training
self.global_step = session.run(to_run)[0]
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 717, in run
run_metadata_ptr)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 915, in _run
feed_dict_string, options, run_metadata)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 965, in _do_run
target_list, options, run_metadata)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 985, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but output shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExample/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
Caused by op u'inputs/ParseExample/ParseExample', defined at:
File "/usr/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 559, in <module>
tf.app.run()
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 322, in main
run(model, argv)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 453, in run
dispatch(args, model, cluster, task)
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 494, in dispatch
Trainer(args, model, cluster, task).run_training()
File "/home/slalomconsultingsf/git/cloudml-samples/yelp_restaurant_photo_classification/trainer/task.py", line 193, in run_training
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors.InvalidArgumentError'>, Name: <unknown>, Key: label, Index:
self.args.batch_size)
File "trainer/model.py", line 278, in build_train_graph
return self.build_graph(data_paths, batch_size, GraphMod.TRAIN)
File "trainer/model.py", line 232, in build_graph
parsed = tf.parse_example(tensors.examples, features=feature_map)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 307, in parse_example
dense_types, dense_defaults, dense_shapes, name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/parsing_ops.py", line 405, in _parse_example_raw
name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/ops/gen_parsing_ops.py", line 165, in _parse_example
dense_shapes=dense_shapes, name=name)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 749, in apply_op
op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2380, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/slalomconsultingsf/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1298, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Name: <unknown>, Key: label, Index: 0. Number of int64 values != expected. Values size: 5 but outpu
t shape: [1]
[[Node: inputs/ParseExample/ParseExample = ParseExample[Ndense=3, Nsparse=0, Tdense=[DT_FLOAT, DT_STRING, DT_INT64], dense_shapes=[[2048], [
], [1]], sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch:1, inputs/ParseExample/ParseExample/names, inputs/ParseExamp
le/ParseExample/dense_keys_0, inputs/ParseExample/ParseExample/dense_keys_1, inputs/ParseExample/ParseExample/dense_keys_2, inputs/ParseExample/Const
, inputs/ParseExample/Reshape, inputs/ParseExample/Reshape_1)]]
</code></pre>
|
<p>The flowers sample does not directly support multi-label classification. I believe the exact problem in your code is related to the <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/flowers/trainer/model.py#L226" rel="nofollow noreferrer">shape=[1] in model.py</a>.</p>
<p>Alas, plugging in multi-label examples will not work out of the box with this code, which was designed as a good example of how to get started with a complex workflow (i.e. restoring from checkpoints, transfer learning, embedding in Dataflow, etc.), more so than as an image-modeling Swiss army knife.</p>
<p>There should be only a few lines you would have to make changes to model.py to get your multi-label example working though. Furthermore, I'm sure other members of the community would greatly appreciate any progress you make so please consider creating a <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/pulls" rel="nofollow noreferrer">pull request</a>, should you get a sample that does both multi and single-label problems!</p>
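<p>For illustration only (a hedged sketch, not a tested patch of the sample): the error "Values size: 5 but output shape: [1]" says the parser expects one int64 label while each example stores five, so the feature spec for <code>label</code> has to allow that many values:</p>
<pre><code>import tensorflow as tf

# Hypothetical names: serialized_examples stands in for tensors.examples in
# model.py, and _NUM_LABELS assumes every example carries exactly 5 labels.
_NUM_LABELS = 5
serialized_examples = tf.placeholder(tf.string, shape=[None])

feature_map = {
    'embedding': tf.FixedLenFeature(shape=[2048], dtype=tf.float32),
    'image_uri': tf.FixedLenFeature(shape=[], dtype=tf.string),
    # shape=[_NUM_LABELS] instead of shape=[1]; use tf.VarLenFeature(tf.int64)
    # if the number of labels varies per example
    'label': tf.FixedLenFeature(shape=[_NUM_LABELS], dtype=tf.int64),
}
parsed = tf.parse_example(serialized_examples, features=feature_map)
</code></pre>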
|
tensorflow|google-cloud-ml
| 0
|
6,735
| 41,879,770
|
how to use MNIST datast on linux using tensorflow
|
<p>I'm new in machine learning and I am following tensorflow's tutorial to create some simple Neural Networks which learn the MNIST data.</p>
<p>I want to run code that recognizes handwritten digits using the MNIST data, but I don't know how to run it. Should I download the data to my machine, extract it into a folder, and set the path in the code, or does TensorFlow contain the data? When I do <code>import input_data</code> I get
No module named 'input_data'; likewise,
<code>from tensorflow.examples.tutorials.mnist import input_data</code> gives No module named 'tensorflow.examples'.
PS: when I do <code>import tensorflow as tf</code> I get no error, so I think TensorFlow itself is fine.</p>
<p>Could you help me, please? For example, I want to run the code below; what should I do?
<a href="https://github.com/hwalsuklee/tensorflow-mnist-cnn" rel="nofollow noreferrer">https://github.com/hwalsuklee/tensorflow-mnist-cnn</a></p>
|
<p>If you cannot import <em>tensorflow.examples</em> I'm guessing something went wrong with the installation. Try reinstalling tensorflow with the latest version.
You don't need to download the data on your own, tensorflow will put it in the path you provide. But first, try these steps:</p>
<p>I'm currently using tf 1.2.0 and I'm not getting that error.</p>
<p>If you want to know which version you have installed:</p>
<pre><code>import tensorflow as tf
print(tf.__version__)
</code></pre>
<p>After everything is installed try:</p>
<pre><code>from tensorflow.examples.tutorials.mnist import input_data
input_data.read_data_sets("./data/", one_hot=True)
</code></pre>
<p>That should copy the data to a "data" folder inside your working folder (the "data" folder will be created and all the files will be available there).</p>
<p>If the above lines of code run with no errors, you should be able to run the example.</p>
|
tensorflow|mnist
| 1
|
6,736
| 42,044,256
|
Updating a value in a Pandas dataframe seems to update all dataframes
|
<p>I've built two Pandas dataframes like this:</p>
<pre><code>import panda as pd
d = {'FIPS' : pd.Series(['01001', '01002']), 'count' : pd.Series([3, 4])}
df1 = pd.DataFrame(d)
df2 = df1
</code></pre>
<p>I want to change one of the values in df2. This is what I've tried:</p>
<pre><code>df2.loc[df2['FIPS'] == '01001','FIPS'] = '01003'
</code></pre>
<p>This line appears to update both df1 and df2, but I don't understand why.</p>
|
<p>Because <code>df2</code> is only a reference to <code>df1</code>: they point to the same object in memory, just under different names. If you do <code>df2 = df1.copy()</code>, a new object is created for <code>df2</code> and only that copy is updated. Plus, you have a typo in <code>import pandas</code> :)</p>
<p>You can check which memory address the object lives at with <code>id(df1)</code>, see that it is the same as <code>id(df2)</code>, and that it changes if you use the <code>.copy()</code> method.</p>
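<p>A small sketch of the difference, using the data from the question:</p>
<pre><code>import pandas as pd

d = {'FIPS': pd.Series(['01001', '01002']), 'count': pd.Series([3, 4])}
df1 = pd.DataFrame(d)

df2 = df1                   # same object under two names
print(id(df1) == id(df2))   # True

df2 = df1.copy()            # independent copy with its own memory
print(id(df1) == id(df2))   # False

df2.loc[df2['FIPS'] == '01001', 'FIPS'] = '01003'
print(df1['FIPS'].tolist())  # ['01001', '01002'] -- df1 is unchanged
</code></pre>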
<p>Welcome to SO! </p>
|
python|pandas|dataframe
| 1
|
6,737
| 64,610,737
|
Tensorflow Object Detection API TF Nightly
|
<p>Is it possible to use the Tensorflow Object Detection API with the current tf-nightly build and how do I replace tensorflow with tf-nightly?
I have an RTX 3080, which needs CUDA 11, and that is only supported in tf-nightly as of now.</p>
|
<p>I got it working with my RTX 3070 on Windows with TF2.</p>
<ol>
<li>Create a python 3.8 conda environment and install tf-nightly-gpu via pip</li>
</ol>
<p><code>pip install tf-nightly-gpu==2.5.0.dev20210109</code></p>
<ol start="2">
<li><p>Install cuda 11.0 and cuDNN 8.0.2</p>
</li>
<li><p>Install cuda 11.1</p>
</li>
<li><p>Replace ptxas.exe in the v11.0 bin directory with the v11.1 version (the 11.0 version was causing errors for me)</p>
</li>
<li><p>Make sure your path/cuda path point to cuda 11.0 (not 11.1)</p>
</li>
<li><p>install object detection api from tensorflow</p>
</li>
<li><p>Add this to your model_main_tf2.py</p>
</li>
</ol>
<pre><code>os.environ['TF_CPP_MIN_LOG_LEVEL']='2'
from absl import flags
import tensorflow.compat.v2 as tf
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
from object_detection import model_lib_v2
</code></pre>
<p>Unfortunately, with this I can only train the model, not evaluate it.</p>
|
tensorflow|object-detection|object-detection-api
| 1
|
6,738
| 64,470,908
|
Pandas Merge DataFrames on similar dates (+-7 days)
|
<p>I have two Pandas DataFrames and want to merge them on two attributes <code>key</code> and <code>date</code>, where <code>date</code> is a datetime and two rows should be merged, if the date of the second table is +-7 days close to the date in the first table.</p>
<p>Currently, I merge the data frames first and select the matching rows afterwards, but this is slow and results in a huge intermediate table:</p>
<pre><code>res = pd.merge(left, right, on=['key'], how='inner')
mask = (
((res['date_x'] + pd.Timedelta(0, 'days')) <= (res['date_y'] + pd.Timedelta(7, 'days'))) &
((res['date_x'] - pd.Timedelta(0, 'days')) >= (res['date_y'] - pd.Timedelta(7, 'days')))
)
res = res.loc[mask]
</code></pre>
<p>Is there a faster way to reach the same result, like conditional merging?</p>
|
<p>Without seeing a reproducible example, it sounds like you may be looking for the <code>merge_asof</code> function (if I understood your question correctly). <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html</a> Should look something like:</p>
<pre><code>pd.merge_asof(left, right, on="date", by="key", tolerance=pd.Timedelta(7, 'days'), direction="nearest")
</code></pre>
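<p>One caveat worth noting: <code>merge_asof</code> matches backward by default and keeps at most one match per left row, so to match within +-7 days in either direction (as in the question) you would likely also want <code>direction="nearest"</code>, and both frames must be sorted on the merge key. A sketch:</p>
<pre><code>pd.merge_asof(left.sort_values('date'), right.sort_values('date'),
              on="date", by="key",
              tolerance=pd.Timedelta(7, 'days'), direction="nearest")
</code></pre>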
|
pandas|dataframe|datetime|merge
| 3
|
6,739
| 64,190,105
|
Is there a faster way to split a pandas dataframe into two complementary parts?
|
<p>Good evening all,</p>
<p>I have a situation where I need to split a dataframe into two complementary parts based on the value of one feature.</p>
<p>What I mean by this is that for every row in dataframe 1, I need a complementary row in dataframe 2 that takes on the opposite value of that specific feature.</p>
<p>In my source dataframe, the feature I'm referring to is stored under column "773", and it can take on values of either <strong>0.0</strong> or <strong>1.0</strong>.</p>
<p>I came up with the following code that does this sufficiently, but it is remarkably slow. It takes about a minute to split 10,000 rows, even on my all-powerful EC2 instance.</p>
<pre><code>data = chunk.iloc[:,1:776]
listy1 = []
listy2 = []
for i in range(0,len(data)):
random_row = data.sample(n=1).iloc[0]
listy1.append(random_row.tolist())
if random_row["773"] == 0.0:
x = data[data["773"] == 1.0].sample(n=1).iloc[0]
listy2.append(x.tolist())
else:
x = data[data["773"] == 0.0].sample(n=1).iloc[0]
listy2.append(x.tolist())
df1 = pd.DataFrame(listy1)
df2 = pd.DataFrame(listy2)
</code></pre>
<p><strong>Note: I don't care about duplicate rows, because this data is being used to train a model that compares two objects to tell which one is "better."</strong></p>
<p>Do you have some insight into why this is so slow, or any suggestions as to make this faster?</p>
|
<p>A key concept in efficient <code>numpy</code>/<code>scipy</code>/<code>pandas</code> coding is using library-shipped vectorized functions whenever possible. Try to process multiple rows at once instead of iterating explicitly over rows, i.e. avoid <code>for</code> loops and <code>.iterrows()</code>.</p>
<p>The implementation provided is a little subtle in terms of indexing, but the vectorized thinking is straightforward:</p>
<ol>
<li>Draw the main dataset at once.</li>
<li>The complementary dataset: draw the 0-rows at once, the complementary 1-rows at once, and then put them into the corresponding rows at once.</li>
</ol>
<p><strong>Code</strong>:</p>
<pre><code>import pandas as pd
import numpy as np
from datetime import datetime
np.random.seed(52) # reproducibility
n = 10000
df = pd.DataFrame(
data={
"773": [0,1]*int(n/2),
"dummy1": list(range(n)),
"dummy2": list(range(0, 10*n, 10))
}
)
t0 = datetime.now()
print("Program begins...")
# 1. draw the main dataset
draw_idx = np.random.choice(n, n) # repeatable draw
df_main = df.iloc[draw_idx, :].reset_index(drop=True)
# 2. draw the complementary dataset
# (1) count number of 1's and 0's
n_1 = np.count_nonzero(df["773"][draw_idx].values)
n_0 = n - n_1
# (2) split data for drawing
df_0 = df[df["773"] == 0].reset_index(drop=True)
df_1 = df[df["773"] == 1].reset_index(drop=True)
# (3) draw n_1 indexes in df_0 and n_0 indexes in df_1
idx_0 = np.random.choice(len(df_0), n_1)
idx_1 = np.random.choice(len(df_1), n_0)
# (4) broadcast the drawn rows into the complementary dataset
df_comp = df_main.copy()
mask_0 = (df_main["773"] == 0).values
df_comp.iloc[mask_0 ,:] = df_1.iloc[idx_1, :].values # df_1 into mask_0
df_comp.iloc[~mask_0 ,:] = df_0.iloc[idx_0, :].values # df_0 into ~mask_0
print(f"Program ends in {(datetime.now() - t0).total_seconds():.3f}s...")
</code></pre>
<p><strong>Check</strong></p>
<pre><code>print(df_main.head(5))
773 dummy1 dummy2
0 0 28 280
1 1 11 110
2 1 13 130
3 1 23 230
4 0 86 860
print(df_comp.head(5))
773 dummy1 dummy2
0 1 19 190
1 0 74 740
2 0 28 280 <- this row is complementary to df_main
3 0 60 600
4 1 37 370
</code></pre>
<p><strong>Efficiency gain</strong>: 14.23s -> 0.011s (ca. 1290x)</p>
|
python|pandas|dataframe
| 1
|
6,740
| 47,550,705
|
If value for a column is Nat, update it by subtracting two dates
|
<p>I have a data frame which looks like this:</p>
<p><a href="https://i.stack.imgur.com/rA3jW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rA3jW.png" alt="enter image description here"></a></p>
<p>What I am trying to do is check if days_diff is NaT using numpy and pandas; if it is NaT, then update it by subtracting "2016-01-01" from outofservicedatetime. After running the code below:</p>
<pre><code>df[['days_diff']] = np.where(pd.isnull(df[['days_diff']]), df[['outofservicedatetime']] - np.datetime64('2016-01-01'), df[['days_diff']])
</code></pre>
<p>I get output which looks like this:</p>
<p><a href="https://i.stack.imgur.com/SMNZt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SMNZt.png" alt="enter image description here"></a></p>
<p>How can I get the days_diff value in days? Alternatively, if anyone can suggest an easier way to achieve this, that would be equally helpful.</p>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>to_timedelta</code></a>:</p>
<pre><code>df['days_diff'] = pd.to_timedelta(df['days_diff'])
</code></pre>
<p>But better is use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>loc</code></a> for set values by condition:</p>
<pre><code>date = pd.to_datetime('2016-01-01')
df.loc[df['days_diff'].isnull(), 'days_diff'] = df['outofservicedatetime'] - date
print (df)
windturbineid outofservicedatetime subsystem days_diff
0 wtg_11 2016-03-09 Transformer 68 days
1 wtg_29 2016-03-14 Gearbox 73 days
2 wtg_14 2016-03-22 Converter 81 days
3 wtg_4 2016-03-25 Converter 84 days
4 wtg_19 2016-03-30 Converter 89 days
5 wtg_13 2016-04-05 Yawing 95 days
6 wtg_9 2016-05-04 Converter 124 days
7 wtg_15 2016-05-24 Converter 144 days
8 wtg_7 2016-05-25 Converter 145 days
9 wtg_22 2016-05-30 Generator 150 days
10 wtg_2 2016-05-31 Converter 151 days
11 wtg_1 2016-06-29 Converter 180 days
12 wtg_1 2016-07-11 Generator 192 days
13 wtg_14 2016-07-17 Converter 117 days
14 wtg_7 2016-08-11 Converter 78 days
15 wtg_14 2016-08-22 Transformer 234 days
16 wtg_7 2016-08-25 Yawing 237 days
</code></pre>
|
python|pandas|numpy
| 0
|
6,741
| 48,957,094
|
How can i use "leaky_relu" as an activation in Tensorflow "tf.layers.dense"?
|
<p>Using Tensorflow 1.5, I am trying to add <code>leaky_relu</code> activation to the output of a dense layer while I am able to change the <code>alpha</code> of <code>leaky_relu</code> (check <a href="https://www.tensorflow.org/api_docs/python/tf/nn/leaky_relu" rel="noreferrer">here</a>). I know I can do it as follows:</p>
<pre><code>output = tf.layers.dense(input, n_units)
output = tf.nn.leaky_relu(output, alpha=0.01)
</code></pre>
<p>I was wondering if there is a way to write this in one line as we can do for <code>relu</code>:</p>
<p><code>ouput = tf.layers.dense(input, n_units, activation=tf.nn.relu)</code></p>
<p>I tried the following but I get an error:</p>
<pre><code>output = tf.layers.dense(input, n_units, activation=tf.nn.leaky_relu(alpha=0.01))
TypeError: leaky_relu() missing 1 required positional argument: 'features'
</code></pre>
<p>Is there a way to do this?</p>
|
<p>If you're really adamant about a one liner for this, you could use the <code>partial()</code> method from the <code>functools</code> module, as follow:</p>
<pre><code>import tensorflow as tf
from functools import partial
output = tf.layers.dense(input, n_units, activation=partial(tf.nn.leaky_relu, alpha=0.01))
</code></pre>
<p>It should be noted that <code>partial()</code> does not work for all operations and you might have to try your luck with <code>partialmethod()</code> from the same module.</p>
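<p>Another possibility, since the <code>activation</code> argument only needs to be a callable, is a plain lambda; a sketch equivalent to the <code>partial</code> version above:</p>
<pre><code>output = tf.layers.dense(input, n_units,
                         activation=lambda x: tf.nn.leaky_relu(x, alpha=0.01))
</code></pre>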
<p>Hope this helps you in your endeavour.</p>
|
python|tensorflow
| 14
|
6,742
| 58,723,212
|
Using Pandas in python, how do I change a certain row?
|
<p>I am working on changing a value inside of a row inside of pandas.</p>
<p>I will include the first two lines of my .csv file, so you can get a feel for the data.</p>
<pre><code>section,price,seats,type
101,50.00,150,matinee
</code></pre>
<p>As you can see, it is pretty straightforward. Here is the issue.</p>
<pre><code>localList = matineeSeats.df.loc[matineeSeats.df['section'] == int(selection)] #Create slice of DataFrame for selected section
if localList.iloc[0, 2] > 0: #If theres more than 0 seats left... Cant do [0, 'seats']
print("Taking seat")
#Set the seats -= 1 ###THIS LINE###
</code></pre>
<hr>
<p><em>NOTE: For some reason I cannot access data by doing localList.iloc['seats'], but maybe I am doing it wrong?</em></p>
<hr>
<h1>What I tried</h1>
<p>I am unable to figure out how to get the seats to decrement by 1 each time one is purchased. The "THIS LINE" is where all my problems come from. I tried just setting the value equal to itself minus 1, and got the following.</p>
<h2>Try 1</h2>
<pre><code> if localList.iloc[0, 2] > 0:
print("Taking seat")
localList.iloc[0, 2] = localList.iloc[0, 2] - 1
print(localList.iloc[0, 2])
</code></pre>
<blockquote>
<p>SettingWithCopyWarning: A value is trying to be set on a copy of a
slice from a DataFrame. Try using .loc[row_indexer,col_indexer] =
value instead</p>
<p>See the caveats in the documentation:
<a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy</a>
self.obj[item] = s </p>
</blockquote>
<h2>Try 2</h2>
<p>After I saw that, I pressed the buy button multiple times but it ALWAYS stayed at the previous value - 1, and would never decrement further than that. So I tried what the console suggested, using LOC instead of ILOC.</p>
<pre><code> if localList.iloc[0, 2] > 0:
print("Taking seat")
localList.loc[0, 2] = localList.loc[0, 2] - 1
print(localList.iloc[0, 2])
</code></pre>
<blockquote>
<p>TypeError: cannot do label indexing on with these indexers [2] of class 'int'</p>
</blockquote>
<h2>Try 3</h2>
<p>I wanted to then just limit this to one variable to test if I can even touch this data with LOC, which it seems that I cant.</p>
<pre><code>localList.loc[0, 2] -= 1
</code></pre>
<blockquote>
<p>TypeError: cannot do label indexing on with these indexers [2] of class 'int'</p>
</blockquote>
<h2>Try 4</h2>
<p>From here I wanted to see what I was working with using LOC instead of ILOC. So I just printed the data out. It's no different from ILOC, so why can I not access this data the same way?</p>
<pre><code> print(localList.loc[0])
</code></pre>
<blockquote>
<p>section 101</p>
<p>price 50</p>
<p>seats 150</p>
<p>type matinee</p>
<p>Name: 0, dtype: object</p>
</blockquote>
<h1>What fixed it for me</h1>
<p>So I didn't think that saving off the slice would stop it from updating the dataframe. While testing, I figured out I need to take my localList and save it back into the frame it was selected from in the first place.</p>
|
<p><strong>Edit</strong>: I understood the question now. You are trying to update the original dataframe <code>matineeSeats.df</code> and not <code>localList</code></p>
<h1>Problem</h1>
<p>You are creating a copy with the <code>.loc</code> selection</p>
<pre><code>import pandas as pd
matineeSeats_df = pd.DataFrame([{'section': 101, 'price': 50.0, 'seats': 150, 'type': 'matinee'}])
# this creates a copy
localList = matineeSeats_df.loc[matineeSeats_df['section'] == 101]
# just localList gets updated, matineeSeats_df is not updated
localList.at[0, 'seats'] = localList.at[0, 'seats'] - 1
</code></pre>
<h1>Solution</h1>
<p>To update <code>matineeSeats_df</code> directly you can do it like this:</p>
<pre><code>matineeSeats_df.loc[matineeSeats_df['section'] == 101, 'seats'] -= 1
</code></pre>
|
python|pandas
| 0
|
6,743
| 58,840,462
|
Indexing numpy.ndarrays periodically
|
<p>I am trying to access (read/write) <code>numpy.ndarrays</code> periodically. In other words, if I have <code>my_array</code> with a shape of <strong>10*10</strong> and I use the access operator with the inputs:</p>
<p><code>my_array[10, 10]</code> or <code>access_function(my_array, 10, 10)</code></p>
<p>I can have access to element </p>
<p><code>my_array[0, 0]</code>.</p>
<p>I want read/write access to the returned element of the periodically indexed array.</p>
<p>Can anyone show how to do this without making a shifted copy of my original array?</p>
|
<p>I think this does what you want, but I'm not sure whether something more elegant exists. It's probably possible to write a general function for an nD array, but this does 2D only. As you said, it uses modular arithmetic.</p>
<pre><code>import numpy as np
def access(shape, ixr, ixc):
""" Returns a selection. """
return np.s_[ixr % shape[0], ixc % shape[1]]
arr = np.arange(100)
arr.shape = 10,10
arr[ access(arr.shape, 45, 87) ]
# 57
arr[access(arr.shape, 45, 87)] = 100
In [18]: arr
# array([[ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
# [ 10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
# [ 20, 21, 22, 23, 24, 25, 26, 27, 28, 29],
# [ 30, 31, 32, 33, 34, 35, 36, 37, 38, 39],
# [ 40, 41, 42, 43, 44, 45, 46, 47, 48, 49],
# [ 50, 51, 52, 53, 54, 55, 56, **100**, 58, 59],
# [ 60, 61, 62, 63, 64, 65, 66, 67, 68, 69],
# [ 70, 71, 72, 73, 74, 75, 76, 77, 78, 79],
# [ 80, 81, 82, 83, 84, 85, 86, 87, 88, 89],
# [ 90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
</code></pre>
<p><strong>Edit - Generic nD version</strong></p>
<pre><code>def access(shape, *args):
if len(shape) != len(args):
        error = 'Inconsistent number of dimensions: {} & number of indices: {} in coords.'
raise IndexError( error.format(len(shape), len(args)))
res = []
for limit, ix in zip(shape, args):
res.append(ix % limit)
return tuple(res)
</code></pre>
<p>Usage/Test</p>
<pre><code>a = np.arange(24)
a.shape = 2,3,4
a[access(a.shape, 5, 6, 7)]
# 15
a[access(a.shape, 5,6,7) ] = 100
a
# array([[[ 0, 1, 2, 3],
# [ 4, 5, 6, 7],
# [ 8, 9, 10, 11]],
# [[ 12, 13, 14, 100],
# [ 16, 17, 18, 19],
# [ 20, 21, 22, 23]]])
</code></pre>
|
python|numpy|numpy-ndarray
| 1
|
6,744
| 70,139,937
|
How to randomly fill X of rows in a pandas dataframe?
|
<p>How to randomly fill the rows of a dataframe by setting a number? For example:</p>
<p>Given a pandas dataframe with 10 elements:</p>
<pre><code>col1
a
b
c
d
e
f
g
h
i
j
</code></pre>
<p>How to fill randomly with <code>1</code> and the rest with <code>0</code> in the rows of another column. For example, I would like to fill four rows with <code>1</code> and the rest six <code>0</code>:</p>
<pre><code>col1 col2
a 1
b 0
c 1
d 1
e 0
f 1
g 0
h 0
i 0
j 0
</code></pre>
|
<p>This should do the trick. For each row, set the col2 to a random int between 0 and 1</p>
<pre><code>df["col2"] = df.apply(lambda x: randint(0,1), axis=1)
</code></pre>
<p>If you need exactly n rows set to 1 and the rest left at 0, you can try this:</p>
<pre><code>n = 4
df["col2"] = 0
df_to_update = df.sample(n)
df_to_update["col2"] = 1
df.update(df_to_update)
</code></pre>
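<p>A vectorized alternative, a sketch assuming <code>numpy</code> is imported as <code>np</code>, is to shuffle a prebuilt vector of n ones and len(df) - n zeros:</p>
<pre><code>import numpy as np

n = 4
df["col2"] = np.random.permutation([1] * n + [0] * (len(df) - n))
</code></pre>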
|
python|pandas|dataframe
| 1
|
6,745
| 70,187,803
|
Compute mean value of rows that has the same column value in Pandas
|
<p>I'm trying to combine three pandas DataFrames together</p>
<p>One of them (called <code>major</code>) has a column <code>category</code> where each row has a unique label :</p>
<pre class="lang-py prettyprint-override"><code>major_df = pd.DataFrame(np.random.randint(0, 100, size=(3, 2)), columns=list("AB"))
major_df["category"] = pd.Series(["cat_A", "cat_B", "cat_C"])
</code></pre>
<pre><code> A B category
0 90 17 cat_A
1 36 81 cat_B
2 90 67 cat_C
</code></pre>
<p>Two other dfs (called <code>minor</code>) contain multiple rows and have their own unique column names. Each df has a column <code>category</code> where each row has a value that is present in the major df category column:</p>
<pre class="lang-py prettyprint-override"><code>minor_dfs = {}
for k, cols in zip(("1st", "2nd"), ("CD", "EF")):
minor_dfs[k] = pd.DataFrame(np.random.randint(0, 100, size=(8, 2)), columns=list(cols))
minor_dfs[k]["category"] = np.random.choice(["cat_A", "cat_B", "cat_C"], 8)
</code></pre>
<p>Here is an example of one of those minor dfs. The only difference between the two is that the first minor df has the columns <code>C</code> and <code>D</code>, whereas the second has columns <code>E</code> and <code>F</code>.</p>
<pre><code> C D category
0 71 44 cat_C
1 5 88 cat_C
2 8 78 cat_C
3 31 27 cat_C
4 42 48 cat_B
5 18 18 cat_B
6 84 23 cat_A
7 94 23 cat_A
</code></pre>
<hr />
<p>So, my goal is to compute the mean of the values in minor dfs based on the category column, so that at the end, I have the following dfs :</p>
<pre><code> C D
cat_A 89.00 23.00
cat_B 30.00 33.00
cat_C 28.75 59.25
</code></pre>
<p>where each column contains the mean of the values that are in each category.</p>
<hr />
<p>For that, I made the following code, where we create empty DataFrames with the column values of the minor dfs and indices from the different values of categories. I then fill this dataframe using a for loop where I iterate over every value of the index.</p>
<pre class="lang-py prettyprint-override"><code>copy_dfs = {}
for k, min_df in minor_dfs.items():
# Get columns from minor df
# Get index from category of major df
col_names = min_df.columns.values
ind_values = major_df.category.values
# Create a df with columns and indices and set values to np.nan
copy_df = pd.DataFrame(np.nan, index=ind_values, columns=col_names)
copy_df = copy_df.drop("category", axis=1)
# For each category in the index of the dataframe
for maj_category in copy_df.index:
# Select rows in minor df where category is the same as major df category
minor_rows = min_df[min_df.category == maj_category]
minor_rows = minor_rows.drop("category", axis=1)
# Compute the mean values (by column) of the rows that were selected
# Add the mean values into copy_df, where the index corresponds to major df category
copy_df.loc[maj_category] = minor_rows.mean()
# Store into dict
copy_dfs[k] = copy_df
</code></pre>
<p>Yet, I think this code could be optimized using vectorized operations, especially the part where I iterate over each row. So I was wondering if there is an easier and cleverer way to accomplish what I'm trying to do?</p>
|
<p>This?</p>
<pre><code>import pandas as pd
df = pd.read_excel('test.xlsx')
df1 = df.groupby(['category']).mean()
print(df)
print(df1)
</code></pre>
<p>output:</p>
<pre><code> C D category
0 71 44 cat_C
1 5 88 cat_C
2 8 78 cat_C
3 31 27 cat_C
4 42 48 cat_B
5 18 18 cat_B
6 84 23 cat_A
7 94 23 cat_A
C D
category
cat_A 89.00 23.00
cat_B 30.00 33.00
cat_C 28.75 59.25
</code></pre>
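<p>Applied to the dictionary of minor dfs from the question, a sketch using the question's own variable names, this one-liner per df replaces the whole inner loop:</p>
<pre><code>copy_dfs = {k: v.groupby('category').mean() for k, v in minor_dfs.items()}
</code></pre>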
|
python|pandas|dataframe|loops|vectorization
| 3
|
6,746
| 70,240,197
|
Cleaning spikes in time series data using neighbouring data points
|
<p>I am trying to clean spikes in data in time series data in Pandas dataframe.</p>
<pre><code>value = 5000
for index, row in gauteng_df.iterrows():
if index == gauteng_df.shape[0]-1:
break
upper, lower = row['Admissions to Date'] + value, row['Admissions to Date'] - value
a = gauteng_df.iloc[index+1]['Admissions to Date']
if a > upper or a < lower:
a = (gauteng_df.iloc[index-1]['Admissions to Date'] + gauteng_df.iloc[index+1]['Admissions to Date'])/2
gauteng_df.iloc[index]['Admissions to Date'] = a
</code></pre>
<p>I tried to reference the subsequent data point: if the subsequent data point falls outside of the interval around the current one (i.e. point +- value), the current data point is replaced by the average of the previous and the next data points. Unfortunately, when I plot the new graph, no changes are reflected and the spikes are still there.</p>
<p>I would appreciate any help with this! Also, <code>df.iterrows()</code> is probably not the most efficient method, so suggestions for a better way to replace the spike values are welcome.</p>
|
<p>Here is an alternative approach that might save you the trouble of iterating over DataFrame values: <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html" rel="nofollow noreferrer"><code>scipy.signal.find_peaks</code></a>.</p>
<pre><code>import pandas as pd
import numpy as np
from scipy.signal import find_peaks
# Example data with a peak and a valley
gauteng_df = pd.DataFrame({'Admissions to Date':
[8000, 4500, 12000, 5500,
3000, 7500, 1000, 8500]
})
# Peak detection threshold
value = 5000
# `prominence` sets minimum height above surrounding
# signal at which a given value is considered a peak
peak_idx = find_peaks(gauteng_df['Admissions to Date'], prominence=value)[0]
# To detect valleys deeper than `value`,
# run find_peaks on negative of data
valley_idx = find_peaks(-gauteng_df['Admissions to Date'], prominence=value)[0]
# Combine indexes of peaks and valleys into a single array
idx = np.concatenate((peak_idx, valley_idx))
# Build an indicator column of peaks and valleys, or outliers
gauteng_df['outlier'] = False
gauteng_df.loc[idx, 'outlier'] = True
# Replace each outlier value with NaN
gauteng_df.loc[gauteng_df['outlier'], 'Admissions to Date'] = np.nan
# Interpolate over NaNs just created with default linear method
gauteng_df['Interpolated'] = (gauteng_df['Admissions to Date']
.interpolate()
.astype(int))
# Result
print(gauteng_df)
Admissions to Date outlier Interpolated
0 8000.0 False 8000
1 4500.0 False 4500
2 NaN True 5000
3 5500.0 False 5500
4 3000.0 False 3000
5 7500.0 False 7500
6 NaN True 8000
7 8500.0 False 8500
</code></pre>
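<p>As an aside, the likely reason the original loop appears to change nothing is chained assignment: <code>gauteng_df.iloc[index]['Admissions to Date'] = a</code> writes to a temporary copy returned by <code>iloc</code>, not to the DataFrame itself. A single-step indexer, <code>gauteng_df.loc[index, 'Admissions to Date'] = a</code>, would modify the frame in place.</p>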
|
python|pandas|dataframe
| 1
|
6,747
| 56,365,415
|
How can I import several Excel sheets in a multi-index dataframe in Python?
|
<p>I am trying to import an Excel file, with several sheets containing the same structure of two-dimensional arrays, into a multi-indexed DataFrame in Python.</p>
<p>Assume each sheet contains an array (A,B)x(a,b). Basically I would like to have something like this </p>
<pre><code> Sheet1 | Sheet2 | Sheet3
a | b | a | b | a | b
A
B
</code></pre>
<p>I have tried to use a for loop.</p>
<pre class="lang-py prettyprint-override"><code>df={}
for i in Sheets:
df[i] = pd.read_excel (r'file.xlsx', sheet_name = [i], header=0, index_col=0)
</code></pre>
<p>I would expect df to be such that, if I call </p>
<pre class="lang-py prettyprint-override"><code>df['Sheet1']
</code></pre>
<p>I can retrieve one of the arrays, and this actually works fine. The problem comes up if I try to recall</p>
<pre class="lang-py prettyprint-override"><code>df['Sheet1']['a']
</code></pre>
<p>to retrieve the first column of the first sheet. However, I get the following error message </p>
<pre class="lang-py prettyprint-override"><code>KeyError: a
</code></pre>
<p>and I am stuck here.</p>
|
<h3><code>sheet_name=None</code> in <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">pd.read_excel</a></h3>
<p>Will produce a dictionary of all sheets. Pass that to <code>pd.concat</code> with <code>axis=1</code>:</p>
<pre><code>pd.concat(pd.read_excel('Book1.xlsx', None, index_col=0), axis=1)
Sheet1 Sheet2 Sheet3
a b a b a b
A 1 2 1 2 1 2
B 3 4 3 4 3 4
</code></pre>
<p>You can also limit the sheets by passing a list of names</p>
<pre><code>pd.concat(pd.read_excel('Book1.xlsx', ['Sheet1', 'Sheet2', 'Sheet3'], index_col=0), axis=1)
</code></pre>
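<p>Incidentally, the <code>KeyError</code> in the original loop most likely comes from passing <code>sheet_name=[i]</code> (a list): <code>read_excel</code> then returns a dict keyed by sheet name, so <code>df['Sheet1']</code> is itself a one-entry dict and <code>df['Sheet1']['a']</code> looks up <code>'a'</code> as a sheet name rather than a column. Passing <code>sheet_name=i</code> (a plain string) would return a DataFrame instead.</p>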
|
python|arrays|pandas|dataframe
| 3
|
6,748
| 56,137,777
|
pandas plot x axis labels overlapping
|
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
from PIL import Image
import numpy as np
#code here
df = pd.DataFrame(items) #items is a list of dictionaries
counts = df.groupby('Merchant')['Merchant 1'].count().sort_values(ascending=False)
counts2 = df.groupby('Merchant 2')['Seller'].count().sort_values(ascending=False)
total = counts.add(counts2, fill_value=0)
plt.ylabel('Number')
plt.xlabel("Merchant")
plt.title('Merchants Overview',color='b')
total.plot(kind='bar')
plt.xticks(rotation='horizontal')
plt.savefig("ex")
</code></pre>
<p>This produces the correct bars, but the text is overlapping on the x-axis.</p>
<p><img src="https://i.stack.imgur.com/kZoP7.png" alt="plot output"></p>
<p>Another quick question: how can I sort the bars from highest to lowest?</p>
<p>I checked this <a href="https://stackoverflow.com/questions/48932345/pandas-plot-hide-some-x-axis-labels">Pandas plot hide some x-axis labels</a>, and also <a href="https://stackoverflow.com/questions/29104081/x-axis-label-in-plot-overlaps">x axis label in plot overlaps</a>.
This is my first question, please be nice.</p>
|
<p>Try to change:</p>
<pre><code>total.plot(kind='bar')
plt.xticks(rotation='horizontal')
</code></pre>
<p>to</p>
<pre><code>total.sort_values(ascending=False).plot(kind='bar')
plt.xticks(rotation=90)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/otbiS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/otbiS.png" alt="enter image description here"></a></p>
<p>Or:</p>
<pre><code>counts.sort_values(ascending=False).plot(kind='bar', color='darkblue')
plt.xticks(rotation=25)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/FYoqY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FYoqY.png" alt="enter image description here"></a></p>
<p>If you really insist on horizontal label, consider horizontal bar:</p>
<pre><code>counts.sort_values(ascending=False).plot(kind='barh', color='darkblue')
# plt.xticks(rotation=25)
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/ODroV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ODroV.png" alt="enter image description here"></a></p>
|
python|pandas
| 6
|
6,749
| 55,619,581
|
Getting total memory and cpu usage for one python instance
|
<p>I'm using keras to make and test different types of neural nets and need data to compare them. I need data on the CPU and memory used during the training and testing. This is in Python, and as I looked around I found a lot of suggestions for psutil. However, everything I see seems to grab the current usage. </p>
<p>What is current usage? The amount of memory used at that specific moment? How do I use it to get the total CPU and memory used by the entire program, or at least by the portion of the code where the NN is training and testing? Thanks for any help!</p>
|
<p>psutil is a good recommendation to collect that type of information. If you incorporate this code into your existing keras code, you can collect information about the cpu usage of your process at the time the cpu_times() method is called</p>
<pre><code>import psutil
process = psutil.Process()
print(process.cpu_times())
</code></pre>
<p>The meaning of the value returned by cpu_times() is explained <a href="https://stackoverflow.com/questions/556405/what-do-real-user-and-sys-mean-in-the-output-of-time1">here</a>. It is cumulative, so if you want to know how much CPU time your keras code used altogether, just run it before you exit the python script.</p>
<p>To get the memory usage information for your process, at the particular time you make the call to memory_info() you can run this on the same <code>process</code> object we declared before</p>
<pre><code>print(process.memory_info())
</code></pre>
<p>The exact meaning of the CPU and memory results depends on what platform you're using. The memory info structure is explained <a href="https://psutil.readthedocs.io/en/latest/#psutil.Process.memory_info" rel="nofollow noreferrer">here</a>.</p>
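<p>Memory usage, unlike CPU time, is not cumulative, so a single <code>memory_info()</code> call only gives a snapshot. If you want the peak memory of the whole run, one option on Unix-like systems (this is an aside, using the standard-library <code>resource</code> module rather than psutil) is:</p>
<pre><code>import resource

# Peak resident set size of this process so far:
# kilobytes on Linux, bytes on macOS.
peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(peak)
</code></pre>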
<p>A more comprehensive example shows how you could use the <a href="https://apscheduler.readthedocs.io/en/v3.6.0/" rel="nofollow noreferrer">Advanced Python Scheduler</a> to take cpu and memory measurements in the background as you run your keras training</p>
<pre><code>import psutil
import time
import os
from apscheduler.schedulers.background import BackgroundScheduler
process = psutil.Process()
def get_info():
print(process.cpu_times(), process.memory_info())
if __name__ == '__main__':
scheduler = BackgroundScheduler()
scheduler.add_job(get_info, 'interval', seconds=3)
scheduler.start()
# run the code you want to measure here
# replace this nonsense loop
now = time.time()
finish = now + 60
while time.time() < finish:
print("Some progress message: {}".format(time.time()))
time.sleep(10)
</code></pre>
|
python|tensorflow|keras|psutil
| 1
|
6,750
| 55,923,743
|
How to make to_sql method from Pandas DataFrame work with Jaydebeapi?
|
<p>I tried to write a pandas dataframe into a HIVE table on a remote server using the <code>to_sql</code> method. Currently I have the Jaydebeapi connection object created, and I'm able to use the <code>read_sql</code> method from pandas with this connection object to query data from that table. However, when I try to write back, the code generates errors. Could you please help me understand this issue?</p>
<p>Below is the code sample I tried:</p>
<pre><code>import pandas as pd
import jaydebeapi
conn = jaydebeapi.connect("org.apache.hive.jdbc.HiveDriver","jdbc:hive2://server:port/;transportMode=http;httpPath=gateway/default/hive;ssl=true", ["user", "password"])
dff = pd.DataFrame([[1,2],[4,7]], columns=['A','B'])
dff.to_sql(name='table', con=conn, if_exists='append')
</code></pre>
<p>And the error message I got is:</p>
<pre><code>Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 501, in execute
is_rs = self._prep.execute()
jpype._jexception.org.apache.hive.service.cli.HiveSQLExceptionPyRaisable: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:59 cannot recognize input near ''table'' ';' '<EOF>' in expression specification
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1378, in execute
cur.execute(*args)
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 503, in execute
_handle_sql_exception()
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 156, in _handle_sql_exception_jpype
reraise(exc_type, exc_info[1], exc_info[2])
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 57, in reraise
raise value.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 501, in execute
is_rs = self._prep.execute()
jaydebeapi.DatabaseError: org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:59 cannot recognize input near ''table'' ';' '<EOF>' in expression specification
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 417, in rollback
self.jconn.rollback()
jpype._jexception.java.sql.SQLExceptionPyRaisable: java.sql.SQLException: Method not supported
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1382, in execute
self.con.rollback()
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 419, in rollback
_handle_sql_exception()
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 156, in _handle_sql_exception_jpype
reraise(exc_type, exc_info[1], exc_info[2])
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 57, in reraise
raise value.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 417, in rollback
self.jconn.rollback()
jaydebeapi.DatabaseError: java.sql.SQLException: Method not supported
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 3267, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-19-b63282bc3842>", line 1, in <module>
dff.to_sql(name='table', con=conn, if_exists='append')
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py", line 2130, in to_sql
dtype=dtype)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 450, in to_sql
chunksize=chunksize, dtype=dtype)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1480, in to_sql
table.create()
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 561, in create
if self.exists():
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 549, in exists
return self.pd_sql.has_table(self.name, self.schema)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1492, in has_table
return len(self.execute(query, [name, ]).fetchall()) > 0
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1386, in execute
raise_with_traceback(ex)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\compat\__init__.py", line 404, in raise_with_traceback
raise exc.with_traceback(traceback)
File "C:\ProgramData\Anaconda3\lib\site-packages\pandas\io\sql.py", line 1382, in execute
self.con.rollback()
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 419, in rollback
_handle_sql_exception()
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 156, in _handle_sql_exception_jpype
reraise(exc_type, exc_info[1], exc_info[2])
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 57, in reraise
raise value.with_traceback(tb)
File "C:\ProgramData\Anaconda3\lib\site-packages\jaydebeapi\__init__.py", line 417, in rollback
self.jconn.rollback()
pandas.io.sql.DatabaseError: Execution failed on sql: SELECT name FROM sqlite_master WHERE type='table' AND name=?;
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: ParseException line 1:59 cannot recognize input near ''table'' ';' '<EOF>' in expression specification
unable to rollback
</code></pre>
|
<p>You cannot do it that way: when <code>to_sql</code> is given a plain DBAPI connection, pandas falls back to SQLite-flavored SQL (note the <code>SELECT name FROM sqlite_master</code> query at the bottom of the traceback), which Hive cannot parse. You need to create the table manually (or with an SQL script executed on a cursor, not on the connection) and then insert the well-formatted rows yourself.
Sorry about that.</p>
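<p>A minimal sketch of that manual path; the table and column names here are just illustrative, and whether parameterized <code>INSERT ... VALUES</code> works depends on your Hive version:</p>
<pre><code>cursor = conn.cursor()
# create the target table once, by hand
cursor.execute("CREATE TABLE IF NOT EXISTS my_table (A INT, B INT)")
# insert the DataFrame rows through the cursor
rows = [tuple(r) for r in dff.itertuples(index=False)]
cursor.executemany("INSERT INTO my_table (A, B) VALUES (?, ?)", rows)
cursor.close()
</code></pre>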
|
python|pandas|python-3.7|jaydebeapi
| 0
|
6,751
| 55,912,952
|
PyTorch does not converge when approximating square function with linear model
|
<p>I'm trying to learn some PyTorch and am referencing this discussion <a href="https://discuss.pytorch.org/t/minimal-working-example-of-optim-sgd/11623" rel="nofollow noreferrer">here</a></p>
<p>The author provides a minimum working piece of code that illustrates how you can use PyTorch to solve for an unknown linear function that has been polluted with random noise.</p>
<p>This code runs fine for me.</p>
<p>However, when I change the function such that I want t = X^2, the parameters do not seem to converge.</p>
<pre><code>import torch
import torch.nn as nn
import torch.optim as optim
from torch.autograd import Variable
# Let's make some data for a linear regression.
A = 3.1415926
b = 2.7189351
error = 0.1
N = 100 # number of data points
# Data
X = Variable(torch.randn(N, 1))
# (noisy) Target values that we want to learn.
t = X * X + Variable(torch.randn(N, 1) * error)
# Creating a model, making the optimizer, defining loss
model = nn.Linear(1, 1)
optimizer = optim.SGD(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()
# Run training
niter = 50
for _ in range(0, niter):
optimizer.zero_grad()
predictions = model(X)
loss = loss_fn(predictions, t)
loss.backward()
optimizer.step()
print("-" * 50)
print("error = {}".format(loss.data[0]))
print("learned A = {}".format(list(model.parameters())[0].data[0, 0]))
print("learned b = {}".format(list(model.parameters())[1].data[0]))
</code></pre>
<p>When I execute this code, the new A and b parameters are seemingly random, so it does not converge. I think this should converge, because you can approximate any function with a slope-and-offset function. My theory is that I'm using PyTorch incorrectly. </p>
<p>Can anyone identify a problem with my <code>t = X * X + Variable(torch.randn(N, 1) * error)</code> line of code?</p>
|
<p>You cannot fit a 2nd-degree polynomial with a linear function. Since your samples are drawn symmetrically around zero, the best a linear fit can do is roughly a constant, so the learned parameters look random.<br>
What you can do is try and have two inputs, <code>x</code> and <code>x^2</code> and fit from them:</p>
<pre><code>model = nn.Linear(2, 1) # you have 2 inputs now
X_input = torch.cat((X, X**2), dim=1) # have 2 inputs per entry
# ...
predictions = model(X_input) # 2 inputs -> 1 output
loss = loss_fn(predictions, t)
# ...
# learning t = c*x^2 + a*x + b
print("learned a = {}".format(list(model.parameters())[0].data[0, 0]))
print("learned c = {}".format(list(model.parameters())[0].data[0, 1]))
print("learned b = {}".format(list(model.parameters())[1].data[0]))
</code></pre>
|
python|machine-learning|neural-network|regression|pytorch
| 3
|
6,752
| 55,964,143
|
How to define multiple filters in Tensorflow
|
<p>I have a small <strong>4*4</strong> matrix that I want to filter with two different filters in TensorFlow (1.8.0). I have an example with one filter (<code>my_filter</code>),
and I want to change the filter to </p>
<pre><code>my_filter = tf.constant([0.2,0.5], shape=[2, 2, 3, 1])
</code></pre>
<p>One should be <strong>2*2</strong> with all values <code>0.25</code>, the other <strong>2*2</strong> with all values <code>0.5</code>. But how do I set the values? </p>
<p>This is my code:</p>
<pre><code>import tensorflow as tf
import numpy as np
tf.reset_default_graph()
x_shape = [1, 4, 4, 1]
x_val = np.ones(shape=x_shape)
x_val[0,1,1,0]=5
print(x_val)
x_data = tf.placeholder(tf.float32, shape=x_shape)
my_filter = tf.constant(0.25, shape=[2, 2, 1, 1])
my_strides = [1, 2, 2, 1]
mov_avg_layer= tf.nn.conv2d(x_data, my_filter, my_strides,
padding='SAME', name='Moving_Avg_Window')
# Execute the operations
with tf.Session() as sess:
#print(x_data.eval())
result =sess.run(mov_avg_layer,feed_dict={x_data: x_val})
print("Filter: " , result)
print("Filter: " , result.shape)
sess.close()
</code></pre>
|
<p><strong>First option</strong></p>
<p>The filter can also be defined as a placeholder</p>
<pre><code>filter = tf.placeholder(filter_type, filter_shape)
...
with tf.Session() as sess:
for i in range (number_filters) :
result =sess.run(mov_avg_layer,feed_dict={x_data: x_val, filter: filter_val})
</code></pre>
<p><strong>Second option</strong></p>
<p>define a second filter in the graph</p>
<pre><code>my_filter = tf.constant(0.25, shape=[2, 2, 1, 1])
my_filter2 = tf.constant(0.5, shape=[2, 2, 1, 1])
mov_avg_layer1 = tf.nn.conv2d(x_data, my_filter, my_strides,
                     padding='SAME', name='Moving_Avg_Window1')
mov_avg_layer2 = tf.nn.conv2d(x_data, my_filter2, my_strides,
                     padding='SAME', name='Moving_Avg_Window2')
...
with tf.Session() as sess:
result1, result2 =sess.run([mov_avg_layer1, mov_avg_layer2],feed_dict={x_data: x_val})
sess.close()
</code></pre>
|
tensorflow
| 1
|
6,753
| 64,907,186
|
Is there any way to list down datetime range between 2 datatime in python?
|
<p>I have 2 sets of datetime data in different columns, and I would like to list the range between the two dates and times of the same row.</p>
<p>For example, I tried the below, but the output does not show the range of datetimes:</p>
<pre><code>import pandas as pd
a = '2020-11-17 13:35:18'
b = '2020-11-17 13:36:09'
tt= pd.date_range(start=a,end=b)
print(tt)
OUTPUT:
DatetimeIndex(['2020-11-17 13:35:18'], dtype='datetime64[ns]', freq='D')
</code></pre>
<p>expected output would be:</p>
<pre><code>['2020-11-17 13:35:18','2020-11-17 13:35:19','2020-11-17 13:35:20','2020-11-17 13:35:21','2020-11-17 13:35:22','2020-11-17 13:35:23','2020-11-17 13:35:24','2020-11-17 13:35:25',
'2020-11-17 13:35:26','2020-11-17 13:35:27','2020-11-17 13:35:28','2020-11-17 13:35:29','2020-11-17 13:35:30','2020-11-17 13:35:31','2020-11-17 13:35:32','2020-11-17 13:35:33',
'2020-11-17 13:35:34','2020-11-17 13:35:35','2020-11-17 13:35:36','2020-11-17 13:35:37','2020-11-17 13:35:38','2020-11-17 13:35:39','2020-11-17 13:35:40','2020-11-17 13:35:41',
'2020-11-17 13:35:42','2020-11-17 13:35:43','2020-11-17 13:35:44','2020-11-17 13:35:45','2020-11-17 13:35:46','2020-11-17 13:35:47','2020-11-17 13:35:48','2020-11-17 13:35:49',
'2020-11-17 13:35:50','2020-11-17 13:35:51','2020-11-17 13:35:52','2020-11-17 13:35:53','2020-11-17 13:35:54','2020-11-17 13:35:55','2020-11-17 13:35:56','2020-11-17 13:35:57',
'2020-11-17 13:35:58','2020-11-17 13:35:59','2020-11-17 13:36:00','2020-11-17 13:36:01','2020-11-17 13:36:02','2020-11-17 13:36:03','2020-11-17 13:36:04','2020-11-17 13:36:05',
'2020-11-17 13:36:06','2020-11-17 13:36:07','2020-11-17 13:36:08','2020-11-17 13:36:09']
</code></pre>
|
<p>The reason for your output is that if the <code>freq</code> parameter is not set, the default (daily) frequency is used.</p>
<p>So set <code>S</code> for a one-second frequency:</p>
<pre><code>tt = pd.date_range(start=a,end=b, freq='S')
print(tt)
DatetimeIndex(['2020-11-17 13:35:18', '2020-11-17 13:35:19',
'2020-11-17 13:35:20', '2020-11-17 13:35:21',
'2020-11-17 13:35:22', '2020-11-17 13:35:23',
'2020-11-17 13:35:24', '2020-11-17 13:35:25',
'2020-11-17 13:35:26', '2020-11-17 13:35:27',
'2020-11-17 13:35:28', '2020-11-17 13:35:29',
'2020-11-17 13:35:30', '2020-11-17 13:35:31',
'2020-11-17 13:35:32', '2020-11-17 13:35:33',
'2020-11-17 13:35:34', '2020-11-17 13:35:35',
'2020-11-17 13:35:36', '2020-11-17 13:35:37',
'2020-11-17 13:35:38', '2020-11-17 13:35:39',
'2020-11-17 13:35:40', '2020-11-17 13:35:41',
'2020-11-17 13:35:42', '2020-11-17 13:35:43',
'2020-11-17 13:35:44', '2020-11-17 13:35:45',
'2020-11-17 13:35:46', '2020-11-17 13:35:47',
'2020-11-17 13:35:48', '2020-11-17 13:35:49',
'2020-11-17 13:35:50', '2020-11-17 13:35:51',
'2020-11-17 13:35:52', '2020-11-17 13:35:53',
'2020-11-17 13:35:54', '2020-11-17 13:35:55',
'2020-11-17 13:35:56', '2020-11-17 13:35:57',
'2020-11-17 13:35:58', '2020-11-17 13:35:59',
'2020-11-17 13:36:00', '2020-11-17 13:36:01',
'2020-11-17 13:36:02', '2020-11-17 13:36:03',
'2020-11-17 13:36:04', '2020-11-17 13:36:05',
'2020-11-17 13:36:06', '2020-11-17 13:36:07',
'2020-11-17 13:36:08', '2020-11-17 13:36:09'],
dtype='datetime64[ns]', freq='S')
</code></pre>
|
python|pandas|datetime
| 3
|
6,754
| 44,205,471
|
Tensorflow tfprof LSTMCell
|
<p>I'm using tfprof in order to get the number of flops necessary for the model's forward path.
My model is a 3-layer LSTM with a fully connected layer afterwards.
I've observed that the number of computations grows linearly for the fully connected layer, while it doesn't change for the LSTM layers. How could that be possible? </p>
<p>tfprof Report for 1 timestamp forward path.</p>
<pre><code>==================Model Analysis Report======================
_TFProfRoot (0/2.71m flops)
rnn/while/multi_rnn_cell/cell_1/lstm_cell/lstm_cell_1/MatMul (1.05m/1.05m flops)
rnn/while/multi_rnn_cell/cell_2/lstm_cell/lstm_cell_1/MatMul (1.05m/1.05m flops)
rnn/while/multi_rnn_cell/cell_0/lstm_cell/lstm_cell_1/MatMul (606.21k/606.21k flops)
fc_layer/MatMul (1.54k/1.54k flops)
rnn/while/multi_rnn_cell/cell_0/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
rnn/while/multi_rnn_cell/cell_1/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
rnn/while/multi_rnn_cell/cell_2/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
fc_layer/BiasAdd (3/3 flops)
</code></pre>
<p>tfprof Report for 2 timestamps forward path.</p>
<pre><code>==================Model Analysis Report======================
_TFProfRoot (0/2.71m flops)
rnn/while/multi_rnn_cell/cell_1/lstm_cell/lstm_cell_1/MatMul (1.05m/1.05m flops)
rnn/while/multi_rnn_cell/cell_2/lstm_cell/lstm_cell_1/MatMul (1.05m/1.05m flops)
rnn/while/multi_rnn_cell/cell_0/lstm_cell/lstm_cell_1/MatMul (606.21k/606.21k flops)
fc_layer/MatMul (3.07k/3.07k flops)
rnn/while/multi_rnn_cell/cell_0/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
rnn/while/multi_rnn_cell/cell_1/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
rnn/while/multi_rnn_cell/cell_2/lstm_cell/lstm_cell_1/BiasAdd (1.02k/1.02k flops)
fc_layer/BiasAdd (6/6 flops)
</code></pre>
|
<p>tfprof does static analysis of your graph and calculates the float operations for each graph node.</p>
<p>I assume you are using dynamic_rnn or something similar that has tf.while_loop. In that case, a graph node appears in the graph once
but is actually run multiple times at run time.</p>
<p>In this case, tfprof has no way to statically figure out how many
steps (timestamps, in your words) will be run. Hence, it only counts
the float operations once.</p>
<p>A workaround for now is to multiply by the number of timesteps yourself.</p>
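<p>For example, with the reports above, a 2-timestep forward path actually costs roughly 2 * (1.05m + 1.05m + 606.21k) flops in the LSTM layers, even though tfprof keeps reporting the single-step counts for them.</p>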
|
tensorflow|lstm
| 2
|
6,755
| 44,321,487
|
Pandas: How to count Occurrences of Categorical features Grouped by ID
|
<p>Suppose I have this DataFrame <code>df</code></p>
<pre><code>My_ID My_CAT
1 A
2 B
3 C
1 A
1 B
2 D
</code></pre>
<p>I'd like to know how many times each distinct <code>My_CAT</code> value occurs for each distinct <code>My_ID</code>.</p>
<p>I need it in the form of a dense array like</p>
<pre><code>My_ID A B C D
1 2 1 0 0
2 0 1 0 1
3 0 0 1 0
</code></pre>
<p>I tried</p>
<pre><code>df.groupby(['My_ID','My_CAT']).count()
</code></pre>
<p>but although I see the data are grouped according to my needs, the occurrences are not counted the way I want. </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow noreferrer"><code>crosstab</code></a> (less typing, but slowest):</p>
<pre><code>df = pd.crosstab(df['My_ID'], df['My_CAT'])
print (df)
My_CAT A B C D
My_ID
1 2 1 0 0
2 0 1 0 1
3 0 0 1 0
</code></pre>
<p>Faster solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> + aggregate <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>df = df.groupby(['My_ID','My_CAT']).size().unstack(fill_value=0)
print (df)
My_CAT A B C D
My_ID
1 2 1 0 0
2 0 1 0 1
3 0 0 1 0
</code></pre>
<p>And last:</p>
<pre><code>df = df.reset_index().rename_axis(None, axis=1)
print (df)
My_ID A B C D
0 1 2 1 0 0
1 2 0 1 0 1
2 3 0 0 1 0
</code></pre>
<p>Notice:</p>
<p><a href="https://stackoverflow.com/questions/33346591/what-is-the-difference-between-size-and-count-in-pandas">What is the difference between size and count in pandas?</a></p>
<p><strong>Timings</strong> (bigger data):</p>
<pre><code>np.random.seed(123)
N = 100000
L = list('abcdefghijklmno')
df = pd.DataFrame({'My_CAT': np.random.choice(L, N),
'My_ID':np.random.randint(1000,size=N)})
print (df)
In [79]: %timeit pd.crosstab(df['My_ID'], df['My_CAT'])
10 loops, best of 3: 96.7 ms per loop
In [80]: %timeit df.groupby(['My_ID','My_CAT']).size().unstack(fill_value=0)
100 loops, best of 3: 14.2 ms per loop
In [81]: %timeit pd.get_dummies(df.My_CAT).groupby(df.My_ID).sum()
10 loops, best of 3: 25.5 ms per loop
In [82]: %timeit df.groupby('My_ID').My_CAT.value_counts().unstack(fill_value=0)
10 loops, best of 3: 25.4 ms per loop
In [136]: %timeit xtab_df(df, 'My_ID', 'My_CAT')
100 loops, best of 3: 4.23 ms per loop
In [137]: %timeit xtab(df, 'My_ID', 'My_CAT')
100 loops, best of 3: 4.61 ms per loop
</code></pre>
|
python|pandas|numpy
| 4
|
6,756
| 40,840,539
|
How to reshape numpy array of array into single row
|
<p>I have a numpy array as </p>
<pre><code>[[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]
...,
[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]
[0 0 0 ..., 0 0 0]]
</code></pre>
<p>I would like to have it as </p>
<pre><code>0
0
0
.
.
0
0
</code></pre>
<p>I know that we have to use the reshape function, but I am not able to figure out how to use it.</p>
<p>My attempt:</p>
<pre><code>np.reshape(new_arr, newshape=1)
</code></pre>
<p>Which gives an error </p>
<pre><code>ValueError: total size of new array must be unchanged
</code></pre>
<p>The <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html" rel="nofollow noreferrer">documentation</a> isn't very friendly</p>
|
<p>You can also have a look at <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.flatten.html" rel="nofollow noreferrer">numpy.ndarray.flatten</a>:</p>
<pre><code>a = np.array([[1,2], [3,4]])
a.flatten()
# array([1, 2, 3, 4])
</code></pre>
<p>The difference between <code>flatten</code> and <code>ravel</code> is that flatten will return a copy of the array whereas ravel will reference the original if possible. Thus, if you modify the array returned by ravel, it may also modify the entries in the original array.</p>
<p>It is usually safer to create a copy of the original array, although it will take more time since it has to allocate new memory to create it.</p>
<p>You can read more about the difference between these two options <a href="https://stackoverflow.com/questions/28930465/what-is-the-difference-between-flatten-and-ravel-functions-in-numpy">here</a>.</p>
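<p>For completeness, the <code>reshape</code> call the question attempted would be <code>np.reshape(new_arr, -1)</code>, or <code>new_arr.reshape(-1, 1)</code> for a single column: <code>-1</code> tells numpy to infer that dimension from the total size, which is why <code>newshape=1</code> raised the "total size of new array must be unchanged" error.</p>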
|
python|arrays|numpy
| 7
|
6,757
| 54,101,593
|
Conditional Batch Normalization in Keras
|
<p>I'm trying to implement Conditional Batch Normalization in Keras. I assumed that I will have to create a custom layer, hence, I extended from the <a href="https://github.com/keras-team/keras/blob/master/keras/layers/normalization.py" rel="nofollow noreferrer">Normalization</a> source code from Keras team. </p>
<p>The idea:
I will have 3 conditions, so I will need 3 different beta and gamma parameters to be initialized. Then, I just incorporated conditional statements where they are needed. Note that my condition changes randomly after every iteration, and I am trying to set the condition based on 3 global Keras variables, c1, c2, and c3.</p>
<p>Here is the code I have currently. It gives me error because of the conditional statements. Any idea how to improve or implement Conditional Batch Normalization in Keras:</p>
<p>UPDATED:</p>
<pre><code>from keras import regularizers, initializers, constraints
from keras.legacy import interfaces
import keras.backend as K
from keras.layers import Layer, Input, InputSpec
from keras.models import Model
from keras.optimizers import Adam
import tensorflow as tf
import numpy as np
global c1, c2, c3
c1 = K.variable([0])
c2 = K.variable([0])
c3 = K.variable([0])
class ConditionalBatchNormalization(Layer):
"""Conditional Batch normalization layer.
"""
@interfaces.legacy_batchnorm_support
def __init__(self,
axis=-1,
momentum=0.99,
epsilon=1e-3,
center=True,
scale=True,
beta_initializer='zeros',
gamma_initializer='ones',
moving_mean_initializer='zeros',
moving_variance_initializer='ones',
beta_regularizer=None,
gamma_regularizer=None,
beta_constraint=None,
gamma_constraint=None,
**kwargs):
super(ConditionalBatchNormalization, self).__init__(**kwargs)
self.axis = axis
self.momentum = momentum
self.epsilon = epsilon
self.center = center
self.scale = scale
self.beta_initializer = initializers.get(beta_initializer)
self.gamma_initializer = initializers.get(gamma_initializer)
self.moving_mean_initializer = initializers.get(moving_mean_initializer)
self.moving_variance_initializer = (
initializers.get(moving_variance_initializer))
self.beta_regularizer = regularizers.get(beta_regularizer)
self.gamma_regularizer = regularizers.get(gamma_regularizer)
self.beta_constraint = constraints.get(beta_constraint)
self.gamma_constraint = constraints.get(gamma_constraint)
def build(self, input_shape):
dim = input_shape[0][self.axis]
if dim is None:
raise ValueError('Axis ' + str(self.axis) + ' of '
'input tensor should have a defined dimension '
'but the layer received an input with shape ' +
str(input_shape[0]) + '.')
shape = (dim,)
if self.scale:
self.gamma1 = self.add_weight(shape=shape,
name='gamma',
initializer=self.gamma_initializer,
regularizer=self.gamma_regularizer,
constraint=self.gamma_constraint)
self.gamma2 = self.add_weight(shape=shape,
name='gamma',
initializer=self.gamma_initializer,
regularizer=self.gamma_regularizer,
constraint=self.gamma_constraint)
self.gamma3 = self.add_weight(shape=shape,
name='gamma',
initializer=self.gamma_initializer,
regularizer=self.gamma_regularizer,
constraint=self.gamma_constraint)
else:
self.gamma1 = None
self.gamma2 = None
self.gamma3 = None
if self.center:
self.beta1 = self.add_weight(shape=shape,
name='beta',
initializer=self.beta_initializer,
regularizer=self.beta_regularizer,
constraint=self.beta_constraint)
self.beta2 = self.add_weight(shape=shape,
name='beta',
initializer=self.beta_initializer,
regularizer=self.beta_regularizer,
constraint=self.beta_constraint)
self.beta3 = self.add_weight(shape=shape,
name='beta',
initializer=self.beta_initializer,
regularizer=self.beta_regularizer,
constraint=self.beta_constraint)
else:
self.beta1 = None
self.beta2 = None
self.beta3 = None
self.moving_mean = self.add_weight(
shape=shape,
name='moving_mean',
initializer=self.moving_mean_initializer,
trainable=False)
self.moving_variance = self.add_weight(
shape=shape,
name='moving_variance',
initializer=self.moving_variance_initializer,
trainable=False)
super(ConditionalBatchNormalization, self).build(input_shape)
def call(self, inputs, training=None):
input_shape = K.int_shape(inputs[0])
c1 = inputs[1][0]
c2 = inputs[2][0]
# Prepare broadcasting shape.
ndim = len(input_shape)
reduction_axes = list(range(len(input_shape)))
del reduction_axes[self.axis]
broadcast_shape = [1] * len(input_shape)
broadcast_shape[self.axis] = input_shape[self.axis]
# Determines whether broadcasting is needed.
needs_broadcasting = (sorted(reduction_axes) != list(range(ndim))[:-1])
def normalize_inference():
if needs_broadcasting:
# In this case we must explicitly broadcast all parameters.
broadcast_moving_mean = K.reshape(self.moving_mean,
broadcast_shape)
broadcast_moving_variance = K.reshape(self.moving_variance,
broadcast_shape)
if self.center:
broadcast_beta = \
tf.case({
c1: lambda: K.reshape(self.beta1,
broadcast_shape),
c2: lambda: K.reshape(self.beta2,
broadcast_shape)
},
default=lambda: K.reshape(self.beta3,
broadcast_shape)
)
else:
broadcast_beta = None
if self.scale:
broadcast_gamma = \
tf.case({
c1: lambda: K.reshape(self.gamma1,
broadcast_shape),
c2: lambda: K.reshape(self.gamma2,
broadcast_shape)
},
default=lambda: K.reshape(self.gamma3,
broadcast_shape)
)
else:
broadcast_gamma = None
return K.batch_normalization(
inputs[0],
broadcast_moving_mean,
broadcast_moving_variance,
broadcast_beta,
broadcast_gamma,
axis=self.axis,
epsilon=self.epsilon)
else:
out = \
tf.case({
c1: lambda: K.batch_normalization(
inputs[0],
self.moving_mean,
self.moving_variance,
self.beta1,
self.gamma1,
axis=self.axis,
epsilon=self.epsilon),
c2: lambda: K.batch_normalization(
inputs[0],
self.moving_mean,
self.moving_variance,
self.beta2,
self.gamma2,
axis=self.axis,
epsilon=self.epsilon)
},
default=lambda: K.batch_normalization(
inputs[0],
self.moving_mean,
self.moving_variance,
self.beta3,
self.gamma3,
axis=self.axis,
epsilon=self.epsilon)
)
return out
# If the learning phase is *static* and set to inference:
if training in {0, False}:
return normalize_inference()
# If the learning is either dynamic, or set to training:
normed_training, mean, variance = \
tf.case({
c1: lambda: K.normalize_batch_in_training(
inputs[0], self.gamma1, self.beta1, reduction_axes,
epsilon=self.epsilon),
c2: lambda: K.normalize_batch_in_training(
inputs[0], self.gamma2, self.beta2, reduction_axes,
epsilon=self.epsilon)
},
default=lambda: K.normalize_batch_in_training(
inputs[0], self.gamma3, self.beta3, reduction_axes,
epsilon=self.epsilon)
)
print(normed_training)
if K.backend() != 'cntk':
sample_size = K.prod([K.shape(inputs[0])[axis]
for axis in reduction_axes])
sample_size = K.cast(sample_size, dtype=K.dtype(inputs[0]))
if K.backend() == 'tensorflow' and sample_size.dtype != 'float32':
sample_size = K.cast(sample_size, dtype='float32')
# sample variance - unbiased estimator of population variance
variance *= sample_size / (sample_size - (1.0 + self.epsilon))
self.add_update([K.moving_average_update(self.moving_mean,
mean,
self.momentum),
K.moving_average_update(self.moving_variance,
variance,
self.momentum)],
inputs[0])
# Pick the normalized form corresponding to the training phase.
return K.in_train_phase(normed_training,
normalize_inference,
training=training)
def get_config(self):
config = {
'axis': self.axis,
'momentum': self.momentum,
'epsilon': self.epsilon,
'center': self.center,
'scale': self.scale,
'beta_initializer': initializers.serialize(self.beta_initializer),
'gamma_initializer': initializers.serialize(self.gamma_initializer),
'moving_mean_initializer':
initializers.serialize(self.moving_mean_initializer),
'moving_variance_initializer':
initializers.serialize(self.moving_variance_initializer),
'beta_regularizer': regularizers.serialize(self.beta_regularizer),
'gamma_regularizer': regularizers.serialize(self.gamma_regularizer),
'beta_constraint': constraints.serialize(self.beta_constraint),
'gamma_constraint': constraints.serialize(self.gamma_constraint)
}
base_config = super(ConditionalBatchNormalization, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
def compute_output_shape(self, input_shape):
return input_shape[0]
if __name__ == '__main__':
x = Input((10,))
c1 = Input(batch_shape=(1,), dtype=tf.bool)
c2 = Input(batch_shape=(1,), dtype=tf.bool)
h = ConditionalBatchNormalization()([x, c1, c2])
model = Model([x, c1, c2], h)
model.compile(optimizer=Adam(1e-4), loss='mse')
c1 = K.constant([False]*100, dtype=tf.bool)
c2 = K.constant([True]*100, dtype=tf.bool)
X = np.random.rand(100, 10)
Y = np.random.rand(100, 10)
model.train_on_batch(x=[X, c1, c2], y=Y)
c1 = K.constant([False]*100, dtype=tf.bool)
c2 = K.constant([True]*100, dtype=tf.bool)
model.train_on_batch(x=[X, c1, c2], y=Y)
</code></pre>
|
<p>I would use <a href="https://www.tensorflow.org/api_docs/python/tf/case" rel="nofollow noreferrer">tf.case</a> to express your conditional statements:
</p>
<pre class="lang-py prettyprint-override"><code>normed_training, mean, variance = \
tf.case({
c1: lambda: K.normalize_batch_in_training(
inputs, self.gamma1, self.beta1, reduction_axes,
epsilon=self.epsilon),
c2: lambda: K.normalize_batch_in_training(
inputs, self.gamma2, self.beta2, reduction_axes,
epsilon=self.epsilon)
},
default=lambda: K.normalize_batch_in_training(
inputs, self.gamma3, self.beta3, reduction_axes,
epsilon=self.epsilon)
)
</code></pre>
<p>Note also that <a href="https://www.tensorflow.org/api_docs/python/tf/case" rel="nofollow noreferrer">tf.case</a> requires the conditions <code>c1</code> and <code>c2</code> to be of type <a href="https://www.tensorflow.org/api_docs/python/tf/Tensor" rel="nofollow noreferrer">tf.Tensor</a>, so I defined them as follows:</p>
<pre class="lang-py prettyprint-override"><code>c1 = K.constant(False, dtype=tf.bool)
c2 = K.constant(False, dtype=tf.bool)
</code></pre>
|
python|tensorflow|machine-learning|keras|keras-layer
| 2
|
6,758
| 53,843,863
|
Pandas top N records in each group sorted by a column's value
|
<pre><code>import pandas as pd
d = {
'resource': [1,2,3,4,5,6,7],
'branch': ['a', 'b', 'c', 'a', 'a', 'c', 'b'],
'utilization': [0.7, 0.76, 0.9, 0.3, 0.55, 0.87, 0.71]
}
df = pd.DataFrame(data=d)
</code></pre>
<h3>I need to display the top 2 utilized resources by branches</h3>
<p><strong>Something like this:</strong></p>
<pre><code>df.groupby('branch')[['resource', 'utilization']].nlargest(2, 'utilization')
</code></pre>
<p>I tried the following:</p>
<pre><code>f = lambda x: x.sort_values('utilization', ascending=False)
df.groupby('branch', sort=False).apply(f).nlargest(3, 'utilization')
</code></pre>
<p>but it gives me top 3 across all the records when I need <strong>top N in each group</strong></p>
<pre><code> resource branch utilization
branch
c 2 3 c 0.90
5 6 c 0.87
b 1 2 b 0.76
</code></pre>
|
<p>Try using <code>sort_values</code> + <code>tail</code>:</p>
<pre><code>df.sort_values('utilization').groupby('branch').tail(2)
branch resource utilization
4 a 5 0.55
0 a 1 0.70
6 b 7 0.71
1 b 2 0.76
5 c 6 0.87
2 c 3 0.90
</code></pre>
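<p>Or, equivalently, sort descending once and take the first two rows per group (a sketch using the same columns):</p>
<pre><code>df.sort_values('utilization', ascending=False).groupby('branch').head(2)
</code></pre>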
|
pandas|pandas-groupby
| 5
|
6,759
| 54,043,734
|
How to split data set into subsets following some criterions?
|
<p>Though I use machine learning-related terminology, my question is 100% engineering topic and it has nothing to do with statistics and mathematics. Therefore I ask it in this forum instead of Cross Validated.</p>
<p>This is my sample data that I will use to comment my question:</p>
<pre><code>X = pd.DataFrame(columns=["F1","F2"],
data=[[23,0.8],
[11,5.35],
[24,19.18],
[15,10.25],
[10,11.30],
[55,44.85],
[15,33.88],
[12,45.30],
[14,22.20],
[15,15.80],
[83,0.8],
[51,5.35],
[34,30.28],
[35,15.25],
[60,13.30],
[75,44.80],
[35,30.77],
[62,40.33],
[64,23.40],
[14,11.80]])
y = pd.DataFrame(columns=["y"],
data=[[0],
[0],
[1],
[0],
[2],
[2],
[2],
[1],
[0],
[1],
[0],
[0],
[1],
[0],
[1],
[0],
[1],
[1],
[0],
[2]])
</code></pre>
<p>I should split data into training and testing sets. A classical way is to use <code>train_test_split</code> function of <code>sklearn</code>:</p>
<pre><code>X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.25)
</code></pre>
<p>But I want to specify % of records to be assigned to the training and testing sets. More details are explained below.</p>
<p>In my case I deal with a multi-class classification problem, in which <code>y</code> may take one of 3 different values: 0, 1, 2. The records with the value 2 are very rare (in my real data set, approx 3% of the whole dataset). Therefore this is an imbalanced classification problem.</p>
<p>Since this is an imbalanced classification problem, the records of the rare class are very important. Therefore I want to update <code>model_selection.train_test_split</code> as follows: I want to assign % of records per class for the training and testing sets. <strong>For example, <50%, 60%, 90%> would mean that 90% of the rare class's records are assigned to the training set.</strong></p>
<p>In my example, I would like to get, for instance, 3 records of <code>y</code> equal to <code>2</code> in the training set (<code>X_train</code>, <code>y_train</code>), and 1 record in the testing set. </p>
<p>I googled for similar questions but haven't found anything. </p>
<p>To solve this task, I shuffled the initial data frame:</p>
<pre><code>df = pd.concat([X, y], axis=1)
df = df.sample(frac=1).reset_index(drop=True)
</code></pre>
<p>However, I don't know how to proceed with the rest of tasks. Maybe there is some sklearn built-in function or some library that can do solve this problem?</p>
|
<p>There is an option called <code>stratify</code> in <code>train_test_split</code>. Also take a look at this <a href="https://stackoverflow.com/questions/29438265/stratified-train-test-split-in-scikit-learn">kind of similar question</a>.</p>
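<p>For the plain stratified case, a minimal sketch that keeps the class proportions equal in both splits (rather than the per-class percentages you describe):</p>
<pre><code>from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y)
</code></pre>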
<p>To accomplish the proportions that you need, you can use <code>np.random.choice</code> from numpy:</p>
<pre><code>import numpy as np
df = pd.concat([X,y], axis = 1)
#get index values for y = 0
i0 = np.random.choice(df.loc[df.y==0].index.values,
round(len(df.loc[df.y==0])*.5), replace = False)
i1 = np.random.choice(df.loc[df.y==1].index.values,
round(len(df.loc[df.y==1])*.6), replace = False)
i2 = np.random.choice(df.loc[df.y==2].index.values,
                     round(len(df.loc[df.y==2])*.9), replace = False)
df_train = df.loc[df.index.isin(np.concatenate([i1,i2,i0]))]
df_test = df.loc[~df.index.isin(np.concatenate([i1,i2,i0]))]
</code></pre>
|
python|pandas|dataframe|scikit-learn
| 2
|
6,760
| 53,857,373
|
Loading images for multi-class from csv file
|
<p>I have Train and Test folders, and inside each there are many sub-folders with images. In the .csv file there is a label for each folder and class.</p>
<p>here is csv file</p>
<p><a href="https://i.imgur.com/qMLGOpC.png" rel="nofollow noreferrer">https://i.imgur.com/qMLGOpC.png</a></p>
<p>and folders</p>
<p><a href="https://i.imgur.com/RrYBZxG.png" rel="nofollow noreferrer">https://i.imgur.com/RrYBZxG.png</a></p>
<p>How do I load these folders with their labels in Keras?</p>
<p>I tried to make a dataframe with folders and labels like below :</p>
<pre><code> pth = 'C:/Users/Documents/train/'
folders = os.listdir(pth)
filepath='C:/Users//Documents//keras/labels.csv'
metadf = pd.read_csv(filepath)
metadf.index = metadf.Class
videos = pd.DataFrame([])
for folder in folders:
pth_upd = pth + folder + '/'
for file in allfiles:
videos = pd.DataFrame(metadf.values, index=folders)
</code></pre>
<p>the output is :</p>
<p><a href="https://i.imgur.com/CsXAE8f.png" rel="nofollow noreferrer">https://i.imgur.com/CsXAE8f.png</a></p>
<p>Is that the correct way of doing it? How can I load each folder with its images and corresponding labels?</p>
|
<p>You can use the built-in <code>flow_from_directory()</code> method;
check out the <a href="https://keras.io/preprocessing/image/" rel="nofollow noreferrer">Keras docs</a>.</p>
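<p>For example, a minimal sketch (the path, image size and batch size are assumptions; note that <code>flow_from_directory</code> infers labels from the sub-folder names rather than from your csv):</p>
<pre><code>from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rescale=1./255)
train_gen = datagen.flow_from_directory(
    'C:/Users/Documents/train/',  # one sub-folder per class
    target_size=(224, 224),
    batch_size=32,
    class_mode='categorical')
</code></pre>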
|
python|pandas|keras|deep-learning
| 0
|
6,761
| 54,166,444
|
Removing List Within Pandas Dataframe
|
<p>I have the following dataframe:</p>
<pre><code>Index Recipe_ID order content
0 1285 1 Heat oil in a large frypan with lid over mediu...
1 1285 2 Meanwhile, add cauliflower to a pot of boiling...
2 1285 3 Remove lid from chicken and let simmer uncover...
3 1289 1 To make the dressing, whisk oil, vinegar and m...
4 1289 2 Cook potatoes in a large saucepan of boiling w..
</code></pre>
<p>Task: I need to get the contents in one cell:</p>
<pre><code>df = df.groupby('recipe_variation_part_id', as_index=False).agg(lambda x: x.tolist())
</code></pre>
<p>This returns the following:</p>
<pre><code>Index Recipe_ID order content
0 1285 [1, 2, 3] [Heat oil in a large frypan with lid over medi...
1 1289 [1, 2, 3] [To make the dressing, whisk oil, vinegar and ...
2 1297 [1, 2, 4, 3] [Place egg in saucepan of cold water and bring...
3 1301 [1, 2] [Preheat a non-stick frying pan and pan fry th...
4 1309 [2, 3, 4, 1] [Meanwhile, cook noodles according to package ...
</code></pre>
<p>If you look at the first recipe entry, you get the following:</p>
<pre><code>['Heat oil in a large frypan with lid over medium-high heat. Cook onions, garlic and rosemary for a couple of minutes until soft. Add chicken and brown on both sides for a few minutes, then add in tomatoes and olives. Season with salt and pepper and allow to simmer with lid on for 20-25 minutes. ',
'Meanwhile, add cauliflower to a pot of boiling water and cook for 10 minutes or until soft. Drain and then mash and gently fold in olive oil, parmesan, salt and pepper. ',
'Remove lid from chicken and let simmer uncovered for five minutes more. Sprinkle with parsley then serve with cauliflower mash. ']
</code></pre>
<p>This is what I want, but I need to remove the square brackets</p>
<p>dtype = list</p>
<p>I've tried:</p>
<pre><code>df.applymap(lambda x: x[0] if isinstance(x, list) else x)
</code></pre>
<p>Only returns the first entry, not every step</p>
<p>I've tried:</p>
<pre><code>df['content'].str.replace(']', '')
</code></pre>
<p>Only returns NANs</p>
<p>I've tried:</p>
<pre><code>df['content'].str.replace(r'(\[\[(?:[^\]|]*\|)?([^\]|]*)\]\])', '')
</code></pre>
<p>Only returns NANs</p>
<p>I've tried:</p>
<pre><code>df['content'].str.get(0)
</code></pre>
<p>Only returns the first entry</p>
<p>Any help would be greatly appreciated.</p>
<p>If you need any further information, please let me know.</p>
|
<p>I've created a little example that might solve this problem for you:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'order': [1, 1, 2], 'content': ['hello', 'world', 'sof']})
df
Out[4]:
order content
0 1 hello
1 1 world
2 2 sof
df.groupby(by=['order']).agg(lambda x: ' '.join(x))
Out[5]:
content
order
1 hello world
2 sof
</code></pre>
<p>So, just like in the 5th line of your question, use <code>' '.join(x)</code> instead of <code>tolist()</code>, which joins everything into one big string instead of a list of strings; therefore, no <code>[]</code>.</p>
|
python|pandas|data-cleaning
| 3
|
6,762
| 66,079,643
|
How to remove the following error : ImportError: cannot import name 'normalize_data_format'
|
<p>I am getting stuck by sequential errors for the following code as you can see with the google colab link:
<a href="https://colab.research.google.com/drive/1Tc8WEzqBMRd0Eg7pJijI98eBEKTw45s3?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1Tc8WEzqBMRd0Eg7pJijI98eBEKTw45s3?usp=sharing</a></p>
<p>you can get access the code from GitHub if it is not visible to you: <a href="https://github.com/nephashi/GaitRecognitionCNN" rel="nofollow noreferrer">https://github.com/nephashi/GaitRecognitionCNN</a></p>
<p>How do I resolve it?</p>
<p>I am getting the following errors:</p>
<pre><code>2021-02-06 17:03:14.551645: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "/content/sample_data/GaitRecognitionCNN-master/main.py", line 5, in <module>
    from layers.Conv2D121 import Conv2D121
  File "/content/sample_data/GaitRecognitionCNN-master/layers/Conv2D121.py", line 13, in <module>
    from keras.utils.conv_utils import normalize_data_format
ImportError: cannot import name 'normalize_data_format'
</code></pre>
<p>Whenever I somehow resolve one error, the next one appears. Most solutions on the internet have become outdated as the Keras and TensorFlow libraries have changed, so they don't always work.</p>
<p>If you hit the next errors too, please help resolve those as well; I would be very grateful, thank you.</p>
|
<p>Replace the import statement from</p>
<pre><code>from keras.utils import conv_utils
</code></pre>
<p>to</p>
<pre><code>from tensorflow.python.keras.utils import conv_utils
</code></pre>
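<p>Depending on your Keras version, <code>normalize_data_format</code> itself may also have moved; in standalone Keras ≥ 2.2 it is commonly imported like this (an assumption; verify against your installed version):</p>
<pre><code>from keras.backend.common import normalize_data_format
</code></pre>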
|
python|tensorflow|keras
| 0
|
6,763
| 65,994,547
|
Append a row to pandas dataframe for select columns only
|
<p>I'd like to append a new row to pandas DataFrame, but only populate select columns. In the code below, I subset the columns list I'd like to populate and assign a list of values.</p>
<pre><code>import pandas as pd
sampleDF = pd.DataFrame(columns=['Tenant','Industry','Square Footage'])
sampleDF = sampleDF.iloc[sampleDF.tail(1).index.item(), ['Tenant', 'Industry']] = ['DE Shaw', 'Finance']
</code></pre>
<p>The above code doesn't work.</p>
|
<p>You can use the append() method by inserting a dictionary. Does this help ?</p>
<pre><code>sampleDF = sampleDF.append({'Tenant': 'De Shaw', 'Industry': 'Finance'}, ignore_index = True)
</code></pre>
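<p>Note that <code>DataFrame.append</code> was deprecated in pandas 1.4 and removed in 2.0; a <code>concat</code>-based equivalent (a sketch):</p>
<pre><code>sampleDF = pd.concat(
    [sampleDF, pd.DataFrame([{'Tenant': 'DE Shaw', 'Industry': 'Finance'}])],
    ignore_index=True)
</code></pre>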
|
python|pandas
| 0
|
6,764
| 66,076,400
|
PyTorch: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1]))
|
<p>I am new to PyTorch and working on the implementation of recommender systems.</p>
<p>I obtained my models from here:
<a href="https://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html" rel="nofollow noreferrer">https://blog.fastforwardlabs.com/2018/04/10/pytorch-for-recommenders-101.html</a></p>
<p>Following the instructions from the website, i feed the DenseNet model exactly the same way as the MatrixFactorization model.</p>
<p>models.py:</p>
<pre><code>class MatrixFactorization(nn.Module):
def __init__(self, n_users, n_items, n_factors=20):
super(MatrixFactorization, self).__init__()
# create user embeddings
self.user_factors = nn.Embedding(n_users, n_factors,
sparse=True)
# create item embeddings
self.item_factors = nn.Embedding(n_items, n_factors,
sparse=True)
def forward(self, user, item):
# matrix multiplication
prediction = (self.user_factors(user) * self.item_factors(item)).sum(1)
# test
return F.hardsigmoid(prediction)
def predict(self, user, item):
return self.forward(user, item)
class DenseNet(nn.Module):
def __init__(self, n_users, n_items, n_factors, h1=128, d_out=1):
"""
Simple Feedforward with Embeddings
"""
super(DenseNet, self).__init__()
# user and item embedding layers
self.user_factors = torch.nn.Embedding(n_users, n_factors,
sparse=True)
self.item_factors = torch.nn.Embedding(n_items, n_factors,
sparse=True)
# linear layers
self.linear1 = torch.nn.Linear(n_factors*2, h1)
self.linear2 = torch.nn.Linear(h1, d_out)
def forward(self, users, items):
users_embedding = self.user_factors(users)
items_embedding = self.item_factors(items)
# concatenate user and item embeddings to form input
x = torch.cat([users_embedding, items_embedding], 1)
h1_relu = F.relu(self.linear1(x))
output_scores = self.linear2(h1_relu)
return output_scores
def predict(self, users, items):
# return the score
output_scores = self.forward(users, items)
return output_scores
</code></pre>
<p>training of densenet:</p>
<pre><code> index = 0
model.train()
for user, item in zip(users, items):
# get user, item and rating data
# rating = Variable(torch.FloatTensor([ratings[user, item]]))
rating = normalize(rating_values[index])
rating = Variable(torch.FloatTensor([rating]))
user = Variable(torch.LongTensor([int(user)]))
item = Variable(torch.LongTensor([int(item)]))
index += 1
# predict
prediction = model.predict(user, item)
loss = loss_fn(prediction, rating)
optimizer.zero_grad()
# backpropagate
loss.backward()
# update weights
optimizer.step()
</code></pre>
<p>Although i receive an output i get the UserWarning:</p>
<pre><code> AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\loss.py:446: UserWarning: Using a target size (torch.Size([1])) that is different to the input size (torch.Size([1, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
return F.mse_loss(input, target, reduction=self.reduction)
</code></pre>
<p>I thought this model takes the same input (tensor of user and item id for each user) as the mf model which works for me. What is wrong with the inputs? Which line produces this error?</p>
|
<p>This is not an error message, it's a warning about the shapes of tensors passed to <code>nn.MSELoss</code>.</p>
<p>Provided that you feed each model a 1D tensor of shape <code>(n,)</code>, the only difference is that <code>MatrixFactorization</code> will return a 1D tensor of shape <code>(n,)</code>, while <code>DenseNet</code> will return a 2D tensor of shape <code>(n, 1)</code>. To remove the extra dimension you could reshape to <code>(-1,)</code> or just <a href="https://pytorch.org/docs/stable/generated/torch.squeeze.html" rel="nofollow noreferrer"><code>squeeze</code></a> the singleton dimension:</p>
<pre><code>loss = loss_fn(prediction.squeeze(), rating)
</code></pre>
<hr />
<p>Other things to point out:</p>
<ul>
<li>Call <code>self</code> instead of <code>self.forward</code></li>
<li>Don't use <a href="https://pytorch.org/docs/stable/autograd.html#variable-deprecated" rel="nofollow noreferrer"><code>Variable</code></a> as it's deprecated</li>
</ul>
|
machine-learning|pytorch
| 0
|
6,765
| 66,285,835
|
TensorFlow Lite PoseNet on Android crashes because of memory
|
<p>I am trying to create an Android app that uses TensorFlow Lite PoseNet for human pose estimation.
The problem I have is that native memory slowly increases until it crashes.
Even if I run the <a href="https://github.com/tensorflow/examples/tree/master/lite/examples/posenet/android" rel="nofollow noreferrer">demo app</a> it will crash on my S10 after about 20 minutes.
I tried profiling it and I don't think it is a leak because if I code it so that the interpreter takes breaks then garbage collection is able to keep up.</p>
<p>I would like to have it do estimations at a rate of about 15 per second which seems to do very well for a few minutes. Is there a way to tune it to run longer or is that unrealistic for running on a device such as a Samsung S10?</p>
|
<p>There was a memory leak in the Android PoseNet demo app that is not noticeable unless you enable <code>window.addFlags(WindowManager.LayoutParams.FLAG_KEEP_SCREEN_ON)</code></p>
<p>In PosenetActivity.kt, <code>captureSession!!.setRepeatingRequest</code> was using a <code>backgroundHandler</code> that held a reference preventing native memory from being cleaned up. The callback and handler are not needed for <code>setRepeatingRequest</code>.
Changing this</p>
<pre><code> previewRequest = previewRequestBuilder!!.build()
captureSession!!.setRepeatingRequest(
previewRequest!!,
captureCallback, backgroundHandler
)
</code></pre>
<p>to</p>
<pre><code> previewRequest = previewRequestBuilder!!.build()
captureSession!!.setRepeatingRequest(
previewRequest!!,
null, null
)
</code></pre>
<p>fixes the memory leak.</p>
<p>Also, the demo app Posenet.kt is coded to use CPU which probably helps support a greater range of devices but switching it to GPU or NNAPI speeds things up significantly.</p>
|
android|tensorflow
| 0
|
6,766
| 52,710,348
|
How to take and restore snapshots of model training on another VM in Google Colab?
|
<p>There is a 12 hour <a href="https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/discussion/48320" rel="nofollow noreferrer">time limit</a> for training DL models on GPU, according to google colab. Other people have had similar <a href="https://stackoverflow.com/questions/49469697/how-can-i-keep-the-gpu-running-for-more-than-12-hours-in-google-colaboratory?rq=1">questions</a> in the past, but there has been no clear answer on how to save and load models halfway through training when the 12 hour limits get exceeded, including saving the number of epochs that has been completed/saving other parameters. Is there an automated script for me to save the relevant parameters and resume operations on another VM? I am a complete noob; clear cut answers will be much appreciated.</p>
|
<p>As far as I know, there is no way to automatically reconnect to another VM whenever you reach the 12 hours limit. So in any case, you have to manually reconnect when the time is up.</p>
<p>As Bob Smith points out, you can mount Google Drive in Colab VM so that you can save and load data from there. In particular, you can periodically save model checkpoints so that you can load the most recent one whenever you connect to a new Colab VM.</p>
<ol>
<li><p>Mount Drive in your Colab VM:</p>
<pre><code>from google.colab import drive
drive.mount('/content/gdrive')
</code></pre></li>
<li><p>Create a <code>saver</code> in your graph:</p>
<pre><code>saver = tf.train.Saver()
</code></pre></li>
<li><p>Periodically (e.g. every epoch) save a checkpoint in Drive:</p>
<pre><code>saver.save(session, CHECKPOINT_PATH)
</code></pre></li>
</ol>
<p>When you connect to a new Colab VM (because of the timeout), mount Drive again in your VM and restore the most recent checkpoint before the training phase:</p>
<pre><code>saver.restore(session, CHECKPOINT_PATH)
...
# Start training with the restored model.
</code></pre>
<p>Take a look at the <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="nofollow noreferrer">documentation</a> to read more about <code>tf.train.Saver</code>.</p>
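<p>If you train with Keras rather than raw TensorFlow sessions, a <code>ModelCheckpoint</code> callback saving into the mounted Drive achieves the same (a sketch; the path, filename pattern and training variables are assumptions):</p>
<pre><code>from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(
    '/content/gdrive/My Drive/model.{epoch:02d}.h5',  # saved once per epoch
    save_weights_only=False)
model.fit(x_train, y_train, epochs=100, callbacks=[checkpoint])
</code></pre>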
|
python|tensorflow|deep-learning|pytorch|google-colaboratory
| 1
|
6,767
| 52,859,756
|
Using idxmax on a hierarchical dataframe
|
<p>I'm trying to find the index of the maximum values in multiple columns in a multi-index Pandas dataframe. </p>
<pre><code> Kommune Upplands Vallentuna... Kiruna
Year Party
1973 M 0.9 29.2 ... 20
KD 15 10 ... 2
MP 1.1 4 ... 5
V 6 7 ... 8
SD NaN NaN ... NaN
L 10.1 13.5 ... 8.8
1976 M 1.8 29.2 ... 20
KD 16 10 ... 2
MP 10 4 ... 5
V 15 7 ... 8
SD NaN NaN ... NaN
L 11.9 15 ... 18
... ... ... ... ... ...
... ... ... ... ... ...
2014 M 28 22 ... 29
KD 4.5 13 ... 5
MP 11 8 ... 9
V 1.9 5 ... 10
SD 20 10 ... 5
L 19 25 ... 1
</code></pre>
<p>The desired output is</p>
<pre><code>Kommune Upplands Vallentuna... Kiruna
Year
1973 KD M ... M
1976 V M ... M
... ... ... ... ...
2014 M L ... M
</code></pre>
<p>I've tried using <code>groupby</code> (as suggested in a previous post on multi-index- <a href="https://stackoverflow.com/questions/48956379/getting-max-values-from-pandas-multiindex-dataframe">Getting max values from pandas multiindex dataframe</a>) but it returns a tuple for every position. </p>
<pre><code>Kommune Upplands Vallentuna ... Kiruna
Year
1973 (1973, KD) (1973, M) ... (1973, M)
1976 (1976, V) (1976, M) ... (1976, M)
... ... ... ... ...
2014 (2014, M) (2014, L) ... (2014, M)
</code></pre>
<p>How do I get only the second element from each tuple? Or is there a more efficient way to find the indices?</p>
|
<p>Seems like you need </p>
<pre><code>df.stack().sort_values().groupby(level=[0,2]).tail(1).reset_index(level=1).Party.unstack()
Out[544]:
Upplands Vallentuna Kiruna
Year
1973 KD M M
1976 KD M M
</code></pre>
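<p>Alternatively, to answer the literal question (extracting the second element of each tuple that <code>idxmax</code> returns), a sketch:</p>
<pre><code># idxmax per year returns (Year, Party) tuples; keep only the Party part
res = df.groupby(level='Year').idxmax()
res.applymap(lambda t: t[1])
</code></pre>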
|
python|pandas|multi-index
| 3
|
6,768
| 52,762,328
|
Issues with using numpy
|
<p>I have <em>pypy</em> (Python 2.7.13, [PyPy 6.0.0 with GCC 6.2.0 20160901] on linux2) and <em>python</em> (Python 2.7.14 [GCC 4.8.4] on linux2) installed on same machine.</p>
<p>I am seamlessly able to use <em>numpy</em> with <em>pypy</em>. However, with <em>python</em> I get following error.</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/numpy/__init__.py", line 160, in <module>
from . import random
File "/usr/local/lib/python2.7/dist-packages/numpy/random/__init__.py", line 99, in <module>
from .mtrand import *
ImportError: /usr/local/lib/python2.7/dist-packages/numpy/random/mtrand.so: undefined symbol: PyFPE_jbuf
</code></pre>
<p>I tried solutions suggested in <a href="https://stackoverflow.com/questions/36190757/numpy-undefined-symbol-pyfpe-jbuf">this</a> stackoverflow answer. Things didn't work.
When I try <code>pip uninstall numpy</code> I get following error: <code>Skipping numpy as it is not installed.</code></p>
<p>I also tried installing numpy for python again: <code>sudo apt-get install python-numpy</code>. I get following error:</p>
<pre><code>Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:
The following packages have unmet dependencies:
python-numpy : Depends: python (>= 2.7) but it is not going to be installed
Depends: python (< 2.8) but it is not going to be installed
Depends: python:any (>= 2.7.1-0ubuntu2)
Depends: python2.7:any
</code></pre>
<p>Another option I tried is: <code>sudo pip install numpy</code>. I get following error:</p>
<pre><code>Command "/usr/bin/pypy -u -c "import setuptools, tokenize;__file__='/tmp/pip-install-k3GbV2/numpy/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" install --record /tmp/pip-record-8SqQxW/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-install-k3GbV2/numpy/
/usr/local/lib/pypy2.7/dist-packages/pip/_vendor/urllib3/util/ssl_.py:160: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. You can upgrade to a newer version of Python to solve this. For more information, see https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings
InsecurePlatformWarning
</code></pre>
<p><strong>Notice following in above block</strong>: <em>/usr/local/lib/<strong>pypy2.7</strong>/dist-packages/pip/</em> It seems my pip is using some pypy2.7 libs.</p>
<p>I am quite not sure what is going on. Any help will be appreciated. Please let me know if you need any further information.</p>
|
<p>If you have mixed <code>sudo pip install</code> with <code>sudo apt install</code> you have probably messed up your system. You might want to explore using <code>virtualenv</code> to set up a self-contained Python, one that lives totally inside a single folder and can be managed with user-level <code>pip install</code>, no <code>sudo</code> needed.</p>
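<p>A minimal sketch (the env path is an assumption):</p>
<pre><code>virtualenv -p /usr/bin/python2.7 ~/py27env
source ~/py27env/bin/activate
pip install numpy
</code></pre>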
|
python|numpy|pypy
| 0
|
6,769
| 52,601,904
|
I am trying to create a new column to bin values of a time column of a dataframe in python based on time range
|
<p>Time column values: 09:11:00, 10:11:00, ..., NaT</p>
<pre><code>pd.cut(df_master['time_column'], bins=
['09:11:00','11:44:00','13:55:00','16:28:00'], labels=
['Morning','Afternoon','Evening'])
TypeError Traceback (most recent call last)
<ipython-input-102-c66025f961bf> in <module>()
----> 1 pd.cut(df_master['Sent at_time'],bins=['09:11:00','11:44:00','13:55:00','16:28:00'], labels=['Morning','Afternoon','Evening'])
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\reshape\tile.py in cut(x, bins, right, labels, retbins, precision, include_lowest, duplicates)
225 bins = np.asarray(bins)
226 bins = _convert_bin_to_numeric_type(bins, dtype)
--> 227 if (np.diff(bins) < 0).any():
228 raise ValueError('bins must increase monotonically.')
229
C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\function_base.py in diff(a, n, axis)
1944 op = not_equal if a.dtype == np.bool_ else subtract
1945 for _ in range(n):
-> 1946 a = op(a[slice1], a[slice2])
1947
1948 return a
TypeError: ufunc 'subtract' did not contain a loop with signature matching types dtype('<U8') dtype('<U8') dtype('<U8')
</code></pre>
|
<p>You just have to convert your string times to timedelta values:</p>
<pre><code> time
0 09:31:00
1 12:04:00
2 14:15:00
3 16:48:00
df1['time'] = pd.to_timedelta(df1['time'])
pd.cut(df1['time'],bins=pd.to_timedelta(['09:11:00','11:44:00','13:55:00','16:28:00']), labels=['Morning','Afternoon','Evening'])
</code></pre>
<p>Out:</p>
<pre><code>0 Morning
1 Afternoon
2 Evening
3 NaN
Name: time, dtype: category
</code></pre>
|
python|pandas
| 1
|
6,770
| 52,754,453
|
Tensorflow Keras - AttributeError: Layer features has no inbound nodes
|
<p>Tensorflow version : 1.11.0</p>
<p>I am trying to use TensorBoard with a TensorFlow Keras model for projector visualisation.
I am getting AttributeError: Layer features has no inbound nodes.
I am not sure why I get this error in the simple code below. I did google the error but could not find the right solution to fix it.</p>
<pre><code>from os import makedirs
from os.path import exists, join
import tensorflow as tf
mnist = tf.keras.datasets.mnist
import numpy as np
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.nn.relu),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(10, activation=tf.nn.relu, name='features'),
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
log_dir = "./logs"
with open(join(log_dir, 'metadata.tsv'), 'w') as f:
np.savetxt(f, y_test)
from tensorflow.keras.callbacks import TensorBoard
tf_board_callback = TensorBoard(
log_dir=log_dir,
batch_size=32,
embeddings_freq=1,
embeddings_layer_names=['features'],
embeddings_metadata='metadata.tsv',
embeddings_data=x_test
)
model.fit(x_train, y_train, epochs=5, callbacks=[tf_board_callback])
</code></pre>
|
<p>When defining a network in Keras, the first layer added needs to have input_shape added.</p>
<p>See the docs here: <a href="https://keras.io/getting-started/sequential-model-guide/#specifying-the-input-shape" rel="nofollow noreferrer">https://keras.io/getting-started/sequential-model-guide/#specifying-the-input-shape</a></p>
<p>So for MNIST, you should have something like input_shape=(28,28,1)</p>
<p>There's a nice example here: <a href="https://www.kaggle.com/adityaecdrid/mnist-with-keras-for-beginners-99457" rel="nofollow noreferrer">https://www.kaggle.com/adityaecdrid/mnist-with-keras-for-beginners-99457</a></p>
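<p>For the model in the question that would mean something like the following (a sketch; only the first layer changes, and the redundant second <code>Flatten</code> is dropped):</p>
<pre><code>model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),  # declare the input shape up front
    tf.keras.layers.Dense(512, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation=tf.nn.relu, name='features'),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
</code></pre>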
|
python|tensorflow|keras|tensorboard
| 3
|
6,771
| 52,888,715
|
How to write a file to S3 using Pandas
|
<p>I want to write a data frame column in .ann format to S3. </p>
<p>Right now I am using the following code to do that.</p>
<pre><code>df['user_input'].to_csv(ann_file_path, header=None, index=None, sep=' ')
</code></pre>
<p>Where ann_file_path is the full path of the .ann file on the Server.</p>
<p>I am getting following error message:</p>
<pre><code>[Errno 22] Invalid argument: 'https://s3-eu-west-1.amazonaws.com/bucket/sub_folder/somefile.ann'
</code></pre>
<p>Why am I getting that? </p>
<p>Also, do I need to use Boto3 to write or can I directly write the file on S3 with full path?</p>
<p>I can think of some authorization might be required for that but the error message seems different from something related to authorization.</p>
|
<p>I've resolved it. We need an AWS handshake using <code>access_key_id</code> and <code>secret_key</code>.</p>
<p>Use the URL starting from the bucket name (not <code>https://...</code>), i.e. get rid of whatever comes before it.</p>
<p>My URL: <code>https://s3-eu-west-1.amazonaws.com/bucket/sub_folder/somefile.ann</code></p>
<p>Transformed to: <code>bucket/sub_folder/somefile.ann</code></p>
<p>Code to do that: <code>ann_file_path = ann_file_path.split('.com/', 1)[1]</code></p>
<p>Once I got <code>ann_file_path</code>, I used <a href="https://s3fs.readthedocs.io/en/latest/" rel="nofollow noreferrer">s3fs</a> python library to upload the ann file to the server.</p>
<pre><code>bytes_to_write = df['user_input'].to_csv(header=None, index=None).encode()
fs = s3fs.S3FileSystem(key=settings.AWS_ACCESS_KEY_ID, secret=settings.AWS_SECRET_ACCESS_KEY)
with fs.open(ann_file_path, 'wb') as f:
f.write(bytes_to_write)
</code></pre>
|
python-3.x|pandas|amazon-web-services|amazon-s3|boto3
| 4
|
6,772
| 46,571,137
|
Error parsing datetime string "09-11-2017 00:02:00" at position 8
|
<p>I created a data frame with a column of datetime objects, re sampled it but would now like to turn the data frame into a list of lists - where the datetimes are now strings again. </p>
<pre><code>for i in range(1, len(dataf.index)):
dataf["Time Stamp"][i] = datetime.strftime(dataf["Time Stamp"][i], '%m-%d-%Y %H:%M:%S')
print(dataf["Time Stamp"][i])
</code></pre>
<p>I keep getting the error
(note the print part is just for me to check the output)</p>
<pre><code>ValueError: Error parsing datetime string "09-11-2017 00:02:00" at position 8
</code></pre>
<p>But from what I can tell my date format is exactly the same. I've even tried different capitalization in '%m-%d-%Y %H:%M:%S' to no avail.</p>
<p>Any idea?</p>
|
<p>You ought to be able to</p>
<pre><code>dataf['Time Stamp'].dt.strftime('%m-%d-%Y %H:%M:%S')
</code></pre>
<p>So to rewrite the column</p>
<pre><code>dataf['Time Stamp'] = dataf['Time Stamp'].dt.strftime('%m-%d-%Y %H:%M:%S')
</code></pre>
<p>If you have errors, it's probably because the column isn't actually datetime.</p>
<pre><code>dataf['Time Stamp'] = pd.to_datetime(
dataf['Time Stamp']
).dt.strftime('%m-%d-%Y %H:%M:%S')
</code></pre>
<p>If you have non-parsable data</p>
<pre><code>dataf['Time Stamp'] = pd.to_datetime(
dataf['Time Stamp'], errors='coerce'
).dt.strftime('%m-%d-%Y %H:%M:%S')
</code></pre>
|
python|string|pandas|datetime
| 1
|
6,773
| 58,320,848
|
How to count by time frequency using groupby - pandas
|
<p>I'm trying to count a frequency of 2 events by the month using 2 columns from my <code>df</code>. What I have done so far has counted all events by the unique time which is not efficient enough as there are too many results. I wish to create a graph with the results afterwards.</p>
<p>I've tried adapting my code by the answers on the SO questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/32147224/how-to-groupby-time-series-by-10-minutes-using-pandas">How to groupby time series by 10 minutes using pandas?</a></li>
<li><a href="https://stackoverflow.com/questions/30281861/counting-frequency-of-occurrence-by-month-year-using-python-panda">Counting frequency of occurrence by month-year using python panda</a></li>
<li><a href="https://stackoverflow.com/questions/52489702/pandas-groupby-using-time-frequency">Pandas Groupby using time frequency</a></li>
</ul>
<p>but can not seem to get the command working when I input <code>freq='day'</code> within the <code>groupby</code> command.</p>
<p>My code is:</p>
<pre><code>print(df.groupby(['Priority', 'Create Time']).Priority.count())
</code></pre>
<p>which initially produced something like 170000 results in the structure of the following:</p>
<pre><code>Priority Create Time
1.0 2011-01-01 00:00:00 1
2011-01-01 00:01:11 1
2011-01-01 00:02:10 1
...
2.0 2011-01-01 00:01:25 1
2011-01-01 00:01:35 1
...
</code></pre>
<p>But now for some reason (I'm using Jupyter Notebook) it only produces:</p>
<pre><code>Priority Create Time
1.0 2011-01-01 00:00:00 1
2011-01-01 00:01:11 1
2011-01-01 00:02:10 1
2.0 2011-01-01 00:01:25 1
2011-01-01 00:01:35 1
Name: Priority, dtype: int64
</code></pre>
<p>No idea why the output has changed to only 5 results (maybe I unknowingly changed something).</p>
<p>I would like the results to be in the following format:</p>
<pre><code>Priority month Count
1.0 2011-01 a
2011-02 b
2011-03 c
...
2.0 2011-01 x
2011-02 y
2011-03 z
...
</code></pre>
<p>Top points for showing how to change the frequency correctly for other values as well, for example <code>hour/day/month/year</code>. With the answers please could you explain what is going on in your code as I am new and learning pandas and wish to understand the process. Thank you.</p>
|
<p>One possible solution is convert datetime column to months periods by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a>:</p>
<pre><code>print(df.groupby(['Priority', df['Create Time'].dt.to_period('m')]).Priority.count())
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html" rel="nofollow noreferrer"><code>Grouper</code></a>:</p>
<pre><code>print(df.groupby(['Priority', pd.Grouper(key='Create Time', freq='MS')]).Priority.count())
</code></pre>
<p><strong>Sample</strong>:</p>
<pre><code>np.random.seed(123)
df = pd.DataFrame({'Create Time':pd.date_range('2019-01-01', freq='10D', periods=10),
'Priority':np.random.choice([0,1], size=10)})
print (df)
Create Time Priority
0 2019-01-01 0
1 2019-01-11 1
2 2019-01-21 0
3 2019-01-31 0
4 2019-02-10 0
5 2019-02-20 0
6 2019-03-02 0
7 2019-03-12 1
8 2019-03-22 1
9 2019-04-01 0
</code></pre>
<hr>
<pre><code>print(df.groupby(['Priority', df['Create Time'].dt.to_period('m')]).Priority.count())
Priority Create Time
0 2019-01 3
2019-02 2
2019-03 1
2019-04 1
1 2019-01 1
2019-03 2
Name: Priority, dtype: int64
print(df.groupby(['Priority', pd.Grouper(key='Create Time', freq='MS')]).Priority.count())
Priority Create Time
0 2019-01-01 3
2019-02-01 2
2019-03-01 1
2019-04-01 1
1 2019-01-01 1
2019-03-01 2
Name: Priority, dtype: int64
</code></pre>
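<p>For other granularities only the frequency alias passed to <code>Grouper</code> (or to <code>to_period</code>) changes; some common pandas offset aliases:</p>
<pre><code>df.groupby(['Priority', pd.Grouper(key='Create Time', freq='H')]).Priority.count()   # hourly
df.groupby(['Priority', pd.Grouper(key='Create Time', freq='D')]).Priority.count()   # daily
df.groupby(['Priority', pd.Grouper(key='Create Time', freq='YS')]).Priority.count()  # yearly
</code></pre>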
|
python|pandas|pandas-groupby
| 2
|
6,774
| 68,948,888
|
Panda Dataframe Find rows which does not have equivalent value in the DataFrame
|
<p>DataFrame:</p>
<pre><code> column1 column2
0 some_data string1
1 some_data string1
2 some_data string2
3 some_data string3
4 some_data string2
5 some_data string4
5 some_data string4
...
20k+ rows in total
</code></pre>
<p>Explanation:
For most rows, column2 data appear in pairs. I want to find out rows that do not have paired data (e.g. string3)</p>
<p>Expected Output:</p>
<pre><code> column1 column2
0 some_data string3
</code></pre>
<p>Any solutions to find out such rows? thanks!</p>
|
<p>If the problem can be simplified to finding all rows without duplicates in <code>column2</code>, use:</p>
<pre><code>df1 = df[~df['column2'].duplicated(keep=False)]
</code></pre>
<p>If you need to test counts and filter all rows that do not occur exactly twice:</p>
<pre><code>df2 = df[df.groupby('column2')['column2'].transform('size').ne(2)]
</code></pre>
<p>Also, if you need to test for any even count (<code>2, 4, 6, 8</code>, ...), use:</p>
<pre><code>df3 = df[df.groupby('column2')['column2'].transform('size') % 2 == 1]
</code></pre>
|
python|pandas
| 1
|
6,775
| 68,913,591
|
Merge two arrays with the same dimension based on a condition
|
<p>I have two arrays with the same dimension:</p>
<pre><code>a = [
[1, 1, 1, 1],
[1, 0, 0, 1],
[1, 0, 0, 1],
[1, 1, 1, 1], ]
b = [
[0, 1, 1, 0],
[0, 0, 0, 0],
[2, 0, 0, 2],
[0, 0, 0, 0], ]
</code></pre>
<p>I would like to create a new one, only changing the values where B is not 0 and is different than A. The result would be:</p>
<pre><code>c = [
[1, 1, 1, 1],
[1, 0, 0, 1],
[2, 0, 0, 2],
[1, 1, 1, 1], ]
</code></pre>
<p>How can I do this?</p>
|
<p>You can do assignment with boolean conditions:</p>
<pre><code>a[b != 0] = b[b != 0]
a
array([[1, 1, 1, 1],
[1, 0, 0, 1],
[2, 0, 0, 2],
[1, 1, 1, 1]])
</code></pre>
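<p>Note that this assumes numpy arrays; if <code>a</code> and <code>b</code> are plain Python lists as written in the question, convert them first:</p>
<pre><code>import numpy as np

a = np.array(a)
b = np.array(b)
a[b != 0] = b[b != 0]
</code></pre>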
|
python|arrays|numpy
| 2
|
6,776
| 44,797,757
|
tfslim "Training a model from scratch." some kind of error occured
|
<p>I'm training TF-Slim with</p>
<p><a href="https://github.com/tensorflow/models/tree/master/slim" rel="nofollow noreferrer">https://github.com/tensorflow/models/tree/master/slim</a></p>
<p>While training a model from scratch, some kind of error occurred.</p>
<p>I think it's some kind of GPU/CPU placement problem.</p>
<p>Other code works fine for me,</p>
<p>but this raises an error.</p>
<p>I run the following code:</p>
<pre><code>python train_image_classifier.py \
    --train_dir=/home/sk/workspace/slim/datasets/log \
    --dataset_name=imagenet \
    --dataset_split_name=train \
    --dataset_dir=/home/sk/workspace/slim/datasets/imagenet \
    --model_name=inception_v3
</code></pre>
<p>and the error is:</p>
<pre><code>Caused by op u'InceptionV3/Logits/Conv2d_1c_1x1/biases/RMSProp_1', defined at:
File "/home/sk/workspace/slim/train_image_classifier.py", line 573, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
File "/home/sk/workspace/slim/train_image_classifier.py", line 539, in main
global_step=global_step)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 446, in apply_gradients
self._create_slots([_get_variable_for(v) for v in var_list])
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/rmsprop.py", line 103, in _create_slots
self._zeros_slot(v, "momentum", self._name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/optimizer.py", line 766, in _zeros_slot
named_slots[_var_key(var)] = slot_creator.create_zeros_slot(var, op_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot
colocate_with_primary=colocate_with_primary)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 146, in create_slot_with_initializer
dtype)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/training/slot_creator.py", line 66, in _create_slot_var
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 1049, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 948, in get_variable
use_resource=use_resource, custom_getter=custom_getter)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 356, in get_variable
validate_shape=validate_shape, use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 341, in _true_getter
use_resource=use_resource)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variable_scope.py", line 714, in _get_single_variable
validate_shape=validate_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 197, in __init__
expected_shape=expected_shape)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/variables.py", line 281, in _init_from_args
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/state_ops.py", line 128, in variable_op_v2
shared_name=shared_name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_state_ops.py", line 708, in _variable_v2
name=name)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 768, in apply_op
op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2336, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1228, in __init__
self._traceback = _extract_stack()
InvalidArgumentError (see above for traceback): Cannot assign a device to node 'InceptionV3/Logits/Conv2d_1c_1x1/biases/RMSProp_1': Could not satisfy explicit device specification '/device:GPU:0' because no devices matching that specification are registered in this process; available devices: /job:localhost/replica:0/task:0/cpu:0
Colocation Debug Info:
Colocation group had the following types and devices:
ApplyRMSProp: CPU
Const: CPU
Assign: CPU
IsVariableInitialized: CPU
Identity: CPU
VariableV2: CPU
[[Node: InceptionV3/Logits/Conv2d_1c_1x1/biases/RMSProp_1 = VariableV2[_class=["loc:@InceptionV3/Logits/Conv2d_1c_1x1/biases"], container="", dtype=DT_FLOAT, shape=[3], shared_name="", _device="/device:GPU:0"]()]]
Process finished with exit code 1
</code></pre>
|
<p>It's trying to run some ops on the GPU, but TensorFlow doesn't see a GPU device (either because you're using the CPU version of TensorFlow, because of a CUDA installation issue, or because there is no GPU). It looks like you can specify <code>--clone_on_cpu=True</code> to use the CPU instead.</p>
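<p>For example (a sketch reusing the command from the question):</p>
<pre><code>python train_image_classifier.py \
    --train_dir=/home/sk/workspace/slim/datasets/log \
    --dataset_name=imagenet \
    --dataset_split_name=train \
    --dataset_dir=/home/sk/workspace/slim/datasets/imagenet \
    --model_name=inception_v3 \
    --clone_on_cpu=True
</code></pre>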
|
python|tensorflow|deep-learning|tf-slim
| 0
|
6,777
| 44,541,648
|
Loading large dataset into Pandas Python
|
<p>I would like to load large .csv (3.4m rows, 206k users) open sourced dataset from InstaCart <a href="https://www.instacart.com/datasets/grocery-shopping-2017" rel="nofollow noreferrer">https://www.instacart.com/datasets/grocery-shopping-2017</a></p>
<p>Basically, I have trouble loading orders.csv into Pandas DataFrame. I would like to learn best practices for loading large files into Pandas/Python.</p>
|
<p>Best option would be to <strong>read the data in chunks instead of loading the whole file into memory</strong>.</p>
<p>Luckily, <code>read_csv</code> method accepts <code>chunksize</code> argument.</p>
<pre><code>for chunk in pd.read_csv('file.csv', chunksize=somesize):
process(chunk)
</code></pre>
<p>Note: By specifying a <code>chunksize</code> to <code>read_csv</code> or <code>read_table</code>, the return value will be an <code>iterable</code> object of type <code>TextFileReader</code>:</p>
<p>Also see:</p>
<ul>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer">read_csv</a></li>
<li><a href="http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking" rel="nofollow noreferrer">Iterating through files chunk by chunk</a></li>
</ul>
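<p>A common pattern is to reduce each chunk as it streams in and concatenate the much smaller results (a sketch; the filter column is hypothetical):</p>
<pre><code>chunks = []
for chunk in pd.read_csv('orders.csv', chunksize=100000):
    # keep only the rows you actually need from each chunk
    chunks.append(chunk[chunk['user_id'] < 1000])  # hypothetical filter
df = pd.concat(chunks, ignore_index=True)
</code></pre>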
|
python|csv|pandas
| 3
|
6,778
| 44,396,805
|
Retrieving index from multi-index based on column values in Pandas
|
<p>Say I have a table as such:</p>
<pre><code> Attr | Foo | Bar
Name|Val | 1 | 2 | 3 | 4
-----------------------------
OFO |1 | F | T | F | F
|2 | T | F | F | T
-----------------------------
ARB |5 | T | T | F | F
|6 | F | F | F | T
</code></pre>
<p>Where my rows are controlled by an index with level 0 = {OFO, ARB} and level 1 = {1,2,5,6} and columns are level 0 = {Foo,Bar} and level 1 = {1,2,3,4} I want to be able to pull the respective row and column index from each T entry, so one return would be: </p>
<pre><code>((Foo,1),(OFO,1))
</code></pre>
<p>I'm struggling to come up with a good solution.</p>
|
<p>You can start with:</p>
<pre><code>df.stack([0,1]).reset_index(name='value').query('value == True')
</code></pre>
<p>Output:</p>
<pre><code> level_0 level_1 level_2 level_3 value
3 OFO 1 Foo 2 True
5 OFO 2 Bar 4 True
6 OFO 2 Foo 1 True
10 ARB 5 Foo 1 True
11 ARB 5 Foo 2 True
13 ARB 6 Bar 4 True
</code></pre>
|
python|pandas
| 1
|
6,779
| 60,926,614
|
Creating a new column from pairwise row entries in pandas
|
<p>I have a dataframe as given below</p>
<pre><code>>>> df
t c f e
0 1 100 2 1
1 1 200 1 1
2 1 300 4 0
3 1 400 2 0
4 2 100 3 1
5 2 200 3 1
6 2 300 4 1
7 2 400 1 0
8 3 100 4 0
9 3 200 3 0
10 3 300 1 1
11 3 400 4 1
12 4 100 1 1
13 4 200 4 1
14 4 300 4 1
15 4 400 2 1
</code></pre>
<p>I want to add a new column using pairwise information of the rows. In the above case, I want to add a new column 'rr' with value 1 if i-th row and (i+4)-th row has same value for column 'e' (0, in case i+4 index does not exist) and similarly I also want to add another column 'rr2' is i-th row and (I+1)-th row has same value for column 'e'.</p>
<pre><code>>>> df
t c f e rr rr2
0 1 100 2 1 1 1
1 1 200 1 1 0 1
2 1 300 4 0 1 0
3 1 400 2 0 0 1
4 2 100 3 1 1 0
5 2 200 3 1 1 0
6 2 300 4 1 0 1
7 2 400 1 0 1 0
8 3 100 4 0 1 0
9 3 200 3 0 0 1
10 3 300 1 1 1 1
11 3 400 4 1 1 1
12 4 100 1 1 1 0
13 4 200 4 1 1 0
14 4 300 4 1 1 0
15 4 400 2 1 1 0
</code></pre>
<p>My idea was using the apply method</p>
<pre><code>X['rr'] = X.apply(lambda x: func1(x),axis=1 )
X['rr2'] = X.apply(lambda x: func2(x),axis=1 )
</code></pre>
<p>But in that case, I will not be able to access the i+1 or i+4 indices of the original dataframe.
Is there a way to do this efficiently, rather than going through each row one by one?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.shift.html" rel="nofollow noreferrer"><code>Series.shift</code></a></p>
<pre><code>df['rr'] = df['e'].eq(df['e'].shift(-4)).astype(int)
df['rr2'] = df['e'].eq(df['e'].shift(-1)).astype(int)
print(df)
</code></pre>
<hr>
<pre><code> t c f e rr rr2
0 1 100 2 1 1 1
1 1 200 1 1 1 0
2 1 300 4 0 0 1
3 1 400 2 0 1 0
4 2 100 3 1 0 1
5 2 200 3 1 0 1
6 2 300 4 1 1 0
7 2 400 1 0 0 1
8 3 100 4 0 0 1
9 3 200 3 0 0 0
10 3 300 1 1 1 1
11 3 400 4 1 1 1
12 4 100 1 1 0 1
13 4 200 4 1 0 1
14 4 300 4 1 0 1
15 4 400 2 1 0 0
</code></pre>
<p><strong>Note:</strong> when NaN is compared, the result is always <code>False</code>.</p>
|
python-3.x|pandas
| 1
|
6,780
| 60,994,945
|
Get all records from 2 columns, starting from specific row
|
<p><strong>Set-up</strong></p>
<p>Via gspread I have access to a Google sheet containing data.</p>
<p>Normally, I use <code>df = pd.DataFrame(wsheet.get_all_records())</code> to dump all data into a pandas dataframe. </p>
<hr>
<p><strong>Issue</strong></p>
<p>I only need the data of 5 specific sequential columns, i.e. all columns including and between for example column 1 and 5 of the Google sheet. </p>
<p>Moreover, I only need the data starting from the 5th row in the Google sheet.</p>
<p>I've tried my regular <code>df = pd.DataFrame(wsheet.get_all_records())</code> and then drop columns and rows in pandas. However, I think due to the markup I use in the first 4 rows in the Google sheet, the resulting dataframe has some oddities – adjusting in pandas gives strange results.</p>
<hr>
<p><strong>Question</strong></p>
<p>Given the markup, I suspect it's easier to just dump all data incl. and between column 1 and 5 in a dataframe, starting from row 5. </p>
<p>But how do I do this? </p>
|
<ul>
<li>You want to retrieve the values from the columns "A" and "E" after the row 5 from the Google Spreadsheet.</li>
<li>You want to achieve this using gspread with python.</li>
<li>You have already been able to get and put values for Spreadsheet using Sheets API.</li>
</ul>
<h3>Modification points:</h3>
<ul>
<li>In this modification, all values are first retrieved with <code>get_all_values()</code>. The retrieved values are then processed and converted to the dataframe.</li>
</ul>
<h3>Modified script:</h3>
<p>When your script is modified, it becomes as follows. In this case, it supposes that <code>wsheet</code> can be used.</p>
<h4>From:</h4>
<pre><code>df = pd.DataFrame(wsheet.get_all_records())
</code></pre>
<h4>To:</h4>
<pre><code>v = [[e[0], e[4]] for e in wsheet.get_all_values()]
df = pd.DataFrame(v[4:], columns=v[0])
</code></pre>
<ul>
<li>In this case, <code>df</code> holds the values retrieved from columns "A" and "E" after row 5.</li>
</ul>
<h3>Reference:</h3>
<ul>
<li><a href="https://gspread.readthedocs.io/en/latest/api.html#gspread.models.Worksheet.get_all_values" rel="nofollow noreferrer">get_all_values()</a></li>
</ul>
<h2>Added:</h2>
<p>If you want to retrieve the values from <strong>the columns "A" to "E"</strong> after the row 5 from the Google Spreadsheet, how about the following modification?</p>
<h4>From:</h4>
<pre><code>df = pd.DataFrame(wsheet.get_all_records())
</code></pre>
<h4>To:</h4>
<pre><code>v = [e[0:5] for e in wsheet.get_all_values()]
df = pd.DataFrame(v[4:], columns=v[0])
</code></pre>
|
python|pandas|dataframe|gspread
| 1
|
6,781
| 69,976,066
|
Subplots with counter like legends
|
<p>I have written <code>plot_dataframe()</code> to create two subplots (one for line chart and another for histogram bar chart) for a dataframe that is passed via argument.
Then I call this function from <code>plot_kernels()</code> with multiple dataframes.</p>
<pre><code>def plot_dataframe(df, cnt):
row = df.iloc[0].astype(int) # First row in the dataframe
plt.subplot(2, 1, 1)
row.plot(legend=cnt) # Line chart
plt.subplot(2, 1, 2)
df2 = row.value_counts()
df2.reindex().plot(kind='bar', legend=cnt) # Histogram
def plot_kernels(my_dict2):
plt.figure(figsize=(20, 15))
cnt=1
for key in my_dict2:
df = my_dict2[key]
plot_dataframe(df, cnt)
cnt = cnt + 1
plt.show()
</code></pre>
<p>The dictionary looks like</p>
<pre><code>{'K1::foo(bar::z(x,u))': Value Value
0 10 2
1 5 2
2 10 2, 'K3::foo(bar::y(z,u))': Value Value
0 6 12
1 7 13
2 8 14}
</code></pre>
<p>And based on the values in row[0], [10,2] are shown in blue line and [6,12] are shown in orange line. For histogram, they are similar. As you can see the legends in the subplots are shown as 0 in the figure. I expect to see 1 and 2. How can I fix that?</p>
<p><a href="https://i.stack.imgur.com/oGAbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oGAbL.png" alt="enter image description here" /></a></p>
|
<p>Change <code>legend</code> to <code>label</code>, then force the legend after you plot everything:</p>
<pre><code>def plot_dataframe(df, cnt,axes):
row = df.iloc[0].astype(int) # First row in the dataframe
row.plot(label=cnt, ax=axes[0]) # Line chart -- use label, not legend
df2 = row.value_counts()
df2.plot(kind='bar', ax=axes[1], label=cnt) # Histogram
def plot_kernels(d):
# I'd create the axes first and pass to the plot function
fig,axes = plt.subplots(2,1, figsize=(20, 15))
cnt=1
for key in d:
df = d[key]
plot_dataframe(df, cnt, axes=axes)
cnt = cnt + 1
# render the legend
for ax in axes:
ax.legend()
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/f9uug.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f9uug.png" alt="enter image description here" /></a></p>
|
pandas
| 1
|
6,782
| 69,962,366
|
BERT Additional pretraining in TF-Keras
|
<p>I'm currently developing a project involving sequence multilabel classification. Since I'm using a highly technical dataset, I thought that doing additional pretraining on BERT before fine-tuning it for the classification part would be beneficial. But I can't find any guide to using Huggingface transformers and Keras together to pre-train the model. My idea is to pre-train the model on my dataset, then save it and load it again to fine-tune the classifier. Every code sample that I found is meant for PyTorch, but I have to use TensorFlow. I have written this code so far:</p>
<pre><code>from transformers import TFDistilBertForMaskedLM, AutoTokenizer, AutoConfig
from sklearn.datasets import fetch_20newsgroups
categories = ['alt.atheism', 'soc.religion.christian','comp.graphics', 'sci.med']
twenty_train = fetch_20newsgroups(subset='train',categories=categories, shuffle=True, random_state=42)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-cased")
model = TFDistilBertForMaskedLM.from_pretrained("distilbert-base-cased")
model.compile(optimizer="adam")
data = tokenizer(
twenty_train.data[:10],
return_tensors="tf",
padding=True,
truncation=True,
max_length=tokenizer.model_max_length
)
</code></pre>
<p>Where do I go from here to fit my data to BERT? I know I should provide the model a masked input too, but I can't understand where or how.</p>
|
<p>You can use BERT model to pre-training on your custom dataset.</p>
<p><strong>Sample working code</strong></p>
<pre><code>import os
import tensorflow as tf
import tensorflow_hub as hub
bert_preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
bert_encoder = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4",trainable=True)
#get sentence embeddings
def get_sentence_embeding(sentences):
preprocessed_text = bert_preprocess(sentences)
return bert_encoder(preprocessed_text)['pooled_output']
get_sentence_embeding([
"How to find which version of TensorFlow is",
"TensorFlow not found using pip"]
)
def build_classifier_model(num_classes):
class Classifier(tf.keras.Model):
def __init__(self, num_classes):
super(Classifier, self).__init__(name="prediction")
self.encoder = hub.KerasLayer(bert_encoder, trainable=True)
self.dropout = tf.keras.layers.Dropout(0.1)
self.dense = tf.keras.layers.Dense(num_classes)
def call(self, preprocessed_text):
encoder_outputs = self.encoder(preprocessed_text)
pooled_output = encoder_outputs["pooled_output"]
x = self.dropout(pooled_output)
x = self.dense(x)
return x
model = Classifier(num_classes)
return model
text_preprocessed = bert_preprocess(
    ["How to find which version of TensorFlow is"])  # example input
test_classifier_model = build_classifier_model(2)
bert_raw_result = test_classifier_model(text_preprocessed)
print(tf.sigmoid(bert_raw_result))
</code></pre>
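<p>If you specifically want masked-LM pre-training with the Huggingface model from your question, a minimal sketch (assuming transformers ≥ 4.x, where the TF models compute the MLM loss internally when <code>labels</code> are passed; verify the collator call against your installed version):</p>
<pre><code>from transformers import DataCollatorForLanguageModeling

# masks 15% of the tokens and builds the `labels` tensor for the MLM objective
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15, return_tensors="tf")
features = [{"input_ids": ids} for ids in data["input_ids"].numpy().tolist()]
batch = collator(features)

model.fit(dict(batch), epochs=1)  # loss comes from the model's own MLM head
model.save_pretrained("my-pretrained-distilbert")  # reload later for fine-tuning
</code></pre>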
|
python|tensorflow|keras|deep-learning|huggingface-transformers
| 1
|
6,783
| 69,843,011
|
Pandas Dataframe automatically changing from int8 to float
|
<p>I have a very large matrix of 100K x 100K and to manage memory better, I am setting this as a dataframe with int8 as datatype (for 1 byte per cell). However, it gets set as a float with 8 bytes per cell. Where am I going wrong?</p>
<pre><code>import pandas as pd

df = pd.DataFrame()
df = df.astype('int8')
mat_len = 100  # 100_000 in the real case; 100 here so the test matches the output below
for i in range(0, mat_len):
    new_row = pd.Series([0] * mat_len)
    df = df.append(new_row, ignore_index=True)  # initializing matrix

for i in range(0, mat_len):
    for j in range(i, mat_len):
        df.iloc[i, j] = i + j  # simplified calc for testing purposes

print(df.info())
</code></pre>
<p><strong>output</strong>: dtypes: float64(100)</p>
|
<p>You should always avoid <code>append</code> on DataFrames/Series, especially in a loop — each call copies the whole frame, so it is very slow. It is also why you end up with floats: <code>astype('int8')</code> on the still-empty frame has no effect, and the repeated append/concat steps upcast the columns to <code>float64</code>. First generate the data, then create a DataFrame from it.</p>
<blockquote>
<p>df.iloc[i,j] = i+j #simplified calc for testing purposes</p>
</blockquote>
<p>How complex is your calculation? In this simple case, your code can be greatly simplified and optimized by using <a href="https://numpy.org/doc/stable/reference/generated/numpy.fromfunction.html" rel="nofollow noreferrer"><code>numpy.fromfunction</code></a>
and <a href="https://numpy.org/doc/stable/reference/generated/numpy.triu.html" rel="nofollow noreferrer"><code>numpy.triu</code></a>:</p>
<pre><code>import numpy as np
import pandas as pd

mat_len = 100_000
# create the matrix from a function of the indices;
# note that fromfunction's `dtype` sets the dtype of the *index* arrays,
# so use one wide enough for the indices and cast the result to int8
mat = np.fromfunction(lambda i, j: i + j, (mat_len, mat_len), dtype='int32').astype('int8')
# make it an upper triangular matrix
mat = np.triu(mat)
# create a DataFrame with it
df = pd.DataFrame(mat)
</code></pre>
|
python|pandas|dataframe|memory
| 2
|
6,784
| 69,949,563
|
Python Pandas VLOOKUP function with categorical and non-numeric values
|
<p>I want to optimize a process of a "vlookup" in Python that works but is not scalable in its current form. I have tried pythons pivot.table and pivot but it's been limited due to alphanumeric and string values in cells. I have two tables:</p>
<p><strong>table1:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ProductID</th>
<th>Sales</th>
</tr>
</thead>
<tbody>
<tr>
<td>123456</td>
<td>34</td>
</tr>
<tr>
<td>abc123</td>
<td>34</td>
</tr>
<tr>
<td>123def</td>
<td>34</td>
</tr>
<tr>
<td>a1234f</td>
<td>34</td>
</tr>
<tr>
<td>1abcd6</td>
<td>34</td>
</tr>
</tbody>
</table>
</div>
<p><strong>table2:</strong></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Brand</th>
<th>Site1</th>
<th>Site2</th>
<th>Site3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Brand1</td>
<td>123456</td>
<td>N/A</td>
<td>N/A</td>
</tr>
<tr>
<td>Brand2</td>
<td>N/A</td>
<td>abc123</td>
<td>N/A</td>
</tr>
<tr>
<td>Brand1</td>
<td>N/A</td>
<td>N/A</td>
<td>123def</td>
</tr>
<tr>
<td>Brand2</td>
<td>N/A</td>
<td>1abcd6</td>
<td>N/A</td>
</tr>
<tr>
<td>Brand1</td>
<td>a1234f</td>
<td>N/A</td>
<td>N/A</td>
</tr>
</tbody>
</table>
</div>
<p>What I originally wanted to see was sales by brand:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Brand</th>
<th>Sales</th>
</tr>
</thead>
<tbody>
<tr>
<td>Brand1</td>
<td>102</td>
</tr>
<tr>
<td>Brand2</td>
<td>68</td>
</tr>
</tbody>
</table>
</div>
<p>Here's the pseudocode I've basically built out in Python and Pandas:</p>
<pre><code>import pandas as pd

# read sales and product tables into pandas
sales_df = pd.read_csv(table1)
product_df = pd.read_csv(table2)

# isolate each product id column into separate dfs
product_site1_df = product_df.drop(['Site2', 'Site3'], axis=1)
product_site2_df = product_df.drop(['Site1', 'Site3'], axis=1)
product_site3_df = product_df.drop(['Site1', 'Site2'], axis=1)

# rename and append all product ids into a single column
product_site1_df = product_site1_df.rename(columns={"Site1": "ProductID"})
product_site2_df = product_site2_df.rename(columns={"Site2": "ProductID"})
product_site3_df = product_site3_df.rename(columns={"Site3": "ProductID"})
product_list_master_df = pd.concat([product_site1_df, product_site2_df, product_site3_df])

# compare sales df and product df, pulling brand in as a new column of the sales table
inner_join = pd.merge(sales_df,
                      product_list_master_df,
                      on='ProductID',
                      how='inner')
</code></pre>
<p>This is obviously very procedural, not scalable, computationally redundant, and seems very round-about to get to what I want. Additionally, I'm losing data such as if I want to do a pivot based on sites rather than sales. Short of changing the data model itself, what can I do here to improve speed, versatility, and lines of code?</p>
|
<p>Assuming the dataframes are named <code>df1</code> and <code>df2</code>, you can reshape and <code>map</code> to perform the <em>VLOOKUP</em>, then <code>groupby</code>+<code>sum</code>:</p>
<pre><code>(df2.set_index('Brand')
    .stack()
    .map(df1.set_index('ProductID')['Sales'])
    .groupby(level='Brand').sum()
)
</code></pre>
<p>Output:</p>
<pre><code>Brand
Brand1 102
Brand2 68
</code></pre>
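<p>The question also mentions wanting to pivot by site; the same reshape keeps the site level in the stacked index, so you can aggregate on it instead (a sketch):</p>
<pre><code>s = df2.set_index('Brand').stack().map(df1.set_index('ProductID')['Sales'])
s.groupby(level=1).sum()   # per-site totals: Site1 68, Site2 68, Site3 34
</code></pre>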
|
python|pandas|merge|vlookup
| 2
|
6,785
| 69,666,353
|
Merge two dataframes on multiple columns but only merge on columns if both not NaN
|
<p>I'm looking to merge two dataframes across multiple columns but with some additional conditions.</p>
<pre><code>import pandas as pd

df1 = pd.DataFrame({
    'col1': ['a', 'b', 'c', 'd'],
    'optional_col2': ['X', None, 'Z', 'V'],
    'optional_col3': [None, 'def', 'ghi', 'jkl']
})

df2 = pd.DataFrame({
    'col1': ['a', 'b', 'c', 'd'],
    'optional_col2': ['X', 'Y', 'Z', 'W'],
    'optional_col3': ['abc', 'def', 'ghi', 'mno']
})
</code></pre>
<p>I would like to always join on <code>col1</code> but then try to also join on <code>optional_col2</code> and <code>optional_col3</code>. In <code>df1</code>, the value can be <code>NaN</code> for both columns but it is always populated in <code>df2</code>. I would like the join to be valid when the <code>col1</code> + one of <code>optional_col2</code> or <code>optional_col3</code> match.</p>
<p>This would result in <code>['a', 'b', 'c']</code> joining, due to an <code>optional_col2</code> match, an <code>optional_col3</code> match, and an exact match on both, respectively.</p>
<p>In SQL I suppose you could write the join as this, if it helps explain further:</p>
<pre><code>select
    *
from
    df1
inner join
    df2
    on df1.col1 = df2.col1
    and (df1.optional_col2 = df2.optional_col2 or df1.optional_col3 = df2.optional_col3)
</code></pre>
<p>I've messed around with <code>pd.merge</code> but can't figure how to do a complex operation like this. I think I can do a merge on <code>['col1', 'optional_col2']</code> then a second merge on <code>['col1', 'optional_col_3']</code> then union and drop duplicates?</p>
<p>Expected DataFrame would be something like:</p>
<pre><code>merged_df = pd.DataFrame({
    'col1': ['a', 'b', 'c'],
    'optional_col_2': ['X', 'Y', 'Z'],
    'optional_col_3': ['abc', 'def', 'ghi']
})
</code></pre>
|
<p>This solution works by creating an extra column called "temp" in both dataframes. In <code>df1</code> it will be a column of true values. In <code>df2</code> the values will be true if there is a match in either of the optional columns. I'm not clear whether you consider a <code>NaN</code> value to be matchable or not; if so, then you need to fill in the <code>NaN</code>s of the columns in <code>df1</code> with values from <code>df2</code> before comparing, to fulfill your criteria around missing values (this is what is done below). If this is not required, then drop the <code>fillna</code> calls in the example below.</p>
<pre><code>df1["temp"] = True
optional_col2_match = df1["optional_col2"].fillna(df2["optional_col2"]).eq(df2["optional_col2"])
optional_col3_match = df1["optional_col3"].fillna(df2["optional_col3"]).eq(df2["optional_col3"])
df2["temp"] = optional_col2_match | optional_col3_match
</code></pre>
<p>Then use the "temp" column in the merge, and then drop it - it has served its purpose</p>
<pre><code>pd.merge(df1, df2, on=["col1", "temp"]).drop(columns="temp")
</code></pre>
<p>This gives the following result</p>
<pre><code> col1 optional_col2_x optional_col3_x optional_col2_y optional_col3_y
0 a X abc X abc
1 b Y def Y def
2 c Z ghi Z ghi
</code></pre>
<p>You will need to decide what to do here. In the example you gave there are no rows which match on just one of <code>optional_col2</code> and <code>optional_col3</code>, which is why the three-column expected output looks reasonable. This won't generally be the case.</p>
|
python|pandas|dataframe
| 1
|
6,786
| 43,385,303
|
tensorflow built from source using cuda or not?
|
<p>I built tensorflow with GPU support from source for python on macOS following the official instructions. When I import tensorflow though, I don't get the typical CUDA loading messages I do when I use the pip version (as below).</p>
<pre><code>I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:135] successfully opened CUDA library libcurand.so.8.0 locally
</code></pre>
<p>However, when I run my test program with my build, I do see that the GPU is being found and used (I think).</p>
<pre><code>~/Drive/thesis/image_keras$ python3 demo.py
Using TensorFlow backend.
Found 2125 images belonging to 2 classes.
Found 832 images belonging to 2 classes.
demo.py:64: UserWarning: Update your `fit_generator` call to the Keras 2 API: `fit_generator(<keras.pre..., validation_data=<keras.pre..., steps_per_epoch=128, epochs=25, validation_steps=832)`
nb_val_samples=nb_validation_samples)
Epoch 1/25
2017-04-13 08:39:24.542434: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:865] OS X does not support NUMA - returning NUMA node zero
2017-04-13 08:39:24.542538: I tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 with properties:
name: GeForce GT 750M
major: 3 minor: 0 memoryClockRate (GHz) 0.9255
pciBusID 0000:01:00.0
Total memory: 2.00GiB
Free memory: 1.77GiB
2017-04-13 08:39:24.542551: I tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0
2017-04-13 08:39:24.542557: I tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0: Y
2017-04-13 08:39:24.542566: I tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GT 750M, pci bus id: 0000:01:00.0)
49/128 [==========>...................] - ETA: 18s - loss: 0.7352 - acc: 0.5166
</code></pre>
<p>It looks like it's using the GPU, but without the CUDA loading messages I'm not sure. If it makes a difference, I am running <code>CUDA-8.0</code> with <code>cuDNN-8.0-v5.1</code>.</p>
|
<pre><code>import tensorflow

tensorflow.test.is_gpu_available()
tensorflow.test.is_built_with_cuda()
</code></pre>
<p>If you run these, and TensorFlow is built with CUDA, then both functions should return <b>True</b>.</p>
<p>I have to use this check because, as given in the previous answer, I don't get an output with the "successfully opened CUDA library" lines printed as shown, even though I'm using the pip version. I use TensorFlow 1.4.0.</p>
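<p>On TensorFlow 2.x, <code>is_gpu_available</code> is deprecated; the equivalent check is:</p>
<pre><code>import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # non-empty list if a GPU is visible
print(tf.test.is_built_with_cuda())
</code></pre>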
|
python|tensorflow
| 0
|
6,787
| 43,215,715
|
Pandas 5yr & 10yr Moving average
|
<p>I have a dataframe where my index is datetime dtype, but the dates are not sequential. I am looking to calculate the 5-year and 10-year moving averages of my dataset. With <code>rolling_mean</code> I can take the average based on what I set the window to; however, as the dates are not sequential, this does not work.</p>
<p>Dataframe:</p>
<pre><code>Date        Count
1981-01-08     10
1981-05-12     65
1982-03-17     96
1982-09-15     33
1982-12-01     85
1983-02-03     14
.
.
.
2017-01-28     56
</code></pre>
<p>Code:</p>
<pre><code>counts_df = pd.DataFrame(df.groupby('DATE').size().rename('counts'))
start_date_periods = counts_df.loc[counts_df.index > '1981-01-01']
start_date_periods['5yrMA'] = pd.rolling_mean(start_date_periods, window=5)
start_date_periods['10yrMA'] = pd.rolling_mean(start_date_periods, window=10)
</code></pre>
|
<p>This is one of those cases of the rolling function working as advertised but not doing what you want it to do. In the latest versions of Pandas you should get a warning when using <code>rolling_mean</code> as it's being deprecated in favor of <code>rolling</code> so for illustration I'll use <code>rolling</code>:</p>
<p>The rolling function is designed to work with any data, not just time series. So it 'looks back' x number of units. The look back is set with the <code>window</code> parameter. And it does the look back based on the sort order of the dataframe. So even if you sort your data correctly, <code>rolling</code> doesn't know that you mean years when you give it a window of 5... it sees only "look back 5 cells"</p>
<p>So if you want to look back 5 years against data with missing values you need to fill those values with something. You can use <code>NaN</code> or you can use one of the many interpolation methods provided by Pandas. I'll illustrate the <code>NaN</code> method:</p>
<p>since you didn't provide some easy to use synthetic data, I set some up:</p>
<pre><code>import numpy as np
import pandas as pd

np.random.seed(1)
ts_data = pd.DataFrame(np.random.randn(6210),
                       index=pd.date_range('2000-01-01', '2016-12-31', freq='D'),
                       columns=['data'])  # index of every day for 17 years
ts_sample = ts_data.sample(n=10).sort_index()  # sample, then sort
print(ts_sample)
</code></pre>
<p>that returns a nicely sorted example df with 10 values and a date index:</p>
<pre><code> data
2001-07-21 0.107343
2003-07-12 0.658537
2004-08-21 -0.463338
2006-07-13 -0.866955
2011-12-14 0.020956
2012-05-14 -2.685125
2012-12-27 0.494037
2013-06-09 -1.299026
2013-12-12 0.371309
2015-06-17 0.201656
</code></pre>
<p>so in order to fill in those missing values, let's create a new df with nothing but a full index with all days:</p>
<pre><code>full_period = pd.DataFrame(index = pd.date_range('2000-01-01', '2016-12-31', freq='D') )
</code></pre>
<p>Because of how Pandas uses indexes, if you pop our example data into a column, Pandas will fill in missing values with <code>NaN</code>:</p>
<pre><code>full_period['data'] = ts_sample.data
print(full_period['2015-06-16':'2015-06-18'])
</code></pre>
<p>and I'm only printing three days so we can see how it popped the data in:</p>
<pre><code> data
2015-06-16 NaN
2015-06-17 0.201656
2015-06-18 NaN
</code></pre>
<p>So now we have a full set of daily data with missing data filled with <code>NaN</code>. Now we can do the rolling mean:</p>
<pre><code>rolling = full_period.rolling(min_periods=1, window=365*5, center=False).mean()  # daily data, so a 5-year window in days
print(rolling['2015-06-16':'2015-06-18'])
</code></pre>
<p>and, once again, printing the same 3 values:</p>
<pre><code> data
2015-06-16 -0.619570
2015-06-17 -0.482699
2015-06-18 -0.482699
</code></pre>
<p>If you want to select back only the rolling averages for your original dates, you can do that with a little one-liner:</p>
<pre><code>print(rolling.loc[ts_sample.index])
data
2001-07-21 0.107343
2003-07-12 0.382940
2004-08-21 0.100847
2006-07-13 -0.141103
2011-12-14 0.020956
2012-05-14 -1.332085
2012-12-27 -0.723377
2013-06-09 -0.867290
2013-12-12 -0.619570
2015-06-17 -0.482699
</code></pre>
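<p>As a side note, recent pandas versions also accept a time-based offset as the window, which handles an irregular datetime index directly and avoids the daily reindex entirely (a sketch; boundary behaviour at the window edges may differ slightly from the cell-count version):</p>
<pre><code>rolling_direct = ts_sample.rolling('1826D', min_periods=1).mean()  # ~5 calendar years
print(rolling_direct)
</code></pre>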
|
pandas|moving-average
| 4
|
6,788
| 72,253,277
|
Pandas Resample OHLC
|
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>close</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-02-21</td>
<td>3</td>
</tr>
<tr>
<td>2022-02-22</td>
<td>1</td>
</tr>
<tr>
<td>2022-02-23</td>
<td>5</td>
</tr>
<tr>
<td>2022-02-24</td>
<td>5</td>
</tr>
<tr>
<td>2022-02-25</td>
<td>7</td>
</tr>
<tr>
<td>2022-03-02</td>
<td>4</td>
</tr>
<tr>
<td>2022-03-03</td>
<td>2</td>
</tr>
<tr>
<td>2022-03-04</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>My output should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>close</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-02-21</td>
<td>7</td>
</tr>
<tr>
<td>2022-03-02</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried</p>
<pre><code>df.resample('W-MON', closed='left', label='left').last()
</code></pre>
<p>but I got the wrong label:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>close</th>
</tr>
</thead>
<tbody>
<tr>
<td>2022-02-21</td>
<td>7</td>
</tr>
<tr>
<td>2022-02-28</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>The problem is that I could have "missing days", like 2022-02-28 and 2022-03-01, and I would like to use the first "available" day of each week, e.g. 2022-02-21 (Monday) and 2022-03-02 (Wednesday).</p>
|
<p>You can assign the index to a new column, then keep the first value in each group:</p>
<pre class="lang-py prettyprint-override"><code>out = (df.assign(index=df.index)
.groupby(pd.Grouper(freq='W-MON', closed='left', label='left')).agg({'index': 'first', 'close': 'last'})
.reset_index(drop=True))
</code></pre>
<pre><code>print(out)
index close
0 2022-02-21 7
1 2022-03-02 1
</code></pre>
|
pandas|pandas-resample
| 1
|
6,789
| 72,232,619
|
Searching a value within range between columns in pandas (not date columns and no sql)
|
<p>Thanks in advance for the help. I have two dataframes, given below. I need to create a <code>Category</code> column in the <code>sold</code> frame based on information in the <code>size</code> frame: check the size of each sold product against the <code>Min Size</code>/<code>Max Size</code> ranges for that product and return the matching group. Is it possible to do this in pandas (not SQL)? I think the merge and join methods will not work here.</p>
<pre><code>size = pd.DataFrame({"Min Size": [30, 41, 40],
                     "Max Size": [40, 60, 50],
                     "Category": ['small', 'big', 'medium'],
                     "Product": ['Apple', 'Apple', 'Peach']})

sold = pd.DataFrame({"Purchase_date": ["20/01/2020", "18/02/2020", "01/06/2020"],
                     "Size": [35, 45, 42],
                     "Category": ["small", "big", "medium"],
                     "Product": ['Apple', 'Peach', 'Apple']})
</code></pre>
|
<p>Join conditions in pandas must be exact matches; there is no <code>BETWEEN ... AND ...</code> clause like in SQL.</p>
<p>You can use numpy broadcast to compare every row in <code>sold</code> to every row in <code>size</code> and filter for a match:</p>
<pre class="lang-py prettyprint-override"><code># Converting everything to numpy for comparison
sold_product = sold["Product"].to_numpy()[:, None]
sold_size = sold["Size"].to_numpy()[:, None]
product, min_size, max_size = size[["Product", "Min Size", "Max Size"]].T.to_numpy()
# Compare every row in `sold` to every row in `size`.
# `mask` is a len(sold) * len(size) matrix whose value
# indicate if row i in `sold` matches row j in `size`
mask = (sold_product == product) & (min_size <= sold_size) & (sold_size <= max_size)
# Indices of the matching (sold, size) row pairs;
# this assumes at most one match per `sold` row
idx, join_key = mask.nonzero()
sold.loc[idx, "join_key"] = join_key
# Result
sold.merge(
    size[["Category"]],
    how="left",
    left_on="join_key",
    right_index=True,
    suffixes=("_Expected", "_Actual"),
)
</code></pre>
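<p>If the frames are small enough to materialize the product-level join, a plain merge-then-filter gives the same result with less machinery (a sketch):</p>
<pre class="lang-py prettyprint-override"><code># merge on the exact key, then filter the ranged condition afterwards
merged = sold.merge(size, on="Product", suffixes=("_Expected", "_Actual"))
result = merged[(merged["Size"] >= merged["Min Size"]) & (merged["Size"] <= merged["Max Size"])]
</code></pre>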
|
python|pandas|search|lookup
| 0
|
6,790
| 50,344,335
|
Grouping pandas dataframe based on common key
|
<p>I have a file which I have parsed into a pandas DataFrame, and I want to group the values of column 2 by each distinct value in column 3.</p>
<pre><code> 0 1 2 3 4
0 00B2 0 -67 39 1.13
1 00B2 85 -72 39 1.13
2 00B2 1 -67 86 1.13
3 00B2 2 -67 87 1.13
4 00B2 3 -67 88 1.13
5 00B2 91 -67 39 1.13
6 00B2 4 -67 246 1.13
7 00B2 5 -67 78 1.13
8 00B2 6 -67 10 1.13
9 00B2 7 -67 153 1.13
10 00B2 1 -67 38 1.13
11 00B2 8 -67 225 1.13
12 00B2 9 -67 135 1.13
13 00B2 10 -67 23 1.13
14 00B2 4 -67 38 1.13
15 00B2 11 -67 132 1.13
16 00B2 12 -71 214 1.13
17 00B2 13 -71 71 1.13
18 00B2 14 -71 215 1.13
19 00B2 8 -71 38 1.13
20 00B2 15 -71 249 1.13
21 00B2 16 -71 174 1.13
22 00B2 17 -71 196 1.13
23 00B2 18 -71 38 1.13
24 00B2 19 -71 252 1.13
25 00B2 20 -71 196 1.13
26 00B2 21 -71 39 1.13
27 00B2 22 -71 39 1.13
28 00B2 23 -71 252 1.13
29 00B2 24 -71 39 1.13
.. ... .. ... ... ...
</code></pre>
<p>I want output that looks something like this:</p>
<p>DF1:</p>
<pre><code>-67 37
-72 37
-71 37
... ...
</code></pre>
<p>DF2:</p>
<pre><code>-68 38
-67 38
-70 38
... ...
</code></pre>
<p>DF3:</p>
<pre><code>-64 39
-63 39
-62 39
... ...
</code></pre>
<p>I have tried the following:</p>
<pre><code>e1 = pd.DataFrame(e1)
print (e1)
group = e1[3][2] == "group"
print (e1[group])
</code></pre>
<p>This gets me nowhere close to what I want, so how do I group the data according to my requirement?</p>
|
<p>You can create a dictionary of <code>Series</code> by converting the <code>groupby</code> object to a tuple of <code>(key, group)</code> pairs and then to a dict:</p>
<pre><code>d = dict(tuple(df.groupby(3)[2]))
print (d[39])
0 -67
1 -72
5 -67
26 -71
27 -71
29 -71
Name: 2, dtype: int64
</code></pre>
<p>For <code>DataFrame</code>:</p>
<pre><code>d1 = dict(tuple(df.groupby(3)))
print (d1[39])
0 1 2 3 4
0 00B2 0 -67 39 1.13
1 00B2 85 -72 39 1.13
5 00B2 91 -67 39 1.13
26 00B2 21 -71 39 1.13
27 00B2 22 -71 39 1.13
29 00B2 24 -71 39 1.13
</code></pre>
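<p>Either dictionary can then be iterated like any other, for example:</p>
<pre><code>for key, group in d1.items():
    print(key, group.shape)  # one DataFrame per distinct value of column 3
</code></pre>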
|
python|pandas
| 1
|
6,791
| 50,608,658
|
Iterating Through Multiindex Pandas DataFrame to normalize by specific index level
|
<p>I have a multiindexed DataFrame and I want to normalize specific sets of columns (of the "second" index level) against their highest value:</p>
<pre><code> Level1 Level2
Thing1 Thing2 Thing1
Norm1 Norm2 Norm1 Norm2 Norm1 Norm1
0 0.3309 0.5030 1.4494 0.7677 0.2134 0.1235
1 0.1708 0.4845 1.2636 0.8755 0.2419 0.1350
2 0.2272 0.3414 1.2890 0.7636 0.1295 0.0788
3 0.2249 0.3225 1.0368 0.7391 0.1416 0.1267
4 0.2268 0.4230 1.7703 1.0730 0.0198 0.0294
5 0.2078 0.3600 1.7819 0.9052 0.0253 0.0254
6 0.1034 0.1781 3.2156 1.5434 0.1084 0.0452
7 0.0823 0.1574 2.2911 1.3434 0.0440 0.0617
8 0.5260 0.7510 3.1626 2.2208 0.1420 0.0583
9 0.5503 0.6921 3.2830 2.0311 0.0771 0.0677
....
</code></pre>
<p>I want to normalize Norm 1 and Norm 2 by the maximum of either column in each Thing.
I want to preserve column and index names. Is there a good way to iterate through this?</p>
|
<p>I believe you need the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html" rel="nofollow noreferrer"><code>max</code></a> over the first and second levels of the column <code>MultiIndex</code>:</p>
<pre><code>df = df.max(level=[0,1], axis=1)
</code></pre>
<p>Alternative solution is aggregate <code>max</code>:</p>
<pre><code>df = df.groupby(level=[0,1], axis=1).max()
</code></pre>
<hr>
<pre><code>print (df)
Level1 Level2
Thing1 Thing2 Thing1
0 0.5030 1.4494 0.2134
1 0.4845 1.2636 0.2419
2 0.3414 1.2890 0.1295
3 0.3225 1.0368 0.1416
4 0.4230 1.7703 0.0294
5 0.3600 1.7819 0.0254
6 0.1781 3.2156 0.1084
7 0.1574 2.2911 0.0617
8 0.7510 3.1626 0.1420
9 0.6921 3.2830 0.0771
</code></pre>
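<p>If the goal is the normalized frame itself rather than the maxima, one sketch is to broadcast the per-(Level, Thing) maximum back to the original shape and divide, which preserves all column and index names:</p>
<pre><code>maxes = df.groupby(level=[0, 1], axis=1).transform('max')
normalized = df / maxes
</code></pre>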
|
python|pandas
| 1
|
6,792
| 62,716,991
|
Replacing values in certain columns based on a dictionary of thresholds?
|
<p>I am trying to get from this Panda df:</p>
<pre><code> mag ip em as_ppm au_ppm
0 820 6447 99 4670 30
1 774 5827 26 35 97
2 800 9089 75 9727 25
3 584 6122 38 2911 80
4 494 7616 78 6673 67
5 742 6626 30 9424 69
6 803 2136 71 4043 73
7 682 8172 43 8806 26
8 132 1369 41 8267 34
9 680 5536 41 4431 16
</code></pre>
<p>Using these thresholds:</p>
<pre><code>lowThresholds = {'mag':500,'ip':5000, 'em':0, 'as_ppm':0, 'au_ppm':0}
highThresholds = {'mag':1000,'ip':7500, 'em':90, 'as_ppm':8000, 'au_ppm':90}
</code></pre>
<p>To a matrix with the same shape with trues and falses:</p>
<pre><code> mag ip em as_ppm au_ppm
0 True True False True True
1 True True True True False
2 True False True False True
3 True True True True True
4 True False True True True
5 True True True False True
6 True False True True True
7 True False True False True
8 False False True False True
9 True True True True True
</code></pre>
<p>and ideally using:</p>
<pre><code>weights = {'mag':5,'ip':10, 'em':5, 'as_ppm':20, 'au_ppm':30}
</code></pre>
<p>end up with:</p>
<pre><code> mag ip em as_ppm au_ppm
0 5 10 5 20 30
1 5 10 5 20 0
2 5 0 5 0 30
3 5 10 5 20 30
4 5 0 5 20 30
5 5 10 5 0 30
6 5 0 5 20 30
7 5 0 5 0 30
8 0 0 5 0 30
9 5 10 5 20 30
</code></pre>
<p>I have found some terrible ways of doing it by creating various new dataframes, but I know it will scale terribly.</p>
|
<p>Try:</p>
<pre><code>s = (df.lt(highThresholds) & df.gt(lowThresholds)).mul(weights)

   mag  ip  em  as_ppm  au_ppm
0    5  10   0      20      30
1    5  10   5      20       0
2    5   0   5       0      30
3    5  10   5      20      30
4    0   0   5      20      30
5    5  10   5       0      30
6    5   0   5      20      30
7    5   0   5       0      30
8    0   0   5       0      30
9    5  10   5      20      30
</code></pre>
<p>(On pandas versions where the flexible comparison methods don't accept a plain dict, wrap each dict in a Series first, e.g. <code>df.lt(pd.Series(highThresholds))</code>.)</p>
|
python|pandas
| 5
|
6,793
| 62,732,824
|
Convert a string to one hot encoding matrix and then feed to neural network
|
<p>I have many DNA sequence data, which has been read into <em>xtrain</em>. Each sample has a label (classification problem), which has been read into <em>ytrain</em>.</p>
<pre class="lang-py prettyprint-override"><code>tokenizer = keras.preprocessing.text.Tokenizer(char_level=True, lower=True)
tokenizer.fit_on_texts("ATCGN")
# number of distinct characters, should be 5 in this case
max_id = len(tokenizer.word_index)
print(tokenizer.word_index)
</code></pre>
<p>{'a': 1, 't': 2, 'c': 3, 'g': 4, 'n': 5}</p>
<p>One sequence looks like: "---ATCGATN---".</p>
<p>I want to split each sequence into fixed-length (e.g., 4) sub-sequences. Take the sequence above as an example: "ATCG", "TCGA", "CGAT", "GATN". Each sub-sequence will be represented by one row in the matrix, using one-hot encoding for each character. So "A" is something like [0,0,0,0,1] and "T" is something like [0,0,0,1,0]. Concatenating the encodings of all characters in a sub-sequence gives its encoding, so "ATCG" will be something like [0,0,0,0,1,0,0,0,1,0,0,0,1,0,0,...].
In this way, each sequence is turned into a matrix of size (number_of_subsequences, subsequence_length * 5), where 5 comes from tokenizer.word_index.</p>
<p>The following code tries to accomplish this. I am pretty new to TensorFlow, so I cannot figure out how to convert between types or print the real values of tensors. The line <code>[x_encoded] = np.array(tokenizer.texts_to_sequences([x])) - 1</code> gives me <code>AttributeError: 'Tensor' object has no attribute 'lower'</code>.</p>
<pre class="lang-py prettyprint-override"><code>def seq2mat(x, y):
x = tf.strings.regex_replace(x, "-", "")
x = tf.strings.regex_replace(x, 'K', 'N')
[x_encoded] = np.array(tokenizer.texts_to_sequences([x])) - 1
x_dataset = x_dataset.window(kmer_len, shift=3, drop_remainder=True)
x_flat = x_dataset.flat_map(lambda window: window.batch(kmer_len))
x_1hot = x_flat.map(lambda kmer: tf.one_hot(kmer, depth=max_id))
# try to stack them into a matrix
x_np_mat = []
for item in x_1hot:
line = np.array(item)
x_np_mat.append(line.flatten())
x_np_mat = np.array(x_np_mat)
return x_np_mat
batch_size = 8
kmer_len = 8
train_dataset = tf.data.Dataset.from_tensor_slices((xtrain, ytrain))
train_data = train_dataset.shuffle(buffer_size=1000, seed=1)
train_data = train_data.map(seq2mat)
train_data = train_data.batch(batch_size).prefetch(1)
</code></pre>
|
<pre><code>import numpy as np
from tensorflow import keras

tokenizer = keras.preprocessing.text.Tokenizer(char_level=True, lower=True)
tokenizer.fit_on_texts("ATCGN")

x = []
# one integer id per character: [[1], [2], [3], [4], [5]]
seq = np.array(tokenizer.texts_to_sequences('ATCGN'))
# shift the ids to start at 0 and one-hot encode each character
a = keras.utils.to_categorical(seq[:, 0] - 1)
for i in a:
    x = x + list(i)  # concatenate the one-hot vectors into one flat encoding
print(x)
</code></pre>
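<p>A sketch extending the same idea to the fixed-length sub-sequences described in the question (window of 4, stride 1, matching the "ATCG", "TCGA", ... example; the function name is illustrative):</p>
<pre><code>def seq_to_matrix(seq_str, k=4):
    ids = np.array(tokenizer.texts_to_sequences([seq_str])[0]) - 1
    onehot = keras.utils.to_categorical(ids, num_classes=5)
    # one row per window, each row the concatenated one-hot vectors
    rows = [onehot[i:i + k].flatten() for i in range(len(ids) - k + 1)]
    return np.array(rows)

print(seq_to_matrix("ATCGATN").shape)  # (4, 20)
</code></pre>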
|
python|numpy|tensorflow
| 0
|
6,794
| 62,761,435
|
Koalas GroupBy > Apply > Lambda > Series
|
<p>I am trying to port some code from Pandas to Koalas to take advantage of Spark's distributed processing. I am taking a dataframe and grouping it on A and B and then applying a series of functions to populate the columns of the new dataframe. Here is the code that I was using in Pandas:</p>
<pre><code>new = old.groupby(['A', 'B']) \
    .apply(lambda x: pd.Series({
        'v1': x['v1'].sum(),
        'v2': x['v2'].sum(),
        'v3': (x['v1'].sum() / x['v2'].sum()),
        'v4': x['v4'].min()
    })
)
</code></pre>
<p>I believe that it is working well and the resulting dataframe appears to be correct value-wise.</p>
<p>I just have a few questions:</p>
<ol>
<li><p>Does this error mean that my method will be deprecated in the future?
<code>/databricks/spark/python/pyspark/sql/pandas/group_ops.py:76: UserWarning: It is preferred to use 'applyInPandas' over this API. This API will be deprecated in the future releases. See SPARK-28264 for more details.</code></p>
</li>
<li><p>How can I rename the group-by columns to 'A' and 'B' instead of <code>__groupkey_0__</code> and <code>__groupkey_1__</code>?</p>
</li>
<li><p>As you noticed, I had to call pd.Series -- is there a way to do this in Koalas? Calling ks.Series gives me the following error that I am unsure how to implement:
<code>PandasNotImplementedError: The method `pd.Series.__iter__()` is not implemented. If you want to collect your data as an NumPy array, use 'to_numpy()' instead.</code></p>
</li>
</ol>
<p>Thanks for any help that you can provide!</p>
|
<ol>
<li>I'm not sure about the error. I am using <code>koalas==1.2.0</code> and <code>pandas==1.0.5</code> and I don't get the error, so I wouldn't worry about it.</li>
<li>The <code>groupby</code> columns are already called <code>A</code> and <code>B</code> when I run the code. This again may have been a bug which has since been patched.</li>
<li>For this you have 3 options:
<ol>
<li>Keep utilising <code>pd.Series</code>. As long as your original Dataframe is a <code>koalas</code> Dataframe, your output will also be a <code>koalas</code> Dataframe (with the <code>pd.Series</code> automatically converted to <code>ks.Series</code>)</li>
<li>Keep the function and the data exactly the same and just convert the final dataframe to <code>koalas</code> using the <code>from_pandas</code> function</li>
<li>Do the whole thing in <code>koalas</code>. This is slightly more tricky because you are computing an aggregate column based on two <code>GroupBy</code> columns and <code>koalas</code> doesn't support lambda functions as a valid aggregation. One way we can get around this is by computing the other aggregations together and adding the multi-column aggregation afterwards:</li>
</ol>
</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import databricks.koalas as ks
ks.set_option('compute.ops_on_diff_frames', True)
# Dummy data
old = ks.DataFrame({"A":[1,2,3,1,2,3], "B":[1,2,3,3,2,3], "v1":[10,20,30,40,50,60], "v2":[4,5,6,7,8,9], "v4":[0,0,1,1,2,2]})
new = old.groupby(['A', 'B']).agg({'v1':'sum', 'v2':'sum', 'v4': 'min'})
new['v3'] = old.groupby(['A', 'B']).apply(lambda x: x['v1'].sum() / x['v2'].sum())
</code></pre>
|
pandas|pandas-groupby|databricks|pandas-apply|spark-koalas
| 1
|
6,795
| 62,858,493
|
How to insert a gridline in specific position on seaborn heatmap
|
<p>I have the following data and heatmap and would like some help with the formatting of the gridlines:</p>
<pre><code>import pandas as pd
import numpy as np

data = [
    ['tom', 1, 1, 1, '0:00', '10:26'],
    ['tom', 1, 1, 2, '15:30', '18:50'],
    ['tom', 1, 2, 1, '2:00', '9:15'],
    ['tom', 1, 2, 2, '13:10', '22:40'],
    ['tom', 2, 1, 1, '5:00', '22:15'],
    ['tom', 2, 2, 1, '0:00', '13:40']
]

# Create the pandas DataFrame
df = pd.DataFrame(
    columns=['Name',
             'Day',
             'AllottedPeriod',
             'AttemptNo',
             'StartTime',
             'EndTime'],
    data=data,
)

def parse_time_periods(x):
    start_minute, start_second = x['StartTime'].split(':')
    end_minute, end_second = x['EndTime'].split(':')
    # calculate the start and end time in seconds
    start = (int(start_minute) * 60) + int(start_second)
    end = (int(end_minute) * 60) + int(end_second)
    test_range = range(start, end)
    for i in range(1, 26, 1):
        # create range to check for intersection with testing time
        time_range = range((i - 1) * 60, i * 60)
        # create variables to indicate if there is overlap between
        # test time and minute range. For example, if a test was active
        # between minute 10 and minute 15, column `15` will be 1
        if len(set(test_range).intersection(time_range)) > 0:
            x[f'{i:02}'] = 1
    return x

df = df.apply(lambda x: parse_time_periods(x), axis=1).fillna(0)

stacked_df = df.drop(columns=['StartTime', 'EndTime', 'AttemptNo']).groupby(
    by=['Name', 'Day', 'AllottedPeriod']
).agg(sum).unstack().swaplevel(0, 1, 1).sort_index(1)

import seaborn as sns
import matplotlib.pyplot as plt

f, ax = plt.subplots(figsize=(10, 3))
cmap = plt.cm.OrRd_r
ax = sns.heatmap(stacked_df, cbar=False, cmap=cmap)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ecl8e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ecl8e.png" alt="enter image description here" /></a></p>
<p>How can I insert:
a) horizontal gridlines for each line
b) a vertical grid line at the point <code>2-01</code></p>
<p>Just struggling to find out how. Any help would be appreciated! Thanks very much!</p>
|
<p>Seaborn doesn't draw these grid lines directly, so I used matplotlib's <code>ax.hlines()</code> and <code>ax.vlines()</code> to paint horizontal and vertical lines.</p>
<pre><code>h = ax.get_yticks()
print(h)
[0.5 1.5]
w = ax.get_xticks()
print(w)
[ 0.5 1.5 2.5 3.5 4.5 5.5 6.5 7.5 8.5 9.5 10.5 11.5 12.5 13.5
14.5 15.5 16.5 17.5 18.5 19.5 20.5 21.5 22.5 23.5 24.5 25.5 26.5 27.5
28.5 29.5 30.5 31.5 32.5 33.5 34.5 35.5 36.5 37.5 38.5 39.5 40.5 41.5
42.5 43.5 44.5 45.5]
# horizontal line between the two rows, spanning the full width
ax.hlines((h[0] + h[1]) / 2, w[0] - 0.5, w[-1] + 0.5, linestyle='dashed', linewidth=2, color="gray")
# vertical line between the day-1 and day-2 column blocks
ax.vlines(24.5, h[0] - 0.5, h[1] + 0.5, linestyle='dashed', linewidth=2, color="gray")
</code></pre>
<p><a href="https://i.stack.imgur.com/YdKao.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YdKao.png" alt="enter image description here" /></a></p>
|
python|pandas|seaborn
| 2
|
6,796
| 62,569,780
|
access only first 80% columns of a data frame
|
<p>I want to access only the first 80% of the columns of my dataframe and store them in a new dataframe, while storing the remaining 20% in another dataframe. Here is what I tried:</p>
<pre><code>ratings_df=ratings_df.iloc[:,:int(ratings_df.shape()[1]*0.8)-1]
</code></pre>
<p>however this gave an error:</p>
<pre><code>Traceback (most recent call last):
File "S:\TIP\Code\MF_research.py", line 15, in <module>
ratins_df=ratings_df.iloc[:,:int(ratings_df.shape()[1]*0.8)-1]
TypeError: 'tuple' object is not callable
</code></pre>
<p>ratings_df:</p>
<pre><code>MovieID 1 2 3 4 5 6 ... 3947 3948 3949 3950 3951 3952
UserID ...
1 5.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
2 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
3 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
4 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
5 0.0 0.0 0.0 0.0 0.0 2.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
... ... ... ... ... ... ... ... ... ... ... ... ... ...
6036 0.0 0.0 0.0 2.0 0.0 3.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
6037 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
6038 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
6039 0.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
6040 3.0 0.0 0.0 0.0 0.0 0.0 ... 0.0 0.0 0.0 0.0 0.0 0.0
[6040 rows x 3706 columns]
</code></pre>
|
<p>You should remove the parentheses: <code>shape</code> is a tuple attribute, not a method, so it's <code>df.shape[1]</code>, not <code>df.shape()[1]</code>. For readability, I'd also split it up:</p>
<pre><code>shape_80 = int(ratings_df.shape[1] * 0.8) - 1
ratings_df = ratings_df.iloc[:, :shape_80]
</code></pre>
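<p>To also keep the remaining 20% that the question asks for, take both slices before overwriting the original frame:</p>
<pre><code>shape_80 = int(ratings_df.shape[1] * 0.8) - 1
first_80_df = ratings_df.iloc[:, :shape_80]
last_20_df = ratings_df.iloc[:, shape_80:]
</code></pre>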
|
python|pandas|numpy|dataframe|recommendation-system
| 2
|
6,797
| 54,683,569
|
How to concatenate ResNet50 hidden layer with another model input?
|
<p>I am trying to concatenate the output of a hidden layer in ResNet with the input of another model but I get the following error: </p>
<p><em>ValueError: Output tensors to a Model must be the output of a Keras <code>Layer</code> (thus holding past layer metadata)</em></p>
<p>I am using the Concatenate layer from Keras as recommended in <a href="https://stackoverflow.com/questions/43196636/how-to-concatenate-two-layers-in-keras">How to concatenate two layers in keras?</a>, however it did not work. What may I be missing? Do I have to add a dense layer to it too? The idea is not to change the second input until it is concatenated with the first input (the merged input will be an input of a third model).</p>
<pre><code>resnet_features = resnet.get_layer('avg_pool').output
model2_features = Input(shape=(None, 32))
all_features = Concatenate([resnet_features, model2_features])
mixer = Model(inputs=[resnet.input, model2_features],
outputs=all_features)
</code></pre>
|
<p>It looks like you are missing two brackets at your concatenation layer. It should look like this:</p>
<pre><code>all_features = Concatenate()([resnet_features, model2_features])
</code></pre>
<p>Moreover, you have to make sure that the shapes of <code>resnet_features</code> and <code>model2_features</code> are the same except for the concatenation axis since otherwise you won't be able to concatenate them.</p>
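<p>Putting it together with the snippet from the question, a sketch might look like this; the <code>2048</code> is illustrative and assumes <code>avg_pool</code> yields a flat <code>(None, 2048)</code> tensor (older Keras versions keep the spatial dims, so a <code>Flatten</code> would be needed first, and the second input must have a matching rank):</p>
<pre><code>from tensorflow.keras.layers import Input, Concatenate
from tensorflow.keras.models import Model

resnet_features = resnet.get_layer('avg_pool').output
model2_features = Input(shape=(2048,))
all_features = Concatenate()([resnet_features, model2_features])
mixer = Model(inputs=[resnet.input, model2_features], outputs=all_features)
</code></pre>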
|
python|tensorflow|keras|resnet|transfer-learning
| 2
|
6,798
| 54,362,545
|
Calculate the average of the rows for each group
|
<p>I need to calculate the mean of a certain column in a DataFrame so that the mean for each row is computed over the current and all previous rows of its group (an expanding mean). Let's assume we have the dataframe below; the <code>Expected</code> column shows the desired output.</p>
<p>I could iterate over the rows by index, accumulating the previous rows of each group at every step and recomputing the mean, but I wonder if there's a more efficient way of doing it.</p>
<pre><code>unit A Expected
T10 8 8
T10 7 7.5
T10 12 9
T11 10 10
T11 6 8
T12 17 17
T12 7 12
T12 3 9
</code></pre>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.expanding.html#pandas.DataFrame.expanding" rel="nofollow noreferrer"><code>expanding</code></a>:</p>
<pre><code>df2 = df.groupby('unit')['A'].expanding().mean().reset_index()
df['Expected'] = df2['A']
</code></pre>
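<p>Equivalently, you can align on the original index instead of relying on row order (a sketch):</p>
<pre><code>df['Expected'] = (df.groupby('unit')['A']
                    .expanding().mean()
                    .reset_index(level=0, drop=True))
</code></pre>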
|
python|python-3.x|pandas|data-science
| 3
|
6,799
| 73,578,221
|
Adding columns to pandas not working with Django ORM
|
<p>I am trying to add columns to an existing pandas DataFrame. The added column gets its data via the Django ORM. My approaches are the following:</p>
<p>1.</p>
<pre><code>df['name'] = User.objects.get(id=df['id'])
</code></pre>
<ol start="2">
<li></li>
</ol>
<pre><code>df['name'] = df.assign(name=lambda x: User.objects.get(x.id))
</code></pre>
<p>But for both approaches, I get the following error:</p>
<pre><code>TypeError: Field 'code' expected a number but got 0 1
1 18
Name: code, dtype: int64.
</code></pre>
<p><em>The field is expecting a number but getting a pandas Series instead.</em></p>
<p>How shall I approach this?</p>
|
<p>You need a <em>number</em> for the <code>id</code> field in <code>User.objects.get</code>, but <code>df['id']</code> returns a <em>pandas Series</em>.</p>
<p>You can do something like:</p>
<pre class="lang-py prettyprint-override"><code>df = df.assign(name=[User.objects.get(id=x) for x in df['id']])
</code></pre>
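<p>If the frame has many rows, a single bulk query is usually better than one query per row. A sketch (assuming the model exposes a <code>name</code> field, as the new column suggests):</p>
<pre class="lang-py prettyprint-override"><code># fetch all needed users in one query, then map ids to names
names = dict(User.objects.filter(id__in=df['id'].tolist()).values_list('id', 'name'))
df['name'] = df['id'].map(names)
</code></pre>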
|
python|django|pandas
| 1
|