| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
6,600
| 64,094,295
|
How to plot a vertical line on a time series axis?
|
<p>I would like to 'link' two pieces of code. One,</p>
<pre><code>x= df[df['Value']==True].sort_values(by='Date').head(1).Date
Out[111]:
8 2020-03-04
</code></pre>
<p>extracts the date where the first value appears; the other one</p>
<pre><code>df[df['Buy']==1].groupby('Date').size().plot(ax=ax, label='Buy')
</code></pre>
<p>should plot some information through time.</p>
<p>I would like to add a vertical line at the first date where the value is true, i.e. <code>2020-03-04</code>. In order to do it, I would need to extract this information from the first code (not using copy and paste) to the other piece of code which generates the plot.
Can you give me some guide on how to do it? Thanks a lot</p>
<p>Update:</p>
<p>I tried as follows:</p>
<pre><code>x= df[df['Value']==True].sort_values(by='Date').head(1).Date.tolist()
Out[111]:
8 ['2020-03-04']
df[df['Buy']==1].groupby('Date').size().plot(ax=ax, label='Buy')
ax.axvline(x, color="red", linestyle="--")
</code></pre>
<p>but I got a TypeError: unhashable type: 'numpy.ndarray'</p>
<p>Some data:</p>
<pre><code>Date Buy Value
0 2020-02-23 0 False
1 2020-02-23 0 False
2 2020-02-25 0 False
3 2020-02-27 1 False
4 2020-03-03 1 False
5 2020-03-03 1 False
6 2020-03-03 0 False
7 2020-03-04 1 False
8 2020-03-04 0 True
9 2020-03-04 0 True
10 2020-03-04 1 False
11 2020-03-05 0 True
12 2020-03-05 1 False
13 2020-03-05 1 False
14 2020-03-05 1 False
15 2020-03-06 0 False
16 2020-03-06 1 False
17 2020-03-06 1 False
18 2020-03-07 1 False
19 2020-03-07 1 False
20 2020-03-07 1 False
21 2020-03-08 1 False
22 2020-03-08 1 False
23 2020-03-09 1 False
24 2020-03-09 1 False
25 2020-03-09 1 False
26 2020-03-10 1 False
27 2020-03-10 1 False
28 2020-03-10 1 False
29 2020-03-10 0 True
30 2020-03-11 1 False
31 2020-03-11 1 False
32 2020-03-13 0 True
33 2020-03-13 0 False
34 2020-03-15 0 True
35 2020-03-16 0 False
36 2020-03-19 0 False
37 2020-03-22 0 True
</code></pre>
|
<ul>
<li>Make certain the <code>Date</code> column is in a <code>datetime</code> format.</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import random # for test data
import matplotlib.pyplot as plt
# setup sample data
random.seed(365)
rows = 40
data = {'Date': [random.choice(pd.bdate_range('2020-02-23', freq='d', periods=rows).strftime('%Y-%m-%d').tolist()) for _ in range(rows)],
'Buy': [random.choice([0, 1]) for _ in range(rows)],
'Value': [random.choice([False, True]) for _ in range(rows)]}
df = pd.DataFrame(data)
# set the Date column to a datetime
df.Date = pd.to_datetime(df.Date)
# extract values
x = df[df['Value']==True].sort_values(by='Date').head(1).Date
# groupby and plot
ax = df[df['Buy']==1].groupby('Date').size().plot(figsize=(7, 5), label='Buy')
# plot the vertical line; axvline works as long as x is one value
ax.axvline(x, color="red", linestyle="--", label='my value')
# show the legend
plt.legend(bbox_to_anchor=(1.05, 1), loc='upper left')
</code></pre>
<p><a href="https://i.stack.imgur.com/JSxbU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JSxbU.png" alt="enter image description here" /></a></p>
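<p>As a hedged side note (not part of the original answer): if <code>x</code> comes back as a one-element Series (or a one-element list, as in the update), pulling out a single scalar before calling <code>axvline</code> should avoid the unhashable-array error. The sketch below assumes the same <code>x</code> and <code>ax</code> as in the code above:</p>
<pre class="lang-py prettyprint-override"><code># assumption: x is the one-row Series produced by .head(1).Date
first_date = x.iloc[0]  # a single Timestamp, not a Series/list (use x[0] for the list version)
ax.axvline(first_date, color="red", linestyle="--", label='my value')
</code></pre>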
<h3>package versions</h3>
<pre class="lang-py prettyprint-override"><code>import matplotlib as mpl
print(mpl.__version__)
print(pd.__version__)
[out]:
3.3.1
1.1.0
</code></pre>
|
python|pandas|matplotlib
| 1
|
6,601
| 64,154,762
|
"builtin_function_or_method' object has no attribute 'reshape'" what does this mean?
|
<p>I'm a novice, so this question may be somewhat obvious for someone.</p>
<pre><code>import numpy as np
print("array")
array = np.arange(8)
matrix = np.array.reshape(2,4)
print(matrix)
</code></pre>
<p>The result is this.</p>
<pre><code>array
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-88e5e9409344> in <module>
2 print("array")
3 array = np.arange(8)
----> 4 matrix = np.array.reshape(2,4)
5 print(matrix)
AttributeError: 'builtin_function_or_method' object has no attribute 'reshape'
</code></pre>
<p>I don't know why it does not work.</p>
|
<p>It looks like you're calling reshape on <code>np.array</code>, which is a function that is used to create a new array.</p>
<p>You already created your variable <code>array</code>.
Try to use this variable instead of <code>np.array</code>:</p>
<pre><code>import numpy as np
print("array")
array = np.arange(8)
matrix = array.reshape(2,4) # <-- remove the "np." to access a function on your array
print(matrix)
</code></pre>
<p>Why is this?</p>
<p><code>np.array(k)</code> is a function call that creates a new NumPy array from the input <code>k</code>.
The result of this call is returned and saved to a variable (in my case <code>myArray = np.array(k)</code>).</p>
<p>On this array, you can call functions to manipulate it (like <code>reshape</code>).</p>
<p>What you tried to do: You used <code>np.array</code> (remember, the function that creates an array). You did not use your array, but used a function pointer instead.</p>
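<p>As a small aside (a hedged sketch, not from the original answer), the same result can be produced in one step by chaining <code>reshape</code> onto the call that creates the array:</p>
<pre><code>import numpy as np

matrix = np.arange(8).reshape(2, 4)  # create the array and reshape it in one expression
print(matrix)
# [[0 1 2 3]
#  [4 5 6 7]]
</code></pre>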
|
python|arrays|numpy
| 1
|
6,602
| 46,718,178
|
Dataframe columns to key value dictionary pair
|
<p>I have following <code>DataFrame</code> </p>
<pre><code> product count
Id
175 '409' 41
175 '407' 8
175 '0.5L' 4
175 '1.5L' 4
177 'SCHWEPPES' 6
177 'TONIC 1L' 4
</code></pre>
<p>How I can transform it to following list of dictionaries:</p>
<pre><code>[{'409':41,'407':8, '0.5L':4, '1.5L':4},
{'SCHWEPPES':6, 'TONIC 1L':4}]
</code></pre>
<p>Thanks a lot for help.</p>
|
<pre><code>In [14]: df.set_index('product').T.to_dict('r')
Out[14]: [{'0.5L': 4, '1.5L': 4, '407': 8, '409': 41, 'SCHWEPPES': 6, 'TONIC 1L': 4}]
</code></pre>
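<p>A hedged alternative sketch (assuming <code>Id</code> is the index, as shown in the question) that keeps one dictionary per <code>Id</code> instead of a single merged one:</p>
<pre><code># build one dict per Id group
out = [dict(zip(g['product'], g['count'])) for _, g in df.groupby(level=0)]
# [{'409': 41, '407': 8, '0.5L': 4, '1.5L': 4}, {'SCHWEPPES': 6, 'TONIC 1L': 4}]
</code></pre>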
|
python|pandas|dataframe
| 3
|
6,603
| 46,903,963
|
Python, numpy, piecewise answer gets rounded
|
<p>I have a question with regards to the outputs of numpy.piecewise.</p>
<p>my code:</p>
<pre><code>e=110
f=np.piecewise(e,[e<120,e>=120],[1/4,1])
print(f)
</code></pre>
<p>As a result I get
0
and not the desired 0.25.</p>
<p>Can someone explain to me why piecewise seems to be rounding my answer?
Is there a way to work around this without doing the following?</p>
<pre><code>e=110
f=np.piecewise(e,[e<120,e>=120],[1,4])/4
print(f)
</code></pre>
<p>many thanks in advance</p>
|
<p>From the <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.piecewise.html" rel="nofollow noreferrer"><code>np.piecewise</code> docs</a>:</p>
<blockquote>
<p>The output is the same shape and type as x</p>
</blockquote>
<p>Integer in, integer out. If you want a floating-point output, you need to pass a float in:</p>
<pre><code>e = 110.0
</code></pre>
<p>Additionally, if you're on Python 2, <code>1/4</code> will already just be <code>0</code> because that's floor division on Python 2, so you'll need to write something like <code>1.0/4.0</code> or just <code>0.25</code>.</p>
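<p>A minimal sketch of the fix, assuming the same piecewise definition as in the question:</p>
<pre><code>import numpy as np

e = 110.0  # float in, float out
f = np.piecewise(e, [e < 120, e >= 120], [0.25, 1])
print(f)   # 0.25
</code></pre>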
|
python|numpy|piecewise
| 0
|
6,604
| 32,751,512
|
Renaming columns in pandas dataframe using regular expressions
|
<pre><code> Y2010 Y2011 Y2012 Y2013 test
0 86574 77806 93476 99626 2
1 60954 67873 65135 64418 4
2 156 575 280 330 6
3 1435 1360 1406 1956 7
4 3818 7700 6900 5500 8
</code></pre>
<p>Is there a way to rename the columns of this dataframe from Y2010... to 2010.. i.e. removing the initial 'Y'. I want to use regular expressions since I have quite a few such columns. I tried this:</p>
<pre><code>df.rename(df.filter(regex='^Y\d{4}').columns.values, range(2010, 2013 + 1, 1))
</code></pre>
<p>--EDIT:
The dataframe does include columns which do not start with a 'Y'.</p>
|
<p>I'd use map:</p>
<pre><code>In [11]: df.columns.map(lambda x: int(x[1:]))
Out[11]: array([2010, 2011, 2012, 2013])
In [12]: df.columns = df.columns.map(lambda x: int(x[1:]))
In [13]: df
Out[13]:
2010 2011 2012 2013
0 86574 77806 93476 99626
1 60954 67873 65135 64418
2 156 575 280 330
3 1435 1360 1406 1956
4 3818 7700 6900 5500
</code></pre>
<hr>
<p>Edit: I forgot the <a href="https://stackoverflow.com/a/16667215/1240268">most-popular pandas question</a>:</p>
<pre><code>In [21]: df.rename(columns=lambda x: int(x[1:]))
Out[21]:
2010 2011 2012 2013
0 86574 77806 93476 99626
1 60954 67873 65135 64418
2 156 575 280 330
3 1435 1360 1406 1956
4 3818 7700 6900 5500
</code></pre>
<p>If you have additional columns, I would probably write a proper function (rather than a lambda):</p>
<pre><code>def maybe_rename(col_name):
if re.match(r"^Y\d{4}", col_name):
return int(col_name[1:])
else:
return col_name
In [31]: df.rename(columns=maybe_rename)
Out[31]:
2010 2011 2012 2013 test
0 86574 77806 93476 99626 2
1 60954 67873 65135 64418 4
2 156 575 280 330 6
3 1435 1360 1406 1956 7
4 3818 7700 6900 5500 8
</code></pre>
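<p>As a hedged aside (not part of the original answer), a vectorised regex replace on the column index also works if string column names are acceptable instead of integers:</p>
<pre><code>df.columns = df.columns.str.replace(r'^Y(?=\d{4}$)', '', regex=True)
# note: this leaves '2010', '2011', ... as strings; 'test' is untouched
</code></pre>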
|
python|pandas
| 13
|
6,605
| 33,051,460
|
Numpy array of dicts
|
<p>In Python, I have a dict has pertinent information</p>
<pre><code>arrowCell = {'up': False, 'left': False, 'right': False}
</code></pre>
<p>How do I make an array with i rows and j columns of these dicts?</p>
|
<p>How to make a two-dimensional i-by-j array is explained really well on the site already, at this link:
<a href="https://stackoverflow.com/questions/6667201/how-to-define-two-dimensional-array-in-python">How to define two-dimensional array in python</a>.</p>
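<p>Since the question mentions NumPy, here is a minimal hedged sketch of one way to hold the dicts in a NumPy object array (the shape and field values are just placeholders):</p>
<pre><code>import numpy as np

i, j = 3, 4
grid = np.empty((i, j), dtype=object)   # object array so each cell can hold a dict
for r in range(i):
    for c in range(j):
        # a fresh dict per cell, so cells do not share state
        grid[r, c] = {'up': False, 'left': False, 'right': False}

grid[1, 2]['up'] = True                 # only that one cell changes
</code></pre>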
<p>Hope this helps, cheers!</p>
|
python|numpy|dictionary|multidimensional-array
| 1
|
6,606
| 63,221,468
|
RuntimeError: DataLoader worker (pid 27351) is killed by signal: Killed
|
<p>I'm running the data loader below, which applies a filter to a microscopy image prior to training in order to count the red and green cells. This code filters the red cells. Since I added this to the code I keep getting the error message above. I have tried increasing the memory allocation to the maximum allowance possible, but that didn't help. Is there a way I could modify the filter so it isn't causing this issue, please? Many thanks in advance.</p>
<pre><code>import os
import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms, utils
#from torchvision.transforms import Grayscalei
import pandas as pd
import pdb
import cv2
class CellsDataset(Dataset):
# a very simple dataset
def __init__(self, root_dir, transform=None, return_filenames=False):
self.root = root_dir
self.transform = transform
self.return_filenames = return_filenames
self.files = [os.path.join(self.root,filename) for filename in os.listdir(self.root)]
self.files = [path for path in self.files
if os.path.isfile(path) and os.path.splitext(path)[1]=='.png']
def __len__(self):
return len(self.files)
def __getitem__(self, idx):
path = self.files[idx]
image = cv2.imread(path)
sample = image.copy()
# set blue and green channels to 0
sample[:, :, 0] = 0
sample[:, :, 1] = 0
# keeping only the red channel
if self.transform:
sample = self.transform(sample)
if self.return_filenames:
return sample, path
else:
return sample
</code></pre>
|
<p>I have met a similar problem before. <br/>
One possible solution is to disable <code>cv2</code> multi-threading by
<pre class="lang-py prettyprint-override"><code>def __getitem__(self, idx):
import cv2
cv2.setNumThreads(0)
# ...
</code></pre>
<p>in your dataloader. It might be because <code>cv2</code>'s threading conflicts with <code>torch</code>'s multi-process <code>DataLoader</code>. It did not work for me, since my case does not involve OpenCV, but it is worth trying first.
<br/>
For my occasion, it's worth mentioning that torch's <a href="https://pytorch.org/docs/stable/notes/multiprocessing.html" rel="nofollow noreferrer">multiprocessing</a> with CUDA access must use <code>spawn</code> or <code>forkserver</code> instead of <code>fork</code> to spawn new process. To do this, you should</p>
<pre class="lang-py prettyprint-override"><code>if __name__ == '__main__':
import multiprocessing
multiprocessing.set_start_method('spawn')
# ... **The all rest code**
</code></pre>
<p>Note that you should put all the remaining code inside the <code>if __name__ == '__main__'</code> block, including the imports, because another import (like <code>cv2</code>) might set the start method back to <code>fork</code>, and then your loaders will not work.</p>
|
python|image-processing|deep-learning|pytorch
| 3
|
6,607
| 63,147,465
|
Having trouble creating a scatter plot for my kmeans clustering data
|
<p>I am trying to use kmeans clustering to perform some anomaly detection on a simple dataset.</p>
<p>I have some data in two variables. x and x_ax.<br />
Here is an example of the x data</p>
<pre><code>array([[44360.125],
[56385.958333333336],
[61500.5],
[61227.375],
[60049.333333333336],
[51396.916666666664],
[49225.208333333336],
[63211.083333333336],
[64631.916666666664],
[62546.708333333336],
[62825.125],
</code></pre>
<p>The x_ax data are timestamp values...</p>
<pre><code>array([Timestamp('2018-01-01 00:00:00'), Timestamp('2018-01-02 00:00:00'),
Timestamp('2018-01-03 00:00:00'), Timestamp('2018-01-04 00:00:00'),
Timestamp('2018-01-05 00:00:00'), Timestamp('2018-01-06 00:00:00'),
Timestamp('2018-01-07 00:00:00'), Timestamp('2018-01-08 00:00:00'),
Timestamp('2018-01-09 00:00:00'), Timestamp('2018-01-10 00:00:00'),
Timestamp('2018-01-11 00:00:00'), Timestamp('2018-01-12 00:00:00'),
Timestamp('2018-01-13 00:00:00'), Timestamp('2018-01-14 00:00:00'),
Timestamp('2018-01-15 00:00:00'), Timestamp('2018-01-16 00:00:00'),
Timestamp('2018-01-17 00:00:00'), Timestamp('2018-01-18 00:00:00'),
Timestamp('2018-01-19 00:00:00'), Timestamp('2018-01-20 00:00:00'),
Timestamp('2018-01-21 00:00:00'), Timestamp('2018-01-22 00:00:00'),
Timestamp('2018-01-23 00:00:00'), Timestamp('2018-01-24 00:00:00'),
</code></pre>
<p>The idea is that the first value in the x data is related to the first element in the x_ax data.</p>
<p>I.e. 2018-01-01 --> 44360.125</p>
<p>I instantiated a Kmeans cluster instance:</p>
<pre><code> kmeans = KMeans(n_clusters=1, random_state=0).fit(x)
center = kmeans.cluster_centers_
</code></pre>
<p>I then calculated the distance from the center for each point in x, sorted it and then extracted the top 5 greatest distances from the center (i.e., my potential anomalies).</p>
<pre><code>distance = sqrt((x - center)**2)
order_index = argsort(distance, axis = 0)
indexes = order_index[-5:]
values = x[indexes]
</code></pre>
<p>I then attempted to plot this data as a scatter plot where dots marked in red were potential anomalies.</p>
<pre><code>plt.plot(x_ax, x)
plt.scatter(indexes, values, color='r')
plt.show()
</code></pre>
<p>Unfortunately, I got a plot that looks like this:</p>
<p><a href="https://i.stack.imgur.com/OjGl5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OjGl5.png" alt="enter image description here" /></a></p>
<p>The y-axis ticks seem to be correct, but why is the x-axis tick range going from 0 to 4000 in increments of 2000 for the first value, and then by 400 after that?</p>
<p>Also, why has my plot got all of the values in the upper right as a straight line except for one red dot in the lower left?</p>
<p>Any help appreciated.</p>
|
<p>You just have to add <code>fig, ax = plt.subplots(1,1)</code> and everything works fine:</p>
<pre><code>import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
x = np.array([[44360.125],
[56385.958333333336],
[61500.5],
[61227.375],
[60049.333333333336],
[51396.916666666664],
[49225.208333333336],
[63211.083333333336],
[64631.916666666664],
[62546.708333333336],
[62825.125]])
x_ax = np.array([pd.Timestamp('2018-01-01 00:00:00'),
pd.Timestamp('2018-01-02 00:00:00'),
pd.Timestamp('2018-01-03 00:00:00'),
pd.Timestamp('2018-01-04 00:00:00'),
pd.Timestamp('2018-01-05 00:00:00'),
pd.Timestamp('2018-01-06 00:00:00'),
pd.Timestamp('2018-01-07 00:00:00'),
pd.Timestamp('2018-01-08 00:00:00'),
pd.Timestamp('2018-01-09 00:00:00'),
pd.Timestamp('2018-01-10 00:00:00'),
pd.Timestamp('2018-01-11 00:00:00'),
pd.Timestamp('2018-01-12 00:00:00'),
pd.Timestamp('2018-01-13 00:00:00'),
pd.Timestamp('2018-01-14 00:00:00'),
pd.Timestamp('2018-01-15 00:00:00'),
pd.Timestamp('2018-01-16 00:00:00'),
pd.Timestamp('2018-01-17 00:00:00'),
pd.Timestamp('2018-01-18 00:00:00'),
pd.Timestamp('2018-01-19 00:00:00'),
pd.Timestamp('2018-01-20 00:00:00'),
pd.Timestamp('2018-01-21 00:00:00'),
pd.Timestamp('2018-01-22 00:00:00'),
pd.Timestamp('2018-01-23 00:00:00'),
pd.Timestamp('2018-01-24 00:00:00')])
x_ax=x_ax[:11]
x_ax
kmeans = KMeans(n_clusters=1, random_state=0).fit(x)
center = kmeans.cluster_centers_
distance = ((x - center)**2)**.5
order_index = np.argsort(distance, axis = 0)
indexes = order_index[-5:]
values = x[indexes]
fig, ax = plt.subplots(1,1) # Added
plt.plot(x_ax,x)
plt.scatter(x_ax[indexes], values, color='r')
plt.show()
</code></pre>
<p>Result:</p>
<p><a href="https://i.stack.imgur.com/Mp5ac.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Mp5ac.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|scikit-learn
| 1
|
6,608
| 62,962,527
|
Python 3.7.0 Heroku buildpack issue
|
<p>I've read some people with the same issue, but nothing suggested has worked. I'm trying to deploy a silly project to Heroku but nothing is working.</p>
<p>Below this lines you can see the message:</p>
<blockquote>
<p>Writing objects: 100% (100/100), 55.42 MiB | 625.00 KiB/s, done.
Total 100 (delta 19), reused 4 (delta 0) remote: Compressing source
files... done. remote: Building source:</p>
<p>Counting objects: 100, done. Delta compression using up to 4 threads.
Compressing objects: 100% (94/94), done. Writing objects: 100%
(100/100), 55.42 MiB | 625.00 KiB/s, done. Total 100 (delta 19),
reused 4 (delta 0) remote: Compressing source files... done.</p>
<p>App not compatible with buildpack:
<a href="https://buildpack-registry.s3.amazonaws.com/buildpacks/heroku/python.tgz" rel="nofollow noreferrer">https://buildpack-registry.s3.amazonaws.com/buildpacks/heroku/python.tgz</a>
! Push failed</p>
</blockquote>
<p>This is my github repository, just in case someone wants to see the code:
<a href="https://github.com/exequielmoneva/cats-vs-dogs" rel="nofollow noreferrer">Pets vs Dogs</a></p>
|
<p>UPDATE: the problem was the structure itself. I had to place the requirements.txt and Procfile at the root directory.</p>
|
python|tensorflow|flask|heroku|buildpack
| 1
|
6,609
| 62,916,388
|
Numpy module not found when working with Azure Functions in VS Code and virtualenv
|
<p>I'm new to working with azure functions and tried to work out a small example locally, using VS Code with the Azure Functions extension.</p>
<p>Example:</p>
<pre><code># First party libraries
import logging
# Third party libraries
import numpy as np
from azure.functions import HttpResponse, HttpRequest
def main(req: HttpRequest) -> HttpResponse:
seed = req.params.get('seed')
if not seed:
try:
body = req.get_json()
except ValueError:
pass
else:
seed = body.get('seed')
if seed:
np.random.seed(seed=int(seed))
r_int = np.random.randint(0, 100)
logging.info(r_int)
return HttpResponse(
"Random Number: " f"{str(r_int)}", status_code=200
)
else:
return HttpResponse(
"Insert seed to generate a number",
status_code=200
)
</code></pre>
<p>When numpy is installed globally this code works fine. If I install it only in the virtual environment, however, I get the following error:</p>
<pre><code>*Worker failed to function id 1739ddcd-d6ad-421d-9470-327681ca1e69.
[15-Jul-20 1:31:39 PM] Result: Failure
Exception: ModuleNotFoundError: No module named 'numpy'. Troubleshooting Guide: https://aka.ms/functions-modulenotfound*
</code></pre>
<p>I checked multiple times that numpy is installed in the virtual environment, and the environment is also specified in the .vscode/settings.json file.</p>
<p>pip freeze of the virtualenv "worker_venv":</p>
<pre><code>$ pip freeze
azure-functions==1.3.0
flake8==3.8.3
importlib-metadata==1.7.0
mccabe==0.6.1
numpy==1.19.0
pycodestyle==2.6.0
pyflakes==2.2.0
zipp==3.1.0
</code></pre>
<p>.vscode/settings.json file:</p>
<pre><code>{
"azureFunctions.deploySubpath": ".",
"azureFunctions.scmDoBuildDuringDeployment": true,
"azureFunctions.pythonVenv": "worker_venv",
"azureFunctions.projectLanguage": "Python",
"azureFunctions.projectRuntime": "~2",
"debug.internalConsoleOptions": "neverOpen"
}
</code></pre>
<p>I tried to find something in the documentation, but found nothing specific regarding the virtual environment. I don't know if I'm missing something?</p>
<p>EDIT: I'm on a Windows 10 machine btw</p>
<p>EDIT: I included the folder structure of my project in the image below</p>
<p><a href="https://i.stack.imgur.com/0Ikqx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Ikqx.png" alt="enter image description here" /></a></p>
<p>EDIT: Added the content of the virtual environment Lib folder in the image below</p>
<p><a href="https://i.stack.imgur.com/gGMIo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gGMIo.png" alt="enter image description here" /></a></p>
<p>EDIT: Added a screenshot of the terminal using the <code>pip install numpy</code> command below</p>
<p><a href="https://i.stack.imgur.com/dkSgv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dkSgv.png" alt="enter image description here" /></a></p>
<p>EDIT: Created a new project with a new virtual env and reinstalled numpy, screenshot below, problem still persists.</p>
<p><a href="https://i.stack.imgur.com/SE9Mq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SE9Mq.png" alt="enter image description here" /></a></p>
<p>EDIT: Added the launch.json code below</p>
<pre><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Attach to Python Functions",
"type": "python",
"request": "attach",
"port": 9091,
"preLaunchTask": "func: host start"
}
]
}
</code></pre>
<p><strong>SOLVED</strong></p>
<p>So the problem was neither with python, nor with VS Code. The problem was that the execution policy on my machine (new laptop) was set to restricted and therefore the <code>.venv\Scripts\Activate.ps1</code> script could not be run.</p>
<p>To resolve this problem, just open powershell with admin rights and and run <code>set-executionpolicy remotesigned</code>. Restart VS Code and all should work fine</p>
<p>I didn't see the error due to the heavy logging in the terminal that happens
when you start Azure. I'll mark the answer of @HuryShen as correct, because the comments got me to the solution. Thank you all!</p>
|
<p>For this problem, I'm not clear whether you met the error when running it locally or on the Azure cloud, so I provide suggestions for both situations.</p>
<p><strong>1.</strong> If the error shows when you run the function on Azure, you may not have installed the modules successfully. When deploying the function from local to Azure, you need to add the module to <code>requirements.txt</code> (as Anatoli mentioned in a comment). You can generate the <code>requirements.txt</code> automatically with the command below:</p>
<pre><code>pip freeze > requirements.txt
</code></pre>
<p>After that, we can see that <code>numpy==1.19.0</code> exists in <code>requirements.txt</code>.
<a href="https://i.stack.imgur.com/GppID.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GppID.png" alt="enter image description here" /></a></p>
<p>Now deploy the function from local to Azure with the command below; it will install the modules successfully on Azure and the function will work fine there.</p>
<pre><code>func azure functionapp publish <your function app name> --build remote
</code></pre>
<p><strong>2.</strong> If the error shows when you run the function locally: since you listed the modules installed in <code>worker_venv</code>, it seems you have installed the <code>numpy</code> module successfully. I also tested it locally on my side, installed <code>numpy</code>, and it works fine. So I think you should check whether your virtual environment (<code>worker_venv</code>) exists in the correct location. Below is my function structure in local VS Code; please check whether your virtual environment is in the same location as mine.</p>
<p><a href="https://i.stack.imgur.com/2uvB8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2uvB8.png" alt="enter image description here" /></a></p>
<p>-----<strong>Update</strong>------</p>
<p>Run the commands below to set the execution policy and then activate the virtual environment:</p>
<pre><code>set-executionpolicy remotesigned
.venv\Scripts\Activate.ps1
</code></pre>
|
python|numpy|azure-functions|virtualenv
| 4
|
6,610
| 67,856,625
|
comparing columns by ignoring null values (pandas)
|
<p>data frame:</p>
<pre><code> data=pd.DataFrame({'name':['A','B','C'],
'rank':[np.nan,2,3],
'rank1':[2,np.nan,2],
'rank2':[3,1,np.nan],
'rank4':[4,2,3]})
</code></pre>
<p>my code:</p>
<pre><code> data['Diff']=np.where((data['rank']<data['rank1'])&(data['rank1']<data['rank2'])&(data['rank2']<data['rank4']),1,0)
</code></pre>
<p>Requirement: ignore nulls and compare the rest of the numeric values. I want <code>Diff</code> of A to be 1 (if the rank is continuously increasing, ignoring nulls).</p>
|
<p>We can <code>filter</code> the <code>rank</code>-like columns, then forward fill along <code>axis=1</code> to propagate the last valid value, then calculate <code>diff</code> along <code>axis=1</code> to check for monotonicity:</p>
<pre><code>r = data.filter(like='rank')
data['diff'] = r.ffill(axis=1).diff(axis=1).fillna(0).ge(0).all(axis=1)
</code></pre>
<hr />
<pre><code> name rank rank1 rank2 rank4 diff
0 A NaN 2.0 3.0 4 True
1 B 2.0 NaN 1.0 2 False
2 C 3.0 2.0 NaN 3 False
</code></pre>
|
python|pandas
| 5
|
6,611
| 67,657,240
|
Pandas GroupBy - get column with name "count"
|
<p>I am trying to get the count of each category in a DataFrame:</p>
<pre><code>data = {'col_1': ['a', 'b', 'c', 'd','c'],'col_2': [3, 2, 1, 0, 4],'col3':[99,88,77,66,55]}
df = pd.DataFrame.from_dict(data)
print(df.groupby(['col_1']).count())
Output:
col_2 col3
col_1
a 1 1
b 1 1
c 2 2
d 1 1
</code></pre>
<p>Why are there two columns, "col_2" and "col3", and how do I get only one named "count"?</p>
<p>The desired output is:</p>
<pre><code> col_1 count
a 1
b 1
c 2
d 1
</code></pre>
|
<p>You can do:</p>
<pre><code>print(df.groupby(['col_1'],as_index=False).agg(count=('col_2','count')))
</code></pre>
<p><strong>OR</strong></p>
<pre><code>print(df.groupby(['col_1'],as_index=False).size().rename(columns={'size':'count'}))
</code></pre>
<p>Output:</p>
<pre><code>col_1 count
a 1
b 1
c 2
d 1
</code></pre>
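<p>A hedged third option (note that <code>value_counts</code> orders rows by count, so the row order may differ from the wished output):</p>
<pre><code>print(df['col_1'].value_counts().rename_axis('col_1').reset_index(name='count'))
</code></pre>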
|
python|pandas
| 1
|
6,612
| 67,860,456
|
Want to check cell by cell whether a dataframe value is null; if null, fill it with 0 using pandas
|
<p>I just want to check, cell by cell, whether a value in the dataframe is null or NaN; if NaN is found, it should be filled with zero using pandas. I have one CSV file which contains some NaN values:</p>
<pre><code>import pandas as pd
df = pd.read_csv("samp.csv")
print(df)
for rowindex, row in df.iterrows():
for colind , value in row.items():
print(value)
if value.isnull():
</code></pre>
|
<p>There's a dedicated command for that: <code>df.fillna(0)</code>.</p>
<p>The full code would then be:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.read_csv("samp.csv")
df = df.fillna(0)
</code></pre>
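<p>If a cell-by-cell loop like the one in the question is still wanted, a hedged sketch could look like this (<code>pd.isna</code> is the element-wise test; plain scalar values have no <code>.isnull()</code> method):</p>
<pre class="lang-py prettyprint-override"><code>for rowindex, row in df.iterrows():
    for colind, value in row.items():
        if pd.isna(value):
            df.at[rowindex, colind] = 0  # write the fill value back into that cell
</code></pre>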
|
python|pandas
| 1
|
6,613
| 68,008,973
|
How to subtract a value from the first element of an array?
|
<p>I want to subtract a number <code>x</code> from the first element of the array, and in case the first element in the array is smaller than <code>x</code> I want to subtract the remaining amount from the second element of the array and so on.</p>
<p>I have tried this:</p>
<pre><code>import numpy as np
x = 25
y = np.array([22, 30, 45])
result = np.copy(y)
for i in range(len(y)):
if y[i] < x:
result[i] = y[i] - x
x = x - y[i]
else:
result[i] = y[i] - x
x = x - y[i]
</code></pre>
<p>I get this result:</p>
<pre><code>array([-3, 27, 72])
</code></pre>
<p>But I want to have</p>
<pre><code>array([0, 27, 45])
</code></pre>
|
<ul>
<li><p>In the <code>if</code> case, you should assign <code>0</code></p>
</li>
<li><p>In the <code>else</code> case there must be a <code>break</code> to stop as all value to remove has been consumed</p>
</li>
</ul>
<pre><code>def sub_array(array, to_remove):
result = np.copy(array)
for i in range(len(array)):
if array[i] < to_remove:
result[i] = 0
to_remove -= array[i]
else:
result[i] = array[i] - to_remove
break
return result
</code></pre>
<pre><code>print(sub_array(np.array([10, 10, 10]), 39)) # [0 0 0]
print(sub_array(np.array([10, 10, 10]), 29)) # [0 0 1]
print(sub_array(np.array([10, 10, 10]), 19)) # [0 1 10]
print(sub_array(np.array([10, 10, 10]), 9)) # [1 10 10]
</code></pre>
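<p>As a hedged aside, the same result can be obtained without an explicit Python loop by using a running total (a sketch, assuming non-negative array values as in the question):</p>
<pre><code>import numpy as np

def sub_array_vec(array, to_remove):
    removed = np.minimum(np.cumsum(array), to_remove)   # total amount removed up to each element
    taken = np.diff(np.concatenate(([0], removed)))     # amount removed from each individual element
    return array - taken

print(sub_array_vec(np.array([22, 30, 45]), 25))  # [ 0 27 45]
</code></pre>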
|
python|arrays|numpy
| 1
|
6,614
| 67,711,752
|
How to extract tensors to numpy arrays or lists from a larger pytorch tensor
|
<p>I have a list of pytorch tensors as shown below:</p>
<pre><code>data = [[tensor([0, 0, 0]), tensor([1, 2, 3])],
[tensor([0, 0, 0]), tensor([4, 5, 6])]]
</code></pre>
<p>Now this is just a sample data, the actual one is quite large but the structure is similar.</p>
<p><strong>Question:</strong> I want to extract the <code>tensor([1, 2, 3])</code>, <code>tensor([4, 5, 6])</code> i.e., the index 1 tensors from <code>data</code> to either a numpy array or a list in flattened form.</p>
<p><strong>Expected Output:</strong></p>
<pre><code>out = array([1, 2, 3, 4, 5, 6])
</code></pre>
<p>OR</p>
<pre><code>out = [1, 2, 3, 4, 5, 6]
</code></pre>
<hr />
<p>I have tried several ways one including <code>map</code> function like:</p>
<pre><code>map(lambda x: x[1].numpy(), data)
</code></pre>
<p>This gives:</p>
<pre><code>[array([1, 2, 3]),
array([4, 5, 6])]
</code></pre>
<p>And I'm unable to get the desired result with any other method I'm using.</p>
|
<p>OK, you can just do this.</p>
<pre><code>out = np.concatenate(list(map(lambda x: x[1].numpy(), data)))
</code></pre>
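<p>A hedged variant that stays in PyTorch until the very end (assuming <code>data</code> has the same structure as in the question):</p>
<pre><code>import torch

out = torch.cat([pair[1] for pair in data]).numpy()        # array([1, 2, 3, 4, 5, 6])
# or, for a plain list:
out_list = torch.cat([pair[1] for pair in data]).tolist()  # [1, 2, 3, 4, 5, 6]
</code></pre>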
|
python|list|numpy|pytorch|tensor
| 2
|
6,615
| 67,606,907
|
Computing the loss of a function of predictions with pytorch
|
<p>I have a convolutional neural network that predicts 3 quantities: Ux, Uy, and P. These are the x velocity, y-velocity, and pressure field. They are all 2D arrays of size [100,60], and my batch size is 10.</p>
<p>I want to compute the loss and update the network by calculating the CURL of the predicted velocity with the CURL of the target velocity. I have a function that does this: v = curl(Ux_pred, Uy_pred). Given the <em>predicted</em> Ux and Uy, I want to compute the loss by comparing it to ground truth targets that I have: true_curl = curl(Ux_true, Uy_true) - I've already computed the true curl and added it to my Y data, as the fourth channel.</p>
<p>However, I want my network to only predict Ux, Uy, and P. I want my NN parameters to update based on the LOSS of the curls to improve the accuracy of Ux and Uy. The loss of the curl has to be in terms of Ux and Uy. I have been trying to do this using Pytorch autograd, and have already read many similar questions, but I just can't get it to work. This is my code so far:</p>
<pre><code> print("pred_Curl shape:", np.shape(pred_curl))
print("pred_Ux shape:", np.shape(pred[:,0,:,:]))
print("pred_Uy shape:", np.shape(pred[:,1,:,:]))
true_curl = torch.from_numpy(y[:,3,:,:]) # not sure where to use the true curl?
pred_curl = Variable(pred_curl, requires_grad=True)
pred_ux = pred[:,0,:,:]
pred_uy = pred[:,1,:,:]
pred_ux = Variable(pred_ux, requires_grad=True)
pred_uy = Variable(pred_uy, requires_grad=True)
grad_tensor = torch.autograd.grad(outputs=pred_curl, inputs=(pred_ux, pred_uy),
grad_outputs=torch.ones_like(pred_curl),
retain_graph=True,create_graph=True)
loss = torch.sum(grad_tensor)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>This has the following output:</p>
<pre><code>pred_Curl shape: torch.Size([10, 100, 60])
pred_Ux shape: torch.Size([10, 100, 60])
pred_Uy shape: torch.Size([10, 100, 60])
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph.
Set allow_unused=True if this is the desired behavior.
</code></pre>
<p>Any help would be appreciated!</p>
<p>Edit: Here is my curl function:</p>
<pre><code> def discrete_curl(self,x,y,new_arr):
for m in range(100):
for n in range(60):
if n <= 58:
if m <= 98:
if x[m,n] != 0 and y[m,n] != 0:
new_arr[m,n] = ((y[m+1,n] - y[m-1,n]) / 2*1) - ((x[m,n+1] - x[m,n-1]) / 2*1)
return new_arr
</code></pre>
<p>Where x and y are Ux and Uy, and new_arr is the curl output.</p>
|
<p>You could try something like this:</p>
<pre><code>def discrete_curl(pred):
    new_arr = torch.zeros((pred.shape[0], 100, 60))
    for pred_idx in range(pred.shape[0]):
        for m in range(100):
            for n in range(60):
                if n <= 58 and m <= 98:
                    if pred[pred_idx,0,m,n] != 0 and pred[pred_idx,1,m,n] != 0:
                        new_arr[pred_idx,m,n] = ((pred[pred_idx,1,m+1,n] - pred[pred_idx,1,m-1,n]) / 2*1) - ((pred[pred_idx,0,m,n+1] - pred[pred_idx,0,m,n-1]) / 2*1)
    return new_arr

pred_curl = discrete_curl(pred)
true_curl = torch.from_numpy(y[:,3,:,:])
loss = torch.nn.functional.mse_loss(pred_curl, true_curl)
optimizer.zero_grad()
loss.backward()
optimizer.step()
</code></pre>
<p>I think the curl computation can be optimized, but I tried to stick to your structure for the most part.</p>
|
python|machine-learning|neural-network|pytorch|autograd
| 1
|
6,616
| 31,732,728
|
Numpy array being rounded? subtraction of small floats
|
<p>I am assigning the elements of a numpy array to be equal to the subtraction of "small" valued, python float-type numbers. When I do this, and try to verify the results by printing to the command line, the array is reported as all zeros. Here is my code:</p>
<pre><code>import numpy as np
np.set_printoptions(precision=20)
pc1x = float(-0.438765)
pc2x = float(-0.394747)
v1 = np.array([0,0,0])
v1[0] = pc1x-pc2x
print pc1x
print pc2x
print v1
</code></pre>
<p>The output looks like this:</p>
<pre><code>-0.438765
-0.394747
[0 0 0]
</code></pre>
<p>I expected this for v1:</p>
<pre><code>[-0.044018 0 0]
</code></pre>
<p>I am new to numpy, I admit, this may be an obvious mis-understanding of how numpy and float work. I thought that changing the numpy print options would fix, but no luck. Any help is great! Thanks!</p>
|
<p>You're declaring the array with <code>v1 = np.array([0,0,0])</code>, which numpy assumes you want an int array for. Any subsequent actions on it will maintain this int array status, so after adding your small number element wise, it casts back to int (resulting in all zeros). Declare it with </p>
<pre><code>v1 = np.array([0,0,0],dtype=float)
</code></pre>
<p>There's a whole wealth of numpy specific/platform specific datatypes for numpy that are detailed in the <a href="http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html" rel="noreferrer">dtype docs page.</a></p>
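<p>A minimal sketch of the fix with the numbers from the question:</p>
<pre><code>import numpy as np

v1 = np.array([0, 0, 0], dtype=float)
v1[0] = -0.438765 - (-0.394747)
print(v1)   # first element is now -0.044018, the rest stay 0.0
</code></pre>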
|
python|arrays|numpy|rounding|pretty-print
| 6
|
6,617
| 41,456,831
|
Writing data into MySql in pandas tosql in python
|
<p>I inserted data successfully, but I want to get a result telling me whether the data was inserted or not. My code is:</p>
<pre><code>unit_type.to_sql(con=self.mysql_hermes.conn, name='CiqHistEleData',
if_exists='append', flavor='mysql', index=False)
</code></pre>
<p>I want the result in one variable: true or false.</p>
|
<p>Sorry, that's not going to work, because the <code>to_sql</code> method does not return a value. The usual approach in pandas is to raise exceptions rather than return True/False, so in the same spirit you will have to surround your code with try/except. <code>TypeError</code> and <code>ValueError</code> are two typical exceptions raised by <code>to_sql</code>:</p>
<pre><code>try:
unit_type.to_sql(con=self.mysql_hermes.conn, name='CiqHistEleData',
if_exists='append', flavor='mysql', index=False)
# save successfull
except TypeError:
# save failed write some code
pass
except ValueError:
# save failed write some code
pass
</code></pre>
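<p>If the outcome really has to end up in a single boolean variable, a hedged sketch along the same lines:</p>
<pre><code>inserted = False
try:
    unit_type.to_sql(con=self.mysql_hermes.conn, name='CiqHistEleData',
                     if_exists='append', flavor='mysql', index=False)
    inserted = True          # only reached when to_sql did not raise
except (TypeError, ValueError):
    inserted = False
print(inserted)
</code></pre>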
|
python|pandas
| 1
|
6,618
| 41,347,203
|
how to filter by day with pandas
|
<p>I am reading a series of data from file via pd.read_csv().
Then somehow I create a dataframe like the following:</p>
<pre><code> col1 col2
01/01/2001 a1 a2
02/01/2001 b1 b2
03/01/2001 c1 c2
04/01/2001 d1 d2
01/01/2002 e1 e2
02/01/2002 f1 d2
03/01/2002 g1 g2
04/01/2002 h1 h2
</code></pre>
<p>What I would like to do is to group by the same day, and assign a value to it, I mean:</p>
<pre><code> col1
01/01 ax
02/01 bx
03/01 cx
04/01 dx
</code></pre>
<p>Does anyone have any clues how to perform this smoothly?</p>
<p>Thanks a lot in advance.</p>
<p>LS</p>
|
<p>The first thing I'd do is make sure your index values are dates. If you know they are, then skip this.</p>
<pre><code>df.index = pd.to_datetime(df.index)
</code></pre>
<p>Then you <code>groupby</code> with something like <code>[df.index.month, df.index.day]</code> or <code>df.index.strftime('%m-%d')</code>. However, you have to choose to aggregate or transform. You didn't specify what you wanted to do, so I chose the <code>first</code> function to aggregate.</p>
<pre><code>df.groupby(df.index.strftime('%m-%d')).first()
col1 col2
01-01 a1 a2
02-01 b1 b2
03-01 c1 c2
04-01 d1 d2
</code></pre>
|
pandas
| 0
|
6,619
| 61,563,765
|
Adding column to pandas dataframe taking values from list in other column
|
<p>I'm new to Python so I'm sorry if terminology is not correct; I've searched for similar posts but didn't find anything helpful for my case.
I have a dataframe like this:</p>
<pre><code> Column1 Column2
0 0001 [('A','B'),('C','D'),('E','F')]
1 0001 [('A','B'),('C','D'),('E','F')]
2 0001 [('A','B'),('C','D'),('E','F')]
3 0002 [('G','H'),('I','J')]
4 0002 [('G','H'),('I','J')]
</code></pre>
<p>Each row is replicated n times based on the number of tuples contained in the list of Column2.
What I'd like to do is to add a new column containing only one tuple per row:</p>
<pre><code>Column1 Column2 Column2_new
0 0001 [('A','B'),('C','D'),('E','F')] 'A' 'B'
1 0001 [('A','B'),('C','D'),('E','F')] 'C' 'D'
2 0001 [('A','B'),('C','D'),('E','F')] 'E' 'F'
3 0002 [('G','H'),('I','J')] 'G' 'H'
4 0002 [('G','H'),('I','J')] 'I' 'J'
</code></pre>
<p>Can you please help me with this?</p>
<p>Thanks in advance for any suggestion</p>
|
<p>We can do <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.lookup.html" rel="nofollow noreferrer"><code>df.lookup</code></a> after <code>groupby+cumcount</code></p>
<pre><code>idx = df.groupby('Column1').cumcount()
df['new']= pd.DataFrame(df['Column2'].tolist()).lookup(df.index,idx)
</code></pre>
<hr>
<pre><code>print(df)
Column1 Column2 new
0 1 [(A, B), (C, D), (E, F)] (A, B)
1 1 [(A, B), (C, D), (E, F)] (C, D)
2 1 [(A, B), (C, D), (E, F)] (E, F)
3 2 [(G, H), (I, J)] (G, H)
4 2 [(G, H), (I, J)] (I, J)
</code></pre>
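<p>A hedged alternative sketch that avoids <code>lookup</code> by indexing each row's list directly with the per-group counter:</p>
<pre><code>k = df.groupby('Column1').cumcount()                      # position of each row within its group
df['new'] = [lst[i] for lst, i in zip(df['Column2'], k)]  # pick the i-th tuple from each row's list
</code></pre>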
|
python|pandas
| 2
|
6,620
| 61,442,914
|
Python error using Pandas - AttributeError: 'Series' object has no attribute 'asType'
|
<p>I'm following a pluralsight course to learn Python data manipulation, and have an error in the first module! I'm using Jupyter Notebooks, with Python 3.7 and Pandas 1.0.1. Can anyone help please?</p>
<pre><code>import pandas as pd
data = pd.read_csv('artwork_sample.csv')
data.dtypes
</code></pre>
<p>Returns:</p>
<pre><code>id int64
accession_number object
artist object
artistRole object
artistId int64
title object
dateText object
medium object
creditLine object
year float64
acquisitionYear int64
dimensions object
width int64
height int64
depth float64
units object
inscription float64
thumbnailCopyright float64
thumbnailUrl object
url object
dtype: object
</code></pre>
<p>then</p>
<pre><code>data.acquisitionYear.asType(float)
</code></pre>
<p>Produces this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-19-9daf408c9065> in <module>
----> 1 data.acquisitionYear.asType(float)
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in __getattr__(self, name)
5272 if self._info_axis._can_hold_identifiers_and_holds_name(name):
5273 return self[name]
-> 5274 return object.__getattribute__(self, name)
5275
5276 def __setattr__(self, name: str, value) -> None:
AttributeError: 'Series' object has no attribute 'asType'
</code></pre>
|
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html</a>
It looks like a typo: the uppercase "T" in <code>asType</code>. The method is <code>astype</code> (all lowercase); I looked for an uppercase variant in the docs and could not find it.</p>
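<p>A minimal sketch of the corrected call:</p>
<pre><code>data['acquisitionYear'] = data.acquisitionYear.astype(float)  # lowercase "t"
</code></pre>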
|
python|python-3.x|pandas
| 1
|
6,621
| 68,732,135
|
Index pandas dataframe rows 4 at a time
|
<p>I want to to be able to something along the lines of:</p>
<pre><code>for i in range(0, len(df), 4):
curr = pd.DataFrame()
vcch = int(df.loc[i, 'IN_CUSTOM_SELECT'])
icch = int(df.loc[i+1, 'IN_CUSTOM_SELECT'])
vccl = int(df.loc[i+2, 'IN_CUSTOM_SELECT'])
iccl = int(df.loc[i+3, 'IN_CUSTOM_SELECT'])
idlpwr = (vcch * icch) + (vccl * iccl)
idlpwr = idlpwr / (10**6)
</code></pre>
<p>where I do some calculations based on the specific values of columns in combinations of rows of 4.</p>
|
<p>If you're just working with a regular autonumbered index, one easy option is to reshape your data and use <code>pandas</code> vectorized operations for the math:</p>
<pre><code>In [196]: df = pd.DataFrame({'IN_CUSTOM_SELECT': np.random.random(24)})
In [197]: reshaped = df.set_index([df.index.map(lambda x: x // 4), df.index.map(lambda x: x % 4)]).unstack()['IN_CUSTOM_SELECT']
In [198]: reshaped['idlpwr'] = ((reshaped[0] * reshaped[1]) + (reshaped[2] * reshaped[3])) / 10**6
In [199]: reshaped
Out[199]:
0 1 2 3 idlpwr
0 0.788758 0.853356 0.627796 0.355143 8.960487e-07
1 0.312111 0.602934 0.908984 0.046183 2.301622e-07
2 0.842201 0.507629 0.541432 0.592680 7.484218e-07
3 0.506601 0.605108 0.497627 0.362006 4.866923e-07
4 0.308097 0.991945 0.822433 0.272082 5.293851e-07
5 0.573716 0.852356 0.009606 0.961437 4.982462e-07
</code></pre>
|
python|pandas
| 0
|
6,622
| 36,398,041
|
How to generate a sample sentence with LSTM model in Tensorflow?
|
<p>I'm working with the <a href="https://www.tensorflow.org/versions/r0.7/tutorials/recurrent/index.html#recurrent-neural-networks" rel="nofollow">LSTM model in Tensorflow</a>.<br>
I already trained and saved the LSTM model. Now I'm at the last task: generating sentences.
Here is my pseudo code:</p>
<pre><code># We already have the run_epoch(session, m, data, eval_op, verbose=False) function with feed_dict like this:
feed_dict = {m.input_data: x,
m.targets: y,
m.initial_state: state}
...
# train and save model
...
# load saved model for generating task
new_sentence = [START_TOKEN]
# Here I want to generate a sentence until END_TOKEN is generated.
while new_sentence[-1] != END_TOKEN:
logits = get_logits(model, new_sentence)
# get argmax(logits) or sample(logits)
next_word = argmax(logits)
new_sentence.append(next_word)
print(new_sentence)
</code></pre>
<p><strong>My question is:</strong><br>
When training, validating, or testing the model I have to feed both the <em>inputs</em> and their <em>labels</em> (the inputs shifted by one) into the model via the <em>feed_dict</em> dictionary. But in the generating task, I have only one input, which is the sentence being generated, <em>new_sentence</em>.</p>
<p><strong>How can I build the right <em>get_logits</em> function or full <em>generate</em> function also?</strong></p>
|
<p>When you train, you have an output of the neural network; based on that output you calculate the error, and based on the error you create the optimizer to minimize it.</p>
<p>In order to generate a new sentence you only need to get the output of the neural network (RNN).</p>
<p>Edited: </p>
<pre><code>"""
Placeholders
"""
x = tf.placeholder(tf.int32, [batch_size, num_steps], name='input_placeholder')
y = tf.placeholder(tf.int32, [batch_size, num_steps], name='labels_placeholder')
init_state = tf.zeros([batch_size, state_size])
"""
RNN Inputs
"""
# Turn our x placeholder into a list of one-hot tensors:
# rnn_inputs is a list of num_steps tensors with shape [batch_size, num_classes]
x_one_hot = tf.one_hot(x, num_classes)
rnn_inputs = tf.unpack(x_one_hot, axis=1)
"""
Definition of rnn_cell
This is very similar to the __call__ method on Tensorflow's BasicRNNCell. See:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/rnn_cell.py
"""
with tf.variable_scope('rnn_cell'):
W = tf.get_variable('W', [num_classes + state_size, state_size])
b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))
def rnn_cell(rnn_input, state):
with tf.variable_scope('rnn_cell', reuse=True):
W = tf.get_variable('W', [num_classes + state_size, state_size])
b = tf.get_variable('b', [state_size], initializer=tf.constant_initializer(0.0))
return tf.tanh(tf.matmul(tf.concat(1, [rnn_input, state]), W) + b)
state = init_state
rnn_outputs = []
for rnn_input in rnn_inputs:
state = rnn_cell(rnn_input, state)
rnn_outputs.append(state)
final_state = rnn_outputs[-1]
#logits and predictions
with tf.variable_scope('softmax'):
W = tf.get_variable('W', [state_size, num_classes])
b = tf.get_variable('b', [num_classes], initializer=tf.constant_initializer(0.0))
logits = [tf.matmul(rnn_output, W) + b for rnn_output in rnn_outputs]
predictions = [tf.nn.softmax(logit) for logit in logits]
# Turn our y placeholder into a list labels
y_as_list = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(1, num_steps, y)]
#losses and train_step
losses = [tf.nn.sparse_softmax_cross_entropy_with_logits(logit,label) for \
logit, label in zip(logits, y_as_list)]
total_loss = tf.reduce_mean(losses)
train_step = tf.train.AdagradOptimizer(learning_rate).minimize(total_loss)
def train():
with tf.Session() as sess:
#load the model
training_losses = []
for idx, epoch in enumerate(gen_epochs(num_epochs, num_steps)):
training_loss = 0
training_state = np.zeros((batch_size, state_size))
if verbose:
print("\nEPOCH", idx)
for step, (X, Y) in enumerate(epoch):
tr_losses, training_loss_, training_state, _ = \
sess.run([losses,
total_loss,
final_state,
train_step],
feed_dict={x:X, y:Y, init_state:training_state})
training_loss += training_loss_
if step % 100 == 0 and step > 0:
if verbose:
print("Average loss at step", step,
"for last 250 steps:", training_loss/100)
training_losses.append(training_loss/100)
training_loss = 0
#save the model
def generate_seq():
with tf.Session() as sess:
#load the model
# load saved model for generating task
new_sentence = [START_TOKEN]
# Here I want to generate a sentence until END_TOKEN is generated.
while new_sentence[-1] != END_TOKEN:
logits = sess.run(final_state,{x:np.asarray([new_sentence])})
# get argmax(logits) or sample(logits)
next_word = argmax(logits[0])
new_sentence.append(next_word)
print(new_sentence)
</code></pre>
|
tensorflow|lstm
| 3
|
6,623
| 36,676,576
|
Map a NumPy array of strings to integers
|
<p><strong>Problem:</strong></p>
<p>Given an array of string data</p>
<pre><code>dataSet = np.array(['kevin', 'greg', 'george', 'kevin'], dtype='U21')
</code></pre>
<p>I would like a function that returns the indexed dataset</p>
<pre><code>indexed_dataSet = np.array([0, 1, 2, 0], dtype='int')
</code></pre>
<p>and a lookup table</p>
<pre><code>lookupTable = np.array(['kevin', 'greg', 'george'], dtype='U21')
</code></pre>
<p>such that</p>
<pre><code>(lookupTable[indexed_dataSet] == dataSet).all()
</code></pre>
<p>is true. Note that the <code>indexed_dataSet</code> and <code>lookupTable</code> can both be permuted such that the above holds and that is fine (i.e. it is not necessary that the order of <code>lookupTable</code> is equivalent to the order of first appearance in <code>dataSet</code>).</p>
<p><strong>Slow Solution:</strong></p>
<p>I currently have the following slow solution</p>
<pre><code>def indexDataSet(dataSet):
"""Returns the indexed dataSet and a lookup table
Input:
dataSet : A length n numpy array to be indexed
Output:
indexed_dataSet : A length n numpy array containing values in {0, len(set(dataSet))-1}
lookupTable : A lookup table such that lookupTable[indexed_Dataset] = dataSet"""
labels = set(dataSet)
lookupTable = np.empty(len(labels), dtype='U21')
indexed_dataSet = np.zeros(dataSet.size, dtype='int')
count = -1
for label in labels:
count += 1
indexed_dataSet[np.where(dataSet == label)] = count
lookupTable[count] = label
return indexed_dataSet, lookupTable
</code></pre>
<p>Is there a quicker way to do this? I feel like I am not using numpy to its full potential here.</p>
|
<p>You can use <a href="http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.unique.html" rel="noreferrer"><code>np.unique</code></a> with the <code>return_inverse</code> argument:</p>
<pre><code>>>> lookupTable, indexed_dataSet = np.unique(dataSet, return_inverse=True)
>>> lookupTable
array(['george', 'greg', 'kevin'],
dtype='<U21')
>>> indexed_dataSet
array([2, 1, 0, 2])
</code></pre>
<p>If you like, you can reconstruct your original array from these two arrays:</p>
<pre><code>>>> lookupTable[indexed_dataSet]
array(['kevin', 'greg', 'george', 'kevin'],
dtype='<U21')
</code></pre>
<p>If you use pandas, <code>lookupTable, indexed_dataSet = pd.factorize(dataSet)</code> will achieve the same thing (and potentially be more efficient for large arrays).</p>
|
python|arrays|string|performance|numpy
| 22
|
6,624
| 36,249,233
|
Optimize Double For Loop Using NumPy
|
<p>I have a python function with a nested for loop that is called thousands of times, and is too slow. From what I have read online, there should be a way to optimize it with numpy vectorization so that the iteration is done in much faster C code rather than python. But, I have never worked with numpy before and I can't figure it out.</p>
<p>The function is below. The first parameter is a 2-dimensional array (list of lists). The second parameter is a list of rows of the 2D array to check. The third parameter is a list of columns of the 2D array to check (Note that the number of rows is not equal to the number of cols). The fourth parameter is a value with which to compare elements of the 2D array. I am trying to return a list that for each column contains a list has all row indices that correspond to elements equal to val.</p>
<pre><code>def filter_indices(my_2d_arr, rows, cols, val):
result_indices = []
for c in cols:
col_indices = []
for idx in rows:
if my_2d_arr[idx][c] == val:
col_indices.append(idx)
result_indices.append(col_indices)
return result_indices
</code></pre>
<p>Like I said, this is way too slow and I am rather confused about how I could vectorize this is numpy. Any pointers/guidance would be great.</p>
<h1>EDIT</h1>
<p>@B.M. Thanks for your answer. I ran your solution myself separately from the rest of my code and compared it with my previous function without numpy. Like you said, it worked much faster with numpy that my original function did. However, when running it as part of my code, my solution is actually slower for some reason. I did have to add a little to your function and modify some of my existing code to make them compatible, but I am being thrown off in that timeit is saying that the numpy version is faster while cProfile is showing that my original filter_indices function is faster than the new numpy one. I have no idea how the numpy filter_indices could take so much longer, considering that it was faster when run separately from the rest of my code.</p>
<p>Here's my original filter_indices without numpy:</p>
<pre><code>def filter_indices_orig(a, data_indices, feature_set, val):
result_indices = []
for feature_no in feature_set:
feature_indices = []
for idx in data_indices:
if a[idx][feature_no] == val:
feature_indices.append(idx)
result_indices.append(feature_indices)
return result_indices
</code></pre>
<p>Here's my slightly modified filter_indices with numpy:</p>
<pre><code>def filter_indices(a, data_indices, feature_set, val):
result_indices = {}
sub = a[np.meshgrid(data_indices, feature_set, indexing='ij')]
r, c = (sub == val).nonzero()
rs = np.take(data_indices, r)
cs = np.take(feature_set, c)
coords = zip(rs, cs)
for r, c in coords:
feat_indices = result_indices.get(c, [])
feat_indices.append(r)
result_indices[c] = feat_indices
return result_indices
</code></pre>
<h1>EDIT 2</h1>
<p>I figured out that the numpy solution is slower when I am only searching a few columns, but faster when I am searching a large number of columns. Unfortunately, even using my original non-numpy solution when only a few columns are searched and the numpy solution when many columns are searched is still slower overall than my original solution alone, which I do not understand whatsoever.</p>
|
<p>Here is a function which returns 2 arrays, the row indices and column indices of the pixels where the value is <code>val</code> in the selected subarray:</p>
<pre><code>def filter_indices_numpy(a,rows,cols,val):
sub=a[meshgrid(rows,cols,indexing='ij')]
r,c = (sub==val).nonzero()
return take(rows,r),take(cols,c)
</code></pre>
<p>Example:</p>
<pre><code>a=randint(0,3,(5,5))
#array([[0, 1, 0, 2, 2],
# [0, 0, 2, 0, 0],
# [2, 1, 1, 0, 0],
# [1, 0, 0, 1, 2],
# [2, 1, 0, 0, 0]])
filter_indices_numpy(a,[1,2,3],[1,2,3],0)
#(array([1, 1, 2, 3, 3]), array([1, 3, 3, 1, 2]))
</code></pre>
<p>Some explanations :</p>
<p><code>meshgrid(rows,cols,indexing='ij')</code> gives the indices of the selected rows and cols.
<code>sub</code> is the sub-array. <code>r,c = (sub==val).nonzero()</code> are the indices where the value is <code>val</code> in the sub-array. <code>take(rows,r),take(cols,c)</code> translate those indices back into the array <code>a</code>.</p>
<p>Test for : <code>a=randint(0,200,(1000,1000));rows=cols=arange(100)</code></p>
<pre><code>In [4]: %timeit filter_indices(a,rows,cols,0)
10 loops, best of 3: 23.1 ms per loop
In [5]: %timeit filter_indices_numpy(a,rows,cols,0)
1000 loops, best of 3: 933 µs per loop
</code></pre>
<p>it's about 25X faster.</p>
|
python|arrays|python-2.7|numpy|vectorization
| 0
|
6,625
| 65,535,770
|
Solving linear vector addition for positive integer coefficients
|
<p>Say I have multiple different vectors of the same length</p>
<p>Example:</p>
<pre><code>1: [1, 2, 3, 4]
2: [5, 6, 7, 8]
3: [3, 8, 9, 10]
4: [6, 9, 12, 3]
</code></pre>
<p>And I want to figure out the optimal integer coefficients for these vectors such that the sum of the vectors is closest to a respective specified goal vector.</p>
<p>Goal Vector: <code>[55,101,115,60]</code></p>
<p>Assuming the combination only involves adding arrays together (no subtraction), how would I go about doing this? Are there any Python libraries (numpy, scikit, etc.) that would help me do this? I suspect that it is a linear algebra solution.</p>
<p>Example Combination Answer: <code>[3, 3, 3, 1, 2, 4, 1, 1, 1, 2, 3, 4]</code>
where each of the values refers to one of those arrays. (This is just a random example.)</p>
|
<p>You could write your problem as a system of linear-equations:</p>
<pre><code>a*arr1[0] + b*arr2[0] + c*arr3[0] + d*arr4[0] = res[0]
a*arr1[1] + b*arr2[1] + c*arr3[1] + d*arr4[1] = res[1]
a*arr1[2] + b*arr2[2] + c*arr3[2] + d*arr4[2] = res[2]
a*arr1[3] + b*arr2[3] + c*arr3[3] + d*arr4[3] = res[3]
#For all positive a,b,c,d.
</code></pre>
<p>Which you could then solve, if there is an exact solution.</p>
<p>If there is no exact solution, there is a <code>scipy</code> method to calculate the non-negative least squares solution to a linear matrix equation called <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.nnls.html" rel="nofollow noreferrer">scipy.optimize.nnls</a>.</p>
<pre><code>from scipy import optimize
import numpy as np
arr1 = [1, 2, 3, 4]
arr2 = [5, 6, 7, 8]
arr3 = [3, 8, 9, 10]
arr4 = [6, 9, 12, 3]
res = [55,101,115,60]
a = np.array([
[arr1[0], arr2[0], arr3[0], arr4[0]],
[arr1[1], arr2[1], arr3[1], arr4[1]],
[arr1[2], arr2[2], arr3[2], arr4[2]],
[arr1[3], arr2[3], arr3[3], arr4[3]]
])
solution,_ = optimize.nnls(a,res)
print('Coefficients before Rounding', solution)
solution = solution.round()
print('Coefficients after Rounding', solution)
print('Results', [arr1[i]*solution[0] + arr2[i]*solution[1] + arr3[i]*solution[2] + arr4[i]*solution[3] for i in range(4)])
</code></pre>
<p>This would print</p>
<pre><code>Coefficients before Rounding [0. 0.1915493 3.83943662 6.98826291]
Coefficients after Rounding [0. 0. 4. 7.]
Results [54.0, 95.0, 120.0, 61.0]
</code></pre>
<p>Pretty close, isn't it?</p>
<p>It could indeed happen that this is not the perfect solution. But as discussed in <a href="https://stackoverflow.com/questions/13898233/solving-linear-system-over-integers-with-numpy">this thread</a> "integer problems are not even simple to solve" (@seberg)</p>
|
python|arrays|numpy|linear-algebra
| 1
|
6,626
| 65,735,328
|
How to best use @tf.function decorator for class methods?
|
<p>To examplify my problem with a minimal example, suppose I would like to create a class in the spirit of</p>
<pre class="lang-py prettyprint-override"><code>class test_a:
def __init__(self, X):
self.X = X
def predict(self, a):
return a * self.X
</code></pre>
<p>Importantly, the <code>predict()</code> function should change if I assign a new <code>X</code> to an instance of <code>test_a</code>.
In this example it works fine:</p>
<pre class="lang-py prettyprint-override"><code>X = tf.ones((1, 1))
a = test_a(X)
y = tf.ones((1, 1))
a.predict(y) # output [[1.]]
# now I want to change the value of a.X
Xnew = 2 * tf.ones((1, 1))
a.X = Xnew
a.predict(y) # output [[2.]], as desired.
</code></pre>
<p>Now suppose I want to use the <code>@tf.function</code> decorator to speed up <code>predict()</code>.</p>
<pre class="lang-py prettyprint-override"><code>class test_b:
def __init__(self, X):
self.X = X
@tf.function
def predict(self, a):
return a * self.X
</code></pre>
<p>Now the following undesired behavior occurs:</p>
<pre class="lang-py prettyprint-override"><code>X = tf.ones((1, 1))
b = test_b(X)
y = tf.ones((1, 1))
b.predict(y) # output [[1.]]
# now I want to change the value of b.X
Xnew = 2 * tf.ones((1, 1))
b.X = Xnew
b.predict(y) # output is still [[1.]], but I would like it to be [[2.]]
</code></pre>
<p>The only idea I have so far is having a method <code>_predict(X, a)</code>, which I could then decorate and then call <code>_predict(self.X, a)</code> inside the (not decorated) method <code>predict(self, a)</code>.
Any help how this could be done better would be greatly appreciated.</p>
|
<p>To avoid Python side effects, <code>self.X</code> should be a <code>tf.Variable(trainable=False)</code>, and you have to use <code>tf.Variable.assign()</code> to change it.</p>
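<p>A minimal sketch of that change, keeping everything else from the question as-is:</p>
<pre class="lang-py prettyprint-override"><code>class test_b:
    def __init__(self, X):
        # store X in a tf.Variable so the traced function reads its current value
        self.X = tf.Variable(X, trainable=False)
    @tf.function
    def predict(self, a):
        return a * self.X

X = tf.ones((1, 1))
b = test_b(X)
y = tf.ones((1, 1))
b.predict(y)                     # [[1.]]
b.X.assign(2 * tf.ones((1, 1)))  # update via assign, not by rebinding b.X
b.predict(y)                     # [[2.]]
</code></pre>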
|
python|python-3.x|tensorflow|tensorflow2.0
| 0
|
6,627
| 65,533,768
|
Pandas apply function throws NotImplementedError
|
<p>I have a pretty basic df, and I want to create 2 new columns based off of some regex of one column. I created a function to do this which returned 2 values.</p>
<pre><code>def get_value(s):
result = re.findall('(?<=Value":")(\d+)\.(\d+)?(?=")', s)
if len(result) != 2:
return -1, -1
else:
matches = []
for match in result:
matches.append(match[0] + '.' + match[1])
return float(matches[0]), float(matches[1])
</code></pre>
<p>When I try this: <code>data['Test1'], data['Test2'] = zip(*data['mod_data'].apply(get_value))</code></p>
<p>It throws an error saying "NotImplementedError: isna is not defined for MultiIndex",
but if I split it into 2 different functions it works.</p>
<pre><code>def get_value1(s):
result = re.findall('(?<=Value":")(\d+)\.(\d+)?(?=")', s)
if len(result) != 2:
return -1
else:
matches = []
for match in result:
matches.append(match[0] + '.' + match[1])
return float(matches[0])
def get_value2(s):
result = re.findall('(?<=Value":")(\d+)\.(\d+)?(?=")', s)
if len(result) != 2:
return -1
else:
matches = []
for match in result:
matches.append(match[0] + '.' + match[1])
return float(matches[1])
data['From'] = data['mod_data'].apply(get_value1)
data['To'] = data['mod_data'].apply(get_value2)
</code></pre>
<p>Another thing to note is that the NotImplementedError gets thrown at the very end. I added a print statement in my get_value function, and the error gets thrown after it has processed the last row.</p>
<p>Edit: Added example df of what I am dealing with</p>
<pre><code>test = pd.DataFrame([['A', 'A1', 'Top', '[{"Value":"37.29","ID":"S1234.1","Time":"","EXPTIME_Name":"","Value":"37.01"}]'],
['B', 'B1', 'Bottom', '[{"EXPO=T10;PID=.ABCDE149;"Value":"45.29";RETICLEID=S14G1490Y2;SEQ=5A423002",Value":"56.98"}]']],
columns=['Module', 'line', 'area', 'mod_data'])
</code></pre>
<p>desired result:</p>
<pre><code> Module line ... From To
0 A A1 ... 37.29 37.01
1 B B1 ... 45.29 56.98
</code></pre>
|
<p>First, your regex was a little bit off. Change <code>'(?<=Value":")(\d+)\.(\d+)?(?=")'</code> to <code>'(?<=Value":")(\d+\.\d+)?(?=")'</code>, so that the full float is in one capture group. You were separating the part before the decimal into one group and the part after into another.</p>
<p>Then, you can use <code>str.findall</code>:</p>
<pre><code>test = pd.DataFrame([['A', 'A1', 'Top', '[{"Value":"37.29","ID":"S1234.1","Time":"","EXPTIME_Name":"","Value":"37.01"}]'],
['B', 'B1', 'Bottom', '[{"EXPO=T10;PID=.ABCDE149;"Value":"45.29";RETICLEID=S14G1490Y2;SEQ=5A423002",Value":"56.98"}]']],
columns=['Module', 'line', 'area', 'mod_data'])
test[['From', 'To']] = test['mod_data'].str.findall('(?<=Value":")(\d+\.\d+)?(?=")')
test
Out[1]:
Module line area mod_data \
0 A A1 Top [{"Value":"37.29","ID":"S1234.1","Time":"","EX...
1 B B1 Bottom [{"EXPO=T10;PID=.ABCDE149;"Value":"45.29";RETI...
From To
0 37.29 37.01
1 45.29 56.98
</code></pre>
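<p>Note that <code>str.findall</code> extracts strings, so if you need the two new columns as floats (as your original functions returned), convert them afterwards:</p>
<pre><code>test[['From', 'To']] = test[['From', 'To']].astype(float)
</code></pre>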
|
python|pandas|apply
| 1
|
6,628
| 3,195,781
|
Find n greatest numbers in a sparse matrix
|
<p>I am using sparse matrices as a means of compressing data, with loss of course. What I do is create a sparse dictionary from all the values greater than a specified threshold. I want my compressed data size to be a variable which my user can choose.</p>
<p>My problem is, I have a sparse matrix with a lot of near-zero values, and what I must do is choose a threshold so that my sparse dictionary is of a specific size (or eventually that the reconstruction error is of a specific rate).
Here's how I create my dictionary (taken from stackoverflow I think >.< ):</p>
<pre><code>n = abs(smat) > treshold #smat is flattened(1D)
i = mega_range[n] #mega range is numpy.arange(smat.shape[0])
v = smat[n]
sparse_dict = dict(izip(i,v))
</code></pre>
<p>How can I find <code>treshold</code> so that it is equal to the nth greatest value of my array (<code>smat</code>)?</p>
|
<p><code>scipy.stats.scoreatpercentile(arr,per)</code> returns the value at a given percentile:</p>
<pre><code>import scipy.stats as ss
print(ss.scoreatpercentile([1, 4, 2, 3], 75))
# 3.25
</code></pre>
<p>The value is interpolated if the desired percentile lies between two points in <code>arr</code>.</p>
<p>So if you set <code>per=100.0*(len(smat)-n)/len(smat)</code> (the percentile is on a 0-100 scale) then </p>
<pre><code>threshold = ss.scoreatpercentile(abs(smat), per)
</code></pre>
<p>should give you (close to) the nth greatest value of the array smat. </p>
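<p>If you need exactly the nth greatest value rather than an interpolated percentile, a minimal numpy-only sketch would be to sort the absolute values and index from the end:</p>
<pre><code>import numpy as np

# n is the number of entries you want to keep in the sparse dictionary
threshold = np.sort(abs(smat))[-n]
# np.partition(abs(smat), -n)[-n] avoids a full sort for large arrays
</code></pre>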
|
python|numpy|sparse-matrix
| 2
|
6,629
| 63,702,820
|
Expand a list in a DataFrame
|
<p>Lets say I have a list of numbers to concat. The results as follows:</p>
<pre>
A B ... Z
(1,a) [1] [2]
(2,b) [3] [4]
(3,c) [5,6] [7,8]
(4,d) [9] [10]
</pre>
<p>But I wish to convert it into something more presentable in a excel sheet/csv, how can i convert the DataFrame into something like this:</p>
<pre>
A B ... Z
1 a 1 2
2 b 3 4
3 c 5 7
3 c 6 8
4 d 9 10
</pre>
<p>I am trying to avoid running through each row to print it into a csv file. Is there a better method to convert the DataFrame?</p>
|
<p>Use <code>pd.Series.explode</code> and reconstruct df from new index, finally <code>concat</code>:</p>
<pre><code>s = df.apply(pd.Series.explode)
print (pd.concat([pd.DataFrame(s.index.tolist()),s.reset_index(drop=True)], axis=1))
0 1 A B
0 1 a 1 2
1 2 b 3 4
2 3 c 5 7
3 3 c 6 8
4 4 d 9 10
</code></pre>
|
pandas|dataframe
| 1
|
6,630
| 63,611,563
|
Pythonic way to add numeric columns of the individual dataframes of a groupby object
|
<p>I have a time series data that I group and I want to sum the numerical columns of all the groups together.</p>
<p><strong>Note</strong>: This is not an aggregation of column of individual groups, but a sum of corresponding cells of all the dataframes in the group object.</p>
<p>Since it's a time series data, a few columns in essence remain the same in a dataframe like <code>Region</code> and <code>Region_Code</code> and <code>Time</code> itself remains same across the dataframes.</p>
<p>My pseudo code is -</p>
<ol>
<li>Groupby <code>Region_Code</code></li>
<li>Select only the numerical columns of grouped object</li>
<li>Make Region list</li>
<li>Call the dataframe in the group object by iterating over the region list and sum</li>
<li>Make the other columns like <code>Region</code>, <code>Region_Code</code> and <code>Time</code></li>
</ol>
<p>But the problem is that when I add the called dataframe with an empty dataframe, everything becomes empty/null so eventually I have nothing.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
countries = ['United States','United States','United States','United States','United States', 'Canada', 'Canada', 'Canada', 'Canada', 'Canada', 'China', 'China', 'China', 'China', 'China']
code = ['US', 'US','US','US','US','CAN','CAN','CAN','CAN','CAN', 'CHN','CHN','CHN','CHN','CHN']
time = [1,2,3,4,5,1,2,3,4,5,1,2,3,4,5]
temp = [2.1,2.2,2.3,2.4,2.5, 3.1,3.2,3.3,3.4,3.5, 4.1,4.2,4.3,4.4,4.5]
pressure = [1.0,1.0,1.0,1.0,1.0, 1.1, 1.1, 1.1, 1.1, 1.1, 1.2,1.2,1.2,1.2,1.2]
speed = [20,21,22,23,24, 10,11,12,13,14, 30,31,32,33,34]
df = pd.DataFrame({'Region': countries, 'Time': time, 'Region_Code': code, 'Temperature': temp, 'Pressure': pressure, 'Speed': speed})
countries_grouped = df.groupby('Region_Code')[list(df.columns)[3:]]
country_list = ['US', 'CAN', 'CHN']
temp = pd.DataFrame()
for country in country_list:
temp += countries_grouped.get_group(country) ## <--- Fails
temp
# Had the above worked, the rest of the columns can be made as follows
temp['Region'] = 'All'
temp['Time'] = df['Time']
temp['Region_Code'] = 'ALL'
</code></pre>
<p>It does not look pandorable. What's the best way to do this?</p>
<p><strong>Expected Output</strong>:</p>
<pre><code> Region Time Region_Code Temperature Pressure Speed
0 All 1 ALL 9.3 3.3 60
1 All 2 ALL 9.6 3.3 63
2 All 3 ALL 9.9 3.3 66
3 All 4 ALL 10.2 3.3 69
4 All 5 ALL 10.5 3.3 72
</code></pre>
|
<p>I think you need to aggregate with <code>sum</code> - all non-numeric columns are excluded by default, so you can add them back with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>DataFrame.reindex</code></a> on the original columns, replacing the missing values with <code>ALL</code>:</p>
<pre><code>print (df.groupby('Time', as_index=False).sum())
Time Temperature Pressure Speed
0 1 9.3 3.3 60
1 2 9.6 3.3 63
2 3 9.9 3.3 66
3 4 10.2 3.3 69
4 5 10.5 3.3 72
df = df.groupby('Time', as_index=False).sum().reindex(df.columns, axis=1, fill_value='ALL')
print (df)
Region Time Region_Code Temperature Pressure Speed
0 ALL 1 ALL 9.3 3.3 60
1 ALL 2 ALL 9.6 3.3 63
2 ALL 3 ALL 9.9 3.3 66
3 ALL 4 ALL 10.2 3.3 69
4 ALL 5 ALL 10.5 3.3 72
</code></pre>
<p>EDIT: To customize the replacement of missing values, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a> with a dictionary mapping each column name to its replacement value:</p>
<pre><code>d = {'Region':'GLOBAL','Region_Code':'ALL'}
df1 = df.groupby('Time', as_index=False).sum().reindex(df.columns, axis=1).fillna(d)
print (df1)
Region Time Region_Code Temperature Pressure Speed
0 GLOBAL 1 ALL 9.3 3.3 60
1 GLOBAL 2 ALL 9.6 3.3 63
2 GLOBAL 3 ALL 9.9 3.3 66
3 GLOBAL 4 ALL 10.2 3.3 69
4 GLOBAL 5 ALL 10.5 3.3 72
</code></pre>
|
python|python-3.x|pandas|dataframe
| 4
|
6,631
| 63,477,017
|
How to obtain all values assoicated with a multi-column grouping in pandas
|
<p>First off, I apologize if this question is not clear. I will extrapolate on what I mean here.</p>
<p>Basically, I am looking for a way to obtain all values in one column that correspond to a multi-column grouping. My original dataframe has three columns: latitude, longitude, and building ID. There are different building IDs that share the same latitude/longitude coordinates. I want to group together the latitude/longitude columns and indicate each building ID that is associated with those coordinates.</p>
<p>Right now, my dataframe looks like this:</p>
<pre><code> BldgID | Latitude | Longitude
---------------------------------------------------------
1 30.48583 -70.57566
2 27.87265 -67.28715
3 30.48583 -70.57566
4 45.26657 -75.14273
</code></pre>
<p>As can be seen, each building ID is paired with its latitude/longitude coordinates. Two building IDs have the same coordinates. Because of that, I would like to group together the lat / lon columns and indicate all the building IDs that are associated with a set of coordinates.</p>
<p>I would like the output to look like this:</p>
<pre><code> Lat/Lon | BldgID
-------------------------------------------------------
('30.48583', '-70.57566') 1
('30.48583', '-70.57566') 3
('30.48583', '-70.57566') 9
('27.87265', '-67.28715') 2
('27.87265', '-67.28715') 6
('45.26657', '-75.14273') 4
('48.19456', '-81.23281') 12
</code></pre>
<p>You can see that building IDs 1, 3, and 9 are paired with their shared latitude / longitude coordinates. IDs 2 and 6 are also paired together. IDs 4 and 12 each have their own set of coordinates.</p>
<p>If I loop through the column grouping, it will print out which IDs correspond with the lat/lon coordinates, but I would like to capture this in a dataframe.</p>
<p>At first, I tried to do:</p>
<pre><code>for j in df.groupby(['Latitude', 'Longitude']):
data = pd.DataFrame(j)
</code></pre>
<p>But that wasn't working for me. I am sure there is an efficient way to do this.</p>
<p>Thank you for your help.</p>
|
<p>You could try with <code>set_index</code>, <code>agg</code> and <code>sort_values</code>:</p>
<pre><code>df.set_index('BldgID').agg(tuple,1)\
.reset_index().rename(columns={0:'Lat/Lon'}).sort_values('Lat/Lon')
</code></pre>
<p>Output:</p>
<pre><code> BldgID Lat/Lon
1 2 (27.87265, -67.28715)
0 1 (30.48583, -70.57566)
2 3 (30.48583, -70.57566)
3 4 (45.26657, -75.14273)
</code></pre>
|
python|pandas
| 1
|
6,632
| 63,611,749
|
Python Polygon external boundary
|
<p>Given some coordinates I'm trying to draw the external boundary of a polygon. But with the code that I attach you below:</p>
<pre><code>from shapely.geometry import Polygon
lat_point_list = [42.108288,42.13397,42.087456,42.085308000000005,42.087456,42.13397,42.095806,
42.085308000000005,42.10305,42.108288,42.10305,42.095806]
lon_point_list = [14.272663, 14.218105, 14.185248999999999,14.213285999999998,14.185248999999999,
14.218105, 14.261092999999999, 14.213285999999998, 14.268307, 14.272663, 14.268307, 14.261092999999999]
polygon_geom = Polygon(zip(lon_point_list, lat_point_list))
polygon_geom
</code></pre>
<p>I Obtain:</p>
<p><a href="https://i.stack.imgur.com/ovEwC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ovEwC.png" alt="enter image description here" /></a></p>
<p>How could I get only the external boundary, without the crossed lines within the boundary?</p>
|
<p>You need to decide whether your points are ordered or it is a point cloud.</p>
<p>In the first case you need to ensure that each segment doesn't cross any other segment. Otherwise, the points are not in order.</p>
<p>In the second case you only have an unordered set of points or point cloud and you may be interested in finding its <em>convex hull</em>, where not all points will (in general) be part of the polygon.</p>
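<p>For the point-cloud case, shapely can compute the convex hull directly. A minimal sketch reusing the point lists from the question:</p>
<pre><code>from shapely.geometry import MultiPoint

# treat the coordinates as an unordered point cloud
points = MultiPoint(list(zip(lon_point_list, lat_point_list)))
hull = points.convex_hull  # a Polygon with no self-intersections
hull
</code></pre>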
|
python|google-maps|coordinates|polygon|geopandas
| 1
|
6,633
| 21,516,266
|
Store empty numpy array in h5py
|
<p>I'd like to write some data to a HDF5 file (since I have huge datasets and was told that HDF5 works well with those kinds of things).</p>
<p>I have a Python 2.7 dictionary with some values and some numpy arrays. What I'd like to do is simply dump that dictionary into the HDF5. No groups or whatever, just put the key-value pairs into the HDF5.</p>
<p>However, using h5py, if I write an empty array (or list) into the file, I get:</p>
<pre><code>>>> file["test"] = np.array([])
ValueError: zero sized dimension for non-unlimited dimension (Invalid arguments to routine: Bad value)
</code></pre>
<p>I can't believe that HDF5 wouldn't allow me to put empty arrays into it. It just so happens that sometimes my list is empty. Can't help it.</p>
<p>What am I missing?</p>
<p>Thanks :-)</p>
|
<p>This is what I use (it might be unnecessary for you since you don't require groups, but you might change your mind):</p>
<pre><code>def save_group(outfile, d, group):
with h5py.File(outfile, 'a') as g:
for key in d:
value = d[key]
try:
group.require_dataset(key, value.shape, value.dtype)
except TypeError as e:
del group[key]
group.create_dataset(key, value.shape, value.dtype)
group[key][...] = value
return
</code></pre>
<p>To get rid of the group parameter:</p>
<pre><code>group = g.require_group('/')
</code></pre>
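<p>Regarding the empty-array error itself: the message complains about a "zero sized dimension for non-unlimited dimension", so one possible workaround (a sketch, not tested against every h5py version) is to make that dimension unlimited via <code>maxshape</code>:</p>
<pre><code>import h5py

with h5py.File('test.h5', 'w') as f:
    # zero-length dataset with an unlimited first dimension
    dset = f.create_dataset('test', shape=(0,), maxshape=(None,), dtype='float64')
    # it can be grown later with dset.resize((new_length,)) if data shows up
</code></pre>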
|
python|numpy|hdf5|h5py
| 0
|
6,634
| 29,828,675
|
Python Pandas : Group by one column and see the content of all columns?
|
<p>I have a dataframe like this:</p>
<pre><code>org group count
org1 1 2
org2 1 2
org3 2 1
org4 3 3
org5 3 3
org6 3 3
</code></pre>
<p>and this is what I would like to have , one entry from each unique groups from the 'group' column:</p>
<pre><code>org group count
org1 1 2
org3 2 1
org4 3 3
</code></pre>
<p>I am using the following group by command but I still get to see all of the rows:</p>
<pre><code>df.groupby('group').head()
</code></pre>
<p>Does anybody know how to get the expected results?</p>
|
<p>You could <code>drop_duplicates</code> on <code>group</code>?</p>
<pre><code>In [172]: df.drop_duplicates('group')
Out[172]:
org group count
0 org1 1 2
2 org3 2 1
3 org4 3 3
</code></pre>
<p>Also, <code>df.drop_duplicates(['group', 'count'])</code> works in this case.</p>
<p>However, this may not be <strike>the best</strike> a very flexible method. @EdChum's <a href="https://stackoverflow.com/a/29828769/2137255">Answer</a> provides directions for flexibility.</p>
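<p>For reference, a minimal groupby-based sketch along those more flexible lines:</p>
<pre><code># one row per group; df.groupby('group').head(1) behaves similarly
df.groupby('group', as_index=False).first()
</code></pre>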
|
python|pandas|group-by|aggregation
| 3
|
6,635
| 29,960,089
|
How to retrieve values after a function crashes?
|
<p>How can I retrieve a value from a function after it crashes? E.g. how would I find the value of <code>a</code> after the function completes/crashes.</p>
<pre><code>>>> def a_is_3():
... a=3
... print 'a='+str(a)
...
>>> a_is_3()
a=3
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'a' is not defined
</code></pre>
<p>My real case includes massive numpy arrays created and passed around through dozens of modules. The arrays are still taking up memory after the code errors, but I can't access them.</p>
<p>Is there a way to retrieve what the variables were at the moment when the function crashed? I want to avoid having to add debug code and re-run the lengthy execution to capture the variables. It seems like a particularly useful trick for troubleshooting.</p>
|
<p>Try using <code>global</code>:</p>
<pre><code>>>> def a_is_3():
... global a
... a = 3
... print('a=' + str(a))
...
>>> a_is_3()
a=3
>>> a
3
>>>
</code></pre>
<p>This will work quite well in an interactive session - but if you move this to a script later make sure you are aware of the implications.</p>
<p>Obviously, the variable <code>a</code> gets overridden in global scope, so be aware of that too.</p>
|
python|numpy
| -1
|
6,636
| 53,426,968
|
Matrix in which rows are coordinates of points of a meshgrid
|
<p>In <a href="https://jakevdp.github.io/PythonDataScienceHandbook/05.07-support-vector-machines.html" rel="nofollow noreferrer">this tutorial</a> I find code like</p>
<pre><code>import numpy as np
x = np.linspace(-5, 5, 20)
y = np.linspace(-1, 1, 10)
Y, X = np.meshgrid(y, x)
xy = np.vstack([X.ravel(), Y.ravel()]).T
print(xy)
</code></pre>
<p><strong>Isn't there a shorter way to get a matrix where rows are coordinates of points of the meshgrid, than using <code>meshgrid</code>+<code>vstack</code>+<code>ravel</code>+<code>transpose</code>?</strong></p>
<p>Output:</p>
<pre><code>[[-5. -1. ]
[-5. -0.77777778]
[-5. -0.55555556]
[-5. -0.33333333]
[-5. -0.11111111]
[-5. 0.11111111]
[-5. 0.33333333]
[-5. 0.55555556]
[-5. 0.77777778]
[-5. 1. ]
[-4.47368421 -1. ]
[-4.47368421 -0.77777778]
...
</code></pre>
|
<p>You can skip <code>meshgrid</code> and get what you want more directly by just taking the <a href="https://docs.python.org/3.6/library/itertools.html#itertools.product" rel="nofollow noreferrer">cartesian product</a> of <code>x</code> and <code>y</code>:</p>
<pre><code>from itertools import product
import numpy as np
x = np.linspace(-5, 5, 20)
y = np.linspace(-1, 1, 10)
xy = np.array(list(product(x,y)))
print(xy)
</code></pre>
<p>Output:</p>
<pre><code>[[-5. -1. ]
[-5. -0.77777778]
[-5. -0.55555556]
[-5. -0.33333333]
[-5. -0.11111111]
[-5. 0.11111111]
[-5. 0.33333333]
[-5. 0.55555556]
[-5. 0.77777778]
[-5. 1. ]
[-4.47368421 -1. ]
[-4.47368421 -0.77777778]
[-4.47368421 -0.55555556]
[-4.47368421 -0.33333333]
...
]
</code></pre>
|
python|arrays|numpy|mesh
| 2
|
6,637
| 20,251,684
|
Python: Replace every 5th value of numpy array
|
<p>I've got a numpy array full of floats. How can I replace every 5th value with <code>np.inf*0</code> so that I get a <code>NaN</code> value at every 5th index?</p>
<pre><code>my_array = np.array([5.0, 8.1, 3.2, 2.7, 8.4, 4.9 ...])
</code></pre>
<p>to</p>
<pre><code>my_array = np.array([5.0, 8.1, 3.2, 2.7, NaN, 4.9 ...])
</code></pre>
<p>and so on.</p>
|
<p>How about using slicing and striding? <code>L[4::5]</code> takes every 5th element from list <code>L</code>, starting at index 4 so that the 5th, 10th, ... values are selected:</p>
<pre><code>>>> my_array = np.arange(20.)
>>> my_array[4::5] = np.nan
>>> my_array
array([ 0., 1., 2., 3., nan, 5., 6., 7., 8., nan, 10.,
11., 12., 13., nan, 15., 16., 17., 18., nan])
</code></pre>
|
python|numpy
| 7
|
6,638
| 71,855,464
|
How to convert .dat to .csv using python? the data is being expressed in one column
|
<p>Hi, I'm trying to convert a .dat file to a .csv file,
but I have a problem with it.
I have a .dat file whose header (column names) looks like:</p>
<pre><code>region GPS name ID stop1 stop2 stopname1 stopname2 time1 time2 stopgps1 stopgps2
</code></pre>
<p>Its delimiter is a tab.</p>
<p>So I want to convert the dat file to a csv file,
but the data keeps coming out in one column.</p>
<p>I tried the following code:</p>
<pre><code>import pandas as pd
with open('file.dat', 'r') as f:
df = pd.DataFrame([l.rstrip() for l in f.read().split()])
</code></pre>
<p>and</p>
<pre><code>with open('file.dat', 'r') as input_file:
lines = input_file.readlines()
newLines = []
for line in lines:
newLine = line.strip('\t').split()
newLines.append(newLine)
with open('file.csv', 'w') as output_file:
file_writer = csv.writer(output_file)
file_writer.writerows(newLines)
</code></pre>
<p>But all the data is being expressed in one column
(I want 15 columns and 80,000 rows, but I get 1 column and 1,200,000 rows).
I want to convert this into a csv file with the original data structure.
Where is my mistake?</p>
<p>Please help me... It's my first time dealing with data in Python.</p>
|
<p>If you're already using pandas, you can just use <code>pd.read_csv()</code> with another delimiter:</p>
<pre><code>df = pd.read_csv("file.dat", sep="\t")
df.to_csv("file.csv")
</code></pre>
<p>See also the <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer">documentation for read_csv</a> and <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer">to_csv</a></p>
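<p>If you don't want the integer row index written into the output file, pass <code>index=False</code>:</p>
<pre><code>df.to_csv("file.csv", index=False)
</code></pre>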
|
python|pandas|csv|export-to-csv
| 2
|
6,639
| 71,815,935
|
Running python script with multiple values of command line arguments
|
<p>I have a python script for pre-processing audio and it has frame length, frame step and fft length as the command line arguments. I am able to run the code if I have single values of these arguments. I wanted to know if there is a way in which I can run the python script with multiple values of the arguments? For example, get the output if values of fft lengths are 128, 256 and 512 instead of just one value.</p>
<p>The code for pre-processing is as follows:</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf
from scipy.io import wavfile
import os
import time
import pickle
import random
import argparse
import configlib
from configlib import config as C
import mfccwithpaddingandcmd
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.preprocessing import MultiLabelBinarizer
from tensorflow import keras
from tensorflow.python.keras import Sequential
from tensorflow.keras.layers import Dense,Conv2D,MaxPooling2D,Flatten,Dropout,BatchNormalization,LSTM,Lambda,Reshape,Bidirectional,GRU
from tensorflow.keras.callbacks import TensorBoard
start = time.time()
classes = ['blinds','fan','light','music','tv']
#dire = r"/mnt/beegfs/home/gehani/test_speech_command/"
parser = configlib.add_parser("Preprocessing config")
parser.add_argument("-dir","--dire", metavar="", help="Directory for the audio files")
def pp():
data_list=[] #To save paths of all the audio files.....all audio files in list format in data_list
#data_list-->folder-->files in folder
for index,label in enumerate(classes):
class_list=[]
if label=='silence': #creating silence folder and storing 1sec noise audio files
silence_path = os.path.join(C["dire"],'silence')
if not os.path.exists(silence_path):
os.mkdir(silence_path)
silence_stride = 2000
#sample_rate = 16000
folder = os.path.join(C["dire"],'_background_noise_') #all silence are kept in the background_noise folder
for file_ in os.listdir(folder):
if '.wav' in file_:
load_path = os.path.join(folder,file_)
sample_rate,y = wavfile.read(load_path)
for i in range(0,len(y)-sample_rate,silence_stride):
file_path = "silence/{}_{}.wav".format(file_[:-4],i)
y_slice = y[i:i+sample_rate]
wavfile.write(os.path.join(C["dire"],file_path),sample_rate,y_slice)
class_list.append(file_path)
else:
folder = os.path.join(C["dire"],label)
for file_ in os.listdir(folder):
file_path = '{}/{}'.format(label,file_) #Ex: up/c9b653a0_nohash_2.wav
class_list.append(file_path)
random.shuffle(class_list) #To shuffle files
data_list.append(class_list) #if not a silence file then just append to the datalist
X = []
Y = []
preemphasis = 0.985
print("Feature Extraction Started")
for i,class_list in enumerate(data_list): #datalist = all files, class list = folder name in datalist, sample = path to the audio file in that particular class list
for j,samples in enumerate(class_list): #samples are of the form classes_name/audio file
if(samples.endswith('.wav')):
sample_rate,audio = wavfile.read(os.path.join(C["dire"],samples))
if(audio.size<sample_rate):
audio = np.pad(audio,(sample_rate-audio.size,0),mode="constant")
#print("****")
#print(sample_rate)
#print(preemphasis)
#print(audio.shape)
coeff = mfccwithpaddingandcmd.mfcc(audio,sample_rate,preemphasis) # 0.985 = preemphasis
#print("****")
#print(coeff)
#print("****")
X.append(coeff)
#print(X)
if(samples.split('/')[0] in classes):
Y.append(samples.split('/')[0])
elif(samples.split('/')[0]=='_background_noise_'):
Y.append('silence')
#print(len(X))
#print(len(Y))
#X= coefficient array and Y = name of the class
A = np.zeros((len(X),X[0].shape[0],X[0][0].shape[0]),dtype='object')
for i in range(0,len(X)):
A[i] = np.array(X[i]) #Converting list X into array A
end1 = time.time()
print("Time taken for feature extraction:{}sec".format(end1-start))
MLB = MultiLabelBinarizer() # one hot encoding for converting labels into binary form
MLB.fit(pd.Series(Y).fillna("missing").str.split(', '))
Y_MLB = MLB.transform(pd.Series(Y).fillna("missing").str.split(', '))
MLB.classes_ #Same like classes array
print(Y_MLB.shape)
pickle_out = open("A_all.pickle","wb") #Writes array A to a file A.pickle
pickle.dump(A, pickle_out) #pickle is the file containing the extracted features
pickle_out.close()
pickle_out = open("Y_all.pickle","wb")
pickle.dump(Y_MLB, pickle_out)
pickle_out.close()
pickle_in = open("Y_all.pickle","rb")
Y = pickle.load(pickle_in)
X = tf.keras.utils.normalize(X)
X_train,X_valtest,Y_train,Y_valtest = train_test_split(X,Y,test_size=0.2,random_state=37)
X_val,X_test,Y_val,Y_test = train_test_split(X_valtest,Y_valtest,test_size=0.5,random_state=37)
print(X_train.shape,X_val.shape,X_test.shape,Y_train.shape,Y_val.shape,Y_test.shape)
if __name__ == "__main__":
configlib.parse(save_fname="last_arguments.txt")
print("Running with configuration:")
configlib.print_config()
pp()
</code></pre>
<p>The code for MFCC is as follows:</p>
<pre><code>import tensorflow as tf
import scipy.io.wavfile as wav
import numpy as np
import matplotlib.pyplot as plt
import pickle
import argparse
import configlib
from configlib import config as C
# Configuration arguments
parser = configlib.add_parser("MFCC config")
parser.add_argument("-fl","--frame_length", type=int, default=400, metavar="", help="Frame Length")
parser.add_argument("-fs","--frame_step", type=int, default=160, metavar="", help="Frame Step")
parser.add_argument("-fft","--fft_length", type=int, default=512, metavar="", help="FFT length")
#args = parser.parse_args()
def Preemphasis(signal,pre_emp):
return np.append(signal[0],signal[1:]-pre_emp*signal[:-1])
def Paddinggg(framelength,framestep,samplerate):
frameStart = np.arange(0,samplerate,framestep)
frameEnd = frameStart + framelength
padding = min(frameEnd[(frameEnd > samplerate)]) - samplerate
return padding
def mfcc(audio,sample_rate,pre_emp):
audio = np.pad(audio,(Paddinggg(C["frame_length"],C["frame_step"],sample_rate),0),mode='reflect')
audio = audio.astype('float32')
#Normalization
audio = tf.keras.utils.normalize(audio)
#Preemphasis
audio = Preemphasis(audio,pre_emp)
stfts = tf.signal.stft(audio,C["frame_length"],C["frame_step"],C["fft_length"],window_fn=tf.signal.hann_window)
spectrograms = tf.abs(stfts)
num_spectrogram_bins = stfts.shape[-1]
lower_edge_hertz, upper_edge_hertz, num_mel_bins = 0.0, sample_rate/2.0, 32
linear_to_mel_weight_matrix = tf.signal.linear_to_mel_weight_matrix(num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,upper_edge_hertz)
mel_spectrograms = tf.tensordot(spectrograms, linear_to_mel_weight_matrix, 1)
mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(linear_to_mel_weight_matrix.shape[-1:]))
# Compute a stabilized log to get log-magnitude mel-scale spectrograms.
log_mel_spectrograms = tf.math.log(mel_spectrograms + 1e-6)
# Compute MFCCs from log_mel_spectrograms and take the first 13.
return log_mel_spectrograms
print("End")
</code></pre>
<p>And the code for configlib is as follows:</p>
<pre><code>from typing import Dict, Any
import logging
import pprint
import sys
import argparse
# Logging for config library
logger = logging.getLogger(__name__)
# Our global parser that we will collect arguments into
parser = argparse.ArgumentParser(description=__doc__, fromfile_prefix_chars="@")
# Global configuration dictionary that will contain parsed arguments
# It is also this variable that modules use to access parsed arguments
config:Dict[str, Any] = {}
def add_parser(title: str, description: str = ""):
"""Create a new context for arguments and return a handle."""
return parser.add_argument_group(title, description)
def parse(save_fname: str = "") -> Dict[str, Any]:
"""Parse given arguments."""
config.update(vars(parser.parse_args()))
logging.info("Parsed %i arguments.", len(config))
# Optionally save passed arguments
if save_fname:
with open(save_fname, "w") as fout:
fout.write("\n".join(sys.argv[1:]))
logging.info("Saving arguments to %s.", save_fname)
return config
def print_config():
"""Print the current config to stdout."""
pprint.pprint(config)
</code></pre>
<p>I use the following command to run my python file:</p>
<p>python3.7 preprocessingwithpaddingandcmd.py -fl 1103 -fs 88 -fft 512 -dir /mnt/beegfs/home/gehani/appliances_audio_one_channel</p>
<p>Should I be writing a shell script, or does Python have some options for this?</p>
<p><strong>EDIT 1</strong></p>
<p>I tried using</p>
<pre><code>parser.add_argument('-fft', '--fft_length', type=int, default=[], nargs=3)
</code></pre>
<p>for getting fft length from the command line and used the command</p>
<p>run preprocessingwithpaddingandcmd -dir <em>filepath</em> -fl 1765 -fs 1102 -fft 512 218 64</p>
<p>to run it. But it gives me this error: <code>ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()</code></p>
<p>Can anyone please help?</p>
|
<p>I found you can do it with <a href="https://python-speech-features.readthedocs.io/en/latest/" rel="nofollow noreferrer">mfcc features extraction</a>.<br />
You can create your own MFCC feature extraction, or you can simply limit the window length and number of cepstra, which is enough for simple tasks unless you need logarithmic scales, where you could use a target matrix (convolution) or something else.<br />
The logarithm comes in when you use the FFT or an alternative derivation, but MFCC is only the extraction step; I provide a sample of the output in the picture.</p>
<p><strong>[ Sample ]:</strong></p>
<pre><code>from python_speech_features import mfcc
from python_speech_features import logfbank
import scipy.io.wavfile as wav
import tensorflow as tf
import matplotlib.pyplot as plt
(rate,sig) = wav.read("F:\\temp\\Python\\Speech\\temple_of_love-sisters_of_mercy.wav")
mfcc_feat = mfcc(signal=sig, samplerate=rate, winlen=0.025, winstep=0.01, numcep=13, nfilt=26, nfft=512, lowfreq=0, highfreq=None, preemph=0.97, ceplifter=22, appendEnergy=True)
fbank_feat = logfbank(sig,rate)
plt.plot( mfcc_feat[50:42000,0] )
plt.xlabel("sample")
plt.show()
plt.close()
input('...')
</code></pre>
<p><a href="https://i.stack.imgur.com/CoDno.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CoDno.png" alt="Sample" /></a></p>
|
python|tensorflow|mfcc
| 0
|
6,640
| 72,089,220
|
Filter dataframe to get name of the youngest of a particular gender
|
<p>I wasted 2 hours and couldn't find a solution to my problem.
I need to filter from a csv only the name of the female who has the minimal age.</p>
<p>I have only done the part below, and don't know how I can combine it into one correct solution. Can you please support me, and say which attributes/functions can help me with my problem?</p>
<p>Columns = ['name', 'gender', 'age', 'height', 'weight']</p>
<pre><code>frame = pd.read_csv("h03.csv")
out = pd.DataFrame(data=frame)
filtr = frame[frame['gender'] == 'F']
min_age = filtr['age']
ne = frame.loc[frame.gender == 'F']
ne = frame[frame['age']==frame['age']].min()
print(ne)
</code></pre>
|
<p>Without seeing more of your data, this should be a good enough starting point for you to plug in your own column names and data.</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(
{
'Gender':['M', 'F', 'M', 'F', 'M', 'F'],
'Age':[20, 21, 21, 13, 22, 13]
}
)
df = df.loc[df['Gender'] == 'F']
df['Check'] = np.where(df['Age'] == df['Age'].min(), True, False)
df = df.loc[df['Check'] == True]
df
</code></pre>
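<p>A more direct sketch against the original columns (<code>name</code>, <code>gender</code>, <code>age</code>) would filter first and then select the minimum-age row(s):</p>
<pre><code>females = frame[frame['gender'] == 'F']
youngest_name = females.loc[females['age'] == females['age'].min(), 'name']
print(youngest_name)
</code></pre>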
|
python|pandas|dataframe|numpy|filtering
| 0
|
6,641
| 16,947,336
|
binning a dataframe in pandas in Python
|
<p>Given the following dataframe in pandas:</p>
<pre><code>import numpy as np
import pandas
df = pandas.DataFrame({"a": np.random.random(100), "b": np.random.random(100), "id": np.arange(100)})
</code></pre>
<p>where <code>id</code> is an id for each point consisting of an <code>a</code> and <code>b</code> value, how can I bin <code>a</code> and <code>b</code> into a specified set of bins (so that I can then take the median/average value of <code>a</code> and <code>b</code> in each bin)? <code>df</code> might have <code>NaN</code> values for <code>a</code> or <code>b</code> (or both) for any given row in <code>df</code>.</p>
<p>Here's a better example using Joe Kington's solution with a more realistic <code>df</code>. The thing I'm unsure about is how to access the <code>df.b</code> elements for each <code>df.a</code> group below:</p>
<pre><code>a = np.random.random(20)
df = pandas.DataFrame({"a": a, "b": a + 10})
# bins for df.a
bins = np.linspace(0, 1, 10)
# bin df according to a
groups = df.groupby(np.digitize(df.a,bins))
# Get the mean of a in each group
print groups.mean()
## But how to get the mean of b for each group of a?
# ...
</code></pre>
|
<p>There may be a more efficient way (I have a feeling <code>pandas.crosstab</code> would be useful here), but here's how I'd do it:</p>
<pre><code>import numpy as np
import pandas
df = pandas.DataFrame({"a": np.random.random(100),
"b": np.random.random(100),
"id": np.arange(100)})
# Bin the data frame by "a" with 10 bins...
bins = np.linspace(df.a.min(), df.a.max(), 10)
groups = df.groupby(np.digitize(df.a, bins))
# Get the mean of each bin:
print groups.mean() # Also could do "groups.aggregate(np.mean)"
# Similarly, the median:
print groups.median()
# Apply some arbitrary function to aggregate binned data
print groups.aggregate(lambda x: np.mean(x[x > 0.5]))
</code></pre>
<hr>
<p>Edit: As the OP was asking specifically for just the means of <code>b</code> binned by the values in <code>a</code>, just do </p>
<pre><code>groups.mean().b
</code></pre>
<p>Also if you wanted the index to look nicer (e.g. display intervals as the index), as they do in @bdiamante's example, use <code>pandas.cut</code> instead of <code>numpy.digitize</code>. (Kudos to bdiamante. I didn't realize <code>pandas.cut</code> existed.)</p>
<pre><code>import numpy as np
import pandas
df = pandas.DataFrame({"a": np.random.random(100),
"b": np.random.random(100) + 10})
# Bin the data frame by "a" with 10 bins...
bins = np.linspace(df.a.min(), df.a.max(), 10)
groups = df.groupby(pandas.cut(df.a, bins))
# Get the mean of b, binned by the values in a
print groups.mean().b
</code></pre>
<p>This results in:</p>
<pre><code>a
(0.00186, 0.111] 10.421839
(0.111, 0.22] 10.427540
(0.22, 0.33] 10.538932
(0.33, 0.439] 10.445085
(0.439, 0.548] 10.313612
(0.548, 0.658] 10.319387
(0.658, 0.767] 10.367444
(0.767, 0.876] 10.469655
(0.876, 0.986] 10.571008
Name: b
</code></pre>
|
python|numpy|pandas
| 63
|
6,642
| 17,913,330
|
Fitting data using UnivariateSpline in scipy python
|
<p>I have some experimental data to which I am trying to fit a curve using the UnivariateSpline function in scipy. The data looks like: </p>
<pre><code> x y
13 2.404070
12 1.588134
11 1.760112
10 1.771360
09 1.860087
08 1.955789
07 1.910408
06 1.655911
05 1.778952
04 2.624719
03 1.698099
02 3.022607
01 3.303135
</code></pre>
<p>Here is what I am doing:</p>
<pre><code>import matplotlib.pyplot as plt
from scipy import interpolate
yinterp = interpolate.UnivariateSpline(x, y, s = 5e8)(x)
plt.plot(x, y, 'bo', label = 'Original')
plt.plot(x, yinterp, 'r', label = 'Interpolated')
plt.show()
</code></pre>
<p>That's how it looks: </p>
<p><img src="https://i.imgur.com/8rMKC3W.png" alt="Curve fit"></p>
<p>I was wondering if anyone has thoughts on other curve fitting options which scipy might have? I am relatively new to scipy. </p>
<p>Thanks!</p>
|
<p>There are a few issues.</p>
<p>The first issue is the order of the x values. From the documentation for <code>scipy.interpolate.UnivariateSpline</code> we find</p>
<pre><code>x : (N,) array_like
1-D array of independent input data. MUST BE INCREASING.
</code></pre>
<p>Stress added by me. For the data you have given the x is in the reversed order.
To debug this it is useful to use a "normal" spline to make sure everything makes sense. </p>
<p>The second issue, and the one more directly relevant for your issue, relates to the s parameter. What does it do? Again from the documentation we find</p>
<pre><code>s : float or None, optional
Positive smoothing factor used to choose the number of knots. Number
of knots will be increased until the smoothing condition is satisfied:
sum((w[i]*(y[i]-s(x[i])))**2,axis=0) <= s
If None (default), s=len(w) which should be a good value if 1/w[i] is
an estimate of the standard deviation of y[i]. If 0, spline will
interpolate through all data points.
</code></pre>
<p>So s determines how close the interpolated curve must come to the data points, in the least squares sense. If we set the value very large then the spline does not need to come near the data points.</p>
<p>As a complete example consider the following</p>
<pre><code>import scipy.interpolate as inter
import numpy as np
import pylab as plt
x = np.array([13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1])
y = np.array([2.404070, 1.588134, 1.760112, 1.771360, 1.860087,
1.955789, 1.910408, 1.655911, 1.778952, 2.624719,
1.698099, 3.022607, 3.303135])
xx = np.arange(1,13.01,0.1)
s1 = inter.InterpolatedUnivariateSpline (x, y)
s1rev = inter.InterpolatedUnivariateSpline (x[::-1], y[::-1])
# Use a smallish value for s
s2 = inter.UnivariateSpline (x[::-1], y[::-1], s=0.1)
s2crazy = inter.UnivariateSpline (x[::-1], y[::-1], s=5e8)
plt.plot (x, y, 'bo', label='Data')
plt.plot (xx, s1(xx), 'k-', label='Spline, wrong order')
plt.plot (xx, s1rev(xx), 'k--', label='Spline, correct order')
plt.plot (xx, s2(xx), 'r-', label='Spline, fit')
# Uncomment to get the poor fit.
#plt.plot (xx, s2crazy(xx), 'r--', label='Spline, fit, s=5e8')
plt.minorticks_on()
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.show()
</code></pre>
<p><img src="https://i.stack.imgur.com/OyZ0G.png" alt="Result from example code"></p>
|
python|numpy|scipy|curve-fitting
| 45
|
6,643
| 18,043,849
|
python pandas organizing multidimensional data into an object
|
<p>Having historical data from a stock exchange, with a number of stocks, and a number of a given stock's attributes (open, high, low, close, volume), I end up effectively having 3 dimensions in my data, i.e., <code>time stamp</code>, <code>stock's ticker</code> and the <code>attributes</code>. For a single stock (2D), I'd create one <code>pd.DataFrame</code>, but how should I go (efficiently and generically) about putting such data for many stocks into a single object? Is <code>pd.DataFrame</code> with multi-indexing the best possible solution?</p>
|
<p>I recommend you use a <a href="http://pandas.pydata.org/pandas-docs/dev/dsintro.html#panel" rel="nofollow">Panel</a>, for example:</p>
<pre><code>>>> from pandas.io.data import DataReader
>>> from pandas import Panel, DataFrame
>>> symbols = ['AAPL', 'GLD', 'SPX', 'MCD']
>>> data = dict((symbol, DataReader(symbol, "yahoo", pause=1)) for symbol in symbols)
>>> panel = Panel(data).swapaxes('items', 'minor')
>>> closing = panel['Close'].dropna()
>>> closing.head()
AAPL GLD MCD SPX
Date
2010-01-04 214.01 109.80 62.78 1132.99
2010-01-05 214.38 109.70 62.30 1136.52
2010-01-06 210.97 111.51 61.45 1137.14
2010-01-07 210.58 110.82 61.90 1141.69
2010-01-08 211.98 111.37 61.84 1144.98
</code></pre>
<p>If you want to see more take a look at <a href="http://nbviewer.ipython.org/5338034" rel="nofollow">this</a> example I made for a course.</p>
|
python|pandas
| 3
|
6,644
| 55,415,663
|
Is there a faster way to convert big file from hexa to binary and binary to int?
|
<p>I have a big DataFrame (1999048 rows and 1 column) with hexadecimal data. I want to put each line in binary, cut it into pieces and translate each piece into decimal format. </p>
<p>I tried this: </p>
<pre><code>for i in range (len(df.index)):
hexa_line=hex2bin(str(f1.iloc[i]))[::-1]
channel = int(hexa_line[0:3][::-1], 2)
edge = int(hexa_line[3][::-1], 2)
time = int(hexa_line[4:32][::-1], 2)
sweep = int(hexa_line[32:48][::-1], 2)
tag = int(hexa_line[48:63][::-1], 2)
datalost = int(hexa_line[63][::-1], 2)
line=np.array([[channel, edge, time, sweep, tag, datalost]])
tab=np.concatenate((tab, line), axis=0)
</code></pre>
<p>But it is really, really slow... Is there a faster way to do that?</p>
|
<p>The only thing I can imagine helping a lot would be changing these lines:</p>
<pre><code>line=np.array([[channel, edge, time, sweep, tag, datalost]])
tab=np.concatenate((tab, line), axis=0)
</code></pre>
<p>Certainly in pandas, and I think also in numpy, concatenating is an expensive thing to do, and its cost depends on the total size of both arrays (rather than, say, <code>list.append</code>).</p>
<p>I think what this does is re-write the entire array <code>tab</code> each time you call it. Perhaps you could try appending each line to a list and then concatenating the whole list together.</p>
<p>eg something more like this:</p>
<pre><code>tab = []
for i in range (len(df.index)):
hexa_line=hex2bin(str(f1.iloc[i]))[::-1]
channel = int(hexa_line[0:3][::-1], 2)
edge = int(hexa_line[3][::-1], 2)
time = int(hexa_line[4:32][::-1], 2)
sweep = int(hexa_line[32:48][::-1], 2)
tag = int(hexa_line[48:63][::-1], 2)
datalost = int(hexa_line[63][::-1], 2)
line=np.array([[channel, edge, time, sweep, tag, datalost]])
tab.append(line)
final_tab = np.concatenate(tab, axis=0)
# or whatever the syntax is :p
</code></pre>
|
python|pandas|performance
| 0
|
6,645
| 55,483,456
|
How do I match a list of strings to a list of of filenames so I can save those files into one master file?
|
<p>I have a list of barcodes. I want to read and append files from a folder that match the barcode, but of course the barcodes are not a 1-to-1 match.</p>
<p>Example of the Barcode is <code>07002991H3</code> and an Example of the File Name is <code>07002991H3001</code>.</p>
<p>I am able to match the barcodes with a trimmed file name, but the file cannot be read in.</p>
<pre><code>import pandas as pd
import glob
import os
with open('BarcodeList.txt','r') as WaferList:
lines = WaferList.read().splitlines()
FileList = os.listdir('//FolderThatContainsFiles')
df = []
for file in FileList:
for afile in lines:
if afile == file.split("_")[0][0:10]:
df = pd.read_csv(file)
### The "df" step above does not work ###
print('success')
### The success part works ####
</code></pre>
<p>I expect the df step above to read the csv of matched file, but instead I receive this message:</p>
<p><code>FileNotFoundError: File b'07001382A7044_summary.csv' does not exist</code></p>
|
<p>You need to give pandas the file path as well as the file name; try </p>
<pre class="lang-py prettyprint-override"><code>df = pd.read_csv(os.path.join('//FolderThatContainsFiles', file))
</code></pre>
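<p>Since you already import <code>glob</code>, another sketch is to let it build the full paths for you (the <code>*.csv</code> pattern below is an assumption about your folder contents):</p>
<pre class="lang-py prettyprint-override"><code>for path in glob.glob('//FolderThatContainsFiles/*.csv'):
    name = os.path.basename(path).split("_")[0][0:10]
    if name in lines:
        df = pd.read_csv(path)
        print('success')
</code></pre>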
|
python|pandas|dataframe|string-matching
| 0
|
6,646
| 55,217,032
|
for loop and adding additional columns groupby pandas dataframe in Python
|
<p>The code below is my original approach.</p>
<pre><code>import pandas as pd
data = {'id':[1001,1001,1001,1001,1001,1001,1001,1001,1002,1002,1002,1002,1002,1002,1002,1002],
'name':['Tom', 'Tom', 'Tom', 'Tom','Tom', 'Tom', 'Tom', 'Tom','Jack','Jack','Jack','Jack','Jack','Jack','Jack','Jack'],
'team':['A','A', 'B', 'B', 'C','C', 'D', 'D','A','A', 'B', 'B', 'C','C', 'D', 'D',],
'year':[2011,2011,2012,2012,2013,2013,2014,2014,2011,2011,2012,2012,2013,2013,2014,2014],
'avg':[0.500,0.400,0.300,0.200,0.100,0.200,0.300,0.400,0.500,0.400,0.300,0.200,0.100,0.200,0.300,0.400]}
df = pd.DataFrame(data)
print (df)
team_names = [c for c in df['team'].value_counts().index]
team_names
for i in team_names:
df[i+'_vs_avg_2011'] = df.loc[(df['team']==i)&(df['year']==2011)].groupby(['id','name'])['avg'].transform('mean')
df[i+'_vs_avg_2012'] = df.loc[(df['team']==i)&(df['year']==2012)].groupby(['id','name'])['avg'].transform('mean')
df[i+'_vs_avg_2013'] = df.loc[(df['team']==i)&(df['year']==2013)].groupby(['id','name'])['avg'].transform('mean')
df[i+'_vs_avg_2014'] = df.loc[(df['team']==i)&(df['year']==2014)].groupby(['id','name'])['avg'].transform('mean')
print(i)
</code></pre>
<p>For the loop part
I tried</p>
<pre><code>years_from_to = [str(i).zfill(2) for i in range(2011,2014)]
years_from_to
for i,j in team_names, years_from_to:
df[i+'_vs_avg_'+j] = df.loc[(df['team']==i)&(df['year']==j)].groupby(['id','name'])['avg'].transform('mean')
print(i)
</code></pre>
<p>ValueError: too many values to unpack (expected 2)</p>
<p>Is there a way to simplify this or fix this code?</p>
|
<p>I think you can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a> instead of loops, flattening the columns in the <code>MultiIndex</code> and then using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a> to join back to the original <code>DataFrame</code>:</p>
<pre><code>df1 = df.pivot_table(index=['id','name'],columns=['team','year'],values='avg', aggfunc='mean')
df1.columns = [f'{a}_vs_avg_{b}' for a, b in df1.columns]
print (df1)
A_vs_avg_2011 B_vs_avg_2012 C_vs_avg_2013 D_vs_avg_2014
id name
1001 Tom 0.45 0.25 0.15 0.35
1002 Jack 0.45 0.25 0.15 0.35
df = df.join(df1, on=['id','name'])
print (df)
</code></pre>
|
python|pandas|loops|dataframe|for-loop
| 4
|
6,647
| 55,503,250
|
How to iterate through rows and query specific fields based on the contents of other fields in the same row?
|
<p>I've generated a tabular dataset and I'm trying to query it and generate a report in xls format using python. I've got the data into a pandas dataframe and have all the potentially relevant fields lined up in the right order. Now I need to select rows where specific columns meet specific criteria. If those criteria are met, I want to include a specific field from the same row. I'm just a bit lost on how I can query this dataset further.</p>
<pre><code> columns = ['type_1','price_1','color_1','type_2','price_2','color_2','type_3','price_3','color_3' ]
1'Car','300','Grey','None','None','None','Truck','500','blue'
2'Van','250','White','Car','300','Green','Car','350','Black'
3'None','None','None','None','None','None','None','None','None'
4'None','None','None','Car','600','Yellow','None','None','None'
5'Van','250','White','Car','300','Green','Van','250','White'
</code></pre>
<p>I want to query this dataset to output cars, and if cars, then include the price and color. So how can I iterate through the rows above and generate the output below using pandas?</p>
<pre><code> 'Car','300','Grey'
'Car','300','Green''Car','350','Black'
'Car','600','Yellow'
'Car','300','Green'
</code></pre>
<p>I get the feeling there are two approaches: query the dataset, peeling away rows where the desired conditions are not met, i.e.:</p>
<pre><code> df[df.type_1 != 'Car' OR df.type_2 != 'Car' OR df.type_3 != 'Car']
</code></pre>
<p>or create a new dataframe and write/append to it when the condition is met. </p>
<p>I'm brand new to pandas and it's functions, so a bit of guidance would be greatly appreciated!</p>
|
<p>This takes several steps.</p>
<ol>
<li>Filter columns based on <code>_1</code>, <code>_2</code> and <code>_3</code></li>
<li>Loop through dataframes and filter <code>type</code> columns on <code>'Car'</code></li>
<li><code>Concat</code> the dataframes together to one final dataframe </li>
</ol>
<p>btw: I think your expected output should be adjusted a bit to make sense:</p>
<pre><code>df = df.replace("'None'", np.NaN)
df1 = df[df.filter(like='_1').columns]
df2 = df[df.filter(like='_2').columns]
df3 = df[df.filter(like='_3').columns]
dfs = [df1, df2, df3]
dfs_new = []
cntr = 1
for d in dfs:
dfs_new.append(d[d['type_'+str(cntr)] == "'Car'"])
cntr += 1
print(pd.concat(dfs_new, axis=1).fillna(''))
type_1 price_1 color_1 type_2 price_2 color_2 type_3 price_3 color_3
0 'Car' '300' 'Grey'
1 'Car' '300' 'Green' 'Car' '350' 'Black'
3 'Car' '600' 'Yellow'
4 'Car' '300' 'Green'
</code></pre>
<p><strong>Note</strong> I can get your expected output exactly the same, but that would not make sense since <code>type_1</code> and <code>type_2</code> cars are on top of each other, while <code>type_3</code> has its own column. </p>
|
python|pandas|numpy
| 1
|
6,648
| 55,405,948
|
Processing each row in column
|
<ol>
<li>I'm trying to go through each row in column 'birth' </li>
<li>Check if the last part of the string separated by "," ends in two characters
2.a. If it does, I will append "US" to it.</li>
</ol>
<p>So, "Los Angeles, Ca" would be "Los Angeles, Ca, US"
And "Bisacquino, Sicily, Italy" would stay the same</p>
<p>I want to process this in a function. </p>
<p>I've tried this, but when checking the length of birthStr it gives me the length of all the rows:</p>
<pre><code>for row in subset.itertuples():
birthStr= subset['birth'].str.rsplit(",", 1).str[-1]
if len(birthStr) ==2:
subset.birth = birthStr + "," + "US"
</code></pre>
|
<p>We can use the <code>str</code> methods provided by <code>pandas</code> to solve for this. Let's use the following dataframe that I define below.</p>
<pre><code>print(df)
place
0 Los Angeles, Ca
1 Bisacquino, Sicily, Italy
2 New York, NY
condition = df.place.str.split(',').str[-1].str.strip().str.len() == 2
df.loc[condition, 'place'] = df.place + ', US'
print(df)
place
0 Los Angeles, Ca, US
1 Bisacquino, Sicily, Italy
2 New York, NY, US
</code></pre>
|
python|pandas|bigdata
| 0
|
6,649
| 56,863,136
|
Get Percent change where axis equals columns in python pandas?
|
<p>I have the following dataset:</p>
<pre><code>import pandas as pd
w = pd.Series(['EY', 'EY', 'EY', 'KPMG', 'KPMG', 'KPMG', 'BAIN', 'BAIN', 'BAIN'])
x = pd.Series([2020,2019,2018,2020,2019,2018,2020,2019,2018])
y = pd.Series([100000, 500000, 1000000, 50000, 100000, 40000, 1000, 500, 4000])
z = pd.Series([10000, 10000, 20000, 25000, 50000, 10000, 100000, 50500, 120000])
df = pd.DataFrame({'consultant': w, 'fiscal_year':x, 'actual_cost':y, 'budgeted_cost':z})
indexer_consultant_fy = ['consultant', 'fiscal_year']
df = df.set_index(indexer_consultant_fy).sort_index(ascending=True)
df['actual_budget_pct_diff'] = df.pct_change(axis='columns',fill_method='ffill')['budgeted_cost']
</code></pre>
<p>How can I get actual_cost and budgeted_cost to switch within the last line of code without switching the columns in the dataframe? </p>
<p>The result should be that when the actual_cost is higher than the <strong>budgeted_cost</strong>, the <strong>actual_budget_pct_diff</strong> will be a <strong>positive number</strong>. Thanks all! </p>
|
<p>just specify <code>periods=-1</code> and pick column <code>[actual_cost]</code> as follows:</p>
<pre><code>df['actual_budget_pct_diff'] = df.pct_change(periods=-1, axis='columns',fill_method='ffill')['actual_cost']
Out[160]:
actual_cost budgeted_cost actual_budget_pct_diff
consultant fiscal_year
BAIN 2018 4000 120000 -0.966667
2019 500 50500 -0.990099
2020 1000 100000 -0.990000
EY 2018 1000000 20000 49.000000
2019 500000 10000 49.000000
2020 100000 10000 9.000000
KPMG 2018 40000 10000 3.000000
2019 100000 50000 1.000000
2020 50000 25000 1.000000
</code></pre>
|
python|python-3.x|pandas|group-by
| 3
|
6,650
| 56,546,766
|
Python pandas: How to alter one column that is (complicatedly) based on another?
|
<p>I have booking data where a new row is inserted whenever a customer initiates, changes, deletes or reactivates an order. "delivered" shows if the product was actually delivered, which is generally the case if the order is not deleted in the last update.</p>
<p>Here is some sample code:</p>
<pre><code>df = pd.DataFrame(
{
"booking id": [1,1,1,2,2,2,3,3,4,4,4],
"booking type": ["initiation", "change", "change", "initiation", "change", "deletion", "reactivation", "change", "initiation", "change", "deletion"],
"delivered": ["yes", "yes", "yes", "yes", "yes", "yes", "yes", "yes", "no", "no", "no"]
}
)
</code></pre>
<p><a href="https://i.stack.imgur.com/NnY1s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NnY1s.png" alt="enter image description here" /></a></p>
<p>Some of the data is incorrect. If the last update (last row of a booking id) has <code>booking type == deletion</code>, all rows of this booking id should have <code>delivered = no</code>.</p>
<p>In this example, I'm looking for this:</p>
<pre><code>df = pd.DataFrame(
{
"booking id": [1,1,1,2,2,2,3,3,4,4,4],
"booking type": ["initiation", "change", "change", "initiation", "change", "deletion", "reactivation", "change", "initiation", "change", "deletion"],
"delivered": ["yes", "yes", "yes", "no", "no", "no", "yes", "yes", "no", "no", "no"]
}
)
</code></pre>
<p><a href="https://i.stack.imgur.com/XHZTP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XHZTP.png" alt="enter image description here" /></a></p>
<p>How do I do this? Thanks a lot!</p>
|
<p>Here's one approach using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>GroupBy</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html" rel="nofollow noreferrer"><code>DataFrame.where</code></a>:</p>
<pre><code>df.loc[:, 'delivered'] = df.where(df.groupby('booking id')['booking type']
.transform('last')
.ne('deletion'), 'no')
booking id booking type delivered
0 1 initiation yes
1 1 change yes
2 1 change yes
3 2 initiation no
4 2 change no
5 2 deletion no
6 3 reactivation yes
7 3 change yes
8 4 initiation no
9 4 change no
10 4 deletion no
</code></pre>
|
python|pandas|group-by
| 2
|
6,651
| 56,738,233
|
API call While loop not doing all lines of csv
|
<p>The while loop doesn't send all line parameters before stopping; it finishes after the second pass, and sometimes directly after the first pass.</p>
<p>I have tried reducing the number of steps, but it makes little difference.</p>
<pre><code>import requests
import rek #request script
import time
import pandas as pd
dfbig= pd.read_csv ('C:\\Users\\libraries2.csv')
step = 19
init = 0; final = init + step
while (final<= dfbig.shape[0]):
df = dfbig.iloc [ init : final ]
rek.req (df , init, final)
init = final
final = init + step
time.sleep (60)
</code></pre>
<p>It just stops before completing all lines of the csv.</p>
|
<p>Your code increments <code>final</code> at the end of the loop. After the 2nd iteration <code>final</code> exceeds <code>dfbig.shape[0]</code> (i.e. 19+19+19 <= 43 is false), ending the loop. This has the effect of skipping over the last 5 rows.</p>
<p>To resolve:</p>
<pre><code>step = 19
init = 0
final = init + step
while init < dfbig.shape[0]:
df = dfbig.iloc[init:final]
rek.req (df , init, final)
init = final
final = init + step
</code></pre>
<p>Building on this <a href="https://stackoverflow.com/a/312464/110772">chunking idea</a> you could do this to simplify: </p>
<pre><code>rows = dfbig.shape[0]
for i in range(0, rows, step):
df = dfbig.iloc[i:i + step]
rek.req(df, i, i + step)
</code></pre>
|
python|pandas|python-2.7
| 0
|
6,652
| 56,531,463
|
Find partial matches of words in pandas column of URLs, directly after https://
|
<p>Basically, I have a dataframe where one column is a list of names, and the other is associated URLs that are related to the name in some way (sample df):</p>
<pre><code> Name Domain
'Apple Inc' 'https://mapquest.com/askjdnas387y1/apple-inc', 'https://linkedin.com/apple-inc/askjdnas387y1/', 'https://www.apple-inc.com/asdkjsad542/'
'Aperture Industries' 'https://www.cakewasdelicious.com/aperture/run-away/', 'https://aperture-incorporated.com/aperture/', 'https://www.buzzfeed.com/aperture/the-top-ten-most-evil-companies=will-shock-you/'
'Umbrella Corp' 'https://www.umbrella-corp.org/were-not-evil/', 'https://umbrella.org/experiment-death/', 'https://www.most-evil.org/umbrella-corps/'
</code></pre>
<p>I'm trying to find the URLs that have the keyword or at least a partial match to the keyword directly AFTER either:</p>
<pre><code>'https://NAME.whateverthispartdoesntmatter' # ...or...
'https://www.NAME.whateverthispartdoesntmatter' # <- not a real link
</code></pre>
<p>Right now I'm using <code>fuzzywuzzy</code> package to gain the partial matches:</p>
<pre><code>fuzz.token_set_ratio(name, value)
</code></pre>
<p>It works great for partial matching; however, the matches aren't location-dependent, so I'll get a perfect keyword match that's located somewhere in the middle of the URL, which isn't what I need, like:</p>
<pre><code>https://www.bloomberg.com/profiles/companies/aperture-inc/0117091D
</code></pre>
|
<h3>Using <code>explode/unnest string</code>, <code>str.extract</code> & <code>fuzzywuzzy</code></h3>
<p>First we will unnest your string to rows using <a href="https://stackoverflow.com/a/51752361/9081267">this</a> function:</p>
<pre><code>df = explode_str(df, 'Domain', ',').reset_index(drop=True)
</code></pre>
<p>Then we use regular expressions to find the two patterns with or without the <code>www</code> and extract the names from them:</p>
<pre><code>m = df['Domain'].str.extract(r'https://www\.(.*)\.|https://(.*)\.')
df['M'] = m[0].fillna(m[1])
print(df)
Name Domain M
0 Apple Inc https://mapquest.com/askjdnas387y1/apple-inc mapquest
1 Apple Inc https://linkedin.com/apple-inc/askjdnas387y1/ linkedin
2 Apple Inc https://www.apple-inc.com/asdkjsad542/ apple-inc
3 Aperture Industries https://www.cakewasdelicious.com/aperture/run-... cakewasdelicious
4 Aperture Industries https://aperture-incorporated.com/aperture/ aperture-incorporated
5 Aperture Industries https://www.buzzfeed.com/aperture/the-top-ten... buzzfeed
6 Umbrella Corp https://www.umbrella-corp.org/were-not-evil/ umbrella-corp
7 Umbrella Corp https://umbrella.org/experiment-death/ umbrella
8 Umbrella Corp https://www.most-evil.org/umbrella-corps/ most-evil
</code></pre>
<p>Then we use <code>fuzzywuzzy</code> to filter the rows with a higher match than <code>80</code>:</p>
<pre><code>from fuzzywuzzy import fuzz
m2 = df.apply(lambda x: fuzz.token_sort_ratio(x['Name'], x['M']), axis=1)
df[m2>80]
</code></pre>
<hr>
<pre><code>
Name Domain M
2 Apple Inc https://www.apple-inc.com/asdkjsad542/ apple-inc
6 Umbrella Corp https://www.umbrella-corp.org/were-not-evil/ umbrella-corp
</code></pre>
<hr>
<p><strong>Note</strong> I used <code>token_sort_ratio</code> instead of <code>token_set_ratio</code> to catch the <code>umbrella</code> and <code>umbrella-corp</code> difference</p>
<hr>
<p>Function used from linked answer:</p>
<pre><code>def explode_str(df, col, sep):
s = df[col]
i = np.arange(len(s)).repeat(s.str.count(sep) + 1)
return df.iloc[i].assign(**{col: sep.join(s).split(sep)})
</code></pre>
|
python|pandas
| 1
|
6,653
| 67,063,843
|
Non-Identical results from String Identifier and Actual Class names for activations, loss functions, and metrics
|
<p>I have the following keras model that is working fine:</p>
<pre><code>model = tf.keras.Sequential(
[
#first convolution
tf.keras.layers.Conv2D(16, (3,3), activation="relu",
input_shape=(IMAGE_SIZE,IMAGE_SIZE,3)),
tf.keras.layers.MaxPooling2D(2,2),
#second convolution
tf.keras.layers.Conv2D(32, (3,3), activation="relu"),
tf.keras.layers.MaxPooling2D(2,2),
#third convolution
tf.keras.layers.Conv2D(64, (3,3), activation="relu"),
tf.keras.layers.MaxPooling2D(2,2),
#flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation="relu"),
#only 1 neuron, as its a binary classification problem
tf.keras.layers.Dense(1, activation="sigmoid")
]
)
model.compile(optimizer = tf.keras.optimizers.RMSprop(lr=0.001),
loss="binary_crossentropy",
metrics=["acc"])
history = model.fit_generator(train_generator, epochs=15,
steps_per_epoch=100, validation_data = validation_generator,
validation_steps=50, verbose=1)
</code></pre>
<p>However, when attempting to replace the magic strings, and use actual class names for activations, loss functions, and metrics, I have the following model, which compiles fine, but accuracy is always 0. This model is behaving differently than the one above, with everything else remaining the same. Here is the new model:</p>
<pre><code>model = tf.keras.Sequential(
[
#first convolution
tf.keras.layers.Conv2D(16, (3,3), activation=tf.keras.activations.relu,
input_shape=(IMAGE_SIZE,IMAGE_SIZE,3)),
tf.keras.layers.MaxPooling2D(2,2),
#second convolution
tf.keras.layers.Conv2D(32, (3,3), activation=tf.keras.activations.relu),
tf.keras.layers.MaxPooling2D(2,2),
#third convolution
tf.keras.layers.Conv2D(64, (3,3), activation=tf.keras.activations.relu),
tf.keras.layers.MaxPooling2D(2,2),
#flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation=tf.keras.activations.relu),
#only 1 neuron, as its a binary classification problem
tf.keras.layers.Dense(1, activation=tf.keras.activations.sigmoid)
]
)
model.compile(optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001),
loss=tf.keras.losses.BinaryCrossentropy(),
metrics=[tf.keras.metrics.Accuracy()])
history = model.fit_generator(train_generator, epochs=15,
steps_per_epoch=100, validation_data = validation_generator,
validation_steps=50, verbose=1)
</code></pre>
<p>I am guessing I have made a mistake in replacing the magic strings with class names, but I can't spot the mistake. Any recommendations?</p>
|
<p>When we set <strong>String Identifier</strong> for accuracy as <code>['acc']</code> or <code>['accuracy']</code>, the program will choose the relevant metrics for our problem, like whether it's binary or categorical type. But when we set the actual class name, we need to be a bit more specific. So, in your case, you need to change your metrics from</p>
<pre><code>tf.keras.metrics.Accuracy()
</code></pre>
<p>to</p>
<pre><code>tf.keras.metrics.BinaryAccuracy()
</code></pre>
<p>Read the content of each from <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy" rel="nofollow noreferrer"><code>.Accuracy()</code></a>, and <a href="https://www.tensorflow.org/api_docs/python/tf/keras/metrics/BinaryAccuracy" rel="nofollow noreferrer"><code>.BinaryAccuracy()</code></a></p>
<hr />
<p>Here is one dummy example to reproduce the issue and solution for a complete reference.</p>
<pre><code>import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout

# Generate dummy data
np.random.seed(10)
x_train = np.random.random((1000, 20))
y_train = np.random.randint(2, size=(1000, 1))
x_test = np.random.random((800, 20))
y_test = np.random.randint(2, size=(800, 1))
# model
model = Sequential()
model.add(Dense(64, input_dim=20, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
# compile and run
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(x_train, y_train,
epochs=10, verbose=2,
batch_size=128, validation_data=(x_test, y_test))
</code></pre>
<p>With <strong>String Identifier</strong>, it will run OK. But if you change metrics to <code>.Accuracy()</code>, it will give you zero scores for both the train and validation part. To solve it, you need to set <code>.BinaryAccuracy()</code>, then things will run as expected.</p>
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
6,654
| 67,154,091
|
Float 16 quantized Tflite model not working for custom models?
|
<p>I have a custom tensorflow model which I converted into tflite using the Float16 quantization as mentioned <a href="https://www.tensorflow.org/lite/performance/post_training_quantization#float16_quantization" rel="nofollow noreferrer">here</a>.<br />
But the the input details of the tflite model using the tflite interpreter are</p>
<pre><code>[{'name': 'input_1',
'index': 0,
'shape': array([ 1, 256, 256, 3], dtype=int32),
'shape_signature': array([ -1, 256, 256, 3], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
</code></pre>
<p>while the output details are</p>
<pre><code>[{'name': 'Identity',
'index': 636,
'shape': array([ 7, 1, 256, 256, 1], dtype=int32),
'shape_signature': array([ 7, -1, 256, 256, 1], dtype=int32),
'dtype': numpy.float32,
'quantization': (0.0, 0),
'quantization_parameters': {'scales': array([], dtype=float32),
'zero_points': array([], dtype=int32),
'quantized_dimension': 0},
'sparsity_parameters': {}}]
</code></pre>
<p>Is something wrong with the conversion ?</p>
<p>I also received this warning while converting the tf model to tflite</p>
<pre><code>WARNING:absl:Found untraced functions such as _defun_call, _defun_call, _defun_call, _defun_call, _defun_call while saving (showing 5 of 63). These functions will not be directly callable after loading.
WARNING:absl:Found untraced functions such as _defun_call, _defun_call, _defun_call, _defun_call, _defun_call while saving (showing 5 of 63). These functions will not be directly callable after loading.
</code></pre>
<p><strong>P.S</strong> I also tried doing <a href="https://www.tensorflow.org/lite/performance/post_training_quantization#dynamic_range_quantization" rel="nofollow noreferrer">this</a> quantization, but received the same input/output details for that tflite model.</p>
|
<p>Float16 and dynamic-range post-training quantization do not modify the input/output tensors; they only quantize the intermediate weight tensors. The above behavior looks like intended behavior. If you want to fully quantize your model, try full integer quantization. You can find the details <a href="https://www.tensorflow.org/lite/performance/post_training_quantization#full_integer_quantization" rel="nofollow noreferrer">here</a>.</p>
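<p>For reference, here is a minimal full-integer conversion sketch adapted from that guide; the saved-model path and the representative dataset below are placeholders you would replace with your own model and data:</p>
<pre><code>import numpy as np
import tensorflow as tf

# Placeholder representative dataset: yield a few samples shaped like the model input.
def representative_dataset():
    for _ in range(100):
        yield [np.random.rand(1, 256, 256, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")  # placeholder path
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only ops and integer input/output tensors.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
</code></pre>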
|
python|tensorflow|tensorflow-lite
| 0
|
6,655
| 66,861,301
|
Set cross section of pandas MultiIndex to DataFrame from addition of other cross sections
|
<p>I am currently trying to assign rows with certain indices based on the other indices within the group.</p>
<p>Consider the following pandas data frame:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
index = pd.MultiIndex.from_product([list('abc'), ['aa', 'bb', 'cc']])
df = pd.DataFrame({'col1': np.arange(9),
'col2': np.arange(9, 18),
'col3': np.arange(18,27)},
index=index)
</code></pre>
<p>Output of <code>df</code>:</p>
<pre><code> col1 col2 col3
a aa 0 9 18
bb 1 10 19
cc 2 11 20
b aa 3 12 21
bb 4 13 22
cc 5 14 23
c aa 6 15 24
bb 7 16 25
cc 8 17 26
</code></pre>
<p>I want to assign the indices 'cc' equal to 'aa' plus 'bb' per the first level of indices.</p>
<p>The following works fine, but I am wondering if there is a way to set values without having to reference the underlying NumPy array.</p>
<pre class="lang-py prettyprint-override"><code>df.loc[pd.IndexSlice[:, 'cc'], :] = (df.xs('aa', level=1)
+ df.xs('bb', level=1)).values
</code></pre>
<p>Is there a way to set the 'cc' rows directly to the output below? I believe the issue with trying to set the below directly is due to an index mismatch. Can I get around this somehow?</p>
<pre class="lang-py prettyprint-override"><code>df.xs('aa', level=1) + df.xs('bb', level=1)
</code></pre>
|
<p><strong>Update</strong></p>
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>pandas.DataFrame.iloc</code></a></p>
<pre><code>df.iloc[df.index.get_level_values(1)=='cc'] = df.xs('aa', level=1) + df.xs('bb', level=1)
</code></pre>
<hr />
<p><strong>Old answer</strong></p>
<p>You can do this:</p>
<pre><code>df[df.index.get_level_values(1)=='cc'] = df.xs('aa', level=1) + df.xs('bb', level=1)
</code></pre>
<p>Disclaimer: it works for pandas version 1.2.1 and it doesn't work in pandas 1.2.3. I haven't tested any other version</p>
|
python|pandas|dataframe|numpy|multi-index
| 1
|
6,656
| 67,012,645
|
Join two dataframes by the closest datetime
|
<p>I have two dataframes <code>df_A</code> and <code>df_B</code> where each has date, time and a value. An example below:</p>
<pre><code>import pandas as pd
df_A = pd.DataFrame({
'date_A': ["2021-02-01", "2021-02-01", "2021-02-02"],
'time_A': ["22:00:00", "23:00:00", "00:00:00"],
'val_A': [100, 200, 300]})
df_B = pd.DataFrame({
'date_B': ["2021-02-01", "2021-02-01", "2021-02-01", "2021-02-01", "2021-02-02"],
'time_B': ["22:01:12", "22:59:34", "23:00:17", "23:59:57", "00:00:11"],
'val_B': [104, 203, 195, 296, 294]})
</code></pre>
<p>I need to join this dataframes but date and time never match. So I want a left join by the closest datetime from <code>df_B</code> to <code>df_A</code>. So the output should be:</p>
<pre><code>df_out = pd.DataFrame({
'date_A': ["2021-02-01", "2021-02-01", "2021-02-02"],
'time_A': ["22:00:00", "23:00:00", "00:00:00"],
'val_A': [100, 200, 300],
'date_B': ["2021-02-01", "2021-02-01", "2021-02-01"],
'time_B': ["22:01:12", "23:00:17", "23:59:57"],
'val_B': [104, 195, 296]})
</code></pre>
<p><a href="https://i.stack.imgur.com/IiMld.png" rel="nofollow noreferrer">df_out</a></p>
<p><a href="https://i.stack.imgur.com/IiMld.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IiMld.png" alt="df_out" /></a></p>
|
<p>Pandas has a handy <code>merge_asof()</code> function for these types of problems (<a href="https://pandas.pydata.org/docs/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.merge_asof.html</a>)</p>
<p>It requires a single key to merge on, so you can create a single date-time column in each dataframe and perform the merge:</p>
<pre><code>df_A['date_time'] = pd.to_datetime(df_A.date_A + " " + df_A.time_A)
df_B['date_time'] = pd.to_datetime(df_B.date_B + " " + df_B.time_B)
# Sort the two dataframes by the new key, as required by merge_asof function
df_A.sort_values(by="date_time", inplace=True, ignore_index=True)
df_B.sort_values(by="date_time", inplace=True, ignore_index=True)
result_df = pd.merge_asof(df_A, df_B, on="date_time", direction="nearest")
</code></pre>
<p>Note the direction argument's value is "nearest" as you requested. There are other values you can choose, like "backward" and "forward".</p>
|
python|pandas|datetime|left-join
| 2
|
6,657
| 66,986,109
|
Why is Python loading numpy 1.20.0 in my dask env when conda says 1.20.1 is installed?
|
<p>Why is Python loading numpy 1.20.0 in my dask env when conda says 1.20.1 is installed?</p>
<pre><code>(dask) ➜ dask: conda list -n dask | grep numpy
numpy 1.20.1 py38h18fd61f_0 conda-forge
(dask) ➜ dask: python
Python 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
>>> numpy.version.full_version
'1.20.0'
</code></pre>
|
<p>This might happen because by default, python packages installed in the so-called user site (<code>pip install --user package</code> or in newer pip versions automatically when the prefix directory is not writable for the current user) are still considered first in the <code>PYTHONPATH</code>.</p>
<p>You can deactivate the user site using three methods:</p>
<ul>
<li><p><code>-s</code> command line option for python: <code>python -s -c 'import numpy; print(numpy.__version__)'</code></p>
</li>
<li><p>the <code>PYTHONNOUSERSITE</code> environment variable: <code>export PYTHONNOUSERSITE=1</code> (the actual value does not matter, just that it is set) and then <code>python -c 'import numpy; print(numpy.__version__)'</code></p>
</li>
<li><p>In your script, before importing anything, do</p>
<pre><code>import site
site.ENABLE_USER_SITE = False
</code></pre>
</li>
</ul>
<p>I would go for one of the first two, things like these shouldn't be inside scripts but solved by the environment.</p>
<p>To get the location of your user site, you can do:</p>
<pre><code>python -c 'import site; print(site.USER_SITE)'
</code></pre>
|
python-3.x|numpy|conda
| 1
|
6,658
| 66,812,693
|
ValueError: cannot reshape array of size 4096 into shape (64,64,3)
|
<p>Hi stack overflow friends :)<br />
I trained a CNN model for predicting number images.<br />
It trained successfully, and I want to predict the numbers in an image that contains several numbers.<br />
But I got an error.<br />
I'll show you my dataset and some of my code.</p>
<p><strong>Dataset.py</strong></p>
<pre><code>batch_size = 32
img_height = 64
img_width = 64
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
data_dir,
validation_split=0.2,
subset="training",
seed=123,
image_size=(img_height, img_width),
batch_size=batch_size)
</code></pre>
<p><strong>Model.py</strong></p>
<pre><code>num_classes = 10
model = Sequential([
layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),
layers.Conv2D(16, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(32, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Conv2D(64, 3, padding='same', activation='relu'),
layers.MaxPooling2D(),
layers.Flatten(),
layers.Dense(128, activation='relu'),
layers.Dense(num_classes)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
</code></pre>
<p><strong>Predict Several Numbers.py</strong></p>
<pre><code>import cv2 as cv
import numpy as np
from tensorflow.keras.models import load_model
img_color = cv.imread('mypath/test.PNG', cv.IMREAD_COLOR)
img_gray = cv.cvtColor(img_color, cv.COLOR_BGR2GRAY)
ret,img_binary = cv.threshold(img_gray, 0, 255, cv.THRESH_BINARY_INV | cv.THRESH_OTSU)
kernel = cv.getStructuringElement( cv.MORPH_RECT, ( 5, 5 ) )
img_binary = cv.morphologyEx(img_binary, cv. MORPH_CLOSE, kernel)
contours, hierarchy = cv.findContours(img_binary, cv.RETR_EXTERNAL,
cv.CHAIN_APPROX_SIMPLE)
for contour in contours:
x, y, w, h = cv.boundingRect(contour)
length = max(w, h) + 60
img_digit = np.zeros((length, length, 1),np.uint8)
new_x,new_y = x-(length - w)//2, y-(length - h)//2
img_digit = img_binary[new_y:new_y+length, new_x:new_x+length]
model = load_model('mypath/saved_model.h5')
img_digit = cv.resize(img_digit,(64,64), interpolation=cv.INTER_AREA)
img_digit = img_digit / 255.0
img_input = np.array(img_digit).reshape(-1,64,64,3)
predictions = model.predict(img_input)
number = np.argmax(predictions)
print(number)
cv.rectangle(img_color, (x, y), (x+w, y+h), (255, 255, 0), 2)
location = (x + int(w *0.5), y - 10)
font = cv.FONT_HERSHEY_COMPLEX
fontScale = 1.2
cv.putText(img_color, str(number), location, font, fontScale, (0,255,0), 2)
cv.imshow('digit', img_digit)
cv.waitKey(0)
</code></pre>
<p>I think the code should be correct, but I got an error :(</p>
<blockquote>
<p>ValueError: cannot reshape array of size 4096 into shape (64,64,3)</p>
</blockquote>
<p>Why am I getting this error? Help me!!!</p>
|
<p>Your problem is that you are declaring <code>img_digit</code> to be a 2D array but reshaping it to 3D (3 channels). Also note that your <code>img_binary</code> is also a single-channel (2D) image. All you need to change is to keep working with grayscale:</p>
<pre><code>img_input = np.array(img_digit).reshape(1,64,64,1)
</code></pre>
<p>This answer however assumes that you have trained your model using a 64x64 input.</p>
|
tensorflow|opencv|conv-neural-network|reshape
| 0
|
6,659
| 47,161,842
|
How to merge two pandas dataframes on column of sets
|
<p>I have columns in two dataframes representing interacting partners in a biological system, so if gene_A interacts with gene_B, the entry in column 'gene_pair' would be {gene_A, gene_B}. I want to do an inner join, but trying:</p>
<pre><code>pd.merge(df1, df2, how='inner', on=['gene_pair'])
</code></pre>
<p>throws the error</p>
<pre><code>TypeError: type object argument after * must be a sequence, not itertools.imap
</code></pre>
<p>I need to merge on the unordered pair, so as far as I can tell I can't merge on a combination of two individual columns with gene names. Is there another way to achieve this merge?</p>
<p>Some example dfs:</p>
<pre><code>gene_pairs1 = [
set(['gene_A','gene_B']),
set(['gene_A','gene_C']),
set(['gene_D','gene_A'])
]
df1 = pd.DataFrame({'r_name': ['r1','r2','r3'], 'gene_pair': gene_pairs1})
gene_pairs2 = [
set(['gene_A','gene_B']),
set(['gene_F','gene_A']),
set(['gene_C','gene_A'])
]
df2 = pd.DataFrame({'function': ['f1','f2','f3'], 'gene_pair': gene_pairs2})
pd.merge(df1,df2,how='inner',on=['gene_pair'])
</code></pre>
<p>and I would like entry 'r1' to line up with 'f1' and 'r2' to line up with 'f3'.</p>
|
<p>Pretty simple in the end: I used frozenset, rather than set.</p>
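<p>A minimal sketch of that approach, reusing the example frames from the question (frozenset is hashable, so it can serve directly as a merge key):</p>
<pre><code>import pandas as pd

df1['gene_pair'] = df1['gene_pair'].apply(frozenset)
df2['gene_pair'] = df2['gene_pair'].apply(frozenset)

merged = pd.merge(df1, df2, how='inner', on='gene_pair')
# 'r1' lines up with 'f1' and 'r2' lines up with 'f3', as desired
</code></pre>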
|
python|pandas|dataframe
| 1
|
6,660
| 68,290,921
|
Google Colab No Such File or Directory Error
|
<p>Hello, I'm trying to start a TensorFlow training process on Google Colab by running this code block in the integrated notebook. The code block is:</p>
<pre><code>!apt-get install protobuf-compiler python-pil python-lxml python-tk
!pip install Cython
%cd '/content/gdrive/My Drive/models/research/'
!protoc object_detection/protos/*.proto --python_out=.
import os
os.environ['PYTHONPATH'] += ':/content/gdrive/My Drive/models/research/:/content/gdrive/My Drive/models/research/slim'
!python setup.py build
!python setup.py install
</code></pre>
<p>It gives this output:</p>
<pre><code>Reading package lists... Done
Building dependency tree
Reading state information... Done
protobuf-compiler is already the newest version (3.0.0-9.1ubuntu1).
python-lxml is already the newest version (4.2.1-1ubuntu0.4).
python-pil is already the newest version (5.1.0-1ubuntu0.6).
python-tk is already the newest version (2.7.17-1~18.04).
0 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
Requirement already satisfied: Cython in /usr/local/lib/python3.7/dist-packages (0.29.23)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
/content/gdrive/My Drive/models/research
python3: can't open file 'setup.py': [Errno 2] No such file or directory
python3: can't open file 'setup.py': [Errno 2] No such file or directory
</code></pre>
<p>I think I can't set the Python path correctly. Can anyone help me, please?</p>
|
<p>You have to first mount your Google drive:</p>
<pre><code>from google.colab import drive
drive.mount('/content/gdrive')
</code></pre>
<p>Try to do</p>
<pre><code>import os
os.listdir(os.getcwd())
</code></pre>
<p>This should return</p>
<pre><code>['.config', 'sample_data']
</code></pre>
<p>before you mount the drive, and</p>
<pre><code>['.config', 'gdrive', 'sample_data']
</code></pre>
<p>after you've mounted the drive.</p>
|
python|tensorflow|google-colaboratory
| 0
|
6,661
| 68,036,333
|
Pandas does not recognize NaN values on M1 macbook pro
|
<p>Pandas on m1 macbook pro is not recognizing NaN values even though NaN values apparently exist in the data.
<a href="https://i.stack.imgur.com/aWPPj.png" rel="nofollow noreferrer">The first picture shows the first 5 rows of the data, and you can apparently see that there are NaN values</a></p>
<p><a href="https://i.stack.imgur.com/p4KtW.jpg" rel="nofollow noreferrer">In the second picture, however, when I do data.info() which prints out Non-Null count of each column, it says that there is no null values for any column</a></p>
<p>When I run the same code on Windows and Linux (haven't tried it on macbooks without the m1 chip), they correctly recognize the NaN values.
I've tried it on two different M1-chip MacBooks, but got the same result.
Is anyone having the same problem, or does anyone know what might be causing this?
Thanks a lot in advance.</p>
|
<p>Try:</p>
<pre><code>data.info()
</code></pre>
<p>to see what type of variable they are in the dataframe. My guess is that nothing in that column is being recognised as a number.</p>
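<p>If the dtypes turn out to be <code>object</code> rather than numeric, one possible follow-up (a small sketch; the column name is a placeholder) is to coerce the columns so that unparseable entries become real <code>NaN</code> values:</p>
<pre><code>import pandas as pd

# 'some_column' is a placeholder; apply this to whichever columns should be numeric
data['some_column'] = pd.to_numeric(data['some_column'], errors='coerce')
data.info()              # the coerced column should now report the correct non-null count
print(data.isna().sum())
</code></pre>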
|
python|pandas|null|nan|apple-m1
| 0
|
6,662
| 68,365,142
|
Group by weekdays and plot graph panda
|
<pre><code>df1=df.groupby(['usertype','day']).size().reset_index(name='times')
</code></pre>
<p><a href="https://i.stack.imgur.com/69SfT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/69SfT.png" alt="enter image description here" /></a></p>
<p>How can I plot a bar graph grouped by day and user type?</p>
|
<p>I think you want this:</p>
<pre><code>days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
df.pivot(index="day", columns="usertype", values="times").reindex(days).plot.bar()
</code></pre>
<p><a href="https://i.stack.imgur.com/TRptY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TRptY.png" alt="Graph" /></a></p>
|
python|pandas|matplotlib
| 0
|
6,663
| 68,061,523
|
ValueError: Input 0 of layer sequential_7 is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1024)
|
<p>New Python developer here. I looked at other similar posts but I'm not able to get it right. I would appreciate any help.</p>
<pre><code>print('X_train:', X_train.shape)
print('y_train:', y_train1.shape)
print('X_test:', X_train.shape)
print('y_test:', y_train1.shape)
</code></pre>
<p>X_train: (42000, 32, 32)
y_train: (42000,)
X_test: (42000, 32, 32)
y_test: (42000,)</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
def featuremodel() :
model = Sequential()
model.add(Conv2D(32, kernel_size=4, activation='relu', input_shape=(X_train.shape[0],32,64)))
model.add(MaxPooling2D(pool_size=3))
model.add(Conv2D(64, kernel_size=4, activation='relu'))
model.add(Flatten())
model.add(Dense(len(y_train[0]), activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['acc'])
model.summary()
model.fit(X_train, y_train, epochs = 10, validation_data = (X_test,y_test))
  return model
</code></pre>
<p>ValueError: Input 0 of layer sequential_7 is incompatible with the layer: : expected min_ndim=4, found ndim=2. Full shape received: (None, 1024)</p>
|
<p>The input shape you have specified should be changed. Your input has 42000 samples and each one has <code>(32,32)</code> shape. You should not pass number of samples <code>(42000)</code> to input layer, and you should add a channel dimension. So the input shape should be <code>(32,32,1)</code>.</p>
<p>The modified code should be like this:</p>
<pre><code>import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
# test data
X_train = tf.random.uniform((42000,32,32))
y_train1 = tf.random.uniform((42000,))
X_train = tf.expand_dims(X_train, axis=3) #add channel axis (42000,32,32) => (42000,32,32,1)
model = Sequential()
model.add(Conv2D(32, kernel_size=4, activation='relu', input_shape=(32,32,1))) #change input shape
model.add(MaxPooling2D(pool_size=3))
model.add(Conv2D(64, kernel_size=4, activation='relu'))
model.add(Flatten())
#last layer should have output like your y data. in this case it should be 1, since you have 1 y for each sample
model.add(Dense(1, activation='softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['acc'])
model.summary()
model.fit(X_train, y_train1, epochs = 10) #, validation_data = (X_test,y_test))
</code></pre>
|
python-3.x|tensorflow|machine-learning|keras|tf.keras
| 1
|
6,664
| 59,453,037
|
Dask - Merge multiple columns into a single column
|
<p>I have a dask dataframe as below:</p>
<pre><code> Column1 Column2 Column3 Column4 Column5
0 a 1 2 3 4
1 a 3 4 5
2 b 6 7 8
3 c 7 7
</code></pre>
<p>I want to merge all of the columns into a single one efficiently. And I want each row to be a <strong>single string</strong>. Like below:</p>
<pre><code> Merged_Column
0 a,1,2,3,4
1 a,3,4,5
2 b,6,7,8
3 c,7,7,7
</code></pre>
<p>I've seen <a href="https://stackoverflow.com/questions/33098383/merge-multiple-column-values-into-one-column-in-python-pandas">this question</a>, but it doesn't seem efficient since it uses the apply function. How can I achieve this as efficiently as possible (speed + memory usage)? Or is apply not as problematic as I believe, since this is dask, not pandas?</p>
<p><strong>This is what I tried. It seems like it is working but I am worried about the speed of it with the big dataframe.</strong></p>
<pre><code>cols= df.columns
df['combined'] = df[cols].apply(func=(lambda row: ' '.join(row.values.astype(str))), axis=1, meta=('str'))
df = df.drop(cols, axis=1)
</code></pre>
<p>I also need to get rid of the column header.</p>
|
<p>When you have to join strings, @saravanan saminathan's method wins hands down. Here are some timings with <code>dask</code>:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd
import numpy as np
import pandas as pd
N = int(1e6)
df = pd.DataFrame(np.random.randint(0,100,[N,10]))
df = dd.from_pandas(df, npartitions=4)
df = df.astype("str")
df_bk = df.copy()
</code></pre>
<h1>Apply</h1>
<pre class="lang-py prettyprint-override"><code>%%time
df["comb"] = df.apply(lambda x:",".join(x), axis=1,meta=("str"))
df = df.compute()
CPU times: user 44.4 s, sys: 925 ms, total: 45.3 s
Wall time: 44.6 s
</code></pre>
<h1>Add (explicit)</h1>
<pre class="lang-py prettyprint-override"><code>df = df_bk.copy()
%%time
df["comb"] = df[0]+","+df[1]+","+df[2]+","+df[3]+","+df[4]+","+\
df[5]+","+df[6]+","+df[7]+","+df[8]+","+df[9]
df = df.compute()
CPU times: user 8.95 s, sys: 860 ms, total: 9.81 s
Wall time: 9.56 s
</code></pre>
<h1>Add (loop)</h1>
<p>In case you have many columns and you don't want to write down all of them</p>
<pre class="lang-py prettyprint-override"><code>df = df_bk.copy()
%%time
df["comb"] = ''
for col in df.columns:
df["comb"]+=df[col]+","
df = df.compute()
CPU times: user 11.6 s, sys: 1.32 s, total: 12.9 s
Wall time: 12.3 s
</code></pre>
|
python|pandas|dask
| 3
|
6,665
| 59,331,933
|
How to create clusters/groups from knowing associations?
|
<p>I have a dataframe that has 2 columns: [ID, ASSOCIATED_ID]
For each ID, I have a list of other associated IDs from the dataframe.
Here is a synthesized version of it:</p>
<pre><code>ID ASSOCIATED_ID
1 [2,3]
2 [1,4]
3 [1]
4 [2]
5 []
</code></pre>
<p>I want to create clusters (groups) of IDs that are associated with each other (not necessarily through a direct association; any transitive association counts). How can I do that programmatically?</p>
|
<p>IIUC, you can use networkx and connected_components:</p>
<pre><code>import networkx as nx

df_e = df.explode('ASSOCIATED_ID')
G = nx.from_pandas_edgelist(df_e, 'ID','ASSOCIATED_ID')
[i for i in nx.connected_components(G)]
</code></pre>
<p>Output:</p>
<pre><code>[{1, 2, 3, 4}, {nan, 5}]
</code></pre>
|
python|python-3.x|pandas|algorithm|cluster-analysis
| 2
|
6,666
| 59,262,403
|
Why does my sigmoid function return values not in the interval ]0,1[?
|
<p>I am implementing logistic regression in Python with numpy. I have generated the following data set:</p>
<pre><code># class 0:
# covariance matrix and mean
cov0 = np.array([[5,-4],[-4,4]])
mean0 = np.array([2.,3])
# number of data points
m0 = 1000
# class 1
# covariance matrix
cov1 = np.array([[5,-3],[-3,3]])
mean1 = np.array([1.,1])
# number of data points
m1 = 1000
# generate m gaussian distributed data points with
# mean and cov.
r0 = np.random.multivariate_normal(mean0, cov0, m0)
r1 = np.random.multivariate_normal(mean1, cov1, m1)
X = np.concatenate((r0,r1))
</code></pre>
<p>Now I have implemented the sigmoid function with the aid of the following methods:</p>
<pre><code>def logistic_function(x):
""" Applies the logistic function to x, element-wise. """
return 1.0 / (1 + np.exp(-x))
def logistic_hypothesis(theta):
return lambda x : logistic_function(np.dot(generateNewX(x), theta.T))
def generateNewX(x):
x = np.insert(x, 0, 1, axis=1)
return x
</code></pre>
<p>After applying logistic regression, I found out that the best thetas are:</p>
<pre><code>best_thetas = [-0.9673200946417307, -1.955812236119612, -5.060885703369424]
</code></pre>
<p>However, when I apply the logistic function with these thetas, then the output is numbers that are not inside the interval [0,1]</p>
<p>Example:</p>
<pre><code>data = logistic_hypothesis(np.asarray(best_thetas))(X)
print(data)
</code></pre>
<p>This gives the following result:</p>
<pre><code>[2.67871968e-11 3.19858822e-09 3.77845881e-09 ... 5.61325410e-03
2.19767618e-01 6.23288747e-01]
</code></pre>
<p>Can someone help me understand what has gone wrong with my implementation? I cannot understand why I am getting such big values. Isn't the sigmoid function supposed to only give results in the [0,1] interval?</p>
|
<p>It does, it's just in <a href="https://docs.python.org/3.3/library/string.html#formatspec" rel="nofollow noreferrer">scientific notation</a>.</p>
<blockquote>
<p>'e' Exponent notation. Prints the number in scientific notation using
the letter ‘e’ to indicate the exponent.</p>
</blockquote>
<pre><code>>>> a = [2.67871968e-11, 3.19858822e-09, 3.77845881e-09, 5.61325410e-03]
>>> [0 <= i <= 1 for i in a]
[True, True, True, True]
</code></pre>
|
python|numpy|regression|logistic-regression|sigmoid
| 2
|
6,667
| 59,300,346
|
Getting a file not found error when iterating over a directory
|
<p>I'm trying to iterate through a file directory in the following way:</p>
<pre><code>path = r'C:\my\path'
for filename in os.listdir(path):
nodes_arr = np.genfromtxt(filename, delimiter=',')
</code></pre>
<p>And I get an error:</p>
<pre class="lang-none prettyprint-override"><code>IOError("%s not found." % path)
OSError: 10028057_nodes not found.
</code></pre>
<p>When I try to print the files in the following way:</p>
<pre><code>path = r'C:\my\path'
for filename in os.listdir(path):
print(filename)
</code></pre>
<p>I get a list and it contains all files in the directory, first one being "10028057_nodes" which provides the error...</p>
|
<p><code>os.listdir</code> returns the file names only. Python IO functions, whether called directly (<code>open</code>...) or through <code>numpy</code>, do not actually know that that these names reside in <code>path</code>. Unless your path is the current directory, which will what Python will assume, this will fail - since the said file name does not exist in the current directory.</p>
<p>What you need, is to concatenate the path to the file name, so:</p>
<pre><code>nodes_arr = np.genfromtxt(os.path.join(path, filename), delimiter=',')
</code></pre>
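<p>Put together, the loop might look like this (a small sketch; adjust the path and delimiter to your data):</p>
<pre><code>import os
import numpy as np

path = r'C:\my\path'
arrays = []
for filename in os.listdir(path):
    full_path = os.path.join(path, filename)   # prepend the directory to the bare file name
    arrays.append(np.genfromtxt(full_path, delimiter=','))
</code></pre>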
|
python|numpy
| 2
|
6,668
| 59,094,787
|
Unknown error when running train.py or model_main.py
|
<p>I've been following this guide on GitHub with a couple of changes, since I'm using the ssdlite_mobilenet_v2_coco model, and I'm getting an unknown error.</p>
<p><a href="https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10" rel="nofollow noreferrer">https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10</a></p>
<p>message:</p>
<pre><code>**Traceback (most recent call last):
File "model_main.py", line 109, in <module>
tf.app.run()
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py", line 126, in run
_sys.exit(main(argv))
File "model_main.py", line 105, in main
tf.estimator.train_and_evaluate(estimator, train_spec, eval_specs[0])
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 439, in train_and_evaluate
executor.run()
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 518, in run
self.run_local()
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\training.py", line 650, in run_local
hooks=train_hooks)
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 363, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 843, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 853, in _train_model_default
input_fn, model_fn_lib.ModeKeys.TRAIN))
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 691, in _get_features_and_labels_from_input_fn
result = self._call_input_fn(input_fn, mode)
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\estimator\estimator.py", line 798, in _call_input_fn
return input_fn(**kwargs)
File "C:\tensorflow\models\research\object_detection\inputs.py", line 504, in _train_input_fn
params=params)
File "C:\tensorflow\models\research\object_detection\inputs.py", line 607, in train_input
batch_size=params['batch_size'] if params else train_config.batch_size)
File "C:\tensorflow\models\research\object_detection\builders\dataset_builder.py", line 130, in build
num_additional_channels=input_reader_config.num_additional_channels)
File "C:\tensorflow\models\research\object_detection\data_decoders\tf_example_decoder.py", line 319, in __init__
default_value=''),
File "C:\tensorflow\models\research\object_detection\data_decoders\tf_example_decoder.py", line 64, in __init__
label_map_proto_file, use_display_name=False)
File "C:\tensorflow\models\research\object_detection\utils\label_map_util.py", line 172, in get_label_map_dict
label_map = load_labelmap(label_map_path_or_proto)
File "C:\tensorflow\models\research\object_detection\utils\label_map_util.py", line 139, in load_labelmap
label_map_string = fid.read()
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 120, in read
self._preread_check()
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 80, in _preread_check
compat.as_bytes(self.__name), 1024 * 512, status)
File "C:\Users\luke9\Anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
esearch\object_detection raining\labelmap.pbtxt : The filename, directory name, or volume label syntax is incorrect.odels
; Unknown error**
</code></pre>
<p>I've tried both train.py and model_main.py as I'm running on tensorflow-gpu 1.8 through anaconda.</p>
<p>labelmap.pbtxt does exist in the training folder; I'm not sure why it is trying to find "raining\labelmap.pbtxt".</p>
|
<p>Replace the backslashes (\) with forward slashes (/) in the file paths you specified. With backslashes, sequences such as <code>\t</code> and <code>\r</code> are interpreted as escape characters, which is why the error shows <code>raining\labelmap.pbtxt</code> instead of <code>training\labelmap.pbtxt</code>. It worked for me.</p>
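<p>For example, assuming the path lives in your pipeline <code>.config</code> file (the exact paths below are placeholders for your own training folder):</p>
<pre><code># pipeline .config excerpt -- forward slashes keep \t and \r from being treated as escapes
train_input_reader {
  label_map_path: "C:/tensorflow/models/research/object_detection/training/labelmap.pbtxt"
  tf_record_input_reader {
    input_path: "C:/tensorflow/models/research/object_detection/train.record"
  }
}
</code></pre>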
|
python-3.x|windows|tensorflow|gpu|object-detection
| 0
|
6,669
| 59,202,053
|
How to find the area under the curve
|
<p>I am a beginner Python user.</p>
<p>There are two data files for the curve (x, y): <a href="https://drive.google.com/open?id=1ZB39G3SmtamjVjmLzkC2JefloZ9iShpO" rel="nofollow noreferrer">https://drive.google.com/open?id=1ZB39G3SmtamjVjmLzkC2JefloZ9iShpO</a></p>
<p>How to find two areas under the curve, as shown in Figure: <a href="https://i.stack.imgur.com/nUpYq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nUpYq.png" alt="enter image description here"></a></p>
<p>Black area (A) and red area (B)</p>
<p>I only know how to find the total area:</p>
<pre><code>from scipy.integrate import trapz
with open('./x_data.txt', 'rt') as f:
x_file = f.read()
with open('./y_data.txt', 'rt') as f:
y_file = f.read()
xlist = []
for line in x_file.split('\n'):
if line:
xlist.append(float(line.strip()))
ylist = []
for line in y_file.split('\n'):
if line:
ylist.append(float(line.strip()))
if len(xlist) != len(ylist):
print(len(xlist), len(ylist))
raise Exception('X and Y have different length')
xData = np.array(xlist)
yData = np.array(ylist)
area = trapz(y = yData, x = xData)
print("area =", area)
</code></pre>
|
<p>You can use Simpson's rule or the trapezium rule to calculate the area under a graph given a table of y-values at regular intervals.</p>
<pre><code>from scipy.integrate import simps
from numpy import trapz
</code></pre>
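<p>A minimal sketch for splitting the total into the two regions, assuming area A is where the curve lies above zero and area B is where it lies below zero (if the boundary is some other baseline, subtract that baseline from <code>yData</code> first):</p>
<pre><code>import numpy as np
from scipy.integrate import trapz

# keep only the part of the curve above / below zero, then integrate each part
area_above = trapz(np.clip(yData, 0, None), xData)    # area A
area_below = -trapz(np.clip(yData, None, 0), xData)   # area B (sign flipped to be positive)

print("A =", area_above)
print("B =", area_below)
print("A - B =", area_above - area_below)             # equals trapz(yData, xData)
</code></pre>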
<p>reference: <a href="https://stackoverflow.com/questions/13320262/calculating-the-area-under-a-curve-given-a-set-of-coordinates-without-knowing-t">Calculating the area under a curve given a set of coordinates, without knowing the function</a></p>
|
python|python-3.x|numpy|data-science|area
| 0
|
6,670
| 59,354,733
|
Compare a row of a dataframe based on multiple conditions
|
<p>My dataframe looks like this: </p>
<pre><code> price trend decreasing increasing condition_decreasing
0 7610.4 no trend 0.0 False
1 7610.4 no trend 0.0 False
2 7610.4 no trend 0.0 False
3 7610.4 decreasing 7610.4 True
4 7610.4 decreasing 7610.4 False
5 7610.4 decreasing 7610.4 False
6 7610.4 decreasing 7610.4 False
7 7610.4 decreasing 7610.4 False
8 7610.4 decreasing 7610.4 False
9 7610.4 decreasing 7610.4 False
10 7610.3 no trend 0.0 True
11 7610.3 no trend 0.0 False
12 7613.9 no trend 0.0 False
13 7613.9 no trend 0.0 False
14 7613.4 no trend 0.0 False
15 7613 decreasing 7613 True
16 7612 decreasing 7612 False
17 7612 decreasing 7612 False
18 7612 no trend 7612 True
</code></pre>
<p>What I basically need to do is: when the column <code>trend</code> goes from <code>no trend</code> to <code>decreasing</code>, take that value from the column <code>price</code> and compare it with the value of the column <code>price</code> when the column trend goes from <code>decreasing</code> back to <code>no trend</code>. So in the example above it would be to compare the value 7610.4 from row 3 with the value 7610.3 from row 10. </p>
<p>I tried to add a column which indicates when the column trend changes, using the following code:
<code>condition_decreasing = (data['trend'] != data['trend'].shift(1))</code></p>
<p>But then I do not know how to iterate over the dataframe in a loop and compare the two prices... any idea? Thanks for the help!</p>
<p>The expected output would be a dataframe like that:</p>
<pre><code>price trend decreasing increasing condition_decreasing output
0 7610.4 no trend 0.0 False
1 7610.4 no trend 0.0 False
2 7610.4 no trend 0.0 False
3 7610.4 decreasing 7610.4 True
4 7610.4 decreasing 7610.4 False
5 7610.4 decreasing 7610.4 False
6 7610.4 decreasing 7610.4 False
7 7610.4 decreasing 7610.4 False
8 7610.4 decreasing 7610.4 False
9 7610.4 decreasing 7610.4 False
10 7610.3 no trend 0.0 True -0.1
11 7610.3 no trend 0.0 False
12 7613.9 no trend 0.0 False
13 7613.9 no trend 0.0 False
14 7613.4 no trend 0.0 False
15 7613 decreasing 7613 True
16 7612 decreasing 7612 False
17 7612 decreasing 7612 False
18 7612 no trend 7612 True -1
</code></pre>
<p>So basically a column with the difference of the two prices <code>7610.3 - 7610.4</code></p>
|
<p>Maybe you want to do this?</p>
<pre><code>import pandas as pd
import numpy as np
price = [7610.3, 7610.3, 7610.4, 7610.4, 7610.4, 7610.4, 7610.3, 7610.3, 7610.9]
df = pd.DataFrame({'price': price})
df['diff'] = df['price'].diff()
conditions = [
(df['diff'] == 0),
(df['diff'] > 0),
(df['diff'] < 0)]
choices = ['no trend', 'increasing', 'decreasing']
df['trend'] = np.select(conditions, choices, default = None)
print(df)
price diff trend
0 7610.3 NaN None
1 7610.3 0.0 no trend
2 7610.4 0.1 increasing
3 7610.4 0.0 no trend
4 7610.4 0.0 no trend
5 7610.4 0.0 no trend
6 7610.3 -0.1 decreasing
7 7610.3 0.0 no trend
8 7610.9 0.6 increasing
</code></pre>
|
python|pandas
| 1
|
6,671
| 45,250,108
|
To count every 3 rows to fit the condition by Pandas rolling
|
<p>I have a dataframe that looks like this: </p>
<pre><code>from pandas import DataFrame

raw_data = {'col0':[1,4,5,1,3,3,1,5,8,9,1,2]}
df = DataFrame(raw_data)
col0
0 1
1 4
2 5
3 1
4 3
5 3
6 1
7 5
8 8
9 9
10 1
11 2
</code></pre>
<p>What I want to do is count, over each rolling window of 3 rows, how many values fit the condition (df['col0']>3), and make a new column that looks like this: </p>
<pre><code> col0 col_roll_count3
0 1 0
1 4 1
2 5 2 #[index 0,1,2/ 4,5 fit the condition]
3 1 2
4 3 1
5 3 0 #[index 3,4,5/no fit the condition]
6 1 0
7 5 1
8 8 2
9 9 3
10 1 2
11 2 1
</code></pre>
<p>How can I achieve that?</p>
<p>I tried this but failed:</p>
<pre><code>df['col_roll_count3'] = df[df['col0']>3].rolling(3).count()
print(df)
col0 col1
0 1 NaN
1 4 1.0
2 5 2.0
3 1 NaN
4 3 NaN
5 3 NaN
6 1 NaN
7 5 3.0
8 8 3.0
9 9 3.0
10 1 NaN
11 2 NaN
</code></pre>
|
<p>Let's use <code>rolling</code>, <code>apply</code>, <code>np.count_nonzero</code>:</p>
<pre><code>import numpy as np

df['col_roll_count3'] = df.col0.rolling(3, min_periods=1)\
.apply(lambda x: np.count_nonzero(x>3))
</code></pre>
<p>Output:</p>
<pre><code> col0 col_roll_count3
0 1 0.0
1 4 1.0
2 5 2.0
3 1 2.0
4 3 1.0
5 3 0.0
6 1 0.0
7 5 1.0
8 8 2.0
9 9 3.0
10 1 2.0
11 2 1.0
</code></pre>
|
pandas
| 1
|
6,672
| 45,061,902
|
Converting JSON to CSV w/ Pandas Library
|
<p>I'm having trouble converting a JSON file to CSV in Python and I'm not sure what's going wrong. The conversion completes but it is not correct. I think there's an issue due to the formatting of the JSON file; however, it's a valid JSON. </p>
<p><strong>Here's the content of my JSON file:</strong></p>
<pre><code>{
"tags": [{
"name": "ACDTestData",
"results": [{
"groups": [{
"name": "type",
"type": "number"
}],
"values": [
[
1409154300000,
1.16003418,
3
],
[
1409154240000,
0.024047852,
3
],
[
1409153280000,
10.25598145,
3
],
[
1409152200000,
10.73193359,
3
],
[
1409151240000,
0.024047852,
3
],
[
1409080200000,
14.34393311,
3
],
[
1409039580000,
4.883850098,
3
],
[
1408977480000,
5.520019531,
3
],
[
1408977360000,
0.00793457,
3
],
[
1408974300000,
2.695922852,
3
],
[
1408968480000,
0.011962891,
3
],
[
1408965720000,
0.427978516,
3
],
[
1408965660000,
0.011962891,
3
]
]
}]
}]
}
</code></pre>
<p><strong>Here's what I tried:</strong></p>
<pre><code>import pandas as pd
json_file = pd.read_json("QueryExportTest2.json")
json_file.to_csv()
</code></pre>
<p><strong>Here's my output:</strong></p>
<p><code>,tags\n0,"{u\'name\': u\'ACDTestData\', u\'results\': [{u\'values\': [[1409154300000L, 1.16003418, 3], [1409154240000L, 0.024047852, 3], [1409153280000L, 10.25598145, 3], [1409152200000L, 10.73193359, 3], [1409151240000L, 0.024047852, 3], [1409080200000L, 14.34393311, 3], [1409039580000L, 4.883850098, 3], [1408977480000L, 5.520019531, 3], [1408977360000L, 0.00793457, 3], [1408974300000L, 2.695922852, 3], [1408968480000L, 0.011962891000000002, 3], [1408965720000L, 0.42797851600000003, 3], [1408965660000L, 0.011962891000000002, 3]], u\'groups\': [{u\'type\': u\'number\', u\'name\': u\'type\'}]}]}"\n</code></p>
<p>This isn't right, because when I put it into a new Excel CSV doc instead of just printing it, the CSV is all in one cell.</p>
<p><strong>If it helps, when I try this:</strong></p>
<pre><code>import json
with open('QueryExportTest2.json') as json_data:
d = json.load(json_data)
print(d)
</code></pre>
<p><strong>I get this:</strong></p>
<p><code>{u'tags': [{u'name': u'ACDTestData', u'results': [{u'values': [[1409154300000L, 1.16003418, 3], [1409154240000L, 0.024047852, 3], [1409153280000L, 10.25598145, 3], [1409152200000L, 10.73193359, 3], [1409151240000L, 0.024047852, 3], [1409080200000L, 14.34393311, 3], [1409039580000L, 4.883850098, 3], [1408977480000L, 5.520019531, 3], [1408977360000L, 0.00793457, 3], [1408974300000L, 2.695922852, 3], [1408968480000L, 0.011962891, 3], [1408965720000L, 0.427978516, 3], [1408965660000L, 0.011962891, 3]], u'groups': [{u'type': u'number', u'name': u'type'}]}]}]}</code></p>
<p>How can I convert this nested JSON to CSV properly?</p>
|
<p>Your json is a nested dict (with lists and other dictionaries). I guess that you are interested in the <code>values</code> section of the <code>json</code>. If my assumption is correct, since this is a single entry json, try the following</p>
<pre><code># d is the dict you loaded with json.load
df = pd.DataFrame(d['tags'][0]['results'][0]['values'])
df.columns = ['var1', 'var2', 'var3']
df.to_csv(filename)  # filename = path of the output CSV
</code></pre>
<p>If you will have more records you will have to iterate over the lists of values, namely you could append them.</p>
<pre><code>all_results = d['tags'][0]['results']
for i in range(0, len(all_results)):
    if i == 0:
        my_df = pd.DataFrame(all_results[i]['values'])
    else:
        my_df = my_df.append(pd.DataFrame(all_results[i]['values']))
</code></pre>
|
python|json|csv|pandas
| 1
|
6,673
| 57,184,622
|
looping df.query by setting the condition as a variable
|
<p>So I have multiple criteria and I want to use a loop to query them each time so I can filter the data. So essentially I want to:</p>
<pre><code>metric = ["case1", "case2"]  # etc
# I will loop back around instead of doing them individually
df = df.query('Metric == metric[i]')
# instead of -->
df = df.query('Metric == "case1"')
df = df.query('Metric == "case2"')
</code></pre>
|
<p>You can do it with</p>
<pre><code>df.query('Metric == @metric[0]')
</code></pre>
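<p>And to loop over all the criteria (a small sketch of the pattern described in the question):</p>
<pre><code>metric = ["case1", "case2"]

for m in metric:
    subset = df.query('Metric == @m')   # @m refers to the Python loop variable
    # ... do something with subset ...
</code></pre>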
|
python|pandas
| 1
|
6,674
| 56,938,875
|
Encoding a column in Pandas based on occurance of value 0
|
<p>I have a Pandas dataframe with a column like this,</p>
<pre><code>df = pd.DataFrame()
df['A'] = [1, 1, 0, 1, 1, 0]
</code></pre>
<p>I want to make another column with values like this,</p>
<p><code>[1, 1, 1, 2, 2, 2]</code></p>
<p>The idea is to start with value <code>1</code> and increment the value when I get a <code>1</code> and only if the last value was <code>0</code>. In other words, if I have a <code>0</code> then increment the value in the next step.</p>
<p>I used an apply to do this like shown below,</p>
<pre><code>k = 1
def fn(row):
global k
a, b = row['A'], row['x']
if a == 1 and b == 1:
pass
elif a == 1 and b == 0:
pass
elif a == 0 and b == 1:
k += 1
return (k - 1)
else:
k += 1
return (k - 1)
return k
df['x'] = df['A'].shift(-1)
df['k'] = df.apply(lambda row : fn(row), axis=1)
</code></pre>
<p>Which is really inefficient. I can't figure out a faster method for this.</p>
<p>How can I implement this in Pandas efficiently?</p>
|
<p>IIUC, you want to count the occurrence of <code>0</code> but shifted:</p>
<pre><code>df['A'].eq(0).cumsum().shift(fill_value=0)+1
</code></pre>
<p>Or:</p>
<pre><code>df['A'].shift().eq(0).cumsum()+1
</code></pre>
<p>Output:</p>
<pre><code>0 1
1 1
2 1
3 2
4 2
5 2
Name: A, dtype: int32
</code></pre>
|
python|pandas
| 3
|
6,675
| 57,054,637
|
np.vectorize fails on a 2-d numpy array as input
|
<p>I am trying to vectorize a function that takes a numpy array as input. I have a 2-d numpy array (shape is 1000,100), and the function is to be applied to each of the 1000 rows. I tried to vectorize the function using <code>np.vectorize</code>. Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>def fun(i):
print(i)
location = geocoder.google([i[1], i[0]], method="reverse")
#print type(location)
location = str(location)
location = location.split("Reverse")
if len(location) > 1:
location1 = location[1]
return [i[0], i[1], location1]
#using np.vectorize
vec_fun = np.vectorize(fun)
</code></pre>
<p>Which raises the error</p>
<pre class="lang-none prettyprint-override"><code><ipython-input-19-1ee9482c6161> in fun(i)
1 def fun(i):
2 print(i)
----> 3 location = geocoder.google([i[1], i[0]], method="reverse")
4 #print type(location)
5 location = lstr(location)
IndexError: invalid index to scalar variable.
</code></pre>
<p>I have printed the argument that is passed into <code>fun</code>, and it is a single value (the first element of the vector) rather than the vector (one row), which is the reason for the index error, but I have no idea how to resolve this.</p>
|
<p>By this time I think you have solved your problem. However, I just found a way to solve this that may help other people with the same question. You can pass a <code>signature</code> string parameter to <code>np.vectorize</code> in order to specify the input and output shapes. For example, the signature <code>"(n)->()"</code> expects an input of length <code>(n)</code> (one row) and outputs a scalar <code>()</code>. Therefore, it will broadcast over rows:</p>
<pre><code>import numpy as np

def my_sum(row):
    return np.sum(row)

row_sum = np.vectorize(my_sum, signature="(n)->()")
my_mat = np.array([
[1, 1, 1],
[2, 2, 2],
])
row_sum(my_mat)
OUT: array([3, 6])
</code></pre>
|
python|numpy|vectorization
| 2
|
6,676
| 46,085,874
|
Occupies all GPU memory with the following tensorflow code?
|
<p>I was experimenting with word2vec with the code from <a href="https://github.com/chiphuyen/stanford-tensorflow-tutorials/blob/master/examples/04_word2vec_no_frills.py" rel="nofollow noreferrer">https://github.com/chiphuyen/stanford-tensorflow-tutorials/blob/master/examples/04_word2vec_no_frills.py</a> </p>
<p>However, it easily uses up all my GPU memory. Any idea why?</p>
<pre><code>with tf.name_scope('data'):
center_words = tf.placeholder(tf.int32, shape=[BATCH_SIZE], name='center_words')
target_words = tf.placeholder(tf.int32, shape=[BATCH_SIZE, 1], name='target_words')
with tf.name_scope("embedding_matrix"):
embed_matrix = tf.Variable(tf.random_uniform([VOCAB_SIZE, EMBED_SIZE], -1.0, 1.0), name="embed_matrix")
with tf.name_scope("loss"):
embed = tf.nn.embedding_lookup(embed_matrix, center_words, name="embed")
nce_weight = tf.Variable(tf.truncated_normal([VOCAB_SIZE, EMBED_SIZE], stddev=1.0/(EMBED_SIZE ** 0.5)), name="nce_weight")
nce_bias = tf.Variable(tf.zeros([VOCAB_SIZE]), name="nce_bias")
loss = tf.reduce_mean(tf.nn.nce_loss(weights=nce_weight, biases=nce_bias, labels=target_words, inputs=embed, num_sampled=NUM_SAMPLED, num_classes=VOCAB_SIZE), name="loss")
optimizer = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
total_loss = 0.0 # we use this to calculate the average loss in the last SKIP_STEP steps
writer = tf.summary.FileWriter('./graphs/no_frills/', sess.graph)
for index in range(NUM_TRAIN_STEPS):
centers, targets = next(batch_gen)
loss_batch, _ = sess.run([loss, optimizer], feed_dict={center_words:centers, target_words:targets})
total_loss += loss_batch
if (index + 1) % SKIP_STEP == 0:
print('Average loss at step {}: {:5.1f}'.format(index, total_loss / SKIP_STEP))
total_loss = 0.0
writer.close()
</code></pre>
<p><a href="https://i.stack.imgur.com/MoBgj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MoBgj.png" alt="memory used up"></a></p>
|
<p>This is default tensorflow behaviour. If you want to limit GPU memory allocation to only what is needed, specify this in the session config.</p>
<pre><code>config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
</code></pre>
<p>Alternatively you can specify a maximum fraction of GPU memory to use:</p>
<pre><code>config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
session = tf.Session(config=config)
</code></pre>
<p>It's not very well documented, but the <a href="https://github.com/tensorflow/tensorflow/blob/r1.3/tensorflow/core/protobuf/config.proto" rel="noreferrer">message definitions</a> are the place to start if you want to do more than that.</p>
|
memory|tensorflow|gpu
| 5
|
6,677
| 11,459,106
|
How use the mean method on a pandas TimeSeries with Decimal type values?
|
<p>I need to store Python decimal type values in a pandas <code>TimeSeries</code>/<code>DataFrame</code> object. Pandas gives me an error when using the "groupby" and "mean" on the TimeSeries/DataFrame. The following code based on floats works well:</p>
<pre><code>[0]: by = lambda x: lambda y: getattr(y, x)
[1]: rng = date_range('1/1/2000', periods=40, freq='4h')
[2]: rnd = np.random.randn(len(rng))
[3]: ts = TimeSeries(rnd, index=rng)
[4]: ts.groupby([by('year'), by('month'), by('day')]).mean()
2000 1 1 0.512422
2 0.447235
3 0.290151
4 -0.227240
5 0.078815
6 0.396150
7 -0.507316
</code></pre>
<p>But I get an error if I do the same using decimal values instead of floats:</p>
<pre><code>[5]: rnd = [Decimal(x) for x in rnd]
[6]: ts = TimeSeries(rnd, index=rng, dtype=Decimal)
[7]: ts.groupby([by('year'), by('month'), by('day')]).mean() #Crash!
Traceback (most recent call last):
File "C:\Users\TM\Documents\Python\tm.py", line 100, in <module>
print ts.groupby([by('year'), by('month'), by('day')]).mean()
File "C:\Python27\lib\site-packages\pandas\core\groupby.py", line 293, in mean
return self._cython_agg_general('mean')
File "C:\Python27\lib\site-packages\pandas\core\groupby.py", line 365, in _cython_agg_general
raise GroupByError('No numeric types to aggregate')
pandas.core.groupby.GroupByError: No numeric types to aggregate
</code></pre>
<p>The error message is "GroupByError('No numeric types to aggregate')". Is there any chance to use the standard aggregations like sum, mean, and quantile on a TimeSeries or DataFrame containing Decimal values? </p>
<p>Why doesn't it work, and is there an equally fast alternative if it is not possible?</p>
<p>EDIT: I just realized that most of the other functions (min, max, median, etc.) work fine, but not the mean function that I desperately need :-(.</p>
|
<pre><code>import numpy as np
ts.groupby([by('year'), by('month'), by('day')]).apply(np.mean)
</code></pre>
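<p>The Cython-optimized aggregations (such as <code>mean</code>) only operate on numeric dtypes; a column of <code>Decimal</code> objects is stored with <code>object</code> dtype, so routing the aggregation through <code>apply</code> with <code>np.mean</code> sidesteps the <code>GroupByError</code>, at the cost of some speed.</p>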
|
python|decimal|dataframe|pandas
| 11
|
6,678
| 28,716,324
|
Multiindex pandas groupby + aggregate, keep full index
|
<p>I have a two-level hierarchically-indexed sequence of integers. </p>
<pre><code> >> s
id1 id2
1 a 100
b 10
c 9
2 a 2000
3 a 5
b 10
c 15
d 20
...
</code></pre>
<p>I want to group by id1, and select the maximum value, but have the <em>full</em> index in the result. I have tried the following:</p>
<pre><code> >> s.groupby(level=0).aggregate(np.max)
id1
1 100
2 2000
3 20
</code></pre>
<p>But the result is indexed by id1 only. I want my output to look like this:</p>
<pre><code> id1 id2
1 a 100
2 a 2000
3 d 20
</code></pre>
<p>A related, but more complicated, question was asked here:
<a href="https://stackoverflow.com/questions/24382078/multiindexed-pandas-groupby-ignore-a-level">Multiindexed Pandas groupby, ignore a level?</a>
As it states, the answer is kind of a hack. </p>
<p>Does anyone know a better solution? If not, what about the special case where every value of id2 is unique?</p>
|
<p>One way to select full rows after a groupby is to use <code>groupby/transform</code> to build a boolean mask and then use the mask to select the full rows from <code>s</code>:</p>
<pre><code>In [110]: s[s.groupby(level=0).transform(lambda x: x == x.max()).astype(bool)]
Out[110]:
id1 id2
1 a 100
2 a 2000
3 d 20
Name: s, dtype: int64
</code></pre>
<p>Another way, which is faster in some cases -- such as when there are a lot of groups -- is to merge the max values <code>m</code> into a DataFrame along with the values in <code>s</code>, and then select rows based on equality between <code>m</code> and <code>s</code>:</p>
<pre><code>def using_merge(s):
    m = s.groupby(level=0).agg(np.max)
    df = s.reset_index(['id2'])
    df['m'] = m
    result = df.loc[df['s']==df['m']]
    del result['m']
    result = result.set_index(['id2'], append=True)
    return result['s']
</code></pre>
<p>Here is an example showing that <code>using_merge</code>, while more complicated, may be faster than <code>using_transform</code>:</p>
<pre><code>import numpy as np
import pandas as pd
def using_transform(s):
    return s[s.groupby(level=0).transform(lambda x: x == x.max()).astype(bool)]
N = 10**5
id1 = np.random.randint(100, size=N)
id2 = np.random.choice(list('abcd'), size=N)
index = pd.MultiIndex.from_arrays([id1, id2])
ss = pd.Series(np.random.randint(100, size=N), index=index)
ss.index.names = ['id1', 'id2']
ss.name = 's'
</code></pre>
<hr>
<p>Timing these two functions using IPython's <code>%timeit</code> function yields:</p>
<pre><code>In [121]: %timeit using_merge(ss)
100 loops, best of 3: 12.8 ms per loop
In [122]: %timeit using_transform(ss)
10 loops, best of 3: 45 ms per loop
</code></pre>
|
python|pandas
| 5
|
6,679
| 28,444,101
|
How to print the elements (which are array) of list in Python?
|
<p>I have some code in Python which outputs a list named datafile. One of the elements of that list (the first element, for example) is:</p>
<pre><code>datafile[0]=(array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.]]), array([[ 10.],
[ 20.],
[ 30.],
[ 40.],
[ 50.]]))
</code></pre>
<p>I'd like to print the first and second elements of the list:</p>
<p>First element:</p>
<pre><code>(array([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.]]))
</code></pre>
<p>What's the easy way to print or separate those elements? Thank you.<br>
EDIT:
If I do as Joran said, then once I can separate those elements, I want to superimpose plots of datafile[i][0] versus datafile[i][1].</p>
<p>I was trying to achieve that with a for loop:</p>
<pre><code>for i in datafile:
    plt.plot(datafile[i][0],datafile[i][1])
    plt.show
</code></pre>
<p>But I am getting the error "list indices must be integers, not tuple". I have been stuck on this for a while. </p>
<p>NEVERMIND! I fixed it! Thank you guys for your help! :)</p>
|
<p>ok, after the edits, I think I found the issue.</p>
<p>in your code:</p>
<pre><code>for i in datafile:
    plt.plot(datafile[i][0],datafile[i][1])
    plt.show
</code></pre>
<p>You are in fact trying to use a tuple as a list index: <code>for i in datafile</code> makes <code>i</code> one of the tuples itself (each element of datafile is a tuple), so <code>datafile[i]</code> fails when you build the plot arguments. </p>
<p>Try to use something like:</p>
<pre><code>for element in datafile:
    plt.plot(element[0], element[1])
plt.show()
</code></pre>
<p>And next time, try to include as many examples, code, results, etc as you can, as it will help a lot in figuring out the solution ;)</p>
|
python|arrays|list|numpy
| 0
|
6,680
| 50,686,873
|
Most efficient way to average bunches of x embeddings from a Tensorflow variable that contains y total embeddings
|
<p>Say that I have y total embeddings which were retrieved using this code</p>
<pre><code>embeds = tf.nn.embedding_lookup(embeddings, train_dataset)
</code></pre>
<p>So the data would look something like this</p>
<pre><code>embeds = [embed45, embed2, embed939, embed3, embed32, embed2, . . . etc]
</code></pre>
<p>And let's say I want to take the average of groups of 3 embeddings. So something like</p>
<pre><code>averaged_embeds = [ averageOf(embed45, embed2, embed939) , averageOf(embed3, embed32, embed2), . . . . etc]
</code></pre>
<p>so when evaluated it'll look something like this</p>
<pre><code>averaged_embeds = [ averagedEmbeds1, averagedEmbeds2, averagedEmbeds3, . . . etc]
</code></pre>
<p>What is the best way to go about doing that?</p>
<p>My first thought was tf.segment_mean, but as far as I can tell it can only take averages within each of the embeddings; it doesn't average a bunch of embeddings (let me know if this is wrong). </p>
<p>There is also tf.reduce_mean, which can average along a specified dimension, but it takes the average across all embeddings, not bunches of a particular size. </p>
|
<p>You could use <a href="https://www.tensorflow.org/api_docs/python/tf/split" rel="nofollow noreferrer"><code>tf.split</code></a>, but then <code>num_or_size_splits</code> must evenly divide the length of the input if it is a scalar, or its sum must match the length of the input along the split dimension if it is a list (similar restrictions hold for <a href="https://www.tensorflow.org/api_docs/python/tf/segment_mean" rel="nofollow noreferrer"><code>tf.segment_mean</code></a>). A better approach is to use <code>tf.extract_image_patches</code>, where those restrictions don't apply:</p>
<pre><code># generate batch of inputs
def get_batch(tensor, batch, k):
    return tf.extract_image_patches(tensor,
                                    ksizes=[1, batch, k, 1],
                                    strides=[1, batch, k, 1],
                                    rates=[1, 1, 1, 1], padding='VALID')

embed_dim = 5
batch = 3
x = np.arange(200).reshape(-1, embed_dim)

embeddings = tf.constant(x)
train_dataset = tf.constant([0,1,2,5,6,7])
embeds = tf.nn.embedding_lookup(embeddings, train_dataset)

split = tf.reshape(get_batch(embeds[None,..., None], batch, embed_dim),
                   [-1, batch, embed_dim])
avg = tf.reduce_mean(split, 1)

with tf.Session() as sess:
    print(sess.run(embeds))
    #[[ 0  1  2  3  4]
    # [ 5  6  7  8  9]
    # [10 11 12 13 14]
    # [25 26 27 28 29]
    # [30 31 32 33 34]
    # [35 36 37 38 39]]

    print(sess.run(split))
    #[[[ 0  1  2  3  4]
    #  [ 5  6  7  8  9]
    #  [10 11 12 13 14]]
    # [[25 26 27 28 29]
    #  [30 31 32 33 34]
    #  [35 36 37 38 39]]]

    print(sess.run(avg))
    #[[ 5  6  7  8  9]
    # [30 31 32 33 34]]
</code></pre>
<p>For 3D segments the code changes to:</p>
<pre><code> dim1 = 2
x = np.arange(200).reshape(-1, dim1, embed_dim)
split = tf.reshape(get_batch(embeds[None,...], batch, dim1),
[-1, batch, dim1, embed_dim])
</code></pre>
|
python|tensorflow
| 0
|
6,681
| 51,047,286
|
Why I am getting this value error in KNN model?
|
<p>I am applying a KNN model on the Breast Cancer Wisconsin data, but every time I run the code I get this error:</p>
<blockquote>
<p>ValueError: Found input variables with inconsistent numbers of samples: [559, 140]</p>
</blockquote>
<pre><code>import numpy as np
import pandas as pd
from sklearn import preprocessing,cross_validation,neighbors
df=pd.read_csv('breast-cancer-wisconsin.data.txt')
df.replace('?',-99999,inplace=True)
df.drop(['id'],1,inplace=True)
X=np.array(df.drop(['class'],1))
y=np.array(df['class'])
X_train, y_train, X_test, y_test = cross_validation.train_test_split(X, y, test_size=0.2)
clf = neighbors.KNeighborsClassifier()
clf.fit(X_train, y_train)
accuracy=clf.score(X_test, y_test)
print(accuracy)
example=np.array([4,2,1,1,1,2,3,2,1])
example=example.reshape(-1,1)
prediction=clf.predict(example)
print(prediction)
</code></pre>
|
<p>The output of cross_validation.train_test_split, as per the <a href="http://scikit-learn.org/0.16/modules/generated/sklearn.cross_validation.train_test_split.html" rel="nofollow noreferrer">documentation</a>, should be <code>X_train, X_test, y_train, y_test</code>. Change that line in your code to:</p>
<pre><code>X_train,X_test,y_train,y_test=cross_validation.train_test_split(X,y,test_size=0.2)
</code></pre>
|
python|pandas|scikit-learn
| 1
|
6,682
| 50,744,746
|
Follow-up rolling_apply deprecated
|
<p>Following up on this answer: <a href="https://stackoverflow.com/questions/23898631/is-there-a-way-to-do-a-weight-average-rolling-sum-over-a-grouping">Is there a way to do a weight-average rolling sum over a grouping?</a></p>
<pre><code>rsum = pd.rolling_apply(g.values,p,lambda x: np.nansum(w*x),min_periods=p)
</code></pre>
<p>rolling_apply is deprecated now. How would you change this to work with the current functionality?</p>
|
<p>As of 0.18+, use <code>Series.rolling.apply</code>.</p>
<pre><code>w = np.array([0.1,0.1,0.2,0.6])
df.groupby('ID').VALUE.apply(
lambda x: x.rolling(window=4).apply(lambda x: np.dot(x, w), raw=False))
0 NaN
1 NaN
2 NaN
3 146.0
4 166.0
5 NaN
6 NaN
7 NaN
8 2.5
9 NaN
10 NaN
11 NaN
12 35.5
13 21.4
14 NaN
15 NaN
16 NaN
17 8.3
18 9.8
19 NaN
Name: VALUE, dtype: float64
</code></pre>
<p>The <code>raw</code> argument is new in 0.23 (it specifies whether a Series or a NumPy array is passed to the function), so remove it if you're having trouble on older versions.</p>
|
python|pandas
| 2
|
6,683
| 50,787,553
|
Converting a flat table of records to an aggregate dataframe in Pandas
|
<p>I have a flat table of records about objects. Object have a type (ObjType) and are hosted in containers (ContainerId). The records also have some other attributes about the objects. However, they are not of interest at present. So, basically, the data looks like this:</p>
<pre><code>Id ObjName XT ObjType ContainerId
2 name1 x1 A 2
3 name2 x5 B 2
22 name5 x3 D 7
25 name6 x2 E 7
35 name7 x3 G 7
..
..
92 name23 x2 A 17
95 name24 x8 B 17
99 name25 x5 A 21
</code></pre>
<p>What I am trying to do is 're-pivot' this data to further analyze which containers are 'similar' by looking at the types of objects they host in aggregate. </p>
<p>So, I am looking to convert the above data to the form below:</p>
<pre><code>ObjType A B C D E F G
ContainerId
2 2.0 1.0 1.0 0.0 0.0 0.0 0.0
7 0.0 0.0 0.0 1.0 2.0 1.0 1.0
9 1.0 1.0 0.0 1.0 0.0 0.0 0.0
11 0.0 0.0 0.0 2.0 3.0 1.0 1.0
14 1.0 1.0 0.0 1.0 0.0 0.0 0.0
17 1.0 1.0 0.0 0.0 0.0 0.0 0.0
21 1.0 0.0 0.0 0.0 0.0 0.0 0.0
</code></pre>
<p>This is how I have managed to do it currently (after a lot of stumbling and using various tips from questions such as <a href="https://stackoverflow.com/questions/10373660/converting-a-pandas-groupby-object-to-dataframe">this one</a>). I am getting the right results but, being new to Pandas and Python, I feel that I must be taking a long route. (I have added a few comments to explain the pain points.) </p>
<pre><code>import pandas as pd
rdf = pd.read_csv('.\\testdata.csv')
#The data in the below group-by is all that is needed but in a re-pivoted format...
rdfg = rdf.groupby('ContainerId').ObjType.value_counts()
#Remove 'ContainerId' and 'ObjType' from the index
#Had to do reset_index in two steps because otherwise there's a conflict with 'ObjType'.
#That is, just rdfg.reset_index() does not work!
rdx = rdfg.reset_index(level=['ContainerId'])
#Renaming the 'ObjType' column helps get around the conflict so the 2nd reset_index works.
rdx.rename(columns={'ObjType':'Count'}, inplace=True)
cdx = rdx.reset_index()
#After this a simple pivot seems to do it
cdf = cdx.pivot(index='ContainerId', columns='ObjType',values='Count')
#Replacing the NaNs because not all containers have all object types
cdf.fillna(0, inplace=True)
</code></pre>
<p><strong>Ask:</strong> Can someone please share other possible approaches that could perform this transformation?</p>
|
<p>This is a use case for <code>pd.crosstab</code>. <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.crosstab.html" rel="nofollow noreferrer">Docs</a>.</p>
<p>e.g.</p>
<pre><code>In [539]: pd.crosstab(df.ContainerId, df.ObjType)
Out[539]:
ObjType A B D E G
ContainerId
2 1 1 0 0 0
7 0 0 1 1 1
17 1 1 0 0 0
21 1 0 0 0 0
</code></pre>
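<p>If you also want columns for object types that never occur (the desired output above lists C and F with zeros), a small sketch assuming the full set of types is known up front:</p>
<pre><code>all_types = list('ABCDEFG')
cdf = pd.crosstab(df.ContainerId, df.ObjType).reindex(columns=all_types, fill_value=0)
</code></pre>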
|
pandas|dataframe|group-by|pivot-table
| 1
|
6,684
| 50,886,294
|
Error on running prediction for a model
|
<p>While running the below prediction for a model</p>
<pre><code>y_pred_m16 = lm_16.predict(X_test_m16)
</code></pre>
<p>I am getting the following error. Any clue as to why this is happening?</p>
<pre><code>ValueError                                Traceback (most recent call last)
<ipython-input-148-ff5c2d04d6a6> in <module>()
      1 # Making predictions
----> 2 y_pred_m16 = lm_16.predict(X_test_m16)

~\AppData\Local\Continuum\anaconda3\lib\site-packages\statsmodels\base\model.py in predict(self, exog, transform, *args, **kwargs)
    790             exog = np.atleast_2d(exog)  # needed in count model shape[1]
    791
--> 792         predict_results = self.model.predict(self.params, exog, *args, **kwargs)
    793
    794         if exog_index is not None and not hasattr(predict_results, 'predicted_values'):

~\AppData\Local\Continuum\anaconda3\lib\site-packages\statsmodels\regression\linear_model.py in predict(self, params, exog)
    259             exog = self.exog
    260
--> 261         return np.dot(exog, params)
    262
    263     def get_distribution(self, params, scale, exog=None, dist_class=None):

ValueError: shapes (62,7) and (8,) not aligned: 7 (dim 1) != 8 (dim 0)
</code></pre>
</code></pre>
|
<p>It seems that the training and testing sets have different dimensions. Is it possible that you trained with 8 features and are testing with 7?</p>
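<p>A minimal way to check, assuming the variable names from the question (note that statsmodels counts an added constant as an extra column):</p>
<pre><code>print(lm_16.params.shape)   # e.g. (8,)   -> the model was fit with 8 columns
print(X_test_m16.shape)     # e.g. (62, 7) -> only 7 columns are being supplied
</code></pre>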
|
python|pandas|statsmodels
| 1
|
6,685
| 50,822,119
|
Numpy array to PIL image format
|
<p>I'm trying to convert an image from a numpy array format to a PIL one. This is my code:</p>
<pre><code>img = numpy.array(image)
row,col,ch= np.array(img).shape
mean = 0
# var = 0.1
# sigma = var**0.5
gauss = np.random.normal(mean,1,(row,col,ch))
gauss = gauss.reshape(row,col,ch)
noisy = img + gauss
im = Image.fromarray(noisy)
</code></pre>
<p>The input to this method is a PIL image. This method should add Gaussian noise to the image and return it as a PIL image once more.</p>
<p>Any help is greatly appreciated! </p>
|
<p>In my comments I meant that you do something like this:</p>
<pre><code>import numpy as np
from PIL import Image
img = np.array(image)
mean = 0
# var = 0.1
# sigma = var**0.5
gauss = np.random.normal(mean, 1, img.shape)
# normalize image to range [0,255]
noisy = img + gauss
minv = np.amin(noisy)
maxv = np.amax(noisy)
noisy = (255 * (noisy - minv) / (maxv - minv)).astype(np.uint8)
im = Image.fromarray(noisy)
</code></pre>
|
python|image|numpy|python-imaging-library
| 7
|
6,686
| 20,853,179
|
Purpose of 'ax' keyword in pandas scatter_matrix function
|
<p>I'm puzzled by the meaning of the '<strong>ax</strong>' keyword in the pandas <strong>scatter_matrix</strong> function:</p>
<p>pd.scatter_matrix(frame, alpha=0.5, figsize=None, <strong>ax=None</strong>, grid=False, diagonal='hist', marker='.', density_kwds={}, hist_kwds={}, **kwds)</p>
<p>The only clue given in the docstring for the ax keyword is too cryptic for me:</p>
<pre><code>ax : Matplotlib axis object
</code></pre>
<p>I had a look in the pandas code for the scatter_matrix function, and the ax variable is incorporated in the following matplotlib subplots call:</p>
<pre><code>fig, axes = plt.subplots(nrows=n, ncols=n, figsize=figsize, ax=ax,
squeeze=False)
</code></pre>
<p>But, for the life of me, I can't find any reference to an 'ax' keyword in matplotlib subplots!</p>
<p>Can anyone tell me what this ax keyword is for???</p>
|
<p>This is tricky here. When looking at the source of pandas <code>scatter_matrix</code> you will find this line right after the docstring:</p>
<pre><code>fig, axes = _subplots(nrows=n, ncols=n, figsize=figsize, ax=ax, squeeze=False)
</code></pre>
<p>Hence, internally, a new figure/axes combination is created using the internal <code>_subplots</code> method. This is strongly related to matplotlib's <code>subplots</code> command but slightly different. Here, the <code>ax</code> keyword is supplied as well. If you look at the corresponding source (<code>pandas.tools.plotting._subplots</code>) you will find these lines:</p>
<pre><code>if ax is None:
    fig = plt.figure(**fig_kw)
else:
    fig = ax.get_figure()
    fig.clear()
</code></pre>
<p>Hence, if you supply an axes object (e.g. created using matplotlib's <code>subplots</code> command), pandas <code>scatter_matrix</code> grabs the corresponding (matplotlib) figure object and deletes its content. Afterwards, a new subplots grid is created in this figure object.</p>
<p>All in all, the <code>ax</code> keyword allows you to plot the scatter matrix into a <em>given</em> figure (even though IMHO in a slightly strange way).</p>
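<p>A small usage sketch (with the old top-level <code>pd.scatter_matrix</code> name used in the question; newer pandas exposes the same function as <code>pandas.plotting.scatter_matrix</code>):</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame(np.random.randn(100, 3), columns=list('abc'))
fig, ax = plt.subplots(figsize=(6, 6))
pd.scatter_matrix(df, alpha=0.5, diagonal='hist', ax=ax)  # drawn into ax's figure
plt.show()
</code></pre>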
|
python|matplotlib|plot|pandas|scatter-plot
| 3
|
6,687
| 33,251,320
|
DataFrame object has no attribute 'sample'
|
<p>Simple code like this won't work anymore on my python shell:</p>
<pre><code>import pandas as pd
df=pd.read_csv("K:/01. Personal/04. Models/10. Location/output.csv",index_col=None)
df.sample(3000)
</code></pre>
<p>The error I get is:</p>
<pre><code>AttributeError: 'DataFrame' object has no attribute 'sample'
</code></pre>
<p>DataFrames definitely have a sample function, and this used to work.
I recently had some trouble installing and then uninstalling another distribution of python. I don't know if this could be related.</p>
<p>I've previously had a similar problem when trying to execute a script which had the same name as a module I was importing; that is not the case here, and pandas.read_csv is actually working.</p>
<p>What could cause this?</p>
|
<p>As given in the <a href="http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.sample.html" rel="nofollow">documentation of <code>DataFrame.sample</code></a> -</p>
<blockquote>
<p><strong><code>DataFrame.sample(n=None, frac=None, replace=False, weights=None, random_state=None, axis=None)</code></strong></p>
<p>Returns a random sample of items from an axis of object.</p>
<p><strong>New in version 0.16.1.</strong></p>
</blockquote>
<p>(Emphasis mine).</p>
<p><code>DataFrame.sample</code> is added in <code>0.16.1</code> , you can either -</p>
<ol>
<li><p>Upgrade your <code>pandas</code> version to latest, you can use <code>pip</code> for that, Example -</p>
<pre><code>pip install pandas --upgrade
</code></pre></li>
<li><p>Or if you don't want to upgrade, and want to sample few rows from the dataframe, you can also use <a href="https://docs.python.org/2/library/random.html#random.sample" rel="nofollow"><code>random.sample()</code></a>, Example -</p>
<pre><code>import random
num = 100 #number of samples
sampleddata = df.loc[random.sample(list(df.index),num)]
</code></pre></li>
</ol>
|
python|pandas|module
| 7
|
6,688
| 66,733,616
|
Installing version 1.15 of tensorflow-serving-api to Centos 8
|
<p>I am trying to install TensorFlow Serving on my CentOS 8 machine. Installing with a Docker image is not an option for CentOS, so I am trying to install it with pip. These are the commands for installing tensorflow-model-server:</p>
<pre><code>pip3 install tensorflow-serving-api==1.15
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | sudo tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | sudo apt-key add -
sudo apt-get update && sudo apt-get install tensorflow-model-server
</code></pre>
<p>The problem is that I need version 1.15.0, and I couldn't find how to modify the links to install the 1.15 version. Any help with modifying the links, or ideas for installing "tensorflow/serving" on CentOS 8, will be appreciated :)</p>
|
<p>I found the links:</p>
<pre><code>wget 'http://storage.googleapis.com/tensorflow-serving-apt/pool/tensorflow-model-server-1.15.0/t/tensorflow-model-server/tensorflow-model-server_1.15.0_all.deb'
dpkg -i tensorflow-model-server_1.15.0_all.deb
pip3 install tensorflow-serving-api==1.15
</code></pre>
<p>With these commands, it works :)</p>
|
tensorflow|centos|tensorflow-serving|tensorflow1.15
| 2
|
6,689
| 66,523,003
|
to_csv storing columns label on every insert of data to csv file
|
<p>When I insert data into the CSV file for the first time everything is fine,
but on the second insert it writes the column names again.</p>
<pre><code>import pandas as pd
name = input("Enter student name")
print("")
print("enter marks info below")
print("")
eng= input("enter English mark : ")
maths= input("enter Maths mark : ")
physics= input("enter Physics mark : ")
chemistry= input("enter Chemistry mark : ")
ip= input("enter Informatic Practices mark : ")
dict = {
"name":{name:name},
"english":{name:eng},
"maths":{name:maths},
"physics":{name:physics},
"chemistry":{name:chemistry},
"ip":{name:ip}
}
df= pd.DataFrame(dict)
df.to_csv("hello.csv", sep="|",index=False,na_rep="null",mode='a')
print("hello.csv")
read = pd.read_csv("hello.csv", sep='|')
print(read)
</code></pre>
<p><strong>data in csv file :</strong></p>
<pre><code>name|english|maths|physics|chemistry|ip
dddd|dd|e3|3|3|3
name|english|maths|physics|chemistry|ip
ddddddd|e33|33|3||3
name|english|maths|physics|chemistry|ip
dddddd|33|333||3|
</code></pre>
<p><strong>Please help me fix this so that the column names do not get added multiple times.</strong></p>
|
<p>You can read the existing CSV file (if any) every time before you run this script, append the new row, and rewrite the file:</p>
<pre><code>import pandas as pd
import os
df = pd.DataFrame() if not os.path.exists("hello.csv") else pd.read_csv("hello.csv", sep='|')
name = input("Enter student name")
print("")
print("enter marks info below")
print("")
eng = input("enter English mark : ")
maths = input("enter Maths mark : ")
physics = input("enter Physics mark : ")
chemistry = input("enter Chemistry mark : ")
ip = input("enter Informatic Practices mark : ")
dict = {
"name": {name: name},
"english": {name: eng},
"maths": {name: maths},
"physics": {name: physics},
"chemistry": {name: chemistry},
"ip": {name: ip}
}
df = df.append(pd.DataFrame(dict))
df.to_csv("hello.csv", sep="|", index=False, na_rep="null", mode='w')
print("hello.csv")
read = pd.read_csv("hello.csv", sep='|')
print(read)
</code></pre>
<p>You can also use the <a href="https://stackoverflow.com/questions/19781609/how-do-you-remove-the-column-name-row-when-exporting-a-pandas-dataframe">code</a> below to export the DataFrame without the column names, but you may still have to check for file existence or the column order first.</p>
<p><code>df.to_csv('filename.csv', header=False, sep='|', mode='a')</code></p>
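<p>An alternative sketch that keeps the append-mode write but only emits the header when the file does not exist yet:</p>
<pre><code>import os

write_header = not os.path.exists("hello.csv")
df.to_csv("hello.csv", sep="|", index=False, na_rep="null", mode="a", header=write_header)
</code></pre>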
|
python|pandas|csv|export-to-csv
| 0
|
6,690
| 66,535,465
|
Check for two dataframes (pivot tables) similarity
|
<p>I am struggling to check the percentage of similarity between two pandas pivot tables (which are filled with values 1 and NaN) that have the same row and column indices. I want to count the number of identical rows and divide that by the total number of rows.
Here is a basic example:</p>
<pre><code>df1
column1 column2 column3
idx1 Nan 1 Nan
idx2 1 Nan 1
idx3 Nan Nan 1
df2
column1 column2 column3
idx1 1 Nan 1
idx2 1 Nan 1
idx3 Nan 1 1
</code></pre>
<p>In this basic example, only the row with idx2 has the same values in both data frames, so the output would be 1/3 ~ 33%.</p>
<p>I tried an inner join (to check the overlap) but I got the error 'key error 0'.
Another attempt was with c = a[!a.isin(b)], but I got some weird values.</p>
|
<p>Because both DataFrames have the same index and column values, you can first replace the missing values with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>DataFrame.fillna</code></a>, compare with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>DataFrame.eq</code></a>, test whether each row is all <code>True</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a>, and finally use <code>mean</code> for the percentage (<code>True/False</code> values are processed like <code>1/0</code>):</p>
<pre><code>out = df1.fillna('missing').eq(df2.fillna('missing')).all(axis=1).mean() * 100
print (out)
33.33333333333333
</code></pre>
|
python|pandas
| 3
|
6,691
| 66,532,381
|
Pandas creating a column which counts the length of a previous column entries without getting a Set Copy Warning
|
<p>When I look at the answer to a similiar question as shown in this link: <a href="https://stackoverflow.com/questions/42815768/pandas-adding-column-with-the-length-of-other-column-as-value">Pandas: adding column with the length of other column as value</a></p>
<p>I come across an issue where the solution it suggests, i.e.</p>
<pre><code>df['name_length'] = df['seller_name'].str.len()
</code></pre>
<p>Throws the following warning</p>
<pre><code>'''
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
'''
</code></pre>
<p><strong>My question is:</strong> How could this be done in order to prevent this warning from occurring? With this command I would like to add a new column to the original dataframe, not create some sort of copy of a slice.</p>
|
<p>I took a sample data set to test this issue in Python 3.8</p>
<p>Sample data
<a href="https://i.stack.imgur.com/8tDm0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8tDm0.png" alt="enter image description here" /></a></p>
<p>Here is the same code which you ran: <code>df['name_length'] = df['seller_name'].str.len()</code></p>
<p>There was no error:
<a href="https://i.stack.imgur.com/tJI6X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tJI6X.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe
| 0
|
6,692
| 57,591,352
|
Pandas - Filling missing dates within groups with different time ranges
|
<p>I'm working with a dataset which has monthly information about several users, and each user has a different time range. There is also missing "time" data for each user. What I would like to do is fill in the missing month data for each user based on that user's time range (from min time to max time, in months).</p>
<p>I've read approaches to similar situations using resample and reindex, but I'm not getting the desired output / there is a row mismatch after filling in the missing months. </p>
<p>Any help/pointers would be much appreciated.</p>
<p>-Luc</p>
<p>I tried using resample and reindex, but I am not getting the desired output.</p>
<pre><code>x = pd.DataFrame({'user': ['a','a','b','b','c','a','a','b','a','c','c','b'], 'dt': ['2015-01-01','2015-02-01', '2016-01-01','2016-02-01','2017-01-01','2015-05-01','2015-07-01','2016-05-01','2015-08-01','2017-03-01','2017-08-01','2016-09-01'], 'val': [1,33,2,1,5,4,2,5,66,7,5,1]})
</code></pre>
<pre><code> date id value
0 2015-01-01 a 1
1 2015-02-01 a 33
2 2016-01-01 b 2
3 2016-02-01 b 1
4 2017-01-01 c 5
5 2015-05-01 a 4
6 2015-07-01 a 2
7 2016-05-01 b 5
8 2015-08-01 a 66
9 2017-03-01 c 7
10 2017-08-01 c 5
11 2016-09-01 b 1
</code></pre>
<p>What I would like to see is: for each user, generate the missing months based on the min and max date for that user, and fill 'val' for those months with 0.</p>
|
<p>Create a <code>DatetimeIndex</code>, then use <code>groupby</code> with a custom lambda function and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.asfreq.html" rel="nofollow noreferrer"><code>Series.asfreq</code></a>:</p>
<pre><code>x['dt'] = pd.to_datetime(x['dt'])
x = (x.set_index('dt')
.groupby('user')['val']
.apply(lambda x: x.asfreq('MS', fill_value=0))
.reset_index())
print (x)
user dt val
0 a 2015-01-01 1
1 a 2015-02-01 33
2 a 2015-03-01 0
3 a 2015-04-01 0
4 a 2015-05-01 4
5 a 2015-06-01 0
6 a 2015-07-01 2
7 a 2015-08-01 66
8 b 2016-01-01 2
9 b 2016-02-01 1
10 b 2016-03-01 0
11 b 2016-04-01 0
12 b 2016-05-01 5
13 b 2016-06-01 0
14 b 2016-07-01 0
15 b 2016-08-01 0
16 b 2016-09-01 1
17 c 2017-01-01 5
18 c 2017-02-01 0
19 c 2017-03-01 7
20 c 2017-04-01 0
21 c 2017-05-01 0
22 c 2017-06-01 0
23 c 2017-07-01 0
24 c 2017-08-01 5
</code></pre>
<p>Or use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> with the min and max datetimes per group:</p>
<pre><code>x = (x.set_index('dt')
.groupby('user')['val']
.apply(lambda x: x.reindex(pd.date_range(x.index.min(),
x.index.max(), freq='MS'), fill_value=0))
.rename_axis(('user','dt'))
.reset_index())
</code></pre>
|
python-3.x|pandas|pandas-groupby
| 1
|
6,693
| 57,310,987
|
Count values based on a condition on another column
|
<p>I am trying to create a dataset where, for each job department, I count the total number of people in that department and the number of people who left (or did not leave) the company.</p>
<pre><code> Name Total Non left Left
Finance 3000 2500 500
IT 1500 1000 500
Marketing 1000 750 250
...
</code></pre>
<p>My initial dataset lists, row by row, each person in the company. My initial data set is:</p>
<pre><code>ID Department Left
1 Finance 0
2 Finance 1
3 Marketing 0
4 Marketing 0
5 IT 1
...
</code></pre>
<p>I managed to get the total amount of people per department:</p>
<pre><code>df["department"].value_counts()
</code></pre>
<p>Now I need something that does:</p>
<pre><code>df["department"].value_counts(#If element in Left column is 1)
df["department"].value_counts(#If element in Left column is 0)
</code></pre>
<p>However I am not sure how to start it.</p>
|
<p>You may use <code>crosstab</code>:</p>
<pre><code>pd.crosstab(df.Left, df.department, margins=True)
</code></pre>
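<p>If you prefer departments as the rows, as in the desired layout, a small sketch assuming the column is named <code>Department</code> as in the sample data:</p>
<pre><code>out = pd.crosstab(df.Department, df.Left, margins=True)
out = out.rename(columns={0: 'Non left', 1: 'Left', 'All': 'Total'})
</code></pre>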
|
pandas
| 1
|
6,694
| 57,444,768
|
How to compute every sum for every argument a from an array of numbers A
|
<p>I'd like to compute the following sums for each value of a in A: </p>
<pre><code>D = np.array([1, 2, 3, 4])
A = np.array([0.5, 0.25, -0.5])
beta = 0.5
np.sum(np.square(beta) - np.square(D-a))
</code></pre>
<p>and the result is an array of all the sums. To compute it by hand, it would look something like this: </p>
<pre><code> [np.sum(np.square(beta)-np.square(D-0.5)),
np.sum(np.square(beta)-np.square(D-0.25)),
np.sum(np.square(beta)-np.square(D-(-0.5)))]
</code></pre>
|
<p>Use <code>np.sum</code> with broadcasting</p>
<pre><code>np.sum(np.square(beta) - np.square(D[None,:] - A[:,None]), axis=1)
Out[98]: array([-20. , -24.25, -40. ])
</code></pre>
<hr>
<p><strong>Explanation</strong>: We need to subtract each element of array <code>A</code> from the whole array <code>D</code>. We can't simply call <code>D - A</code> because their shapes <code>(4,)</code> and <code>(3,)</code> are not compatible for element-wise subtraction. Therefore, we rely on NumPy broadcasting: we add an additional dimension to <code>D</code> and <code>A</code> so that they satisfy the broadcasting rules. After that, we just do the calculation and sum along axis=1.</p>
<p><strong>Step by step</strong>:<br>
Increase the dimension of <code>D</code> from 1D to 2D at axis=0:</p>
<pre><code>In [10]: D[None,:]
Out[10]: array([[1, 2, 3, 4]])
In [11]: D.shape
Out[11]: (4,)
In [12]: D[None,:].shape
Out[12]: (1, 4)
</code></pre>
<p>Doing the same for <code>A</code>, but at axis=1</p>
<pre><code>In [13]: A[:,None]
Out[13]:
array([[ 0.5 ],
[ 0.25],
[-0.5 ]])
In [14]: A.shape
Out[14]: (3,)
In [15]: A[:,None].shape
Out[15]: (3, 1)
</code></pre>
<p>On subtraction, numpy broadcasting kicks in to broadcast each array to compatible dimension and does subtraction to create 2D array result</p>
<pre><code>In [16]: D[None,:] - A[:,None]
Out[16]:
array([[0.5 , 1.5 , 2.5 , 3.5 ],
[0.75, 1.75, 2.75, 3.75],
[1.5 , 2.5 , 3.5 , 4.5 ]])
</code></pre>
<p>Next, it is just element-wise squaring and subtraction.</p>
<pre><code>np.square(beta) - np.square(D[None,:] - A[:,None])
Out[17]:
array([[ 0. , -2. , -6. , -12. ],
[ -0.3125, -2.8125, -7.3125, -13.8125],
[ -2. , -6. , -12. , -20. ]])
</code></pre>
<p>Lastly, sum along axis=1 to get the final output:</p>
<pre><code>np.sum(np.square(beta) - np.square(D[None,:] - A[:,None]), axis=1)
Out[18]: array([-20. , -24.25, -40. ])
</code></pre>
<p>You may read docs on numpy broadcasting here to get more info <a href="https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html</a></p>
|
numpy
| 1
|
6,695
| 57,512,593
|
How to create a tensor with an unknown dimension
|
<p>I have a layer in my neural network with an output vector <code>x</code> of size <code>[?, N]</code> (with the first dimension being the batch size). I want to declare a tensor of <code>ones</code> of the same size in the next layer (a Lambda layer). I see that I cannot use <code>y = keras.backend.ones(x.shape)</code> as the batch size is only available at runtime. How can I create this tensor?</p>
|
<p>As suggested by today in the comments, <code>K.ones_like</code> works:</p>
<pre><code>from keras import backend as K
a = K.placeholder(shape=(None, 5))
b = K.ones_like(a)
print(b.shape)
>> TensorShape([Dimension(None), Dimension(5)])
</code></pre>
<p>Depending on the type of operation you're doing, you can also make a ones tensor of shape [N] and rely on broadcasting to save memory:</p>
<pre><code>from keras import backend as K
a = K.placeholder(shape=(None, 5))
b = K.ones(a.shape[-1])
print(a + b)
>> <tf.Tensor 'add:0' shape=(?, 5) dtype=float32>
</code></pre>
|
python|tensorflow|keras|tensor
| 1
|
6,696
| 43,888,010
|
python pandas delete row on string condition
|
<p>I have a data frame with columns of strings and integers.
In one of the string columns I want to search all the items for a specific substring, let's say "abc", and delete the row if the substring exists. How do I do that? It sounds easy, but somehow I struggle with this.
The substring is always the last three characters.
I tried the following:</p>
<pre><code>df1 = df.drop(df[df.Hostname[-4:]== "abc"])
</code></pre>
<p>which gives me</p>
<blockquote>
<p>UserWarning: Boolean Series key will be reindexed to match DataFrame
index</p>
</blockquote>
<p>so I tried to modify the values in that column and filter out all values that do not have "abc" at the end:</p>
<pre><code>red = [c for c in df.Hostname[-4:] if c != 'abc']
</code></pre>
<p>which gives me</p>
<blockquote>
<p>KeyError('%s not in index' % objarr[mask])</p>
</blockquote>
<p>What am I doing wrong?</p>
<p>Thanks for your help!</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> together with <a href="http://pandas.pydata.org/pandas-docs/stable/text.html#indexing-with-str" rel="nofollow noreferrer"><code>indexing with str</code></a> if you need to check the last <code>4</code> (or <code>3</code>) chars of the <code>Hostname</code> column, and change the condition from <code>==</code> to <code>!=</code>:</p>
<pre><code>df1 = df[df.Hostname.str[-4:] != "abc"]
</code></pre>
<p>Or maybe:</p>
<pre><code>df1 = df[df.Hostname.str[-3:] != "abc"]
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame({'Hostname':['k abc','abc','dd'],
'b':[1,2,3],
'c':[4,5,6]})
print (df)
Hostname b c
0 k abc 1 4
1 abc 2 5
2 dd 3 6
df1 = df[df.Hostname.str[-3:] != "abc"]
print (df1)
Hostname b c
2 dd 3 6
</code></pre>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.endswith.html" rel="nofollow noreferrer"><code>str.endswith</code></a> also works if you need to check the last chars:</p>
<pre><code>df1 = df[~df.Hostname.str.endswith("abc")]
print (df1)
Hostname b c
2 dd 3 6
</code></pre>
<p>EDIT:</p>
<p>If you need to check whether <code>abc</code> occurs in the last 4 chars and then remove those rows, first slice the values and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a>:</p>
<pre><code>df1 = df[~df.Hostname.str[-4:].str.contains('abc')]
print (df1)
Hostname b c
2 dd 3 6
</code></pre>
<p>EDIT1:</p>
<p>For a default index add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> - Python counts from <code>0</code>, so the index values are <code>0,1,2,...</code>:</p>
<pre><code>df1 = df[df.Hostname.str[-3:] != "abc"].reset_index(drop=True)
</code></pre>
|
python|pandas
| 2
|
6,697
| 43,869,394
|
Python column and row interaction in dataframe
|
<p>Let's imagine I have a dataframe :</p>
<pre><code>question user level
1 a 1
1 b 2
1 a 3
2 a 1
2 b 2
2 a 3
2 b 4
3 c 1
3 b 2
3 c 3
3 a 4
3 b 5
</code></pre>
<p>Column level specifies who started the topic and who replied to it. If user's level is 1, it means that he asked the question. If user's level is 2, it means that he replied to the user who asked the question. If user's level is 3, it means that he replied to the user whose level is 2 and so on.</p>
<p>I would like to extract a new dataframe that should present a communication between users through question. It should contain three columns: "user source", "user destination" and "reply count". Reply count is a number of times in which User Destination "directly" replied to User Source.</p>
<pre><code> us_source us_dest reply_count
a b 2
a c 0
b a 0
b c 0
c a 0
c b 1
</code></pre>
<p>I tried to find the first two columns using this code:</p>
<pre><code>idx_cols = ['question']
std_cols = ['user_x', 'user_y']
df1 = df.merge(df, on=idx_cols)
df2 = df1.loc[df1.user_x != df1.user_y, idx_cols + std_cols]
df2.loc[:, std_cols] = np.sort(df2.loc[:, std_cols])
</code></pre>
<p>Does anyone have some suggestions for the third column?
Consider a reply from B to A "direct" if and only if B replied at level k to a message from A at level k-1 in the same topic. For example, if a topic is started by student A (a message at level 1) and B replies (a message at level 2), then B directly replied to A. Only replies from level 2 to level 1 are counted.</p>
|
<p>My suggestion:</p>
<p>I would use a dictionary containing 'source-destination' as keys and reply_counts as values.</p>
<p>Loop over the dataframe: for each question, store whoever posted the 1st message as the destination and whoever posted the 2nd message as the source, then increment a counter in the dictionary at the key 'source-destination'.
e.g. (not being very familiar with pandas, I'll let you adapt the formatting):</p>
<pre><code>from itertools import permutations

reply_counts = {}     # the dictionary where results are going to be stored
users = set()
destination = False   # a simple boolean to make sure message 2 follows message 1

for row in dataframe.itertuples(index=False):  # iterate over the rows (question, user, level)
    users.add(row[1])                  # collect user names
    if row[2] == 1:                    # if it is an initial message
        destination = row[1]           # store the user as destination
    elif row[2] == 2 and destination:  # if this is a second message
        source = row[1]                # store the user as source
        key = source + "-" + destination  # construct a key based on source/destination
        if key not in reply_counts:    # if the key is new to the dictionary
            reply_counts[key] = 1      # create the new entry
        else:                          # otherwise
            reply_counts[key] += 1     # add a counter to the existing entry
        destination = False            # reset destination
    else:
        destination = False            # reset destination

# add the pairs of source-destination who didn't interact to the dictionary
for pair in permutations(users, 2):
    if "-".join(pair) not in reply_counts:
        reply_counts["-".join(pair)] = 0
</code></pre>
<p>then you can convert your dictionary back into a dataframe.</p>
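<p>A minimal sketch of that final conversion, assuming the keys keep the <code>source-destination</code> format built above (and that user names contain no <code>-</code>):</p>
<pre><code>import pandas as pd

rows = [key.split("-") + [count] for key, count in reply_counts.items()]
result = (pd.DataFrame(rows, columns=["us_source", "us_dest", "reply_count"])
            .sort_values(["us_source", "us_dest"])
            .reset_index(drop=True))
</code></pre>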
|
python|pandas
| 2
|
6,698
| 72,867,751
|
Invalid MultiPolygon value even though all the Polygons are valid
|
<p>I am trying to convert coordinates to WKT format. Here I have a list of polygons which should be identified as a MultiPolygon.</p>
<pre><code> 'geometry': [[[129093.87770000007, 6638201.563100001],
[129145.82270000037, 6638246.0934],
[129170.66339999996, 6638267.387800001],
[129234.25879999995, 6638194.9081],
[129263.13030000031, 6638162.0021],
[129256.51460000034, 6638153.420600001],
[129264.29519999959, 6638144.4186],
[129227.29739999957, 6638111.6921],
[129198.93099999987, 6638086.598099999],
[129197.43379999977, 6638085.2764],
[129189.4422000004, 6638094.0996],
[129176.43039999995, 6638108.3499],
[129159.07469999976, 6638127.3638],
[129151.15539999958, 6638136.5965],
[129133.93259999994, 6638156.149599999],
[129159.67819999997, 6638179.257999999],
[129154.22589999996, 6638185.2235],
[129151.20010000002, 6638188.5338],
[129146.49629999977, 6638193.680299999],
[129143.46549999993, 6638196.9956],
[129136.80140000023, 6638191.0813],
[129117.92229999974, 6638174.326099999],
[129101.88219999988, 6638192.5428],
[129100.7592000002, 6638193.8189],
[129093.87770000007, 6638201.563100001]],
[[128941.56969999988, 6638372.659700001],
[128943.42590000015, 6638374.3345],
[128960.41220000014, 6638389.665999999],
[128973.69660000037, 6638401.5129],
[128996.57529999968, 6638375.679],
[128987.46860000025, 6638367.0911],
[128983.83999999985, 6638363.6691],
[128972.29009999987, 6638376.4816],
[128953.31819999963, 6638359.575999999],
[128943.23959999997, 6638370.7996],
[128941.56969999988, 6638372.659700001]],
[[129090.9358000001, 6638503.8179],
[129097.49689999968, 6638510.4618],
[129097.59499999974, 6638513.236199999],
[129134.7444000002, 6638546.606799999],
[129147.49670000002, 6638558.0616],
[129149.08770000003, 6638558.060900001],
[129150.47169999965, 6638558.0603],
[129176.9729000004, 6638534.216700001],
[129176.96339999977, 6638531.551899999],
[129160.03479999956, 6638516.535],
[129124.70949999988, 6638485.1993],
[129121.82320000045, 6638485.3100000005],
[129115.54370000027, 6638479.5166],
[129090.9358000001, 6638503.8179]],
[[129158.46559999976, 6638281.2897],
[129181.3964999998, 6638301.693600001],
[129202.42819999997, 6638320.4079],
[129222.96760000009, 6638338.684900001],
[129228.84059999976, 6638332.3202],
[129164.44720000029, 6638274.4724],
[129158.46559999976, 6638281.2897]],
[[129129.06099999975, 6638386.475199999],
[129155.41009999998, 6638410.2303],
[129169.55939999968, 6638422.9824],
[129172.49909999967, 6638422.9364],
[129180.67559999973, 6638429.6515],
[129180.72410000023, 6638433.040999999],
[129214.33800000045, 6638463.1961],
[129235.7324000001, 6638482.398399999],
[129238.75250000041, 6638482.4419],
[129272.70870000031, 6638453.891899999],
[129276.91830000002, 6638450.3521],
[129311.60599999968, 6638421.1872000005],
[129313.13900000043, 6638419.8989],
[129312.24859999958, 6638419.095799999],
[129319.1475999998, 6638413.4484],
[129256.64639999997, 6638357.3002],
[129250.38129999954, 6638363.2841],
[129248.93879999965, 6638364.661900001],
[129224.56620000023, 6638387.8639],
[129203.5848000003, 6638407.837400001],
[129202.72339999955, 6638408.6601],
[129195.30989999976, 6638401.2228999995],
[129179.70480000041, 6638385.5688000005],
[129178.5390999997, 6638384.5253],
[129160.28000000026, 6638368.1357],
[129151.56759999972, 6638360.3236],
[129129.06099999975, 6638386.475199999]]],
'type': 'MultiPolygon'}
</code></pre>
<p>The following code results in an error:</p>
<pre><code>from shapely.geometry import Point, Polygon, MultiPolygon
MultiPolygon(jk['geometry'])
</code></pre>
<p>Resulting Error</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File /usr/local/lib/python3.10/dist-packages/shapely/geometry/multipolygon.py:189, in geos_multipolygon_from_polygons(arg)
188 try:
--> 189 N = len(exemplar[0][0])
190 except TypeError:
TypeError: object of type 'float' has no len()
During handling of the above exception, another exception occurred:
AttributeError Traceback (most recent call last)
Input In [68], in <cell line: 1>()
----> 1 MultiPolygon(jk['geometry'])
File /usr/local/lib/python3.10/dist-packages/shapely/geometry/multipolygon.py:60, in MultiPolygon.__init__(self, polygons, context_type)
58 return
59 elif context_type == 'polygons':
---> 60 geom, n = geos_multipolygon_from_polygons(polygons)
61 elif context_type == 'geojson':
62 geom, n = geos_multipolygon_from_py(polygons)
File /usr/local/lib/python3.10/dist-packages/shapely/geometry/multipolygon.py:191, in geos_multipolygon_from_polygons(arg)
189 N = len(exemplar[0][0])
190 except TypeError:
--> 191 N = exemplar._ndim
193 assert N == 2 or N == 3
195 subs = (c_void_p * L)()
AttributeError: 'list' object has no attribute '_ndim'
</code></pre>
<p>But the inner polygons seem to be fine, and this code works perfectly:</p>
<pre><code>import shapely
for item in jk['geometry']:
print(shapely.wkt.dumps(Polygon(item)))
</code></pre>
<p>Why is this happening? Why am I not able to convert the MultiPolygon directly to WKT?</p>
|
<p><code>MultiPolygon</code> takes a sequence of (exterior ring, holes) tuples, <em><strong>or</strong></em> a sequence of <code>Polygon</code> objects. You can either do:</p>
<pre><code>MultiPolygon((x, None) for x in jk['geometry'])
</code></pre>
<p>or</p>
<pre><code>MultiPolygon(Polygon(x) for x in jk['geometry'])
</code></pre>
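<p>Once you have a valid <code>MultiPolygon</code>, dumping it to WKT works the same way as for the individual polygons in the question:</p>
<pre><code>import shapely.wkt
from shapely.geometry import MultiPolygon, Polygon

mp = MultiPolygon(Polygon(x) for x in jk['geometry'])
print(shapely.wkt.dumps(mp))
</code></pre>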
|
python|geojson|geopandas|shapely
| 2
|
6,699
| 72,843,042
|
How can we deploy a Machine Learning Model using Flask?
|
<p>I am trying, for the first time ever, to deploy a ML model, using Flask. I'm following the instructions from the link below.</p>
<p><a href="https://towardsdatascience.com/deploy-a-machine-learning-model-using-flask-da580f84e60c" rel="nofollow noreferrer">https://towardsdatascience.com/deploy-a-machine-learning-model-using-flask-da580f84e60c</a></p>
<p>I created three separate and distinct .py files named 'model.py', 'server.py', and 'request.py'. I opened my Anaconda Prompt and entered this: 'C:\Users\ryans>C:\Users\ryans\model.py'</p>
<p>Now, I get this.</p>
<p><a href="https://i.stack.imgur.com/iuRf4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iuRf4.png" alt="enter image description here" /></a></p>
<p>I definitely have NumPy installed! Something must be wrong with my setup, or maybe the way I am starting the process is wrong, but I'm not sure what the issue is. Has anyone encountered this problem before?</p>
|
<p>I would suggest the following steps, since you mentioned it's your first time. When deploying a project for experimenting, it's good practice to put it in a virtual environment, which we can do with the <a href="https://virtualenv.pypa.io/en/latest/" rel="nofollow noreferrer"><code>virtualenv</code></a> tool.</p>
<ol>
<li><p>Assuming you already have <code>pip</code> installed, install virtualenv:</p>
<p><code>C:\> pip install virtualenv</code></p>
</li>
<li><p>Create a fresh working directory, switch to it and <a href="https://github.com/vyashemang/flask-salary-predictor" rel="nofollow noreferrer">clone the repository from Github</a> mentioned by the author of the article:</p>
<p><code>C:\your-working-directory\> git clone https://github.com/vyashemang/flask-salary-predictor.git</code></p>
</li>
<li><p>Start virtualenv in the repository's top directory</p>
<p><code>C:\your-working-directory\flask-salary-predictor\> virtualenv env</code></p>
</li>
<li><p>Activate the environment by running the created batch file, <code>env\Scripts\activate.bat</code>:</p>
<p><code>C:\your-working-directory\flask-salary-predictor\> env\Scripts\activate.bat</code></p>
</li>
<li><p>Now in the virtual environment, install all requirements:</p>
<p><code>(env) C:\your-working-directory\flask-salary-predictor\> pip install -r requirements.txt</code></p>
</li>
<li><p>You can now run <code>model.py</code> like you have above. I'd suggest that you run <code>server.py</code> using gunicorn or <a href="https://stackoverflow.com/questions/11087682/does-gunicorn-run-on-windows">on Windows use waitress according to this post</a> (see the sketch after this list), so that instead of running <code>request.py</code> every time you want to send a request, you can use <a href="https://insomnia.rest/" rel="nofollow noreferrer">Insomnia</a> or <a href="https://www.postman.com/" rel="nofollow noreferrer">Postman</a> to send API requests.</p>
</li>
</ol>
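<p>A minimal waitress invocation, assuming the Flask application object in <code>server.py</code> is called <code>app</code> (check the repository if it uses a different name):</p>
<pre><code>pip install waitress
waitress-serve --listen=127.0.0.1:5000 server:app
</code></pre>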
|
python|numpy|anaconda|conda
| 0
|