| Unnamed: 0 | id | title | question | answer | tags | score |
|---|---|---|---|---|---|---|
1,700
| 63,060,623
|
Can you create a data frame of a dictionary?
|
<p>I am trying to understand a block of code. Is it possible to have a data frame of a dictionary?</p>
<pre><code>def plot_dists(num_samples, mu=0, sigma=1):
    norm_samples = numpy.random.normal(
        loc=mu, scale=sigma, size=num_samples)
    poisson_samples = numpy.random.poisson(
        lam=sigma**2, size=num_samples)
    dists = pandas.DataFrame({
        'norm': norm_samples,
        'poisson': poisson_samples,
    })
    min_x = dists.min().min()
    max_x = dists.max().max()
    bw = (max_x - min_x) / 60
    pyplot.hist(dists.norm, width=bw, bins=60,
                label='N(%.1f, %.1f)' % (mu, sigma), alpha=.5, normed=True)
    pyplot.hist(dists.poisson, width=bw, bins=60,
                label='Poisson(%.1f)' % sigma, alpha=.5, normed=True)
    pyplot.legend()

plot_dists(100000)
</code></pre>
<p>The following block is throwing me off:</p>
<pre><code>    dists = pandas.DataFrame({
        'norm': norm_samples,
        'poisson': poisson_samples,
    })
</code></pre>
<p>Is this a data frame of a dictionary? Everything I am reading online is telling me how to convert a dictionary to a data frame or a data frame to a dictionary. I am not sure if this is a data frame with a dictionary in it or how that works. If any of you can help me understand the code a little better, it would be much appreciated. Thanks in advance.</p>
|
<p>A pandas DataFrame is a way to represent tabular data. If you read the documentation, the first parameter of the class constructor (<code>data</code>) accepts an ndarray, Iterable, dict, or DataFrame.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html</a></p>
<p>So to create a data frame you can pass a dictionary as the parameter. For this example it will look like this
(first row only):</p>
<pre><code>| | norm | poisson |
|---|------|----------|
| 0 |0.455 | 2 |
</code></pre>
<p>Notice that the dictionary keys (<code>norm</code> and <code>poisson</code>) become the column names.</p>
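<p>For instance, a minimal sketch (with made-up values standing in for the sampled arrays):</p>
<pre class="lang-py prettyprint-override"><code>import pandas

# hypothetical small dictionary of equal-length columns
dist = {'norm': [0.455, -0.120], 'poisson': [2, 1]}
df = pandas.DataFrame(dist)
print(df)
#     norm  poisson
# 0  0.455        2
# 1 -0.120        1
</code></pre>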
<p>I reproduced your code using Google Colab:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as pyplot
import numpy
import pandas

def plot_dists(num_samples, mu=0, sigma=1):
    norm_samples = numpy.random.normal(
        loc=mu, scale=sigma, size=num_samples)
    poisson_samples = numpy.random.poisson(
        lam=sigma**2, size=num_samples)
    dist = {'norm': norm_samples,
            'poisson': poisson_samples}
    dists = pandas.DataFrame(dist)
    min_x = dists.min().min()
    max_x = dists.max().max()
    bw = (max_x - min_x) / 60
    # normed is deprecated; use density instead
    pyplot.hist(dists.norm, width=bw, bins=60,
                label='N(%.1f, %.1f)' % (mu, sigma), alpha=.5, density=True)
    pyplot.hist(dists.poisson, width=bw, bins=60,
                label='Poisson(%.1f)' % sigma, alpha=.5, density=True)
    pyplot.legend()
    # return the dataframe for debugging and visualization
    return dists

dists = plot_dists(100000)
dists.tail()
</code></pre>
|
python|pandas|dictionary
| 0
|
1,701
| 62,968,978
|
How to add new column data by individual rows in Pandas DataFrame
|
<p>I have a dataframe <code>df</code>. I want to add 2 new columns <code>0</code> and <code>1</code> and fill them one row at a time, not the complete column at once. Using <code>pd.Series</code> for all the rows in <code>df</code>, I get <code>NaN</code> values in the new columns for every row other than the last. Please provide a way to fix this.</p>
<p>I need to add the data one row at a time, so please provide a solution accordingly.</p>
<p><strong>df</strong></p>
<pre><code>val
1
2
3
</code></pre>
<p><strong>code</strong></p>
<pre><code>for j in range(len(df)):
    for i in range(2):
        cal = df.val.iloc[j] + 10
        df[i] = pd.Series(cal, index=df.index[[j]])
</code></pre>
<p><strong>output</strong></p>
<pre><code>val | 0 | 1
1 | NaN | NaN
2 | NaN | NaN
3 | 13.0 | 13.0
</code></pre>
<p><strong>expected output</strong></p>
<pre><code> val | 0 | 1
1 | 11.0 | 11.0
2 | 12.0 | 12.0
3 | 13.0 | 13.0
</code></pre>
<p><strong>EDIT</strong>:
I had actually asked a question on Stack Overflow whose answer I could not get. That is why I tried to condense the question and present it this way. If possible, you can check the original question <a href="https://stackoverflow.com/questions/62958702/how-to-get-count-of-values-based-on-datetime-in-python">here</a>.</p>
|
<p>It is not clear why you are trying to fill values one row at a time with inefficient methods, hence I suggest not using this code but relying on vectorized solutions (see the sketch after the loop below).</p>
<p>However, if you really want to do it for some reason, you should modify your loop like this:</p>
<pre><code>for j in range(len(df)):
    for i in range(2):
        cal = df.val.iloc[j] + 10
        df.loc[j, i] = cal

#    val     0     1
# 0    1  11.0  11.0
# 1    2  12.0  12.0
# 2    3  13.0  13.0
</code></pre>
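<p>For comparison, a minimal vectorized sketch of the same computation (assuming the <code>df</code> shown above; no explicit loop over rows):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'val': [1, 2, 3]})
# compute the whole column at once, then assign it to both new columns
new_col = df['val'] + 10.0
df[0] = new_col
df[1] = new_col
#    val     0     1
# 0    1  11.0  11.0
# 1    2  12.0  12.0
# 2    3  13.0  13.0
</code></pre>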
|
python|pandas|loops|dataframe|nan
| 1
|
1,702
| 63,252,135
|
Pandas dataframes too large to append to dask dataframe?
|
<p>I'm not sure what I'm missing here, I thought dask would resolve my memory issues. I have 100+ pandas dataframes saved in .pickle format. I would like to get them all in the same dataframe but keep running into memory issues. I've already increased the memory buffer in jupyter. It seems I may be missing something in creating the dask dataframe as it appears to crash my notebook after completely filling my RAM (maybe). Any pointers?</p>
<p>Below is the basic process I used:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask.dataframe as dd

ddf = dd.from_pandas(pd.read_pickle('first.pickle'), npartitions=8)
for pickle_file in all_pickle_files:
    ddf = ddf.append(pd.read_pickle(pickle_file))
ddf.to_parquet('alldata.parquet', engine='pyarrow')
</code></pre>
<ul>
<li>I've tried a variety of <code>npartitions</code> but no number has allowed the code to finish running.</li>
<li>All in all, there are about 30 GB of pickled dataframes I'd like to combine.</li>
<li>Perhaps this is not the right library, but the docs suggest dask should be able to handle this.</li>
</ul>
|
<p>Have you considered first converting the <code>pickle</code> files to <code>parquet</code> and then loading them with dask? I assume that all your data is in a folder called <code>data</code> and you want to move it to <code>processed</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import dask.dataframe as dd
import os

def convert_to_parquet(fn, fldr_in, fldr_out):
    fn_out = fn.replace(fldr_in, fldr_out)\
               .replace(".pickle", ".parquet")
    df = pd.read_pickle(fn)
    # eventually change dtypes
    df.to_parquet(fn_out, index=False)

fldr_in = 'data'
fldr_out = 'processed'
os.makedirs(fldr_out, exist_ok=True)

# you could use glob if you prefer
fns = os.listdir(fldr_in)
fns = [os.path.join(fldr_in, fn) for fn in fns]
</code></pre>
<p>If you know that no more than one file fits in memory, you should use a loop:</p>
<pre class="lang-py prettyprint-override"><code>for fn in fns:
convert_to_parquet(fn, fldr_in, fldr_out)
</code></pre>
<p>If you know that more files fit in memory, you can use <code>delayed</code>:</p>
<pre class="lang-py prettyprint-override"><code>from dask import delayed, compute

# this is lazy
out = [delayed(convert_to_parquet)(fn, fldr_in, fldr_out) for fn in fns]
# now you are actually converting
out = compute(out)
</code></pre>
<p>Now you can use dask to do your analysis.</p>
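<p>As a sketch (assuming the <code>processed</code> folder from above), the converted files can then be loaded lazily:</p>
<pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd

# all parquet files become one lazy dask dataframe
ddf = dd.read_parquet('processed/*.parquet')
print(ddf.npartitions)
</code></pre>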
|
python|pandas|dataframe|jupyter|dask
| 1
|
1,703
| 67,665,159
|
How do I groupby, count or sum and then plot two lines in Pandas?
|
<p>Say I have the following dataframes:</p>
<p><code>Earthquakes</code>:</p>
<pre><code> latitude longitude place year
0 36.087000 -106.168000 New Mexico 1973
1 33.917000 -90.775000 Mississippi 1973
2 37.160000 -104.594000 Colorado 1973
3 37.148000 -104.571000 Colorado 1973
4 36.500000 -100.693000 Oklahoma 1974
… … … … …
13941 36.373500 -96.818700 Oklahoma 2016
13942 36.412200 -96.882400 Oklahoma 2016
13943 37.277167 -98.072667 Kansas 2016
13944 36.939300 -97.896000 Oklahoma 2016
13945 36.940500 -97.906300 Oklahoma 2016
</code></pre>
<p>and <code>Wells</code>:</p>
<pre><code> LAT LONG BBLS Year
0 36.900324 -98.218260 300.0 1977
1 36.896636 -98.177720 1000.0 2002
2 36.806113 -98.325840 1000.0 1988
3 36.888589 -98.318530 1000.0 1985
4 36.892128 -98.194620 2400.0 2002
… … … … …
11117 36.263285 -99.557631 1000.0 2007
11118 36.263220 -99.548647 1000.0 2007
11119 36.520160 -99.334183 19999.0 2016
11120 36.276728 -99.298563 19999.0 2016
11121 36.436857 -99.137391 60000.0 2012
</code></pre>
<p>How do I make a line graph showing the sum of BBLS per year (from <code>Wells</code>) and the number of earthquakes that occurred each year (from <code>Earthquakes</code>), where the x-axis shows the years since 1980, the y1-axis shows the sum of BBLS per year, and the y2-axis shows the number of earthquakes?</p>
<p>I believe I need a groupby with a count (for earthquakes) and a sum (for BBLS) in order to make the plot, but I have tried so many variations and I just can't get it to work.</p>
<p>The only one that kinda worked was the line graph for earthquakes as follows:</p>
<pre><code>Earthquakes.pivot_table(index=['year'],columns='type',aggfunc='size').plot(kind='line')
</code></pre>
<p><a href="https://i.stack.imgur.com/NocVm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NocVm.png" alt="enter image description here" /></a></p>
<p>Still, for the line graph for BBLS nothing has worked</p>
<pre><code>Wells.pivot_table(index=['Year'],columns='BBLS',aggfunc='count').plot(kind='line')
</code></pre>
<p>This one either:</p>
<pre><code>plt.plot(Wells['Year'].values, Wells['BBL'].values, label='Barrels Produced')
plt.legend() # Plot legends (the two labels)
plt.xlabel('Year') # Set x-axis text
plt.ylabel('Earthquakes') # Set y-axis text
plt.show() # Display plot
</code></pre>
<p>This one from another <a href="https://stackoverflow.com/questions/64013061/plotting-multiple-lines-from-one-dataframe-and-adding-a-secondary-axis-to-plot-a">thread</a> either:</p>
<pre><code>fig, ax = plt.subplots(figsize=(10,8))
Earthquakes.plot(ax = ax, marker='v')
ax.title.set_text('Earthquakes and Injection Wells')
ax.set_ylabel('Earthquakes')
ax.set_xlabel('Year')
ax.set_xticks(Earthquakes['year'])
ax2=ax.twinx()
ax2.plot(Wells.Year, Wells.BBL, color='c',
linewidth=2.0, label='Number of Barrels', marker='o')
ax2.set_ylabel('Annual Number of Barrels')
lines_1, labels_1 = ax.get_legend_handles_labels()
lines_2, labels_2 = ax2.get_legend_handles_labels()
lines = lines_1 + lines_2
labels = labels_1 + labels_2
ax.legend(lines, labels, loc='upper center')
</code></pre>
|
<p>Input data:</p>
<pre><code>>>> df2 # Earthquakes
year
0 2007
1 1974
2 1979
3 1992
4 2006
.. ...
495 2002
496 2011
497 1971
498 1977
499 1985
[500 rows x 1 columns]
>>> df1 # Wells
BBLS year
0 16655 1997
1 7740 1998
2 37277 2000
3 20195 2014
4 11882 2018
.. ... ...
495 30832 1981
496 24770 2018
497 14949 1980
498 24743 1975
499 46933 2019
[500 rows x 2 columns]
</code></pre>
<p>Prepare data to plot:</p>
<pre><code>data1 = df2.value_counts("year").sort_index().rename("Earthquakes")
data2 = df1.groupby("year")["BBLS"].sum()
</code></pre>
<p>Simple plot:</p>
<pre><code>ax1 = data1.plot(legend=data1.name, color="blue")
ax2 = data2.plot(legend=data2.name, color="red", ax=ax1.twinx())
</code></pre>
<p>Now, you can do whatever with the 2 axes.</p>
<p><a href="https://i.stack.imgur.com/Gx4Bm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gx4Bm.png" alt="Earthquakes / BBLS plot" /></a></p>
<p><strong>A more controlled chart</strong></p>
<pre><code>import matplotlib.pyplot as plt

# Figure and axis
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# Data
line1, = ax1.plot(data1.index, data1.values, label="Earthquakes", color="b")
line2, = ax2.plot(data2.index, data2.values / 10**6, label="Barrels", color="r")
# Legend
lines = [line1, line2]
ax1.legend(lines, [line.get_label() for line in lines])
# Titles
ax1.set_title("")
ax1.set_xlabel("Year")
ax1.set_ylabel("Earthquakes")
ax2.set_ylabel("Barrels Produced (MMbbl)")
</code></pre>
|
python|pandas|matplotlib|counter|spyder
| 1
|
1,704
| 67,941,538
|
Drop duplicate values from hierarchical index
|
<p>I have a hierarchical index. I want to get the unique values for each index. How can I do that?</p>
<pre><code>Party Name
Bahujan Agiaon
Agiaon
Amarpur
Samajwadi Vaishali
Vaishali
Wazirganj
Wazirganj
</code></pre>
<p>How can I drop the duplicate values in Name?</p>
|
<p>You can factorize the index, then convert it to a series and keep only the rows that aren't duplicated:</p>
<pre><code>df[~pd.Series(df.index.factorize()[0]).duplicated().to_numpy()]
</code></pre>
<hr />
<pre><code>Party Name
Bahujan Agiaon
Amarpur
Samajwadi Vaishali
Wazirganj
</code></pre>
<p>Or for just a dataframe with unique indexes:</p>
<pre><code>pd.DataFrame(index=df.index.unique())
</code></pre>
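<p>A minimal sketch to reproduce this (the index values are taken from the example above):</p>
<pre><code>import pandas as pd

idx = pd.MultiIndex.from_tuples(
    [('Bahujan', 'Agiaon'), ('Bahujan', 'Agiaon'), ('Bahujan', 'Amarpur'),
     ('Samajwadi', 'Vaishali'), ('Samajwadi', 'Vaishali'),
     ('Samajwadi', 'Wazirganj'), ('Samajwadi', 'Wazirganj')],
    names=['Party', 'Name'])
df = pd.DataFrame(index=idx)

# keep only the first occurrence of each (Party, Name) pair
out = df[~pd.Series(df.index.factorize()[0]).duplicated().to_numpy()]
print(out.index.tolist())
# [('Bahujan', 'Agiaon'), ('Bahujan', 'Amarpur'),
#  ('Samajwadi', 'Vaishali'), ('Samajwadi', 'Wazirganj')]
</code></pre>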
|
python|pandas|dataframe|data-science
| 0
|
1,705
| 68,002,677
|
model.predict() output dimensions is not the same as y_train dimensions
|
<p>I am currently working on an LSTM model to predict the closing price of a stock based on other data. It is my first time working with RNNs. I am using tensorflow.</p>
<p>The issue arises when I try to predict prices over the X train data (which is what the model was trained on). I get different dimensions when compared to the y train data.</p>
<p>I am using 7 features with a timestep of 100 to predict the closing price.</p>
<p>These are the shapes of my input data:</p>
<pre><code>x_train = (3697, 100, 7)
y_train = (3697, 1)
x_test = (1584, 100, 7)
y_test = (1584, 1)
</code></pre>
<p>The input data shape seems correct to me. And I pass input shape of (100, 7) to the model</p>
<p>I then run:</p>
<pre><code>predicted_stock_price_train = model.predict(x_train)
predicted_stock_price_train.shape
</code></pre>
<p>and the output I get is (3697, 100, 1). I was expecting (3697, 1) which is y_train's dimensions.</p>
<p>As a result, while doing an inverse_transform, I get the error:</p>
<pre><code>ValueError: Found array with dim 3. Estimator expected <= 2.
</code></pre>
<p>since fit_transform was passed on y_train.</p>
<p>I don't understand what I'm doing wrong.</p>
<p>Edit:
Here's the model.summary() output</p>
<pre><code> Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
lstm_15 (LSTM) (None, 100, 50) 11600
_________________________________________________________________
dropout_6 (Dropout) (None, 100, 50) 0
_________________________________________________________________
lstm_16 (LSTM) (None, 100, 50) 20200
_________________________________________________________________
dropout_7 (Dropout) (None, 100, 50) 0
_________________________________________________________________
dense_6 (Dense) (None, 100, 1) 51
=================================================================
Total params: 31,851
Trainable params: 31,851
Non-trainable params: 0
_________________________________________________________________
</code></pre>
|
<p>I managed to figure out what was wrong. Turns out it was a pretty silly mistake.</p>
<p>This was the model that gave the error</p>
<pre><code>model = Sequential()
model.add(LSTM(50,return_sequences=True,input_shape=(100, 7)))
model.add(Dropout(0.7))
model.add(LSTM(50,return_sequences=True))
model.add(Dropout(0.7))
model.add(Dense(1))
</code></pre>
<p>What's wrong here is that I have <code>return_sequences=True</code> in the last layer before the dense one. So the LSTM passed on the whole sequence, and that made the output shape (None, 100, 1).</p>
<p>Here's the new model without the error</p>
<pre><code>from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
model.add(LSTM(50, return_sequences=True, input_shape=(100, 7)))
model.add(Dropout(0.7))
model.add(LSTM(50, return_sequences=False))
model.add(Dropout(0.7))
model.add(Dense(1))
</code></pre>
<p>This gives an output shape of (None, 1) and solves the problem.</p>
|
python|tensorflow|deep-learning|lstm
| 0
|
1,706
| 67,694,895
|
module 'tensorflow._api.v1.compat.v2' has no attribute '__internal__' google colab error
|
<p>I am running a tensorflow model on google colab. Today, I got this error:</p>
<pre><code> Using TensorFlow backend.
Traceback (most recent call last):
File "train.py", line 6, in <module>
from yolo import create_yolov3_model, dummy_loss
File "/content/drive/MyDrive/yolo/yolo_plz_work/yolo.py", line 1, in <module>
from keras.layers import Conv2D, Input, BatchNormalization, LeakyReLU, ZeroPadding2D, UpSampling2D, Lambda
File "/usr/local/lib/python3.7/dist-packages/keras/__init__.py", line 3, in <module>
from . import utils
File "/usr/local/lib/python3.7/dist-packages/keras/utils/__init__.py", line 26, in <module>
from .vis_utils import model_to_dot
File "/usr/local/lib/python3.7/dist-packages/keras/utils/vis_utils.py", line 7, in <module>
from ..models import Model
File "/usr/local/lib/python3.7/dist-packages/keras/models.py", line 10, in <module>
from .engine.input_layer import Input
File "/usr/local/lib/python3.7/dist-packages/keras/engine/__init__.py", line 3, in <module>
from .input_layer import Input
File "/usr/local/lib/python3.7/dist-packages/keras/engine/input_layer.py", line 7, in <module>
from .base_layer import Layer
File "/usr/local/lib/python3.7/dist-packages/keras/engine/base_layer.py", line 12, in <module>
from .. import initializers
File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 124, in <module>
populate_deserializable_objects()
File "/usr/local/lib/python3.7/dist-packages/keras/initializers/__init__.py", line 49, in populate_deserializable_objects
LOCAL.GENERATED_WITH_V2 = tf.__internal__.tf2.enabled()
File "/usr/local/lib/python3.7/dist-packages/tensorflow_core/python/util/module_wrapper.py", line 193, in __getattr__
attr = getattr(self._tfmw_wrapped_module, name)
AttributeError: module 'tensorflow._api.v1.compat.v2' has no attribute '__internal__'
</code></pre>
<p>Previously, things had been running smoothly, so I'm not sure why this happened.
I am using Python 3.7.10, and these are the packages I am supposed to use:</p>
<pre><code>absl-py==0.9.0
astor==0.8.1
gast==0.2.2
google-pasta==0.1.8
grpcio==1.26.0
h5py==2.10.0
Keras==2.3.1
Keras-Applications==1.0.8
Keras-Preprocessing==1.1.0
Markdown==3.1.1
numpy==1.18.1
opencv-contrib-python==4.1.2.30
opt-einsum==3.1.0
protobuf==3.11.2
PyYAML==5.3
scipy==1.4.1
six==1.14.0
tensorboard==1.15.0
tensorflow==1.15.0
tensorflow-estimator==1.15.1
termcolor==1.1.0
tqdm==4.41.1
Werkzeug==0.16.0
wrapt==1.11.2
</code></pre>
<p>Perhaps colab recently upgraded some libraries? I am sure that I followed the same installation steps as I usually do.</p>
<p>EDIT:
I think there may be an issue in the keras version.
Here are the first few lines of the file I am running:</p>
<pre><code>from keras.layers import Conv2D, Input, BatchNormalization, LeakyReLU, ZeroPadding2D, UpSampling2D, Lambda
from keras.layers.merge import add, concatenate
from keras.models import Model
from keras.engine.topology import Layer
import tensorflow as tf
</code></pre>
<p>If I remove all of the lines starting with "from keras", I don't get the error. However, I never touched these lines before, so I don't know why they would suddenly cause an error now. Also, it is not the python version causing this error, because colab changed it to 3.7.10 in April and I had no problem.</p>
|
<p>Try these versions; it works for me:</p>
<pre><code>!pip3 uninstall keras-nightly
!pip3 uninstall -y tensorflow
!pip3 install keras==2.1.6
!pip3 install tensorflow==1.15.0
!pip3 install h5py==2.10.0
</code></pre>
|
python|tensorflow|google-colaboratory
| 17
|
1,707
| 41,242,945
|
Python 3.x - Pandas apply is very slow
|
<p>I have created a recommender system. There are 2 dataframes – input_df and recommended_df</p>
<p>input_df – Dataframe of content already viewed by users. This df is used for generating the recommendations</p>
<pre><code>User_Name Viewed_Content_Name
User1 Content1
User1 Content2
User1 Content5
User2 Content1
User2 Content3
User2 Content5
User2 Content6
User2 Content8
</code></pre>
<p>Recommended_df – Dataframe of content recommended to users</p>
<pre><code>User_Name Recommended_Content_Name
User1 Content1 # This recommendation has already been viewed by User1. Hence this recommendation should be removed
User1 Content8
User2 Content2
User2 Content7
</code></pre>
<p>I want to remove recommendations if they have already been viewed by the user. I have tried the following two approaches, but both of them are very time-consuming. I need an approach which will identify occurrences of a recommended_df row in input_df.</p>
<p>Approach 1 - Using subsetting, for each row in recommended_df, I try to see if that row has already occurred in input_df</p>
<pre><code>for i in range(len(recommended_df)):
    recommended_df.loc[i,'Recommendation_Completed'] = len(input_df[(input_df['User_Name'] == recommended_df.loc[i,'User_Name']) & (input_df['Viewed_Content_Name'] == recommended_df.loc[i,'Recommended_Content_Name'])])

recommended_df = recommended_df.loc[recommended_df['Recommendation_Completed']==0]
# Remove row if already occurred in input_df
</code></pre>
<p>Approach 2 - Try to see if the row in recommended_df occurs in input_df using apply</p>
<p>Created a key column in input_df and recommended_df. This is unique key for each user and content</p>
<p>Input_df = </p>
<pre><code>User_Name Viewed_Content_Name keycol (User_Name + Viewed_Content_Name)
User1 Content1 User1Content1
User1 Content2 User1Content2
User1 Content5 User1Content5
User2 Content1 User2Content1
User2 Content3 User2Content3
User2 Content5 User2Content5
User2 Content6 User2Content6
User2 Content8 User2Content8
</code></pre>
<p>Recommended_df = </p>
<pre><code>User_Name Recommended_Content_Name keycol (User_Name + Recommended_Content_Name)
User1 Content1 User1Content1
User1 Content8 User1Content8
User2 Content2 User2Content2
User2 Content7 User2Content7
recommended_df ['Recommendation_Completed'] = recommended_df ['keycol'].apply(lambda d: d in input_df ['keycol'].values)
recommended_df = recommended_df.loc[recommended_df['Recommendation_Completed']==False]
# Remove if row occurs in input_df
</code></pre>
<p>The second approach using apply is faster than approach 1, but I can still do the same thing faster in Excel using the COUNTIFS function. How can I replicate it faster in Python?</p>
|
<p>Try to only use apply as a last resort. You can concatenate user and content and then use boolean selection.</p>
<pre><code>user_content_seen = input_df.User_Name + input_df.Viewed_Content_Name
user_all = Recommended_df.User_Name + Recommended_df.Recommended_Content_Name
Recommended_df[~user_all.isin(user_content_seen)]
</code></pre>
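<p>As an alternative (a sketch, not from the original answer): the same filtering is an anti-join, which <code>merge</code> with <code>indicator=True</code> expresses directly:</p>
<pre><code>import pandas as pd

input_df = pd.DataFrame({'User_Name': ['User1', 'User1', 'User2'],
                         'Viewed_Content_Name': ['Content1', 'Content2', 'Content1']})
recommended_df = pd.DataFrame({'User_Name': ['User1', 'User1', 'User2'],
                               'Recommended_Content_Name': ['Content1', 'Content8', 'Content2']})

merged = recommended_df.merge(
    input_df.rename(columns={'Viewed_Content_Name': 'Recommended_Content_Name'}),
    on=['User_Name', 'Recommended_Content_Name'], how='left', indicator=True)
# keep recommendations that have no match in the viewed content
result = merged[merged['_merge'] == 'left_only'].drop(columns='_merge')
</code></pre>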
|
python|pandas|apply|recommendation-engine
| 2
|
1,708
| 41,627,300
|
Pandas element wise if/else (IIF)
|
<p>Is there an element-wise IIF function in Pandas?</p>
<p>E.g. given a dataframe:</p>
<pre><code>w = pd.DataFrame({'Date':pd.to_datetime(['2016-01-01','2016-01-02','2016-01-03']),'A1':[0.3,0.1,0.1],'A2':[0.4,0.4,0.4]}).set_index(['Date'])
</code></pre>
<p>If the element > 0.2, set to 1, else set to 0. Such as below:</p>
<pre><code>w2 = pd.DataFrame({'Date':pd.to_datetime(['2016-01-01','2016-01-02','2016-01-03']),'A1':[1,0,0],'A2':[1,1,1]}).set_index(['Date'])
</code></pre>
<p>There is mask()/where(), but the true value is from the old dataframe. </p>
|
<p>You need to compare with <code>0.2</code> and cast the <code>boolean DataFrame</code> to <code>np.uint8</code>:</p>
<pre><code>print (w > .2)
A1 A2
Date
2016-01-01 True True
2016-01-02 False True
2016-01-03 False True
w1 = (w > .2).astype(np.uint8)
print (w1)
A1 A2
Date
2016-01-01 1 1
2016-01-02 0 1
2016-01-03 0 1
</code></pre>
<hr>
<pre><code>print (w.gt(.2).astype(np.uint8))
A1 A2
Date
2016-01-01 1 1
2016-01-02 0 1
2016-01-03 0 1
</code></pre>
<p>Comparing solutions:</p>
<pre><code>#[300000 rows x 2 columns]
#for testing index is not necessary
w = pd.concat([w]*100000).reset_index(drop=True)
In [49]: %timeit ((w > .2).astype(int))
100 loops, best of 3: 2.11 ms per loop
In [50]: %timeit ((w > .2).astype(np.short))
1000 loops, best of 3: 1.8 ms per loop
In [51]: %timeit ((w > .2).astype(np.uint8))
1000 loops, best of 3: 1.35 ms per loop
In [82]: %timeit (w.gt(.2).astype(np.uint8))
1000 loops, best of 3: 1.02 ms per loop
In [52]: %timeit (w.applymap(lambda x: 1 if x>0.2 else 0))
1 loop, best of 3: 334 ms per loop
</code></pre>
<p>Thank you <a href="https://stackoverflow.com/questions/41627300/pandas-element-wise-if-else-iif/41628329?noredirect=1#comment70462298_41628329"><code>piRSquared</code></a> for another solution:</p>
<pre><code>pd.DataFrame((w.values > .2).astype(np.uint8), w.index, w.columns)
In [112]: %timeit (pd.DataFrame((w.values > .2).astype(np.uint8), w.index, w.columns))
1000 loops, best of 3: 877 µs per loop
</code></pre>
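<p>For reference (a variant not in the original answer), <code>numpy.where</code> is the direct element-wise IIF analogue:</p>
<pre><code>import numpy as np
import pandas as pd

w = pd.DataFrame({'A1': [0.3, 0.1, 0.1], 'A2': [0.4, 0.4, 0.4]})
# where(condition, value_if_true, value_if_false), applied element-wise
w2 = pd.DataFrame(np.where(w > .2, 1, 0), index=w.index, columns=w.columns)
</code></pre>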
|
pandas
| 3
|
1,709
| 61,519,128
|
How to fix the dimension error in the loss function/softmax?
|
<p>I am implementing a logistic regression in PyTorch for XOR (I don't expect it to work well; it's just a demonstration). For some reason I am getting the error 'IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)'. It is not clear to me where this originates. The error points to log_softmax during training.</p>
<pre><code>import torch.nn as nn
import torch.nn.functional as F

class LogisticRegression(nn.Module):
    # input_size: Dimensionality of input feature vector.
    # num_classes: The number of classes in the classification problem.
    def __init__(self, input_size, num_classes):
        # Always call the superclass (nn.Module) constructor first!
        super(LogisticRegression, self).__init__()
        # Set up the linear transform
        self.linear = nn.Linear(input_size, num_classes)

    # Forward's sole argument is the input.
    # input is of shape (batch_size, input_size)
    def forward(self, x):
        # Apply the linear transform.
        # out is of shape (batch_size, num_classes)
        out = self.linear(x)
        # Softmax the out tensor to get a log-probability distribution
        # over classes for each example.
        out_distribution = F.softmax(out, dim=-1)
        return out_distribution

# Binary classification
num_outputs = 1
num_input_features = 2

# Create the logistic regression model
logreg_clf = LogisticRegression(num_input_features, num_outputs)
print(logreg_clf)

lr_rate = 0.001

X = torch.Tensor([[0,0],[0,1], [1,0], [1,1]])
Y = torch.Tensor([0,1,1,0]).view(-1,1) #view is similar to numpy.reshape()

# Run the forward pass of the logistic regression model
sample_output = logreg_clf(X) #completely random at the moment
print(X)

loss_function = nn.CrossEntropyLoss() # computes softmax and then the cross entropy
optimizer = torch.optim.SGD(logreg_clf.parameters(), lr=lr_rate)

from torch.autograd import Variable

#training loop:
epochs = 201 #how many times we go through the training set
steps = X.size(0) #steps = 4; we have 4 training examples

for i in range(epochs):
    for j in range(steps):
        #sample from the training set:
        data_point = np.random.randint(X.size(0))
        x_var = Variable(X[data_point], requires_grad=False)
        y_var = Variable(Y[data_point], requires_grad=False)

        optimizer.zero_grad() # zero the gradient buffers
        y_hat = logreg_clf(x_var) #get the output from the model
        loss = loss_function.forward(y_hat, y_var) #calculate the loss
        loss.backward() #backprop
        optimizer.step() #does the update

    if i % 500 == 0:
        print ("Epoch: {0}, Loss: {1}, ".format(i, loss.data.numpy()))
</code></pre>
|
<p>First of all, you are doing a binary classification task, so the number of output features should be 2; i.e., <code>num_outputs = 2</code>, not <code>num_outputs = 1</code>.</p>
<p>Second, as it's been declared in <a href="https://pytorch.org/docs/stable/nn.html#crossentropyloss" rel="nofollow noreferrer"><code>nn.CrossEntropyLoss()</code></a> documentation, the <code>.forward</code> method accepts two tensors as below:</p>
<ul>
<li><code>Input: (N, C)</code> where <code>C</code> is the number of classes (in your case it is 2).</li>
<li><code>Target: (N)</code></li>
</ul>
<p><code>N</code> in the example above is the number of training examples that you pass in to the loss function; for simplicity, you can set it to one (i.e., doing a forward pass for each instance and update gradients thereafter). </p>
<p><em>Note:</em> Also, you don't need to use <code>.Softmax()</code> before <code>nn.CrossEntropyLoss()</code> module as this class has <code>nn.LogSoftmax</code> included in itself.</p>
<p>I modified your code as below, this is a working example of your snippet:</p>
<pre><code>import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import torch

class LogisticRegression(nn.Module):
    # input_size: Dimensionality of input feature vector.
    # num_classes: The number of classes in the classification problem.
    def __init__(self, input_size, num_classes):
        # Always call the superclass (nn.Module) constructor first!
        super(LogisticRegression, self).__init__()
        # Set up the linear transform
        self.linear = nn.Linear(input_size, num_classes)

    # Forward's sole argument is the input.
    # input is of shape (batch_size, input_size)
    def forward(self, x):
        # Apply the linear transform.
        # out is of shape (batch_size, num_classes)
        out = self.linear(x)
        return out

# Binary classification
num_outputs = 2
num_input_features = 2

# Create the logistic regression model
logreg_clf = LogisticRegression(num_input_features, num_outputs)
print(logreg_clf)

lr_rate = 0.001

X = torch.Tensor([[0,0],[0,1], [1,0], [1,1]])
Y = torch.Tensor([0,1,1,0]).view(-1,1) #view is similar to numpy.reshape()

# Run the forward pass of the logistic regression model
sample_output = logreg_clf(X) #completely random at the moment
print(X)

loss_function = nn.CrossEntropyLoss() # computes softmax and then the cross entropy
optimizer = torch.optim.SGD(logreg_clf.parameters(), lr=lr_rate)

from torch.autograd import Variable

#training loop:
epochs = 201 #how many times we go through the training set
steps = X.size(0) #steps = 4; we have 4 training examples

for i in range(epochs):
    for j in range(steps):
        #sample from the training set:
        data_point = np.random.randint(X.size(0))
        x_var = Variable(X[data_point], requires_grad=False).unsqueeze(0)
        y_var = Variable(Y[data_point], requires_grad=False).long()

        optimizer.zero_grad() # zero the gradient buffers
        y_hat = logreg_clf(x_var) #get the output from the model
        loss = loss_function(y_hat, y_var) #calculate the loss
        loss.backward() #backprop
        optimizer.step() #does the update

    if i % 500 == 0:
        print ("Epoch: {0}, Loss: {1}, ".format(i, loss.data.numpy()))
</code></pre>
<p><strong>Update</strong></p>
<p>To get the predicted class labels which is either 0 or 1:</p>
<pre><code>pred = np.argmax(y_hat.detach().numpy(), axis=1)
</code></pre>
<p>As for the <code>.detach()</code> function, numpy expects the tensor/array to be detached from the computation graph; i.e., the tensor should not have <code>requires_grad=True</code>, and the detach method will do the trick for you.</p>
|
python|pytorch
| 1
|
1,710
| 61,602,880
|
Tensorflow 2.x: Convert byte string to int in map
|
<p>I have a TFRecordDataset of images. Each record has an image, an integer label, and a byte-array ID. The byte-array is a hex representation of some number.</p>
<p>I wish to seed the random operations with a derivative of the ID. How do I do that?</p>
<p>The following attempt failed:</p>
<pre class="lang-py prettyprint-override"><code>def func(image, label, idnum):
    '''Example idnum: b"abcdef012"'''
    seed = tf.py_function(func=lambda x: int(x.numpy().decode(), 16),
                          inp=[idnum], Tout=tf.int64)
    ran = tf.random.uniform(shape=(), seed=seed)
</code></pre>
<p>Here's the error message:</p>
<pre><code>TypeError: Expected int for argument 'seed2' not <tf.Tensor 'random_uniform/mod:0' shape=<unknown> dtype=int64>.
</code></pre>
<p>In ordinary python, I would convert such a byte-string to an int as follows:</p>
<pre class="lang-py prettyprint-override"><code>x = b'abcdef012'
i = int(x.decode(), 16)
</code></pre>
|
<p>Decoding a binary string (tensor) into an int (tensor) is actually quite straight-forward:</p>
<pre class="lang-py prettyprint-override"><code>tf.io.decode_raw(tf.constant(b'ABC'), tf.uint8)
</code></pre>
<p>-> <a href="https://www.tensorflow.org/api_docs/python/tf/io/decode_raw" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/io/decode_raw</a></p>
<p>The bigger issue here is that TensorFlow considers seeds to be configuration, not data. Hence, <code>tf.random.uniform</code> and other random operations expect the seed to be a 'Python integer', not a tensor (see <a href="https://www.tensorflow.org/api_docs/python/tf/random/uniform" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/random/uniform</a>). That means, they don't expect to receive the seed as input tensor during <strong>graph execution</strong>, but as constant during <strong>graph creation</strong> (or rather, tracing <a href="https://www.tensorflow.org/guide/function#tracing" rel="nofollow noreferrer">https://www.tensorflow.org/guide/function#tracing</a>). Note that Dataset transformations like <code>Dataset.map</code> are translated into graph operations, capturing all parameters that aren't tensors.</p>
<p>This should also explain the error message that you are seeing.</p>
<p>The solution to this depends on what you are trying to achieve. If you keep your global random seed fixed (<code>tf.random.set_seed</code>), then the values of <code>ran</code> are actually fully deterministic and you could just precompute and store them inside the TFRecords. If not, an option would be to compute <code>ran</code> values for each image upfront in Python, then use <code>tf.data.Dataset.from_tensor_slices</code> to turn the list into a Dataset and then zip the Dataset with the TFRecordDataset.</p>
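<p>A minimal sketch of that last option (toy byte-string IDs standing in for the real records):</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

ids = [b'abcdef012', b'0a1b2c3d4']
# per-record values derived from the IDs, computed upfront in plain Python
ran_values = [int(x.decode(), 16) % 1000 for x in ids]

record_ds = tf.data.Dataset.from_tensor_slices(ids)  # stand-in for the TFRecordDataset
ran_ds = tf.data.Dataset.from_tensor_slices(ran_values)
combined = tf.data.Dataset.zip((record_ds, ran_ds))
for rec, ran in combined:
    print(rec.numpy(), ran.numpy())
</code></pre>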
|
tensorflow2.0|tensorflow-datasets
| 0
|
1,711
| 68,703,586
|
Identify and count objects different from background
|
<p>I am trying to use Python, NumPy, and OpenCV to analyze the image below and just draw a circle on each object found. The idea here is not to identify the bug, only to identify any object that is different from the background.</p>
<p>Original Image:
<a href="https://i.stack.imgur.com/ykK7A.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ykK7A.jpg" alt="enter image description here" /></a></p>
<p>Here is the code that I'm using.</p>
<pre><code>import cv2
import numpy as np

img = cv2.imread('per.jpeg', cv2.IMREAD_GRAYSCALE)

if cv2.__version__.startswith('2.'):
    detector = cv2.SimpleBlobDetector()
else:
    detector = cv2.SimpleBlobDetector_create()

keypoints = detector.detect(img)
print(len(keypoints))

imgKeyPoints = cv2.drawKeypoints(img, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

status = cv2.imwrite('teste.jpeg', imgKeyPoints)
print("Image written to file-system : ", status)
</code></pre>
<p>But the problem is that I'm getting only a greyscale image as a result, without any counting or red circles, as shown below:
<a href="https://i.stack.imgur.com/Dhx5d.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Dhx5d.jpg" alt="enter image description here" /></a></p>
<p>Since I'm new to OpenCV and object recognition world I'm not able to identify what is wrong, and any help will be very appreciated.</p>
|
<p>Here is one way in Python/OpenCV.</p>
<p>Threshold on the bugs' color in HSV colorspace. Then use morphology to clean up the threshold. Then get contours. Then find the minimum enclosing circle around each contour. Then bias the radius to make it a bit larger and draw the circle around each bug.</p>
<p>Input:</p>
<p><a href="https://i.stack.imgur.com/B4yA1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4yA1.jpg" alt="enter image description here" /></a></p>
<pre><code>import cv2
import numpy as np

# read image
img = cv2.imread('bugs.jpg')

# convert image to hsv colorspace
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# threshold on bugs color
lower = (0,90,10)
upper = (100,250,170)
thresh = cv2.inRange(hsv, lower, upper)

# apply morphology to clean up
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3,3))
morph = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (6,6))
morph = cv2.morphologyEx(morph, cv2.MORPH_CLOSE, kernel)

# get external contours
contours = cv2.findContours(morph, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = contours[0] if len(contours) == 2 else contours[1]

result = img.copy()
bias = 10
for cntr in contours:
    center, radius = cv2.minEnclosingCircle(cntr)
    cx = int(round(center[0]))
    cy = int(round(center[1]))
    rr = int(round(radius)) + bias
    cv2.circle(result, (cx,cy), rr, (0, 0, 255), 2)

# save results
cv2.imwrite('bugs_threshold.jpg', thresh)
cv2.imwrite('bugs_cleaned.jpg', morph)
cv2.imwrite('bugs_circled.jpg', result)

# display results
cv2.imshow('thresh', thresh)
cv2.imshow('morph', morph)
cv2.imshow('result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>Threshold Image:</p>
<p><a href="https://i.stack.imgur.com/sp3Sl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sp3Sl.jpg" alt="enter image description here" /></a></p>
<p>Morphology Cleaned Image:</p>
<p><a href="https://i.stack.imgur.com/kQoOx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kQoOx.jpg" alt="enter image description here" /></a></p>
<p>Resulting Circles:</p>
<p><a href="https://i.stack.imgur.com/faFD0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/faFD0.jpg" alt="enter image description here" /></a></p>
|
python|python-3.x|numpy|opencv
| 1
|
1,712
| 68,645,655
|
Use a multidimensional index on a MultiIndex pandas dataframe?
|
<p>I have a multiindex pandas dataframe that looks like this (called p_z):</p>
<pre><code> p_z
entry subentry
0 0 0.338738
1 0.636035
2 -0.307365
3 -0.167779
4 0.243284
... ...
26692 891 -0.459227
892 0.055993
893 -0.469857
894 0.192554
895 0.155738
[11742280 rows x 1 columns]
</code></pre>
<p>I want to be able to select certain rows based on another dataframe (or numpy array) which is multidimensional. It would look like this as a pandas dataframe (called tofpid):</p>
<pre><code> tofpid
entry subentry
0 0 0
1 2
2 4
3 5
4 7
... ...
26692 193 649
194 670
195 690
196 725
197 737
[2006548 rows x 1 columns]
</code></pre>
<p>I also have it as an awkward array, where it's a (26692, ) array (each of the entries has a non-standard number of subentries). This is a selection df/array that tells the p_z df which rows to keep. So in entry 0 of p_z, it should keep subentries 0, 2, 4, 5, 7, etc.</p>
<p>I can't find a way to get this done in pandas. I'm new to pandas, and even newer to MultiIndex, but I feel there ought to be a way to do this. If it can be broadcast, even better, as I'll be doing this over ~1500 dataframes of similar size. If it helps, these dataframes are from a *.root file imported using uproot (if there's another way to do this without pandas, I'll take it, but I would love to use pandas to keep things organised).</p>
<p>Edit: Here's a reproducible example (courtesy of Jim Pavinski's answer; thanks!).</p>
<pre><code>>>> import awkward as ak
>>> import pandas as pd
>>> p_z = ak.Array([[ 0.338738,  0.636035, -0.307365, -0.167779,  0.243284,
...                   0.338738,  0.636035],
...                 [-0.459227,  0.055993, -0.469857,  0.192554,  0.155738,
...                  -0.459227]])
>>> p_z = ak.to_pandas(p_z)
>>> tofpid = ak.Array([[0, 2, 4, 5], [1, 2, 4]])
>>> tofpid = ak.to_pandas(tofpid)
</code></pre>
<p>Both of these dataframes are produced natively in uproot, but this will reproduce the same dataframes that uproot would (using the awkward library).</p>
|
<p>IIUC:</p>
<p>Input data:</p>
<pre><code>>>> p_z
p_z
entry subentry
0 0 0.338738
1 0.636035
2 -0.307365
3 -0.167779
4 0.243284
>>> tofpid
tofpid
entry subentry
0 0 0
1 2
2 4
3 5
4 7
</code></pre>
<p>Create a new multiindex from the columns (entry, tofpid) of your second dataframe:</p>
<pre><code>mi = pd.MultiIndex.from_frame(tofpid.reset_index(level='subentry', drop=True)
                                    .reset_index())
</code></pre>
<p>Output result:</p>
<pre><code>>>> p_z.loc[mi.intersection(p_z.index)]
p_z
entry
0 0 0.338738
2 -0.307365
4 0.243284
</code></pre>
|
python|pandas|multi-index|uproot|awkward-array
| 1
|
1,713
| 68,574,625
|
Calculated column with shift
|
<p>This is the base DataFrame:</p>
<pre><code> g_accessor number_opened number_closed
0 49 - 20 3.0 1.0
1 50 - 20 2.0 14.0
2 51 - 20 1.0 6.0
3 52 - 20 0.0 6.0
4 1 - 21 1.0 4.0
5 2 - 21 3.0 5.0
6 3 - 21 4.0 11.0
7 4 - 21 2.0 7.0
8 5 - 21 6.0 10.0
9 6 - 21 2.0 8.0
10 7 - 21 4.0 9.0
11 8 - 21 2.0 3.0
12 9 - 21 2.0 1.0
13 10 - 21 1.0 11.0
14 11 - 21 6.0 3.0
15 12 - 21 3.0 3.0
16 13 - 21 2.0 6.0
17 14 - 21 5.0 9.0
18 15 - 21 9.0 13.0
19 16 - 21 7.0 7.0
20 17 - 21 9.0 4.0
21 18 - 21 3.0 8.0
22 19 - 21 6.0 3.0
23 20 - 21 6.0 1.0
24 21 - 21 3.0 5.0
25 22 - 21 5.0 3.0
26 23 - 21 1.0 0.0
</code></pre>
<p>I want to add a calculated new column <code>number_active</code> which relies on previous values. For this I'm trying to use <code>pd.DataFrame.shift()</code>, like this:</p>
<pre><code># Creating new column and setting all rows to 0
df['number_active'] = 0
# Active from previous period
PREVIOUS_PERIOD_ACTIVE = 22
# Calculating active value for first period in the DataFrame, based on `PREVIOUS_PERIOD_ACTIVE`
df.iat[0,3] = (df.iat[0,1] + PREVIOUS_PERIOD_ACTIVE) - df.iat[0,2]
# Calculating all columns using DataFrame.shift()
df['number_active'] = (df['number_opened'] + df['number_active'].shift(1)) - df['number_closed']
# Recalculating first active value as it was overwritten in the previous step.
df.iat[0,3] = (df.iat[0,1] + PREVIOUS_PERIOD_ACTIVE) - df.iat[0,2]
</code></pre>
<p>The result:</p>
<pre><code> g_accessor number_opened number_closed number_active
0 49 - 20 3.0 1.0 24.0
1 50 - 20 2.0 14.0 12.0
2 51 - 20 1.0 6.0 -5.0
3 52 - 20 0.0 6.0 -6.0
4 1 - 21 1.0 4.0 -3.0
5 2 - 21 3.0 5.0 -2.0
6 3 - 21 4.0 11.0 -7.0
7 4 - 21 2.0 7.0 -5.0
8 5 - 21 6.0 10.0 -4.0
9 6 - 21 2.0 8.0 -6.0
10 7 - 21 4.0 9.0 -5.0
11 8 - 21 2.0 3.0 -1.0
12 9 - 21 2.0 1.0 1.0
13 10 - 21 1.0 11.0 -10.0
14 11 - 21 6.0 3.0 3.0
15 12 - 21 3.0 3.0 0.0
16 13 - 21 2.0 6.0 -4.0
17 14 - 21 5.0 9.0 -4.0
18 15 - 21 9.0 13.0 -4.0
19 16 - 21 7.0 7.0 0.0
20 17 - 21 9.0 4.0 5.0
21 18 - 21 3.0 8.0 -5.0
22 19 - 21 6.0 3.0 3.0
23 20 - 21 6.0 1.0 5.0
24 21 - 21 3.0 5.0 -2.0
25 22 - 21 5.0 3.0 2.0
26 23 - 21 1.0 0.0 1.0
</code></pre>
<p>Oddly, it seems that only the first active value (index 1) is calculated correctly (since the value at index 0 is calculated independently, via <code>df.iat</code>). For the rest of the values it seems that <code>number_closed</code> is interpreted as a negative value, for some reason.</p>
<h4 id="what-am-i-missingdoing-wrong-qmza">What am I missing/doing wrong?</h4>
|
<p>You are assuming that the result for the previous row is available when the current row is calculated. This is not how pandas calculations work. Pandas calculations treat each row in isolation, unless you are applying multi-row operations like <code>cumsum</code> and <code>shift</code>.</p>
<p>I would calculate the number active with a minimal example as:</p>
<pre class="lang-py prettyprint-override"><code>df = pandas.DataFrame({'ignore': ['a','b','c','d','e'], 'number_opened': [3,4,5,4,3], 'number_closed':[1,2,2,1,2]})
df['number_active'] = df['number_opened'].cumsum() + 22 - df['number_closed'].cumsum()
</code></pre>
<p>This gives a result of:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">ignore</th>
<th style="text-align: right;">number_opened</th>
<th style="text-align: right;">number_closed</th>
<th style="text-align: right;">number_active</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">a</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">24</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">b</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">26</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">c</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">29</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">d</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">32</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">e</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">33</td>
</tr>
</tbody>
</table>
</div>
<p>The code in your question with my minimal example gave:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: left;">ignore</th>
<th style="text-align: right;">number_opened</th>
<th style="text-align: right;">number_closed</th>
<th style="text-align: right;">number_active</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: left;">a</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">24</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: left;">b</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">26</td>
</tr>
<tr>
<td style="text-align: right;">2</td>
<td style="text-align: left;">c</td>
<td style="text-align: right;">5</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">3</td>
<td style="text-align: left;">d</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
</tr>
<tr>
<td style="text-align: right;">4</td>
<td style="text-align: left;">e</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">1</td>
</tr>
</tbody>
</table>
</div>
|
python|pandas|dataframe
| 1
|
1,714
| 68,855,571
|
How to convert a nested JSON to CSV
|
<p>I want to convert nested JSON into CSV format, including sub-rows for grouped lists/dicts.</p>
<p>Here my json</p>
<pre class="lang-json prettyprint-override"><code>data =\
{
"id": "1",
"name": "HIGHLEVEL",
"description": "HLD",
"item": {
"id": "11",
"description": "description"
},
"packages": [{
"id": "1",
"label": "Package 1",
"products": [{
"id": "1",
"price": 5
}, {
"id": "2",
"price": 3
}
]
}, {
"id": "2",
"label": "Package 3",
"products": [{
"id": "1",
"price": 5
}, {
"id": "2",
"price": 3
}
]
}
]
}
</code></pre>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.json_normalize(data)
# display(df)
description id name packages item.description item.id
0 HLD 1 HIGHLEVEL [{'id': '1', 'label': 'Package 1', 'products': [{'id': '1', 'price': 5}, {'id': '2', 'price': 3}]}, {'id': '2', 'label': 'Package 3', 'products': [{'id': '1', 'price': 5}, {'id': '2', 'price': 3}]}] description 11
</code></pre>
<p>Output of <a href="https://data.page/json/csv" rel="nofollow noreferrer">JSON to CSV Converter</a></p>
<pre><code>"id","name","description","item__id","item__description","packages__id","packages__label","packages__products__id","packages__products__price"
"1","HIGHLEVEL","HLD","11","description","1","Package 1","1","5"
"","","","","","","","2","3"
"","","","","","2","Package 3","1","5"
"","","","","","","","2","3"
</code></pre>
<p>I tried pandas normalization but the results are not what I want.
JSON arrays are not converted into sub-rows in the CSV.
I want to keep empty strings in the CSV.</p>
<p>I want to do the same but with a Python Script.</p>
|
<p><strong>This should work for you:</strong></p>
<pre><code>from copy import deepcopy
import pandas

def cross_join(left, right):
    new_rows = [] if right else left
    for left_row in left:
        for right_row in right:
            temp_row = deepcopy(left_row)
            for key, value in right_row.items():
                temp_row[key] = value
            new_rows.append(deepcopy(temp_row))
    return new_rows

def flatten_list(data):
    for elem in data:
        if isinstance(elem, list):
            yield from flatten_list(elem)
        else:
            yield elem

def json_to_dataframe(data_in):
    def flatten_json(data, prev_heading=''):
        if isinstance(data, dict):
            rows = [{}]
            for key, value in data.items():
                rows = cross_join(rows, flatten_json(value, prev_heading + '_' + key))
        elif isinstance(data, list):
            rows = []
            if len(data) != 0:
                for i in range(len(data)):
                    [rows.append(elem) for elem in flatten_list(flatten_json(data[i], prev_heading))]
            else:
                data.append("")
                [rows.append(elem) for elem in flatten_list(flatten_json(data[0], prev_heading))]
        else:
            rows = [{prev_heading[1:]: data}]
        return rows

    return pandas.DataFrame(flatten_json(data_in))

def remove_duplicates(df):
    columns = list(df)[:7]
    for c in columns:
        df[c] = df[c].mask(df[c].duplicated(), "")
    return df

if __name__ == '__main__':
    df = json_to_dataframe(data)
    df = remove_duplicates(df)
    print(df)
    df.to_csv('data.csv', index=False)
</code></pre>
<p><strong>Input 01:</strong></p>
<pre><code>data = {
"id": "1",
"name": "HIGHLEVEL",
"description": "HLD",
"item": {
"id": "11",
"description": "description"
},
"packages": [{
"id": "1",
"label": "Package 1",
"products": [{
"id": "1",
"price": 5
}, {
"id": "2",
"price": 3
}, {
"id": "3",
"price": 9
}
]
}, {
"id": "2",
"label": "Package 3",
"products": [{
"id": "1",
"price": 5
}, {
"id": "2",
"price": 3
}, {
"id": "3",
"price": 9
}
]
}
]
}
</code></pre>
<p><strong>Output 01:</strong></p>
<p><a href="https://i.stack.imgur.com/feKnw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/feKnw.png" alt="enter image description here" /></a></p>
<p><strong>Input 02:</strong></p>
<pre><code>data = {
"id": "1",
"name": "HIGHLEVEL",
"description": "HLD",
"item": {
"id": "11",
"description": "description"
},
"packages": [{
"id": "1",
"label": "Package 1",
"products": []
}, {
"id": "2",
"label": "Package 3",
"products": []
}
]
}
</code></pre>
<p><strong>Output 02:</strong>
<a href="https://i.stack.imgur.com/CGIbw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CGIbw.png" alt="enter image description here" /></a></p>
<p><strong>Hope it will resolve your issue. If you need any explanation then please let me know.</strong></p>
<p><strong>Thanks</strong></p>
|
python|json|pandas|csv|json-normalize
| 3
|
1,715
| 36,294,145
|
Python: summarizing data in dataframe using a series
|
<p>I want to reduce a dataframe down to more of a summary. I have the following dataframe:</p>
<pre><code>In [8]: df
Out[8]:
CTRY_NM ser_no date
0 a 1 2016-01-01
1 a 1 2016-01-02
2 b 1 2016-03-01
3 e 2 2016-01-01
4 e 2 2016-01-02
5 a 2 2016-06-05
6 b 2 2016-07-01
7 b 3 2016-01-01
8 b 3 2016-01-02
9 d 3 2016-08-02
</code></pre>
<p>I created this with:</p>
<pre><code>import pandas as pd
import numpy as np

df = pd.DataFrame({'ser_no': [1, 1, 1, 2, 2, 2, 2, 3, 3, 3],
                   'CTRY_NM': ['a', 'a', 'b', 'e', 'e', 'a', 'b', 'b', 'b', 'd'],
                   'day': ['01', '02', '01', '01', '02', '05', '01', '01', '02', '02'],
                   'month': ['01', '01', '03', '01', '01', '06', '07', '01', '01', '08'],
                   'year': ['2016', '2016', '2016', '2016', '2016', '2016', '2016', '2016',
                            '2016', '2016']})
df['date'] = pd.to_datetime(df.day + df.month + df.year, format = "%d%m%Y")
df = df.drop(df.columns[[1,2,4]], axis = 1)

def check(data, key):
    mask = data[key].shift(1) == data[key]
    mask.iloc[0] = np.nan
    return mask

match = df.groupby(by = ['ser_no']).apply(lambda x: check(x, 'CTRY_NM'))
</code></pre>
</code></pre>
<p>Now the <code>match</code> series tells me when a <code>ser_no</code> stays in the same country and when it does not, with a <code>NaN</code> at each serial-number change. <code>match</code> returns:</p>
<pre><code>In [9]: match
Out[9]:
ser_no
1 0 NaN
1 1.0
2 0.0
2 3 NaN
4 1.0
5 0.0
6 0.0
3 7 NaN
8 1.0
9 0.0
Name: CTRY_NM, dtype: float64
</code></pre>
<p>I want to use match to summarize my dataframe as </p>
<pre><code>ser_no CTRY_NM start_dt end_dt number_of_dt
1 a 2016-01-01 2016-01-02 2
1 b 2016-03-01 2016-03-01 1
2 e 2016-01-01 2016-01-02 2
2 a 2016-06-05 2016-06-05 1
2 b 2016-07-01 2016-07-01 1
3 b 2016-01-01 2016-01-02 2
3 d 2016-08-02 2016-08-02 1
</code></pre>
<p>So I get the date range that a <code>ser_no</code> has been in a specific country and how many dates were recorded in that time frame.</p>
<p>I am not sure how to do this summarization in Python.</p>
|
<p>You can use <code>agg</code> and specify an operation for each date value:</p>
<pre><code>>>> df.groupby(['ser_no', 'CTRY_NM']).date.agg(
{'start_dt': min,
'end_dt': max,
'number_of_dt': 'count'})
number_of_dt start_dt end_dt
ser_no CTRY_NM
1 a 2 2016-01-01 2016-01-02
b 1 2016-03-01 2016-03-01
2 a 1 2016-06-05 2016-06-05
b 1 2016-07-01 2016-07-01
e 2 2016-01-01 2016-01-02
3 b 2 2016-01-01 2016-01-02
d 1 2016-08-02 2016-08-02
</code></pre>
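<p>Note: in newer pandas versions, passing a dict of output names to <code>Series.agg</code> was deprecated and later removed; a sketch of the equivalent with named aggregation:</p>
<pre><code># same result, one named output column per aggregation
df.groupby(['ser_no', 'CTRY_NM']).agg(
    start_dt=('date', 'min'),
    end_dt=('date', 'max'),
    number_of_dt=('date', 'count'))
</code></pre>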
|
python|pandas|dataframe
| 2
|
1,716
| 36,516,019
|
adding a counting column to a numpy array in python
|
<p>I have a 1d numpy array <code>ans</code>, i.e.: </p>
<pre><code>ans=[8,5,9,2,4]
</code></pre>
<p>I want to convert it into a 2d array like:</p>
<pre><code>ans=
{[1,8],
[2,5],
[3,9],
[4,2],
[5,4]}
</code></pre>
<p>the first column is in sequence:</p>
<pre><code>[1,2,3......500,501..]
</code></pre>
<p>How to do this in python?</p>
|
<p>Assuming you are actually working with numpy, and your question is just sloppy, here's one way with <code>numpy.vstack</code>:</p>
<pre><code>>>> import numpy as np
>>> right = np.array([8,5,9,2,4])
>>> np.vstack([np.arange(1, len(right) + 1), right]).T
array([[1, 8],
[2, 5],
[3, 9],
[4, 2],
[5, 4]])
</code></pre>
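<p>Equivalently (a variant not in the original answer), <code>numpy.column_stack</code> avoids the transpose:</p>
<pre><code>>>> np.column_stack((np.arange(1, len(right) + 1), right))
array([[1, 8],
       [2, 5],
       [3, 9],
       [4, 2],
       [5, 4]])
</code></pre>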
|
python|numpy
| 1
|
1,717
| 53,148,176
|
Tensorflow installation Mac error: unable to find matching distribution
|
<p>I am attempting to install TensorFlow on my Macintosh computer. I was following the instructions as provided on their website when I reached a problem. I had established a virtual environment in the MacOS terminal and attempted to use pip to install TensorFlow with the command</p>
<pre><code>pip install tensorflow
</code></pre>
<p>when I received the following message:</p>
<pre><code>Collecting tensorflow
Could not find a version that satisfies the requirement tensorflow (from versions: )
No matching distribution found for tensorflow
</code></pre>
<p>How can I resolve this? Thank you for any assistance you can provide.</p>
<p>Sincerely,
Suren Grigorian</p>
|
<p>Try updating pip using <code>python3 -m pip install --upgrade pip</code>
and try again, specifying the TensorFlow version with
<code>pip3 install tensorflow==2.3.1</code>.</p>
|
python|macos|tensorflow|neural-network|pip
| 0
|
1,718
| 65,561,554
|
Counting elements of an array in a new column of a data frame row by row
|
<p>I didn't find a solution in the forum that helped me.
I have a very big data frame of transportation data. One of the 33 columns of my data frame is an array which includes the allowed labels of the solution (for this row).</p>
<p>So the column is:</p>
<pre><code>usedLabels
[db_fv, blablacar, flixbus]
[db_fv, blablacar, flixbus]
[db_fv, blablacar, flixbus, airplane]
[db_fv, blablacar]
</code></pre>
<p>and I want to add a column that counts the entry of each array per row:</p>
<pre><code>usedLabelsCount
3
3
4
2
</code></pre>
<p>This is what I tried so far:</p>
<pre><code>size = 1
for dim in df['usedLabels']: size *= dim
df['usedLabelsCount'] = df.set_index(['usedLabels']).count(level="usedLabels")
df['usedLabelsCount'] = len(df['usedLabels'])
df['usedLabelsCount'] = df['usedLabels'].count
</code></pre>
<p>My result with <code>.count</code> is:</p>
<pre><code><bound method Series.count of 0 [db_fv...>
</code></pre>
<p>and with <code>len</code> I get the count of all rows (and not for each row). So each row of usedLabelsCount would contain 903829 (which is the overall count and not per row)</p>
<p>Thank you!</p>
<p>Edit:
The suggested solution (see below) didn't quite work:</p>
<pre><code>df['UsedLabelsCount']=[len(i) for i in df['usedLabels']]
</code></pre>
<p>I tried it, but now it counts 27, which is the overall number of unique labels (and not per row). I don't know why. I tried this too:</p>
<pre><code>for index, row in df.iterrows(): a = (len(i) for i in df['usedLabels']) df['usedLabelsCount']= a
</code></pre>
<p>but this puts a generator object into the data frame (the code runs): <code><generator object <genexpr> at 0x7f9566666c80></code>.
Any ideas?</p>
<p>Edit 2:
so this is some sample data:</p>
<blockquote>
<p><a href="https://github.com/Hektor1997/sample-data.git" rel="nofollow noreferrer">https://github.com/Hektor1997/sample-data.git</a></p>
</blockquote>
|
<p>try:</p>
<pre><code>df['UsedLabelsCount']=[len(i) for i in df['usedLabels']]
</code></pre>
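<p>If that counts characters instead of labels (as in the question's edit), the column almost certainly holds strings that merely look like lists, since <code>len()</code> of a string returns its character count. A sketch for both cases:</p>
<pre><code># if the column truly holds Python lists:
df['UsedLabelsCount'] = df['usedLabels'].str.len()

# if it holds strings such as "[db_fv, blablacar]", split them first:
df['UsedLabelsCount'] = df['usedLabels'].str.strip('[]').str.split(',').str.len()
</code></pre>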
|
arrays|pandas|dataframe|count
| 1
|
1,719
| 65,762,778
|
Python pandas: how to compare multiple value within a dataframe
|
<p>My dataframe looks like this</p>
<pre><code> cn id amount date
1 0051 45897 2021-01-14
1 0051 78484 2021-01-15
subtotal 124381
2 0052 1751591 2021-01-14
2 0052 2110386 2021-01-15
subtotal 3861977
3 04R3 40484 2021-01-14
3 04R3 68598 2021-01-15
subtotal 109082
5 973G 3420332 2021-01-14
5 973G 3355539 2021-01-15
subtotal 6775871
</code></pre>
<p>There are few thousands of row but I only show 5.</p>
<p>What I ultimately want to do is to compare the amount from each 2021-01-14 to its subtotal.</p>
<p>For example, I want to write a function that returns the id(0051) if the amount(45897) in 2021-01-14
is equal to or greater than 50% of the subtotal of (124381).</p>
<p>In this case, only index 5, id number (973G), has an amount (3420332) that is above 50% of its subtotal of 6775871.</p>
<p>Any idea of how I should approach this?</p>
|
<p>You can just compare the amount for 2021-01-14 with the next amount in that column (2021-01-15): since the subtotal is the sum of the two dates, the 2021-01-14 amount is at least 50% of the subtotal exactly when it is greater than or equal to the 2021-01-15 amount.</p>
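<p>A sketch of the same idea in pandas, assuming the subtotal rows are marked by the string 'subtotal' in the <code>cn</code> column and each id has exactly one row per date:</p>
<pre><code>data = df[df['cn'] != 'subtotal']                    # keep only the per-date rows
totals = data.groupby('id')['amount'].sum()          # recompute each subtotal
first_day = data[data['date'] == '2021-01-14'].set_index('id')['amount']
ids = first_day.index[first_day >= 0.5 * totals[first_day.index]]
print(list(ids))  # ids whose 2021-01-14 amount is >= 50% of the subtotal
</code></pre>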
|
python|pandas|dataframe
| 0
|
1,720
| 21,057,397
|
Possible Bug in pandas.groupby.agg?
|
<p>I might have found a bug in pandas.groupby.agg. Try the following code. It looks like what is passed to the aggregate function fn() is a data frame including the key. In my understanding, the agg function is applied to each column separately and only one column is passed. Since the 'year' column appears in groupby, it should be removed from the grouped results. </p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'year' : [2011,2011,2012,2012,2013], '5-1' : [1.2, 2.1,2.1,11., 13.]})
def fn(x):
print x
#return np.mean(x) will explode
return 0
res = df.groupby('year').agg(fn)
print res
</code></pre>
<p>The above gives the output, which clearly tells me that x of fn(x) is passed as a DataFrame with two columns (year, 5-1).</p>
<pre><code> 5-1 year
0 1.2 2011
1 2.1 2011
5-1 year
2 2.1 2012
3 11.0 2012
5-1 year
4 13 2013
5-1
year
2011 0
2012 0
2013 0
</code></pre>
|
<p>To answer your question, if you absolutely want the function applied to a <code>Series</code>, use the <code>{column: aggfunc}</code> syntax in <code>.agg()</code>.</p>
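<p>For example, with the data from the question, this passes only the <code>'5-1'</code> column (as a <code>Series</code>) to <code>fn</code>:</p>
<pre><code>res = df.groupby('year').agg({'5-1': fn})
</code></pre>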
<p>That said, your code seems to work fine (at least on the current master). The function isn't actually being applied to the <code>year</code> column.</p>
<hr>
<p>A bit of explanation. For this I'm assuming that you are on an older version of pandas, and that that version had a bug that has since been patched. To reproduce the behavior <em>I think</em> you were getting, lets redefine <code>fn</code>:</p>
<pre><code>In [32]: def fn(x):
print("Printing x+1 : {}".format(x + 1))
print("Printing x: {}".format(x))
return 0
</code></pre>
<p>And let's redefine <code>df['year']</code></p>
<pre><code>In [33]: df['year'] = ['a', 'a', 'b', 'b', 'c']
</code></pre>
<p>All these objects are defined in <code>pandas/core/groupby.py</code>.
The <code>df.groupby('year')</code> part returns a <code>DataFrameGroupby</code> object, since <code>df</code> is a <code>DataFrame</code>. <code>.agg()</code> isn't actually defined on <code>DataFrameGroupBy</code>, that's on its parent class <code>NDFrameGroupBy</code>.</p>
<p>Since this isn't a Cython function, things get handed off to <code>NDFrameGroupBy._aggregate_generic()</code>. That tries to execute the function, and if it fails, falls back to a separate section of code:</p>
<pre><code> try:
for name, data in self:
result[name] = self._try_cast(func(data, *args, **kwargs),
data)
except Exception:
return self._aggregate_item_by_item(func, *args, **kwargs)
</code></pre>
<p>If the <code>try</code> part succeeds, the function is applied to the entire object (which is why <code>print x</code> shows both columns), and the results are presented nicely with the grouper on the index and the values in the columns.</p>
<p>If the <code>try</code> part fails, things are handed off to <code>_aggregate_item_by_item</code>, <strong>which excludes the grouping column</strong>.</p>
<p>This means that by changing your code from <code>return np.mean(x)</code> to <code>return 0</code>, <em>you actually changed the path the code follows</em>. Before, when you tried to take the <code>mean</code>, I think it failed and fell back to <code>_aggregate_item_by_item</code> (That's why I had you redefine <code>df['year']</code>, and <code>fn</code>, that will fail for sure). But when you switched to <code>return 0</code>, that succeeded, and so followed the <code>try</code> part.</p>
<p>This is all just a bit of guesswork, but I think that's what's happening.</p>
<p>I'm actually working on the group by code right now, and this issue has come up (see <a href="https://github.com/pydata/pandas/issues/5264#issuecomment-30571813" rel="nofollow">here</a>). I don't think the function should ever be applied to the grouping column, but it <em>sometimes</em> is (R does the same). Post there if you have an opinion on the matter.</p>
|
python|pandas
| 2
|
1,721
| 21,217,108
|
python datapanda: getting values from rows into list
|
<p>I'd like to get the values of my dataframe's rows into a list:</p>
<pre><code> A B C
1 2 3 2
2 4 2 6
list1 = [2, 3, 2]
list2 = [4, 2, 6]
</code></pre>
<p>How can I do that?</p>
|
<p>You can do it using <code>values.tolist()</code>:</p>
<pre><code>from pandas import DataFrame
df = DataFrame({'a': [2,4], 'b': [3,2], 'c': [2,6]})
print df
list1 = df.irow(0).values.tolist()
list2 = df.irow(1).values.tolist()
</code></pre>
<p><strong>output:</strong></p>
<pre><code> a b c
0 2 3 2
1 4 2 6
[2L, 3L, 2L]
[4L, 2L, 6L]
</code></pre>
<p>If you want it as <code>int</code> you can map the list using <code>map(int, list1)</code></p>
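<p>Note that <code>irow</code> has since been removed from pandas; in current versions the equivalent is:</p>
<pre><code>list1 = df.iloc[0].tolist()
list2 = df.iloc[1].tolist()
</code></pre>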
|
python|pandas
| 2
|
1,722
| 21,371,180
|
numpy: apply operation to multidimensional array
|
<p>Assume I have a matrix of matrices, which is an order-4 tensor. What's the best way to apply the same operation to all the submatrices, similar to Map in Mathematica?<br></p>
<pre><code>#!/usr/bin/python3
from pylab import *
t=random( (8,8,4,4) )
#t2=my_map(det,t)
#then shape(t2) becomes (8,8)
</code></pre>
<p><b>EDIT</b><br>
Sorry for the bad English, since it's not my native one.<br></p>
<p>I tried <code>numpy.linalg.det</code>, but it doesn't seem to cope well with 3D or 4D tensors:</p>
<pre><code>>>> import numpy as np
>>> a=np.random.rand(8,8,4,4)
>>> np.linalg.det(a)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 1703, in det
sign, logdet = slogdet(a)
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 1645, in slogdet
_assertRank2(a)
File "/usr/lib/python3/dist-packages/numpy/linalg/linalg.py", line 155, in _assertRank2
'two-dimensional' % len(a.shape))
numpy.linalg.linalg.LinAlgError: 4-dimensional array given. Array must be two-dimensional
</code></pre>
<p><b>EDIT2 (Solved)</b>
The problem is older numpy version (<1.8) doesn't support inner loop in <code>numpy.linalg.det</code>, updating to numpy 1.8 solves the problem.</p>
|
<p>First check the documentation for the operation that you intend to use. Many have a way of specifying which axis to operate on (<code>np.sum</code>). Others specify which axes they use (e.g. <code>np.dot</code>).</p>
<p>For <code>np.linalg.det</code> the documentation includes:</p>
<blockquote>
<p>a : (..., M, M) array_like
Input array to compute determinants for.</p>
</blockquote>
<p>So <code>np.linalg.det(t)</code> returns an <code>(8,8)</code> array, having calculated each <code>det</code> using the last 2 dimensions.</p>
<p>While it is possible to iterate on dimensions (the first is the default), it is better to write a function that makes use of <code>numpy</code> operations that use the whole array.</p>
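<p>For example (the inner loop over the trailing two dimensions requires numpy >= 1.8, as noted in the question's edit):</p>
<pre><code>>>> t = np.random.random((8, 8, 4, 4))
>>> np.linalg.det(t).shape   # one determinant per trailing 4x4 matrix
(8, 8)
</code></pre>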
|
python|arrays|numpy|multidimensional-array
| 1
|
1,723
| 20,903,865
|
Geometric progression using Python / Pandas / Numpy (without loop and using recurrence)
|
<p>I'd like to implement a geometric progression using Python / Pandas / Numpy.</p>
<p>Here is what I did:</p>
<pre><code>N = 10
n0 = 0
n_array = np.arange(n0, n0 + N, 1)
u = pd.Series(index = n_array)
un0 = 1
u[n0] = un0
for n in u.index[1::]:
#u[n] = u[n-1] + 1.2 # arithmetic progression
u[n] = u[n-1] * 1.2 # geometric progression
print(u)
</code></pre>
<p>I get:</p>
<pre><code>0 1.000000
1 1.200000
2 1.440000
3 1.728000
4 2.073600
5 2.488320
6 2.985984
7 3.583181
8 4.299817
9 5.159780
dtype: float64
</code></pre>
<p>I wonder how I could avoid to use this for loop.</p>
<p>I had a look at
<a href="https://fr.wikipedia.org/wiki/Suite_g%C3%A9om%C3%A9trique" rel="noreferrer">https://fr.wikipedia.org/wiki/Suite_g%C3%A9om%C3%A9trique</a>
and found that u_n can be expressed as: u_n = u_{n_0} * q^{n-n_0}</p>
<p>So I did that</p>
<pre><code>n0 = 0
N = 10
n_array = np.arange(n0, n0 + N, 1)
un0 = 1
q = 1.2
u = pd.Series(map(lambda n: un0 * q ** (n - n0), n_array), index = n_array)
</code></pre>
<p>That's ok... but I'm looking for a way to define it in a recurrent way like</p>
<pre><code>u_n0 = 1
u_n = u_{n-1} * 1.2
</code></pre>
<p>But I don't see how to do it using Python / Pandas / Numpy... I wonder if it's possible.</p>
|
<p>Another possibility, that is probably more computationally efficient than using exponentiation:</p>
<pre><code>>>> N, un0, q = 10, 1, 1.2
>>> u = np.empty((N,))
>>> u[0] = un0
>>> u[1:] = q
>>> np.cumprod(u)
array([ 1. , 1.2 , 1.44 , 1.728 , 2.0736 ,
2.48832 , 2.985984 , 3.5831808 , 4.29981696, 5.15978035])
</code></pre>
|
python|math|numpy|pandas
| 11
|
1,724
| 21,030,391
|
How to normalize a NumPy array to a unit vector?
|
<p>I would like to convert a NumPy array to a unit vector. More specifically, I am looking for an equivalent version of this normalisation function:</p>
<pre><code>def normalize(v):
norm = np.linalg.norm(v)
if norm == 0:
return v
return v / norm
</code></pre>
<p>This function handles the situation where vector <code>v</code> has the norm value of 0.</p>
<p>Is there any similar functions provided in <code>sklearn</code> or <code>numpy</code>?</p>
|
<p>If you're using scikit-learn you can use <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.normalize.html#sklearn.preprocessing.normalize" rel="noreferrer"><code>sklearn.preprocessing.normalize</code></a>:</p>
<pre><code>import numpy as np
from sklearn.preprocessing import normalize
x = np.random.rand(1000)*10
norm1 = x / np.linalg.norm(x)
norm2 = normalize(x[:,np.newaxis], axis=0).ravel()
print np.all(norm1 == norm2)
# True
</code></pre>
|
python|numpy|scikit-learn|statistics|normalization
| 222
|
1,725
| 3,006,844
|
Python Library installation
|
<p>I have two questions regarding python libraries:</p>
<ol>
<li><p>I would like to know if there is something like a "super" python library which lets me install ALL or at least all scientific useful python libraries, which I can install once and then I have all I need.</p></li>
<li><p>There are a number of annoying problems when installing different libraries (pythonpath, can't import because it is not installed BUT it is installed). Is there any good documentation about common installation errors and how to avoid them?</p></li>
<li><p>If there is no total solution I would be interested in numpy, scipy, matplotlib, PIL</p></li>
</ol>
<p>Thanks a lot for the attention and help</p>
<p>Best</p>
<p>Z</p>
|
<p>In Windows environments, <a href="http://www.pythonxy.com/" rel="noreferrer">pythonXY</a> is what you are looking for.</p>
|
python|numpy|matplotlib|python-imaging-library
| 5
|
1,726
| 63,585,260
|
Reading Columns without headers
|
<p>I have some code that reads all the CSV files in a certain folder and concatenates them into one excel file. This code works as long as the CSV's have headers but I'm wondering if there is a way to alter my code if my CSV's didn't have any headers.</p>
<p>Here is what works:</p>
<pre><code>path = r'C:\Users\Desktop\workspace\folder'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
df = pd.read_csv(filename, index_col=None, header=0)
df = df[~df['Ran'].isin(['Active'])]
li.append(df)
frame = pd.concat(li, axis=0, ignore_index=True)
frame.drop_duplicates(subset=None, inplace=True)
</code></pre>
<p>What this is doing is deleting any row in my CSV's with the word "Active" under the "Ran" column. But if I didn't have a "Ran" header for this column, is there another way to read this and do the same thing?</p>
<p>Thanks in advance!</p>
|
<pre><code> df = df[~df['Ran'].isin(['Active'])]
</code></pre>
<p>Instead of selecting a column by name, select it by index. If the <code>'Ran'</code> column is the third column in the csv use...</p>
<pre><code> df = df[~df.iloc[:,2].isin(['Active'])]
</code></pre>
<hr />
<p>If some of your files have headers and some don't then you probably should look at the first line of each file before you make a DataFrame with it.</p>
<pre><code>for filename in all_files:
    with open(filename) as f:
        # strip the trailing newline before comparing with the header list
        first = next(f).strip().split(',')
    if first == ['my','list','of','headers']:
        header = 0
        names = None
    else:
        header = None
        names = ['my','list','of','headers']
    # no seek needed: read_csv reopens the file by name
    df = pd.read_csv(filename, index_col=None, header=header, names=names)
    df = df[~df['Ran'].isin(['Active'])]
</code></pre>
|
python|pandas|csv
| 0
|
1,727
| 63,632,541
|
Order of spline interpolation for pandas dataframe
|
<p>I have the following dataframe which shows data from Motion Capture, where each column is a marker (i.e. position data) and rows are time:</p>
<pre><code> LTHMB X RTHMB X
0 932.109 872.921
1 934.605 873.798
2 932.383 873.998
3 940.946 875.609
4 941.549 875.875
... ... ...
14765 NaN 602.700
14766 562.350 NaN
14767 562.394 NaN
14768 562.421 NaN
14769 562.490 602.705
</code></pre>
<p>In the data, there are some NaN values that I need to fill. I'm not really an expert in this so I'm not sure what is the best way to fill these.</p>
<p>I know I can do forward/backward fill, and I also read about spline interpolation, which seems more sophisticated. In the documentation for <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html#pandas.DataFrame.interpolate" rel="nofollow noreferrer">pandas.DataFrame.interpolate</a> it states that for spline you have to specify the order.</p>
<p>What would I use for the order in this case? Each marker has an X, Y and Z. Does that mean I'd use a cubic spline, or is it not that simple?</p>
|
<p>The order of the spline has nothing to do with the number of features you have in the dataset; each feature is interpolated independently of the others. Before applying an algorithm it is therefore important to understand how it works and what each of its parameters (such as 'order') contributes.</p>
<p>For intuition, a cubic (order = 3) spline is the process of constructing a spline which consists of "piecewise" polynomials of degree three.</p>
<p><a href="https://i.stack.imgur.com/dSSaX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dSSaX.png" alt="enter image description here" /></a></p>
<p>Note that all polynomials are just valid within an interval; they compose the interpolation function. While extrapolation predicts a development outside the range of the data, interpolation works just within the data boundaries.</p>
<p>The "order" of the spline is the order of these "piecewise" polynomials.</p>
<p><a href="https://i.stack.imgur.com/NeqpK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NeqpK.jpg" alt="enter image description here" /></a>
Source: Google</p>
<p>As you can see, a linear spline (order=1) fits degree one polynomials (straight lines) between the ranges, while a 7th order spline fits 7th order polynomials.</p>
<hr />
<p><strong>Which should you use?</strong></p>
<p>No one can simply tell you which would be a better fit. You will have to visualize it to see if a specific interpolation technique is able to give you a relevant imputation or not.</p>
<p>The only way you can guarantee that you are using the right interpolation technique is by comparing them with R2_score. You can do the following (a minimal code sketch is given after the list) -</p>
<ol>
<li>Take a complete sequence from your data (no missing values)</li>
<li>Randomly set a percentage of this data as missing (keep these hidden values separately)</li>
<li>Try multiple interplotation methods to complete the sequence (use order 3, 5, 7 splines etc)</li>
<li>Take the predicted sequence and compare it to the actual sequence using R2_score.</li>
<li>The one with the highest r2_score is the one that should fit your data the best</li>
<li>Repeat this multiple times, at multiple % of injected missing data, to form a valid study on which one is better than the others in general.</li>
</ol>
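<p>A minimal sketch of steps 1-5, using one of the columns from the question (the 10% missing fraction is arbitrary, and <code>method='spline'</code> requires scipy):</p>
<pre><code>import numpy as np
from sklearn.metrics import r2_score

s = df['LTHMB X'].dropna().reset_index(drop=True)   # a complete sequence
rng = np.random.default_rng(0)
# hide 10% of the interior points (endpoints kept, so we interpolate, not extrapolate)
holdout = rng.choice(s.index[1:-1], size=int(0.1 * len(s)), replace=False)

masked = s.copy()
masked[holdout] = np.nan                            # inject the missing values

for order in (1, 3, 5, 7):
    filled = masked.interpolate(method='spline', order=order)
    score = r2_score(s[holdout], filled[holdout])   # compare only the hidden values
    print(order, score)
</code></pre>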
<p>You can find this approach implemented roughly <a href="https://medium.com/@drnesr/filling-gaps-of-a-time-series-using-python-d4bfddd8c460" rel="nofollow noreferrer">here</a></p>
<p><a href="https://i.stack.imgur.com/gw4Q7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gw4Q7.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|interpolation|spline
| 2
|
1,728
| 63,582,734
|
What can I apply numpy.std() to?
|
<p>I have very little knowledge of statistics, so forgive me, but I'm very confused by how the numpy function <code>std</code> works, and the documentation is unfortunately not clearing it up.</p>
<p>From what I understand it will compute the standard deviation of a distribution from the array, but when I set up a Gaussian with a standard deviation of <code>0.5</code> with the following code, <code>numpy.std</code> returns 0.2:</p>
<pre><code>sigma = 0.5
mu = 1
x = np.linspace(0, 2, 100)
f = (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp((-1 / 2) * ((x - mu) / sigma)**2)
plt.plot(x, f)
plt.show()
print(np.std(f))
</code></pre>
<p>This is the distribution:</p>
<p><a href="https://i.stack.imgur.com/8CScr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8CScr.png" alt="enter image description here" /></a></p>
<p>I have no idea what I'm misunderstanding about how the function works. I thought maybe I would have to tell it the x-values associated with the y-values of the distribution but there's no argument for that in the function. Why is <code>numpy.std</code> not returning the actual standard deviation of my distribution?</p>
|
<p>I suspect that you understand perfectly well how the function works, but are misunderstanding the meaning of your data. Standard deviation is a measure of the spread of data about the mean value.</p>
<p>When you say <code>std(f)</code>, you are computing the spread of the y-values about their mean. Looking at the graph in the question, a vertical mean of ~0.5 and a standard deviation of ~0.2 are not far-fetched. Notice that <code>std(f)</code> does not involve the x-values in any way.</p>
<p>What you are expecting to get is the standard deviation of the x-values, weighted by the y-values. This is essentially the idea behind a probability density function (PDF).</p>
<p>Let's go through the computation manually to understand the differences. The mean of the x-values is normally <code>x.sum() / x.size</code>. But that is only true if the weight of each value is 1. If you weigh each value by the corresponding <code>f</code> value, you can write</p>
<pre><code>m = (x * f).sum() / f.sum()
</code></pre>
<p>Standard deviation is the root-mean-square about the mean. That means computing the average squared deviation from the mean, and taking the square root. We can compute the weighted mean of squared deviation in the exact same way we did before:</p>
<pre><code> s = np.sqrt(np.sum((x - m)**2 * f) / f.sum())
</code></pre>
<p>Notice that the value of <code>s</code> computed this way from your question is not 0.5, but rather 0.44. This is because your PDF is incomplete, and the missing tails add significantly to the spread.</p>
<p>Here is an example showing that the standard deviation converges to the expected value as you compute it for a larger sample of the PDF:</p>
<pre><code>>>> def s(x, y):
... m = (x * y).sum() / y.sum()
... return np.sqrt(np.sum((x - m)**2 * y) / y.sum())
>>> sigma = 0.5
>>> x1 = np.linspace(-1, 1, 100)
>>> y1 = (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp(-0.5 * (x1 / sigma)**2)
>>> s(x1, y1)
0.4418881290522094
>>> x2 = np.linspace(-2, 2, 100)
>>> y2 = (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp(-0.5 * (x2 / sigma)**2)
>>> s(x2, y2)
0.49977093783005005
>>> x3 = np.linspace(-3, 3, 100)
>>> y3 = (1 / (sigma * np.sqrt(2 * np.pi))) * np.exp(-0.5 * (x3 / sigma)**2)
>>> s(x3, y3)
0.49999998748515206
</code></pre>
|
python|numpy|standard-deviation
| 4
|
1,729
| 63,508,035
|
How to iterate through dataframe rows, split data to separate dataframes based on column?
|
<p>I've looked at iterrows, list comprehension, dictionary comprehension, apply, and itertuples. I cannot get any of those to do the scenario below. Any help would be greatly appreciated!</p>
<p>Example original dataframe:</p>
<pre><code>ID |State |Invoice|Price|Email
1000|Texas |1 |2 |texas@test.com
1000|Texas |2 |5 |texas@test.com
1001|Alabama|3 |4 |alabama@test.com
1000|Texas |4 |8 |texas@test.com
1002|Georgia|5 |3 |georgia@test.com
1001|Alabama|6 |6 |alabama@test.com
</code></pre>
<p><strong>Expected result</strong> Iterate through original dataframe, pull by ID to include all data to separate dataframes.</p>
<p>DF1:</p>
<pre><code>ID |State |Invoice|Price|Email
1000|Texas |1 |2 |texas@test.com
1000|Texas |2 |5 |texas@test.com
1000|Texas |4 |8 |texas@test.com
</code></pre>
<p>Df2:</p>
<pre><code>ID |State |Invoice|Price|Email
1001|Alabama|3 |4 |alabama@test.com
1001|Alabama|6 |6 |alabama@test.com
</code></pre>
<p>Df3:</p>
<pre><code>ID |State |Invoice|Price|Email
1002|Georgia|5 |3 |georgia@test.com
</code></pre>
|
<p>I was able to create a dictionary that has each dataframe split out by ID using the following code:</p>
<pre><code>dict_of_dfs = {
ID: group_df
for ID, group_df in df.groupby('ID')
}
</code></pre>
<p>I was also able to create a list that has each dataframe split out by ID using the following code:</p>
<pre><code>list_of_dfs = [
group_df
for _, group_df in df.groupby('ID')
]
</code></pre>
|
python|pandas|dataframe
| 0
|
1,730
| 21,924,303
|
Numpy Mutidimensional Subsetting
|
<p>I have searched long and hard for an answer to this question, but haven't found anything that quite fits the bill. I have a multidimensional numpy array containing data (in my case 3 dimensional) and another array (2 dimensional) that contains information on which value I want along the last dimension of the original array. For instance, here is a simple example illustrating the problem. I have an array <code>a</code> of data, and another array <code>b</code> containing indices along dimension 2 of <code>a</code>. I want a new two dimensional array <code>c</code> where <code>c[i, j] = a[i, j, b[i, j]]</code>. The only way that I can think to do it is with a loop, as outlined below. However, this seems clunky and slow.</p>
<pre><code>In [3]: a = np.arange(8).reshape((2, 2, 2))
In [4]: a
Out[4]:
array([[[0, 1],
[2, 3]],
[[4, 5],
[6, 7]]])
In [6]: b = np.array([[0, 1], [1, 1]])
In [8]: c = np.zeros_like(b)
In [9]: for i in xrange(2):
...: for j in xrange(2):
...: c[i, j] = a[i, j, b[i, j]]
In [10]: c
Out[10]:
array([[0, 3],
[5, 7]])
</code></pre>
<p>Is there a more pythonic way of doing this, perhaps some numpy indexing feature of which I am unaware?</p>
|
<p>When you fancy-index a multidimensional array with multidimensional arrays, the indices for each dimension are broadcasted together. With that in mind, you can do:</p>
<pre><code>>>> rows = np.arange(a.shape[0])
>>> cols = np.arange(a.shape[1])
>>> a[rows[:, None], cols, b]
array([[0, 3],
[5, 7]])
</code></pre>
|
python|arrays|numpy
| 2
|
1,731
| 21,687,633
|
adding column with per-row computed time difference from group start?
|
<p>(newbie to python and pandas)</p>
<p>I have a data set of 15 to 20 million rows; each row is a time-indexed observation of when a 'user' was seen, and I need to analyze the visit-per-day patterns of each user, normalized to their first visit. So, I'm hoping to plot with an X axis of "days after first visit" and a Y axis of "visits by this user on this day", i.e., I need to get a series indexed by a timedelta and with values of visits in the period ending with that delta, e.g. [0:1, 3:5, 4:2, 6:8]. But I'm stuck very early ...</p>
<p>I start with something like this:</p>
<pre><code>rng = pd.to_datetime(['2000-01-01 08:00', '2000-01-02 08:00',
'2000-01-01 08:15', '2000-01-02 18:00',
'2000-01-02 17:00', '2000-03-01 08:00',
'2000-03-01 08:20','2000-01-02 18:00'])
uid=Series(['u1','u2','u1','u2','u1','u2','u2','u3'])
misc=Series(['','x1','A123','1.23','','','','u3'])
df = DataFrame({'uid':uid,'misc':misc,'ts':rng})
df=df.set_index(df.ts)
grouped = df.groupby('uid')
firstseen = grouped.first()
</code></pre>
<p>The <code>ts</code> values are unique to each <code>uid</code>, but can be duplicated (two <code>uid</code> can be seen at the same time, but any one <code>uid</code> is seen only once at any one timestamp)</p>
<p>The first step is (I think) to add a new column to the DataFrame, showing for each observation what the timedelta is back to the first observation for that user. But, I'm stuck getting that column in the DataFrame. The simplest thing I tried gives me an obscure-to-newbie error message:</p>
<pre><code>df['sinceseen'] = df.ts - firstseen.ts[df.uid]
...
ValueError: cannot reindex from a duplicate axis
</code></pre>
<p>So I tried a brute-force method:</p>
<pre><code>def f(row):
return row.ts - firstseen.ts[row.uid]
df['sinceseen'] = Series([{idx:f(row)} for idx, row in df.iterrows()], dtype=timedelta)
</code></pre>
<p>In this attempt, <code>df</code> gets a <code>sinceseen</code> but it's all <code>NaN</code> and shows a type of <code>float</code> for <code>type(df.sinceseen[0])</code> - though, if I just print the Series (in iPython) it generates a nice list of <code>timedeltas</code>.</p>
<p>I'm working back and forth through "Python for Data Analysis" and it seems like <code>apply()</code> should work, but</p>
<pre><code>def fg(ugroup):
ugroup['sinceseen'] = ugroup.index - ugroup.index.min()
return ugroup
df = df.groupby('uid').apply(fg)
</code></pre>
<p>gives me a <code>TypeError</code> on the "<code>ugroup.index - ugroup.index.min()</code>" even though each of the two operands is a <code>Timestamp</code>.</p>
<p>So, I'm flailing - can someone point me at the "pandas" way to get to the data structure I need?</p>
|
<p>Does this help you get started?</p>
<pre><code>>>> df = DataFrame({'uid':uid,'misc':misc,'ts':rng})
>>> df = df.sort(["uid", "ts"])
>>> df["since_seen"] = df.groupby("uid")["ts"].apply(lambda x: x - x.iloc[0])
>>> df
misc ts uid since_seen
0 2000-01-01 08:00:00 u1 0 days, 00:00:00
2 A123 2000-01-01 08:15:00 u1 0 days, 00:15:00
4 2000-01-02 17:00:00 u1 1 days, 09:00:00
1 x1 2000-01-02 08:00:00 u2 0 days, 00:00:00
3 1.23 2000-01-02 18:00:00 u2 0 days, 10:00:00
5 2000-03-01 08:00:00 u2 59 days, 00:00:00
6 2000-03-01 08:20:00 u2 59 days, 00:20:00
7 u3 2000-01-02 18:00:00 u3 0 days, 00:00:00
[8 rows x 4 columns]
</code></pre>
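<p>Note for current pandas versions: <code>DataFrame.sort</code> has since been removed, so the sorting step becomes:</p>
<pre><code>df = df.sort_values(["uid", "ts"])
</code></pre>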
|
python|pandas
| 3
|
1,732
| 29,955,457
|
Cast dataframe groupby to dataframe
|
<p>All I need is to cast a DataFrameGroupBy object to a DataFrame in order to export to excel using <code>df.to_excel()</code>. When I try to do <code>df_groupby = pd.DataFrame(df_groupby)</code> I get the error: <code>PandasError: DataFrame constructor not properly called!</code></p>
<p>Original df:</p>
<pre><code> df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
'foo', 'bar', 'foo', 'foo'],
'B' : ['one', 'one', 'two', 'three',
'two', 'two', 'one', 'three'],
'C' : randn(8), 'D' : randn(8)})
In [2]: df
Out[2]:
A B C D
0 foo one 0.469112 -0.861849
1 bar one -0.282863 -2.104569
2 foo two -1.509059 -0.494929
3 bar three -1.135632 1.071804
4 foo two 1.212112 0.721555
5 bar two -0.173215 -0.706771
6 foo one 0.119209 -1.039575
7 foo three -1.044236 0.271860
</code></pre>
<p><code>grouped = df.groupby('A')</code></p>
<p>I want to export <code>grouped</code> to excel.</p>
|
<p>I am not sure what you want to achieve, so here is a set of possible answers.</p>
<p>But first: the bad news. A groupby object is not a DataFrame, and it cannot be saved to excel (or simply turned into a DataFrame).</p>
<p>1) If you just want to sort the DataFrame, this will also "group" things</p>
<pre><code>df.sort('A').to_excel('filename.xls')
</code></pre>
<p>2) If you want to get rid of the default index in Excel</p>
<pre><code>df.sort('A').to_excel('filename.xls', index=None)
</code></pre>
<p>3) If you want each group on its own worksheet in Excel</p>
<pre><code>grouped = df.groupby('A')
from pandas import ExcelWriter
writer = ExcelWriter('filename.xls')
for k, g in grouped:
g.to_excel(writer, k)
writer.save()
</code></pre>
<p>4) You can concatenate the groups into a new DataFrame. But this is pretty much the same as the first sort option above</p>
<pre><code>grouped = df.groupby('A')
new_df = pd.DataFrame()
for k, g in grouped:
new_df = pd.concat([new_df, g], axis=0)
new_df.to_excel('filename.xls')
</code></pre>
<p>5) A first exercise in meaninglessness ... a pass through transform function ... but this just gives you your DataFrame back ...</p>
<pre><code>df = df.set_index('A')
grouped = df.groupby(level=0)
grouped.transform(lambda x: x).to_excel('filename.xls')
</code></pre>
<p>6) Another exercise in meaninglessness ... this time with a pass through filter function ...</p>
<pre><code># start with initial data
grouped = df.groupby('A')
grouped.filter(lambda x: True).to_excel('filename.xls')
</code></pre>
<p>7) If you want to look inside of the groupby you can always do the following ... (but note this does not save to Excel) ...</p>
<pre><code># start with initial data
grouped = df.groupby('A')
groups = dict(list(grouped))
print(groups['foo'])
print(groups['bar'])
</code></pre>
|
python|pandas|export-to-excel
| 3
|
1,733
| 30,267,338
|
Setting values in pandas Series is slow, why?
|
<h3>Question</h3>
<p>Does anyone know why setting an item directly on a pandas series is so incredibly slow? Am I doing something wrong, or is it just the way it is?</p>
<p>I ran a couple of tests to see what the fastest method is to set a value on a pandas Series object. Here are the results, ordered from fast to slow:</p>
<h3>initialize array, set using integer index, create series</h3>
<pre><code>%%timeit
a = np.empty(1000, dtype='float')
for i in range(len(a)):
a[i] = 1.0
s = pd.Series(data=a)
</code></pre>
<p>1000 loops, best of 3: 630 µs per loop</p>
<h3>create empty list, add item using append, create series</h3>
<pre><code>%%timeit
l = []
for i in range(1000):
l.append(1.0)
s = pd.Series(data=l)
</code></pre>
<p>1000 loops, best of 3: 1.05 ms per loop</p>
<h3>initialize array, create series, set using set_value</h3>
<pre><code>%%timeit
a = np.empty(1000, dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.set_value(i, 1.0)
</code></pre>
<p>100 loops, best of 3: 18.5 ms per loop</p>
<h3>initialize array, create series, set using integer index</h3>
<pre><code>%%timeit
a = np.empty(1000, dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s[i] = 1.0
</code></pre>
<p>10 loops, best of 3: 30.2 ms per loop</p>
<h3>intialize array, create series, set using iat</h3>
<pre><code>%%timeit
a = np.empty(1000, dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.iat[i] = 1.0
</code></pre>
<p>10 loops, best of 3: 36.2 ms per loop</p>
<h3>initialize array, create series, set using iloc</h3>
<pre><code>%%timeit
a = np.empty(1000, dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.iloc[i] = 1.0
</code></pre>
<p>1 loops, best of 3: 280 ms per loop</p>
|
<p>From the <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#fast-scalar-value-getting-and-setting" rel="noreferrer">docs</a></p>
<blockquote>
<p>Since indexing with [] must handle a lot of cases (single-label
access, slicing, boolean indexing, etc.), it has a bit of overhead in
order to figure out what you’re asking for.</p>
</blockquote>
<p>So I get the following which should be comparable:</p>
<pre><code>In [13]:
%%timeit
a = np.empty(1000,dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.iat[i] = 1.0
10 loops, best of 3: 23.3 ms per loop
In [14]:
%%timeit
a = np.empty(1000,dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.iloc[i] = 1.0
10 loops, best of 3: 159 ms per loop
</code></pre>
<p>for the other tests:</p>
<pre><code>In [15]:
%%timeit
l = []
for i in range(1000):
l.append(1.0)
s = pd.Series(data=l)
1000 loops, best of 3: 525 µs per loop
In [16]:
%%timeit
a = np.empty(1000,dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s.set_value(i,1.0)
100 loops, best of 3: 10.1 ms per loop
In [17]:
%%timeit
a = np.empty(1000,dtype='float')
s = pd.Series(data=a)
for i in range(len(a)):
s[i] = 1.0
100 loops, best of 3: 17.5 ms per loop
</code></pre>
|
python|pandas
| 5
|
1,734
| 53,380,971
|
Saving static files in redis session (Python flask)
|
<p>I'm trying to build a machine learning web app where users can input the parameters and the predictions will be outputted as a .txt file. I'm also trying to use redis sessions as part of the web app so each users' .txt file will be different from each other.</p>
<p>I'm using </p>
<pre><code>df.to_csv(filename.txt)
</code></pre>
<p>to transform my predictions dataframe into a .txt file. <strong>Is it possible to save this .txt file in a redis session instead of saving it to the directory where the source code lies?</strong> </p>
|
<p>You need to serialize your .txt file, but in my opinion I would not use Redis for such a task; consider SQLite, or writing the file directly on the server, instead.</p>
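<p>If you do want to keep the contents in Redis anyway, a minimal sketch with redis-py (the key prefix and <code>session_id</code> are hypothetical):</p>
<pre><code>import redis

r = redis.Redis()
# to_csv() with no path returns the CSV text as a string
r.set('predictions:' + session_id, df.to_csv())
# read it back later, e.g. when the user downloads the file
text = r.get('predictions:' + session_id).decode()
</code></pre>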
|
python|pandas|dataframe|machine-learning|redis
| 0
|
1,735
| 53,689,868
|
Splitting Python MeshGrid Into Cells
|
<p><strong>Problem Statement</strong></p>
<p>Need to split an N-Dimensional MeshGrid into "cubes":</p>
<p>Ex)
2-D Case:</p>
<p>(-1,1) |(0,1) |(1,1)</p>
<p>(-1,0) |(0,0) |(1,0)</p>
<p>(-1,-1)|(0,-1)|(1,-1)</p>
<p>There will be 4 cells, each with 2^D points:</p>
<p>I want to be able to process the mesh, placing the coordinate points of each cell into a container to do further processing.</p>
<pre><code>Cells = [{(-1,1), (0,1), (-1,0), (0,0)},
         {(0,1), (1,1), (0,0), (1,0)},
         {(-1,0), (0,0), (-1,-1), (0,-1)},
         {(0,0), (1,0), (0,-1), (1,-1)}]
</code></pre>
<p>I use the following to generate the mesh for an arbitrary dimension d:</p>
<pre><code>grid = [np.linspace(-1.0 , 1.0, num = K+1) for i in range(d)]
res_to_unpack = np.meshgrid(*grid,indexing = 'ij')
</code></pre>
<p>Which has output:</p>
<pre><code>[array([[-1., -1., -1.],
[ 0., 0., 0.],
[ 1., 1., 1.]]), array([[-1., 0., 1.],
[-1., 0., 1.],
[-1., 0., 1.]])]
</code></pre>
<p>So I want to be able to generate the above cells container for a given D dimensional mesh grid. Split on a given K that is a power of 2. </p>
<p>I need this container so for each cell, I need to reference all 2^D points associated and calculate the distance from an origin. </p>
<p><strong>Edit For Clarification</strong></p>
<p>K should partition the grid into K ** D number of cells, with (K+1) ** D points. Each Cell should have 2 ** D number of points. Each "cell" will have volume (2/K)^D. </p>
<p>So for K = 4, D = 2</p>
<pre><code>Cells = [ {(-1,1),(-0.5,1),(-1,0.5),(-0.5,0.5)},
          {(-0.5,1),(-0.5,0.5),(0.0,1.0),(0,0.5)},
...
{(0.0,-0.5),(0.5,-0.5),(0.0,-1.0),(0.5,-1.0)},
{(0.5,-1.0),(0.5,-1.0),(1.0,-0.5),(1.0,-1.0)}]
</code></pre>
<p>This is output for TopLeft, TopLeft + Right Over, Bottom Left, Bottom Left + Over Left. There will be 16 cells in this set, each with four coordinates. For increasing K, say K = 8, there will be 64 cells, each with four points.</p>
|
<p>This should give you what you need:</p>
<pre><code>from itertools import product
import numpy as np
def splitcubes(K, d):
coords = [np.linspace(-1.0 , 1.0, num=K + 1) for i in range(d)]
grid = np.stack(np.meshgrid(*coords)).T
ks = list(range(1, K))
for slices in product(*([[slice(b,e) for b,e in zip([None] + ks, [k+1 for k in ks] + [None])]]*d)):
yield grid[slices]
def cubesets(K, d):
if (K & (K - 1)) or K < 2:
raise ValueError('K must be a positive power of 2. K: %s' % K)
return [set(tuple(p.tolist()) for p in c.reshape(-1, d)) for c in splitcubes(K, d)]
</code></pre>
<h1>Demonstration of 2D case</h1>
<p>Here's a little demonstration of the 2D case:</p>
<pre><code>import matplotlib.pyplot as plt
def assemblecube(c, spread=.03):
c = np.array(list(c))
c = c[np.lexsort(c.T[::-1])]
d = int(np.log2(c.size))
for i in range(d):
c[2**i:2**i + 2] = c[2**i + 1:2**i - 1:-1]
# get the point farthest from the origin
sp = c[np.argmax((c**2).sum(axis=1)**.5)]
# shift all points a small distance towards that farthest point
c += sp * .1 #np.copysign(np.ones(sp.size)*spread, sp)
# create several different orderings of the same points so that matplotlib will draw a closed shape
return [(np.roll(c, i, axis=1) - (np.roll(c, i, axis=1)[0] - c[0])[None,:]).T for i in range(d)]
fig = plt.figure(figsize=(6,6))
ax = fig.gca()
for i,c in enumerate(cubesets(4, 2)):
for cdata in assemblecube(c):
p = ax.plot(*cdata, c='C%d' % (i % 9))
ax.set_aspect('equal', 'box')
fig.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/QEZjz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/QEZjz.png" alt="enter image description here"></a></p>
<p>The cubes have been shifted slightly apart for visualization purposes (so they don't overlap and cover each other up). </p>
<h1>Demonstration of 3D case</h1>
<p>Here's the same thing for the 3D case:</p>
<pre><code>import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, projection='3d')
for i,c in enumerate(cubesets(2,3)):
for cdata in assemblecube(c, spread=.05):
ax.plot(*cdata, c=('C%d' % (i % 9)))
plt.gcf().gca().set_aspect('equal', 'box')
plt.show()
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/5Qq2M.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5Qq2M.png" alt="enter image description here"></a></p>
<h1>Demonstration for <code>K=4</code></h1>
<p>Here's the output for the same 2D and 3D demonstrations as above, but with <code>K=4</code>:</p>
<p><a href="https://i.stack.imgur.com/VqIsz.png" rel="noreferrer"><img src="https://i.stack.imgur.com/VqIsz.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/NukLU.png" rel="noreferrer"><img src="https://i.stack.imgur.com/NukLU.png" alt="enter image description here"></a></p>
|
python|numpy|geometry|simulation|computational-geometry
| 9
|
1,736
| 53,736,983
|
How to interpret TensorBoard loss graph?
|
<p><a href="https://i.stack.imgur.com/xyzNk.png" rel="nofollow noreferrer">TensorBoard</a></p>
<p>I'm using Tensorflow API for object detection and here is the loss plot from its default settings. Can you help me interpret differences of <code>classification_loss</code>, <code>localization_loss</code> and <code>objectness_loss</code>? Thanks!</p>
|
<p>You could calculate the location loss like this:</p>
<pre><code>def location_loss( x, y, width, height, l_x, l_y, l_width, l_height, alpha = 0.001 ):
point_loss = ( tf.square( l_x - x ) + tf.square( l_y - y ) ) * alpha
size_loss = ( tf.square( tf.sqrt( l_width ) - tf.sqrt( width ) ) + tf.square( tf.sqrt( l_height ) - tf.sqrt( height ) ) ) * alpha
location_loss = point_loss + size_loss
return location_loss
</code></pre>
<p>The losses you mentioned are not computed by TensorBoard itself; they come from another component (the Object Detection API's TensorBoard plugin), so you should check its source code for the definitions of these losses.</p>
|
tensorflow|tensorboard|object-detection-api
| 0
|
1,737
| 53,470,831
|
pandas get dummies for column with list
|
<p>Input:- </p>
<pre><code>empNo name
1234 [ AB, DE ]
5678 [ FG, IJ ]
</code></pre>
<p>Command:-</p>
<pre><code>dataFrame = dataFrame.join(dataFrame.name.str.join('|').str.get_dummies().add_prefix('dummy_name_'))
</code></pre>
<p>The above command creates a dummy column for each individual character of the <code>name</code> values, instead of one per list element:</p>
<p>Output:- </p>
<pre><code>empNo name dummy_name_A dummy_name_B dummy_name_D dummy_name_E dummy_name_F dummy_name_G dummy_name_I dummy_name_J
1234 [ AB, DE ] 1 1 1 1 0 0 0 0
5678 [ FG, IJ ] 0 0 0 0 1 1 1 1
</code></pre>
<p>Expected:- </p>
<pre><code>empNo name dummy_name_AB dummy_name_DE dummy_name_FG dummy_name_IJ
1234 [ AB, DE ] 1 1 0 0
5678 [ FG, IJ ] 0 0 1 1
</code></pre>
|
<p>I think the list is not actually a list (it is stored as a string), so we use ast to convert the string-typed column back to lists:</p>
<pre><code>import ast
df.name=df.name.apply(ast.literal_eval)
</code></pre>
<p>Then using str <code>get_dummies</code></p>
<pre><code>s=df.name.apply(pd.Series).stack().str.get_dummies().sum(level=0).add_prefix('dummy_name_')
s
dummy_name_AB dummy_name_DE dummy_name_FG dummy_name_IJ
0 1 1 0 0
1 0 0 1 1
</code></pre>
<p>Then </p>
<pre><code>pd.concat([df[['empNo']],s],axis=1)
</code></pre>
<p>The data input </p>
<pre><code>df.to_dict()
{'empNo': {0: 1234, 1: 5678}, 'name': {0: ['AB', 'DE'], 1: ['FG', 'IJ']}}
</code></pre>
|
python|pandas|dataframe
| 5
|
1,738
| 53,470,273
|
flask pandas to tabulator tables are all messed up
|
<p>I'm a new dev, trying to pass a Pandas DataFrame to Tabulator. It works, but the table is completely messed up.</p>
<p>original file :
<a href="https://i.stack.imgur.com/4U3cV.png" rel="nofollow noreferrer">original file in excel</a></p>
<p>without tabulator :
<a href="https://i.stack.imgur.com/OLWok.png" rel="nofollow noreferrer">to_html() render result</a></p>
<p>It renders fine as HTML, but once I attach Tabulator to it, everything is messed up.</p>
<p>after tabulator:
<a href="https://i.stack.imgur.com/3IogA.png" rel="nofollow noreferrer">after attaching tabulator to it using ID </a></p>
<p>The way I installed it was by copying all the files in Dist into my app folder: JS files into the JS folder and all CSS files into the CSS folder.</p>
<p>in HTML head : </p>
<pre><code><link href="{{ url_for ('static', filename='css/tabulator.min.css') }}" rel="stylesheet"></link>
<script type="text/javascript" src="{{ url_for ('static', filename='tabulator.min.js') }}"></script>
</code></pre>
<p>HTML Body :</p>
<pre><code>var table = new Tabulator("#tableId", {});
</code></pre>
<p>PYTHON :</p>
<pre><code>df = pd.read_excel(destination)
return render_template('fileviewer.html',x=df.to_html(table_id='tableId'))
</code></pre>
|
<p>Working with <code>.to_json(orient="records")</code> is always the best approach for Tabulator.</p>
<p>Try this:</p>
<p>1) If there's no Id yet, just:</p>
<pre><code>df.insert(0,'id',range(1,len(df)+1))
df= df.to_json(orient='records')
JsonResponse({'myDF':df})
</code></pre>
<p>2) Once back in Javascript, when you try to load your data from Ajax response, you must use</p>
<pre><code> data: eval(json.myDF)
</code></pre>
<p>in order for Tabulator to read it correctly.</p>
|
javascript|python|pandas|flask|tabulator
| 0
|
1,739
| 17,628,589
|
perform varimax rotation in python using numpy
|
<p>I am working on principal component analysis of a matrix. I have already found the component matrix shown below</p>
<pre><code>A = np.array([[-0.73465832, -0.24819766, -0.32045055],
              [-0.3728976 ,  0.58628043, -0.63433607],
              [-0.72617152,  0.53812819, -0.22846634],
              [ 0.34042864, -0.08063226, -0.80064174],
              [ 0.8804307 ,  0.17166265,  0.04381426],
              [-0.66313032,  0.54576874,  0.37964986],
              [ 0.286712  ,  0.68305196,  0.21769803],
              [ 0.94651412,  0.14986739, -0.06825887],
              [ 0.40699665,  0.73202276, -0.08462949]])
</code></pre>
<p>I need to perform varimax rotation in this component matrix but could not find the exact method and degree to rotate. Most of the examples are shown in R. However I need the method in python.</p>
|
<p>You can find a lot of examples with Python. Here is an example I found for Python using only <code>numpy</code>, on <a href="http://en.wikipedia.org/wiki/Talk:Varimax_rotation" rel="noreferrer">Wikipedia</a>:</p>
<pre><code>def varimax(Phi, gamma = 1.0, q = 20, tol = 1e-6):
    from numpy import eye, asarray, dot, sum, diag
    from numpy.linalg import svd
    p,k = Phi.shape
    R = eye(k)
    d = 0
    for i in range(q):  # xrange in Python 2
        d_old = d
        Lambda = dot(Phi, R)
        u,s,vh = svd(dot(Phi.T, asarray(Lambda)**3 - (gamma/p) * dot(Lambda, diag(diag(dot(Lambda.T, Lambda))))))
        R = dot(u, vh)
        d = sum(s)
        # guard against dividing by zero on the first pass, and stop once the
        # objective stops growing by more than tol
        if d_old != 0 and d/d_old < 1 + tol: break
    return dot(Phi, R)
</code></pre>
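<p>Applied to the component matrix from the question, a usage sketch:</p>
<pre><code>rotated = varimax(A)   # rotated loadings, same shape as A
</code></pre>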
|
python|arrays|numpy
| 16
|
1,740
| 20,344,908
|
Optimizing a nested for-loop which uses the indices of an array for function
|
<p>Let's imagine an empty NumPy array of 3x4 where you've got the coordinate of the top-left corner and the step size in horizontal and vertical direction.
Now I would like to know the coordinates for the middle of each cell for the whole array. Like this:</p>
<p><img src="https://i.stack.imgur.com/1IEuh.png" alt="enter image description here"></p>
<p>For this I implemented a nested for-loop.</p>
<pre><code>In [12]:
import numpy as np
# extent(topleft_x, stepsize_x, 0, topleft_y, 0, stepsize_y (negative since it's top-left)
extent = (5530000.0, 5000.0, 0.0, 807000.0, 0.0, -5000.0)
array = np.zeros([3,4],object)
cols = array.shape[0]
rows = array.shape[1]
# function to apply to each cell
def f(x,y):
return x*extent[1]+extent[0]+extent[1]/2, y*extent[5]+extent[3]+extent[5]/2
# nested for-loop
def nestloop(cols,rows):
for col in range(cols):
for row in range(rows):
array[col,row] = f(col,row)
In [13]:
%timeit nestloop(cols,rows)
100000 loops, best of 3: 17.4 µs per loop
In [14]:
array.T
Out[14]:
array([[(5532500.0, 804500.0), (5537500.0, 804500.0), (5542500.0, 804500.0)],
[(5532500.0, 799500.0), (5537500.0, 799500.0), (5542500.0, 799500.0)],
[(5532500.0, 794500.0), (5537500.0, 794500.0), (5542500.0, 794500.0)],
[(5532500.0, 789500.0), (5537500.0, 789500.0), (5542500.0, 789500.0)]], dtype=object)
</code></pre>
<p>But eager to learn, how can I optimize this? I was thinking of vectorizing or using lambda. I tried to vectorize it as follow: </p>
<pre><code>array[:,:] = np.vectorize(check)(cols,rows)
ValueError: could not broadcast input array from shape (2) into shape (3,4)
</code></pre>
<p>But, than I got a broadcasting error. Currently the array is 3 by 4, but this also can become 3000 by 4000. </p>
|
<p>Surely the way you are computing the <code>x</code> and <code>y</code> coordinates is highly inefficient because it's not vectorized at all. You can do:</p>
<pre><code>In [1]: import numpy as np
In [2]: extent = (5530000.0, 5000.0, 0.0, 807000.0, 0.0, -5000.0)
...: x_steps = np.array([0,1,2]) * extent[1]
...: y_steps = np.array([0,1,2,3]) * extent[-1]
...:
In [3]: x_coords = extent[0] + x_steps + extent[1]/2
...: y_coords = extent[3] + y_steps + extent[-1]/2
...:
In [4]: x_coords
Out[4]: array([ 5532500., 5537500., 5542500.])
In [5]: y_coords
Out[5]: array([ 804500., 799500., 794500., 789500.])
</code></pre>
<p>At this point the coordinates of the points are given by the cartesian <a href="http://docs.python.org/3.3/library/itertools.html#itertools.product" rel="nofollow noreferrer"><code>product()</code></a> of these two arrays:</p>
<pre><code>In [5]: list(it.product(x_coords, y_coords))
Out[5]: [(5532500.0, 804500.0), (5532500.0, 799500.0), (5532500.0, 794500.0), (5532500.0, 789500.0), (5537500.0, 804500.0), (5537500.0, 799500.0), (5537500.0, 794500.0), (5537500.0, 789500.0), (5542500.0, 804500.0), (5542500.0, 799500.0), (5542500.0, 794500.0), (5542500.0, 789500.0)]
</code></pre>
<p>You just have to group them 4 by 4.</p>
<p>To obtain the product with <code>numpy</code> you can do (based on <a href="https://stackoverflow.com/questions/11144513/numpy-cartesian-product-of-x-and-y-array-points-into-single-array-of-2d-points">this</a> answer):</p>
<pre><code>In [6]: np.transpose([np.tile(x_coords, len(y_coords)), np.repeat(y_coords, len(x_coords))])
Out[6]:
array([[ 5532500., 804500.],
[ 5537500., 804500.],
[ 5542500., 804500.],
[ 5532500., 799500.],
[ 5537500., 799500.],
[ 5542500., 799500.],
[ 5532500., 794500.],
[ 5537500., 794500.],
[ 5542500., 794500.],
[ 5532500., 789500.],
[ 5537500., 789500.],
[ 5542500., 789500.]])
</code></pre>
<p>Which can be reshaped:</p>
<pre><code>In [8]: product.reshape((3,4,2)) # product is the result of the above
Out[8]:
array([[[ 5532500., 804500.],
[ 5537500., 804500.],
[ 5542500., 804500.],
[ 5532500., 799500.]],
[[ 5537500., 799500.],
[ 5542500., 799500.],
[ 5532500., 794500.],
[ 5537500., 794500.]],
[[ 5542500., 794500.],
[ 5532500., 789500.],
[ 5537500., 789500.],
[ 5542500., 789500.]]])
</code></pre>
<p>If this is not the order you want you can do something like:</p>
<pre><code>In [9]: ar = np.zeros((3,4,2), float)
...: ar[0] = product[::3]
...: ar[1] = product[1::3]
...: ar[2] = product[2::3]
...:
In [10]: ar
Out[10]:
array([[[ 5532500., 804500.],
[ 5532500., 799500.],
[ 5532500., 794500.],
[ 5532500., 789500.]],
[[ 5537500., 804500.],
[ 5537500., 799500.],
[ 5537500., 794500.],
[ 5537500., 789500.]],
[[ 5542500., 804500.],
[ 5542500., 799500.],
[ 5542500., 794500.],
[ 5542500., 789500.]]])
</code></pre>
<p>I believe there are better ways to do this last reshaping, but I'm not a <code>numpy</code> expert.</p>
<p>Note that using <code>object</code> as dtype incurs a <em>huge</em> performance penalty, since <code>numpy</code> cannot optimize anything (and it is sometimes slower than using normal <code>list</code>s). I have used a <code>(3,4,2)</code> float array instead, which allows faster operations.</p>
|
python|arrays|numpy|lambda|vectorization
| 3
|
1,741
| 15,914,742
|
Python Pandas Series gives NaN data when passing a dict with large index values
|
<p>I am trying to build a Pandas series by passing it a dictionary containing index and data pairs. While doing so I noticed an interesting quirk. If the index of the data pair is a very large integer the data will show up as NaN. This is fixed by reducing the size of the index values, or creating the Series using two lists instead of a single dict. I have large index values because I am using time-stamps in microseconds-since-1970 format. Am I doing something wrong or is this a bug?</p>
<p>Here's an example:</p>
<pre><code>import pandas as pd
test_series_time = [1357230060000000, 1357230180000000, 1357230300000000]
test_series_value = [1, 2, 3]
series = pd.Series(test_series_value, test_series_time, name="this works")
test_series_dict = {1357230060000000: 1, 1357230180000000: 2, 1357230300000000: 3}
series2 = pd.Series(test_series_dict, name="this doesn't")
test_series_dict_smaller_index = {1357230060: 1, 1357230180: 2, 1357230300: 3}
series3 = pd.Series(test_series_dict_smaller_index, name="this does")
print series
print series2
print series3
</code></pre>
<p>and the output:</p>
<pre><code>1357230060000000 1
1357230180000000 2
1357230300000000 3
Name: this works
1357230060000000 NaN
1357230180000000 NaN
1357230300000000 NaN
Name: this doesn't
1357230060 1
1357230180 2
1357230300 3
Name: this does
</code></pre>
<p>So what's up with this?</p>
|
<p>I bet you are on 32-bit; on 64-bit this works fine. In 0.10.1, the default of creation via dicts is to use the default numpy integer creation, which is system dependent (e.g. int32 on 32-bit, and int64 on 64-bit). You are overflowing the dtype, which results in unpredictable behavior.</p>
<p>In 0.11 (coming out this week!), this will work as it will default to creating int64s regardless of the system.</p>
<pre><code>In [12]: np.iinfo(np.int32).max
Out[12]: 2147483647
In [13]: np.iinfo(np.int64).max
Out[13]: 9223372036854775807
</code></pre>
<p>Convert your microseconds to Timestamps (multiply by 1000 to put in nanoseconds which is what Timestamp accepts as an integer input, then you are good to go</p>
<pre><code>In [5]: pd.Series(test_series_value,
[ pd.Timestamp(k*1000) for k in test_series_time ])
Out[5]:
2013-01-03 16:21:00 1
2013-01-03 16:23:00 2
2013-01-03 16:25:00 3
</code></pre>
|
dictionary|python-2.7|pandas|series
| 0
|
1,742
| 71,808,375
|
Expected scalar type Double but found Float
|
<p>I am getting this error while trying to give an input image batch to my Pytorch model</p>
<p><code>"RuntimeError: Given groups=1, weight of size [64, 3, 4, 4], expected input[5, 96, 96, 3] to have 3 channels, but got 96 channels instead".</code></p>
<p>I read images with skimage. My images are 96x96 and batch size is 5. Here is my Generator class:</p>
<pre><code>class Generator(nn.Module):
def __init__(self):
super().__init__()
def downsample(input_filters, output_filters, normalize=True):
layers = [nn.Conv2d(input_filters, output_filters, kernel_size=4, padding=1, stride=2)]
if normalize:
layers.append(nn.BatchNorm2d(output_filters, 0.8))
layers.append(nn.LeakyReLU(0.2))
return layers
def upsample(input_filters, output_filters, normalize=True, last_layer=False):
layers = [nn.ConvTranspose2d(input_filters, output_filters, kernel_size=4, stride=2, padding=1)]
if normalize:
layers.append(nn.BatchNorm2d(output_filters, 0.8))
if not last_layer:
layers.append(nn.ReLU())
return layers
self.model = nn.Sequential(
*downsample(3, 64, normalize=False), # 96x96
*downsample(64, 64), # 48x48
*downsample(64, 128), # 24x24
*downsample(128, 256), # 12x12
*downsample(256, 512), # 6x6
nn.Conv2d(512, 4000, kernel_size=4), # 3x3
*upsample(4000, 512), # 6x6
*upsample(512, 256), # 12x12
*upsample(256, 128), # 24x24
*upsample(128, 64), # 48x48
*upsample(64, 64), # 96x96
*upsample(64, 3, last_layer=True), # 192x192
nn.Tanh()
)
def forward(self, x):
return self.model(x)
</code></pre>
<p>Here is my Dataset Class:</p>
<pre><code>class OutpaintDataset(Dataset):
def __init__(self, data_path, input_size, output_size):
self.data_path = data_path
self.input_size = input_size
self.output_size = output_size
self.image_names = glob.glob(data_path)
def outpaint(self, image):
masked = image
mask_size = int(self.input_size/2)
masked[:, :mask_size, :] = 1
masked[:, -1*mask_size:, :] = 1
masked[:mask_size, :, :] = 1
masked[-1*mask_size:, :, :] = 1
return masked
def custom_resize(self, image, size):
return transform.resize(image, (size, size))
def __len__(self):
return len(self.image_names)
def __getitem__(self, index):
image = io.imread(self.image_names[index])
# image to size of (96, 96)
input_image = self.custom_resize(image, self.input_size)
# image to size of (192, 192)
ground_image = self.custom_resize(image, self.output_size)
masked_image = self.outpaint(image=ground_image.copy())
return input_image, masked_image, ground_image
</code></pre>
<h2> --UPDATE-- </h2><br>
I changed the shape of the given image batch from torch.Size([5, 192, 192, 3]) to torch.Size([5, 3, 192, 192]). Now I am getting a new error <b>RuntimeError: expected scalar type Double but found Float</b>.
<p>I reshape and use my images at the following code block</p>
<pre><code>for i, data in enumerate(train_loader, 0):
input_image, masked_image, ground_image, = data
reshaped = masked_image.permute(0, 3, 1, 2)
reshaped = reshaped.type(torch.double)
output = generator(reshaped)
break
</code></pre>
<p>I used io and transform functions from skimage</p>
|
<p>I usually print a model summary using the torchinfo library to debug such errors.
Your input should be in the form [5, 3, 192, 192], i.e. channels first.
If the image size is 96x96, the feature map has shrunk below 4x4 by the time it reaches the final 4x4 convolution, which is why it raises an error.
<a href="https://i.stack.imgur.com/Ott4L.png" rel="nofollow noreferrer">This image shows the feature-map size after each layer, along with the number of parameters.</a></p>
|
python|pytorch|computer-vision
| 0
|
1,743
| 71,866,676
|
How to fill locations of shapefile based on CSV data set?
|
<p>I'm using GeoPandas in Python to create a heatmap of the state of Florida from a given CSV dataset and a shapefile of Florida:</p>
<p><a href="https://i.stack.imgur.com/QVOZv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QVOZv.png" alt="enter image description here" /></a></p>
<p>This is the code I have for displaying the state from the shapefile, and the CSV dataset contents (It's a list of Covid cases in the Florida counties),</p>
<p>The shapefile also conveniently has data for the the name of the counties along with their respective polygons:</p>
<p><a href="https://i.stack.imgur.com/6YRVn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6YRVn.png" alt="enter image description here" /></a></p>
<p>My plan is to parse through the CSV and keep track of how many cases there are for each county then build a heatmap from that, but I'm unsure of how to work with shapefiles.</p>
|
<ul>
<li>I believe I have found the same shapefile you are working with. I don't know the source of your COVID data, so I have used NY Times data (<a href="https://github.com/nytimes/covid-19-data" rel="nofollow noreferrer">https://github.com/nytimes/covid-19-data</a>)</li>
<li>the normal terminology for a map based heatmap is a choropleth</li>
<li>it's a case of lining up your geometry with your COVID data
<ul>
<li>COVID data is by county, geometry by sub-county. Hence have rolled geometry up to county to make it consistent</li>
<li>geometry encodes FIPS in two columns, have created a new column with it combined. COVID data has FIPS as a float. Have modified this to be a string</li>
<li>now it's a simple case of a <strong>pandas</strong> <code>merge()</code> to combine / join geometry and COVID data</li>
</ul>
</li>
<li>finally generating the visual. This is a simple case of generating a choropleth. <a href="https://geopandas.org/en/stable/docs/user_guide/mapping.html" rel="nofollow noreferrer">https://geopandas.org/en/stable/docs/user_guide/mapping.html</a></li>
</ul>
<pre><code>import geopandas as gpd
import pandas as pd
gdf = gpd.read_file(
"https://www2.census.gov/geo/tiger/TIGER2016/COUSUB/tl_2016_12_cousub.zip"
)
# NY Times data is by county not sub-county. rollup geometry to county
gdf_county = (
gdf.dissolve("COUNTYFP")
.reset_index()
.assign(fips=lambda d: d["STATEFP"] + d["COUNTYFP"])
.loc[:, ["fips", "geometry", "STATEFP", "COUNTYFP", "NAME"]]
)
# get NY times data by county
df = pd.read_csv(
"https://raw.githubusercontent.com/nytimes/covid-19-data/master/live/us-counties.csv"
)
# limit to florida and make fips same type as geometry
df_fl = (
df.loc[df["state"].eq("Florida")]
.dropna(subset=["fips"])
.assign(fips=lambda d: d["fips"].astype(int).astype(str))
)
# merge geometry and covid data
gdf_fl_covid = gdf_county.merge(df_fl, on="fips")
# interactive folium choropleth
gdf_fl_covid.explore(column="cases")
# static matplotlib choropleth
gdf_fl_covid.plot(column="cases")
</code></pre>
<p><a href="https://i.stack.imgur.com/cV2gy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cV2gy.png" alt="enter image description here" /></a></p>
|
python|csv|geopandas|shapefile
| 1
|
1,744
| 72,067,723
|
Find duplicates in dataframe and group them by assigning a key
|
<p>I've looked around and found similar questions, but none of them really helped me find a solution.
I want my script to read a CSV which looks like this:</p>
<pre><code>hot_dict = {'Links': links, 'Titles': titles, 'Datestamps': datestamp_extended,'GroupID': "" }
</code></pre>
<p>I want to find all duplicate links in the Links column and assign all identical links the same key in the "GroupID" column:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Links</th>
<th>GroupID</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>Key1</td>
</tr>
<tr>
<td>B</td>
<td>Key2</td>
</tr>
<tr>
<td>A</td>
<td>Key1</td>
</tr>
<tr>
<td>B</td>
<td>Key2</td>
</tr>
</tbody>
</table>
</div>
<p>This obviously just gives me True and False values:</p>
<pre><code>df['GroupID'] = df.duplicated(subset=['Links'], keep=False)
</code></pre>
<p>Is there an elegant way to continue from here?</p>
<p>Thanks a lot!</p>
|
<p>For a simple key with an integer ID, you can first convert the Links column to <a href="https://pandas.pydata.org/docs/user_guide/categorical.html" rel="nofollow noreferrer">categorical data</a>, then just obtain the category code from that:</p>
<pre><code>df['GroupID'] = df['Links'].astype('category').cat.codes
</code></pre>
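<p>A small illustrative example (output shown as comments):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Links': ['A', 'B', 'A', 'B']})
df['GroupID'] = df['Links'].astype('category').cat.codes
print(df)
#   Links  GroupID
# 0     A        0
# 1     B        1
# 2     A        0
# 3     B        1
</code></pre>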
|
python|pandas|dataframe|csv
| 1
|
1,745
| 72,034,269
|
Using pandas how to add(math operations) multiple rows between two columns to get a total
|
<p>I have a CSV file with multiple columns that represent office supplies sales.
I want to find the total sales for <em>pens</em> (<strong>unit price</strong> and <strong>item</strong> are two of the <strong>columns</strong>),
so first I sorted the df to search for <em>pens</em> under the <strong>item</strong> column and listed their <strong>unit price</strong>.
Now I can't figure out how to add up all of the unit prices for the pen sales. How would I do it?</p>
|
<p>I think there is a slight flaw in the logic you presented, but it is hard to tell without the full context - i.e. a sample dataframe, code snippet, etc.</p>
<p>In order to calculate total sales of pen items you would need to calculate the sum(price * quantity) of all sales.</p>
<p>so simple df:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'item': ['pen', 'pen', 'chair'],
'price':[5, 5, 10],
'qtt': [1, 1, 1]
}
)
sales = sum(df.loc[df['item']=='pen', 'price'] * df.loc[df['item']=='pen', 'qtt'])
</code></pre>
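<p>A slightly more compact variant computes the boolean mask once (a sketch using the same illustrative column names):</p>
<pre><code>mask = df['item'] == 'pen'
sales = (df.loc[mask, 'price'] * df.loc[mask, 'qtt']).sum()
</code></pre>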
<p>Learning to deal with slicing is key:
<a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/indexing.html</a></p>
|
python|pandas|math|add
| 2
|
1,746
| 16,930,632
|
Create new numpy array-scalar of flexible dtype
|
<p>I have a working solution to my problem, but when trying different things I was astounded there wasn't a better solution that I could find. It all boils down to creating a single flexible dtype value for comparing and inserting into an array.</p>
<p>I have an RGB 24-bit image array (so 8 bits for each of R, G, and B). It turns out that for some actions it is best to use it as a 3D array of HxWx3, while at other times it is best to use it as a structured array with the dtype([('R',uint8),('G',uint8),('B',uint8)]). One example is when trying to relabel the image colors so that every unique color is given a different value. I do this with the following code:</p>
<pre><code># Given im as an array of HxWx3, dtype=uint8
from numpy import dtype, uint8, unique, insert, searchsorted
rgb_dtype = dtype([('R',uint8),('G',uint8),('B',uint8)])
im = im.view(dtype=rgb_dtype).squeeze() # need squeeze to remove the third dim
values = unique(im)
if tuple(values[0]) != (0, 0, 0):
values = insert(values, 0, 0) # value 0 needs to always be (0, 0, 0)
labels = searchsorted(values, im)
</code></pre>
<p>This works beautifully; however, I tried to make the <code>if</code> statement look nicer and just couldn't find a way. So let's look at the comparison first:</p>
<pre><code>>>> values[0]
(0, 0, 0)
>>> values[0] == 0
False
>>> values[0] == (0, 0, 0)
False
>>> values[0] == array([0, 0, 0])
False
>>> values[0] == array([uint8(0), uint8(0), uint8(0)]).view(dtype=rgb_dtype)[0]
True
>>> values[0] == zeros((), dtype=rgb_dtype)
True
</code></pre>
<p>But what if you wanted something besides (0, 0, 0) or (1, 1, 1) and something that was not ridiculous looking? It seems like there should be an easier way to construct this, like <code>rgb_dtype.create((0,0,0))</code>.</p>
<p>Next with the insert statement, you need to insert 0 for <code>(0, 0, 0)</code>. For other values this really does not work, for example inserting <code>(1, 2, 3)</code> actually inserts <code>(1, 1, 1), (2, 2, 2), (3, 3, 3)</code>.</p>
<p>So in the end, is there a nicer way? Thanks!</p>
|
<p>I could make <code>insert()</code> work for your case doing (note that instead of <code>0</code> it is used <code>[0]</code>):</p>
<pre><code>values = insert(values, [0], (1,2,3))
</code></pre>
<p>giving (for example):</p>
<pre><code>array([(0, 1, 3), (0, 0, 0), (0, 0, 4), ..., (255, 255, 251), (255, 255, 253), (255, 255, 255)],
dtype=[('R', 'u1'), ('G', 'u1'), ('B', 'u1')])
</code></pre>
<p>Regarding another way to do your <code>if</code>, you can do this:</p>
<pre><code>str(values[0]) == str((0,0,0))
</code></pre>
<p>or, perhaps more robust:</p>
<pre><code>eval(str(values[0])) == eval(str((0,0,0)))
</code></pre>
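<p>For the construction the question asks about (something like <code>rgb_dtype.create((0,0,0))</code>), passing a tuple to <code>np.array</code> together with the structured dtype yields a 0-d structured scalar that compares the way you want — a sketch:</p>
<pre><code>import numpy as np

rgb_dtype = np.dtype([('R', np.uint8), ('G', np.uint8), ('B', np.uint8)])
black = np.array((0, 0, 0), dtype=rgb_dtype)  # 0-d structured scalar
# values[0] == black  ->  True when values[0] is (0, 0, 0)
</code></pre>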
|
python|numpy
| 1
|
1,747
| 22,081,878
|
get previous row's value and calculate new column pandas python
|
<p>Is there a way to look back at a previous row and calculate a new variable? That is, as long as the previous row is the same case, compute (previous change) - (current change) and attribute it to the previous 'ChangeEvent' in new columns.</p>
<p>here is my DataFrame</p>
<pre><code>>>> df
ChangeEvent StartEvent case change open
0 Homeless Homeless 1 2014-03-08 00:00:00 2014-02-08
1 other Homeless 1 2014-04-08 00:00:00 2014-02-08
2 Homeless Homeless 1 2014-05-08 00:00:00 2014-02-08
3 Jail Homeless 1 2014-06-08 00:00:00 2014-02-08
4 Jail Jail 2 2014-06-08 00:00:00 2014-02-08
</code></pre>
<p>to add columns</p>
<pre><code>Jail Homeless case
0 6 1
0 30 1
0 0 1
</code></pre>
<p>... and so on</p>
<p>here is the df build</p>
<pre><code>import pandas as pd
import datetime as DT
d = {'case' : pd.Series([1,1,1,1,2]),
'open' : pd.Series([DT.datetime(2014, 3, 2), DT.datetime(2014, 3, 2),DT.datetime(2014, 3, 2),DT.datetime(2014, 3, 2),DT.datetime(2014, 3, 2)]),
'change' : pd.Series([DT.datetime(2014, 3, 8), DT.datetime(2014, 4, 8),DT.datetime(2014, 5, 8),DT.datetime(2014, 6, 8),DT.datetime(2014, 6, 8)]),
'StartEvent' : pd.Series(['Homeless','Homeless','Homeless','Homeless','Jail']),
'ChangeEvent' : pd.Series(['Homeless','irrelivant','Homeless','Jail','Jail']),
'close' : pd.Series([DT.datetime(2015, 3, 2), DT.datetime(2015, 3, 2),DT.datetime(2015, 3, 2),DT.datetime(2015, 3, 2),DT.datetime(2015, 3, 2)])}
df=pd.DataFrame(d)
</code></pre>
|
<p>The way to get the previous row's value is to use the shift method:</p>
<pre><code>In [11]: df1.change.shift(1)
Out[11]:
0 NaT
1 2014-03-08
2 2014-04-08
3 2014-05-08
4 2014-06-08
Name: change, dtype: datetime64[ns]
</code></pre>
<p>Now you can subtract these columns. <em>Note: This is with 0.13.1 (datetime stuff has had a lot of work recently, so YMMV with older versions).</em></p>
<pre><code>In [12]: df1.change.shift(1) - df1.change
Out[12]:
0 NaT
1 -31 days
2 -30 days
3 -31 days
4 0 days
Name: change, dtype: timedelta64[ns]
</code></pre>
<p>You can just apply this to each case/group:</p>
<pre><code>In [13]: df.groupby('case')['change'].apply(lambda x: x.shift(1) - x)
Out[13]:
0 NaT
1 -31 days
2 -30 days
3 -31 days
4 NaT
dtype: timedelta64[ns]
</code></pre>
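<p>In later pandas versions the same result can be obtained without <code>apply</code>, via the groupby <code>diff</code> method (a sketch; note that <code>diff()</code> computes <code>x - x.shift(1)</code>, hence the sign flip):</p>
<pre><code>df['delta'] = -df.groupby('case')['change'].diff()
</code></pre>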
|
python|pandas
| 98
|
1,748
| 22,258,491
|
Read a small random sample from a big CSV file into a Python data frame
|
<p>The CSV file that I want to read does not fit into main memory. How can I read a few (~10K) random lines of it and do some simple statistics on the selected data frame?</p>
|
<p>Assuming no header in the CSV file:</p>
<pre><code>import pandas
import random
n = 1000000 #number of records in file
s = 10000 #desired sample size
filename = "data.txt"
skip = sorted(random.sample(range(n),n-s))
df = pandas.read_csv(filename, skiprows=skip)
</code></pre>
<p>would be better if <code>read_csv</code> had a <code>keeprows</code>, or if <code>skiprows</code> took a callback func instead of a list.</p>
<p>With header and unknown file length:</p>
<pre><code>import pandas
import random
filename = "data.txt"
n = sum(1 for line in open(filename)) - 1 #number of records in file (excludes header)
s = 10000 #desired sample size
skip = sorted(random.sample(range(1,n+1),n-s)) #the 0-indexed header will not be included in the skip list
df = pandas.read_csv(filename, skiprows=skip)
</code></pre>
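<p>Worth noting: later pandas versions do accept a callable for <code>skiprows</code>, which avoids building the skip list in memory. A sketch that keeps each data row with (approximate) probability <code>p</code> rather than an exact sample size:</p>
<pre><code>import pandas
import random

p = 0.01  # approximate fraction of rows to keep
filename = "data.txt"
# the callable receives the row index; returning True skips the row
df = pandas.read_csv(filename, header=0,
                     skiprows=lambda i: i > 0 and random.random() > p)
</code></pre>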
|
python|pandas|random|io|import-from-csv
| 96
|
1,749
| 17,836,880
|
orthogonal projection with numpy
|
<p>I have a list of 3D points for which I calculate a plane with the numpy.linalg.lstsq method. Now I want to do an orthogonal projection of each point into this plane, but I can't find my mistake:</p>
<pre><code>from numpy.linalg import lstsq
def VecProduct(vek1, vek2):
return (vek1[0]*vek2[0] + vek1[1]*vek2[1] + vek1[2]*vek2[2])
def CalcPlane(x, y, z):
# x, y and z are given in lists
n = len(x)
sum_x = sum_y = sum_z = sum_xx = sum_yy = sum_xy = sum_xz = sum_yz = 0
for i in range(n):
sum_x += x[i]
sum_y += y[i]
sum_z += z[i]
sum_xx += x[i]*x[i]
sum_yy += y[i]*y[i]
sum_xy += x[i]*y[i]
sum_xz += x[i]*z[i]
sum_yz += y[i]*z[i]
M = ([sum_xx, sum_xy, sum_x], [sum_xy, sum_yy, sum_y], [sum_x, sum_y, n])
b = (sum_xz, sum_yz, sum_z)
a,b,c = lstsq(M, b)[0]
'''
z = a*x + b*y + c
a*x = z - b*y - c
x = -(b/a)*y + (1/a)*z - c/a
'''
r0 = [-c/a,
0,
0]
u = [-b/a,
1,
0]
v = [1/a,
0,
1]
xn = []
yn = []
zn = []
# orthogonalize u and v with Gram-Schmidt to get u and w
uu = VecProduct(u, u)
vu = VecProduct(v, u)
fak0 = vu/uu
erg0 = [val*fak0 for val in u]
w = [v[0]-erg0[0],
v[1]-erg0[1],
v[2]-erg0[2]]
ww = VecProduct(w, w)
# P_new = ((x*u)/(u*u))*u + ((x*w)/(w*w))*w
for i in range(len(x)):
xu = VecProduct([x[i], y[i], z[i]], u)
xw = VecProduct([x[i], y[i], z[i]], w)
fak1 = xu/uu
fak2 = xw/ww
erg1 = [val*fak1 for val in u]
erg2 = [val*fak2 for val in w]
erg = [erg1[0]+erg2[0], erg1[1]+erg2[1], erg1[2]+erg2[2]]
erg[0] += r0[0]
xn.append(erg[0])
yn.append(erg[1])
zn.append(erg[2])
return (xn,yn,zn)
</code></pre>
<p>This returns me a list of points which are all in a plane, but when I display them, they are not at the positions they should be.
I believe there is already a certain built-in method to solve this problem, but I couldn't find any =(</p>
|
<p>You are making very poor use of <code>np.lstsq</code>, since you are feeding it a precomputed 3x3 matrix instead of letting it do the job. I would do it like this:</p>
<pre><code>import numpy as np
def calc_plane(x, y, z):
a = np.column_stack((x, y, np.ones_like(x)))
return np.linalg.lstsq(a, z)[0]
>>> x = np.random.rand(1000)
>>> y = np.random.rand(1000)
>>> z = 4*x + 5*y + 7 + np.random.rand(1000)*.1
>>> calc_plane(x, y, z)
array([ 3.99795126, 5.00233364, 7.05007326])
</code></pre>
<p>It is actually more convenient to use a formula for your plane that doesn't depend on the coefficient of <code>z</code> not being zero, i.e. use <code>a*x + b*y + c*z = 1</code>. You can similarly compute <code>a</code>, <code>b</code> and <code>c</code> doing:</p>
<pre><code>def calc_plane_bis(x, y, z):
a = np.column_stack((x, y, z))
return np.linalg.lstsq(a, np.ones_like(x))[0]
>>> calc_plane_bis(x, y, z)
array([-0.56732299, -0.70949543, 0.14185393])
</code></pre>
<p>To project points onto a plane, using my alternative equation, the vector <code>(a, b, c)</code> is perpendicular to the plane. It is easy to check that the point <code>(a, b, c) / (a**2+b**2+c**2)</code> is on the plane, so projection can be done by referencing all points to that point on the plane, projecting the points onto the normal vector, subtracting that projection from the points, and then referencing them back to the origin. You could do that as follows:</p>
<pre><code>def project_points(x, y, z, a, b, c):
"""
Projects the points with coordinates x, y, z onto the plane
defined by a*x + b*y + c*z = 1
"""
vector_norm = a*a + b*b + c*c
normal_vector = np.array([a, b, c]) / np.sqrt(vector_norm)
point_in_plane = np.array([a, b, c]) / vector_norm
points = np.column_stack((x, y, z))
points_from_point_in_plane = points - point_in_plane
proj_onto_normal_vector = np.dot(points_from_point_in_plane,
normal_vector)
proj_onto_plane = (points_from_point_in_plane -
proj_onto_normal_vector[:, None]*normal_vector)
return point_in_plane + proj_onto_plane
</code></pre>
<p>So now you can do something like:</p>
<pre><code>>>> project_points(x, y, z, *calc_plane_bis(x, y, z))
array([[ 0.13138012, 0.76009389, 11.37555123],
[ 0.71096929, 0.68711773, 13.32843506],
[ 0.14889398, 0.74404116, 11.36534936],
...,
[ 0.85975642, 0.4827624 , 12.90197969],
[ 0.48364383, 0.2963717 , 10.46636903],
[ 0.81596472, 0.45273681, 12.57679188]])
</code></pre>
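<p>As a quick sanity check, every projected point should satisfy the plane equation <code>a*x + b*y + c*z = 1</code> up to floating point error:</p>
<pre><code>>>> a, b, c = calc_plane_bis(x, y, z)
>>> proj = project_points(x, y, z, a, b, c)
>>> np.allclose(proj.dot([a, b, c]), 1.0)
True
</code></pre>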
|
python|arrays|numpy
| 14
|
1,750
| 55,277,068
|
How to sort DataFrame columns sequentially from the first column?
|
<p>I sorted the df columns by the max value of their rows.</p>
<pre><code>dff = centroids.reindex(df.sum().sort_values(ascending=False).index, axis=1)
print(dff)
13 9 2 6 7 0 5
0 0.423586 0.472548 0.366301 0.423973 0.312807 0.476197 0.384652
1 0.639636 0.734712 0.503772 0.600164 0.416451 0.730942 0.515370
2 0.749716 0.835071 0.549806 0.637331 0.419558 0.782306 0.507648
3 0.817579 0.844361 0.577874 0.621483 0.408825 0.727671 0.458346
4 0.890916 0.831640 0.631127 0.611741 0.438974 0.654338 0.430330
5 0.952046 0.802077 0.694321 0.601616 0.496798 0.572743 0.423915
6 0.995009 0.768293 0.749186 0.590912 0.553378 0.500568 0.427607
7 1.000000 0.718386 0.781207 0.570253 0.598234 0.425387 0.436355
8 0.993004 0.690660 0.779607 0.550149 0.600459 0.396121 0.422891
</code></pre>
<p>Now I need to sort these columns by their correlation with each other, but sequentially: the second column is the one best correlated with the first, the third column is the one best correlated with the second, and so on.
I also want to keep the original column labels.</p>
<p>I have some thoughts about this, but because I am a newbie in Python the code does not work:</p>
<pre><code>k_num = 7 # number of columns in df
def corelation(df):
col = 1
for column in dff.columns[col:]:
dff.reindex(dff.corr().sort_values(dff.columns[col], ascending=False).index, axis = 1)
col += 1
if col == k_num:
return(df)
</code></pre>
<p>I would really appreciate it if someone could help me.</p>
|
<p>We can create a list that will hold the required order of columns. Let's call it <code>l</code> and initially populate it with the first column <code>0</code>. Then we iteratively find the max correlation between the column stored as the last element in <code>l</code> and the subset of the DataFrame that excludes columns that are already in <code>l</code>, adding on each step the new column with max correlation to the list <code>l</code>. When there are no more columns left, <code>l</code> will hold the required order of columns, and <code>df[l]</code> will give us the DataFrame with columns sorted by max correlation:</p>
<pre><code>np.random.seed(42)
df = pd.DataFrame(np.random.randn(10, 10))
l = [0]
while len(l) < len(df.columns):
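    # among the columns not yet chosen, pick the one with the highest
    # absolute correlation to the most recently chosen column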
i = df[df.columns.difference(l)].corrwith(df[l[-1]]).abs().idxmax()
l += [i]
df[l]
</code></pre>
|
python|pandas|sorting|dataframe|correlation
| 1
|
1,751
| 55,193,970
|
loading saved model causes "'getIndices' : no matching overloaded function found" in tfjs
|
<p>I ran into a problem like <a href="https://stackoverflow.com/questions/52857230/loading-saved-model-causes-failed-to-compile-fragment-shader-for-gather-op">Loading saved_model causes "Failed to compile fragment shader" for gather op</a>
<a href="https://i.stack.imgur.com/mocFF.png" rel="nofollow noreferrer">enter image description here</a></p>
<pre><code>const MODEL_URL = './web_model/tensorflowjs_model.pb';
const WEIGHTS_URL = './web_model/weights_manifest.json';
async function predict(){
const model = await tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);
var input_x = tf.tensor([[2714, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0]],shape=[1,50],dtype='int32');
var dropout_keep_prob = 1.0;
var output = model.execute({dropout_keep_prob:dropout_keep_prob, input_x:input_x});
console.log(output);
}
predict();
</code></pre>
<p>This is something about my model:</p>
<pre><code>self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
self.predictions = tf.argmax(self.scores, 1, name="predictions")
</code></pre>
<pre><code>tf.saved_model.simple_save(sess, "./saved_model",
inputs={"input_x": cnn.input_x, }, outputs={"predictions": cnn.predictions,"scores":cnn.scores,})
</code></pre>
<p>And I just converted my saved_model as described on that webpage.</p>
<p>This is the Python TensorFlow input; as you know, the model used to be a Python script, and I want to convert it to tfjs.</p>
<pre><code>inputs_major = np.zeros(shape=[1,max_seq_length], dtype=np.int32)
for v in range(len(vec)):
inputs_major[0][v] = vec[v]
</code></pre>
<p>I updated tf.js, but loadFrozenModel has been removed, so I changed it to loadGraphModel; the error is
"Uncaught (in promise) Error: Failed to parse model JSON of response from ./web_model/tensorflowjs_model.pb. Your path contains a .pb file extension. Support for .pb models have been removed in TensorFlow.js 1.0 in favor of .json models. You can re-convert your Python TensorFlow model using the TensorFlow.js 1.0 conversion scripts or you can convert your.pb models with the 'pb2json'NPM script in the tensorflow/tfjs-converter repository."</p>
<p>So I tried tensorflowjs converter 1.0.1 instead of 0.8; my tf version is 1.13.</p>
<p>The error is
"tensorflow.python.eager.lift_to_graph.UnliftableError: Unable to lift tensor because it depends transitively on placeholder via at least one path, e.g.: IdentityN (IdentityN) <- scores (BiasAdd) <- scores/MatMul (MatMul) <- dropout/dropout/mul (Mul) <- dropout/dropout/Floor (Floor) <- dropout/dropout/add (Add) <- dropout_keep_prob (Placeholder)"</p>
<p>I think it is because my saved_model was wrong,</p>
<pre><code>self.input_x = tf.placeholder(tf.int32, [None, sequence_length], name="input_x")
self.input_y = tf.placeholder(tf.float32, [None, num_classes], name="input_y")
self.dropout_keep_prob = tf.placeholder(tf.float32, name="dropout_keep_prob")
self.scores = tf.nn.xw_plus_b(self.h_drop, W, b, name="scores")
self.predictions = tf.argmax(self.scores, 1, name="predictions")
tf.saved_model.simple_save(sess, "./saved_model",
inputs={"input_x": cnn.input_x,},
outputs={"predictions": cnn.predictions, "scores": cnn.scores, })
</code></pre>
<p>So I changed my code.</p>
<pre><code>tf.saved_model.simple_save(sess, "./saved_model",
inputs={"input_x": cnn.input_x, "dropout_keep_prob":cnn.dropout_keep_prob,},
outputs={"predictions": cnn.predictions, "scores": cnn.scores, })
</code></pre>
<p>When saving
"WARNING:tensorflow:From D:\Python\Python35\lib\site-packages\tensorflow\python\saved_model\signature_def_utils_impl.py:205: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info."</p>
<p>The error is
"Limited tf.compat.v2.summary API due to missing TensorBoard installation
2019-03-20 19:43:18.894836: I tensorflow/core/grappler/devices.cc:53] Number of eligible GPUs (core count >= 8): 0 (Note: TensorFlow was not compiled with CUDA support)
2019-03-20 19:43:18.909183: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-03-20 19:43:18.931823: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2019-03-20 19:43:18.989768: E tensorflow/core/grappler/grappler_item_builder.cc:636] Init node embedding/W/Assign doesn't exist in graph
Traceback (most recent call last):
File "d:\anaconda3\lib\site-packages\tensorflow\python\grappler\tf_optimizer.py", line 43, in OptimizeGraph
verbose, graph_id, status)
SystemError: returned NULL without setting an error</p>
<p>During handling of the above exception, another exception occurred:</p>
<p>Traceback (most recent call last):
File "d:\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"<strong>main</strong>", mod_spec)
File "d:\anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\Anaconda3\Scripts\tensorflowjs_converter.exe__main__.py", line 9, in
File "d:\anaconda3\lib\site-packages\tensorflowjs\converters\converter.py", line 358, in main
strip_debug_ops=FLAGS.strip_debug_ops)
File "d:\anaconda3\lib\site-packages\tensorflowjs\converters\tf_saved_model_conversion_v2.py", line 271, in convert_tf_saved_model
concrete_func)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 140, in convert_variables_to_constants_v2
graph_def = _run_inline_graph_optimization(func)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\convert_to_constants.py", line 59, in _run_inline_graph_optimization
return tf_optimizer.OptimizeGraph(config, meta_graph)
File "d:\anaconda3\lib\site-packages\tensorflow\python\grappler\tf_optimizer.py", line 43, in OptimizeGraph
verbose, graph_id, status)
File "d:\anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 548, in <strong>exit</strong>
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.InvalidArgumentError: Failed to import metagraph, check error log for more info."</p>
<p>I hope you can give me some help; this problem is nearly driving me crazy.
Thanks!</p>
|
<p>In the end, I gave up on it, as the latest version has removed loadFrozenModel and support for it is limited. I tried using a Keras model instead, and it works.</p>
|
tensorflow.js|tensorflowjs-converter
| 0
|
1,752
| 56,813,049
|
Tensorflow: concat two tensors with shapes [B, None, feat_dim1] and [B, feat_dim2] during graph construction
|
<p>As a tensorflow newbie, I'm trying to concatenate two tensor, <code>t1</code> and <code>t2</code>, together during graph construction. <code>t1</code>, <code>t2</code> have different ranks: <code>[B, T, feat_dim1]</code> and <code>[B, feat_dim2]</code>. But <code>T</code> can only be known during runtime, so in graph construction the shapes of <code>t1</code>, <code>t2</code> are actually <code>[B, None, feat_dim1]</code> and <code>[B, feat_dim2]</code>. What I wanted is to append <code>t2</code> to <code>t1</code> to get a tensor with the shape: <code>[B, None, feat1+feat2]</code>.</p>
<p>The first thing I thought of using is <code>tf.stack([t2, t2, ...], axis=1)</code> to expand the rank, but since <code>T=None</code> during graph construction, I cannot build the list for <code>tf.stack()</code>. I also checked <code>tf.while_loop</code> for building the list with <code>tf.Tensor</code> objects, but couldn't get the gist of using the function. </p>
<p>Currently the code I am working on doesn't support eager mode, so could someone give me some hint about how to concatenate <code>t1</code> and <code>t2</code>? or how to expand <code>t2</code> to <code>[B, T, feat2]</code> given <code>T=None</code> during graph construction? Thanks a lot for any suggestions.</p>
|
<ol>
<li>Add another dimension to tensor <code>t2</code>: <code>(B, feat_dim2) --> (B, 1, feat_dim2)</code>.</li>
<li>Tile tensor <code>t2</code> <code>None</code> times along the previously added second dimension, where <code>None</code> is the dynamic second dimension of tensor <code>t1</code>.</li>
<li>Concatenate <code>t1</code> and <code>t2</code> along the last dimension.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
B = 5
feat_dim1 = 3
feat_dim2 = 4
t1 = tf.placeholder(tf.float32, shape=(B, None, feat_dim1)) # [5, None, 3]
t2 = 2.*tf.ones(shape=(B, feat_dim2)) # [5, 4]
def concat_tensors(t1, t2):
    t2 = t2[:, None, :]  # 1. `t2`: [5, 4] --> [5, 1, 4]
tiled = tf.tile(t2, [1, tf.shape(t1)[1], 1]) # 2. `[5, 1, 4]` --> `[5, None, 4]`
res = tf.concat([t1, tiled], axis=-1) # 3. concatenate `t1`, `t2` --> `[5, None, 7]`
return res
res = concat_tensors(t1, t2)
with tf.Session() as sess:
print(res.eval({t1: np.ones((B, 2, feat_dim1))}))
# [[[1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2. 2.]]
#
# [[1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2. 2.]]
#
# [[1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2. 2.]]
#
# [[1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2. 2.]]
#
# [[1. 1. 1. 2. 2. 2. 2.]
# [1. 1. 1. 2. 2. 2. 2.]]]
</code></pre>
|
tensorflow
| 2
|
1,753
| 56,616,824
|
IndexSlice on a datetime multiindex not working, but doesn't seem different from a properly-working toy equivalent
|
<p>I'm used to using <code>IndexSlice</code> on <code>datetime</code> indices. This is a toy equivalent of my multiindex DataFrame, and you can see the slicing works:</p>
<pre><code>#slicing works on a simple DateTime index
import pandas as pd
idx = pd.IndexSlice

qf = pd.DataFrame(index=pd.date_range(start="1Jan2019",freq="d",periods=30))
qf.loc[idx['2019-1-15':None]] #works

#the same slicing works on a multiindex
qf.reset_index(inplace=True)
qf['foo']="bar"
qf['other']=range(len(qf))
qf['filler']="egbdf"
qf.set_index(['index','foo', 'other'], inplace=True)
qf.loc[idx['2019-1-15':'2019-1-20',:,:],:] #wrks
qf.loc[idx['2019-1-15':None,'bar',:],:] #works
</code></pre>
<p>But something is going on with my real DataFrame. I cannot see what the difference is.</p>
<pre><code>xf.loc[idx['2019-5-1':'2019-6-1',"squat",:],:] # This works ok
xf.loc[idx['2019-5-1':None,"squat",:],:] # This fails
</code></pre>
<p>The error I get when I slice with a <code>'2019-5-1':None</code> is </p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-280-b0dce8e9e337> in <module>
1 xf.loc[idx['2019-5-1':'2019-6-1',"squat",:],:] # This works ok
----> 2 xf.loc[idx['2019-5-1':None,"squat",:],:] # This fails
3 #xf
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexing.py in __getitem__(self, key)
1492 except (KeyError, IndexError, AttributeError):
1493 pass
-> 1494 return self._getitem_tuple(key)
1495 else:
1496 # we by definition only have the 0th axis
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexing.py in _getitem_tuple(self, tup)
866 def _getitem_tuple(self, tup):
867 try:
--> 868 return self._getitem_lowerdim(tup)
869 except IndexingError:
870 pass
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexing.py in _getitem_lowerdim(self, tup)
967 # we may have a nested tuples indexer here
968 if self._is_nested_tuple_indexer(tup):
--> 969 return self._getitem_nested_tuple(tup)
970
971 # we maybe be using a tuple to represent multiple dimensions here
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexing.py in _getitem_nested_tuple(self, tup)
1046
1047 current_ndim = obj.ndim
-> 1048 obj = getattr(obj, self.name)._getitem_axis(key, axis=axis)
1049 axis += 1
1050
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexing.py in _getitem_axis(self, key, axis)
1904 # nested tuple slicing
1905 if is_nested_tuple(key, labels):
-> 1906 locs = labels.get_locs(key)
1907 indexer = [slice(None)] * self.ndim
1908 indexer[axis] = locs
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexes\multi.py in get_locs(self, seq)
2774 # a slice, include BOTH of the labels
2775 indexer = _update_indexer(_convert_to_indexer(
-> 2776 self._get_level_indexer(k, level=i, indexer=indexer)),
2777 indexer=indexer)
2778 else:
C:\ProgramData\Anaconda3\envs\nambu\lib\site-packages\pandas\core\indexes\multi.py in _get_level_indexer(self, key, level, indexer)
2635 # note that the stop ALREADY includes the stopped point (if
2636 # it was a string sliced)
-> 2637 return convert_indexer(start.start, stop.stop, step)
2638
2639 elif level > 0 or self.lexsort_depth == 0 or step is not None:
AttributeError: 'int' object has no attribute 'stop'
</code></pre>
<p>I cannot see any material difference between the toy index and the real index, and I cannot see how the error message relates to passing None into the slicer.</p>
<p><a href="https://i.stack.imgur.com/5aElD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5aElD.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/fBgkJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fBgkJ.png" alt="enter image description here"></a></p>
<p>========================================================</p>
<p>I figured out why it works/doesn't work in different examples.</p>
<p>The code works ok when the index is entirely <code>dates</code>. But if the index has <code>datetimes</code> in it, it fails.</p>
<pre><code>#this index is solely dates, not dateTimes, and everything works
dt_index = pd.date_range(start="1jan2019",periods=100,freq="d")
zf = pd.DataFrame(index=dt_index)
zf['foo']=10
zf['bar']="squat"
zf['zaa']=range(len(dt_index))
zf.index.name="date"
zf = zf.reset_index().set_index(["date", "bar", "zaa"])
zf.loc[idx['2019-1-1':'2019-1-3',"squat",:],:] # This works ok
zf.loc[idx['2019-1-1':,"squat",:],:] # This works
zf.loc[idx['2019-1-1':None,'squat',:,:],:] # This works
</code></pre>
<p>The failing example:</p>
<pre><code>dt_index = pd.date_range(start="1jan2019 00:15:33",periods=100,freq="h")
zf = pd.DataFrame(index=dt_index)
zf['foo']=10
zf['bar']="squat"
zf['zaa']=range(len(dt_index))
zf.index.name="date"
zf = zf.reset_index().set_index(["date", "bar", "zaa"])
zf.loc[idx['2019-1-1':'2019-1-3',"squat",:],:] # This works ok
#zf.loc[idx['2019-1-1':,"squat",:],:] # This fails AttributeError: 'int' object has no attribute 'stop'
#zf.loc[idx['2019-1-1':None,'squat',:,:],:] # AttributeError: 'int' object has no attribute 'stop'
</code></pre>
|
<p>This seems like a bug. According to <a href="https://github.com/pandas-dev/pandas/issues/10331#issuecomment-189582481" rel="nofollow noreferrer">this discussion</a>, check lines 2614-2637 of <code>multi.py</code> in the pandas package:</p>
<pre><code> try:
if key.start is not None:
start = level_index.get_loc(key.start)
else:
start = 0
if key.stop is not None:
stop = level_index.get_loc(key.stop)
else:
stop = len(level_index) - 1
step = key.step
except KeyError:
# we have a partial slice (like looking up a partial date
# string)
start = stop = level_index.slice_indexer(key.start, key.stop,
key.step, kind='loc')
step = start.step
if isinstance(start, slice) or isinstance(stop, slice):
# we have a slice for start and/or stop
# a partial date slicer on a DatetimeIndex generates a slice
# note that the stop ALREADY includes the stopped point (if
# it was a string sliced)
return convert_indexer(start.start, stop.stop, step)
</code></pre>
<p>The <strong><em>stop</em></strong> will always be an <code>int</code>, because the endpoint is <code>None</code>. But the <strong><em>start</em></strong> is different for <code>qf</code> and <code>xf</code>. The datetime index of <code>qf</code> has a resolution of 1 day, so <code>qf.index.levels[0].get_loc('2019-01-17')</code> is an <code>int</code>. But the resolution of <code>xf</code> is 0.001S, so <code>xf.index.levels[0].get_loc('2019-01-17')</code> is a <code>slice</code>, which results in <code>stop.stop</code> being called while the <strong><em>stop</em></strong> is an <code>int</code>.</p>
<p>As a work around, you can use a very large date instead of <code>None</code>:</p>
<pre><code>xf.loc[idx['2019-5-1':'2222',"squat",:],:]
</code></pre>
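<p>Alternatively, <code>pd.Timestamp.max</code> should work as an explicit right endpoint instead of an arbitrary far-future date string:</p>
<pre><code>xf.loc[idx['2019-5-1':pd.Timestamp.max, "squat", :], :]
</code></pre>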
|
python|pandas
| 3
|
1,754
| 56,642,128
|
How to use k means for a product recommendation dataset
|
<p>I have a data set with columns titled product name, brand, rating(1:5), review text, and review-helpfulness. What I need is to propose a recommendation algorithm using the reviews. I have to use Python for coding here. The data set is in .csv format.</p>
<p>To identify the nature of the data set I need to use k-means on it. How do I use k-means on this data set?</p>
<p>Thus I did the following:<br>
1.data pre-processing,<br>
2.review text data cleaning,<br>
3.sentiment analysis,<br>
4.giving sentiment score from 1 to 5 according to the sentiment value (given by sentiment analysis) they get and tagging reviews as very negative, negative, neutral, positive, very positive.</p>
<p>After these procedures I have these columns in my data set: product name, brand, rating(1:5), review text, review-helpfulness, sentiment-value, sentiment-tag.
This is the link to the data set <a href="https://drive.google.com/file/d/1YhCJNvV2BQk0T7PbPoR746DCL6tYmH7l/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1YhCJNvV2BQk0T7PbPoR746DCL6tYmH7l/view?usp=sharing</a></p>
<p>I tried to get k-means running using the following code. It runs without error, but I don't know whether this is useful, or whether there are other ways to use k-means on this data set to get other useful outputs. To learn more about the data, how should I use k-means on this data set?</p>
<pre><code>import pandas as pd
import numpy as np
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
df.info()
X = np.array(df.drop(['sentiment_value'], 1).astype(float))
y = np.array(df['rating'])
kmeans = KMeans(n_clusters=2)
kmeans.fit(X)
KMeans(algorithm='auto', copy_x=True, init='k-means++', max_iter=300,
n_clusters=2, n_init=10, n_jobs=1, precompute_distances='auto',
random_state=None, tol=0.0001, verbose=0)
plt.show()
</code></pre>
|
<p>You did not plot anything.</p>
<p>So nothing shows up.</p>
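<p>For instance, a minimal sketch of actually plotting the clusters (assuming the first two feature columns are meaningful axes — adjust to your data):</p>
<pre><code>labels = kmeans.fit_predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels)
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1],
            marker='x', color='red')
plt.show()
</code></pre>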
|
python|data-mining|k-means|recommendation-engine|sklearn-pandas
| 2
|
1,755
| 26,409,153
|
Slicing a pandas series using a list of float slices
|
<p>I have a large pandas Series with a float64 index.</p>
<p>e.g. </p>
<pre><code>s = pandas.Series([1,2,3,4,5], index=[1.0,2.0,3.0,4.0,5.0])
</code></pre>
<p>but with 100,000s of rows.</p>
<p>I would like to pull back multiple slices into a single subsetted series. At the moment I am doing this by building a list of slices and then concatenating them</p>
<p>e.g.</p>
<pre><code>intervals = [(1,2), (4,8)]
s2 = pandas.concat([s.ix[start:end] for start, end in intervals])
</code></pre>
<p>Here intervals will be a list of generally around 10-20 entries.
However, this is <em>slow</em>. In fact, this line takes up 62% of the whole execution time of my program, which takes about 30 seconds on a small subset of my data (about 1/2,000 of the whole dataset).</p>
<p>Does anyone know of a better way to do this?</p>
|
<p>It will require some clever <code>numpy</code> <code>array</code> broadcasting to check, for each value in <code>index</code>, whether the value falls inside any of the intervals in the <code>interval</code> list (closed on both ends, such that >=low_end and <=high_end). In <code>f</code> below, <code>a1 - a2[:,:,np.newaxis]</code> has shape <code>(n_intervals, 2, n_values)</code>, and the product over axis 1 is <=0 exactly when a value lies between an interval's two endpoints:</p>
<pre><code>In [158]:
import numpy as np
def f(a1, a2):
return (((a1 - a2[:,:,np.newaxis])).prod(1)<=0).any(0)
In [159]:
f(s.index.values, np.array(intervals))
Out[159]:
array([ True, True, False, True, True], dtype=bool)
In [160]:
%timeit s.ix[f(s.index.values, np.array(intervals))]
1000 loops, best of 3: 212 µs per loop
In [161]:
%timeit s[f(s.index.values, np.array(intervals))]
10000 loops, best of 3: 177 µs per loop
In [162]:
%timeit pd.concat([s.ix[start: end] for start, end in intervals])
1000 loops, best of 3: 1.64 ms per loop
</code></pre>
<p>result:</p>
<pre><code>1 1
2 2
4 4
5 5
dtype: int64
</code></pre>
|
python|pandas
| 0
|
1,756
| 66,991,667
|
Numba jit unknown error during python function
|
<p>I made this function, but Numba always gives me an error. Both chr_pos and pos are 1D arrays. What could be the problem?</p>
<pre><code>@nb.njit
def create_needed_pos(chr_pos, pos):
needed_pos=[]
needed_pos=np.array(needed_pos,dtype=np.float64)
for i in range(len(chr_pos)):
for k in range(len(pos)):
if chr_pos[i] == pos[k]:
if i==1 and k==1:
needed_pos=pos[k]
else:
a=pos[k]
needed_pos=np.append(needed_pos,[a])
return needed_pos
needed_pos=create_needed_pos(chr_pos, pos)
</code></pre>
<p>The errors:</p>
<pre><code>warnings.warn(errors.NumbaDeprecationWarning(msg,
<input>:1: NumbaWarning:
Compilation is falling back to object mode WITHOUT looplifting enabled because Function "create_needed_pos" failed type inference due to: Cannot unify array(float64, 1d, C) and int32 for 'needed_pos.1', defined at <input> (5)
File "<input>", line 5:
<source missing, REPL/exec in use?>
During: typing of intrinsic-call at <input> (9)
File "<input>", line 9:
<source missing, REPL/exec in use?>
</code></pre>
|
<p>The message</p>
<pre><code>Cannot unify array(float64, 1d, C) and int32 for 'needed_pos.1'
</code></pre>
<p>is telling you that you are trying to assign an integer variable to an array. That happens in this line:</p>
<pre><code> needed_pos=pos[k]
</code></pre>
<p>You can do that in normal Python, but Numba requires static types. You must assign an array of floats to an array of floats. For example, replacing the line by</p>
<pre><code> needed_pos = pos[k:k+1]
</code></pre>
<p>The same error message says you are trying to assign an int, and this indicates that <code>pos</code> receives an array of ints. You must pass an array of floats instead.</p>
<p>After those changes, Numba still complains here:</p>
<pre><code>needed_pos = []
needed_pos = np.array(needed_pos, dtype=np.float64)
</code></pre>
<p>with the message</p>
<pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Cannot infer the type of variable 'needed_pos', have imprecise type: list(undefined)<iv=None>.
</code></pre>
<p>because it doesn't know the type of the elements that <code>needed_pos</code> will contain.</p>
<p>You can replace those two lines with one that creates an array of size zero with a known type:</p>
<pre><code>needed_pos = np.array((0,), dtype=np.float64)
</code></pre>
<p>Now the program compiles and produces the same result with or without Numba.</p>
<p>But a problem remains. Numpy arrays work best when they have a fixed size. If you are continuously adding elements you'd better use lists (Numba lists in this case). This way for example:</p>
<pre><code>@nb.njit
def create_needed_pos(chr_pos, pos):
needed_pos = nb.typed.List.empty_list(nb.float64)
for i in range(len(chr_pos)):
for k in range(len(pos)):
if chr_pos[i] == pos[k]:
if i == k == 1:
needed_pos = nb.typed.List([pos[k]])
else:
needed_pos.append(pos[k])
return needed_pos
</code></pre>
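<p>A small usage sketch (assuming both inputs are float64 arrays, as required above):</p>
<pre><code>import numpy as np

chr_pos = np.array([1.0, 2.0, 3.0])
pos = np.array([2.0, 3.0, 5.0])
print(create_needed_pos(chr_pos, pos))  # typed list of the matching positions
</code></pre>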
|
numpy|numba
| 1
|
1,757
| 66,977,227
|
"Could not load dynamic library 'libcudnn.so.8'" when running tensorflow on ubuntu 20.04
|
<p>Note: there are many similar questions, but for different versions of Ubuntu and somewhat different specific libraries. I have not been able to figure out what combination of symbolic links and additional environment variables such as <code>LD_LIBRARY_PATH</code> would work.</p>
<p>Here is my <em>nvidia</em> configuration</p>
<pre><code>$ nvidia-smi
Tue Apr 6 11:35:54 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.80.02 Driver Version: 450.80.02 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 2070 Off | 00000000:01:00.0 Off | N/A |
| 18% 25C P8 9W / 175W | 25MiB / 7982MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1081 G /usr/lib/xorg/Xorg 20MiB |
| 0 N/A N/A 1465 G /usr/bin/gnome-shell 3MiB |
+-----------------------------------------------------------------------------+
</code></pre>
<p>When running a TF program the following happened:</p>
<pre><code>2021-04-06 14:35:01.589906: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudnn.so.8'; dlerror: libcudnn.so.8: cannot open shared object file: No such file or directory
2021-04-06 14:35:01.589914: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1757] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.
Skipping registering GPU devices...
</code></pre>
<p>Has anyone seen this particular mix and how did you resolve it?</p>
<p>Here is one of the additional fixes attempted, but with no change:</p>
<pre><code>conda install cudatoolkit=11.0
</code></pre>
|
<p>So I had the same issue. As the comments say, it's because you need to install CUDNN. For that, there is a guide <a href="https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html" rel="noreferrer">here</a>.</p>
<p>But as I know already your distro (Ubuntu 20.04) I can give you the command lines already:</p>
<pre><code>wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/${last_public_key}.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/ /"
sudo apt-get update
sudo apt-get install libcudnn8
sudo apt-get install libcudnn8-dev
</code></pre>
<p>where <code>${last_public_key}</code> is the last public key (file with <code>.pub</code> extension) published on <a href="https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/" rel="nofollow noreferrer">https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/</a>. (As of May 9th 2022, when this post was edited, it was <code>3bf863cc.pub</code>.)</p>
<p>And if you want to install a specific version, the last 2 commands would be replaced with</p>
<pre><code>sudo apt-get install libcudnn8=${cudnn_version}-1+${cuda_version}
sudo apt-get install libcudnn8-dev=${cudnn_version}-1+${cuda_version}
</code></pre>
<p>where
<code>${cudnn_version}</code> is for example <code>8.2.4.*</code> and <code>${cuda_version}</code> is for example <code>cuda11.0</code> (as I see you have 11.0 in the <code>nvidia-smi</code> output; I have not tested it, as mine was 11.4, but I guess it should work OK).</p>
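<p>After installation, a quick way to verify that TensorFlow can now see the GPU:</p>
<pre><code>python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</code></pre>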
|
python|tensorflow
| 41
|
1,758
| 66,806,959
|
How to assign a dataframe mean to specific rows of a dataframe?
|
<p>I have a data frame like this</p>
<pre><code>df_a = pd.DataFrame({'a': [2, 4, 5, 6, 12],
'b': [3, 5, 7, 9, 15]})
Out[112]:
a b
0 2 3
1 4 5
2 5 7
3 6 9
4 12 15
</code></pre>
<p>and its mean output:</p>
<pre><code>df_a.mean()
Out[118]:
a 5.800
b 7.800
dtype: float64
</code></pre>
<p>I want this;</p>
<pre><code>df_a[df_a.index.isin([3, 4])] = df.mean()
</code></pre>
<p>But I'm getting an error. How do I achieve this?
This is just an example; in the data I am working with there are many observations I need to change, and I keep their index values in a list.</p>
|
<p>If you want to overwrite the values of rows in a list, you can do it with <code>iloc</code></p>
<pre class="lang-py prettyprint-override"><code>df_a = pd.DataFrame({'a': [2, 4, 5, 6, 12], 'b': [3, 5, 7, 9, 15]})
idx_list = [3, 4]
df_a.iloc[idx_list,:] = df_a.mean()
</code></pre>
<p>Output</p>
<pre><code> a b
0 2.0 3.0
1 4.0 5.0
2 5.0 7.0
3 5.8 7.8
4 5.8 7.8
</code></pre>
<h1>edit</h1>
<p>If you're using an older version of <code>pandas</code> and see <code>NaN</code>s instead of wanted values, you can use a <code>for</code> loop</p>
<pre><code>df_a_mean = df_a.mean()
for i in idx_list:
df_a.iloc[i,:] = df_a_mean
</code></pre>
|
pandas|dataframe|row|mean|unassigned-variable
| 0
|
1,759
| 47,105,912
|
Pandas dataframe values not changing outside of function
|
<p>I have a pandas dataframe inside a for loop where I change a value in pandas dataframe like this:</p>
<pre><code>df[item].ix[(e1,e2)] = 1
</code></pre>
<p>However when I access the df, the values are still unchanged. Do you know where exactly am I going wrong?</p>
<p>Any suggestions?</p>
|
<p>You are using chained indexing, which usually causes problems. In your code, <code>df[item]</code> returns a series, and then <code>.ix[(e1,e2)] = 1</code> modifies that series, leaving the original dataframe untouched. You need to modify the original dataframe instead, like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'colA': [5, 6, 1, 2, 3],
'colB': ['a', 'b', 'c', 'd', 'e']})
print df
df.ix[[1, 2], 'colA'] = 111
print df
</code></pre>
<p>That code sets rows 1 and 2 of colA to 111, which I believe is the kind of thing you were looking to do. 1 and 2 could be replaced with variables of course.</p>
<pre><code> colA colB
0 5 a
1 6 b
2 1 c
3 2 d
4 3 e
colA colB
0 5 a
1 111 b
2 111 c
3 2 d
4 3 e
</code></pre>
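<p>Note that <code>.ix</code> has since been removed from pandas; the label-based equivalent is <code>df.loc[[1, 2], 'colA'] = 111</code>.</p>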
<p>For more information on chained indexing, see the documentation:
<a href="https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a></p>
<p><strong>Side note:</strong> you may also want to rethink your code in general since you mentioned modifying a dataframe in a loop. When using pandas, you usually can and should avoid looping and leverage set-based operations instead. It takes some getting used to, but it's the way to unlock the full power of the library.</p>
|
pandas|dataframe
| 0
|
1,760
| 68,130,307
|
Python Excel file to Dictionary
|
<p>I would like to create dependent comboboxes from an Excel file. If a value in combo1 is selected, combo2 will change depending on combo1. The input is as below.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">City</th>
<th style="text-align: right;">Name</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">AA</td>
<td style="text-align: right;">John</td>
</tr>
<tr>
<td style="text-align: left;">AA</td>
<td style="text-align: right;">Anne</td>
</tr>
<tr>
<td style="text-align: left;">BB</td>
<td style="text-align: right;">Sean</td>
</tr>
<tr>
<td style="text-align: left;">BB</td>
<td style="text-align: right;">Dylan</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried creating a pandas dataframe and using to_dict(), but it was not the result I expected.</p>
<p>The expected result is
{"AA": [["John"], ["Anne"]], "BB": [["Sean"], ["Dylan"]]}.</p>
<p>Thank you.</p>
<p>Edit : Code I have tried. If combo1 "AA" is selected, combo2 will display "John","Anne"</p>
<pre><code>df = (pd.read_excel("test.xls"))
# because I don't need the "City" and "Name" column labels
df.set_index('City')['Name'].to_dict()
</code></pre>
<p>Output :
<code>{'AA': 'Anne', 'BB': 'Dylan'}</code></p>
<pre><code>df.to_dict("r")
</code></pre>
<p>Output :
<code>[{'City': 'AA', 'Name': 'John'}, {'City': 'AA', 'Name': 'Anne'}, {'City': 'BB', 'Name': 'Sean'}, {'City': 'BB', 'Name': 'Dylan'}]</code></p>
<pre><code>df.to_dict("list")
</code></pre>
<p>Output :
<code>{'City': ['AA', 'AA', 'BB', 'BB'], 'Name': ['John', 'Anne', 'Sean', 'Dylan']}</code></p>
|
<p>Try via <code>groupby()</code>,<code>agg()</code> and <code>to_dict()</code> method:</p>
<pre><code>out=df.groupby('City')['Name'].agg(list).to_dict()
#you can also use apply() in place of agg() method
</code></pre>
<p>output of <code>out</code>:</p>
<pre><code>{'AA': ['John', 'Anne'], 'BB': ['Sean', 'Dylan']}
</code></pre>
|
python|pandas|pyqt
| 0
|
1,761
| 68,358,632
|
How to interpolate with groupby object in pandas?
|
<p>Original Dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>yyyymm</th>
<th>price</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>a</td>
<td>200101</td>
<td>3000</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200102</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200103</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200104</td>
<td>6000</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200101</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200102</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200103</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200104</td>
<td>3000</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200101</td>
<td>3000</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200102</td>
<td>np.nan</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200103</td>
<td>np.nan</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200104</td>
<td>np.nan</td>
</tr>
</tbody>
</table>
</div>
<p>I want the dataframe above to be like the following.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>a</th>
<th>b</th>
<th>yyyymm</th>
<th>price</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>a</td>
<td>200101</td>
<td>3000</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200102</td>
<td>4000</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200103</td>
<td>5000</td>
</tr>
<tr>
<td>1</td>
<td>a</td>
<td>200104</td>
<td>6000</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200101</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200102</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200103</td>
<td>np.nan</td>
</tr>
<tr>
<td>1</td>
<td>b</td>
<td>200104</td>
<td>3000</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200101</td>
<td>3000</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200102</td>
<td>np.nan</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200103</td>
<td>np.nan</td>
</tr>
<tr>
<td>2</td>
<td>a</td>
<td>200104</td>
<td>np.nan</td>
</tr>
</tbody>
</table>
</div>
<p>I tried using the code below. It seemed to be working, but I found that of two consecutive missing rows only the first was filled and the second wasn't.</p>
<pre><code>df.price = df.groupby(['a', 'b'])['price'].apply(
lambda group: group.interpolate(method='linear', limit=2, limit_area='inside')
)
</code></pre>
|
<p>This works as expected:</p>
<pre><code>df = pd.DataFrame({'a': [1,1,1,1,1,1,1,1,2,2,2,2],
'b': list('aaaabbbbaaaa'),
'yyyymm': [200101, 200102, 200103, 200104, 200101, 200102, 200103, 200104,
200101, 200102, 200103, 200104],
'price': [3000,np.NaN,np.NaN,6000,np.NaN,np.NaN,np.NaN,3000,3000,np.NaN,np.NaN,np.NaN]
})
df.groupby(['a', 'b'])['price'].apply(
lambda group: group.interpolate(method='linear', limit=2, limit_area='inside')
)
</code></pre>
<p>output:</p>
<pre><code>0 3000.0
1 4000.0
2 5000.0
3 6000.0
4 NaN
5 NaN
6 NaN
7 3000.0
8 3000.0
9 NaN
10 NaN
11 NaN
Name: price, dtype: float64
</code></pre>
|
pandas|dataframe
| 1
|
1,762
| 68,098,208
|
changing index of 1 row in pandas
|
<p>I have the below df, built from a pivot of a larger df. In this table 'week' is the index (dtype = object) and I need to show week 53 as the first row instead of the last.</p>
<p>Can someone advise, please? I tried reindex and custom sorting but can't find the way.
Thanks!</p>
<p><a href="https://i.stack.imgur.com/czXn5.png" rel="nofollow noreferrer">here is the table</a></p>
|
<p>One way of doing this would be:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(range(10))
new_df = df.loc[[df.index[-1]] + list(df.index[:-1])]  # keep the original index labels (matches the output below)
</code></pre>
<p>output:</p>
<pre><code> 0
9 9
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
</code></pre>
<p>Alternate method:</p>
<pre><code>new_df = pd.concat([df[df["Year week"]==52], df[~(df["Year week"]==52)]])
</code></pre>
|
python|pandas
| 0
|
1,763
| 68,344,150
|
Cythonize list of ndarrays to indirect_contiguous
|
<p>I want to cythonize a list of ndarrays (with different sizes) to speed up the performance of a function. A data structure of the type [:: view.indirect_contiguous,::1] seems the way to go, creating a contiguous array of pointers linked to contiguous memoryviews of different sizes, but it is not clear to me how to set it up properly. How do I set it up and how do I access its elements?</p>
<p>In the following MWE I put the simple sum of elements just to test the access of the elements (I am not interested in speeding it up with other formulations)</p>
<pre><code>from typing import List
import numpy as np
def python_foo(array_list: List[np.ndarray]):
list_len = len(array_list)
results = np.zeros((list_len,1), dtype=np.int8)
# prints few elements and sum them
print(array_list[0][0], array_list[0][1], array_list[1][0], array_list[1][1])
for k in range(list_len):
results[k] = np.sum(array_list[k])
return results
import cython
cimport numpy as np
from cython cimport view
from libc.stdio cimport printf
DTYPE = np.float64
ctypedef np.float64_t DTYPE_t
def cython_foo(array_list: List[np.ndarray]):
cdef int list_len = len(array_list)
cdef DTYPE_t[::view.indirect_contiguous, ::1] my_mem_view
# (1) - how do I assign the ndarrays to my_mem_view?
# prints few elements and sum them
# (2) - how do I access the elements of my_mem_view? is this correct?
printf("%f %f %f %f\n", my_mem_view[0,0], my_mem_view[0,1], my_mem_view[1,0], my_mem_view[1,1])
cdef DTYPE_t results[list_len] = {0}
cdef int k
cdef int n
for k in range(list_len):
        for n in range(array_list[k].size):  # should I also create an array of lengths?
results[k] += my_mem_view[k,n]
# BONUS question: I probably need to convert results to Python objects (list, ndarrays), right?
return results
</code></pre>
|
<p>Here's a possible solution that uses a temporary memoryview to get the pointer to the data. If anyone finds a better, cleaner or quicker answer please let me know.</p>
<p>I wonder if I got the memory management right or if something is missing.</p>
<pre><code># indirect_contiguous.pyx
from typing import List

import numpy as np

cimport numpy as np
from cpython.mem cimport PyMem_Malloc, PyMem_Free
from libc.stdio cimport printf
DTYPE = np.float64
ctypedef np.float64_t DTYPE_t
def cython_foo(array_list: List[np.ndarray]):
cdef int list_len = len(array_list)
# (1)
cdef DTYPE_t ** my_mem_view = <DTYPE_t **> PyMem_Malloc(list_len * sizeof(DTYPE_t *))
cdef int idx
cdef DTYPE_t[:] item_1 # cdef not allowed in conditionals
cdef DTYPE_t[:,:] item_2
cdef DTYPE_t[:,:,:] item_3
# ... other cdef for DTYPE_t[...] up to dimension 8 - the maximum allowed for a memoryview
cdef int *array_len = <int *> PyMem_Malloc(list_len * sizeof(int))
for idx in range(list_len):
if len(array_list[idx].shape) == 1:
item_1 = np.ascontiguousarray(array_list[idx], dtype=DTYPE)
my_mem_view[idx] = <DTYPE_t *> &item_1[0]
elif len(array_list[idx].shape) == 2:
item_2 = np.ascontiguousarray(array_list[idx], dtype=DTYPE)
my_mem_view[idx] = <DTYPE_t *> &item_2[0, 0]
elif len(array_list[idx].shape) == 3:
item_3 = np.ascontiguousarray(array_list[idx], dtype=DTYPE)
my_mem_view[idx] = <DTYPE_t *> &item_3[0, 0, 0]
# ... other elif for DTYPE_t[:,:,:] up to dimension 8
array_len[idx] = array_list[idx].size
# prints few elements and sum them
# (2)
printf("%f %f %f %f\n", my_mem_view[0][0], my_mem_view[0][1], my_mem_view[1][0], my_mem_view[1][1])
cdef DTYPE_t *results = <DTYPE_t *> PyMem_Malloc(list_len * sizeof(DTYPE_t))
cdef int k
cdef int n
for k in range(list_len):
results[k] = 0
for n in range(array_len[k]):
results[k] += my_mem_view[k][n]
py_results = []
for k in range(list_len):
py_results.append(results[k])
# free memory
PyMem_Free(my_mem_view)
PyMem_Free(array_len)
PyMem_Free(results)
return py_results
</code></pre>
<p>Testing the speed with</p>
<pre><code>import timeit

print(timeit.timeit(stmt="indirect_contiguous.python_foo([np.random.random((100,100,100)), np.random.random((100,100))])",
                    setup="import numpy as np; import indirect_contiguous;", number=100))
print(timeit.timeit(stmt="indirect_contiguous.cython_foo([np.random.random((100,100,100)), np.random.random((100,100))])",
                    setup="import numpy as np; import indirect_contiguous;", number=100))
</code></pre>
<p>I get a small improvement of about 2-3% (1.45 s vs. 1.42 s), possibly because I am just summing elements (for which numpy is already optimized).</p>
|
python|performance|numpy|cython
| 0
|
1,764
| 59,433,143
|
TensorFlowJS fails to load a JS generated image
|
<p>TensorFlowJS can load the image from HTML but cannot normally load an image from a JavaScript generated image object. </p>
<p>The code is shown as follows. The first group of loading methods can load the image from HTML. </p>
<pre><code>h = document.getElementById("dandelion");
let image1 = await tf.browser.fromPixels(h);
var x1 = document.createElement("CANVAS");
tf.browser.toPixels(image1, x1);
document.body.append(x1);
</code></pre>
<p>The second group of loading methods will generate an <code>online_img</code> with a link to an online image. But when I try to draw the loaded image, I only get a black square. </p>
<pre><code>let online_img = await document.createElement("IMG");
online_img.setAttribute("id", "loaded_image")
online_img.setAttribute("src", str);
online_img.setAttribute("width", "200");
online_img.setAttribute("height", "100");
await document.body.appendChild(online_img);
let image = await tf.browser.fromPixels(online_img);
var x = document.createElement("CANVAS");
tf.browser.toPixels(image, x);
document.body.append(x);
</code></pre>
<p>So I wonder if there is any way to successfully load the images <strong>with the second group of methods</strong>. </p>
<p>Thanks!! </p>
<p>The full HTML code is presented here. </p>
<pre><code><html>
<script src="https://unpkg.com/@tensorflow/tfjs"> </script>
<head></head>
<body>
<img id="dandelion" crossorigin height="320" width="480" src="https://images-na.ssl-images-amazon.com/images/I/719fF478nPL._SX425_.jpg"></img>
</body>
<script>
async function getImage_online(str) {
    // load and draw from html: success.
    // const image1 = tf.browser.fromPixels(dandelion);
    h = document.getElementById("dandelion");
    let image1 = await tf.browser.fromPixels(h);
    var x1 = document.createElement("CANVAS");
    tf.browser.toPixels(image1, x1);
    document.body.append(x1);

    // load the image from generated image: loaded image is black.
    let online_img = await document.createElement("IMG");
    online_img.setAttribute("id", "loaded_image")
    online_img.setAttribute("src", str);
    online_img.setAttribute("width", "200");
    online_img.setAttribute("height", "100");
    await document.body.appendChild(online_img);
    let image = await tf.browser.fromPixels(online_img);
    var x = document.createElement("CANVAS");
    tf.browser.toPixels(image, x);
    document.body.append(x);
}

async function newRun_online() {
    let c = await getImage_online("https://images-na.ssl-images-amazon.com/images/I/719fF478nPL._SX425_.jpg"); // c: rose
}

newRun_online();
</script>
</html>
</code></pre>
|
<p>It is failing because the image has not finished loading when <code>tf.browser.fromPixels</code> reads it; reading the pixels inside the <code>onload</code> handler fixes it (for remote images, also set <code>crossOrigin</code> before <code>src</code> so the canvas is not tainted):</p>
<pre><code>online_img.onload = () => {
  const image = tf.browser.fromPixels(online_img);
  tf.browser.toPixels(image, x);
};
</code></pre>
|
javascript|html|tensorflow|tensorflow.js
| 0
|
1,765
| 59,092,292
|
How do I reindex my dataframe and also apply some of the operations or transformation on that data at the same time i am performing reindexing
|
<p>I have tried by this piece of code, but its not working for me</p>
<pre><code>import pandas as pd
Df1=pd.DataFrame({Price:[10,20,30,40],Company:['Abcd','Efgh','Ijkl','mnop'],City:['Delhi','Bangalore','Bombay','Chennai']})
Df2=Df1.reindex(index=[0,2],columns=['Price',Company],Df1['Price'].fill_value=Df1['Price']*12)
print(Df2)
</code></pre>
<p>My Expected output is this :</p>
<pre><code>Price Company
10*12 Abcd
30*12 Efgh
</code></pre>
<p>A quick help is really appreciated.
Thanks in advance!!!!</p>
|
<p>I'm not sure your expected output is correct or worded as intended, so I'm making
three assumptions:</p>
<ol>
<li>10*12 should be 120 ? </li>
<li>corresponding Price values should be used? (so 20 and not 30 for Company 'Efgh')</li>
<li>you want the first 2 (or first 'x') rows? or any slice? (rather then condition based)</li>
</ol>
<p>You could create the new dataframe like this:</p>
<pre><code>import pandas as pd
df1=pd.DataFrame({'Price':[10,20,30,40],'Company':['Abcd','Efgh','Ijkl','mnop'],'City':['Delhi','Bangalore','Bombay','Chennai']})
df2 = pd.DataFrame({'Price': df1['Price'][0:2] * 12, 'Company': df1['Company'][0:2]})
print(df2)
Out:
Price Company
0 120 Abcd
1 240 Efgh
</code></pre>
|
python|pandas
| 0
|
1,766
| 59,071,835
|
Seaborn / MatplotLib Axis and Data Values formatting: Hundreds, Thousands, and Millions
|
<p>I have a problem, which, as far as I know hasn't been solved yet. </p>
<p>I need formatting of my axes and data points for my Seaborn / Matplotlib graphs to be dynamic. An example of what I'd like to achieve is below (done via Keynote, I used a log scale to make the point clearer).</p>
<p>What would be the best way to go about it? I saw answers that format either as Ks or Ms, but never as both. Am I missing something?</p>
<p><a href="https://i.stack.imgur.com/fS60G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fS60G.png" alt="enter image description here"></a></p>
<p>Right now I am using the FuncFormatter option</p>
<pre><code># assumes: from matplotlib.ticker import FuncFormatter
if options.get('ctype') == 'number':
    df = df.loc[df[ycat2] >= 1].copy()
    plt.figure(figsize=(14, 7.5))
    plot = sns.barplot(xcat1, ycat2, data=df, color='#00c6ff', saturation=1, ci=None)
    sns.despine(top=True, right=True, left=True)
    # plot.set_title('Instagram - Engagement Rate', fontweight='bold', y=1.04, loc='left', fontsize=12)
    plot.yaxis.grid(True)
    plot.yaxis.get_major_ticks()[0].label1.set_visible(False)
    plot.yaxis.set_major_formatter(FuncFormatter(lambda y, _: '{:,}'.format(int(y))))
    plot.set_xlabel('')
    plot.set_ylabel('')
    plot.tick_params(axis="x", labelsize=13)
    plot.tick_params(axis="y", labelsize=13)
    for i, bar in enumerate(plot.patches):
        h = bar.get_height()
        plot.text(
            i,
            h,
            '{:,}'.format(int(h)),
            ha='center',
            va='bottom',
            fontweight='heavy',
            fontsize=12.5)
    return plot.figure
</code></pre>
|
<p>Matplotlib has an <a href="https://matplotlib.org/api/ticker_api.html?highlight=engformatter#matplotlib.ticker.EngFormatter" rel="nofollow noreferrer">Engineering formatter</a> for specifically this purpose. You can use it to format the axis (using <code>set_major_formatter()</code>) or to format any number, using <a href="https://matplotlib.org/api/ticker_api.html?highlight=engformatter#matplotlib.ticker.EngFormatter.format_eng" rel="nofollow noreferrer"><code>EngFormatter.format_eng()</code></a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import EngFormatter

fmt = EngFormatter(places=0)

y = np.arange(1, 10)
data = np.exp(3 * y)

fig, ax = plt.subplots()
ax.set_xscale('log')
bars = ax.barh(y=y, width=data)
ax.xaxis.set_major_formatter(fmt)
for b in bars:
    w = b.get_width()
    ax.text(w, b.get_y() + 0.5 * b.get_height(),
            fmt.format_eng(w),
            ha='left', va='center')
</code></pre>
<p><a href="https://i.stack.imgur.com/vuJXv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vuJXv.png" alt="enter image description here"></a></p>
|
python|pandas|matplotlib|seaborn
| 2
|
1,767
| 59,256,876
|
Best strategy for reading multiple large csv files in python with multiprocessing?
|
<p>I am writing some code and hoping to improve it with multiprocessing.</p>
<p>Originally, I had the following code:</p>
<pre class="lang-py prettyprint-override"><code>with Pool() as p:
    lst = p.map(self._path_to_df, paths)
...
df = pd.concat(lst, ignore_index=True)
</code></pre>
<p>where <code>self._path_to_df()</code> basically just calls <code>pandas.read_csv(...)</code> and returns a pandas DataFrame.</p>
<p>Which results in the following error:</p>
<pre><code>.
.
.
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 268, in map
return self._map_async(func, iterable, mapstar, chunksize).get()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py", line 657, in get
raise self._value
multiprocessing.pool.MaybeEncodingError: Error sending result: '[ ts id.orig ... successful history_category
0 1.331901e+09 ... True other
1 1.331901e+09 ... True ^
2 1.331901e+09 ... True Sh
3 1.331901e+09 ... True Sh
4 1.331901e+09 ... True Sh
... ... ... ... ...
23192090 1.332018e+09 ... False other
23192091 1.332017e+09 ... True other
23192092 1.332018e+09 ... True other
23192093 1.332018e+09 ... True other
23192094 1.332018e+09 ... True other
[23192095 rows x 24 columns]]'. Reason: 'error("'i' format requires -2147483648 <= number <= 2147483647")'
</code></pre>
<p>The error comes from one of the files being too large for <code>self._path_to_df()</code> to return its DataFrame through multiprocessing.</p>
<p>There are potentially multiple files of varying sizes (small to very big 3GB+) involved so I was trying to figure out what is the best way to use multiprocessing for this task.</p>
<p>Should I somehow chunk all the data so that <code>p.map()</code> can work or is that too much overhead? If so, how would I do that? Should I use multiprocessing in the reading of each file and look at each file sequentially?</p>
<p>Edit: Additionally, it doesn't seem to error when only smaller files are involved.</p>
|
<p>If the end result is too large to fit into memory, try dask:</p>
<pre><code>import dask.dataframe as dd
df = dd.read_csv('*.csv')
</code></pre>
<p>then once it is read, you can do your aggregates, etc. and finally compute to get your desired answer.</p>
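<p>For example, a sketch of an aggregation that only materializes the final result (the column names here are hypothetical, borrowed from the error output above):</p>
<pre><code>import dask.dataframe as dd

df = dd.read_csv('*.csv')
# nothing is read until compute() triggers the lazy pipeline
result = df.groupby('history_category')['ts'].count().compute()
</code></pre>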
|
python|pandas|multiprocessing
| 0
|
1,768
| 59,462,913
|
Python group by and Combine all Text
|
<p>I'm working on an NLP project in Python. Is there a way to group all feedback below per specific Issue Group?</p>
<pre class="lang-py prettyprint-override"><code>Out[40]:
Issue Group Feedback
24 Accessories Nope, just make a longer charging cord :)
49 Accessories Everything was very helpful and nice handled
1003 Connectivity kEEP DOING WHAT YOU ARE DOING.
2003 Connectivity None! Keep up the good work!
</code></pre>
<p>Desired result will be:</p>
<pre><code>Issue Group Feedback
Accessories Nope, just make a longer charging cord :) Everything was very helpful and nice handled
Connectivity kEEP DOING WHAT YOU ARE DOING None! Keep up the good work!
</code></pre>
|
<p>You can try groupby,</p>
<pre><code>df.groupby('Issue Group').agg(lambda x: ','.join(x))
</code></pre>
<p>Output of this will be text separated by comma ,</p>
<pre><code>Nope, just make a longer charging cord :),Everything was very helpful and nice handled
kEEP DOING WHAT YOU ARE DOING,None! Keep up the good work!
</code></pre>
<p>If you want list in the output,</p>
<pre><code>df.groupby('Issue Group').agg(list)
</code></pre>
<p>Output of this will be in the form list as follows,</p>
<pre><code>['Nope, just make a longer charging cord :)', 'Everything was very helpful and nice handled']
['kEEP DOING WHAT YOU ARE DOING', 'None! Keep up the good work!']
</code></pre>
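<p>To match the space-separated feedback shown in the desired result, joining on a space instead should work:</p>
<pre><code>df.groupby('Issue Group')['Feedback'].agg(' '.join).reset_index()
</code></pre>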
|
python|pandas|join|nlp
| 1
|
1,769
| 44,923,701
|
How to find matching python data frame values with other data frame
|
<p>I want to update in df1['Result'] as True or False if df1['Field1'] values exist in other dataframe df2['SersName']</p>
<p>Please help...</p>
<p>df1:</p>
<pre><code>Field1 Field2 Result
2020RATIO001001 A TRUE
2020RATIO001003 B TRUE
2020RATIO001005 C TRUE
2020RATIO001XYZ D FALSE
2020RATIO001123 E FALSE
</code></pre>
<p>df2:</p>
<pre><code>SersName Field2
2020RATIO001001 1
2020RATIO001003 2
2020RATIO001005 3
2020RATIO001007 4
2020RATIO001009 5
2020RATIO001011 6
2020RATIO001013 7
2020RATIO001015 8
</code></pre>
<p>I tried below script:</p>
<pre><code>SeriesFileNameMap = df1['Field1'].set_index('Field1')
SeriesFileNameMap.update(df2.set_index('SersName'))
df1['Result'] = SeriesFileNameMap.values
</code></pre>
|
<p>You need to use <code>isin</code></p>
<pre><code>from io import StringIO

import pandas as pd
df1 = pd.read_csv(StringIO("""Field1 Field2
2020RATIO001001 A
2020RATIO001003 B
2020RATIO001005 C
2020RATIO001XYZ D
2020RATIO001123 E"""),sep=" ")
df2 = pd.read_csv(StringIO("""SersName Field2
2020RATIO001001 1
2020RATIO001003 2
2020RATIO001005 3
2020RATIO001007 4
2020RATIO001009 5
2020RATIO001011 6
2020RATIO001013 7
2020RATIO001015 8"""),sep=" ")
df1['Result'] = df1['Field1'].isin(df2['SersName'])
</code></pre>
<p>The resultant df1 is:</p>
<pre><code>Field1 Field2 Result
0 2020RATIO001001 A True
1 2020RATIO001003 B True
2 2020RATIO001005 C True
3 2020RATIO001XYZ D False
4 2020RATIO001123 E False
</code></pre>
|
python|pandas|dataframe
| 0
|
1,770
| 56,944,155
|
Merging two string lists in pandas
|
<p>I have a dataframe with two series x and y. I want to merge them to create a new series: tag, but I'm not able to achieve the expected output. I've tried:</p>
<pre><code>df['tag'] = df['x'] + df['y']
</code></pre>
<p>I've looked everywhere and haven't been able to find a solution to the problem.</p>
<p><strong>Current output:</strong></p>
<pre><code>x y tag
['fast food', 'american'] ['chicken'] ['fast food', 'american']['chicken']
</code></pre>
<p><strong>Expected output:</strong></p>
<pre><code>x y tag
['fast food', 'american'] ['chicken'] ['fast food', 'american', 'chicken']
</code></pre>
<p><strong>df.to_dict()</strong></p>
<pre><code>{'x': "['fast food', 'american']",
'y': "['chicken']"}
</code></pre>
|
<p>I do not think those are <code>list</code>s (they are strings), so you may convert them into <code>list</code>s first; then you can <code>sum</code> them:</p>
<pre><code>import ast
df.x = df.x.apply(ast.literal_eval)
df.y = df.y.apply(ast.literal_eval)
df['tag'] = df['x'] + df['y']
</code></pre>
<hr>
<p>More info </p>
<pre><code>df=pd.DataFrame()
df['y']=["['chicken']"]
df['x']=["['fast food', 'american']"]
df.applymap(type)
Out[295]:
y x
0 <class 'str'> <class 'str'>
df.x = df.x.apply(ast.literal_eval)
df.y = df.y.apply(ast.literal_eval)
df.applymap(type)
Out[297]:
y x
0 <class 'list'> <class 'list'>
</code></pre>
|
python|pandas
| 2
|
1,771
| 57,023,470
|
Pandas how to keep the LAST trailing zeros when exporting DataFrame into CSV
|
<p>In this question, my goal is to preserve the <strong>last</strong> <strong>trailing</strong> <code>zeros</code> when exporting the <code>DataFrame</code> to <code>CSV</code></p>
<p>My <code>dataset</code> looks like this:</p>
<pre><code>EST_TIME Open High
2017-01-01 1.0482 1.1200
2017-01-02 1.0483 1.1230
2017-01-03 1.0485 1.0521
2017-01-04 1.0480 1.6483
2017-01-05 1.0480 1.7401
...., ...., ....
2017-12-31 1.0486 1.8480
</code></pre>
<p>I import and create a DataFrame and save to CSV by doing this:</p>
<pre><code>df_file = '2017.csv'
df.to_csv(df_file, index=False)
files.download(df_file)
</code></pre>
<p>When I view the CSV, I see this:</p>
<pre><code>EST_TIME Open High
2017-01-01 1.0482 1.12
2017-01-02 1.0483 1.123
2017-01-03 1.0485 1.0521
2017-01-04 1.048 1.6483
2017-01-05 1.048 1.7401
...., ...., ....
2017-12-31 1.0486 1.848
</code></pre>
<p>All the zeros at the end are gone. I want to preserve the trailing zeros when I save the CSV and I want it at 4 decimal place.</p>
<p>Could you please let me know how can I achieve this?</p>
|
<blockquote>
<p>Try this: Float format both to display your data with 4 decimal places
and to save it with 4 decimal.</p>
<p>when reading to pandas:</p>
</blockquote>
<pre><code>pd.options.display.float_format = '{:,.4f}'.format
</code></pre>
<blockquote>
<p>when saving to CSV.</p>
</blockquote>
<pre><code>df.to_csv('your_file.csv', float_format='%.4f',index=False)
</code></pre>
<p><a href="https://i.stack.imgur.com/smCNV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/smCNV.png" alt="enter image description here"></a></p>
|
python|python-3.x|pandas|dataframe|export-to-csv
| 3
|
1,772
| 57,063,872
|
Weird tf.Print bug
|
<p>I am trying to use a tf.Print like this:</p>
<pre><code>residual = tf.Print(residual, [residual], message='enc', summarize=100)
</code></pre>
<p>but it crashes with this error:</p>
<pre><code>ValueError: Single tensor passed to 'data', expected list while building NodeDef 'tf_op_layer_tf_op_layer_TransformerEncoder/TransformerEncoderBlock/Print/TransformerEncoder/TransformerEncoderBlock/Print' using Op<name=Print; signature=input:T, data: -> output:T; attr=T:type; attr=U:list(type),min=0; attr=message:string,default=""; attr=first_n:int,default=-1; attr=summarize:int,default=3; is_stateful=true>
</code></pre>
<p>This makes no sense to me because the data argument is wrapped in a list.</p>
|
<p>Found answer here:</p>
<p><a href="https://epcsirmaz.blogspot.com/2018/06/display-full-value-of-tensor-in.html" rel="nofollow noreferrer">https://epcsirmaz.blogspot.com/2018/06/display-full-value-of-tensor-in.html</a></p>
<p>Basically, when using Keras, you have to wrap it in a lambda layer.</p>
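<p>A minimal sketch of that wrapping, assuming TF 1.x with the Keras functional API (<code>residual</code> is the tensor from the question):</p>
<pre><code>from tensorflow.keras.layers import Lambda

# wrap the print op in a Lambda layer so Keras treats it as a layer output
residual = Lambda(
    lambda t: tf.Print(t, [t], message='enc', summarize=100)
)(residual)
</code></pre>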
|
python|python-3.x|tensorflow
| 2
|
1,773
| 57,041,639
|
How to make piechart from Pandas Dataframe
|
<p>I have created a shape (1,105) dataframe which has the classroom number as column name and the only row of the dataframe contains the total number of students in each classroom inside their appropriate column. I would like to make a piechart with as labels the column names and as data the corresponding number inside the first row. </p>
<p>Thank you very much in advance.</p>
|
<p>To plot a pie chart from your DataFrame very simply, you can do:</p>
<pre><code>df.transpose().plot.pie(subplots=True)
</code></pre>
<p>I tried it with this simple DataFrame:</p>
<pre><code>df = pd.DataFrame([[24, 65, 13, 23, 10]], columns=[1, 2, 3, 4, 5])
</code></pre>
|
python|pandas|pie-chart
| 0
|
1,774
| 56,896,021
|
Model.fit in keras with multi-label classification
|
<p>I'm trying to learn how to implement my own dataset on the model seen here: <a href="https://keras.io/examples/cifar10_resnet/" rel="nofollow noreferrer">resnet</a>, which is just a resnet model written in keras. Within the code they write this line:</p>
<pre><code>(x_train, y_train), (x_test, y_test) = cifar10.load_data()
</code></pre>
<p>and then use the respective data to 'Convert class vectors to binary class matrices.' </p>
<pre><code>y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
</code></pre>
<p>and then pass these values into the fit function for the model that was built like so:</p>
<pre><code>model.fit(x_train, y_train,
          batch_size=batch_size,
          epochs=epochs,
          validation_data=(x_test, y_test),
          shuffle=True,
          callbacks=callbacks)
</code></pre>
<p>I believe that I can create the x_train by doing something similar to(assumes i have an array of image paths):</p>
<pre><code>#pseudocode
x_train = nparray
for image in images:
    im = PIL.Image.open(image).asNumpy()
    x_train.append(im)
</code></pre>
<p>Is the above correct?</p>
<p>As for y_train I do not quite understand what is being passed into model.fit; is it an array of one-hot encoded arrays? So if I had 3 images containing a cat and dog, a dog, and a cat respectively, would the y_train be </p>
<pre><code>[
[1, 1, 0],#cat and dog
[0, 1, 0],#dog
[1, 0, 0]#cat
]
</code></pre>
<p>or am I mistaken on this as well?</p>
|
<p>So, <code>model.fit()</code> expects <code>x_train</code> as the features and <code>y_train</code> as the labels for a particular classification problem. I'll be taking into consideration <strong>multiclass image classification</strong>.</p>
<ul>
<li><p><code>x_train</code>: For image classification, this argument will have the shape <code>(num_images, width, height, num_channels )</code>. Where <code>num_images</code> refers to the number of images present in a training batch. See <a href="https://stats.stackexchange.com/questions/153531/what-is-batch-size-in-neural-network">here</a>.</p></li>
<li><p><code>y_train</code>: The labels which are one-hot encoded. The required shape is <code>(num_images, num_classes )</code>.</p></li>
</ul>
<blockquote>
<p>Notice the <code>num_images</code> is common in both the arguments. You need to take care to ensure that there is an equal number of images and labels.</p>
</blockquote>
<p>Hope that helps.</p>
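<p>As for the question's pseudocode, a sketch of building <code>x_train</code> from image paths could look like this (an assumption: all images have the same size, and <code>images</code> is the list of paths from the question):</p>
<pre><code>import numpy as np
from PIL import Image

# stack per-image arrays into shape (num_images, width, height, num_channels)
x_train = np.stack([np.asarray(Image.open(p)) for p in images])
</code></pre>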
|
python|tensorflow|keras
| 2
|
1,775
| 35,376,293
|
Extracting selected feature names from scikit pipeline
|
<pre><code># Load dataset
iris = datasets.load_iris()
X, y = iris.data, iris.target
rf_feature_imp = RandomForestClassifier(100)
feat_selection = SelectFromModel(rf_feature_imp, threshold=0.5)
clf = RandomForestClassifier(5000)
model = Pipeline([
('fs', feat_selection),
('clf', clf),
])
params = {
'fs__threshold': [0.5, 0.3, 0.7],
'fs__estimator__max_features': ['auto', 'sqrt', 'log2'],
'clf__max_features': ['auto', 'sqrt', 'log2'],
}
gs = GridSearchCV(model, params, ...)
gs.fit(X,y)
</code></pre>
<p>The above code is based on <a href="https://stackoverflow.com/questions/35256876/ensuring-right-order-of-operations-in-random-forest-classification-in-scikit-lea">Ensuring right order of operations in random forest classification in scikit learn</a></p>
<p>Since I am using SelectFromModel, I would like to print the names of the features that were selected (in the SelectFromModel pipeline), but not sure how to extract them.</p>
|
<p><a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectFromModel.html" rel="noreferrer"><code>SelectFromModel</code></a> has a <a href="http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectFromModel.html#sklearn.feature_selection.SelectFromModel.get_support" rel="noreferrer"><code>get_support()</code></a> method that returns a boolean mask for the features that were selected. So you could do (in addition to the preliminary step described by @David Maust):</p>
<pre><code>feature_names = np.array(iris.feature_names)
selected_features = feature_names[fs.get_support()]
</code></pre>
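<p>If you are working from the fitted grid search, one way to reach the fitted selector (assuming the <code>'fs'</code> step name used in the pipeline above) is:</p>
<pre><code>import numpy as np

fs = gs.best_estimator_.named_steps['fs']  # the fitted SelectFromModel step
feature_names = np.array(iris.feature_names)
selected_features = feature_names[fs.get_support()]
</code></pre>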
|
python|numpy|scikit-learn
| 6
|
1,776
| 11,697,887
|
Converting Django QuerySet to pandas DataFrame
|
<p>I am going to convert a Django QuerySet to a pandas <code>DataFrame</code> as follows:</p>
<pre><code>qs = SomeModel.objects.select_related().filter(date__year=2012)
q = qs.values('date', 'OtherField')
df = pd.DataFrame.from_records(q)
</code></pre>
<p>It works, but is there a more efficient way?</p>
|
<pre><code>import pandas as pd
import datetime
from myapp.models import BlogPost
df = pd.DataFrame(list(BlogPost.objects.all().values()))
df = pd.DataFrame(list(BlogPost.objects.filter(date__gte=datetime.datetime(2012, 5, 1)).values()))
# limit which fields
df = pd.DataFrame(list(BlogPost.objects.all().values('author', 'date', 'slug')))
</code></pre>
<p>The above is how I do the same thing. The most useful addition is specifying which fields you are interested in. If it's only a subset of the available fields you are interested in, then this would give a performance boost I imagine.</p>
|
python|django|pandas
| 135
|
1,777
| 50,808,911
|
Segmentation fault when opencv and tensorflow both use libgtk
|
<p>When OpenCV and tensorflow both use libgtk, a segmentation fault occurs. I have given below a simple script that creates the problem, the relevant hardware and software versions, and a stack trace. FWIW, the same versions of opencv, tensorflow, pandas etc worked just fine when I installed them on another machine in March. Not sure exactly what has changed. </p>
<p><strong>How to create the problem</strong></p>
<p>The following script works just fine and creates the window as expected:</p>
<pre><code>import cv2
cv2.namedWindow('frame')
</code></pre>
<p>However, if I add a line "import pandas" or "import tensorflow" anywhere above, I get a segmentation fault. For example..</p>
<pre><code>import tensorflow
import cv2
cv2.namedWindow('frame')
</code></pre>
<p><strong>Relevant hardware and software information</strong>:</p>
<p>Hardware x86 architecture (Intel I5 core)
GPU GTX 1060
OS Linux Mint 18.2</p>
<p>Ubuntu kernel version 4.8.0-53-generic #56~16.04.1-Ubuntu
OpenCV version 3.4
Tensorflow version 1.4.1
Pandas version 0.20.1</p>
<p>CUDA 9.1
Nvidia driver 396.26</p>
<p><strong>A few things I have tried</strong></p>
<ol>
<li>Compiling OpenCV with GTK2.4 and GTK3. Same result.</li>
<li>Changing NVIDIA driver version.</li>
</ol>
<p>I plan to try CUDA 9.0 next, though honestly I don't know what that has to do with anything. </p>
<p><strong>Segmentation fault backtrace</strong> (output of <code>(gdb) bt</code>):</p>
<pre><code>#0 0x000000000052b88c in ?? ()
#1 0x00000000005653ab in PyErr_WarnEx ()
#2 0x00007fff840f7938 in ?? () from /usr/lib/python2.7/dist-packages/gobject/_gobject.x86_64-linux-gnu.so
#3 0x00007fffd539e9a4 in g_logv () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#4 0x00007fffd539ebcf in g_log () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#5 0x00007fffd5690d7d in ?? () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#6 0x00007fffd569107b in g_type_register_static () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#7 0x00007fffd5691695 in g_type_register_static_simple () from /usr/lib/x86_64-linux-gnu/libgobject-2.0.so.0
#8 0x00007fffd5e173a4 in gdk_display_manager_get_type () from /usr/lib/x86_64-linux-gnu/libgdk-3.so.0
#9 0x00007fffd5e17409 in gdk_display_manager_get () from /usr/lib/x86_64-linux-gnu/libgdk-3.so.0
#10 0x00007fffd62fcc8b in ?? () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#11 0x00007fffd62d420b in ?? () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#12 0x00007fffd53a2f67 in g_option_context_parse () from /lib/x86_64-linux-gnu/libglib-2.0.so.0
#13 0x00007fffd62d3fe8 in gtk_parse_args () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#14 0x00007fffd62d4049 in gtk_init_check () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#15 0x00007fffd62d4099 in gtk_init () from /usr/lib/x86_64-linux-gnu/libgtk-3.so.0
#16 0x00007fffeef176c3 in cvInitSystem () from /usr/local/lib/libopencv_highgui.so.3.4
#17 0x00007fffeef1a764 in cvNamedWindow () from /usr/local/lib/libopencv_highgui.so.3.4
#18 0x00007fffeef1aead in cvShowImage () from /usr/local/lib/libopencv_highgui.so.3.4
#19 0x00007fffeef11349 in cv::imshow(cv::String const&, cv::_InputArray const&) () from /usr/local/lib/libopencv_highgui.so.3.4
#20 0x00007ffff67078d3 in pyopencv_cv_imshow(_object*, _object*, _object*) () from /usr/local/lib/python2.7/dist-packages/cv2.so
#21 0x00000000004bc3fa in PyEval_EvalFrameEx ()
#22 0x00000000004c136f in PyEval_EvalFrameEx ()
#23 0x00000000004c136f in PyEval_EvalFrameEx ()
#24 0x00000000004b9ab6 in PyEval_EvalCodeEx ()
#25 0x00000000004eb30f in ?? ()
#26 0x00000000004e5422 in PyRun_FileExFlags ()
#27 0x00000000004e3cd6 in PyRun_SimpleFileExFlags ()
#28 0x0000000000493ae2 in Py_Main ()
#29 0x00007ffff7810830 in __libc_start_main (main=0x4934c0 <main>, argc=2, argv=0x7fffffffe058, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>,
stack_end=0x7fffffffe048) at ../csu/libc-start.c:291
#30 0x00000000004933e9 in _start ()
</code></pre>
|
<p>The basic problem here is that the same process was using two different versions of gtk.</p>
<p>My OpenCV used GTK3.
Python uses GTK2 (via <code>import gtk</code>). Tensorflow and pandas both import gtk somewhere in their processing, hence they hit the same problem.</p>
<p>For now, I have worked around this by recompiling opencv with gtk2. Other alternatives would be..</p>
<ol>
<li><p>Upgrade python to gtk3. I looked around at the available resources on this, and wasn't convinced they were solid enough. </p></li>
<li><p>Compile both tensorflow and pandas with GTK3 (currently I've just imported pre-compiled libraries for both). Obviously, this is not a robust solution since some other package may use the built-in gtk leading to the same problem again. </p></li>
</ol>
|
tensorflow|gtk3|opencv3.0
| 0
|
1,778
| 51,097,526
|
Making a 2 dimensional list/matrix with different number of columns for each row
|
<p>I would like to make a list or matrix that has a known number of rows (3), but for each row the number of elements will be different.
So it could look something like this:</p>
<pre><code>[[4, 6, 8],
[1, 2, 3, 4],
[0, 2, 3, 4, 8]]
</code></pre>
<p>Each row will have a known maximum of elements (8). </p>
<p>So far I have tried the following:</p>
<pre><code>Sets1 = np.zeros((3, 8))
for j in range(3):
    Sets1[0, :] = [i for i, x in enumerate(K[:-1]) if B[x, j] == 1 or B[x+1, j] == 1]
</code></pre>
<p>I want this because I want to have a list for each j in range(3) over which I can do a for loop and add constraints to my ILP.</p>
<p>Any help will be greatly appreciated!</p>
|
<p>Here is a sample of what your code should look like:</p>
<pre><code>from random import choice

myMatrix = [
    [],
    [],
    []
]
elementsNums = [1, 2, 3, 4, 5, 6, 7, 8]  # possible lengths of each list in the matrix
elements = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  # pool of random values for each list in the matrix

def addElements(Num, List, Matrix):
    count = 0
    a = choice(Num)  # length of first list in matrix
    Num.remove(a)    # removing to prevent same-length lists in matrix
    b = choice(Num)  # length of second list in matrix
    Num.remove(b)
    c = choice(Num)  # length of third list in matrix
    nums = [a, b, c]
    for x in nums:          # iterating through each list (a, b, c)
        for i in range(x):  # placing x number of values in each list
            Matrix[count].append(choice(List))  # adding the values to the list
        count += 1  # moving to the next list in the matrix

addElements(elementsNums, elements, myMatrix)
print(myMatrix)
</code></pre>
|
python|python-2.7|list|numpy|arraylist
| 0
|
1,779
| 51,100,921
|
How to find out how many times a max value occurs in pandas?
|
<p>I wondered how to solve the following problem in pandas: </p>
<p>I have a dataframe with a number of rows that have different values and would like to find out how often the highest value occurs per row. I have used <code>df2['MAX_Value']=df2.max(axis=1)</code> to get the highest value per row. </p>
<p>This is an example of my dataframe:</p>
<pre><code>Col1 Col2 Col3 Col4 Col5 Col6 MAX_Value
0 5 6 6 6 3 6
</code></pre>
<p>Thank you! </p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a> after comparing with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a> and summing per row:</p>
<pre><code>max_val = df2.max(axis=1)
count_max = df2.eq(max_val, axis=0).sum(axis=1)
</code></pre>
<p>To improve performance it is possible to use <code>numpy</code>:</p>
<pre><code>arr = df2.values
max_val = arr.max(axis=1)
count_max = (arr == max_val[:, None]).sum(axis=1)
</code></pre>
<hr>
<pre><code>df = df2.assign(MAX_Value = max_val, No = count_max)
print (df)
Col1 Col2 Col3 Col4 Col5 Col6 MAX_Value No
0 0 5 6 6 6 3 6 3
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (df2.eq(max_val, axis=0))
Col1 Col2 Col3 Col4 Col5 Col6
0 False False True True True False
</code></pre>
|
python|pandas|dataframe|max
| 0
|
1,780
| 33,324,652
|
String Kernel SVM with Scikit-learn
|
<p>I am new to scikit-learn and I saw a sample solution posted in one of other questions on string kernels in scikitearn on Stackoverflow. So i tried that out, but I am getting this error message:</p>
<pre><code>>>> X = np.arange(len(data)).reshape(-1, 1)
>>> X
array([[0],
[1],
[2]])
>>> def string_kernel(X, Y):
...     R = np.zeros((len(x), len(y)))
...     for x in X:
...         for y in Y:
...             i = int(x[0])
...             j = int(y[0])
...             R[i, j] = data[i][0] == data[j][0]
...     return R
>>> clf = SVC(kernel=string_kernel)
>>> clf.fit(X, ['no', 'yes', 'yes'])
</code></pre>
<p>This is the Error message I get:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/sklearn/svm/base.py", line 178, in fit
fit(X, y, sample_weight, solver_type, kernel, random_seed=seed)
File "/Library/Python/2.7/site-packages/sklearn/svm/base.py", line 217, in _dense_fit
X = self._compute_kernel(X)
File "/Library/Python/2.7/site-packages/sklearn/svm/base.py", line 345, in _compute_kernel
kernel = self.kernel(X, self.__Xfit)
File "<stdin>", line 2, in string_kernel
UnboundLocalError: local variable 'x' referenced before assignment
</code></pre>
|
<p>The specific error you're seeing actually has nothing to do with SVM. On this line in your <code>string_kernel</code> function:</p>
<pre><code>R = np.zeros((len(x), len(y)))
</code></pre>
<p>lowercase <code>x</code> (and <code>y</code>) are currently undefined, hence the <code>UnboundLocalError</code>. <s>You</s> <a href="https://stackoverflow.com/a/26413868/1461210">larsmans</a> probably meant <code>len(X)</code> and <code>len(Y)</code>.</p>
|
python|numpy|machine-learning|scikit-learn|svm
| 1
|
1,781
| 9,595,898
|
numpy array of histograms
|
<p>I am currently working with a 2d numpy object array filled with collections.Counter objects.
Each counter is basically a histogram.</p>
<ul>
<li>Keys are always from a limited set of integers eg between 0 and 1500</li>
<li>number of items in each counter is variable, most are small but some have every key</li>
</ul>
<p>This all works fine for my needs with smaller datasets but with a dataset around the 500 million cells mark the memory use is around 120Gb which is a little high.</p>
<p>Interestingly numpy.save writes it out to a 4gb file which makes me think there is something better i can be doing.</p>
<p>Any suggestions on how I can reduce my memory usage?</p>
<p>I considered a 3d array but because of the amount of empty counts it would have to hold it required even more memory.</p>
<p>I make lots of use of counter.update in constructing the array so any method needs a quick/neat way of getting similar functionality.</p>
<p>Access after the data is created isn't a big issue, as long as for each cell I can get the value for each key - no need for dictionary indexing.</p>
<p>Below is a very simplified example that produces a small dataset that is roughly analogous to what ive described above. My code would have a skew further towards less keys per counter and higher counts per key</p>
<pre><code>import collections
import numpy as np

def counterArray_init(v):
    return collections.Counter([v])

e = np.random.random_integers(0, 1500, [10, 10])
row_len, col_len = e.shape
counterArray = np.zeros([row_len, col_len], dtype=object)
vinit = np.vectorize(counterArray_init)
counterArray[:, :] = vinit(e)
for row in xrange(1, row_len):
    for col in xrange(0, col_len):
        counterArray[row, col].update(counterArray[row - 1, col])
</code></pre>
<p>Thanks</p>
<p>Edit: I have realised that in my smaller counters the keys used fall within a small range. The random example code above is not a good example of this behaviour.
As a result I am investigating using an object array filled with different-length int arrays and a separate array that stores the minimum key value for each of those int arrays. It seems like an odd solution, but initial testing looks like it's using only about 20% of the memory used by the counter method.</p>
|
<p>If there are lots of empty counts, a sparse matrix representation may be a good fit, where the memory use is proportional to the number of non-empty elements in the array. SciPy has decent support for what it sounds like you're looking at: <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html" rel="nofollow">scipy.sparse</a></p>
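<p>A minimal sketch of how that could map onto this use case (my own assumed layout: one row per cell, one column per possible key, counts stored sparsely):</p>
<pre><code>import numpy as np
from scipy import sparse

row_len, col_len, n_keys = 10, 10, 1501
counts = sparse.lil_matrix((row_len * col_len, n_keys), dtype=np.int64)

# the equivalent of counter.update([k]) for cell (r, c):
r, c, k = 3, 4, 120
counts[r * col_len + c, k] += 1
</code></pre>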
|
python|numpy
| 1
|
1,782
| 66,534,803
|
How to scatter plot values in a range color with cartopy?
|
<p>I have this <code>df</code>:</p>
<pre><code> STATION LONGITUDE LATITUDE TMAXPERC
0 000130 -80.45750 -3.81333 9.034495
1 000132 -80.45722 -3.50833 7.291477
2 000134 -80.23722 -3.57611 15.760175
3 000135 -80.32194 -3.44056 5.256434
4 000136 -80.66083 -3.94889 12.301515
.. ... ... ... ...
366 158323 -69.99972 -17.55917 57.318854
367 158325 -69.94889 -17.66083 87.762365
368 158326 -69.74556 -17.18639 40.719109
369 158327 -69.81306 -17.23722 86.950173
370 158328 -69.77944 -17.52500 53.413032
[371 rows x 4 columns]
</code></pre>
<p>I'm making a scatter plot with latitude and longitude with this code.</p>
<pre><code>import cartopy.crs as ccrs
import pandas as pd
import cartopy
%matplotlib inline
import numpy as np
import cartopy.io.img_tiles as cimgt
import cartopy.io.shapereader as shpreader
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
data = pd.ExcelFile("path/filename.xlsx")
df =pd.read_excel(data,'Hoja1',dtype={'STATION': str})
df=df[['STATION','LONGITUDE','LATITUDE','TMAXPERC']]
reader = shpreader.Reader('C:/Ubuntu/Python/Shapefile/Dz/direcciones.shp')
counties = list(reader.geometries())
COUNTIES = cfeature.ShapelyFeature(counties, ccrs.PlateCarree())
plt.figure(figsize=(15,15))
stamen_terrain = cimgt.Stamen('terrain-background')
m10 = plt.axes(projection=stamen_terrain.crs)
plt.scatter(
    x=df["LONGITUDE"],
    y=df["LATITUDE"],
    color="red",
    s=4,
    alpha=1,
    transform=ccrs.PlateCarree()
)
# (x0, x1, y0, y1)
m10.set_extent([-85, -65, 2, -20], ccrs.PlateCarree())
# add map, zoom level
m10.add_image(stamen_terrain, 8)
m10.add_feature(COUNTIES, facecolor='none', edgecolor='black')
m10.legend()
m10.gridlines()
plt.show()
</code></pre>
<p>And this is the result:
<a href="https://i.stack.imgur.com/acDli.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/acDli.png" alt="enter image description here" /></a></p>
<p>But i want to make a scatter plot with <code>df['TMAXPERC']</code>. I want to plot dots in the map with scalecolor based on the TMAXPERC values in their corresponding latitude and longitude. How can i do this? Thanks in advance.</p>
|
<p>You need to specify some options properly:</p>
<pre><code>plt.scatter(
    x=df["LONGITUDE"],
    y=df["LATITUDE"],
    c=df["TMAXPERC"], cmap='viridis',  # these are the changes
    s=4,
    alpha=1,
    transform=ccrs.PlateCarree()
)
</code></pre>
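<p>If you also want a legend for the color scale, adding a colorbar after the scatter should work:</p>
<pre><code>plt.colorbar(label='TMAXPERC')
</code></pre>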
|
pandas|matplotlib|cartopy
| 3
|
1,783
| 66,747,190
|
Count number of unique names per ID and write result in new pandas column
|
<p>I have a pandas dataframe:</p>
<pre><code>df = pd.DataFrame(
{
"id": ["K0", "K0", "K0", "K1", "K1", "K2", "K2","K2"],
"name": ["Peter", "Peter", "Max", "Jim", "Lucy", "Lucy", "Lucy", "Pam"]
}
)
</code></pre>
<p><a href="https://i.stack.imgur.com/hVu7g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hVu7g.png" alt="enter image description here" /></a></p>
<p>I want to get the frequency of each name for each ID. First I tried to groupby <code>ID</code>. Next I tried to count the number of unique names and write the result in a new column. Did't realy work in my case. The final df should look like this:</p>
<p><a href="https://i.stack.imgur.com/ybUw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ybUw9.png" alt="enter image description here" /></a></p>
|
<p>Try with <code>transform</code></p>
<pre><code>df['freq_name'] = df.groupby(['id','name'])['id'].transform('count')
df
Out[401]:
id name freq_name
0 K0 Peter 2
1 K0 Peter 2
2 K0 Max 1
3 K1 Jim 1
4 K1 Lucy 1
5 K2 Lucy 2
6 K2 Lucy 2
7 K2 Pam 1
</code></pre>
|
pandas|group-by|pandas-groupby|aggregate
| 1
|
1,784
| 66,368,668
|
Crop Boxes in Tensorflow Object Detection and display it as jpg image
|
<p>I'm using the tensorflow objection detection to detect specific data on passports like full name and other things. I've already trained the data and everything is working fine. It perfectly identifies data surrounding it with a bounding box. However, now I just want to crop the detected boxes.</p>
<p>Code:</p>
<pre><code>import os
import cv2
import numpy as np
import tensorflow as tf
import sys
sys.path.append("..")
from object_detection.utils import label_map_util
from object_detection.utils import visualization_utils as vis_util
MODEL_NAME = 'inference_graph'
CWD_PATH = os.getcwd()
PATH_TO_CKPT = 'C:/Users/UI UX/Desktop/Captcha 3/CAPTCHA_frozen_inference_graph.pb'
PATH_TO_LABELS = 'C:/Users/UI UX/Desktop/Captcha 3/CAPTCHA_labelmap.pbtxt'
PATH_TO_IMAGE = 'C:/Users/UI UX/Desktop/(47).jpg'
NUM_CLASSES = 11
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
detection_graph = tf.Graph()
with detection_graph.as_default():
    od_graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
        serialized_graph = fid.read()
        od_graph_def.ParseFromString(serialized_graph)
        tf.import_graph_def(od_graph_def, name='')

sess = tf.Session(graph=detection_graph)
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
image = cv2.imread(PATH_TO_IMAGE)
image_np = cv2.resize(image, (0, 0), fx=2.0, fy=2.0)
image_expanded = np.expand_dims(image_np, axis=0)
(boxes, scores, classes, num) = sess.run(
    [detection_boxes, detection_scores, detection_classes, num_detections],
    feed_dict={image_tensor: image_expanded})
vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    np.squeeze(boxes),
    np.squeeze(classes).astype(np.int32),
    np.squeeze(scores),
    category_index,
    use_normalized_coordinates=True,
    line_thickness=2,
    min_score_thresh=0.60)
width, height = image_np.shape[:2]
for i, box in enumerate(np.squeeze(boxes)):
    if np.squeeze(scores)[i] > 0.80:
        (ymin, xmin, ymax, xmax) = (box[0]*height, box[1]*width, box[2]*height, box[3]*width)
        cropped_image = tf.image.crop_to_bounding_box(image_np, ymin, xmin, ymax - ymin, xmax - xmin)
        cv2.imshow('cropped_image', image_np)
        cv2.waitKey(0)
cv2.imshow('Object detector', image_np)
cv2.waitKey(0)
cv2.destroyAllWindows()
</code></pre>
<p>But I get this error:</p>
<pre><code>Traceback (most recent call last):
  File "C:/Users/UI UX/PycharmProjects/pythonProject1/vedio_object_detection.py", line 71, in <module>
    cropped_image = tf.image.crop_to_bounding_box(image_np, ymin, xmin, ymax - ymin, xmax - xmin)
  File "C:\ProgramData\Anaconda2\envs\tf_cpu\lib\site-packages\tensorflow_core\python\ops\image_ops_impl.py", line 875, in crop_to_bounding_box
    array_ops.stack([-1, target_height, target_width, -1]))
  File "C:\ProgramData\Anaconda2\envs\tf_cpu\lib\site-packages\tensorflow_core\python\ops\array_ops.py", line 855, in slice
    return gen_array_ops._slice(input_, begin, size, name=name)
  File "C:\ProgramData\Anaconda2\envs\tf_cpu\lib\site-packages\tensorflow_core\python\ops\gen_array_ops.py", line 9222, in _slice
    "Slice", input=input, begin=begin, size=size, name=name)
  File "C:\ProgramData\Anaconda2\envs\tf_cpu\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 632, in _apply_op_helper
    param_name=input_name)
  File "C:\ProgramData\Anaconda2\envs\tf_cpu\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 61, in _SatisfiesTypeConstraint
    ", ".join(dtypes.as_dtype(x).name for x in allowed_list)))
TypeError: Value passed to parameter 'begin' has DataType float32 not in list of allowed values: int32, int64
</code></pre>
<p>Any Kind of help?</p>
|
<p>I found the solution by adding this piece of code right after this line:</p>
<pre><code>(boxes, scores, classes, num) = sess.run([detection_boxes, detection_scores, detection_classes, num_detections],feed_dict={image_tensor: image_expanded})
</code></pre>
<p>I add this:</p>
<pre><code>(frame_height, frame_width) = image.shape[:2]
for i in range(len(np.squeeze(scores))):
    # print(np.squeeze(boxes)[i])
    ymin = int((np.squeeze(boxes)[i][0] * frame_height))
    xmin = int((np.squeeze(boxes)[i][1] * frame_width))
    ymax = int((np.squeeze(boxes)[i][2] * frame_height))
    xmax = int((np.squeeze(boxes)[i][3] * frame_width))
    cropped_img = image[ymin:ymax, xmin:xmax]  # slice from min to max so the crop is not empty
    cv2.imwrite(f'/your/path/img_{i}.png', cropped_img)
</code></pre>
|
python|tensorflow|object-detection
| 0
|
1,785
| 66,644,162
|
Extract data using indicator matrix (binary matrix)
|
<p>How can one use a binary matrix in order to get the specific positions in a dataset. So, for example if we took a matrix with categories 1 and 2 looked like this</p>
<pre><code>1 2 0
0 2 1
0 0 2
</code></pre>
<p>and the original data (<code>A</code>) looked like this:</p>
<pre><code>a b c
e f g
i j k
</code></pre>
<p>To give me a dataset 1 (using <code>B1</code>):</p>
<pre><code>a 0 0
0 e 0
0 0 0
</code></pre>
<p>and dataset 2 (using <code>B2</code>):</p>
<pre><code>0 b 0
0 c 0
0 0 k
</code></pre>
<p>I have put a template here to try any solution using Sympy or NumPy to test any possible answers:</p>
<pre><code>import sympy as sym
import numpy as np
from sympy import *
init_printing()
a , b, c, d, e, f, g, h, i = sym.symbols("a b c d e f g h i")
B1 = sym.Matrix([[1,0,0],[0,0,1],[0,0,0]]) # To get dataset 1
B2 = sym.Matrix([[0,1,0],[0,1,0],[0,0,1]]) # To get dataset 2
A = sym.Matrix([[a,b,c],[d,e,f],[g,h,i]])
B1
1 0 0
0 1 0
0 0 0
B2
0 1 0
0 1 0
0 0 1
</code></pre>
|
<p>Rather than using this "indicator matrix" one could use a nested for loop with a nested if statement. This would let you get to the final answer. It would be inefficient but would still get you the desired result.</p>
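<p>A minimal sketch of that nested-loop approach with plain NumPy arrays (the category matrix <code>C</code> and data matrix <code>A</code> follow the question's example; <code>cat</code> selects which dataset to extract):</p>
<pre><code>import numpy as np

C = np.array([[1, 2, 0], [0, 2, 1], [0, 0, 2]])
A = np.array([['a', 'b', 'c'], ['e', 'f', 'g'], ['i', 'j', 'k']], dtype=object)

def extract(cat):
    out = np.zeros(A.shape, dtype=object)  # zeros where a cell is not selected
    for r in range(A.shape[0]):
        for c in range(A.shape[1]):
            if C[r, c] == cat:
                out[r, c] = A[r, c]
    return out

print(extract(1))
print(extract(2))
</code></pre>
<p>The same result can be had without loops via <code>np.where(C == cat, A, 0)</code>, which is usually the more efficient choice.</p>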
|
python|arrays|numpy|matrix|sympy
| 1
|
1,786
| 66,642,466
|
Pandas - Stacked bar chart with multiple boolean columns
|
<p><a href="https://i.stack.imgur.com/PKPVm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PKPVm.png" alt="enter image description here" /></a></p>
<p>I have data like this. I would like to make a stacked bar chart where the x-axis is the ball color and each stack in the bar is the percentage of balls with that color that have that attribute (note each column in the bar chart will not sum to 100). I'm trying something like this</p>
<pre><code>data = {'Ball Color' : ['Red', 'Blue', 'Blue', 'Red', 'Red', 'Red'],
'Heavy?' : [True, True, False, True, False, True],
'Shiny?' : [True, True, False, True, True, False]}
code_samp = pd.DataFrame(data)
code_samp.groupby('Ball Color')[['Heavy?', 'Shiny?']].value_counts().plot.bar()
</code></pre>
<p>But value_counts is only supported for series. Any ideas? Thanks in advance</p>
|
<p>Use:</p>
<pre><code>code_samp.groupby('Ball Color').sum().plot.bar()
</code></pre>
<p>or</p>
<pre><code>code_samp.groupby('Ball Color').mean().plot.bar()
</code></pre>
|
python|pandas|matplotlib
| 0
|
1,787
| 66,354,845
|
Create a new dataframe using the values of a categorical variable in a dataframe?
|
<p>Tried to retrieve Cost, if s['O_Status'] values is Closed, using below code.</p>
<blockquote>
<p>Got this error, ValueError: The truth value of a Series is ambiguous.
Use a.empty, a.bool(), a.item(), a.any() or a.all()</p>
</blockquote>
<pre><code>FClose = [i for i in s['Cost'] if s['O_Status'] == 'Closed']
</code></pre>
<pre><code>Cost     Year   O_Status ----> data frame column name
-------------------------------
6100000 2001 Closed
100004 2009 Operating
2004000 2015 Closed
144007 1999 Operating
</code></pre>
<p>Also is it possible to make the Categorical variable values Closed and Operating in to a new dataframe in the below format and store relative cost values,</p>
<pre><code>Closed Operating ------> data frame column name
--------------------------
6100000 100004
2004000 144007
</code></pre>
|
<pre><code>import io
df = pd.read_csv(io.StringIO('''Cost Year O_Status
6100000 2001 Closed
100004 2009 Operating
2004000 2015 Closed
144007 1999 Operating'''), sep='\s+', engine='python')
FClose = df[df['O_Status'] == 'Closed']['Cost'].tolist()
print(FClose)
FOp = df[df['O_Status'] == 'Operating']['Cost'].tolist()
print(FOp)
dfnew = pd.concat([pd.DataFrame(FClose, columns=['Closed']), pd.DataFrame(FOp, columns=['Operating'])], axis=1)
</code></pre>
<p>Output</p>
<pre><code> Closed Operating
0 6100000 100004
1 2004000 144007
</code></pre>
|
pandas|dataframe
| 0
|
1,788
| 66,446,450
|
Is it possible for me to run two python processes at once, one using the coral tpu, and another using only the cpu?
|
<p>For example if I wanted to run a lane detection algorithm, and an object detection algorithm at simulataneously.</p>
|
<p>Running two or more simultaneous Python processes is possible, but only one model can be executed on a given TPU at once. When multiple models are too slow to run on a single Edge TPU, they need to be executed across multiple Edge TPUs, so plugging the USB Accelerator into the Dev Board is a good test environment for that method.</p>
<p>One model can be loaded to the Dev Board and the other one to the USB Accelerator. Please check this out: <a href="https://coral.ai/docs/edgetpu/multiple-edgetpu/" rel="nofollow noreferrer">https://coral.ai/docs/edgetpu/multiple-edgetpu/</a></p>
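<p>A sketch of assigning the two models to different devices, assuming the pycoral API described in the linked docs (device strings and model names here are placeholders):</p>
<pre><code>from pycoral.utils.edgetpu import make_interpreter

# one interpreter per Edge TPU
interp_a = make_interpreter('model_a_edgetpu.tflite', device=':0')
interp_b = make_interpreter('model_b_edgetpu.tflite', device=':1')
interp_a.allocate_tensors()
interp_b.allocate_tensors()
</code></pre>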
|
python|tensorflow|raspberry-pi|tpu|google-coral
| 0
|
1,789
| 57,528,376
|
Extract future timeseries data and join on past timeseries that are 12 hours apart?
|
<p>I am in a data science course and my instructor isn't very strong in python. </p>
<p>Use a shift function to pull prices by 12 hours (aligning prices 12 hours in the future with a row's current prices). Then create a new column populated with this info. </p>
<p>So I should have my index, column 1, and newcolumn</p>
<p>I have tried a few different ways. I have tried extracting the 12 hours into a list and merging, I have tried using .slice, and I have tried creating a function.</p>
<p><a href="https://imgur.com/a/AYaM1Ye" rel="nofollow noreferrer">https://imgur.com/a/AYaM1Ye</a></p>
|
<p>This seemed to work </p>
<pre><code>sliced = currency[currency.index.min():currency.index.max()]

# Move the datetime values forward by 12 hours
shifted = sliced.shift(periods=1, freq='12H')
</code></pre>
|
python|pandas|slice|data-science
| 1
|
1,790
| 24,147,029
|
Parsing a Multi-Index Excel File in Pandas
|
<p>I have a time series excel file with a tri-level column MultiIndex that I would like to successfully parse if possible. There are some results on how to do this for an index on stack overflow but not the columns and the <code>parse</code> function has a <code>header</code> that does not seem to take a list of rows.</p>
<p>The ExcelFile looks like is like the following:</p>
<ul>
<li>Column A is all the time series dates starting on A4</li>
<li>Column B has top_level1 (B1) mid_level1 (B2) low_level1 (B3) data (B4-B100+)</li>
<li>Column C has null (C1) null (C2) low_level2 (C3) data (C4-C100+)</li>
<li>Column D has null (D1) mid_level2 (D2) low_level1 (D3) data (D4-D100+)</li>
<li>Column E has null (E1) null (E2) low_level2 (E3) data (E4-E100+)</li>
<li>...</li>
</ul>
<p>So there are two <code>low_level</code> values many <code>mid_level</code> values and a few <code>top_level</code> values but the trick is the top and mid level values are null and are assumed to be the values to the left. So, for instance all the columns above would have top_level1 as the top multi-index value.</p>
<p>My best idea so far is to use <code>transpose</code>, but the it fills <code>Unnamed: #</code> everywhere and doesn't seem to work. In Pandas 0.13 <code>read_csv</code> seems to have a <code>header</code> parameter that can take a list, but this doesn't seem to work with <code>parse</code>.</p>
|
<p>You can <code>fillna</code> the null values. I don't have your file, but you can test </p>
<pre><code>#Headers as rows for now
df = pd.read_excel(xls_file,0, header=None, index_col=0)
#fill in Null values in "Headers"
df = df.fillna(method='ffill', axis=1)
#create multiindex column names
df.columns=pd.MultiIndex.from_arrays(df[:3].values, names=['top','mid','low'])
#Just name of index
df.index.name='Date'
#remove 3 rows which are already used as column names
df = df[pd.notnull(df.index)]
</code></pre>
|
python|excel|parsing|pandas|time-series
| 7
|
1,791
| 43,876,776
|
Can't edit dataframe data through iloc in pandas
|
<p>So something really weird is happening when I try editing :</p>
<pre><code>In [119]: print(GDP.iloc[1][0])
Out [119]: Andorra
</code></pre>
<p>When I try to edit it with <code>.iloc</code> and query it again this happens:</p>
<pre><code>In [120]: GDP.iloc[1][0]="Cats"
print(GDP.iloc[1][0])
Out [120]: Andorra
</code></pre>
<p>I remember reading that <code>.iloc</code> may call a copy or an image depending on the <code>numpy</code> type. Anyway to fix this or is there other way I should be editing my data? Thanks.</p>
|
<p>It is best to avoid chaining assignments in pandas, see this <a href="https://stackoverflow.com/questions/21463589/pandas-chained-assignments">SO post</a>
which refers to this Pandas doc about <a href="http://pandas-docs.github.io/pandas-docs-travis/indexing.html#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow noreferrer">chaining assignments</a></p>
<p>Whenever, you have "][" in pandas it is generally bad and should be rewritten.</p>
<p>It is best written as Divakar suggests:</p>
<pre><code>GDP.iloc[1,0]="Cats"
</code></pre>
|
python|pandas|numpy|dataframe|copy
| 2
|
1,792
| 72,851,617
|
how to select particular rows and columns without hardcoding in pandas python
|
<p>I want to select name and score from index 1 to 3 and store it to a dataframe. Can we extract that without hardcoding by giving a start word and end word.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
name_dict = {
'Name': ['a','b','c','d', 'e'],
'Score': ['0.90(subject to approval)',80,95,20,10]
}
df = pd.DataFrame(name_dict)
print (df)
df.set_index('Name').loc['a', 'Score']
</code></pre>
<p><a href="https://i.stack.imgur.com/W4lMM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W4lMM.png" alt="enter image description here" /></a></p>
|
<p>You can use <code>iloc</code>:</p>
<pre><code>>>> df.iloc[1:4]
Name Score
1 b 80
2 c 95
3 d 20
</code></pre>
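<p>If you really do want to select by a start and an end name rather than by positions, label-based slicing on the index should also work (assuming the names are unique and sorted):</p>
<pre><code>>>> df.set_index('Name').loc['b':'d', 'Score']
Name
b    80
c    95
d    20
Name: Score, dtype: object
</code></pre>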
|
python|pandas
| 0
|
1,793
| 72,965,898
|
Python DataFrame - merging many urls into one cell
|
<p>I'm trying to merge many urls into one cell and save it as excel file, each row has many urls. This is the code of what I have tried</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
urls1 = ["https://url1.com/","https://url2.com/"]
urls2 = ["https://url3.com/","https://url4.com/"]
df1 = pd.DataFrame([["First url", urls1],["Second url", urls2]], columns=['column 1', 'column 2'])
df1.to_excel('excel.xlsx', index=False, header=True)
</code></pre>
<p>The expected output</p>
<p><img src="https://i.stack.imgur.com/JLEQT.png" alt="Excel file" /></p>
|
<p>Try this:</p>
<pre><code>import pandas as pd
urls1 = ["https://url1.com/","https://url2.com/"]
urls2 = ["https://url3.com/","https://url4.com/"]
df1 = pd.DataFrame([urls1, urls2], columns=['column 1', 'column 2'], index=["First url", "Second url"])
df1.to_excel('excel.xlsx', index=True, header=True)
</code></pre>
|
python|excel|pandas|dataframe
| 0
|
1,794
| 73,143,171
|
how to calculate weighted average or sum by groupby from a list?
|
<p>I have a data frame and a list of weight as follows</p>
<pre><code>import pandas as pd
import numpy as np
data = [
['A',1,2,3,4],
['A',5,6,7,8],
['A',9,10,11,12],
['B',13,14,15,16],
['B',17,18,19,20],
['B',21,22,23,24],
['B',25,26,27,28],
['C',29,30,31,32],
['C',33,34,35,36],
['C',37,38,39,40],
]
df = pd.DataFrame(data, columns=['Name', 'num1', 'num2', 'num3', 'num4'])
df
</code></pre>
<p>Now I want to calculate weighted average with the help of list</p>
<pre><code>weights = [10, 20, 30, 40]  # the weights sum to 100
for cols in df.columns:
    df[cols] = df.groupby(['Name'])[cols].transform(lambda x: ...)  # not sure what goes here
</code></pre>
<p>But I am not sure how to calculate the weighted average from each Name.</p>
<p>For A there are only three rows, so num1 should be calculated as <code>(1*30+5*30+9*40)/100</code>, if I am not wrong, and the same for columns num2, num3, and num4.</p>
<p>For B it has 4 rows, so, it should be calculated as <code>(13*10+17*20+21*30+25*40)/100</code>.</p>
<p>Can somebody help me with this?</p>
|
<p>IIUC, you can use a <code>groupby.agg</code>:</p>
<pre><code>df.groupby('Name').agg(lambda g: sum(g*weights[:len(g)])/sum(weights[:len(g)]))
</code></pre>
<p>output:</p>
<pre><code> num1 num2 num3 num4
Name
A 6.333333 7.333333 8.333333 9.333333
B 21.000000 22.000000 23.000000 24.000000
C 34.333333 35.333333 36.333333 37.333333
</code></pre>
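<p>Equivalently, <code>numpy.average</code> takes a <code>weights</code> argument, which keeps the normalization by the used weights implicit. A sketch, assuming <code>numpy</code> is imported as <code>np</code> as in the question:</p>
<pre><code>df.groupby('Name').agg(lambda g: np.average(g, weights=weights[:len(g)]))
</code></pre>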
|
python|pandas|dataframe|group-by|weighted-average
| 2
|
1,795
| 72,972,473
|
Extract full year from Quarter value using Python
|
<p>I have a dataset with a column from which I would like to extract the full year.</p>
<pre><code>**Data**
ID Qtr
AA Q123
AA Q123
BB Q226
BB Q327
**Desired**
ID Qtr Year
AA Q123 2023
AA Q123 2023
BB Q226 2026
BB Q327 2027
**Doing**
df1 = datetime.datetime.strptime(df, '%YY').date()
</code></pre>
<p>However, the quarter is not an actual date; I am still researching how to perform this. Any suggestion is helpful.</p>
|
<p><a href="https://stackoverflow.com/users/9177877/it-is-chris">@It_is_Chris</a> talked through the simplest way, which is just string slicing. The following method actually creates datetimes. This is slower, but more powerful if you need to do anything more complex with your dates.</p>
<p><code>> df</code></p>
<pre><code> ID Qtr
0 AA Q123
1 AA Q123
2 BB Q226
3 BB Q327
</code></pre>
<h5>1. Extract the two-digit year</h5>
<p><code> df["Qtr"].str.slice(2)</code></p>
<pre><code>0 23
1 23
2 26
3 27
</code></pre>
<h5>2. Convert into datetimes</h5>
<p><code>> pd.to_datetime(df["Qtr"].str.slice(2), format="%y")</code></p>
<pre><code>0 2023-01-01
1 2023-01-01
2 2026-01-01
3 2027-01-01
</code></pre>
<h5>3. Extract the year</h5>
<p><code>> pd.to_datetime(df["Qtr"].str.slice(2), format="%y").dt.year</code></p>
<pre><code>0 2023
1 2023
2 2026
3 2027
</code></pre>
<h5>4. Reassign</h5>
<pre><code>> df["Year"] = pd.to_datetime(df["Qtr"].str.slice(2), format="%y").dt.year
> df
</code></pre>
<pre><code> ID Qtr Year
0 AA Q123 2023
1 AA Q123 2023
2 BB Q226 2026
3 BB Q327 2027
</code></pre>
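<p>For completeness, a minimal sketch of the pure string-slicing route mentioned at the top, which skips datetimes entirely (it assumes the year is always the last two characters and always falls in the 2000s):</p>
<pre><code>df["Year"] = ("20" + df["Qtr"].str[-2:]).astype(int)
</code></pre>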
|
python|pandas|numpy
| 1
|
1,796
| 72,844,557
|
How to solve memory error? Should I increase memory limit?
|
<p>I was loading this but it raises an error.</p>
<pre><code>import pandas as pd
import numpy as np
userMovie = np.load('userMovieMatrixAction.npy')
numberUsers, numberGenreMovies = userMovie.shape
genreFilename = 'Action.csv'
genre = pd.read_csv(genreFilename)
</code></pre>
<pre><code>MemoryError: Unable to allocate 3.63 GiB for an array with shape (487495360,) and data type float64
</code></pre>
<p>What can I do? It's driving me crazy.</p>
|
<p>If the program runs out of memory, it looks like an issue with the <a href="https://www.etalabs.net/overcommit.html" rel="nofollow noreferrer">overcommit handling</a> of your operating system. If you are on Linux, you can run the following command to enable "always overcommit" mode, which can help you load the 3.63 GiB npy file with <code>numpy</code>:</p>
<pre><code>$ echo 1 > /proc/sys/vm/overcommit_memory
</code></pre>
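<p>If changing system settings is not an option, <code>numpy</code> can also memory-map the file instead of reading it fully into RAM, so only the slices you actually touch get loaded. A sketch based on the question's filename:</p>
<pre><code>import numpy as np

# mmap_mode='r' maps the file read-only; pages are loaded lazily on access
userMovie = np.load('userMovieMatrixAction.npy', mmap_mode='r')
numberUsers, numberGenreMovies = userMovie.shape
</code></pre>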
|
python|pandas|numpy|memory|out-of-memory
| 2
|
1,797
| 10,707,671
|
How to call a java function from python/numpy?
|
<p>It is clear to me how to extend Python with C++, but what if I want to write a function in Java to be used with numpy? </p>
<p>Here is a simple scenario: I want to compute the average of a numpy array using a Java class. How do I pass the numpy vector to the Java class and gather the result?</p>
<p>Thanks for any help!</p>
|
<p>I spent some time on my own question and would like to share my answer, as I feel there is not much information on this topic on <em>stackoverflow</em>. I also think Java will become more relevant in scientific computing (e.g. see the WEKA package for data mining) because of its improving performance and other good software-development features.</p>
<hr>
<p><em>In general, it turns out that using the right tools it is much easier to extend Python with Java than with C/C++!</em></p>
<hr>
<h1>Overview and assessment of tools to call Java from Python</h1>
<ul>
<li><p><a href="http://pypi.python.org/pypi/JCC" rel="noreferrer">http://pypi.python.org/pypi/JCC</a>: because of the lack of proper
documentation, this tool is effectively unusable.</p></li>
<li><p>Py4J: requires starting the Java process before using Python. As
remarked by others, this is a possible point of failure. Moreover, not many usage examples are documented.</p></li>
<li><p><a href="http://jpype.sourceforge.net/" rel="noreferrer">JPype</a>: although development seems to be dead, it works well and there are
many examples of it on the web (e.g. see <a href="http://kogs-www.informatik.uni-hamburg.de/~meine/weka-python/" rel="noreferrer">http://kogs-www.informatik.uni-hamburg.de/~meine/weka-python/</a> for using data-mining libraries written in Java). Therefore <em>I decided to focus
on this tool</em>.</p></li>
</ul>
<h1>Installing JPype on Fedora 16</h1>
<p>I am using Fedora 16; since there are some issues when installing JPype on Linux, I describe my approach.
Download <a href="http://jpype.sourceforge.net/" rel="noreferrer">JPype</a>, then modify the <em>setup.py</em> script by providing the JDK path in line 48:</p>
<pre><code>self.javaHome = '/usr/java/default'
</code></pre>
<p>then run:</p>
<pre><code>sudo python setup.py install
</code></pre>
<p>After a successful installation, check this file:</p>
<p><em>/usr/lib64/python2.7/site-packages/jpype/_linux.py</em></p>
<p>and remove or rename the method <em>getDefaultJVMPath()</em> into <em>getDefaultJVMPath_old()</em>, then add the following method:</p>
<pre><code>def getDefaultJVMPath():
return "/usr/java/default/jre/lib/amd64/server/libjvm.so"
</code></pre>
<p><strong>Alternative approach</strong>: do not make any change in the above file <em>_linux.py</em>, but never use the method <em>getDefaultJVMPath()</em> (or methods which call it). Instead of using <em>getDefaultJVMPath()</em>, provide the path to the JVM directly. Note that there are several paths; for example, my system also has the following paths, referring to different versions of the JVM (it is not clear to me whether the client or the server JVM is better suited):</p>
<ul>
<li>/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre/lib/x86_64/client/libjvm.so</li>
<li>/usr/lib/jvm/java-1.5.0-gcj-1.5.0.0/jre/lib/x86_64/server/libjvm.so</li>
<li>/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre/lib/amd64/server/libjvm.so</li>
</ul>
<p>Finally, add the following line to <em>~/.bashrc</em> (or run it each time before opening a python interpreter):</p>
<pre><code>export JAVA_HOME='/usr/java/default'
</code></pre>
<p>(The above directory is in reality just a symbolic link to my last version of JDK, which is located at <em>/usr/java/jdk1.7.0_04</em>).</p>
<p>Note that all the tests in the directory where JPype has been downloaded, i.e. <em>JPype-0.5.4.2/test/testsuite.py</em>, will fail (so do not worry about them).</p>
<p>To see if it works, test this script in python:</p>
<pre><code>import jpype
jvmPath = jpype.getDefaultJVMPath()
jpype.startJVM(jvmPath)
# print a random text using a Java class
jpype.java.lang.System.out.println ('Berlusconi likes women')
jpype.shutdownJVM()
</code></pre>
<h1>Calling Java classes from Python, also using Numpy</h1>
<p>Let's start implementing a Java class containing some functions which I want to apply to <em>numpy arrays</em>. Since there is no concept of state, I use static functions so that I do not need to create any Java object (creating Java objects would not change anything). </p>
<pre><code>/**
* Cookbook to pass numpy arrays to Java via Jpype
* @author Mannaggia
*/
package test.java;
public class Average2 {
public static double compute_average(double[] the_array){
// compute the average
double result=0;
int i;
for (i=0;i<the_array.length;i++){
result=result+the_array[i];
}
return result/the_array.length;
}
// multiplies array by a scalar
public static double[] multiply(double[] the_array, double factor) {
int i;
double[] the_result= new double[the_array.length];
for (i=0;i<the_array.length;i++) {
the_result[i]=the_array[i]*factor;
}
return the_result;
}
/**
* Matrix multiplication.
*/
public static double[][] mult_mat(double[][] mat1, double[][] mat2){
// find sizes
int n1=mat1.length;
int n2=mat2.length;
int m1=mat1[0].length;
int m2=mat2[0].length;
// check that we can multiply
if (n2 !=m1) {
//System.err.println("Error: The number of columns of the first argument must equal the number of rows of the second");
//return null;
throw new IllegalArgumentException("Error: The number of columns of the first argument must equal the number of rows of the second");
}
// if we can, then multiply
double[][] the_results=new double[n1][m2];
int i,j,k;
for (i=0;i<n1;i++){
for (j=0;j<m2;j++){
// initialize
the_results[i][j]=0;
for (k=0;k<m1;k++) {
the_results[i][j]=the_results[i][j]+mat1[i][k]*mat2[k][j];
}
}
}
return the_results;
}
/**
* @param args
*/
public static void main(String[] args) {
// test case
double an_array[]={1.0, 2.0,3.0,4.0};
double res=Average2.compute_average(an_array);
System.out.println("Average is =" + res);
}
}
</code></pre>
<p>The name of the class is a bit misleading, as we not only compute the average of a numpy vector (method <em>compute_average</em>), but also multiply a numpy vector by a scalar (method <em>multiply</em>) and, finally, perform matrix multiplication (method <em>mult_mat</em>).</p>
<p>After compiling the above Java class we can now run the following Python script:</p>
<pre><code>import numpy as np
import jpype
jvmPath = jpype.getDefaultJVMPath()
# we need to specify the classpath used by the JVM
classpath='/home/mannaggia/workspace/TestJava/bin'
jpype.startJVM(jvmPath,'-Djava.class.path=%s' % classpath)
# numpy array
the_array=np.array([1.1, 2.3, 4, 6,7])
# build a JArray; note that we need to specify the Java double type using the jpype.JDouble wrapper
the_jarray2=jpype.JArray(jpype.JDouble, the_array.ndim)(the_array.tolist())
testPkg=jpype.JPackage('test').java
Class_average2=testPkg.Average2
res2=Class_average2.compute_average(the_jarray2)
np.abs(np.average(the_array)-res2) # ok perfect match!
# now try to multiply an array
res3=Class_average2.multiply(the_jarray2,jpype.JDouble(3))
# convert to numpy array
res4=np.array(res3) #ok
# matrix multiplication
the_mat1=np.array([[1,2,3], [4,5,6], [7,8,9]],dtype=float)
#the_mat2=np.array([[1,0,0], [0,1,0], [0,0,1]],dtype=float)
the_mat2=np.array([[1], [1], [1]],dtype=float)
the_mat3=np.array([[1, 2, 3]],dtype=float)
the_jmat1=jpype.JArray(jpype.JDouble, the_mat1.ndim)(the_mat1.tolist())
the_jmat2=jpype.JArray(jpype.JDouble, the_mat2.ndim)(the_mat2.tolist())
res5=Class_average2.mult_mat(the_jmat1,the_jmat2)
res6=np.array(res5) #ok
# other test
the_jmat3=jpype.JArray(jpype.JDouble, the_mat3.ndim)(the_mat3.tolist())
res7=Class_average2.mult_mat(the_jmat3,the_jmat2)
res8=np.array(res7)
res9=Class_average2.mult_mat(the_jmat2,the_jmat3)
res10=np.array(res9)
# test error due to invalid matrix multiplication
the_mat4=np.array([[1], [2]],dtype=float)
the_jmat4=jpype.JArray(jpype.JDouble, the_mat4.ndim)(the_mat4.tolist())
res11=Class_average2.mult_mat(the_jmat1,the_jmat4)
jpype.java.lang.System.out.println ('Goodbye!')
jpype.shutdownJVM()
</code></pre>
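<p><strong>Note:</strong> more recent JPype releases (0.7 and later) simplify both JVM startup and the numpy interop considerably. A minimal sketch, assuming a modern JPype install and the same compiled class as above:</p>
<pre><code>import jpype
import numpy as np

# the classpath can be passed directly to startJVM in JPype >= 0.7
jpype.startJVM(classpath=['/home/mannaggia/workspace/TestJava/bin'])

Average2 = jpype.JPackage('test').java.Average2
arr = np.array([1.1, 2.3, 4.0, 6.0, 7.0])
# modern JPype converts numpy arrays to Java primitive arrays implicitly,
# so the explicit JArray/tolist() conversion above is no longer needed
print(Average2.compute_average(arr))

jpype.shutdownJVM()
</code></pre>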
|
java|numpy
| 13
|
1,798
| 70,689,385
|
How to fit using float64?
|
<p>I've set all the layers in my model to use float64, yet when fitting the loss appears to still be coming out as float32 (based on the rounding I see). I'd like to ensure that all processing under the hood is in double. How can I guarantee that?</p>
|
<p>Instead of manually setting the dtype for each layer, you can define a global policy with:</p>
<pre><code>policy = tf.keras.mixed_precision.Policy("float64")
tf.keras.mixed_precision.set_global_policy(policy)
</code></pre>
<p>or alternatively set the backend default float:</p>
<pre><code>tf.keras.backend.set_floatx("float64")
</code></pre>
<p>That way, all layers automatically use float64.</p>
<p>For your loss calculation: if you initialize values yourself, initialize them with <code>tf.constant(0.0, dtype=tf.float64)</code> instead of a plain <code>0.0</code> to ensure float64.</p>
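<p>A minimal sketch to verify the effect (assuming TensorFlow 2.x):</p>
<pre><code>import tensorflow as tf

tf.keras.backend.set_floatx("float64")

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse")
print(model.layers[0].dtype)          # float64
print(model.layers[0].kernel.dtype)   # &lt;dtype: 'float64'&gt;
</code></pre>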
|
tensorflow|tensorflow2.0
| 1
|
1,799
| 42,820,063
|
divide by previous element by group in Pandas
|
<p>My question is the same as the one I posted for R (<a href="https://stackoverflow.com/questions/42726018/divide-by-previous-element-by-group">divide by previous element by group</a>), except I'd now like to do the same thing in Pandas.</p>
<p>I have a data frame as created below: </p>
<pre><code>origdate = pd.Series(np.repeat(['2011-01-01', '2011-02-01', '2011-03-01'],[5, 4, 3]))
date = pd.Series(['2011-01-01', '2011-02-01', '2011-03-01', '2011-04-01', '2011-05-01',
'2011-02-01', '2011-03-01', '2011-04-01', '2011-05-01', '2011-03-01', '2011-04-01', '2011-05-01'])
bal = pd.Series(range(20,32))
A = pd.DataFrame({'origdate': origdate, 'date': date, 'bal': bal})
A
bal date origdate
0 20 2011-01-01 2011-01-01
1 21 2011-02-01 2011-01-01
2 22 2011-03-01 2011-01-01
3 23 2011-04-01 2011-01-01
4 24 2011-05-01 2011-01-01
5 25 2011-02-01 2011-02-01
6 26 2011-03-01 2011-02-01
7 27 2011-04-01 2011-02-01
8 28 2011-05-01 2011-02-01
9 29 2011-03-01 2011-03-01
10 30 2011-04-01 2011-03-01
11 31 2011-05-01 2011-03-01
</code></pre>
<p>What I want to do is divide <code>bal</code> by the previous <code>bal</code> for each increment of <code>date</code>, but not when <code>origdate</code> changes. So what I want to obtain is shown below in the column <code>dbal</code>:</p>
<pre><code> origdate date bal dbal
1 2011-01-01 2011-01-01 20 NA
2 2011-01-01 2011-02-01 21 1.050000
3 2011-01-01 2011-03-01 22 1.047619
4 2011-01-01 2011-04-01 23 1.045455
5 2011-01-01 2011-05-01 24 1.043478
6 2011-02-01 2011-02-01 25 NA
7 2011-02-01 2011-03-01 26 1.040000
8 2011-02-01 2011-04-01 27 1.038462
9 2011-02-01 2011-05-01 28 1.037037
10 2011-03-01 2011-03-01 29 NA
11 2011-03-01 2011-04-01 30 1.034483
12 2011-03-01 2011-05-01 31 1.033333
</code></pre>
|
<p>Use <code>groupby</code> + <code>pct_change</code> + <code>add(1)</code></p>
<pre><code>A.assign(dbal=A.groupby('origdate').bal.pct_change().add(1))
</code></pre>
<p>Or <code>groupby</code> + <code>shift</code> + <code>div</code></p>
<pre><code>A.assign(dbal=A.groupby('origdate').bal.apply(lambda x: x.div(x.shift())))
</code></pre>
<hr>
<p>Both yield</p>
<pre><code> bal date origdate dbal
0 20 2011-01-01 2011-01-01 NaN
1 21 2011-02-01 2011-01-01 1.050000
2 22 2011-03-01 2011-01-01 1.047619
3 23 2011-04-01 2011-01-01 1.045455
4 24 2011-05-01 2011-01-01 1.043478
5 25 2011-02-01 2011-02-01 NaN
6 26 2011-03-01 2011-02-01 1.040000
7 27 2011-04-01 2011-02-01 1.038462
8 28 2011-05-01 2011-02-01 1.037037
9 29 2011-03-01 2011-03-01 NaN
10 30 2011-04-01 2011-03-01 1.034483
11 31 2011-05-01 2011-03-01 1.033333
</code></pre>
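<p>A variant that skips <code>apply</code> entirely and assigns the column in place (a sketch; it yields the same <code>dbal</code> values):</p>
<pre><code>A['dbal'] = A['bal'] / A.groupby('origdate')['bal'].shift()
</code></pre>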
|
python|pandas
| 0
|