Unnamed: 0 (int64) | id (int64) | title (string) | question (string) | answer (string) | tags (string) | score (int64) |
|---|---|---|---|---|---|---|
377,400
| 56,498,442
|
concatenate two string columns - python
|
<p>I have this dataframe:</p>
<pre><code>df = pd.DataFrame({"X" : ["2017-12-17","2017-12-18","2017-12-19"],
"Y": ["F","W","Q"]})
</code></pre>
<p>And I'm looking for the <code>key</code> column:</p>
<pre><code> X Y key
0 2017-12-17 F 2017-12-17_F
1 2017-12-18 W 2017-12-18_W
2 2017-12-19 Q 2017-12-19_Q
</code></pre>
<p>I have tried <a href="https://stackoverflow.com/questions/50653208/paste0-like-function-in-python-for-multiple-strings">1</a>,<a href="https://stackoverflow.com/questions/10972410/pandas-combine-two-columns-in-a-dataframe">2</a>,<a href="https://stackoverflow.com/questions/11858472/string-concatenation-of-two-pandas-columns">3</a>, and the best solution is (for speed, as they are near 1 million rows):</p>
<pre><code>df.assign(key=[str(x) + "_" + y for x, y in zip(df["X"], df["Y"])])
</code></pre>
<p>And it gives me this error:</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'Timestamp' and 'str'
</code></pre>
<p>Why?</p>
|
<p>Looks like your <code>X</code> column is not a string as posted, but a <code>Timestamp</code>. Anyway, you can try:</p>
<pre><code>df['key'] = df.X.astype(str) + '_' + df.Y.astype(str)
</code></pre>
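<p>If you want to keep the list-comprehension approach from the question, a minimal sketch (assuming <code>X</code> really holds datetimes) is to format the timestamps explicitly:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"X": pd.to_datetime(["2017-12-17", "2017-12-18", "2017-12-19"]),
                   "Y": ["F", "W", "Q"]})
# dt.strftime gives explicit control over how the date appears in the key
df["key"] = df["X"].dt.strftime("%Y-%m-%d") + "_" + df["Y"]
</code></pre>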
|
python|pandas|numpy|data-manipulation
| 0
|
377,401
| 56,546,251
|
Custom range bins in Pandas (interval starting always from zero)
|
<p>I use:</p>
<pre><code>bins = pd.cut(data['R10rank'], list(np.arange(0.0, 1.1, 0.1)))
sum=data.groupby(bins)['Ret20d'].agg(['count', 'mean'])
</code></pre>
<p>to create stats like:</p>
<pre><code> count mean
R10rank
(0.0, 0.1] 1044 4.782833
(0.1, 0.2] 809 5.527745
(0.2, 0.3] 746 5.181306
(0.3, 0.4] 706 4.034747
(0.4, 0.5] 627 3.119654
(0.5, 0.6] 585 1.977387
(0.6, 0.7] 609 -0.602742
(0.7, 0.8] 493 -2.745312
(0.8, 0.9] 412 -2.476791
(0.9, 1.0] 374 -6.364374
</code></pre>
<p>Next I would like to see bins that would aggregate stats over different intervals of value.</p>
<p>Like:</p>
<pre><code><0.1
<0.3
<0.5
>0.5
>0.7
etc
</code></pre>
<p>Thus the second line would contain the count and mean for all values in R10rank between 0 and 0.3, and the fourth line the count and mean for all values in R10rank above 0.5.</p>
<p>Can I use pd.cut for that too? If not, what would be the easier way?</p>
<p>Thank you.</p>
|
<p>You can compute the cumulative ("&lt;x") statistics with <code>expanding</code>:</p>
<pre><code>df['New']=df['count']*df['mean']
df.expanding(min_periods=1).sum().assign(mean=lambda x : x['New']/x['count'])
Out[105]:
             count      mean           New
R10rank
(0.0, 0.1]  1044.0  4.782833   4993.277652
(0.1, 0.2]  1853.0  5.108054   9465.223357
(0.2, 0.3]  2599.0  5.129080  13330.477633
(0.3, 0.4]  3305.0  4.895313  16179.009015
(0.4, 0.5]  3932.0  4.612165  18135.032073
(0.5, 0.6]  4517.0  4.270933  19291.803468
(0.6, 0.7]  5126.0  3.691911  18924.733590
(0.7, 0.8]  5619.0  3.127121  17571.294774
(0.8, 0.9]  6031.0  2.744297  16550.856882
(0.9, 1.0]  6405.0  2.212425  14170.581006
</code></pre>
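<p>That covers the "&lt;x" direction; for the "&gt;x" rows, a sketch along the same lines cumulates from the bottom up:</p>
<pre><code>rev = df.iloc[::-1]
# each row: count and mean for its bin and every bin above it
above = rev[['count', 'New']].cumsum().assign(mean=lambda x: x['New'] / x['count'])
</code></pre>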
|
pandas|data-binding
| 0
|
377,402
| 56,493,760
|
How to split the column with the same delimiter
|
<p>My dataframe is this and I want to split my data frame by colon (<code>:</code>)</p>
<pre><code>+------------------+
|Name:Roll_no:Class|
+------------------+
| #ab:cd#:23:C|
| #sd:ps#:34:A|
| #ra:kh#:14:H|
| #ku:pa#:36:S|
| #ra:sh#:50:P|
+------------------+
</code></pre>
<p>and I want my dataframe like:</p>
<pre><code>+-----+-------+-----+
| Name|Roll_no|Class|
+-----+-------+-----+
|ab:cd| 23| C|
|sd:ps| 34| A|
|ra:kh| 14| H|
|ku:pa| 36| S|
|ra:sh| 50| P|
+-----+-------+-----+
</code></pre>
|
<p>If you need to split by the last 2 <code>:</code>, use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.rsplit.html" rel="nofollow noreferrer"><code>Series.str.rsplit</code></a>, then set the columns by splitting the original column name, and finally remove the first and last <code>#</code> by indexing:</p>
<pre><code>col = 'Name:Roll_no:Class'
df1 = df[col].str.rsplit(':', n=2, expand=True)
df1.columns = col.split(':')
df1['Name'] = df1['Name'].str[1:-1]
#if only first and last value
#df1['Name'] = df1['Name'].str.strip('#')
print (df1)
Name Roll_no Class
0 ab:cd 23 C
1 sd:ps 34 A
2 ra:kh 14 H
3 ku:pa 36 S
4 ra:sh 50 P
</code></pre>
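<p>An alternative sketch, assuming the layout is always <code>#Name#:Roll_no:Class</code>, is a single regex extract:</p>
<pre><code># named groups become the column names; .+ is greedy, so Name keeps its inner ':'
df1 = df['Name:Roll_no:Class'].str.extract(r'^#(?P<Name>.+)#:(?P<Roll_no>[^:]+):(?P<Class>[^:]+)$')
</code></pre>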
|
python|pandas|pyspark
| 4
|
377,403
| 56,690,524
|
matplotlib geopandas plot choropleth with set bins for colorscheme
|
<h1>How do I set a consistent colorscheme for three <code>axes</code> in the same figure?</h1>
<p>The following should be a wholly reproducible example to run the code and get the same figure I have posted below.</p>
<p>Get the shapefile data from the <a href="http://geoportal.statistics.gov.uk/datasets/8edafbe3276d4b56aec60991cbddda50_1?selectedAttribute=lad15cd" rel="noreferrer">Office for National Statistics</a>. Run this in a terminal as a <code>bash</code> file / commands.</p>
<pre class="lang-sh prettyprint-override"><code>wget --output-document 'LA_authorities_boundaries.zip' 'https://opendata.arcgis.com/datasets/8edafbe3276d4b56aec60991cbddda50_1.zip?outSR=%7B%22latestWkid%22%3A27700%2C%22wkid%22%3A27700%7D&session=850489311.1553456889'
mkdir LA_authorities_boundaries
cd LA_authorities_boundaries
unzip ../LA_authorities_boundaries.zip
</code></pre>
<p>The python code that reads the shapefile and creates a dummy <code>GeoDataFrame</code> for reproducing the behaviour.</p>
<pre class="lang-py prettyprint-override"><code>import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
gdf = gpd.read_file(
'LA_authorities_boundaries/Local_Authority_Districts_December_2015_Full_Extent_Boundaries_in_Great_Britain.shp'
)
# 380 values
df = pd.DataFrame([])
df['AREA_CODE'] = gdf.lad15cd.values
df['central_pop'] = np.random.normal(30, 15, size=(len(gdf.lad15cd.values)))
df['low_pop'] = np.random.normal(10, 15, size=(len(gdf.lad15cd.values)))
df['high_pop'] = np.random.normal(50, 15, size=(len(gdf.lad15cd.values)))
</code></pre>
<p>Join the shapefile from ONS and create a <code>geopandas.GeoDataFrame</code></p>
<pre class="lang-py prettyprint-override"><code>def join_df_to_shp(pd_df, gpd_gdf):
""""""
df_ = pd.merge(pd_df, gpd_gdf[['lad15cd','geometry']], left_on='AREA_CODE', right_on='lad15cd', how='left')
# DROP the NI counties
df_ = df_.dropna(subset=['geometry'])
# convert back to a geopandas object (for ease of plotting etc.)
crs = {'init': 'epsg:4326'}
gdf_ = gpd.GeoDataFrame(df_, crs=crs, geometry='geometry')
# remove the extra area_code column joined from gdf
gdf_.drop('lad15cd',axis=1, inplace=True)
return gdf_
pop_gdf = join_df_to_shp(df, gdf)
</code></pre>
<p>Make the plots</p>
<pre class="lang-py prettyprint-override"><code>fig,(ax1,ax2,ax3,) = plt.subplots(1,3,figsize=(15,6))
pop_gdf.plot(
column='low_pop', ax=ax1, legend=True, scheme='quantiles', cmap='OrRd',
)
pop_gdf.plot(
column='central_pop', ax=ax2, legend=True, scheme='quantiles', cmap='OrRd',
)
pop_gdf.plot(
column='high_pop', ax=ax3, legend=True, scheme='quantiles', cmap='OrRd',
)
for ax in (ax1,ax2,ax3,):
ax.axis('off')
</code></pre>
<p><a href="https://i.stack.imgur.com/aNinv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/aNinv.png" alt="enter image description here"></a></p>
<h1>I want all three <code>ax</code> objects to share the same bins (preferable the <code>central_pop</code> scenario <code>quantiles</code>) so that the legend is consistent for the whole figure. The <code>quantiles</code> from ONE scenario (central) would become the <code>levels</code> for all</h1>
<p>This way I should see darker colors (more red) in the far right <code>ax</code> showing the <code>high_pop</code> scenario.</p>
<p>How can I set the colorscheme bins for the whole figure / each of the <code>ax</code> objects?</p>
<p>The simplest ways I can see this working are either:
a) providing a set of bins to the <code>geopandas.plot()</code> function, or
b) extracting the colorscheme / bins from one <code>ax</code> and applying them to another.</p>
|
<p>From geopandas 0.5 onwards you can use a custom scheme defined as <code>scheme="User_Defined"</code> and supply the binning via <code>classification_kwds</code>.</p>
<pre><code>import geopandas as gpd
print(gpd.__version__) ## 0.5
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
gdf = gpd.read_file(gpd.datasets.get_path('naturalearth_lowres'))
gdf['quant']=np.random.rand(len(gdf))*100-20
fig, ax = plt.subplots()
gdf.plot(column='quant', cmap='RdBu', scheme="User_Defined",
legend=True, classification_kwds=dict(bins=[-10,20,30,50,70]),
ax=ax)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/gmDk2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gmDk2.png" alt="enter image description here"></a></p>
<p>So the remaining task is to get a list of bins from the quantiles of one of the columns. This should be easily done, e.g. via </p>
<pre><code>import mapclassify
bins = mapclassify.Quantiles(gdf['quant'], k=5).bins
</code></pre>
<p>then setting <code>classification_kwds=dict(bins=bins)</code> in the above code.</p>
<p><a href="https://i.stack.imgur.com/W76VS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/W76VS.png" alt="enter image description here"></a></p>
|
python|python-3.x|matplotlib|geopandas
| 13
|
377,404
| 56,512,810
|
Categorizing a column in a pandas dataframe
|
<p>I have a dataframe with a column that has university ranks as follows: <code>1,2,3,4,5,...,99,100,101-150,151-200,201-300,301-400,401-500,>500</code> and looks like this:</p>
<pre><code> Uni_Rank
1
3
4
101-150
20
22
151-200
201-300
301-400
10
15
44
53
70
>500
</code></pre>
<p>I need to generate another column that would categorize them as follow: <code>1-10, 11-20, 21-30, 31-40, 41-50, 51-60, 61-70, 71-80, 81-90, 91-100, 101-150, 101-150,151-200,201-300,301-400,401-500,>500</code></p>
<p>My problem is that I cannot categorize them as numbers, since some are strings. So I'm not sure how to do this.</p>
|
<p>You can do something like this (just add more bin boundaries and labels as you wish).</p>
<p>I am assuming the string ranks are the eventual categories that you want.</p>
<pre class="lang-py prettyprint-override"><code>df_train_final['uni_original'] = [1,3,4,'101-150',20,22,'151-200']
bins = [0, 10, 20, 30]
names = ['1-10', '11-20', '21-30']
df_train_final['uni_rank'] = df_train_final['uni_original'].apply(lambda x: x if isinstance(x, str) else pd.cut([x], bins, labels=names)[0])
</code></pre>
|
python|pandas
| 1
|
377,405
| 56,731,076
|
Combine text of a column in dataframe with conditions in pandas/python
|
<p>I'm testing an ML model and need to merge my text to cut my audio file and train the model. How can I merge the text using conditions?</p>
<p>My goal is to merge the text in the 'Text' column until I reach an end punctuation to form a sentence. I want to continue to form sentences until I reach the end of the text file. </p>
<p>I have tried to use pandas groupby.</p>
<pre><code>df.groupby(['Name','Speaker','StTime','EnTime'])['Text'].apply(' '.join).reset_index()
Example:
Name Speaker StTime Text EnTime
s1 tom 6.8 I would say 7.3
s1 tom 7.3 7.6
s1 tom 7.6 leap frog 8.3
s1 tom 8.3 9.2
s1 tom 9.2 a pig. 10.1
Desired output:
Name Speaker StTime Text EnTime
s1 tom 6.8 I would say leap frog a pig. 10.1
</code></pre>
|
<p>Or use:</p>
<pre><code>>>> df['Text'] = df.groupby(['Name', 'Speaker'])['Text'].transform(' '.join).str.split().str.join(' ')
>>> df2 = df.head(1)
>>> df2['EnTime'] = df['EnTime'].iloc[-1]
>>> df2
Name Speaker StTime Text EnTime
0 s1 tom 6.8 I would say leap frog a pig. 10.1
>>>
</code></pre>
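<p>If you need one row per sentence rather than one row per speaker, as the question describes, a sketch (assuming a sentence ends with <code>.</code>) is to build a sentence id from the punctuation and group on it:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Name': ['s1']*5, 'Speaker': ['tom']*5,
                   'StTime': [6.8, 7.3, 7.6, 8.3, 9.2],
                   'Text': ['I would say', '', 'leap frog', '', 'a pig.'],
                   'EnTime': [7.3, 7.6, 8.3, 9.2, 10.1]})

# a row that ends with '.' closes a sentence; the id increments on the next row
sent = (df['Text'].str.strip().str.endswith('.')
          .cumsum().shift(fill_value=0).rename('sent'))
out = (df.groupby(['Name', 'Speaker', sent])
         .agg(StTime=('StTime', 'first'),
              Text=('Text', ' '.join),
              EnTime=('EnTime', 'last'))
         .reset_index().drop(columns='sent'))
out['Text'] = out['Text'].str.split().str.join(' ')  # collapse doubled spaces
</code></pre>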
|
python|pandas|pandas-groupby|data-cleaning|data-processing
| 1
|
377,406
| 56,765,205
|
A column name is 0, and it can not be renamed or selected
|
<p>I turned a Series <code>word</code> into a dataframe with the same name.
After reindexing, I now have the dataframe as such:</p>
<pre><code> index 0
0 a A
1 b B
2 c C
</code></pre>
<p>and after I renamed the dataframe:</p>
<pre><code>words.rename({'index':'word','0':'counts'},axis='columns',inplace=True)
</code></pre>
<p>it becomes:</p>
<pre><code> word 0
0 a A
1 b B
2 c C
</code></pre>
<p>As you can see, the column name of 0 remains unchanged.</p>
<p>Then, when I select it, </p>
<pre><code>words['0']
</code></pre>
<p>it shows me this error:</p>
<pre><code>KeyError: '0'
</code></pre>
<p>I checked; it's not the letter O, and I am sure I did not mix them up.</p>
<p>Please help!</p>
<p>ps:codes that I used for creating the Series</p>
<pre><code>words=df_new['text'].str.split(expand=True).stack().value_counts()
words=pd.DataFrame(words)
words.reset_index(level=0,inplace=True)
</code></pre>
|
<p>Here the DataFrame constructor is not necessary; instead use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>Series.reset_index</code></a> with the <code>name</code> parameter, and add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.rename_axis.html" rel="nofollow noreferrer"><code>Series.rename_axis</code></a> to rename the index and avoid the default <code>index</code> column name:</p>
<pre><code>df_new = pd.DataFrame({
'text' : ['A B', 'A C'],
})
print (df_new)
text
0 A B
1 A C
words=(df_new['text'].str.split(expand=True)
.stack()
.value_counts()
.rename_axis('word')
.reset_index(name='counts'))
print (words)
word counts
0 A 2
1 B 1
2 C 1
</code></pre>
<p>Your solution works if you rename the <code>0</code> column with the integer <code>0</code>, not the string <code>'0'</code>:</p>
<pre><code>words=df_new['text'].str.split(expand=True).stack().value_counts().reset_index()
words.rename({'index':'word',0:'counts'},axis='columns',inplace=True)
print (words)
word counts
0 A 2
1 B 1
2 C 1
</code></pre>
|
python|pandas
| 1
|
377,407
| 56,834,934
|
How to replace tensorflow softmax with max for generating one hot vector at the output layer of Neural Network?
|
<p>For a classification problem, the softmax function is used in the last layer of the Neural Network.<br>
I want to replace the softmax layer with a max layer that generates a one-hot vector, with one at the index where the maximum value occurred and all other entries set to zero.</p>
<p>I can do it with tf.argmax as suggested in <a href="https://stackoverflow.com/questions/44724948/tensorflow-dense-vector-to-one-hot">TensorFlow - dense vector to one-hot</a> and <a href="https://stackoverflow.com/questions/46841116/tensorflow-convert-output-tensor-to-one-hot">Tensorflow: Convert output tensor to one-hot</a>, but these are not a differentiable way of doing it and gradients cannot be calculated.</p>
<p>If exact 0's and 1's cannot be obtained, values close to them would be good enough.</p>
<p>I was thinking to apply softmax multiple times but it is not recommended and I do not understand the reason behind it. </p>
<p>Please suggest a differentiable solution.</p>
|
<p>No, there is no differentiable solution, that is why we use the <code>softmax</code> activation, because it is a differentiable approximation to the max function.</p>
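<p>As an illustration (not part of the original answer), dividing the logits by a small "temperature" sharpens the softmax toward a one-hot vector while staying differentiable, which matches the "close enough" requirement:</p>
<pre><code>import tensorflow as tf

logits = tf.constant([[1.0, 2.0, 3.0]])
print(tf.nn.softmax(logits).numpy())        # ~[[0.09 0.24 0.67]]
print(tf.nn.softmax(logits / 0.1).numpy())  # ~[[0.   0.   1.  ]]
</code></pre>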
|
python|tensorflow|neural-network|one-hot-encoding|softmax
| 0
|
377,408
| 56,632,417
|
Convert header and values list to pandas dataframe
|
<p>I'm reading data from Google Sheets and was able to successfully get it in:</p>
<pre><code>header = result.get('values', [])[0] #First line is column names
values = result.get('values', [])[1:] #Everything else is data
</code></pre>
<p>Then after that I'm doing this:</p>
<pre><code>if not values:
print('No data found.')
else:
df = pd.DataFrame(data=values, columns=header)
print(df.head(10))
</code></pre>
<p>but I'm getting an error:</p>
<blockquote>
<p>AssertionError: 26 columns passed, passed data had 14 columns</p>
</blockquote>
<p>If I print header or values I'm getting the data in List successfully.</p>
|
<p>First make your dataframe with the values you have; after that, create a dataframe for the rest of the columns filled with <code>NaN</code> and <code>concat</code> them together:</p>
<pre><code>df = pd.DataFrame(data=values, columns=header[:14])
df2 = pd.DataFrame({head : [np.NaN]*len(df) for head in header[14:]})
df_final = pd.concat([df, df2], axis=1)
</code></pre>
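<p>A shorter sketch of the same idea, assuming every data row has 14 fields: let <code>reindex</code> add the missing columns as <code>NaN</code>:</p>
<pre><code>df_final = pd.DataFrame(values, columns=header[:14]).reindex(columns=header)
</code></pre>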
<blockquote>
<p>Q: how did you know that the columns will be 14 from 5297</p>
</blockquote>
<p>A: Because your <code>values</code> contains rows, so 5297 is the number of rows in your data.</p>
|
python|pandas|dataframe|google-sheets-api
| 1
|
377,409
| 56,649,369
|
pandas: cannot reindex from a duplicate axis
|
<p>I have this code:</p>
<pre><code>missing_columns = list(set(model_header) - set(combined_data.columns))
if missing_columns:
combined_data = combined_data.reindex(columns=np.append(combined_data.columns.values, missing_columns))
</code></pre>
<p>which is sometimes generating this error</p>
<blockquote>
<p>cannot reindex from a duplicate axis</p>
</blockquote>
<p>I understand from other posts that this happens when you have duplicate columns, but I don't see how that can happen here, given I am only adding the missing columns.</p>
<p>This is the traceback</p>
<pre><code>Traceback:
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/core/handlers/exception.py" in inner
41. response = get_response(request)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in _legacy_get_response
249. response = self._get_response(request)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
187. response = self.process_exception_by_middleware(e, request)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/core/handlers/base.py" in _get_response
185. response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in wrapper
552. return self.admin_site.admin_view(view)(*args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
149. response = view_func(request, *args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/views/decorators/cache.py" in _wrapped_view_func
57. response = view_func(request, *args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/contrib/admin/sites.py" in inner
224. return view(request, *args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapper
67. return bound_func(*args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/utils/decorators.py" in _wrapped_view
149. response = view_func(request, *args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/utils/decorators.py" in bound_func
63. return func.__get__(self, type(self))(*args2, **kwargs2)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in changelist_view
1590. response = self.response_action(request, queryset=cl.get_queryset(request))
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/contrib/admin/options.py" in response_action
1287. response = func(self, request, queryset)
File "/home/henry/Documents/Sites/Development/web-cdi/webcdi/researcher_UI/admin_actions.py" in scoring_data
20. return download_data(request, study_obj, administrations)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/django/contrib/auth/decorators.py" in _wrapped_view
23. return view_func(request, *args, **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/webcdi/researcher_UI/views.py" in download_data
145. combined_data = combined_data.reindex(columns=np.append(combined_data.columns.values, missing_columns))
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/frame.py" in reindex
2733. **kwargs)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/generic.py" in reindex
2515. fill_value, copy).__finalize__(self)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/frame.py" in _reindex_axes
2674. fill_value, limit, tolerance)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/frame.py" in _reindex_columns
2699. allow_dups=False)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/generic.py" in _reindex_with_indexers
2627. copy=copy)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/internals.py" in reindex_indexer
3886. self.axes[axis]._can_reindex(indexer)
File "/home/henry/Documents/Sites/Development/web-cdi/env/local/lib/python2.7/site-packages/pandas/core/indexes/base.py" in _can_reindex
2836. raise ValueError("cannot reindex from a duplicate axis")
Exception Type: ValueError at /wcadmin/researcher_UI/study/
Exception Value: cannot reindex from a duplicate axis
</code></pre>
<p>I've looked at the columns and I don't see an overlap. Could this be anything else?</p>
|
<p>I guess there are duplicated column names in one or both <code>DataFrames</code>; the solution is to deduplicate them first, either manually or with the code below:</p>
<pre><code>model_header = pd.DataFrame(columns=list('ABDB'))
combined_data = pd.DataFrame(columns=list('ABCA'))
print (model_header)
Empty DataFrame
Columns: [A, B, D, B]
Index: []
print (combined_data)
Empty DataFrame
Columns: [A, B, C, A]
Index: []
</code></pre>
<hr>
<pre><code># append a _1, _2, ... suffix to repeated column names in each frame
s1 = model_header.columns.to_series()
model_header.columns = (model_header.columns +
        s1.groupby(s1).cumcount().astype(str).radd('_').str.replace('_0',''))

s2 = combined_data.columns.to_series()
combined_data.columns = (combined_data.columns +
        s2.groupby(s2).cumcount().astype(str).radd('_').str.replace('_0',''))
print (model_header)
Empty DataFrame
Columns: [A, B, D, B_1]
Index: []
print (combined_data)
Empty DataFrame
Columns: [A, B, C, A_1]
Index: []
</code></pre>
<hr>
<pre><code>missing_columns = list(set(model_header) - set(combined_data.columns))
print (missing_columns)
['D', 'B_1']
if missing_columns:
combined_data = combined_data.reindex(columns=np.append(combined_data.columns.values, missing_columns))
print (combined_data)
Empty DataFrame
Columns: [A, B, C, A_1, D, B_1]
Index: []
</code></pre>
|
pandas
| 3
|
377,410
| 25,577,213
|
contour plot with python loop and matplotlib
|
<p>I can't figure out what's preventing me from getting a contour plot of this cost function. After much trial and error, I'm getting:</p>
<pre><code>ValueError: zero-size array to reduction operation minimum which has no identity
</code></pre>
<p>If I print J it doesn't give me any values, just a 100x100 array full of nan. Is that the reason? J should be full of cost values, right? Thanks so much for any help.</p>
<pre><code>X,y,ComputeCost = defined earlier and 90% sure not the problem
theta_zero = np.linspace(-10,10,100)
theta_one = np.linspace(-1,4,100)
L,Q = np.meshgrid(theta_zero,theta_one)
J = np.zeros((len(theta_zero),len(theta_one)))
for i in range(0,len(theta_zero)):
for j in range(0,len(theta_one)):
t = DataFrame([theta_zero[i],theta_one[j]])
J[i,j] = ComputeCost(X,y,t)
plt.contour(L,Q,J)
</code></pre>
|
<p>If <code>J</code> is just <code>nan</code>s, then the problem is in the way you're generating <code>J</code> and <strong>not</strong> the <code>contour()</code> call. </p>
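<p>A quick hypothetical check (not from the original answer) to localize the problem: evaluate the cost for a single pair of parameters.</p>
<pre><code>t = DataFrame([theta_zero[0], theta_one[0]])
print(ComputeCost(X, y, t))  # if this already prints nan, ComputeCost (or X, y) is the culprit
</code></pre>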
|
python|numpy|matplotlib
| 1
|
377,411
| 25,552,741
|
Python numpy not saving array ()
|
<p>I am getting a strange error when trying to (binary) save some arrays in Python 2.
I have isolated the error; in particular, supposing</p>
<pre><code>p1 = [1, 5, 10, 20]
p2 = [1, 5, 10, 20, 30]
p3 =np.zeros( (5,10), dtype=float)
</code></pre>
<p>then</p>
<pre><code>np.save("foo1", (p1, p2))
np.save("foo2", (p1, p3))
</code></pre>
<p>works OK, but</p>
<pre><code>np.save("foo3", (p2, p3))
</code></pre>
<p>returns an error:
<img src="https://i.stack.imgur.com/hbHCm.jpg" alt="enter image description here"></p>
<p>Any ideas what is happening?
The error says "setting an array element with a sequence".
I tried looking around, converting the arrays and so on, but to no avail.
What is funny is that, as mentioned, the first saves are OK, and p1 is very similar to p2...</p>
|
<p>The error is not due to <code>np.save</code>, but coming from trying to create an array from nested sequences. I get a similar but different error, probably because I am working on the development version, using any of the variants of <code>np.array</code>:</p>
<pre><code>>>> np.array((p2, p3))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: could not broadcast input array from shape (5,10) into shape (5)
</code></pre>
<p>Not sure if this qualifies as a bug, but what's tripping numpy is the fact that the first dimension of <code>p2</code> and <code>p3</code> is the same, 5 in your case. So numpy thinks it should create an array <code>arr</code> of shape <code>(2, 5, ...)</code>. It then assigns the values in <code>p2</code> to <code>arr[0, :]</code> with no problems. But when it tries to assign the values in <code>p3</code> to <code>arr[1, :]</code> is when the error happens: you are trying to stick into a single position, e.g. <code>arr[1, 0]</code>, the 5 elements in <code>p3[0, :]</code>.</p>
<p>Numpy could probably be smarter about this, and not assume that a matching dimension means that the depth of all sequences is the same, as it seems to be doing. You may want to ping the numpy mailing list to see if one of the devs has a more informed opinion on whether this is undesirable behavior, or a design choice. </p>
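<p>As a workaround sketch, you can build the object array explicitly so numpy never attempts the broadcast, or save the arrays under separate keys:</p>
<pre><code>arr = np.empty(2, dtype=object)  # 1-D container of two Python objects
arr[0], arr[1] = p2, p3
np.save("foo3", arr)

np.savez("foo3b", p2=p2, p3=p3)  # alternative: one .npz file, separate keys
</code></pre>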
|
python-2.7|numpy|save
| 5
|
377,412
| 25,825,720
|
python: join group size to member rows in dataframe
|
<p>(Python 2.7) I wish to create a column in a python dataframe with the size of the group to which member rows belong (indexed by row ID number). Groups are based on rows with identical values in two columns, date and amount. I've attempted to use groupby and size - which is suggested for similar problems - but I can't get the resulting size values back to the source dataframe due to indexing problems. Should I use a dictionary to read all unique value pairings instead, and what would that look like? Or should I learn how to merge the groupby object to the original dataframe with a join operation? Note: this is a large dataset.</p>
<p>Sample data:</p>
<pre><code> date amount address
ID
176820 1/4/2008 0:00 400 13496 ST LOUIS
176821 1/4/2008 0:00 500 13475 NEWBERN
176822 1/4/2008 0:00 2000 8011 DAYTON
176823 1/4/2008 0:00 4000 13406 LONGVIEW
176824 1/4/2008 0:00 7000 19174 ARCHDALE
</code></pre>
<p>Here's what I thought might work:</p>
<pre><code> df['group_size'] = df.groupby(['date','amount']).size()
</code></pre>
<p>But I received this: TypeError: incompatible index of inserted column with frame index</p>
<p>UPDATE: elyase's solution works for the original sample data I posted. My source dataframe actually has 13 columns, not 3, but elyase's solution doesn't work when even one additional column is added to the sample frame.</p>
<pre><code> date amount address tract
ID
176820 1/4/2008 0:00 400 13496 ST LOUIS 510200
176821 1/4/2008 0:00 500 13475 NEWBERN 510400
176822 1/4/2008 0:00 2000 8011 DAYTON 526200
176823 1/4/2008 0:00 4000 13406 LONGVIEW 504200
176824 1/4/2008 0:00 7000 19174 ARCHDALE 540200
</code></pre>
<p>I get the error: Wrong number of items passed 1, indices imply 2</p>
|
<p>Have you tried:</p>
<pre><code>df.groupby(['date','amount']).transform('count')
</code></pre>
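<p>To attach the result as a single new column, a sketch along the same lines (<code>'size'</code> counts all rows per group):</p>
<pre><code>df['group_size'] = df.groupby(['date', 'amount'])['date'].transform('size')
</code></pre>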
|
python|pandas
| 1
|
377,413
| 25,819,172
|
How to initialize empty pandas Panel
|
<p>I'm trying to fill pandas Panel in a <code>for</code> loop:</p>
<pre><code>dp = pd.Panel()
for i in range(x):
# read data in 2D numpy array as 'arr'
dp[i] = arr
</code></pre>
<p>which raises:</p>
<pre><code>ValueError: shape of value must be (0, 0), shape of given object was (309, 495)
</code></pre>
<p>Trying:</p>
<pre><code>dp = pd.Panel()
for i in range(x):
...
dp.update({i: arr})
</code></pre>
<p>does not raise an error, but after completion I get an empty Panel.</p>
<p>If I initialize Panel with single item, I can use the first method to update the panel, but I want to initialize empty Panel.</p>
<p>So how to initialize empty pandas Panel, to be able to add data?</p>
|
<p>It seems it is necessary to initialize Panel with the shape of expected data, so in my case this worked fine:</p>
<pre><code>dp = pd.Panel(major_axis=range(309), minor_axis=range(495))
for i in range(x):
# read data in 2D numpy array as 'arr'
dp[i] = arr
</code></pre>
<p>Same applies to DataFrame - if user wants to add a column to empty DataFrame it will fail if DataFrame index is not defined, so:</p>
<pre><code>df = DataFrame(index=range(5))
</code></pre>
<p>is the right way to initialize empty DataFrame with 5 rows.</p>
|
python|pandas
| 1
|
377,414
| 25,697,450
|
Cleaner pandas apply with function that cannot use pandas.Series and non-unique index
|
<p>In the following, <code>func</code> represents a function that uses multiple columns (with coupling across the group) and cannot operate directly on <code>pandas.Series</code>. The <code>0*d['x']</code> syntax was the lightest I could think of to force the conversion, but I think it's awkward.</p>
<p>Additionally, the resulting <code>pandas.Series</code> (<code>s</code>) still includes the group index, which must be removed before adding as a column to the <code>pandas.DataFrame</code>. The <code>s.reset_index(...)</code> index manipulation seems fragile and error-prone, so I'm curious if it can be avoided. Is there an idiom for doing this?</p>
<pre><code>import pandas
import numpy
df = pandas.DataFrame(dict(i=[1]*8,j=[1]*4+[2]*4,x=list(range(4))*2))
df['y'] = numpy.sin(df['x']) + 1000*df['j']
df = df.set_index(['i','j'])
print('# df\n', df)
def func(d):
x = numpy.array(d['x'])
y = numpy.array(d['y'])
# I want to do math with x,y that cannot be applied to
# pandas.Series, so explicitly convert to numpy arrays.
#
# We have to return an appropriately-indexed pandas.Series
# in order for it to be admissible as a column in the
# pandas.DataFrame. Instead of simply "return x + y", we
# have to make the conversion.
return 0*d['x'] + x + y
s = df.groupby(df.index).apply(func)
# The Series is still adorned with the (unnamed) group index,
# which will prevent adding as a column of df due to
# Exception: cannot handle a non-unique multi-index!
s = s.reset_index(level=0, drop=True)
print('# s\n', s)
df['z'] = s
print('# df\n', df)
</code></pre>
|
<p>Instead of </p>
<pre><code>0*d['x'] + x + y
</code></pre>
<p>you could use</p>
<pre><code>pd.Series(x+y, index=d.index)
</code></pre>
<hr>
<p>When using <code>groupy-apply</code>, instead of dropping the group key index using:</p>
<pre><code>s = df.groupby(df.index).apply(func)
s = s.reset_index(level=0, drop=True)
df['z'] = s
</code></pre>
<p>you can tell <code>groupby</code> to drop the keys using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html#pandas-dataframe-groupby" rel="nofollow">the keyword parameter <code>group_keys=False</code></a>:</p>
<pre><code>df['z'] = df.groupby(df.index, group_keys=False).apply(func)
</code></pre>
<hr>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(dict(i=[1]*8,j=[1]*4+[2]*4,x=list(range(4))*2))
df['y'] = np.sin(df['x']) + 1000*df['j']
df = df.set_index(['i','j'])
def func(d):
x = np.array(d['x'])
y = np.array(d['y'])
return pd.Series(x+y, index=d.index)
df['z'] = df.groupby(df.index, group_keys=False).apply(func)
print(df)
</code></pre>
<p>yields</p>
<pre><code> x y z
i j
1 1 0 1000.000000 1000.000000
1 1 1000.841471 1001.841471
1 2 1000.909297 1002.909297
1 3 1000.141120 1003.141120
2 0 2000.000000 2000.000000
2 1 2000.841471 2001.841471
2 2 2000.909297 2002.909297
2 3 2000.141120 2003.141120
</code></pre>
|
numpy|pandas
| 3
|
377,415
| 25,749,285
|
How to get rid of values from Numpy array without loop?
|
<p>I have a numpy array similar to the following that represents the neighbors of each individual (this is first generated by the igraph package, then converted to a numpy array</p>
<pre><code>import numpy as np
import igraph
Edges = 2
NumNodes = 30
DisGraph = igraph.GraphBase.Barabasi(NumNodes, Edges)
Neighbors = map(DisGraph.neighbors, range(NumNodes))
Neighbors = np.asarray(Neighbors)
</code></pre>
<p>):</p>
<pre><code>Neighbors=[[1, 2, 3, 4, 5, 6, 8, 9, 11, 23, 24, 27]
[0, 2, 3, 4, 9, 10, 13, 16, 17, 19, 25, 27] [0, 1, 10, 22]
[0, 1, 5, 6, 7, 8, 12, 14, 15, 21, 22] [0, 1]
[0, 3, 7, 11, 15, 23, 24, 25, 29] [0, 3] [3, 5, 18] [0, 3, 12, 16, 18]
[0, 1, 13] [1, 2, 14, 20] [0, 5] [3, 8, 19] [1, 9, 21, 28]
[3, 10, 17, 20, 26] [3, 5] [1, 8] [1, 14, 26] [7, 8] [1, 12] [10, 14, 28]
[3, 13] [2, 3] [0, 5] [0, 5] [1, 5] [14, 17] [0, 1, 29] [13, 20] [5, 27]]
</code></pre>
<p>I would like to find a way to get rid of certain numbers from this array,
possibly without using a loop.
For example, if I have a list:</p>
<pre><code>List = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
</code></pre>
<p>Then, I would like the resulting Neighbors array to have these values in List removed.</p>
<p>Any help will be appreciated.</p>
<p>Current answer that I have is the following:</p>
<pre><code>for aa in List:
i=0
for bb in Neighbors:
Neighbors[i] = [cc for cc in bb if cc != aa]
i=i+1
</code></pre>
<p>But I would like to know if there are more efficient way of handling this, as I am working with arrays sizing millions.</p>
|
<p>I don't know what you mean by "have these values in List removed" (what do you mean, "remove"?). Generally, though, you can select points within an array via: </p>
<pre><code>import numpy as np
a = np.random.random_integers(0,10,[10,10])
b = np.random.random_integers(0,10,5)
for r in b:
a[a==r] = -999
a
Out[12]:
array([[ 5, 1, -999, 3, 7, 5, 8, 3, 8, 4],
[ 8, 8, -999, 7, -999, 8, -999, -999, 4, 7],
[ 10, -999, -999, -999, -999, 1, -999, 7, 10, -999],
[ 3, 10, 8, -999, 8, 4, -999, 7, 4, 3],
[ 4, -999, 4, -999, -999, -999, -999, -999, -999, -999],
[ 5, 3, -999, 10, 10, -999, 10, 10, 3, 8],
[ 8, 5, -999, -999, 7, -999, 1, 8, -999, 8],
[ 4, 3, 8, -999, 3, 5, 4, -999, 4, 10],
[ 4, 3, 7, 4, -999, 7, 7, 7, -999, 8],
[-999, 10, -999, 5, 1, 5, 1, 10, 5, 1]])
</code></pre>
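<p>For the ragged neighbor lists in the question, a sketch using <code>np.setdiff1d</code> still iterates over rows but pushes the per-row work into numpy (note it returns sorted unique values):</p>
<pre><code>remove = np.asarray(List)
Neighbors = [np.setdiff1d(row, remove) for row in Neighbors]
</code></pre>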
|
python|arrays|numpy
| 1
|
377,416
| 25,973,514
|
Combining Series in Pandas
|
<p>I need to combine multiple Pandas <code>Series</code> that contain string values. The series are messages that result from multiple validation steps. I try to combine these messages into 1 <code>Series</code> to attach it to the <code>DataFrame</code>. The problem is that the result is empty.</p>
<p>This is an example:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['aa', 'bb', 'cc', 'dd']})
index1 = df[df['a'] == 'b'].index
index2 = df[df['a'] == 'a'].index
series = df.iloc[index1].apply(lambda x: x['b'] + '-bbb', axis=1)
series += df.iloc[index2].apply(lambda x: x['a'] + '-aaa', axis=1)
print series
# >>> series
# 0 NaN
# 1 NaN
</code></pre>
<p><strong>Update</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['aa', 'bb', 'cc', 'dd']})
index1 = df[df['a'] == 'b'].index
index2 = df[df['a'] == 'a'].index
series1 = df.iloc[index1].apply(lambda x: x['b'] + '-bbb', axis=1)
series2 = df.iloc[index2].apply(lambda x: x['a'] + '-aaa', axis=1)
series3 = df.iloc[index2].apply(lambda x: x['a'] + '-ccc', axis=1)
# series3 causes a ValueError: cannot reindex from a duplicate axis
series = pd.concat([series1, series2, series3])
df['series'] = series
print df
</code></pre>
<p><strong>Update2</strong></p>
<p>In this example the indices seem to get mixed up.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['aa', 'bb', 'cc', 'dd']})
index1 = df[df['a'] == 'a'].index
index2 = df[df['a'] == 'b'].index
index3 = df[df['a'] == 'c'].index
series1 = df.iloc[index1].apply(lambda x: x['a'] + '-aaa', axis=1)
series2 = df.iloc[index2].apply(lambda x: x['a'] + '-bbb', axis=1)
series3 = df.iloc[index3].apply(lambda x: x['a'] + '-ccc', axis=1)
print series1
print
print series2
print
print series3
print
df['series'] = pd.concat([series1, series2, series3], ignore_index=True)
print df
print
df['series'] = pd.concat([series2, series1, series3], ignore_index=True)
print df
print
df['series'] = pd.concat([series3, series2, series1], ignore_index=True)
print df
print
</code></pre>
<p>This results in this output:</p>
<pre><code>0 a-aaa
dtype: object
1 b-bbb
dtype: object
2 c-ccc
dtype: object
a b series
0 a aa a-aaa
1 b bb b-bbb
2 c cc c-ccc
3 d dd NaN
a b series
0 a aa b-bbb
1 b bb a-aaa
2 c cc c-ccc
3 d dd NaN
a b series
0 a aa c-ccc
1 b bb b-bbb
2 c cc a-aaa
3 d dd NaN
</code></pre>
<p>I would expect only a's in row0, only b's in row1 and only c's in row2, but that's not the case...</p>
<p><strong>Update 3</strong></p>
<p>Here's a better example which should demonstrate the expected behaviour. As I said, the use case is that for a given <code>DataFrame</code>, a function evaluates each row and possibly returns an error message for some of the rows as a <code>Series</code> (some indexes are covered, some are not; if no errors occur, the returned error series is empty).</p>
<pre><code>In [12]:
s1 = pd.Series(['b', 'd'], index=[1, 3])
s2 = pd.Series(['a', 'b'], index=[0, 1])
s3 = pd.Series(['c', 'e'], index=[2, 4])
s4 = pd.Series([], index=[])
pd.concat([s1, s2, s3, s4]).sort_index()
# I'd like to get:
#
# 0 a
# 1 b b
# 2 c
# 3 d
# 4 e
Out[12]:
0 a
1 b
1 b
2 c
3 d
4 e
dtype: object
</code></pre>
|
<p>When concatenating, the default is to use the existing indices; however, if they collide this will raise a <code>ValueError</code> as you've found, so you need to set <code>ignore_index=True</code>:</p>
<pre><code>In [33]:
series = pd.concat([series1, series2, series3], ignore_index=True)
df['series'] = series
print (df)
a b series
0 a aa bb-bbb
1 b bb a-aaa
2 c cc a-ccc
3 d dd NaN
</code></pre>
<p><strong>EDIT</strong></p>
<p>I think I know what you want now: you can achieve it by converting the series into a dataframe and then merging using the indices:</p>
<pre><code>In [96]:
df = pd.DataFrame({'a': ['a', 'b', 'c', 'd'], 'b': ['aa', 'bb', 'cc', 'dd']})
index1 = df[df['a'] == 'b'].index
index2 = df[df['a'] == 'a'].index
series1 = df.iloc[index1].apply(lambda x: x['b'] + '-bbb', axis=1)
series2 = df.iloc[index2].apply(lambda x: x['a'] + '-aaa', axis=1)
series3 = df.iloc[index2].apply(lambda x: x['a'] + '-ccc', axis=1)
# we now don't ignore the index in order to preserve the identity of the row we want to merge back to later
series = pd.concat([series1, series2, series3])
# construct a dataframe from the series and give the column a name
df1 = pd.DataFrame({'series':series})
# perform an outer merge on both df's indices
df.merge(df1, left_index=True, right_index=True, how='outer')
Out[96]:
a b series
0 a aa a-aaa
0 a aa a-ccc
1 b bb bb-bbb
2 c cc NaN
3 d dd NaN
</code></pre>
|
python|string|pandas|series
| 2
|
377,417
| 26,147,180
|
Convert row to column header for Pandas DataFrame,
|
<p>The data I have to work with is a bit messy. It has header names inside of its data. How can I choose a row from an existing pandas dataframe and make it (rename it to) a column header?</p>
<p>I want to do something like:</p>
<pre><code>header = df[df['old_header_name1'] == 'new_header_name1']
df.columns = header
</code></pre>
|
<pre><code>In [21]: df = pd.DataFrame([(1,2,3), ('foo','bar','baz'), (4,5,6)])
In [22]: df
Out[22]:
0 1 2
0 1 2 3
1 foo bar baz
2 4 5 6
</code></pre>
<p>Set the column labels to equal the values in the 2nd row (index location 1):</p>
<pre><code>In [23]: df.columns = df.iloc[1]
</code></pre>
<p>If the index has unique labels, you can drop the 2nd row using:</p>
<pre><code>In [24]: df.drop(df.index[1])
Out[24]:
1 foo bar baz
0 1 2 3
2 4 5 6
</code></pre>
<p>If the index is not unique, you could use:</p>
<pre><code>In [133]: df.iloc[pd.RangeIndex(len(df)).drop(1)]
Out[133]:
1 foo bar baz
0 1 2 3
2 4 5 6
</code></pre>
<p>Using <code>df.drop(df.index[1])</code> removes <em>all</em> rows with the same label as the second row. Because non-unique indexes can lead to stumbling blocks (or potential bugs) like this, it's often better to take care that the index is unique (even though Pandas does not require it).</p>
|
python|pandas|rename|dataframe
| 271
|
377,418
| 25,982,638
|
H5py: fast way to save list of list of numpy.ndarray?
|
<p>How can I save and read efficiently a list of list of numpy.ndarray, with h5py? E.g. I want to save/read:</p>
<pre><code>import numpy as np

Y = np.arange(3**3).reshape(3, 3, 3)
X = [[Y,Y],[Y,Y,Y],[Y]]
</code></pre>
<p>I am looking for the most efficient (no double loops etc) solution.</p>
|
<p>I'm going to assume every Y is the same type, e.g., int32 or string. Generally, the most efficient way to handle this will be to emit everything as a single flattened dataset (<code>flat</code> below), whose total length and per-list offsets are:</p>
<pre><code>import numpy as np

lengths = list(map(len, X))
totalLen = sum(lengths)
offsets = np.concatenate(([0], np.cumsum(lengths)))  # leading 0 so list i starts at offsets[i]
</code></pre>
<p>You can stick the offsets into the hdf5 file as well and load it into memory at the start. Then getting list i is just:</p>
<pre><code>flat[offsets[i]:offsets[i + 1]]  # Python slices exclude the endpoint
</code></pre>
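<p>A concrete end-to-end sketch with h5py (assuming <code>X</code> from the question, where every array shares a dtype and trailing shape):</p>
<pre><code>import h5py
import numpy as np

flat = np.concatenate([np.concatenate(sub) for sub in X])  # stack everything along axis 0
lengths = [sum(len(a) for a in sub) for sub in X]          # rows contributed per sublist
offsets = np.concatenate(([0], np.cumsum(lengths)))

with h5py.File('data.h5', 'w') as f:
    f['flat'] = flat
    f['offsets'] = offsets

with h5py.File('data.h5', 'r') as f:
    offs = f['offsets'][:]
    i = 1
    sub = f['flat'][offs[i]:offs[i + 1]]  # all arrays of list i, stacked
</code></pre>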
|
python|numpy|h5py
| 1
|
377,419
| 26,188,131
|
Prepend values to Panda's dataframe based on index level of another dataframe
|
<p>Below I have two dataframes. The first dataframe (d1) has a 'Date' index, and the 2nd dataframe (d2) has a 'Date' and 'Name' index.<br>
You'll notice that d1 starts at 2014-04-30 and d2 starts at 2014-01-31.</p>
<p>d1:</p>
<pre><code> Value
Date
2014-04-30 1
2014-05-31 2
2014-06-30 3
2014-07-31 4
2014-08-31 5
2014-09-30 6
2014-10-31 7
</code></pre>
<p>d2: </p>
<pre><code> Value
Date Name
2014-01-31 n1 5
2014-02-30 n1 6
2014-03-30 n1 7
2014-04-30 n1 8
2014-05-31 n2 9
2014-06-30 n2 3
2014-07-31 n2 4
2014-08-31 n2 5
2014-09-30 n2 6
2014-10-31 n2 7
</code></pre>
<p>What I want to do is to prepend the earlier dates from d2, but use the first value from the d1 to populate the value rows of the prepended rows.</p>
<p>The result should look like this:</p>
<pre><code> Value
Date
2014-01-31 1
2014-02-30 1
2014-03-30 1
2014-04-30 1
2014-05-31 2
2014-06-30 3
2014-07-31 4
2014-08-31 5
2014-09-30 6
2014-10-31 7
</code></pre>
<p>What's the most efficient or easiest way to do this using <code>pandas</code>?</p>
|
<p>This is a direct formulation of your problem, and it is quite fast already:</p>
<pre><code>In [126]: def direct(d1, d2):
dates2 = d2.index.get_level_values('Date')
dates1 = d1.index
return d1.reindex(dates2[dates2 < min(dates1)].append(dates1), method='bfill')
.....:
In [127]: direct(d1, d2)
Out[127]:
Value
Date
2014-01-31 1
2014-02-28 1
2014-03-30 1
2014-04-30 1
2014-05-31 2
2014-06-30 3
2014-07-31 4
2014-08-31 5
2014-09-30 6
2014-10-31 7
In [128]: %timeit direct(d1, d2)
1000 loops, best of 3: 362 µs per loop
</code></pre>
<p>If you are willing to sacrifice some readability for performance, you could compare dates by their internal representation (integers are faster) and do the "backfilling" manually:</p>
<pre><code>In [129]: def fast(d1, d2):
dates2 = d2.index.get_level_values('Date')
dates1 = d1.index
new_dates = dates2[dates2.asi8 < min(dates1.asi8)]
new_index = new_dates.append(dates1)
new_values = np.concatenate((np.repeat(d1.values[:1], len(new_dates), axis=0), d1.values))
return pd.DataFrame(new_values, index=new_index, columns=d1.columns, copy=False)
.....:
In [130]: %timeit fast(d1, d2)
1000 loops, best of 3: 213 µs per loop
</code></pre>
|
python|numpy|pandas
| 1
|
377,420
| 25,967,580
|
how to convert series integer to datetime in pandas
|
<p>I want to convert an integer-type date to datetime.</p>
<p>ex) i: 20130601000011 (2013-06-01 00:00:11)</p>
<p>I don't know exactly how to use pd.to_datetime.</p>
<p>Please, any advice?</p>
<p>Thanks.</p>
<p>PS: my script is below.</p>
<pre><code>rent_date_raw = pd.Series(1, rent['RENT_DATE'])
return_date_raw = pd.Series(1, rent['RETURN_DATE'])
rent_date = pd.Series([pd.to_datetime(date)
for date in rent_date_raw])
daily_rent_ts = rent_date.resample('D', how='count')
monthly_rent_ts = rent_date.resample('M', how='count')
</code></pre>
|
<p>Pandas seems to deal with your format fine as long as you convert to string first:</p>
<pre><code>import pandas as pd
eg_date = 20130601000011
pd.to_datetime(str(eg_date))
Out[4]: Timestamp('2013-06-01 00:00:11')
</code></pre>
<p>Your data at the moment is really more of a string than an integer, since it doesn't really represent a single number. Different subparts of the string reflect different aspects of the time.</p>
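<p>For a whole column (a sketch, assuming <code>rent['RENT_DATE']</code> holds such integers), convert to string and give an explicit format:</p>
<pre><code>rent_date = pd.to_datetime(rent['RENT_DATE'].astype(str), format='%Y%m%d%H%M%S')
</code></pre>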
|
python|pandas
| 3
|
377,421
| 26,371,720
|
How can I keep the intersection of a panel between dates using Pandas?
|
<p>I've got a panel of price data that has multiple IDs for each date. </p>
<pre><code>Date ID price
2012-06-08 1234 6.09
2345 5.08
3456 1.23
2012-06-09 1234 6.10
3456 1.25
</code></pre>
<p>I need to keep only the rows where the IDs are the same for consecutive dates. I'm trying to calculate returns for a portfolio that changes every month and the only coherent way to do it is take the intersection of securities for consecutive dates and take the difference of the sum of those prices. I tried to filter the dataframe by iterating through the dates, but it wasn't fruitful. Here's my attempt ('hol' is my original dataframe and 'dates' is a list of unique dates in 'hol'): </p>
<pre><code>newD = pd.DataFrame()
for i in range(1, len(dates)):
    common = set(hol['ID'][dates[i-1]]).intersection(set(hol['ID'][dates[i]]))
    newD = newD.append(hol[hol['ID'].isin(list(common))])
</code></pre>
<p>PLEASE HELP!</p>
|
<p>One thing you could do is exploit the <code>DataFrame.shift()</code> method in order to find the differences. If you combine this with groupby, when grouping on the IDs then you will end up results as I see that you want them. The trick is though, you need a DataFrame that has a date/ID pair of every unique date and every unique ID in order for this to work.</p>
<p>The process is as follows:</p>
<ul>
<li>Create DF with data you have</li>
<li>Create a 'balanced panel' of data from that data frame that contains every date/ID combination possible from the DF you have. This will have price values where appropriate, and NA values where not.</li>
<li>Group this new dataframe on ID, use the <code>shift()</code> method inside <code>apply</code> to get the differences in the stock prices, and drop the NA rows, which is akin to keeping only those observations that have consecutive days.</li>
</ul>
<p>So I extended your data to the following:</p>
<pre><code>import pandas as pd
import datetime
from numpy import nan as NA
D = [datetime.datetime(2012, 6, 8).date(), datetime.datetime(2012, 6, 8).date(), datetime.datetime(2012, 6, 8).date(),
datetime.datetime(2012, 6, 9).date(), datetime.datetime(2012, 6, 9).date(), datetime.datetime(2012, 6, 9).date(),
datetime.datetime(2012, 6, 10).date(), datetime.datetime(2012, 6, 10).date(), datetime.datetime(2012, 6, 10).date()]
ID = [1234, 2345, 3456, 1234, 3456, 4567, 1234, 2345, 4567]
price = [6.09, 5.08, 1.23, 6.10, 1.25, 9.9, 6.0, 5.10, 10.0,]
DF = pd.DataFrame({'date' : D, 'ID' : ID, 'price' : price})
</code></pre>
<p>Then as follows:</p>
<pre><code>#Now create a balanced panel of data based on the DF
DF2 = pd.DataFrame({'date' : [date for x in xrange(len(DF.ID.unique())) for date in DF.date.unique()],
'ID' : [ID for x in xrange(len(DF.date.unique())) for ID in DF.ID.unique()]})
#set the index for both dataframes
DF = DF.set_index(['date', 'ID'])
DF2 = DF2.set_index(['date', 'ID'])
#Create a price column in DF2 that is NA where relevant observations are missing in the DF.
DF2['price'] = pd.Series([DF.loc[row, 'price'] if row in DF.index else NA for row in DF2.index], index = DF2.index)
#Sort the DF2 index
DF2 = DF2.sort_index()
#Group the data and apply a function that find the differences in price by shifting the data 1 place
DF2.groupby(level = 1, as_index = False).apply(lambda x: x.price - x.price.shift()).dropna()
</code></pre>
<p>Gives me the following output:</p>
<pre><code> date ID
0 2012-06-09 1234 0.01
2012-06-10 1234 -0.10
2 2012-06-09 3456 0.02
3 2012-06-10 4567 0.10
</code></pre>
<p>Which seems to be what you want?</p>
|
python|pandas|filter|panel|intersection
| 1
|
377,422
| 26,371,509
|
n-dimensional sliding window with Pandas or Numpy
|
<p>How do I do the R(xts) equivalent of rollapply(...., by.column=FALSE), using Numpy or Pandas? When given a dataframe, pandas rolling_apply seems only to work column by column instead of providing the option to provide a full (window-size) x (data-frame-width) matrix to the target function. </p>
<pre><code>import pandas as pd
import numpy as np
xx = pd.DataFrame(np.zeros([10, 10]))
pd.rolling_apply(xx, 5, lambda x: np.shape(x)[0])
0 1 2 3 4 5 6 7 8 9
0 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
1 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
2 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
3 NaN NaN NaN NaN NaN NaN NaN NaN NaN NaN
4 5 5 5 5 5 5 5 5 5 5
5 5 5 5 5 5 5 5 5 5 5
6 5 5 5 5 5 5 5 5 5 5
7 5 5 5 5 5 5 5 5 5 5
8 5 5 5 5 5 5 5 5 5 5
9 5 5 5 5 5 5 5 5 5 5
</code></pre>
<p>So what's happening is rolling_apply is going down each column in turn and applying a sliding 5-length window down each one of these, whereas what I want is for the sliding windows to be a 5x10 array each time, and in this case, I would get a single column vector (not 2d array) result. </p>
|
<p>I indeed cannot find a way to compute "wide" rolling application in pandas
docs, so I'd use numpy to get a "windowing" view on the array and apply a ufunc
to it. Here's an example:</p>
<pre><code>In [40]: arr = np.arange(50).reshape(10, 5); arr
Out[40]:
array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39],
[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49]])
In [41]: win_size = 5
In [42]: isize = arr.itemsize; isize
Out[42]: 8
</code></pre>
<p><code>arr.itemsize</code> is 8 because default dtype is <code>np.int64</code>, you need it for the following "window" view idiom:</p>
<pre><code>In [43]: windowed = np.lib.stride_tricks.as_strided(arr,
shape=(arr.shape[0] - win_size + 1, win_size, arr.shape[1]),
strides=(arr.shape[1] * isize, arr.shape[1] * isize, isize)); windowed
Out[43]:
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24]],
[[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34]],
[[15, 16, 17, 18, 19],
[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39],
[40, 41, 42, 43, 44]],
[[25, 26, 27, 28, 29],
[30, 31, 32, 33, 34],
[35, 36, 37, 38, 39],
[40, 41, 42, 43, 44],
[45, 46, 47, 48, 49]]])
</code></pre>
<p>Strides are the number of bytes between two neighbouring elements along a given axis;
thus <code>strides=(arr.shape[1] * isize, arr.shape[1] * isize, isize)</code> means skip 5
elements when going from windowed[0] to windowed[1], and skip 5 elements when
going from windowed[0, 0] to windowed[0, 1]. Now you can call any ufunc on the
resulting array, e.g.:</p>
<pre><code>In [44]: windowed.sum(axis=(1,2))
Out[44]: array([300, 425, 550, 675, 800, 925])
</code></pre>
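<p>On newer numpy (1.20+), <code>sliding_window_view</code> builds the same kind of view without hand-computed strides; a quick sketch:</p>
<pre><code>windowed2 = np.lib.stride_tricks.sliding_window_view(arr, window_shape=5, axis=0)
windowed2.sum(axis=(1, 2))  # same totals as above
</code></pre>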
|
python|arrays|r|numpy|pandas
| 6
|
377,423
| 26,302,249
|
Split a column value to mutliple columns pandas /python
|
<p>I am new to Python/Pandas and have a data frame with two columns, one a series and another a string.
I am looking to split the contents of a column (Series) into multiple columns. I'd appreciate your input on this.
This is my current dataframe content:</p>
<pre><code> Songdetails Density
0 ["'t Hof Van Commerce", "Chance", "SORETGR12AB... 4.445323
1 ["-123min.", "Try", "SOERGVA12A6D4FEC55"] 3.854437
2 ["10_000 Maniacs", "Please Forgive Us (LP Vers... 3.579846
3 ["1200 Micrograms", "ECSTACY", "SOKYOEA12AB018... 5.503980
4 ["13 Cats", "Please Give Me Something", "SOYLO... 2.964401
5 ["16 Bit Lolitas", "Tim Likes Breaks (intermez... 5.564306
6 ["23 Skidoo", "100 Dark", "SOTACCS12AB0185B85"] 5.572990
7 ["2econd Class Citizen", "For This We'll Find ... 3.756746
8 ["2tall", "Demonstration", "SOYYQZR12A8C144F9D"] 5.472524
</code></pre>
<p>Desired output is SONG , ARTIST , SONG ID ,DENSITY i.e. split song details into columns.</p>
<p>for e.g. for the sample data</p>
<pre><code> SONG DETAILS DENSITY
8 ["2tall", "Demonstration", "SOYYQZR12A8C144F9D"] 5.472524
SONG ARTIST SONG ID DENSITY
2tall Demonstration SOYYQZR12A8C144F9D 5.472524
</code></pre>
<p>Thanks </p>
|
<p>Thank you. I inserted a column into the new data frame and was able to achieve what I needed:</p>
<pre><code>df2 = pd.DataFrame(series.apply(lambda x: pd.Series(x.split(','))))
df2.insert(3, 'Density', finaldf['Density'])
</code></pre>
|
python|pandas
| 0
|
377,424
| 26,184,977
|
urllib2.URLError when using Quandl for Python behind a proxy
|
<p>I'm posting this because I tried searching for the answer myself and I was not able to find a solution. I was eventually able to figure out a way to get this to work & I hope this helps someone else in the future.</p>
<h2>Scenario:</h2>
<p>In Windows XP, I'm using Python with Pandas & Quandl to get data for a US Equity security using the following line of code:</p>
<pre><code>bars = Quandl.get("GOOG/NYSE_SPY", collapse="daily")
</code></pre>
<p>Unfortunately, I was getting the following error:</p>
<pre><code>urllib2.URLError: <urlopen error [Errno 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>
</code></pre>
<hr>
<h2>Solution:</h2>
<p>I recognized that this was an issue with trying to contact a server without properly targeting my network's proxy server. Since I was not able to set the system variable for HTTP_PROXY, I added the following line which corrected the issue:</p>
<pre><code>import os
os.environ['HTTP_PROXY']="10.11.123.456:8080"
</code></pre>
<p>Thanks - I'm interested to hear about any improvements to this solution or other suggestions.</p>
|
<p>You can set your <strong>user</strong> environment variable <code>HTTP_PROXY</code> if you can't or won't set the system environment variable:</p>
<pre><code>set HTTP_PROXY=10.11.123.456:8080
python yourscript.py
</code></pre>
<p>and to permanently set it (using setx from <a href="https://www.microsoft.com/en-us/download/details.aspx?id=18546" rel="nofollow" title="Windows XP Service Pack 2 Support Tools">Windows XP Service Pack 2 Support Tools</a>):</p>
<pre><code>setx HTTP_PROXY "10.11.123.456:8080"
python yourscript.py
</code></pre>
<p>Other ways to get this environment variable set include registry entries, or putting <code>os.environ["HTTP_PROXY"] = "..."</code> in <code>sitecustomize.py</code>.</p>
|
python|pandas|proxy|quantitative-finance|quandl
| 0
|
377,425
| 26,002,474
|
pandas, name of the column after a group by function
|
<p>I have a simple Pandas Dataframe named purchase_cat_df:</p>
<pre><code> email cat
0 email1@gmail.com Mobiles & Tablets
1 email2@gmail.com Mobiles & Tablets
2 email1@gmail.com Mobiles & Tablets
3 email3@gmail.com Mobiles & Tablets
4 email3@gmail.com Home & Living
5 email1@gmail.com Home & Living
</code></pre>
<p>I'm grouping by the 'email' and and putting 'cat' in a list like this:</p>
<pre><code>test = purchase_cat_df.groupby('email').apply(lambda x: list(x.cat))
</code></pre>
<p>but then my DataFrame test is:</p>
<pre><code>email
email1@gmail.com [Mobiles & Tablets, Mobiles & Tablets, Home & ...
email2@gmail.com [Mobiles & Tablets]
email3@gmail.com [Mobiles & Tablets, Home & Living]
</code></pre>
<p>I lost the index and the name; how can I name the second column?</p>
|
<p>If you want to keep your original index, you were probably looking for something like this:</p>
<pre><code>purchase_cat_df.groupby('email', as_index=False)
</code></pre>
<p>as_index=False keeps the original index. You can then continue to address the column by its name.</p>
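<p>Another option (a sketch) is to select the column and name the result while resetting the index:</p>
<pre><code>test = (purchase_cat_df.groupby('email')['cat']
        .apply(list)
        .reset_index(name='cats'))
</code></pre>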
|
python|pandas|group-by
| 4
|
377,426
| 66,801,401
|
how can i work with my GPU in python Visual Studio Code
|
<p>Hello, I know that the key to analyzing data and working with artificial intelligence is to use the GPU and not the CPU. The problem is that I don't know how to use it with Python in Visual Studio Code. I use Ubuntu and already have the NVIDIA drivers installed.</p>
|
<p>You have to use libraries that are designed to work with GPUs.</p>
<p>You can use Numba to compile Python code directly to binary with CUDA/ROC support, but I don't really know how limiting it is.</p>
<p>Another way is to call APIs that are designed for parallel computing such as OpenCL, there is PyOpenCL binding for this.</p>
<p>A bit limiting, but sometimes valid way - OpenGL/DirectX compute shaders, they are extremely easy, but not so fast if you need to transfer data back and forth in small batches.</p>
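<p>As a minimal sketch of the Numba route (assuming Numba and a CUDA-capable GPU with drivers are installed):</p>
<pre><code>from numba import cuda
import numpy as np

@cuda.jit
def add_one(arr):
    i = cuda.grid(1)               # absolute index of this thread
    if i < arr.size:
        arr[i] += 1.0

data = np.zeros(1024, dtype=np.float32)
d_data = cuda.to_device(data)      # copy the array to GPU memory
add_one[4, 256](d_data)            # launch 4 blocks of 256 threads
print(d_data.copy_to_host()[:5])   # copy back and inspect
</code></pre>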
|
python|python-3.x|pandas|dataframe|visual-studio-code
| 1
|
377,427
| 66,851,208
|
Passing multiple parameters with apply in Pandas
|
<p>How do I pass multiple parameters with apply in Pandas?</p>
<p><code>do_something</code> is a function:</p>
<pre><code>def do_something(x,test="testFoo")
</code></pre>
<p>This can be used with <code>dataframe.apply</code></p>
<pre><code>df2.apply(do_something, test="testBar",axis=1)
</code></pre>
<p>I want to pass another parameter (df) like this:</p>
<pre><code>def do_something(x,test="testFoo",df)
</code></pre>
<p>How do I now call apply with this <code>df</code> parameter similar to this:</p>
<pre><code>df2.apply(do_something, test="testBar",df=df,axis=1)
</code></pre>
|
<p>For me, a small change works: make <code>df=df</code> a default argument in the <code>def</code>:</p>
<pre><code>df = pd.DataFrame({'a':[1,2]})
df2 = pd.DataFrame({'g':[50,40]})

def do_something(x, test="testFoo", df=df):
    print(df)

df2.apply(do_something, test="testBar", df=df, axis=1)
#    a
# 0  1
# 1  2
#    a
# 0  1
# 1  2
</code></pre>
<p>EDIT:</p>
<pre><code>df2 = pd.DataFrame({'g':[50,40]})
def get_df():
return pd.DataFrame({'a':[1,2]})
def do_something(x,df,test="testFoo"):
print (df)
df2.apply(do_something, test="testBar",df=get_df(),axis=1)
</code></pre>
|
pandas
| 1
|
377,428
| 66,879,986
|
TensorFlow 2 Quantization Aware Training (QAT) with tf.GradientTape
|
<p>Can anyone point to references where one can learn how to perform Quantization Aware Training (QAT) with <code>tf.GradientTape</code> on TensorFlow 2?</p>
<p>I only see this done with the tf.keras API. I do not use <code>tf.keras</code>; I always build customized training with <code>tf.GradientTape</code>, which provides more control over the training process. I now need to quantize a model, but I only see references on how to do it using the <code>tf.keras</code> API.</p>
|
<p>In the official examples <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training_example" rel="nofollow noreferrer">here</a>, they showed QAT training with <code>model.fit</code>. Here is a demonstration of <strong>Quantization Aware Training</strong> using <code>tf.GradientTape()</code>. But for complete reference, let's do both here.</p>
<hr />
<p>Base model training. This is directly from the <a href="https://www.tensorflow.org/model_optimization/guide/quantization/training_example" rel="nofollow noreferrer">official doc</a>. For more details, please check there.</p>
<pre><code>import os
import tensorflow as tf
from tensorflow import keras
import tensorflow_model_optimization as tfmot
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.summary()
model.fit(
train_images,
train_labels,
epochs=1,
validation_split=0.1,
)
</code></pre>
<pre><code>10ms/step - loss: 0.5411 - accuracy: 0.8507 - val_loss: 0.1142 - val_accuracy: 0.9705
<tensorflow.python.keras.callbacks.History at 0x7f9ee970ab90>
</code></pre>
<h3>QAT <code>.fit</code>.</h3>
<p>Now, performing <strong>QAT</strong> over the base model.</p>
<pre><code># -----------------------
# ------------- Quantization Aware Training -------------
import tensorflow_model_optimization as tfmot
quantize_model = tfmot.quantization.keras.quantize_model
# q_aware stands for for quantization aware.
q_aware_model = quantize_model(model)
# `quantize_model` requires a recompile.
q_aware_model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
q_aware_model.summary()
train_images_subset = train_images[0:1000]
train_labels_subset = train_labels[0:1000]
q_aware_model.fit(train_images_subset, train_labels_subset,
batch_size=500, epochs=1, validation_split=0.1)
356ms/step - loss: 0.1431 - accuracy: 0.9629 - val_loss: 0.1626 - val_accuracy: 0.9500
<tensorflow.python.keras.callbacks.History at 0x7f9edf0aef90>
</code></pre>
<p>Checking performance</p>
<pre><code>_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
_, q_aware_model_accuracy = q_aware_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Quant test accuracy:', q_aware_model_accuracy)
Baseline test accuracy: 0.9660999774932861
Quant test accuracy: 0.9660000205039978
</code></pre>
<hr />
<h3>QAT <code>tf.GradientTape()</code>.</h3>
<p>Here is the <strong>QAT</strong> training part on the base model. Note we can also perform custom training over the base model.</p>
<pre><code>batch_size = 500
train_dataset = tf.data.Dataset.from_tensor_slices((train_images_subset,
train_labels_subset))
train_dataset = train_dataset.batch(batch_size=batch_size,
drop_remainder=False)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
for epoch in range(1):
for x, y in train_dataset:
with tf.GradientTape() as tape:
preds = q_aware_model(x, training=True)
loss = loss_fn(y, preds)
grads = tape.gradient(loss, q_aware_model.trainable_variables)
optimizer.apply_gradients(zip(grads, q_aware_model.trainable_variables))
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
_, q_aware_model_accuracy = q_aware_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Quant test accuracy:', q_aware_model_accuracy)
</code></pre>
<pre><code>Baseline test accuracy: 0.9660999774932861
Quant test accuracy: 0.9645000100135803
</code></pre>
|
tensorflow|keras|quantization
| 2
|
377,429
| 67,174,506
|
Pandas: Duplicated level name: <Column Name>, assigned to level 1, is already used for level 0."
|
<pre><code>Item AddDate COUNT(Number)
Item 3 2021-01-05 111
Item 3 2021-01-06 223
Item 3 2021-01-07 44
Item 3 2021-01-26 431
Item 3 2021-01-25 12
Item 3 2020-12-25 43
Item 1 2021-01-19 53
Item 1 2021-01-18 12
Item 2 2021-04-06 15
Item 2 2020-11-30 132
</code></pre>
<p>I have a Pandas dataframe that looks like above (type is <code><class 'pandas.core.frame.DataFrame'></code>)</p>
<p>I tried to generate a pivot_table by doing</p>
<pre><code>df = df.pivot_table(
index='AddDate',
columns=['AddDate', 'Item'],
values='COUNT(Number)',
fill_value=0,
aggfunc=sum)
</code></pre>
<p>This gives me <code>Duplicated level name: \"AddDate\", assigned to level 1, is already used for level 0."</code></p>
<p>I tried changing the <code>AddDate</code> column name to something else, but it didn't fix the issue.
Any help?</p>
|
<p>It seems the parameter <code>columns=['AddDate', 'Item']</code> is wrongly assigned: <code>AddDate</code> is used both to generate the new index values (<code>index='AddDate'</code>) and to build a <code>MultiIndex</code> from the <code>AddDate, Item</code> combinations (<code>columns=['AddDate', 'Item']</code>), hence the duplicated level name.</p>
<p>I think you need only <code>columns='Item'</code>:</p>
<pre><code>df = df.pivot_table(
index='AddDate',
columns='Item',
values='COUNT(Number)',
fill_value=0,
aggfunc=sum)
</code></pre>
|
python|pandas
| 1
|
377,430
| 66,994,770
|
Calculate sum of amount in a month before a certain date
|
<p>Initial df</p>
<pre><code>d = {'salesman': ['Andy', 'Brown','Charlie'],
'training_date': ['2020-04-16','2021-03-04','2021-03-08'],
'sales_in_training_month':['0','2634','2856.5']
}
df_initial = pd.DataFrame(data=d)
df_initial
</code></pre>
<p>Expected df</p>
<pre><code>d2 = {'salesman': ['Andy', 'Brown','Charlie'],
'training_date': ['2020-04-16','2021-03-04','2021-03-08'],
'sales_in_training_month':[0,2634,2856.5],
'sales_per_day_in_training_month':[0,87.8,92.14],
'sales_in_training_month_before_training_date':[0,263.4,644.98]
}
df_post = pd.DataFrame(data=d2)
df_post
</code></pre>
<p>Explanation:
I want to calculate the sum of the sales of each salesman in the month, before his/her training date. <br>
Andy, 0 sales, so 0.<br>
Brown was trained on March 4th. In March, his sales was $2634. There are 31 days in March, so his sales per day during his training month was approximately $87.8. $87.8 multiplied by the number of days before his training day (3) results in $263.4.<br>
I am open to receiving a more efficient approach to achieve the same goal: getting the sales amount in the training month, before the training date.</p>
<p>Apart from the initial df, the table I received from the data warehouse is in the format of:</p>
<pre><code>d_0 = {'salesman': ['Andy', 'Brown','Charlie'],
'sales':['0','2634','2856.5'],
'transaction_month':['2020-04-01','2020-05-01','2020-06-01']
}
df_0 = pd.DataFrame(data=d_0)
df_0
</code></pre>
<p>Note:
I am using an approximation method as the dataset is huge, so querying daily sales will take a long time.</p>
|
<ul>
<li>Convert the dates to <code>datetime</code>.</li>
<li><code>groupby</code> "salesman", then by month.</li>
<li>The "day" field of the training date gives you the number of days before training (after subtracting 1).</li>
<li>Divide this by days in the month (a function call on the month
number) to get the proportion of sales before training.</li>
<li>Multiple that proportion by the total sales; that gives you the interpolated sales before training.</li>
</ul>
<p>Follow the business logic for why this is a useful figure, but these steps should give you the result you describe. Note that the arithmetic in you example is incorrect; $2634 / 31 gives $84.97; you divided by 30, not 31.</p>
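<p>A minimal sketch of those steps, assuming the <code>df_initial</code> layout from the question:</p>
<pre><code>import calendar
import pandas as pd

df = df_initial.copy()
df['training_date'] = pd.to_datetime(df['training_date'])
df['sales_in_training_month'] = df['sales_in_training_month'].astype(float)

# number of days in each salesman's training month
days_in_month = df['training_date'].apply(
    lambda d: calendar.monthrange(d.year, d.month)[1])
df['sales_per_day_in_training_month'] = df['sales_in_training_month'] / days_in_month
# days before the training day, times the daily rate
df['sales_in_training_month_before_training_date'] = (
    df['sales_per_day_in_training_month'] * (df['training_date'].dt.day - 1))
</code></pre>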
|
python|python-3.x|pandas
| 0
|
377,431
| 67,088,286
|
pandas drop rows based on cell content and no headers
|
<p>I'm reading a csv file with pandas that has no headers.</p>
<pre><code>df = pd.read_csv('file.csv', header=0)
</code></pre>
<p>csv file containing 1 row with several users:</p>
<pre><code>admin
user
system
sysadmin
adm
administrator
</code></pre>
<p>I need to read the file to a df or a list except for example: <code>sysadmin</code>
and save the result to the csv file</p>
<pre><code>admin
user
system
adm
administrator
</code></pre>
|
<p>Select first columns, filter by <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> and write to file:</p>
<pre><code>df = pd.read_csv('file.csv', header=0)
df[df.iloc[:, 0].ne('sysadmin')].to_csv('file.csv', index=False)
#if there is a csv header converted to a column name
#df[df['colname'].ne('sysadmin')].to_csv('file.csv', index=False)
</code></pre>
<p>If no header in csv need parameters like:</p>
<pre><code>df = pd.read_csv('file.csv', header=None)
df[df.iloc[:, 0].ne('sysadmin')].to_csv('file.csv', index=False, header=False)
</code></pre>
|
python|pandas
| 3
|
377,432
| 66,791,716
|
Pass DataFrame to parse() in spider class
|
<p>I'm making a database with products of an eShop, and I want to save all the data as a JSON file.
I was wondering how to pass values from the dataframe that is used to list the links the spider crawls.</p>
<pre><code>import scrapy
import pandas as pd
class subcategoryExtractorSpider(scrapy.Spider):
name = 'subcategorySpider'
# page to scrape
targets = pd.read_json('categories.json')
start_urls = targets["link"].values.tolist()
def parse(self, response):
#
subcategories = response.css('div.list-content.j_option_list.j_category_type')
for subcategory in subcategories.css('a'):
yield {
#'category' : category name
'subcategory': subcategory.css('a::text').get(),
'link': subcategory.css('a').attrib['href']
}
</code></pre>
<p>As you can see, I commented out 'category' in the yield; I'd like to output the category of the link I'm crawling, which is in <code>targets</code>.</p>
|
<p>If I've understood correctly, you need to pass categories from Pandas through to your final output?</p>
<p>Scrapy has this: <a href="https://docs.scrapy.org/en/latest/topics/request-response.html#passing-additional-data-to-callback-functions" rel="nofollow noreferrer">cb_kwargs</a>, which allows you to pass values through to the callback.</p>
<p>see: <a href="https://www.youtube.com/watch?v=i-zX4xQUzT8" rel="nofollow noreferrer">https://www.youtube.com/watch?v=i-zX4xQUzT8</a></p>
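<p>A minimal sketch of that approach (the <code>category</code> column name in <code>targets</code> is an assumption about your JSON):</p>
<pre><code>def start_requests(self):
    for _, row in self.targets.iterrows():
        yield scrapy.Request(row['link'], callback=self.parse,
                             cb_kwargs={'category': row['category']})

def parse(self, response, category):
    subcategories = response.css('div.list-content.j_option_list.j_category_type')
    for subcategory in subcategories.css('a'):
        yield {
            'category': category,
            'subcategory': subcategory.css('a::text').get(),
            'link': subcategory.attrib['href'],
        }
</code></pre>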
|
pandas|scrapy|web-crawler
| 0
|
377,433
| 67,082,891
|
Pythonic way of replace values in one column from a two column table
|
<p>I have a df with the origin and destination between two points and I want to convert the strings to a numerical index, and I need to have a representation to back convert it for model interpretation.</p>
<pre><code>df1 = pd.DataFrame({"Origin": ["London", "Liverpool", "Paris", "..."], "Destination": ["Liverpool", "Paris", "Liverpool", "..."]})
</code></pre>
<p>I separately created a new index on the sorted values.</p>
<pre><code>df2 = pd.DataFrame({"Location": ["Liverpool", "London", "Paris", "..."], "Idx": ["1", "2", "3", "..."]})
</code></pre>
<p>What I want to get is this:</p>
<pre><code>df3 = pd.DataFrame({"Origin": ["1", "2", "3", "..."], "Destination": ["1", "3", "1", "..."]})
</code></pre>
<p>I am sure there is a simpler way of doing this but the only two methods I can think of are to do a left join onto the Origin column by the Origin to Location and the same for destination then remove extraneous columns, or loop of every item in df1 and df2 and replace matching values. I've done the looped version and it works but it's not very fast, which is to be expected.</p>
<p>I am sure there must be an easier way to replace these values but I am drawing a complete blank.</p>
|
<p>You can use <code>.map()</code>:</p>
<pre><code>mapping = dict(zip(df2.Location, df2.Idx))
df1.Origin = df1.Origin.map(mapping)
df1.Destination = df1.Destination.map(mapping)
print(df1)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> Origin Destination
0 2 1
1 1 3
2 3 1
3 ... ...
</code></pre>
<hr />
<p>Or "bulk" <code>.replace()</code>:</p>
<pre><code>df1 = df1.replace(mapping)
print(df1)
</code></pre>
|
python|pandas
| 1
|
377,434
| 66,883,277
|
Add a column in second DataFrame based on 2 matched column in 2 DataFrame
|
<p>I have 2 DataFrames, and I want to add a column to the second DataFrame based on multiple (in my case 2) matched columns of both DataFrames.
I tried the code below, but I don't get the right answer. Can anyone help me, please?</p>
<pre><code>
result = result.merge(out[out['COL1.']!=''].drop(['month'], axis=1), on=['COL1.'], how='left').merge(out[out['month']!=''].drop(['COL1.'], axis=1), on=['month'], how='left')
</code></pre>
<p>df1:</p>
<pre><code> COL1. count month
1 Plassen 1293 4
2 Adamstuen 567 4
3 AHO. 1799 5
4 Akersgata 2418 4
</code></pre>
<p>df2 :</p>
<pre><code> station month
1 Plassen 4
2 Adamstuen 4
3 AHO. 5
4 Akersgata. 6
</code></pre>
<p>What I want is :</p>
<pre><code> station month. count
1 Plassen 4. 1293
2 Adamstuen 4. 567
3 AHO. 5. 1799
</code></pre>
|
<p>Use the <code>merge()</code> method and chain the <code>drop()</code> method to it:</p>
<pre><code>result=df2.merge(df1,right_on=['COL1.','month'],left_on=['station','month']).drop(columns=['COL1.'])
</code></pre>
<p>Now if you print <code>result</code> you will get your desired output:</p>
<pre><code> station month count
0 Plassen 4 1293
1 Adamstuen 4 567
2 AHO. 5 1799
</code></pre>
|
python|pandas|dataframe
| 1
|
377,435
| 66,877,406
|
Python Dataframe Conditional If Statement Using pd.np.where Erroring Out
|
<p>I have the following dataframe:</p>
<pre><code>count country year age_group gender type
7 Albania 2006 014 f ep
1 Albania 2007 014 f ep
3 Albania 2008 014 f ep
2 Albania 2009 014 f ep
2 Albania 2010 014 f ep
</code></pre>
<p>I'm trying to make adjustments to the "gender" column so that 'f' becomes 'female' and same for m and male.</p>
<p>I tried the following code:</p>
<pre><code>who3['gender'] = pd.np.where(who3['gender'] == 'f', "female")
</code></pre>
<p>But it gives me this error:</p>
<p><a href="https://i.stack.imgur.com/R1RLB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R1RLB.png" alt="enter image description here" /></a></p>
<p>Now when I try this code:</p>
<pre><code>who3['gender'] = pd.np.where(who3['gender'] == 'f', "female",
pd.np.where(who3['gender'] == 'm', "male"))
</code></pre>
<p>I get error below:</p>
<p><a href="https://i.stack.imgur.com/j7vG4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j7vG4.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong?</p>
|
<p>You can use also <code>.replace()</code>:</p>
<pre><code>df["gender"] = df["gender"].replace({"f": "female", "m": "male"})
print(df)
</code></pre>
<p>Prints:</p>
<pre><code> count country year age_group gender type
0 7 Albania 2006 14 female ep
1 1 Albania 2007 14 female ep
2 3 Albania 2008 14 female ep
3 2 Albania 2009 14 female ep
4 2 Albania 2010 14 female ep
</code></pre>
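<p>As for why the original code errors: <code>np.where</code> requires three arguments (condition, value if true, value if false). A sketch of the nested form, keeping unmatched values as-is (note that <code>pd.np</code> is deprecated; import NumPy directly):</p>
<pre><code>import numpy as np

who3['gender'] = np.where(who3['gender'] == 'f', 'female',
                 np.where(who3['gender'] == 'm', 'male', who3['gender']))
</code></pre>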
|
python|pandas|if-statement|conditional-statements
| 2
|
377,436
| 67,114,236
|
Inserting thousands of JSONs into a dataframe
|
<p>I'm sure this is probably a fairly simple query for most, but I'm quite new to Python and thus Pandas as well. Ultimately, I have thousands of JSON files in a folder that I would like to get into a dataframe. This is the code I'm currently using but, unfortunately, it is brutally slow. My guess is that opening and processing every file before moving on to the next is causing the lag. It seems it would be better to gather all the JSON together before loading it into a dataframe, but I'm not sure.</p>
<pre><code>df = p.DataFrame()
for f in glob.glob("log_test/*.json"):
with io.open(f,"rb") as infile:
binstr = infile.read()
objlist = loads(binstr.decode("utf-8"))
temp = p.json_normalize(objlist['data'])
df = df.append(temp, ignore_index=True)
</code></pre>
<p>I have tried adding the JSON file values into a dictionary but I can't seem to correctly normalize as I only need the data at the data level within the JSON.</p>
<pre><code>resultdict = {}
for f in glob.glob("log_test/*.json"):
with io.open(f,"rb") as infile:
resultdict[f] = json.load(infile)
</code></pre>
<p>Here is structure of the sample data:</p>
<pre><code>{ "type" : "MTA", "data" : [{"foo":"bar","foo":"bar","foo":"bar"}]}
</code></pre>
|
<p>You can use a <code>ThreadPoolExecutor</code> to speed things up:</p>
<pre class="lang-py prettyprint-override"><code>from concurrent.futures import ThreadPoolExecutor
def read_file(filename):
with open(filename) as f:
x = json.load(f)
return pd.json_normalize(x['data'])
with ThreadPoolExecutor() as ex:
frames = list(ex.map(read_file, glob.glob('log_test/*.json')))
df = pd.concat(frames, ignore_index=True)
</code></pre>
<p>Example setup:</p>
<pre class="lang-py prettyprint-override"><code>for i in range(1000):
d = {
'type': 'MTA',
'data': pd.DataFrame(
np.random.randint(0, 10, (50, 10)),
columns=list('abcdefghij')).to_dict(orient='records')
}
with open(f'tests/foo-{i:03d}.json', 'w') as f:
json.dump(d, f)
</code></pre>
<p>Then timing the above shows roughly 1.85s to read and concatenate the 1000 files.</p>
<p><strong>alternatively</strong>, you can just merge the lists and then do a single <code>pd.json_normalize()</code>:</p>
<pre class="lang-py prettyprint-override"><code>def read_file(filename):
with open(filename) as f:
return json.load(f)['data']
with ThreadPoolExecutor() as ex:
merged_data = [e for lst in ex.map(read_file, glob.glob('tests/*.json')) for e in lst]
df2 = pd.json_normalize(merged_data)
</code></pre>
<p>This goes about twice as fast on the example data above (859 ms). But it assumes that your JSON files are each a list of dicts.</p>
|
python|json|pandas
| 0
|
377,437
| 67,003,865
|
Pandas reading tall data into a DataFrame
|
<p>I have a text file which consists of tall data. I want to iterate through each line within the text file and create a Dataframe.</p>
<p>The text file looks like this, note that the same fields don't exist for all Users (e.g some might have an email field some might not), Also note that each User is separated by[User]:</p>
<pre><code>[User]
Field=Data
employeeNo=123
last_name=Toole
first_name=Michael
language=english
department=Marketing
role=Marketing Lead
[User]
employeeNo=456
last_name= Ronaldo
first_name=Juan
language=Spanish
email=juan.ronaldo@sms.ie
department=Data Science
role=Team Lead
Location=Spain
[User]
employeeNo=998
last_name=Lee
first_name=Damian
language=english
email=damian.lee@email.com
[User]
</code></pre>
<p>My issue is as follows:
My code iterates through the data but for any field that is not present for that User it iterates down through the list and takes the next piece of data relating to that field.</p>
<p><strong>For example</strong>, look at the output below (click on the link): the first user does not have an email associated with him, so the code assigns the email of the second user in the list. What I want instead is to return NaN/N/A/blank if no information is available.</p>
<p><a href="https://i.stack.imgur.com/Vcpnr.png" rel="nofollow noreferrer">Click here to view DataFrame</a></p>
<pre><code>## Import Libraries
import pandas as pd
import numpy as np
from pandas import DataFrame
## Import Data
## Set column names so that no lines in the text file are missed
col_names = ['Field',
'Data']
## If you have been sent this script you need to change the file path below, change it to where you have the .txt file saved
textFile = pd.read_csv(r'Desktop\SampleData.txt', delimiter="=", engine='python', names=col_names)
## Get a list of the unique IDs
new_cols = pd.unique(textFile['Field'])
userListing_DF = pd.DataFrame()
## Create a for loop to iterate through the first column and get the unique columns, then concatenate those unique values with data
for col in new_cols:
tmp = textFile[textFile['Field'] == col]
tmp.reset_index(inplace=True)
userListing_DF = pd.concat([userListing_DF, tmp['Data']], axis=1)
userListing_DF.columns = new_cols
</code></pre>
|
<p>Read in the single long column, and then form a group indicator by seeing where the value is '[User]'. Then separate the column labels and values, with a <code>str.split</code> and join back to your DataFrame. Finally pivot to your desired shape.</p>
<pre><code>df = pd.read_csv('test.txt', sep='\n', header=None)
df['Group'] = df[0].eq('[User]').cumsum()
df = df[df[0].ne('[User]')] # No longer need these rows
df = pd.concat([df, df[0].str.split('=', expand=True).rename(columns={0: 'col', 1: 'val'})],
axis=1)
df = df.pivot(index='Group', columns='col', values='val').rename_axis(columns=None)
</code></pre>
<hr />
<pre><code> Field Location department email employeeNo first_name language last_name role
Group
1 Data NaN Marketing NaN 123 Michael english Toole Marketing Lead
2 NaN Spain Data Science juan.ronaldo@sms.ie 456 Juan Spanish Ronaldo Team Lead
3 NaN NaN NaN damian.lee@email.com 998 Damian english Lee NaN
</code></pre>
|
python|pandas|dataframe
| 0
|
377,438
| 67,175,670
|
how to split values in columns using dataframe?
|
<p>I have a dataframe that consists of 3 columns, where one of these columns can include 2 values separated with <strong>-</strong> in one record.</p>
<p>I want another column layout that includes all these values <strong>but with one value per record</strong>.</p>
<p>For this I created a function but when i run the code it crash and display the below error:</p>
<pre><code>KeyError: "['loc1-loc2'] not found in axis"
Traceback:
File "f:\aienv\lib\site-packages\streamlit\script_runner.py", line 333, in _run_script
exec(code, module.__dict__)
File "F:\AIenv\streamlit\app.py", line 346, in <module>
df[y] = df[y].apply(splitting)
File "f:\aienv\lib\site-packages\pandas\core\series.py", line 4200, in apply
mapped = lib.map_infer(values, f, convert=convert_dtype)
File "pandas\_libs\lib.pyx", line 2401, in pandas._libs.lib.map_infer
File "F:\AIenv\streamlit\app.py", line 295, in splitting
df1=df.drop(record,axis=1).join(record.str.split("-",expand=True).stack().reset_index(level=1,drop=True).rename("spltting"))
File "f:\aienv\lib\site-packages\pandas\core\frame.py", line 4169, in drop
errors=errors,
File "f:\aienv\lib\site-packages\pandas\core\generic.py", line 3884, in drop
obj = obj._drop_axis(labels, axis, level=level, errors=errors)
File "f:\aienv\lib\site-packages\pandas\core\generic.py", line 3918, in _drop_axis
new_axis = axis.drop(labels, errors=errors)
File "f:\aienv\lib\site-packages\pandas\core\indexes\base.py", line 5278, in drop
raise KeyError(f"{labels[mask]} not found in axis")
</code></pre>
<p>Code:</p>
<pre><code>import numpy as np
import pandas as pd
df =pd.DataFrame({
"source_number": [
[11199,11328,11287,32345,12342,1232,13456,123244,13456],
"location":
["loc2","loc1-loc3","loc3","loc1","loc2-loc1","loc2","loc3-loc2","loc2","loc1"],
"category":
["cat1","cat2","cat1","cat3","cat3","cat3","cat2","cat3","cat2"],
})
def splitting(record):
df1=df.drop(record,axis=1).join(record.str.split("-",expand=True).stack().reset_index(level=1,drop=True).rename("spltting"))
return df1
for y in df.columns:
df[y] = df[y].apply(splitting)
</code></pre>
|
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a> with split column passed to function:</p>
<pre><code>def splitting(df, r):
df[r] = df[r].str.split("-")
return df.explode(r)
df = splitting(df, 'location')
print (df)
source_number location category
0 11199 loc2 cat1
1 11328 loc1 cat2
1 11328 loc3 cat2
2 11287 loc3 cat1
3 32345 loc1 cat3
4 12342 loc2 cat3
4 12342 loc1 cat3
5 1232 loc2 cat3
6 13456 loc3 cat2
6 13456 loc2 cat2
7 123244 loc2 cat3
8 13456 loc1 cat2
</code></pre>
|
python|pandas|function
| 3
|
377,439
| 67,172,936
|
iterating a numpy matrix, and assign its rows with information from other dataframe and numpy array
|
<p>I have a matrix, e.g., defined as <code>x_matrix = np.zeros((200, 16))</code>. Iterating over the rows, I need to fill each row of this matrix with two component vectors: <code>a1</code> is an array with 10 elements, and <code>a2</code> is the corresponding row of a pandas dataframe <code>y_dataframe</code>, which has shape <code>(200, 6)</code>.</p>
<p>I can iterate the matrix as follows. But I also need the row number of x_matrix to retreive the corresponding row in the y_dataframe. Are there other ways to iterate the matrix rows, and compose its rows with different component vectors described as above.</p>
<pre><code>for row in x_matrix
</code></pre>
|
<p>You can do this without iteration if you wish using <code>np.repeat</code> and <code>np.hstack</code>:</p>
<pre class="lang-py prettyprint-override"><code># assuming `a1` is shaped (10,) i.e. 1D array
a1_repeated = np.repeat(a1[np.newaxis, :], 200, axis=0)
x_matrix = np.hstack((a1_repeated, y_dataframe))
</code></pre>
<p>where we first convert <code>a1</code> into a row vector of shape <code>(1, 10)</code> via <code>[np.newaxis, :]</code>, then <code>repeat</code> it <code>200</code> times row-wise (<code>axis=0</code>). Lastly we <code>h</code>orizontally <code>stack</code> this <code>(200, 10)</code> shaped <code>a1_repeated</code> and <code>y_dataframe</code> to get an array of shape <code>(200, 16)</code>.</p>
<p>But if you want to iterate, <code>enumerate</code> gives you index you are at:</p>
<pre class="lang-py prettyprint-override"><code>for row_number, row in enumerate(x_matrix):
x_matrix[row_number] = [*a1, *y_dataframe.iloc[row_number]]
</code></pre>
<p>where <code>y_dataframe.iloc[row_number]</code> is equal to <code>a2</code> you mention i.e. a row of dataframe.</p>
|
python|pandas|numpy|scipy
| 0
|
377,440
| 67,174,341
|
Keras Lambda layer, how to use multiple arguments
|
<p>I have this function:</p>
<pre><code>def sampling(x):
zeros = x*0
samples = tf.random.categorical(tf.math.log(x), 1)
samples = tf.squeeze(tf.one_hot(samples, depth=2), axis=1)
return zeros+samples
</code></pre>
<p>That I call from this layer:</p>
<pre><code>x = layers.Lambda(sampling, name="lambda")(x)
</code></pre>
<p>But I need to change the <em>depth</em> variable in the <em>sampling</em> function, so I would need something like this:</p>
<pre><code>def sampling(x, depth):
</code></pre>
<p>But, how can I make it work with the Lambda layer ?</p>
<p>Thanks a lot</p>
|
<p>Use a lambda function inside the Lambda layer...</p>
<pre><code>def sampling(x, depth):
zeros = x*0
samples = tf.random.categorical(tf.math.log(x), 1)
samples = tf.squeeze(tf.one_hot(samples, depth=depth), axis=1)
return zeros+samples
</code></pre>
<p>usage:</p>
<pre><code>Lambda(lambda t: sampling(t, depth=3), name="lambda")(x)
</code></pre>
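<p><code>Lambda</code> also accepts an <code>arguments</code> dict of keyword arguments that are passed to the wrapped function, which avoids the inner lambda:</p>
<pre><code>x = layers.Lambda(sampling, arguments={'depth': 3}, name="lambda")(x)
</code></pre>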
|
python|tensorflow|machine-learning|keras|deep-learning
| 3
|
377,441
| 66,949,800
|
ImageDataGenerator flow_from_directory with grandchild folder
|
<p>I have a k-fold train dataset but its structure has a grandchild folder for ex:</p>
<pre><code>/monkey
/ howler monkey
- img1
- img2
/ japanese macaque
- img1
- img2
/dog
/ bulldog
- img1
- img2
/ Rottweiler
- img1
- img2
</code></pre>
<p>In this situation when I use <code>ImageDataGenerator</code> <code>flow_from_directory</code>. Found 8 img exactly but the class has 2, not 4. How can I get 4 classes?</p>
|
<p>I had this question for a long time and couldn't find a direct answer using <code>.flow_from_directory</code>. What I did instead was use <code>.flow_from_dataframe</code>. First, I just created a dataframe with the image paths and their corresponding labels (in your case howler monkey, japanese macaque, etc.). You don't actually load the images at any point when making this dataframe.</p>
<p>it would go something like this:</p>
<pre><code>images_paths_label = []
for root_class in os.listdir(root_folder):
    temp_class = os.path.join(root_folder, root_class)
    for subclass in os.listdir(temp_class):  # 'class' is a reserved word in Python
        temp_subclass = os.path.join(temp_class, subclass)
        for image in os.listdir(temp_subclass):
            temp_img_path = os.path.join(temp_subclass, image)
            images_paths_label.append([temp_img_path, subclass])

df = pd.DataFrame(images_paths_label, columns = ['image_path', 'label'])

# Now the flow_from_dataframe part
generator = ImageDataGenerator(validation_split = 0.2)
train_generator = generator.flow_from_dataframe(df, directory = None, x_col = 'image_path', y_col = 'label', seed = 14, ...)
</code></pre>
<ul>
<li>Check the indentation of the code snippet before using it; I just typed it here on Stack Overflow.</li>
</ul>
<p>You pass <code>directory = None</code> because you are putting an absolute path in the <code>image_path</code> column of the dataframe. A seed is specified because <code>shuffle=True</code> by default, and it should be true because your dataframe samples are ordered by class. Setting the seed here lets you be sure that the validation split will remain the same.</p>
<p>This should give you an overall idea of how to overcome this problem while still using a generator. Tell me if you find any problem.</p>
|
python|tensorflow|machine-learning|keras|dataset
| 0
|
377,442
| 67,140,380
|
How to list JSON non-list items together with list items with pandas.json_normalize with Python?
|
<pre><code>[{
"builtin_name": "custom_template",
"fields": [{
"id": 10012,
"field_type": "OBJECT_SET",
"tooltip_text": "",
"name_plural": "",
"name_singular": "reference",
"backref_name": "reference",
"backref_tooltip_text": "",
"allow_multiple": False,
"allowed_otypes": [
"schema",
"table",
"attribute",
"user",
"groupprofile",
"groupprofile"
],
"options": None,
"builtin_name": None
}, {
"id": 8,
"field_type": "OBJECT_SET",
"tooltip_text": None,
"name_plural": "Stewards",
"name_singular": "Steward",
"backref_name": "Steward",
"backref_tooltip_text": None,
"allow_multiple": True,
"allowed_otypes": [
"user",
"groupprofile",
"groupprofile"
],
"options": None,
"builtin_name": "steward"
}
],
"id": 16,
"title": "Custom template"
}]
</code></pre>
<p>Using this JSON object, I want to normalize it using pandas.json_normalize.</p>
<p>When I do this:</p>
<pre><code>pd.json_normalize(data, "fields", errors='ignore', record_prefix='')
</code></pre>
<p>Then I get the <code>fields</code> listed out in nice table form like this:</p>
<blockquote>
<p>id field_type tooltip_text name_plural name_singular backref_name backref_tooltip_text allow_multiple allowed_otypes options builtin_name</p>
</blockquote>
<p>(followed by data rows)</p>
<p>But I also want the outer properties, <code>id</code>, <code>title</code> and <code>builtin_name</code>, listed along with the fields,</p>
<p>So that I end up this:</p>
<blockquote>
<blockquote>
<p>id builtin_name title id field_type tooltip_text name_plural name_singular backref_name backref_tooltip_text allow_multiple allowed_otypes options builtin_name</p>
</blockquote>
</blockquote>
<p>I have tried this:</p>
<pre><code>pd.json_normalize(data, ["id", "builtin_name", "title"], "fields", errors='ignore', record_prefix='')
</code></pre>
<p>But it throws an error saying that id is not a list.</p>
<p>Also tried without the square brackets to no avail.</p>
<p>How can I get these fields <code>"id", "builtin_name", "title"</code> to list along with the other ones in each row?</p>
<p>Thanks!</p>
|
<p>I'd use <code>.json_normalize</code> on whole <code>data</code> list and <code>.explode()</code> the <code>fields</code> column. Then concat back to obtain desired DataFrame:</p>
<pre><code>df = pd.json_normalize(data, errors="ignore", record_prefix="")
df = pd.concat(
[df, df.explode("fields")["fields"].apply(pd.Series)], axis=1
).drop(columns="fields")
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> builtin_name id title id field_type tooltip_text name_plural name_singular backref_name backref_tooltip_text allow_multiple allowed_otypes options builtin_name
0 custom_template 16 Custom template 10012 OBJECT_SET reference reference False [schema, table, attribute, user, groupprofile,... None None
0 custom_template 16 Custom template 8 OBJECT_SET None Stewards Steward Steward None True [user, groupprofile, groupprofile] None steward
</code></pre>
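<p>As an alternative sketch, <code>json_normalize</code> can pull the outer properties in directly via <code>meta</code>; a <code>record_prefix</code> is needed here because <code>id</code> and <code>builtin_name</code> exist at both levels:</p>
<pre><code>df = pd.json_normalize(data, record_path="fields",
                       meta=["id", "builtin_name", "title"],
                       record_prefix="field_", errors="ignore")
</code></pre>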
|
python|pandas|json-normalize
| 1
|
377,443
| 66,933,028
|
How to use lower version of keras and tensorflow
|
<p>I'm running a code which requires keras version 1.2.0 and tensorflow version 1.1.0.
I'm using Jupyter notebook and I created an <a href="https://github.com/llSourcell/How_to_simulate_a_self_driving_car/blob/master/environments.yml" rel="nofollow noreferrer">environment</a> for all the dependencies.</p>
<p>However, I mistakenly installed both libraries again through pip command which installed the latest versions.</p>
<p>I closed the notebook, opened it again and created the environment once again so the older version of both libraries were installed again.</p>
<p>But when I run the <code>keras.__version__</code> command, it shows 2.4.3, which I do not want.</p>
<p>I also ran <code>conda remove keras --force</code> and <code>pip uninstall keras</code>, but it's still showing the latest version.</p>
<p>The code is only compatible with the older version. Please help.</p>
|
<p>It's probably because it is getting uninstalled on a different environment. Identify which python and pip executable you are using by running the following commands:</p>
<pre class="lang-sh prettyprint-override"><code>$ which pip
$ which python
</code></pre>
<p>These two commands will give out the path of the executable from which we can determine the environment. If it is different from what you were using you can try installing to the desired environment by running:</p>
<pre class="lang-sh prettyprint-override"><code>/path/to/desired/pip uninstall keras
/path/to/desired/pip install keras==1.2.0
</code></pre>
|
python|tensorflow|keras|libraries
| 0
|
377,444
| 66,921,105
|
How to edit running total column to restart with every new column value?
|
<p>I have the data frame pictured below. I need the 'Total #' column to restart every time there is a new value in the 'Item Number' column. For example, if Index 4 was the last occurrence of 104430-003 then 14 would be the last 'Total #' and it would start recounting the 'Total #' of VTHY-039 in the appropriate 'Bin Loc.'.</p>
<p>Once I figure out that part, my final step is to drop any of the remaining rows of the same 'Item Number' after the 'Total #' is equal to or greater than the 'PV Pick' #.</p>
<p><a href="https://i.stack.imgur.com/lFEf2.jpg" rel="nofollow noreferrer">Code</a></p>
|
<pre><code># running total of 'Items' that restarts for each 'Item Number'
pv['cumsum'] = pv.groupby('Item Number')['Items'].transform(pd.Series.cumsum)
pv
Item Number Bin Loc. PV Pick Items cumsum
0 104430-003 A-P28-17B 4 2 2
1 104430-003 A-P39-20B 4 4 6
2 104430-003 A-P39-20C 4 1 7
3 104430-003 A-P39-26C 4 2 9
4 104430-003 A-P40-23C 4 5 14
... ... ... ... ... ...
829 VTHY-039 A-P45-09B 1 2 36
830 VTHY-039 A-P45-13B 1 2 38
831 VTHY-039 A-P45-19B 1 2 40
832 VTHY-039 A-P45-21B 1 3 43
833 VTHY-039 A-P46-21B 1 2 45
</code></pre>
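<p>For the second step described in the question (dropping the remaining rows of an item once the running total reaches the pick quantity), a sketch assuming the columns shown above:</p>
<pre><code># keep a row only while the total *before* it is still short of the pick quantity,
# so the row where the target is first reached is kept and later rows are dropped
pv_filtered = pv[pv['cumsum'].sub(pv['Items']).lt(pv['PV Pick'])]
</code></pre>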
|
python|pandas|group-by|cumsum
| 0
|
377,445
| 66,856,539
|
Python Selenium - Scraping a Table from a Dynamic Page
|
<p>I'm completely new to Python. I want to scrape data from a html table and put it into MS Excel. The website I'm scraping from is dynamic, so I have to select options from 3 drop down boxes to build the table.</p>
<p>Please note that the code below gets me to the website and selects the options I need to build the table.</p>
<p>Please note that the url of this site does not change. It stays the same as the drop down options are selected.</p>
<p>This is what the table looks like once I select the options I need:</p>
<p><a href="https://i.stack.imgur.com/tJHTc.png" rel="nofollow noreferrer">Table</a></p>
<p>Here is a sample of the html for the table:</p>
<p><a href="https://i.stack.imgur.com/7uaeF.png" rel="nofollow noreferrer">Sample HTML of Table</a></p>
<p>My question is on how to read the table with Python and bring the header and contents of the table neatly into MS Excel. The preference would be to maintain the formatting (the font, alternating colors, etc) if possible, but that's not super important.</p>
<p>This is the code I'm using to go to the website and select the options I need from the drop down boxes:</p>
<pre><code>from selenium import webdriver
DRIVER_PATH = 'path to chrome driver'
from selenium.webdriver import chrome
from selenium.webdriver.support.select import Select
driver = webdriver.Chrome(executable_path='path to chrome driver')
#open page
driver.get('url of web page')
#Select drop down box 1 option
select = Select(driver.find_element_by_id('cboGroup'))
select.select_by_visible_text('Drop down box 1 option')
#Insert wait
import time
time.sleep(1)
#driver.implicitly_wait(10000)
#Select drop down box 2 option
select = Select(driver.find_element_by_id('cboElements'))
select.select_by_visible_text('Drop down box 2 option')
#Insert wait
import time
time.sleep(1)
#Select drop down box 3 option
import datetime
from pandas.tseries.offsets import BDay
ReportDate = datetime.datetime.today() - BDay(1)
NewReportDate = ReportDate.strftime("%m/%d/%Y")
print(NewReportDate)
select = Select(driver.find_element_by_id('cboDelDate'))
select.select_by_visible_text(NewReportDate)
#Insert wait
import time
time.sleep(1)
</code></pre>
<p>I've tried using the send keys command to copy/paste the whole page into MS Excel (Ctrl+A, Ctrl+V) but the formatting gets thrown off and it doesn't look right.</p>
<p>I've also tried using Pandas, but I haven't been able to grab the table data.</p>
|
<p>Instead of copying and pasting the content, I used the <code>find_all</code> function from the BeautifulSoup library to find the table. Then I used Pandas to create a dataframe from the table and send it to my Excel sheet.</p>
<p>Here is the code I used:</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd

html = driver.page_source
soup = BeautifulSoup(html, "html.parser")
table = soup.find_all("table")[1]
df = pd.read_html(str(table))[0]
#print(df)
new_table = pd.DataFrame(df)
new_table.to_excel("<PATH TO EXCEL SHEET>", sheet_name="<SHEET NAME>", index=False, header=False)
</code></pre>
|
python|pandas|dataframe|selenium|web-scraping
| 1
|
377,446
| 66,786,787
|
pytorch multiple branches of a model
|
<p><a href="https://i.stack.imgur.com/na6Px.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/na6Px.png" alt="enter image description here" /></a></p>
<p>Hi I'm trying to make this model using pytorch.</p>
<p>Each input consists of 20 images of size 28 x 28, which are C1 ~ Cp in the image.
Each image goes through a CNN of the same structure, but their outputs are eventually concatenated.</p>
<p>I'm currently struggling with feeding multiple inputs to their respective CNN models.
Each model in the first box with three convolutional layers will look like the code below, but I'm not quite sure how I can feed 20 different inputs to separate models of the same structure and eventually concatenate their outputs.</p>
<pre><code> self.features = nn.Sequential(
nn.Conv2d(1,10, kernel_size = 3, padding = 1),
nn.ReLU(),
nn.Conv2d(10, 14, kernel_size=3, padding=1),
nn.ReLU(),
nn.Conv2d(14, 18, kernel_size=3, padding=1),
nn.ReLU(),
nn.Flatten(),
nn.Linear(28*28*18, 256)
)
</code></pre>
<p>I've tried giving a list of inputs as the input to the forward function, but it ended up with an error and wouldn't go through.
I'll be more than happy to explain further if anything is unclear.</p>
|
<p>Assuming each path has its own weights, maybe this could be done with grouped convolutions, although the pre-fusion <code>Linear</code> can cause some trouble.</p>
<pre><code> P = 20
self.features = nn.Sequential(
nn.Conv2d(1*P,10*P, kernel_size = 3, padding = 1, groups = P ),
nn.ReLU(),
nn.Conv2d(10*P, 14*P, kernel_size=3, padding=1, groups = P),
nn.ReLU(),
nn.Conv2d(14*P, 18*P, kernel_size=3, padding=1, groups = P),
nn.ReLU(),
        nn.Conv2d(18*P, 256*P, kernel_size=28, groups = P), # not sure about this one
nn.Flatten(),
nn.Linear(256*P, 1024 )
)
</code></pre>
|
pytorch|concatenation|conv-neural-network
| 3
|
377,447
| 66,906,652
|
How to download hugging face sentiment-analysis pipeline to use it offline?
|
<p><strong>How to download hugging face sentiment-analysis pipeline to use it offline?</strong> I'm unable to use hugging face sentiment analysis pipeline without internet. How to download that pipeline?</p>
<p>The basic code for sentiment analysis using hugging face is</p>
<pre><code>from transformers import pipeline
classifier = pipeline('sentiment-analysis') #This code will download the pipeline
classifier('We are very happy to show you the Transformers library.')
</code></pre>
<p>And the output is</p>
<pre><code>[{'label': 'POSITIVE', 'score': 0.9997795224189758}]
</code></pre>
|
<p>Use the <a href="https://huggingface.co/transformers/main_classes/pipelines.html#transformers.Pipeline.save_pretrained" rel="nofollow noreferrer">save_pretrained()</a> method to save the configs, model weights and vocabulary:</p>
<pre class="lang-py prettyprint-override"><code>classifier.save_pretrained('/some/directory')
</code></pre>
<p>and load it by specifying the tokenizer and model parameter:</p>
<pre class="lang-py prettyprint-override"><code>from transformers import pipeline
c2 = pipeline(task = 'sentiment-analysis', model='/some/directory', tokenizer='/some/directory')
</code></pre>
|
deep-learning|nlp|huggingface-transformers|huggingface-tokenizers
| 3
|
377,448
| 67,132,348
|
Best way to debug or step over a sequential pytorch model
|
<p>I used to write PyTorch models with <code>nn.Module</code>, which includes <code>__init__</code> and <code>forward</code>, so that I could step over my model to check how the variable dimensions change along the network.
However, I have since realized that you can also do it with <code>nn.Sequential</code>, which only requires an <code>__init__</code>; you don't need to write a forward function, as below:</p>
<p><img src="https://i.stack.imgur.com/NBDG8.png" alt="screenshot of an example from pytorch" /></p>
<p>However, the problem is when I try to step over this network, it is not easy to check the variable any more. It just jumps to another place and back.</p>
<p>Does anyone know how to do step over in this situation?</p>
<p>P.S: I am using PyCharm.</p>
|
<p>You can iterate over the children of model like below and print sizes for debugging. This is similar to writing forward but you write a separate function instead of creating an <code>nn.Module</code> class.</p>
<pre><code>import torch
from torch import nn
model = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Conv2d(20,64,5),
nn.ReLU()
)
def print_sizes(model, input_tensor):
output = input_tensor
for m in model.children():
output = m(output)
print(m, output.shape)
return output
input_tensor = torch.rand(100, 1, 28, 28)
print_sizes(model, input_tensor)
# output:
# Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 20, 24, 24])
# ReLU() torch.Size([100, 20, 24, 24])
# Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 64, 20, 20])
# ReLU() torch.Size([100, 64, 20, 20])
# you can also nest the Sequential models like this. In this case inner Sequential will be considered as module itself.
model1 = nn.Sequential(
nn.Conv2d(1,20,5),
nn.ReLU(),
nn.Sequential(
nn.Conv2d(20,64,5),
nn.ReLU()
)
)
print_sizes(model1, input_tensor)
# output:
# Conv2d(1, 20, kernel_size=(5, 5), stride=(1, 1)) torch.Size([100, 20, 24, 24])
# ReLU() torch.Size([100, 20, 24, 24])
# Sequential(
# (0): Conv2d(20, 64, kernel_size=(5, 5), stride=(1, 1))
# (1): ReLU()
# ) torch.Size([100, 64, 20, 20])
</code></pre>
|
python|debugging|deep-learning|pycharm|pytorch
| 1
|
377,449
| 66,962,823
|
Subset pandas dataframe using function applied to a column/series
|
<p>I have a pandas dataframe <code>df</code> that I would like to subset based on the result of running <code>name</code> through a certain function <code>is_valid()</code>.</p>
<pre><code>import pandas as pd
data = [['foo', 10], ['baar', 15], ['baz', 14]]
df = pd.DataFrame(data, columns = ['name', 'age'])
df
name age
0 foo 10
1 baar 15
2 baz 14
</code></pre>
<p>The function checks if the length of the input string is 3 and returns either True or False:</p>
<pre><code>def is_valid(x):
assert isinstance(x, str)
return True if len(x) == 3 else False
</code></pre>
<p>My goal is to subset <code>df</code> where this function returns True, which would return an output of</p>
<pre><code> name age
0 foo 10
2 baz 14
</code></pre>
<p>The following syntax returns an error; what is the correct syntax for applying a function to values of a column (series) and subsetting a dataframe if the output meets a condition (in this case = True) ?</p>
<pre><code>df[is_valid(df['name'])]
</code></pre>
|
<p>Try:</p>
<pre><code>df[df['name'].str.len()==3]
</code></pre>
<p>Or use your function with <code>apply</code>, which calls <code>is_valid</code> once per element (your original attempt fails because <code>is_valid</code> expects a single string, while <code>df[is_valid(df['name'])]</code> passes the whole Series):</p>
<pre><code>df[df['name'].apply(is_valid)]
</code></pre>
|
python|pandas|dataframe|lambda|subset
| 4
|
377,450
| 66,814,443
|
How to define a weighted loss function for TF2.0+ keras CNN for image classification?
|
<p>I would like to integrate the <a href="https://www.tensorflow.org/api_docs/python/tf/nn/weighted_cross_entropy_with_logits" rel="noreferrer">weighted_cross_entropy_with_logits</a> to deal with data imbalance. I am not sure how to do it. Class 0 has 10K images, while class 1 has 500 images. Here is my code.</p>
<pre><code>model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), input_shape=(dim, dim, 3), activation='relu'),
....
tf.keras.layers.Dense(2, activation='softmax')
])
model.compile(optimizer="nadam",
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
class_weight = {0: 1.,
1: 20.}
model.fit(
train_ds,
val_ds,
epochs=epc,
verbose=1,
class_weight=class_weight)
</code></pre>
|
<p>You can simply wrap <code>tf.nn.weighted_cross_entropy_with_logits</code> inside a custom loss function.</p>
<p>Remember also that <code>tf.nn.weighted_cross_entropy_with_logits</code> expects logits, so your network must produce them and not probabilities (remove the <code>softmax</code> activation from the last layer).</p>
<p>Here a dummy example:</p>
<pre><code>X = np.random.uniform(0,1, (10,32,32,3))
y = np.random.randint(0,2, (10,))
y = tf.keras.utils.to_categorical(y)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3, 3), input_shape=(32, 32, 3), activation='relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(2) ### must be logits (remove softmax)
])
def my_loss(weight):
def weighted_cross_entropy_with_logits(labels, logits):
loss = tf.nn.weighted_cross_entropy_with_logits(
labels, logits, weight
)
return loss
return weighted_cross_entropy_with_logits
model.compile(optimizer="nadam",
loss=my_loss(weight=0.8),
metrics=['accuracy'])
model.fit(X,y, epochs=3)
</code></pre>
<p>At inference time you obtain the probabilities in this way:</p>
<pre><code>tf.nn.softmax(model.predict(X))
</code></pre>
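<p>For the class balance described in the question (10K vs. 500 images), the weight would typically be the negative-to-positive ratio:</p>
<pre><code>pos_weight = 10000 / 500  # = 20.0, matching the class_weight ratio in the question
model.compile(optimizer="nadam",
              loss=my_loss(weight=pos_weight),
              metrics=['accuracy'])
</code></pre>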
|
python|tensorflow|keras
| 4
|
377,451
| 67,092,744
|
How to replicate rows based on number of comma separated values in column
|
<p>I have a large dataframe containing a changelog of the data. Each line represents a 'product', and each product can have multiple 'action types'. It can be new, deleted, renamed, moved, updated, etc., or it can be a combination of them separated by commas.</p>
<p>Examples:</p>
<ul>
<li>Move, Rename</li>
<li>Move, Rename, Update</li>
</ul>
<p>What I am trying to achieve is to replicate the rows that have such a combination in the action type and create a separate row for each value. Here is an example of what I have:</p>
<p>dataframe:</p>
<pre class="lang-py prettyprint-override"><code>Product Description Effective date Action Type
Phone Nokia 2019-08-08 Move, Text
Car Honda 2018-12-12 Move, Text, Update
PC Lenovo 2020-04-04 New
</code></pre>
<p>And what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>Product Description Effective date Action Type
Phone Nokia 2019-08-08 Move
Phone Nokia 2019-08-08 Text
Car Honda 2018-12-12 Move
Car Honda 2018-12-12 Text
Car Honda 2018-12-12 Move, Text, Update
PC Lenovo 2020-04-04 New
</code></pre>
<p>How this can be done?</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>explode</code></a> for this after you split your column by commas:</p>
<pre><code>df['Action Type'] = df['Action Type'].str.split(', ')
df.explode('Action Type')
Product Description Effective date Action Type
0 Phone Nokia 2019-08-08 Move
0 Phone Nokia 2019-08-08 Text
1 Car Honda 2018-12-12 Move
1 Car Honda 2018-12-12 Text
1 Car Honda 2018-12-12 Update
2 PC Lenovo 2020-04-04 New
</code></pre>
|
python|pandas|dataframe
| 3
|
377,452
| 67,113,753
|
Zipping List of Pandas DataFrames Yields Unexpected Results
|
<p>Can somebody explain the following code?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
a = pd.DataFrame({"col1": [1,2,3], "col2": [2,3,4]})
b = pd.DataFrame({"col3": [1,2,3], "col4": [2,3,4]})
list(zip(*[a,b]))
</code></pre>
<p>Output:</p>
<pre><code>[('col1', 'col3'), ('col2', 'col4')]
</code></pre>
|
<p>a:</p>
<p><a href="https://i.stack.imgur.com/OF6NI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OF6NI.png" alt="enter image description here" /></a></p>
<p>b:</p>
<p><a href="https://i.stack.imgur.com/0GKoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0GKoz.png" alt="enter image description here" /></a></p>
<hr />
<p>The zip function pairs items from each iterable into tuples:</p>
<pre><code>a = ("John", "Charles", "Mike")
b = ("Jenny", "Christy", "Monica", "Vicky")
x = zip(a, b)
#use the tuple() function to display a readable version of the result:
print(tuple(x))
</code></pre>
<p><a href="https://i.stack.imgur.com/IYWVV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IYWVV.png" alt="enter image description here" /></a></p>
<hr />
<p>With <code>[a, b]</code> inside zip, <code>zip(*[a, b])</code> unpacks to <code>zip(a, b)</code>; iterating a DataFrame yields its column labels, so you get pairs of column names rather than the values.</p>
<p><a href="https://i.stack.imgur.com/AWqwH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AWqwH.png" alt="enter image description here" /></a></p>
<hr />
<p>You can also combine the columns explicitly to get any of the possible pairings (16 combinations):</p>
<p>eg:</p>
<pre><code>d = list(zip(a['col1'],b['col4']))
</code></pre>
<p><a href="https://i.stack.imgur.com/nqEaE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nqEaE.png" alt="enter image description here" /></a></p>
|
python|pandas
| 0
|
377,453
| 67,085,037
|
Tensorflow import issues on the anaconda virtual environment
|
<p>I'm using OS X and have a problem with importing TensorFlow.
I'm using Anaconda and made a new virtual environment with Python 3.7 and TensorFlow 1.15.0.
I had no problems so far, but since yesterday I get error messages like the one below.</p>
<blockquote>
<p>import tensorflow as tf</p>
<p>Traceback (most recent call last): File "", line 1, in
File
"/usr/local/anaconda3/envs/python3.7_tf1/lib/python3.7/site-packages/tensorflow/<strong>init</strong>.py",
line 99, in
from tensorflow_core import * File "/usr/local/anaconda3/envs/python3.7_tf1/lib/python3.7/site-packages/tensorflow_core/<strong>init</strong>.py",
line 28, in
from tensorflow.python import pywrap_tensorflow # pylint: disable=unused-import ImportError: cannot import name
'pywrap_tensorflow' from 'tensorflow_core.python' (unknown location)</p>
</blockquote>
<p>I tried to make a new virtual environment with the same settings, but it didn't work either.
What should I do?</p>
|
<p>Problem solved.
The main issue was the directory of the Python file that I tried to run using conda.
I moved that file out of the problematic folder and it worked.</p>
|
python|tensorflow
| 0
|
377,454
| 67,084,739
|
Creating Dataframe from Lists
|
<p>I am seeing 5 items in each list below, but it looks like I am running into issues with the shape of my data when creating my dataframe. Any idea why this error is happening?</p>
<p>Code:</p>
<pre><code>b = {'Link':links, 'Tax':taxes, 'Description':descrip}
bet = pd.DataFrame(b)
</code></pre>
<pre><code>['http://www.redfin.com/IL/Chicago/195-N-Harbor-Dr-60601/unit-509/home/14093313', 'http://www.redfin.com/IL/Chicago/1235-N-Astor-St-60610/unit-3N/home/13054822', 'http://www.redfin.com/MO/St-Louis/2622-S-11th-St-63118/home/93686930', 'http://www.redfin.com/IL/Chicago/426-W-Barry-Ave-60657/unit-408/home/13373863', 'http://www.redfin.com/IL/Chicago/310-S-Michigan-Ave-60604/unit-1608/home/45513284']
['$631', '$859', '$377', '$201', '$575']
["Valet parking included! Floor to ceiling bay windows with spectacular views of Navy Pier and Lake Michigan. Open kitchen features newer stainless steel appliances, granite countertops and new dishwasher and washer/dryer. Wood floor in living room and den, nice designer sliding door for second bedroom. Both bathrooms are remodeled recently. Fantastic location, steps away from Millennium Park, Navy Pier, DuSable Harbor, beautiful lakeshore east park, bike paths, Gems World Academy, restaurants, Mariano's, museums, theatre district and more. Many amenities in building include rooftop pool, jacuzzi, fitness center, two party rooms (56th floor library/party room with postcard views of lake), tennis court, grilling and pet relief areas. Perfect for in town, investment or home!", 'Very easy and safe to show! Sellers are highly motivated and can close quickly. Very best value in the building and in the neighborhood. Charming and sophisticated 4-bedroom plus den 2.1 bath penthouse in Gold Coast on quiet and wonderful tree lined Astor Street. Much larger square footage and more rooms than many other comparatively priced co-ops. Small and beautiful vintage elevator building (only 9 units) that is very well cared for and cleaned daily by two person live- in engineering/custodial team. Staff is full service to the building. Unit is very wide and bright with four exposures. Large rooms that flow-great for entertaining in the front-- and privacy and quiet in the back-- perfect for live-in, family visits and could be wonderful multiple "work from home" offices. Hardwood floors, gas fireplace, eat in Poggenpohl kitchen, cozy den, laundry in unit, ADT security system and great storage. Charming and bright sunroom with views down Astor. 50% financing allowed. Healthy reserve fund and no major capital projects planned. Ample space to spread out in this home. One dog or one cat allowed. Assessments are very reasonable and include real estate taxes. Building has an exercise room, kids\' playroom, storage room, bike room and two separate private patios. Patios have garden setting, along with seating and grills. Exterior garden has won many awards from the Gold Coast Neighbors Association. Healthy association and well maintained building. Steps to Lake Shore Drive, Beach, the Red Line train, Oak Street shopping, Northwestern Memorial Hospital, Michigan Ave walking distance to Latin School, Walter Payton, Xavier Warde, St Chrysostom\'s, Lincoln Park High School, Ogden Elementary School district. Multiple parking options right nearby. Assessment is as follows: $2,502.40 (Base Assmt. ) + $1,837.60 (2018 RE Taxes)= $4,340 (Total Assmt. ).', 'When you decide to leave your backyard oasis that includes hot tub, pool & multiple decks, you won’t lose your parking spot ! 2 car garage & garage space for TWO golf carts is yours! Dining & drinks are mere steps away. This renovated (2010) historic Founder Home is three stories; entering into a first floor parlor/dining and kitchen w/½ bath; mudroom off the kitchen;2nd flr master suite w/ walk in closet with own private deck, walk in jacuzzi tub & separate shower; full bath on 3rd flr for the 2 extra bds - perfect for your guests for Mardi Gras and brewery tours! Let’s talk about the (finished) basement – there’s indoor and exterior access, and it’s just begging to be your own pub! Plumbed wetbar & .5 bath – separate office area or storage w/2nd walk out. Oh yeah… there’s 2nd floor laundry too! It’s checking all the right boxes! 
Here’s some of the boring details that are important: approx. 2200 sq ft plus, new roof: 2020, zoned hvac, working gas fireplace, gas line run to grill.', "PERFECT EAST LAKEVIEW LOCATION IN ELEVATOR BUILDING. Own your own private condo for less than the cost of renting! Fantastic location near the lakefront, park, Belmont and Diversey Harbor, LAC, Mariano's, Broadway and Clark Street shopping and dining, Trader Joes, Belmont El + bus lines, LSD, etc. Welcome home to this condo located on a beautiful tree-lined street. Completely move-in ready, not a thing to do! This home has been meticulously maintained and features custom window coverings, USB outlets and updated lighting. Additional storage included. The building offers common laundry (get it all done in one hour) and secured entry. Rental parking available or street parking. No rental cap! Click on the 3D tour and take a walk around before booking your private showing.", 'Prime location in the heart of the Loop. Corner 3 BD 2BA or 2BD + den split floor plan w/ balcony. Hardwood floors in living space and kitchen with carpet in bedrooms and window coverings throughout. Open kitchen w/ granite, stainless steel appliances, island, breakfast bar etc. Master bedroom suite has large walk in closet and master bath w/ radiant heated floor, soaking tub, double vanity sinks and separate glass shower. Large storage on same floor! Wonderful amenities including fitness center, whirlpool, sauna, outdoor terrace and 24 hr doorman. Parking additional $35,000. Grant Park, steps to Millennium Park, Art Institute and more!']
</code></pre>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Python\Price Tracking\Real Estate\RealEstate-Scraping.py", line 79, in <module>
bet = pd.DataFrame(b, index=[1])
File "C:\Python38\lib\site-packages\pandas\core\frame.py", line 435, in __init__
mgr = init_dict(data, index, columns, dtype=dtype)
File "C:\Python38\lib\site-packages\pandas\core\internals\construction.py", line 254, in init_dict
return arrays_to_mgr(arrays, data_names, index, columns, dtype=dtype)
File "C:\Python38\lib\site-packages\pandas\core\internals\construction.py", line 74, in arrays_to_mgr
return create_block_manager_from_arrays(arrays, arr_names, axes)
File "C:\Python38\lib\site-packages\pandas\core\internals\managers.py", line 1675, in create_block_manager_from_arrays
construction_error(len(arrays), arrays[0].shape, axes, e)
File "C:\Python38\lib\site-packages\pandas\core\internals\managers.py", line 1694, in construction_error
raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (5, 3), indices imply (1, 3)
</code></pre>
|
<p>The code you wrote works for me once the <code>index=[1]</code> argument is dropped:</p>
<pre><code>links = ['http://www.redfin.com/IL/Chicago/195-N-Harbor-Dr-60601/unit-509/home/14093313', 'http://www.redfin.com/IL/Chicago/1235-N-Astor-St-60610/unit-3N/home/13054822', 'http://www.redfin.com/MO/St-Louis/2622-S-11th-St-63118/home/93686930', 'http://www.redfin.com/IL/Chicago/426-W-Barry-Ave-60657/unit-408/home/13373863', 'http://www.redfin.com/IL/Chicago/310-S-Michigan-Ave-60604/unit-1608/home/45513284']
taxes = ['$631', '$859', '$377', '$201', '$575']
descrip = ["Valet parking included! Floor to ceiling bay windows with spectacular views of Navy Pier and Lake Michigan. Open kitchen features newer stainless steel appliances, granite countertops and new dishwasher and washer/dryer. Wood floor in living room and den, nice designer sliding door for second bedroom. Both bathrooms are remodeled recently. Fantastic location, steps away from Millennium Park, Navy Pier, DuSable Harbor, beautiful lakeshore east park, bike paths, Gems World Academy, restaurants, Mariano's, museums, theatre district and more. Many amenities in building include rooftop pool, jacuzzi, fitness center, two party rooms (56th floor library/party room with postcard views of lake), tennis court, grilling and pet relief areas. Perfect for in town, investment or home!", 'Very easy and safe to show! Sellers are highly motivated and can close quickly. Very best value in the building and in the neighborhood. Charming and sophisticated 4-bedroom plus den 2.1 bath penthouse in Gold Coast on quiet and wonderful tree lined Astor Street. Much larger square footage and more rooms than many other comparatively priced co-ops. Small and beautiful vintage elevator building (only 9 units) that is very well cared for and cleaned daily by two person live- in engineering/custodial team. Staff is full service to the building. Unit is very wide and bright with four exposures. Large rooms that flow-great for entertaining in the front-- and privacy and quiet in the back-- perfect for live-in, family visits and could be wonderful multiple "work from home" offices. Hardwood floors, gas fireplace, eat in Poggenpohl kitchen, cozy den, laundry in unit, ADT security system and great storage. Charming and bright sunroom with views down Astor. 50% financing allowed. Healthy reserve fund and no major capital projects planned. Ample space to spread out in this home. One dog or one cat allowed. Assessments are very reasonable and include real estate taxes. Building has an exercise room, kids\' playroom, storage room, bike room and two separate private patios. Patios have garden setting, along with seating and grills. Exterior garden has won many awards from the Gold Coast Neighbors Association. Healthy association and well maintained building. Steps to Lake Shore Drive, Beach, the Red Line train, Oak Street shopping, Northwestern Memorial Hospital, Michigan Ave walking distance to Latin School, Walter Payton, Xavier Warde, St Chrysostom\'s, Lincoln Park High School, Ogden Elementary School district. Multiple parking options right nearby. Assessment is as follows: $2,502.40 (Base Assmt. ) + $1,837.60 (2018 RE Taxes)= $4,340 (Total Assmt. ).', 'When you decide to leave your backyard oasis that includes hot tub, pool & multiple decks, you won’t lose your parking spot ! 2 car garage & garage space for TWO golf carts is yours! Dining & drinks are mere steps away. This renovated (2010) historic Founder Home is three stories; entering into a first floor parlor/dining and kitchen w/½ bath; mudroom off the kitchen;2nd flr master suite w/ walk in closet with own private deck, walk in jacuzzi tub & separate shower; full bath on 3rd flr for the 2 extra bds - perfect for your guests for Mardi Gras and brewery tours! Let’s talk about the (finished) basement – there’s indoor and exterior access, and it’s just begging to be your own pub! Plumbed wetbar & .5 bath – separate office area or storage w/2nd walk out. Oh yeah… there’s 2nd floor laundry too! It’s checking all the right boxes! 
Here’s some of the boring details that are important: approx. 2200 sq ft plus, new roof: 2020, zoned hvac, working gas fireplace, gas line run to grill.', "PERFECT EAST LAKEVIEW LOCATION IN ELEVATOR BUILDING. Own your own private condo for less than the cost of renting! Fantastic location near the lakefront, park, Belmont and Diversey Harbor, LAC, Mariano's, Broadway and Clark Street shopping and dining, Trader Joes, Belmont El + bus lines, LSD, etc. Welcome home to this condo located on a beautiful tree-lined street. Completely move-in ready, not a thing to do! This home has been meticulously maintained and features custom window coverings, USB outlets and updated lighting. Additional storage included. The building offers common laundry (get it all done in one hour) and secured entry. Rental parking available or street parking. No rental cap! Click on the 3D tour and take a walk around before booking your private showing.", 'Prime location in the heart of the Loop. Corner 3 BD 2BA or 2BD + den split floor plan w/ balcony. Hardwood floors in living space and kitchen with carpet in bedrooms and window coverings throughout. Open kitchen w/ granite, stainless steel appliances, island, breakfast bar etc. Master bedroom suite has large walk in closet and master bath w/ radiant heated floor, soaking tub, double vanity sinks and separate glass shower. Large storage on same floor! Wonderful amenities including fitness center, whirlpool, sauna, outdoor terrace and 24 hr doorman. Parking additional $35,000. Grant Park, steps to Millennium Park, Art Institute and more!']
b = {'Link':links, 'Tax':taxes, 'Description':descrip}
bet = pd.DataFrame(b)
</code></pre>
<p>It creates the DataFrame for me. The original error comes from <code>index=[1]</code>: that tells pandas the frame has a single row, while each list supplies five values, hence "Shape of passed values is (5, 3), indices imply (1, 3)".</p>
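<p>If you do want a custom index, pass one label per row instead, e.g. (a minimal sketch):</p>
<pre><code>bet = pd.DataFrame(b, index=range(1, len(links) + 1))
</code></pre>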
|
python|pandas
| 0
|
377,455
| 66,853,776
|
how to replace values in 2 dataframe columns based on multiple conditions?
|
<p>I have columns in a pandas dataframe which look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Positive</th>
<th style="text-align: center;">Neutral</th>
<th style="text-align: right;">Negative</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>I want it to look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Positive</th>
<th style="text-align: center;">Neutral</th>
<th style="text-align: right;">Negative</th>
<th style="text-align: right;">Mixed</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">0</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">1</td>
</tr>
<tr>
<td style="text-align: left;">0</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">0</td>
<td style="text-align: right;">0</td>
</tr>
</tbody>
</table>
</div>
<p>First, I created a column called "Mixed" based on the fact that a sentence is both positive and negative. Now, since I already have the "Mixed" column, I do not need the duplicated information, so I would like to replace the values in the Positive and Negative columns with 0 (only for mixed-sentiment sentences). I've tried different variations of np.where but nothing seems to show how to replace values in 2 columns based on a condition from those same 2 columns.
Any suggestions? Thanks :)</p>
|
<p>You can just do it in two steps - set the mixed column - then set the pos/neg columns to 0.</p>
<pre><code>>>> df['Mixed'] = 0
>>> df
Positive Neutral Negative Mixed
0 1 0 1 0
1 0 1 0 0
>>> rows = (df.Positive == 1) & (df.Negative == 1)
>>> df.loc[rows, 'Mixed'] = 1
>>> df.loc[rows, ['Positive', 'Negative']] = 0
>>> df
Positive Neutral Negative Mixed
0 0 0 0 1
1 0 1 0 0
</code></pre>
<p>You can use
<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.mask.html#pandas-dataframe-mask" rel="nofollow noreferrer"><code>df.mask()</code></a> if you want to do it all together.</p>
<pre><code>>>> df
Positive Neutral Negative Mixed
0 1 0 1 0
1 0 1 0 0
>>> rows = (df.Positive == 1) & (df.Negative == 1)
>>> df.mask(rows, [0, 0, 0, 1])
Positive Neutral Negative Mixed
0 0 0 0 1
1 0 1 0 0
</code></pre>
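<p>Since the question mentions <code>np.where</code>, the same result can be written that way too (a sketch, assuming <code>import numpy as np</code>):</p>
<pre><code>mixed = (df.Positive == 1) & (df.Negative == 1)
df['Mixed'] = np.where(mixed, 1, 0)
df['Positive'] = np.where(mixed, 0, df['Positive'])
df['Negative'] = np.where(mixed, 0, df['Negative'])
</code></pre>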
|
python|pandas|dataframe
| 0
|
377,456
| 67,129,188
|
logical evaluations of multiple boolean variables in python
|
<p>Assume I have m different boolean variables, c1, c2, ..., cm. How can I evaluate whether all of them are true, whether any one of them is not true, etc.? Checking them one by one does not seem very efficient.</p>
|
<p>Use <a href="https://docs.python.org/library/functions.html#any" rel="nofollow noreferrer"><code>any</code></a> and <a href="https://docs.python.org/library/functions.html#all" rel="nofollow noreferrer"><code>all</code></a>:</p>
<pre><code>c = [True, False]
print(any(c))
# True
print(all(c))
# False
</code></pre>
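<p>Both functions short-circuit, so evaluation stops at the first decisive value. For m separate variables, gather them in a tuple first (c1..cm are hypothetical booleans):</p>
<pre><code>flags = (c1, c2, c3)  # ... up to cm
all(flags)      # True only if every flag is True
not all(flags)  # True if at least one flag is False
any(flags)      # True if at least one flag is True
</code></pre>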
|
python|python-3.x|pandas|numpy|boolean-logic
| 0
|
377,457
| 67,081,802
|
Keep first occurrence in one column according to certain conditions
|
<p>I'm not sure how to explain this so I'll do my best.</p>
<p>I have two datasets which I merged to obtain the following:</p>
<pre><code>ID | active_date | datestamp | code1 | code2 | code3 | payment
01 | 01/01/2020 | 10/06/2020 | AAA | . | . | 1
01 | 01/01/2020 | 11/06/2020 | AAA | . | . | 1
01 | 01/01/2020 | 12/06/2020 | BBB | AAA | . | 2
01 | 01/01/2020 | 13/06/2020 | BBB | AAA | . | 2
02 | 10/01/2020 | . | . | . | . | .
03 | 18/01/2020 | 15/05/2020 | CCC | BBB | AAA | 4
03 | 18/01/2020 | 16/05/2020 | CCC | BBB | AAA | 4
04 | 20/01/2020 | 24/04/2020 | AAA | . | . | 2
04 | 20/01/2020 | 25/04/2020 | AAA | . | . | 3
04 | 20/01/2020 | 26/04/2020 | AAA | . | . | 3
05 | 24/01/2020 | 06/05/2020 | DDD | . | . | 1
05 | 24/01/2020 | 07/05/2020 | DDD | . | . | 1
</code></pre>
<p>What I need to do basically is end up with one row per ID. But with a few things to take into account:</p>
<p>-get the first occurrence where any of <code>code1</code>, <code>code2</code> or <code>code3</code> is "BBB" or "CCC", or where <code>payment</code> is greater than or equal to 3.</p>
<p>After that it is easy to create a variable called <code>length</code> that is the difference in days between <code>datestamp</code> and <code>active_date</code>, but I need to make it so that there's only 1 row per ID with these characteristics.</p>
<p>The final output should look like this:</p>
<pre><code>ID | active_date | datestamp | code1 | code2 | code3 | payment
01 | 01/01/2020 | 12/06/2020 | BBB | AAA | . | 2
02 | 10/01/2020 | . | . | . | . | .
03 | 18/01/2020 | 15/05/2020 | CCC | BBB | AAA | 4
04 | 20/01/2020 | 25/04/2020 | AAA | . | . | 3
05 | 24/01/2020 | 06/05/2020 | DDD | . | . | 1
</code></pre>
<p>-kept the third row of <code>01</code> because there's BBB in code1.</p>
<p>-have to keep <code>02</code> even if it has nothing populated</p>
<p>-kept <code>03</code> because it's the first row with BBB and CCC in code1 and code2 for that ID</p>
<p>-kept the second row of <code>04</code> because it has a payment of 3, so I kept the first one.</p>
<p>-kept the first row of <code>05</code> because it doesn't meet the conditions, but it could be any row of <code>05</code></p>
<p>I hope this makes sense. In summary, I want to group by/remove duplicates but the row I leave has to be the first occurrence if that ID meets the conditions at one point.</p>
<p>Tried groupby's but I can't make it work with these many conditions in different rows.</p>
|
<p>Gnarly one. Compute each rule independently and combine. Code below:</p>
<pre><code># meets the greater-than-or-equal-to-3 rule
df['m'] = df['payment'].str.extract(r'(\d+)').astype(float).ge(3)  # create temp column
a = df[df['m']]
# meets the BBB/CCC rule (first qualifying row per ID)
b = df[df['code1'].isin(["BBB","CCC"]) | df['code2'].isin(["BBB","CCC"]) | df['code3'].isin(["BBB","CCC"])].drop_duplicates(subset=['ID'], keep='first')
# fallback rule: first row per ID
c = df.drop_duplicates(subset=['ID'], keep='first')
# combine a, b, c in priority order and keep the first row per ID
df1 = pd.concat([a, b, c], axis=0).drop_duplicates(subset=['ID'], keep='first').drop('m', axis=1)
print(df1)
ID active_date datestamp code1 code2 code3 payment
5 3 18/01/2020 15/05/2020 CCC BBB AAA 4
8 4 20/01/2020 25/04/2020 AAA . . 3
2 1 01/01/2020 12/06/2020 BBB AAA . 2
4 2 10/01/2020 . . . . .
10 5 24/01/2020 06/05/2020 DDD . . 1
</code></pre>
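<p>An alternative single-pass sketch (assuming the same <code>df</code>): flag each row that meets either condition, stable-sort so flagged rows come first while preserving each ID's original order, then keep the first row per ID:</p>
<pre><code>cond = (df[['code1', 'code2', 'code3']].isin(['BBB', 'CCC']).any(axis=1)
        | pd.to_numeric(df['payment'], errors='coerce').ge(3))
df1 = (df.assign(hit=cond)
         .sort_values('hit', ascending=False, kind='mergesort')  # mergesort is stable
         .drop_duplicates(subset='ID', keep='first')
         .sort_values('ID')
         .drop(columns='hit'))
</code></pre>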
|
python|pandas|filter|data-mining
| 0
|
377,458
| 66,874,669
|
Get Pytorch - tensor values as a integer in python
|
<p>I have my output of my torch tensor which looks like below</p>
<p>(coordinate of a bounding box in object detection)</p>
<pre><code>[tensor(299., device='cuda:0'), tensor(272., device='cuda:0'), tensor(327., device='cuda:0'), tensor(350., device='cuda:0')]
</code></pre>
<p>I wanted to extract each of the tensor values as an int in the form of <strong>minx,miny,maxx,maxy</strong> <br>
so that I can pass it to a shapely function in the below form</p>
<pre><code>from shapely.geometry import box
minx,miny,maxx,maxy=1,2,3,4
b = box(minx,miny,maxx,maxy)
</code></pre>
<p>What's the best way to do it? by avoiding, Cuda enabled or not or other exceptions?</p>
|
<pre><code>minx, miny, maxx, maxy = [int(t.item()) for t in tensors]
</code></pre>
<p>where <code>tensors</code> is the list of tensors. <code>.item()</code> copies the value to host memory, so the same line works whether the tensors live on the CPU or on a CUDA device.</p>
|
python|pytorch|torch
| 0
|
377,459
| 66,991,843
|
ModuleNotFoundError: No module named 'pandas' in Jupyter Notebooks
|
<p>I am using a Jupyter Notebook in VSCode for a simple data science project. I've imported pandas in the past and had no problems, only now when I try to execute the code, "ModuleNotFoundError: No module named 'pandas'" is raised in the Notebook.</p>
<p>I installed pandas with pip, and when I type <code>pip install pandas</code> into the terminal, I get "requirement already satisfied". Note: I have no problems importing pandas into a basic .py file. The error only occurs in the Jupyter Notebook. (Also, I am not using a virtual environment.)</p>
<p>I tried using the solution found in <a href="https://stackoverflow.com/questions/58740605/jupyter-notebook-modulenotfounderror-no-module-named-pandas">(Jupyter Notebook) ModuleNotFoundError: No module named 'pandas'</a>) by adding "C:\Users\AppData\Local\Programs\Python\Python39" to the path, but it hasn't made a difference.</p>
<pre><code> ModuleNotFoundError Traceback (most recent call last)
<ipython-input-9-2e52ded19b86> in <module>
----> 1 import pandas as pd
2 df = pd.read_csv("archive\IPIP-FFM-data-8Nov2018\data-final.csv", delimiter="\t")
3 df
ModuleNotFoundError: No module named 'pandas'
</code></pre>
<p>Pandas (1.2.3)</p>
<p>Python (3.9)</p>
|
<p>After installing Jupyter Notebook, I re-ran the anaconda install. Seemed to fix it.</p>
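<p>If the problem comes back, a common cause is that the notebook kernel runs a different Python than the one <code>pip</code> installed into. You can check, and install into the kernel's own interpreter, from a notebook cell:</p>
<pre><code>import sys
print(sys.executable)                    # which Python the kernel uses
!{sys.executable} -m pip install pandas  # install into that interpreter
</code></pre>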
|
python|pandas|pip|jupyter-notebook|modulenotfounderror
| 0
|
377,460
| 66,809,521
|
How can I change two dimensional grayscale image to one dimensional vector image?
|
<p>I have a grayscale image of size 28 by 28, and I plot it with plt.imshow(..., cmap='gray_r').
I'd like to plot a second figure that has the pixel number as the xlabel and the grayscale value as the ylabel,
but I don't know how to make it.
I tried to make it with the imshow function after reshaping the 2-d image to a 1-d vector, but it gave me a simple black line with no information.
I don't have pytorch or tensorflow, so I'd like to make it with Python modules such as numpy and matplotlib.</p>
<p><a href="https://i.stack.imgur.com/JFCY9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JFCY9.png" alt="enter image description here" /></a></p>
|
<pre><code>import matplotlib.pyplot as plt

flat_image = image.reshape(-1)  # flatten the 28x28 image to a 1-d vector
plt.scatter(range(28 * 28), flat_image)
plt.xlabel('pixel number'); plt.ylabel('grayscale value')
plt.show()
</code></pre>
<p>This will plot the pixel numbers and their grayscale values in a scatterplot. More information can be found <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html" rel="nofollow noreferrer">here</a>.</p>
|
python|numpy|matplotlib|grayscale
| 0
|
377,461
| 66,994,657
|
Combining output in pandas?
|
<p>I have a movie recommender system I have been working on and currently it is printing two different sets of output because I have two different types of recommendation engines. Code is like this:</p>
<pre><code>while True:
user_input3 = input('Please enter movie title: ')
if user_input3 == 'done':
break
try:
print(' ')
print(get_input_movie(user_input3))
print(get_input_movie(user_input3, cosine_sim1))
# print(get_input_movie(user_input3), get_input_movie(user_input3,cosine_sim1))
print(' ')
except KeyError or ValueError:
print('Check your movie title spelling, capitalization and try again.')
print(' ')
continue
</code></pre>
<p>And the example output looks like this.</p>
<pre><code>Please enter movie title: Casino
34652 The Under-Gifted
28155 The Plague
12738 The Incredible Hulk
41823 The Ugly Duckling
44976 12 Feet Deep
Name: Title, dtype: object
1177 GoodFellas
1192 Raging Bull
25900 The Big Shave
109 Taxi Driver
7617 Mean Streets
Name: Title, dtype: object
</code></pre>
<p>How can I make it so it combines the answers? So it looks like this:</p>
<pre><code>Please enter movie title: Casino
34652 The Under-Gifted
28155 The Plague
12738 The Incredible Hulk
41823 The Ugly Duckling
44976 12 Feet Deep
1177 GoodFellas
1192 Raging Bull
25900 The Big Shave
109 Taxi Driver
7617 Mean Streets
</code></pre>
|
<p>If the return type of <code>get_input_movie()</code> is a Pandas DataFrame or a Pandas Series, you can try:</p>
<p>Replace the following 2 lines:</p>
<pre><code>print(get_input_movie(user_input3))
print(get_input_movie(user_input3, cosine_sim1))
</code></pre>
<p>by using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.append.html" rel="nofollow noreferrer"><code>Series.append()</code></a> or <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>DataFrame.append()</code></a> as follows:</p>
<pre><code>print(get_input_movie(user_input3).append(get_input_movie(user_input3, cosine_sim1)))
</code></pre>
<p>Here, we appended the results of the 2 function calls before printing. The combined results will then be printed.</p>
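<p>Note that <code>append()</code> is deprecated in pandas 1.4 and removed in 2.0; <code>pd.concat()</code> does the same job:</p>
<pre><code>print(pd.concat([get_input_movie(user_input3),
                 get_input_movie(user_input3, cosine_sim1)]))
</code></pre>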
|
python|pandas|dataframe|recommendation-system
| 1
|
377,462
| 66,919,995
|
how to execute a function more than once and concatenate its outputs into an array only in python
|
<p>I'm trying to create a function that calls a function I already have, executes it N times, and then concatenates only its outputs into a single array. This is the function that I want to run more than once:</p>
<pre><code>def r_sample(m_base, m_external):
n_linhas_m_base = m_base.shape[0] # matriz com as variáveis do detector central
n_linhas_m_external = m_external.shape[0] # matriz com as variáveis dos prótons
index_m_external = [i for i in range(n_linhas_m_external)]
mask = random.sample(index_m_external, k=n_linhas_m_base)
m_sampled = np.concatenate((m_base, m_external[mask, :]), axis=1)
return m_sampled
</code></pre>
<p>This will return a numpy array of dimension (NxM). What I need is a function (or just some code) that executes this <em>r_sample</em> 10 or 20 times, but does not produce 10 or 20 separate arrays; I need them together, in a single array.
For example, the output of the r_sample function looks like this:</p>
<pre><code>array( [[ 1,2,3],
[4,5,6],
[7,8,9]])
</code></pre>
<p>Next, I would like something like that</p>
<pre><code>array( [[ 1,2,3],
[4,5,6],
[7,8,9],
[1,2,3],
[4,5,6],
[7,8,9],
[1,2,3],
[4,5,6],
[7,8,9]])
</code></pre>
<p>In this example, I ran the r_sample code 3 times</p>
|
<p>You can collect the output of each call in a list and then stack the pieces into one array with <code>np.vstack</code> (or <code>np.concatenate</code> along axis 0). The whole code can look something like this:</p>
<pre><code>import numpy as np

all_arrays = [r_sample(m_base, m_external) for _ in range(8)]  # 8 runs
result = np.vstack(all_arrays)  # one array with all the rows stacked
</code></pre>
<p>Is that what you need?</p>
|
python|arrays|python-3.x|numpy
| 0
|
377,463
| 67,181,917
|
How should I convert numbers in a column to the same format style in Python?
|
<p>I have two columns Lat and Long in a data frame. I try to convert these strings into float but I have the following error:</p>
<pre><code>ValueError: Unable to parse string "" at position 61754
</code></pre>
<p>I've noticed that in my data frame I have numbers written in different styles, even in bold text.
I'm wondering if there is a way to convert the numbers to the same style.</p>
|
<p>Suppose you have this DataFrame:</p>
<pre><code>  funny_numbers
0             𝟒
1             𝟓
2            𝟕𝟔
</code></pre>
<p>You can try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.normalize.html#pandas-series-str-normalize" rel="nofollow noreferrer"><code>.str.normalize</code></a> to convert the Unicode characters to standard form:</p>
<pre><code>df["funny_numbers"] = df["funny_numbers"].str.normalize("NFKD")
df["funny_numbers"] = pd.to_numeric(df["funny_numbers"])
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> funny_numbers
0 4
1 5
2 76
</code></pre>
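<p>If some cells still fail to parse after normalization, <code>pd.to_numeric(df['funny_numbers'], errors='coerce')</code> will turn them into <code>NaN</code> instead of raising.</p>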
|
python|pandas|valueerror
| 3
|
377,464
| 66,910,315
|
AttributeError: 'float' object has no attribute 'isnumeric'
|
<p>I have a pandas df and I want to remove non-numeric values of <code>col1</code>.</p>
<p>If I use <code>df[df.col1.apply(lambda x: x.isnumeric())]</code>, I get the following error:</p>
<p><code>AttributeError: 'float' object has no attribute 'isnumeric'</code></p>
<p>any suggestion on doing this efficiently in pandas?</p>
|
<p>You could use the standard string method <code>isnumeric</code> and apply it to each value in your <code>col1</code> column:<br />
<a href="https://stackoverflow.com/questions/33961028/remove-non-numeric-rows-in-one-column-with-pandas">Remove non-numeric rows in one column with pandas</a><br />
<a href="https://stackoverflow.com/questions/51517023/python-replace-non-digit-character-in-a-dataframe">Python replace non digit character in a dataframe</a></p>
|
pandas|filtering|numeric
| 1
|
377,465
| 67,112,369
|
how to concatenate numpy array of different shape
|
<p>I have two <code>NumPy</code> arrays of shape: (Batch, H, W, Canal). I would like to concat these arrays in one array, but they have different shapes <code>[209, 450, 450, 24]</code> and <code>[209, 112, 112]</code>.</p>
<p>Does anyone know how to do it?</p>
|
<p>A simple way to concatenate vectors with different sizes is to fill the smaller one with zeros, so that it has the same shape as the bigger one.</p>
<pre class="lang-py prettyprint-override"><code>bigger = np.random.uniform(size=(209, 450, 450, 24)) # should be your input
smaller = np.random.uniform(size=(209, 112, 112))
s = np.zeros(bigger.shape[:-1])
s[:smaller.shape[0], :smaller.shape[1], :smaller.shape[2]] = smaller
s = s[..., np.newaxis] # reshaping to (209, 450, 450, 1)
result = np.concatenate([bigger, s], axis=-1)
</code></pre>
<p>But if they are images, you should use an image library to resize the images to the correct shape.</p>
<pre><code>import PIL.Image
bigger = np.random.uniform(size=(209, 450, 450, 24)) # should be your input
smaller = np.random.uniform(size=(209, 112, 112))
s = np.zeros(bigger.shape[:-1])
for i in range(bigger.shape[0]):
s[i] = np.array(PIL.Image.fromarray(smaller[i]).resize((s.shape[2], s.shape[1])))
s = s[..., np.newaxis] # reshaping to (209, 450, 450, 1)
result = np.concatenate([bigger, s], axis=-1)
</code></pre>
|
python|numpy
| 0
|
377,466
| 67,177,319
|
Importing csv files based on list names
|
<p>I'm looking to create a function that will import csv files based on a user input of file names that were created as a list. This is for some data analysis where i will then use pandas to resample the data etc and calculate the percentages of missing data. So far I have:</p>
<pre><code>parser = lambda x: pd.datetime.strptime(x, '%d/%m/%Y %H:%M')
number_stations = input(" Please tell how many stations you want to analyse: ")
list_of_stations_name_number = []
i = 0
while i < int(number_stations):
i += 1
name = input(" Please the stations name for station number {}: ".format(i))
list_of_stations_name_number.append(name+ '.csv')
</code></pre>
<p>This works as intended whereby, the user will add the name of the stations they are looking to analyse and then will be left with a list located in list_of_stations_name_number. Such as:</p>
<p><code>list_of_stations_name_number "['DM00115_D.csv', 'DM00117_D.csv', 'DM00118_D.csv', 'DM00121_D.csv', 'DM00129_D.csv']" </code></p>
<p>Is there an easy way to then change to the directory (using os.chdir) and import the csv files based on their matching names? I'm not sure how complicated or simple this would be and am open to trying more efficient methods if applicable.</p>
|
<p>To read all files, you can do something like -</p>
<pre><code>list_of_dfs = [pd.read_csv(f) for f in list_of_stations_name_number]
</code></pre>
<p><code>list_of_dfs[0]</code> will correspond to the csv file <code>list_of_stations_name_number[0]</code></p>
<p>If your files are not in the current directory, you can prepend the directory path to the file names -</p>
<pre><code>list_of_stations_name_number = [f'location/to/folder/{fname}' for fname in list_of_stations_name_number]
</code></pre>
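<p>If you would rather look results up by station name than by position, a dict comprehension works too:</p>
<pre><code>dfs = {fname: pd.read_csv(fname) for fname in list_of_stations_name_number}
</code></pre>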
|
python|pandas|csv
| 0
|
377,467
| 66,988,685
|
Break ties using rank function (OR other function) PYTHON
|
<p>I have the following dataframe:</p>
<pre><code>ID Name Weight Score
1 Amazon 2 11
1 Apple 4 10
1 Netflix 1 10
2 Amazon 2 8
2 Apple 4 8
2 Netflix 1 5
</code></pre>
<p>Currently I have a code which looks like this</p>
<pre><code>#add weight and score column
df['Rank'] = df['Weight'] + df['Score']
#create score rank on ID column
df['Score_Rank'] = df.groupby('ID')['Rank'].rank("first", ascending = False)
</code></pre>
<p>This code does not give me exactly what I want.</p>
<p>I would like to first rank on Score, without including the weight. And then break any ties in the rank by adding weight column to break them.
If there are further ties after weight column has been added, then rank would be by random selection.</p>
<p>I think an if statement could work in this scenario, just not sure how.</p>
<p>Expected output:</p>
<pre><code>ID Name Weight Score Score_Rank
1 Amazon 2 11 1
1 Apple 4 10 2
1 Netflix 1 10 3
2 Amazon 2 8 2
2 Apple 4 8 1
2 Netflix 1 5 3
</code></pre>
|
<p>You can use <code>rank</code> with <code>method='first'</code> with some presorting first:</p>
<pre><code>df['Score_Rank'] = (df.sort_values('Weight', ascending=False)
.groupby(['ID'])['Score']
.rank(method='first', ascending=False)
)
</code></pre>
<p>Output:</p>
<pre><code> ID Name Weight Score Score_Rank
0 1 Amazon 2 11 1.0
1 1 Apple 4 10 2.0
2 1 Netflix 1 10 3.0
3 2 Amazon 2 8 2.0
4 2 Apple 4 8 1.0
5 2 Netflix 1 5 3.0
</code></pre>
<p><em>Details:</em></p>
<p>First, sort your dataframe by weights descending, then use rank with method 'first' on Score, which breaks ties based on the sort order of the dataframe. And because pandas does intrinsic data alignment, assigning to the new column 'Score_Rank' yields values aligned with the original order of the dataframe.</p>
|
python|pandas|dataframe|numpy|rank
| 3
|
377,468
| 67,054,993
|
Find thresholds of bins by sum of column values in pandas
|
<p>I need to find the thresholds of bins (for ex. 0-999, 1000-1999 etc.), so that each bin holds approximately an equal amount (1/n of the total value, for ex. 1/3 if we split into 3 bins).</p>
<pre><code>d = {'amount': [600,400,250,340,200,500,710]}
df = pd.DataFrame(data=d)
df
amount
600
400
250
340
200
500
710
</code></pre>
<p>expected output if we split into 3 bins by sum of amount column:</p>
<pre><code>bin sum
threshold_1(x value-x value) 1000
threshold_2(x-x) 1000
threshold_3(x-x) 1000
</code></pre>
<p>something like this, but i need sum value instead of count</p>
<pre><code>pd.cut(amount, 3).value_counts()
</code></pre>
<p>maybe it could be solved in python, not only via pandas?</p>
|
<p>If you need approximately equal amounts, aggregate <code>sum</code> with <code>pd.cut</code>:</p>
<pre><code>df = df.groupby(pd.cut(df.amount, 3)).sum()
print (df)
amount
amount
(199.49, 370.0] 790
(370.0, 540.0] 900
(540.0, 710.0] 1310
</code></pre>
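<p>If the goal is thresholds such that every bin's sum is close to 1/n of the total (rather than equal-width value ranges), one sketch is to sort, take the cumulative sum, and cut that into n equal parts:</p>
<pre><code>s = df['amount'].sort_values()
labels = pd.cut(s.cumsum(), 3, labels=False)         # split the running total into 3 parts
print(s.groupby(labels).agg(['min', 'max', 'sum']))  # per-bin thresholds and sums
</code></pre>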
|
python|pandas|split
| 1
|
377,469
| 66,980,011
|
File operation using numpy
|
<p>I am trying to delete a phrase from a text file using numpy. I have tried <code>num = []</code> with <code>num.append(num1)</code>, and <code>'a'</code> instead of <code>'w'</code> to write the file back. With append mode the phrase isn't deleted; with write mode the first run deletes the phrase, the second run deletes a second line which is not the phrase, and the third run empties the file.</p>
<pre><code>import numpy as np
phrase = 'the dog barked'
num = 0
with open("yourfile.txt") as myFile:
for num1, line in enumerate(myFile, 1):
if phrase in line:
num += num1
else:
break
a=np.genfromtxt("yourfile.txt",dtype=None, delimiter="\n", encoding=None )
with open('yourfile.txt','w') as f:
for el in np.delete(a,(num),axis=0):
f.write(str(el)+'\n')
'''
the bird flew
the dog barked
the cat meowed
'''
</code></pre>
|
<p>I think you can still use <code>nums.append(num1)</code> with <code>w</code> mode, the issue I think you're getting is that you used the <code>enumerate</code> function for <code>myFile</code>'s lines using 1-index instead of 0-index as expected in numpy array. Changing it from <code>enumerate(myFile, 1)</code> to <code>enumerate(myFile, 0)</code> seems to fix the issue</p>
<pre><code>import numpy as np
phrase = 'the dog barked'
nums = []
with open("yourfile.txt") as myFile:
for num1, line in enumerate(myFile, 0):
if phrase in line:
nums.append(num1)
a=np.genfromtxt("yourfile.txt",dtype=None, delimiter="\n", encoding=None )
with open('yourfile.txt','w') as f:
for el in np.delete(a,nums,axis=0):
f.write(str(el)+'\n')
</code></pre>
|
python-3.x|numpy
| 0
|
377,470
| 67,181,773
|
Create numeric variable with condition
|
<p>I would like to create a dummy variable for % Change PMI: 1 if % Change PMI is positive and 0 if it is negative.</p>
<pre><code>print(Overview3)
Adjusted Close % Change PMI % Change IP
Date
1970-02-27 0.052693 -0.026694 -0.0007
1970-03-31 0.001453 -0.010549 -0.0013
1970-04-30 -0.090483 -0.040512 -0.0026
1970-05-29 -0.060967 0.048889 -0.0012
1970-06-30 -0.050033 0.082627 -0.0032
... ... ... ...
2020-08-31 0.070065 0.035382 0.0096
2020-09-30 -0.039228 0.001799 -0.0008
2020-10-30 -0.027666 0.055655 0.0101
2020-11-30 0.107546 -0.018707 0.0089
2020-12-31 0.037121 0.048527 0.0102
[611 rows x 3 columns]
</code></pre>
|
<pre><code>df['dummy'] = 0
for i in range(len(df)):
    if df['% Change PMI'].iloc[i] > 0:
        # use .loc rather than chained indexing to avoid SettingWithCopyWarning
        df.loc[df.index[i], 'dummy'] = 1
</code></pre>
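<p>A vectorized alternative avoids the loop (and any chained-assignment warnings) entirely:</p>
<pre><code>df['dummy'] = (df['% Change PMI'] > 0).astype(int)
</code></pre>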
|
python|pandas|dataframe|dummy-variable
| -1
|
377,471
| 67,099,881
|
The pandas value error still shows, but the code is totally correct and it loads normally the visualization
|
<p>I know I could just set <code>pd.options.mode.chained_assignment = None</code>, but I wanted code that is clean of this warning.</p>
<p>My start code:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import altair as alt
import operator
import pandas as pd
s = pd.read_csv('../../data/aparecida-small-sample.csv', parse_dates=['date'])
city = s[s['city'] == 'Aparecida']
</code></pre>
<p>Based on <a href="/users/15592592/dpkandy">@dpkandy</a>'s code:</p>
<pre><code>city['total_cases'] = city['totalCases']
city['total_deaths'] = city['totalDeaths']
city['total_recovered'] = city['totalRecovered']
tempTotalCases = city[['date','total_cases']]
tempTotalCases["title"] = "Confirmed"
tempTotalDeaths = city[['date','total_deaths']]
tempTotalDeaths["title"] = "Deaths"
tempTotalRecovered = city[['date','total_recovered']]
tempTotalRecovered["title"] = "Recovered"
temp = tempTotalCases.append(tempTotalDeaths)
temp = temp.append(tempTotalRecovered)
totalCases = alt.Chart(temp).mark_bar().encode(alt.X('date:T', title = None), alt.Y('total_cases:Q', title = None))
totalDeaths = alt.Chart(temp).mark_bar().encode(alt.X('date:T', title = None), alt.Y('total_deaths:Q', title = None))
totalRecovered = alt.Chart(temp).mark_bar().encode(alt.X('date:T', title = None), alt.Y('total_recovered:Q', title = None))
(totalCases + totalRecovered + totalDeaths).encode(color=alt.Color('title', scale = alt.Scale(range = ['#106466','#DC143C','#87C232']), legend = alt.Legend(title="Legend colour"))).properties(title = "Cumulative number of confirmed cases, deaths and recovered", width = 800)
</code></pre>
<p>This code works perfectly and loads the visualization normally, but it still shows the pandas warning, asking to try <code>.loc[row_indexer,col_indexer] = value</code> instead. I then read the "Returning a view versus a copy" documentation it links to and also tried the code below, but it still shows the same warning. Here is the code with <code>loc</code>:</p>
<pre class="lang-py prettyprint-override"><code># 1st attempt
tempTotalCases.loc["title"] = "Confirmed"
tempTotalDeaths.loc["title"] = "Deaths"
tempTotalRecovered.loc["title"] = "Recovered"
# 2nd attempt
tempTotalCases["title"].loc = "Confirmed"
tempTotalDeaths["title"].loc = "Deaths"
tempTotalRecovered["title"].loc = "Recovered"
</code></pre>
<p>Here is the error message:</p>
<pre><code><ipython-input-6-f16b79f95b84>:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
tempTotalCases["title"] = "Confirmed"
<ipython-input-6-f16b79f95b84>:9: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
tempTotalDeaths["title"] = "Deaths"
<ipython-input-6-f16b79f95b84>:12: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
tempTotalRecovered["title"] = "Recovered"
</code></pre>
<p>Jupyter and Pandas version:</p>
<pre><code>$ jupyter --version
jupyter core : 4.7.1
jupyter-notebook : 6.3.0
qtconsole : 5.0.3
ipython : 7.22.0
ipykernel : 5.5.3
jupyter client : 6.1.12
jupyter lab : 3.1.0a3
nbconvert : 6.0.7
ipywidgets : 7.6.3
nbformat : 5.1.3
traitlets : 5.0.5
$ pip show pandas
Name: pandas
Version: 1.2.4
Summary: Powerful data structures for data analysis, time series, and statistics
Home-page: https://pandas.pydata.org
Author: None
Author-email: None
License: BSD
Location: /home/gus/PUC/.env/lib/python3.9/site-packages
Requires: pytz, python-dateutil, numpy
Required-by: ipychart, altair
</code></pre>
<h2>Update 2</h2>
<p>I followed the answer, it worked, but there is another problem:</p>
<pre><code>temp = tempTotalCases.append(tempTotalDeaths)
temp = temp.append(tempTotalRecovered)
</code></pre>
<p>Error log:</p>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
iloc._setitem_with_indexer(indexer, value, self.name)
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
---------------------------------------------------------------------------
InvalidIndexError Traceback (most recent call last)
<ipython-input-7-b2649a676837> in <module>
17 tempTotalRecovered.loc["title"] = _("Recovered")
18
---> 19 temp = tempTotalCases.append(tempTotalDeaths)
20 temp = temp.append(tempTotalRecovered)
21
~/GitLab/Gustavo/global/.env/lib/python3.9/site-packages/pandas/core/frame.py in append(self, other, ignore_index, verify_integrity, sort)
7980 to_concat = [self, other]
7981 return (
-> 7982 concat(
7983 to_concat,
7984 ignore_index=ignore_index,
~/GitLab/Gustavo/global/.env/lib/python3.9/site-packages/pandas/core/reshape/concat.py in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
296 )
297
--> 298 return op.get_result()
299
300
~/GitLab/Gustavo/global/.env/lib/python3.9/site-packages/pandas/core/reshape/concat.py in get_result(self)
514 obj_labels = obj.axes[1 - ax]
515 if not new_labels.equals(obj_labels):
--> 516 indexers[ax] = obj_labels.get_indexer(new_labels)
517
518 mgrs_indexers.append((obj._mgr, indexers))
~/GitLab/Gustavo/global/.env/lib/python3.9/site-packages/pandas/core/indexes/base.py in get_indexer(self, target, method, limit, tolerance)
3169
3170 if not self.is_unique:
-> 3171 raise InvalidIndexError(
3172 "Reindexing only valid with uniquely valued Index objects"
3173 )
InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
|
<p>This <code>SettingWithCopyWarning</code> is a <code>warning</code> and not an <code>error</code>. The importance of this distinction is that <code>pandas</code> isn't sure whether your code will produce the intended output, so it lets the programmer make that decision, whereas an <code>error</code> means that something is definitely wrong.</p>
<p>The <code>SettingWithCopyWarning</code> is warning you about the difference between when you do something like <code>df['First selection']['Second selection']</code> compared to <code>df.loc[:, ('First selection', 'Second selection')]</code>.</p>
<p>In the first case two separate events occur: <code>df['First selection']</code> takes place, then the object returned from this is used for the next selection, <code>returned_df['Second selection']</code>. <code>pandas</code> has no way to know whether <code>returned_df</code> is the original <code>df</code> or just a temporary 'view' of it. Most of the time it doesn't matter (see the docs for more info)...but if you change a value on a temporary view, you'll be confused as to why your code runs error-free yet you don't see the changes reflected. Using <code>.loc</code> bundles <code>'First selection'</code> and <code>'Second selection'</code> into one call, so <code>pandas</code> can guarantee that what's returned is not just a view.</p>
<p>The <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#why-does-assignment-fail-when-using-chained-indexing" rel="nofollow noreferrer">documentation</a> you linked show's you why your attempts to use <code>.loc</code> didn't work at you intended (eg. taken from docs):</p>
<blockquote>
<pre><code>def do_something(df):
foo = df[['bar', 'baz']] # Is foo a view? A copy? Nobody knows!
# ... many lines here ...
# We don't know whether this will modify df or not!
foo['quux'] = value
return foo
</code></pre>
</blockquote>
<p>You have something similar in your code. Look at how <code>tempTotalCases</code> is created:</p>
<pre><code>city = s[s['city'] == 'Aparecida']
# some lines of code
tempTotalCases = city[['date','total_cases']]
</code></pre>
<p>And then some more lines of code before you attempt to do:</p>
<pre><code>tempTotalCases.loc["title"] = "Confirmed"
</code></pre>
<p>So <code>pandas</code> throws the warning.</p>
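<p>The usual way to make the warning go away in your case is to take an explicit copy when slicing, so <code>pandas</code> knows the frames are independent objects, e.g.:</p>
<pre><code>city = s[s['city'] == 'Aparecida'].copy()              # copy at the first slice
tempTotalCases = city[['date', 'total_cases']].copy()  # and at the second
tempTotalCases["title"] = "Confirmed"                  # no warning now
</code></pre>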
<p>Separate from your original question you might find <code>df.rename()</code> useful. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html" rel="nofollow noreferrer">Link to docs</a>.</p>
<p>You'll be able to do something like:</p>
<pre><code>city = city.rename(columns={'totalCases': 'total_cases',
                            'totalDeaths': 'total_deaths',
                            'totalRecovered': 'total_recovered'})
</code></pre>
|
python|python-3.x|pandas|dataframe|jupyter-notebook
| 1
|
377,472
| 66,799,694
|
Image Resolution in Deep Learning
|
<p>Let me summarize my problem as follows:</p>
<p>I have been facing a memory error whenever I try to train a model with my custom dataset. Later, I noticed that some of the images are very high resolution compared to other images in the same dataset, yet their file size was not that much greater.</p>
<p>There is an image resizer in my pre-trained model, so I thought the situation I mentioned above wouldn't be a problem, but I could not be sure. Does it cause a problem?</p>
|
<p>Yes, it could be a potential source of the memory error. Memory errors usually happen for one of two reasons: a large image size (maybe even the resized dimensions are high) or a large batch size.</p>
<p>These are kind of related in some sense. If you use a higher image size, you must use a lower batch size to compensate for the memory. However, don't reduce the image size too much. If you think that despite having a reasonable image size you face memory errors, you have to go with reducing the batch size. Usually, the input image size varies from model to model.</p>
|
tensorflow|deep-learning|gpu|object-detection
| 1
|
377,473
| 66,927,880
|
Pandas: Extracting data from sorted dataframe
|
<p>Consider I have a dataframe with 2 columns: the first column is 'Name' (a string) and the second is 'Score' (an int). There are many duplicate names, and they are sorted such that all the 'Name1's are in consecutive rows, followed by 'Name2', and so on. Each row may contain a different score. The number of duplicate names may also be different for each unique string.</p>
<p>I wish to extract data from this dataframe and put it in a new dataframe such that there are no duplicate names in the Name column, and each name's corresponding score is the average of its scores in the original dataframe.</p>
<p>I've provided a picture for a better visualization:
<a href="https://i.stack.imgur.com/jrmKY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jrmKY.png" alt="x need not necessarily be equal to y." /></a></p>
|
<p>Firstly make use of <code>groupby()</code> method as mentioned by @QuangHong:</p>
<pre><code>result=df.groupby('Name', as_index=False)['Score'].mean()
</code></pre>
<p>Finally make use of <code>rename()</code> method:</p>
<pre><code>result=result.rename(columns={'Score':'Avg Score'})
</code></pre>
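<p>Both steps can also be chained into one line:</p>
<pre><code>result = df.groupby('Name', as_index=False)['Score'].mean().rename(columns={'Score': 'Avg Score'})
</code></pre>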
|
pandas|dataframe
| 1
|
377,474
| 67,000,060
|
loading model failed in torchserving
|
<p>I am learning to serve a model with TorchServe and I am new to serving.
This is the handler file I created for serving the VGG16 model;
I am using the model from Kaggle.</p>
<p>Myhandler.py file</p>
<pre><code>
import io
import os
import logging
import torch
import numpy as np
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms,datasets, models
from ts.torch_handler.image_classifier import ImageClassifier
from ts.torch_handler.base_handler import BaseHandler
from ts.utils.util import list_classes_from_module
import importlib
from torch.autograd import Variable
import seaborn as sns
import torchvision
from torch import optim, cuda
from torch.utils.data import DataLoader, sampler
import torch.nn as nn
import warnings
warnings.filterwarnings('ignore', category=FutureWarning)
# Data science tools
import pandas as pd
#path = 'C:\\Users\\fazil\\OneDrive\\Desktop\\pytorch\\vgg11\\vgg16.pt'
path = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\vgg16.pt'
#image = r'C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\normal.jpeg'
class VGGImageClassifier(ImageClassifier):
"""
Overriding the model loading code as a workaround for issue :
https://github.com/pytorch/serve/issues/535
https://github.com/pytorch/vision/issues/2473
"""
def __init__(self):
self.model = None
self.mapping = None
self.device = None
self.initialized = False
def initialize(self,context):
"""load eager mode state_dict based model"""
properties = context.system_properties
#self.device = torch.device(
#"cuda:" + str(properties.get("gpu_id"))
#if torch.cuda.is_available()
# else "cpu"
#)
model_dir = properties.get("model_dir")
model_pt_path = os.path.join(model_dir, "model.pt")
# Read model definition file
model_def_path = os.path.join(model_dir, "model.py")
if not os.path.isfile(model_def_path):
raise RuntimeError("Missing the model definition file")
checkpoint = torch.load(path, map_location='cpu')
logging.error('%s ',checkpoint)
self.model = models.vgg16(pretrained=True)
logging.error('%s ',self.model)
self.model.classifier = checkpoint['classifier']
logging.error('%s ', self.model.classifier )
self.model.load_state_dict(checkpoint['state_dict'], strict=False)
self.model.class_to_idx = checkpoint['class_to_idx']
self.model.idx_to_class = checkpoint['idx_to_class']
self.model.epochs = checkpoint['epochs']
optimizer = checkpoint['optimizer']
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
for param in model.parameters():
param.requires_grad = False
logger.debug('Model file {0} loaded successfully'.format(model_pt_path))
self.initialized = True
def preprocess(self,data):
image = data.get("data")
if image is None:
image = data.get("body")
image_transform =transforms.Compose([
transforms.Resize(size=256),
transforms.CenterCrop(size=224),
transforms.ToTensor(),
transforms.Normalize((0.5), (0.5))
])
image = Image.open(io.BytesIO(image)).convert('RGB')
image = image_transform(image)
image = image.unsqueeze(0)
return image
def inference(self, image):
outs = self.model.forward(image)
probs = F.softmax(outs , dim=1)
preds = torch.argmax(probs, dim=1)
logging.error('%s ',preds)
return preds
def postprocess(self, preds):
res = []
preds = preds.cpu().tolist()
for pred in preds:
label = self.mapping[str(pred)] [1]
res.append({'label': label , 'index': pred })
return res
_service = VGGImageClassifier()
def handle(data,context):
if not _service.initialized:
_service.initialize(context)
if data is None:
return None
data = _service.preprocess(data)
data = _service.inference(data)
data = _service.postprocess(data)
return data
</code></pre>
<p>this is the error i got</p>
<pre><code>Torchserve version: 0.3.1
TS Home: C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages
Current directory: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11
Temp directory: C:\Users\fazil\AppData\Local\Temp
Number of GPUs: 0
Number of CPUs: 4
Max heap size: 3038 M
Python executable: c:\users\fazil\anaconda3\envs\serve\python.exe
Config file: ./config.properties
Inference address: http://0.0.0.0:8080
Management address: http://0.0.0.0:8081
Metrics address: http://0.0.0.0:8082
Model Store: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\model_store
Initial Models: vgg16.mar
Log dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Metrics dir: C:\Users\fazil\OneDrive\Desktop\pytorch\vgg11\logs
Netty threads: 32
Netty client threads: 0
Default workers per model: 4
Blacklist Regex: N/A
Maximum Response Size: 6553500
Maximum Request Size: 6553500
Prefer direct buffer: false
Allowed Urls: [file://.*|http(s)?://.*]
Custom python dependency for model allowed: false
Metrics report format: prometheus
Enable metrics API: true
2021-04-08 12:33:22,517 [INFO ] main org.pytorch.serve.ModelServer - Loading initial models: vgg16.mar
2021-04-08 12:33:40,392 [INFO ] main org.pytorch.serve.archive.ModelArchive - eTag 85b61fc819804aea9db0ca8786c2e427
2021-04-08 12:33:40,423 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Adding new version 1.0 for model vgg16
2021-04-08 12:33:40,424 [DEBUG] main org.pytorch.serve.wlm.ModelVersionedRefs - Setting default version to 1.0 for model vgg16
2021-04-08 12:33:40,424 [INFO ] main org.pytorch.serve.wlm.ModelManager - Model vgg16 loaded.
2021-04-08 12:33:40,426 [DEBUG] main org.pytorch.serve.wlm.ModelManager - updateModel: vgg16, count: 4
2021-04-08 12:33:40,481 [INFO ] main org.pytorch.serve.ModelServer - Initialize Inference server with: NioServerSocketChannel.
2021-04-08 12:33:41,173 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,177 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12328
2021-04-08 12:33:41,180 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]14588
2021-04-08 12:33:41,180 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,181 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,181 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,186 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,186 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9002-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,199 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9001
2021-04-08 12:33:41,199 [INFO ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9002
2021-04-08 12:33:41,240 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]12008
2021-04-08 12:33:41,244 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,245 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,245 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9000
2021-04-08 12:33:41,255 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Listening on port: None
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - [PID]15216
2021-04-08 12:33:41,260 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Torch worker started.
2021-04-08 12:33:41,261 [DEBUG] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9003-vgg16_1.0 State change null -> WORKER_STARTED
2021-04-08 12:33:41,261 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Python runtime: 3.8.8
2021-04-08 12:33:41,262 [INFO ] W-9003-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Connecting to: /127.0.0.1:9003
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Inference API bind to: http://0.0.0.0:8080
2021-04-08 12:33:41,768 [INFO ] main org.pytorch.serve.ModelServer - Initialize Management server with: NioServerSocketChannel.
2021-04-08 12:33:41,774 [INFO ] main org.pytorch.serve.ModelServer - Management API bind to: http://0.0.0.0:8081
2021-04-08 12:33:41,775 [INFO ] main org.pytorch.serve.ModelServer - Initialize Metrics server with: NioServerSocketChannel.
2021-04-08 12:33:41,777 [INFO ] main org.pytorch.serve.ModelServer - Metrics API bind to: http://0.0.0.0:8082
2021-04-08 12:33:41,784 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9001).
2021-04-08 12:33:41,784 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9002).
2021-04-08 12:33:41,784 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9000).
2021-04-08 12:33:41,784 [INFO ] W-9003-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Connection accepted: ('127.0.0.1', 9003).
Model server started.
2021-04-08 12:33:48,486 [INFO ] pool-2-thread-1 TS_METRICS - CPUUtilization.Percent:100.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,487 [INFO ] pool-2-thread-1 TS_METRICS - DiskAvailable.Gigabytes:74.49674987792969|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,491 [INFO ] pool-2-thread-1 TS_METRICS - DiskUsage.Gigabytes:147.9403419494629|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,496 [INFO ] pool-2-thread-1 TS_METRICS - DiskUtilization.Percent:66.5|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,499 [INFO ] pool-2-thread-1 TS_METRICS - MemoryAvailable.Megabytes:4488.515625|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,504 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUsed.Megabytes:7658.80859375|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:33:48,513 [INFO ] pool-2-thread-1 TS_METRICS - MemoryUtilization.Percent:63.0|#Level:Host|#hostname:fazil,timestamp:1617865428
2021-04-08 12:34:24,385 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,439 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,440 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,443 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,444 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,446 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,447 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,448 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,523 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,582 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,597 [INFO ] nioEventLoopGroup-5-2 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,583 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:24,646 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,646 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:24,649 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:24,650 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:24,648 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,652 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:24,649 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:24,653 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,734 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:24,735 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,735 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:24,736 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:24,753 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:24,736 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,754 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,754 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:24,754 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:24,756 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:24,755 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,758 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:24,810 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:24,811 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:24,757 [DEBUG] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9000-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:24,871 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stderr
2021-04-08 12:34:24,812 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,872 [WARN ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9000-vgg16_1.0-stdout
2021-04-08 12:34:24,872 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:24,874 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:24,903 [INFO ] nioEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9001 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:24,876 [INFO ] W-9000-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9000 in 1 seconds.
2021-04-08 12:34:24,931 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:24,932 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:24,974 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,015 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
2021-04-08 12:34:25,015 [DEBUG] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - W-9001-vgg16_1.0 State change WORKER_STARTED -> WORKER_STOPPED
2021-04-08 12:34:25,016 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,017 [WARN ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerLifeCycle - terminateIOStreams() threadName=W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,017 [INFO ] W-9001-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Retry worker: 9001 in 1 seconds.
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stdout
2021-04-08 12:34:25,038 [INFO ] W-9000-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9000-vgg16_1.0-stderr
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stdout
2021-04-08 12:34:25,085 [INFO ] W-9001-vgg16_1.0-stderr org.pytorch.serve.wlm.WorkerLifeCycle - Stopped Scanner - W-9001-vgg16_1.0-stderr
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Backend worker process died.
2021-04-08 12:34:25,247 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - Traceback (most recent call last):
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 182, in <module>
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - worker.run_server()
2021-04-08 12:34:25,248 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 154, in run_server
2021-04-08 12:34:25,249 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.handle_connection(cl_socket)
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 116, in handle_connection
2021-04-08 12:34:25,250 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service, result, code = self.load_model(msg)
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\anaconda3\envs\serve\Lib\site-packages\ts\model_service_worker.py", line 89, in load_model
2021-04-08 12:34:25,251 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 104, in load
2021-04-08 12:34:25,253 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn(service.context)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "c:\users\fazil\anaconda3\envs\serve\lib\site-packages\ts\model_loader.py", line 131, in <lambda>
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - initialize_fn = lambda ctx: entry_point(None, ctx)
2021-04-08 12:34:25,254 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 268, in handle
2021-04-08 12:34:25,255 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - _service.initialize(context)
2021-04-08 12:34:25,256 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - File "C:\Users\fazil\AppData\Local\Temp\models\85b61fc819804aea9db0ca8786c2e427\hanndler.py", line 111, in initialize
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - self.model.classifier = checkpoint['classifier']
2021-04-08 12:34:25,257 [INFO ] W-9002-vgg16_1.0-stdout org.pytorch.serve.wlm.WorkerLifeCycle - KeyError: 'classifier'
2021-04-08 12:34:25,454 [INFO ] nioEventLoopGroup-5-4 org.pytorch.serve.wlm.WorkerThread - 9002 Worker disconnected. WORKER_STARTED
2021-04-08 12:34:25,456 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED
2021-04-08 12:34:25,457 [DEBUG] W-9002-vgg16_1.0 org.pytorch.serve.wlm.WorkerThread - Backend worker monitoring thread interrupted or backend worker process died.
java.lang.InterruptedException
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:2056)
at java.base/java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2133)
at java.base/java.util.concurrent.ArrayBlockingQueue.poll(ArrayBlockingQueue.java:432)
at org.pytorch.serve.wlm.WorkerThread.run(WorkerThread.java:188)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
2021-04-08 12:34:25,482 [WARN ] W-9002-vgg16_1.0 org.pytorch.serve.wlm.BatchAggregator - Load model failed: vgg16, error: Worker died.
</code></pre>
<p>I also load the model from a hard-coded path, because I got an error when I used <code>model_pt_path</code>.
Can someone help me with this?</p>
|
<blockquote>
<p>i am using the model from kaggle</p>
</blockquote>
<p>I presume you got the model from <a href="https://www.kaggle.com/pytorch/vgg16" rel="nofollow noreferrer">https://www.kaggle.com/pytorch/vgg16</a></p>
<p>I think you are loading the model incorrectly.
You are loading a checkpoint, which would work if your model was saved like this:</p>
<pre><code>torch.save({
'epoch': epoch,
'model_state_dict': model.state_dict(),
'optimizer_state_dict': optimizer.state_dict(),
'loss': loss,
...
}, PATH)
</code></pre>
<p>But it was probably saved like this:</p>
<pre><code>torch.save(model.state_dict(), PATH)
</code></pre>
<p>Which would explain the KeyError.
I modified the initialize method according to the second case:</p>
<pre><code>def initialize(self, context):
    """Load an eager-mode, state_dict-based model."""
    properties = context.system_properties
    model_dir = properties.get("model_dir")
    model_pt_path = os.path.join(model_dir, "model.pt")
    # Read the model definition file
    model_def_path = os.path.join(model_dir, "model.py")
    if not os.path.isfile(model_def_path):
        raise RuntimeError("Missing the model definition file")
    state_dict = torch.load(model_pt_path, map_location='cpu')
    self.model = models.vgg16(pretrained=True)
    logging.error('%s ', self.model)
    # self.model.classifier = checkpoint['classifier']  # removed: the file is a plain state_dict, not a checkpoint
    self.model.load_state_dict(state_dict, strict=False)
    self.initialized = True
</code></pre>
<p>Using the model linked above, I managed to start torchserve without error.</p>
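<p>If you're ever unsure which format a <code>.pt</code> file is in, a quick check (the path here is illustrative) is to load it and look at the keys: a plain <code>state_dict</code> has parameter names like <code>features.0.weight</code>, while a checkpoint wrapper has keys like <code>classifier</code> or <code>model_state_dict</code>:</p>
<pre><code>import torch

obj = torch.load("model.pt", map_location="cpu")
print(list(obj.keys())[:5])
</code></pre>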
|
python|pytorch|torch
| 1
|
377,475
| 67,123,411
|
Pandas replacing decimal separator in string columns
|
<p>I have a dataframe with many columns (>= 50). Some of them use a comma as decimal separator, some use a dot, and a few even mix both. A few columns are supposed to be strings.</p>
<pre class="lang-none prettyprint-override"><code>| colA | colB | colC | colD |
| 12.4 | 9,4 | 17.8 | eaui |
| 12.4 | 17,3 | 9,4 | euia |
| 13.2 | 20,7 | 9,4 | eaea |
| 10.0 | 1,8 | 2.3 | uiae |
</code></pre>
<p>When reading the csv, some columns get recognized as float while most are read as string.
I now want to make sure both decimal styles (comma and dot) are handled consistently.
I tried:</p>
<pre class="lang-py prettyprint-override"><code>df2 = df.apply(lambda x: x.str.replace(',','.'))
</code></pre>
<p>But I get an error saying that this operation only works on string values.</p>
<p>I also tried the following, but unfortunately also without success:</p>
<pre class="lang-py prettyprint-override"><code>df2 = df.apply(lambda x: x.str.replace(',','.') if type(x) == str)
</code></pre>
<p>I found several instructions which work by choosing specific columns one at a time, but I have too many columns to justify this.</p>
<p>I'm guessing there is a very easy one-liner that solves my problem, but I could not find it.
Any help and tips are appreciated!</p>
|
<p>You have to perform <code>str.replace</code> on a <code>pd.Series</code> object, i.e. a single column. You can first select the columns that are not numeric and then use <code>apply</code> on this sub-frame to replace the comma in each column:</p>
<pre><code>string_columns = df.select_dtypes(include='object').columns
df[string_columns] = df[string_columns].apply(lambda c: c.str.replace(',', '.'))
</code></pre>
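<p>If you then want the genuinely numeric columns as floats while leaving true string columns like <code>colD</code> untouched, a possible follow-up (sketch):</p>
<pre><code>for col in string_columns:
    converted = pd.to_numeric(df[col], errors='coerce')
    if converted.notna().all():   # every value parsed cleanly, so the column is numeric
        df[col] = converted
</code></pre>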
|
python|pandas
| 2
|
377,476
| 67,063,852
|
Performing SQL updates based on row number and using previous row for calculations
|
<p>I have some python/pandas code that I was using to perform calculations, but I was having performance issues with it. I'm trying to rewrite everything in SQL, updating the table in BigQuery.</p>
<p>The problem that I am facing is to update an existing table based on row number and using previous rows for calculations.</p>
<p>The code below is what I was using, and now I need to do this in SQL. "i" is the row number.</p>
<pre><code>if i <= 4:
    perfil['B'].iloc[i] = 0
else:
    perfil['B'].iloc[i] = perfil['A'].iloc[i] + perfil['B'].iloc[i - 2]
</code></pre>
<p>So, for the first 5 rows I do some calculations that don't use previous rows. But after that, the calculation will use previous rows.</p>
<p>My table is already created in this way:</p>
<pre><code>| DEPTH_M | A | B |
|-----------|---|---|
|1.2 |2 |0 |
|1.4 |3 |0 |
|1.6 |6 |0 |
|1.8 |2 |0 |
|2.0 |1 |0 |
|2.2 |6 |0 |
|2.4 |7 |0 |
|2.6 |6 |0 |
</code></pre>
<p>And after some user input, I need to perform the update in the table using the code that I showed before that results in this:</p>
<pre><code>| DEPTH_M | A | B |
|-----------|---|------------------------------------------------------|
|1.2 |2 |0 |
|1.4 |3 |0 |
|1.6 |6 |0 |
|1.8 |2 |0 |
|2.0 |1 |0 (Zero till now, first 5 rows are filled with zeros)|
|2.2 |6 |6 (6 from A + 0 from the past two rows) |
|2.4 |7 |7 (7 from A + 0 from the past two rows) |
|2.6 |6 |12 (6 from A + 6 from the past two rows) |
</code></pre>
<p>Thanks in advance!</p>
|
<p>Consider the example below:</p>
<pre><code>#standardSQL
with `project.dataset.table` as (
select 1.2 DEPTH_M, 2 A, 0 B union all
select 1.4, 3, 0 union all
select 1.6, 6, 0 union all
select 1.8, 2, 0 union all
select 2.0, 1, 0 union all
select 2.2, 6, 0 union all
select 2.4, 7, 0 union all
select 2.6, 6, 0 union all
select 3.0, 1, 0 union all
select 3.2, 6, 0 union all
select 3.4, 7, 0
)
select DEPTH_M, A,
ifnull(if(rn <= 5, 0, A) + sum(if(rn <= 5, 0, A)) over win, 0) as B,
format('%i + %i', if(rn <= 5, 0, A), ifnull(sum(if(rn <= 5, 0, A)) over win, 0)) as explanation
from (
select DEPTH_M, A, B,
row_number() over(order by DEPTH_M) rn
from `project.dataset.table`
)
window win as (partition by mod(rn, 2) order by rn rows between unbounded preceding and 1 preceding)
order by DEPTH_M
</code></pre>
<p>with output</p>
<p><a href="https://i.stack.imgur.com/W47hp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W47hp.png" alt="enter image description here" /></a></p>
<p>In case you need to update your table, you can use the query below:</p>
<pre><code>update `project.dataset.table` u
set B = s.B
from (
select DEPTH_M, A, ifnull(if(rn <= 5, 0, A) + sum(if(rn <= 5, 0, A)) over win, 0) as B
from (
select DEPTH_M, A, B,
row_number() over(order by DEPTH_M) rn
from `project.dataset.table`
)
window win as (partition by mod(rn, 2) order by rn rows between unbounded preceding and 1 preceding)
) s
where u.DEPTH_M = s.DEPTH_M
and u.A = s.A;
</code></pre>
|
sql|pandas|google-bigquery
| 0
|
377,477
| 66,867,203
|
How to fill a pandas dataframe column using a value from another dataframe column
|
<p>Firstly we can import some packages which might be useful</p>
<pre><code>import pandas as pd
import datetime
</code></pre>
<p>Say I now have a dataframe which has a date, name and age column.</p>
<pre><code>df1 = pd.DataFrame({'date': ['10-04-2020', '04-07-2019', '12-05-2015' ], 'name': ['john', 'tim', 'sam'], 'age':[20, 22, 27]})
</code></pre>
<p>Now say I have another dataframe with some random columns</p>
<pre><code>df2 = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6]})
</code></pre>
<p><strong>Question:</strong></p>
<p>How can I take the age value in <code>df1</code> filtered on the date (can select this value) and populate a whole new column in <code>df2</code> with this value? Ideally this method should generalise for any number of rows in the dataframe.</p>
<p><strong>Tried</strong></p>
<p>The following is what I have tried (on a similar example) but for some reason it doesn't seem to work (it just shows nan values in the majority of column entries except for a few which randomly seem to populate).</p>
<pre><code>y = datetime.datetime(2015, 5, 12)
df2['new'] = df1[(df1['date'] == y)].age
</code></pre>
<p><em><strong>Expected Output</strong></em></p>
<p>Since I have filtered above based on sams age (date corresponds to the row with sams name) I would like the new column to be added to df2 with his age as all the entries (in this case 27 repeated 3 times).</p>
<pre><code>df2 = pd.DataFrame({'a': [1,2,3], 'b': [4,5,6], 'new': [27, 27, 27]})
</code></pre>
|
<p>Try converting the datetime to the same string format used in <code>df1['date']</code>, then extract the matching age as a scalar with <code>.item()</code> so it broadcasts to every row:</p>
<pre class="lang-py prettyprint-override"><code>y = datetime.datetime(2015, 5, 12).strftime('%d-%m-%Y')
df2.loc[:, 'new'] = df1.loc[df1['date'] == y, "age"].item()
# Output
a b new
0 1 4 27
1 2 5 27
2 3 6 27
</code></pre>
<p>Your original attempt showed NaNs in most rows because assigning a filtered <code>Series</code> aligns on the index: only the <code>df2</code> rows whose index happened to match the filtered <code>df1</code> rows received a value. Extracting the scalar first avoids that alignment.</p>
|
python|pandas|dataframe
| 1
|
377,478
| 66,997,596
|
Nested loop problem in python while working with pandas
|
<p>I am trying to create a nested loop to load multiple files from an s3 bucket and concatenate them into a single dataframe. I am having trouble arranging the nested loops in order to do this.
Here is my code:</p>
<pre><code>import json
import pandas as pd
import boto3
import io
client = boto3.client('s3')
var = "filename"
filenumber = ["/0", "/1", "/2","/3"]
for j in range(len(filenumber)):
    response = client.list_objects(Bucket="bucketname", Prefix="subfolder/%s" % (var + filenumber[j]))
    df_list = []
    json_buffer = io.StringIO()
    for file in response["Contents"]:
        obj = client.get_object(Bucket="bucketname", Key=file["Key"])
        obj_df = pd.read_json(obj["Body"])
        df_list.append(obj_df)
    df = pd.concat(df_list)
    df.to_json(json_buffer)
</code></pre>
<p>On keeping <code>df = pd.concat(df_list)</code> inside the outer loop, I get the error: <code>DataFrame index must be unique for orient='columns'</code>.
If I keep the line outside the outer loop, I only get the file from the last iteration of the list, i.e. "/3", loaded into the dataframe.</p>
<p>Any help/suggestions are much appreciated. Sorry if my question needs editing; I'm kinda new to Stack Overflow.</p>
|
<p>You get that error when the frames you concatenate share index values, so the combined dataframe ends up with a non-unique index. Since it doesn't look like you're using the index, you can create a new one by using the following command:</p>
<p><code>df.reset_index(inplace=True)</code></p>
<p>or</p>
<p>If you want to remove the previous index.</p>
<p><code>df.reset_index(drop=True, inplace=True)</code></p>
<p>For a deeper understanding, see <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#set-reset-index" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#set-reset-index</a></p>
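<p>Applied to your loop, a minimal sketch:</p>
<pre><code># either reset each frame's index as you collect them ...
df_list.append(obj_df.reset_index(drop=True))
# ... or simply let concat renumber everything at the end:
df = pd.concat(df_list, ignore_index=True)
</code></pre>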
|
python|pandas|list|loops|boto3
| 1
|
377,479
| 67,050,913
|
Google colab cache google drive content
|
<p>I have a dataset on Google Drive that's about 20 GB.
I use a generator to pull in the dataset to my keras/TF models, and the overhead of loading the files (for every batch) is insane.</p>
<p>I want to prefetch the content as one operation and then simply fetch it from the local VM disk.</p>
<p>I tried this:</p>
<pre class="lang-py prettyprint-override"><code>from google.colab import drive
drive.mount('/content/drive')
!mkdir -p $RAW_NOTEBOOKS_DIR
!cp $RAW_NOTEBOOKS_DIR $LOCAL_NOTEBOOKS_DIR
</code></pre>
<p>However, this snippet finishes executing instantly (so it obviously didn't download the data, which was the intent of the <code>cp</code> command: copying from Drive to local disk).</p>
<p>Is this at all possible?</p>
<hr />
<pre class="lang-py prettyprint-override"><code>RAW_NOTEBOOKS_DIR = "/content/drive/My\ Drive/Colab\ Notebooks"
</code></pre>
|
<p>There's a good example on Google Codelabs for doing this; they write the dataset to local TFRecord files:</p>
<p><a href="https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/#0" rel="nofollow noreferrer">https://codelabs.developers.google.com/codelabs/keras-flowers-tpu/#0</a></p>
<p>you can find more info here:</p>
<p><a href="https://keras.io/examples/keras_recipes/tfrecord/" rel="nofollow noreferrer">https://keras.io/examples/keras_recipes/tfrecord/</a></p>
<p>So instead of reading the data from Google Drive every time, you just need to read it once, write it to the local disk as a TFRecord, and then pass that to the model for training.</p>
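<p>A minimal sketch of that caching step (names like <code>my_generator</code> stand in for your existing Drive-reading loader):</p>
<pre><code>import tensorflow as tf

def serialize(image_bytes, label):
    feature = {
        "image": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
        "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
    }
    return tf.train.Example(features=tf.train.Features(feature=feature)).SerializeToString()

# Write once to the local VM disk ...
with tf.io.TFRecordWriter("/content/cache.tfrecord") as writer:
    for image_bytes, label in my_generator():
        writer.write(serialize(image_bytes, label))

# ... then train from the local copy on every epoch.
dataset = tf.data.TFRecordDataset("/content/cache.tfrecord")
</code></pre>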
<p>If you follow the guides, it's pretty straightforward.</p>
|
python|tensorflow|keras|google-colaboratory
| 0
|
377,480
| 67,142,631
|
comparing two arrays of booleans
|
<pre><code>a = np.array(['numeric','string','numeric'])
b = np.array(['numeric','string','numeric','numeric','string'])
</code></pre>
<p>I am trying to compare two arrays a and b.</p>
<p>I want to get something like <code>array([ True, True, True])</code>, because the first 3 elements are identical.</p>
<p>I know I could truncate the longer array in order to compare them:</p>
<pre><code>if len(a)-len(b)<0:
b = np.array(b[0:len(a)-len(b)])
if len(a)-len(b) > 0:
a = np.array(a[0:len(b)-len(a)])
b==a
</code></pre>
<p>or</p>
<pre><code>if len(a)-len(b)<0:
b = b[0:len(a)]
else:
a = a[0:len(b)]
a==b
</code></pre>
<p>but I'm wondering if there is a built-in numpy function to do this without having to truncate them.</p>
|
<p>I'm not sure there's a built-in function for exactly this, but here's a quick solution:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
a = np.array(['numeric','string','numeric'])
b = np.array(['numeric','string','numeric','numeric','string'])
c = np.array([i == j for i, j in zip(a, b)])
print(c)
</code></pre>
<pre><code>Out: [True, True, True]
</code></pre>
<p>Zip will automatically truncate the longer array to match the length of the shorter array.</p>
<p>A word of caution though, this is a solution for a List or a 1D array. If you plan to do this with a 2D array, a different solution will be needed.</p>
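<p>If you want to stay fully vectorized (no Python-level loop), slicing both arrays to their common length does the same job:</p>
<pre><code>n = min(a.shape[0], b.shape[0])
c = a[:n] == b[:n]   # array([ True,  True,  True])
</code></pre>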
|
python|numpy
| 1
|
377,481
| 66,836,559
|
Drop a column and its corresponding row in a data frame
|
<p>I tried to drop a column and its corresponding rows in a pandas dataframe using <code>.drop()</code>, but it's dropping only the column and not the rows that have a value in that column. For example, I have "unknown" as a genre column and, corresponding to it, a movie row. When I drop the "unknown" column, only the column is deleted but the movie still exists. I want to drop both the column and its corresponding rows. Is there a single command to do this?</p>
<p>Please find below the attachment of the dataframe</p>
<p><a href="https://i.stack.imgur.com/JZOfi.png" rel="nofollow noreferrer">enter image description here</a></p>
|
<p>Try the other way around. First retain only the rows of your dataframe for which value in "unknown" column is not equal to 1 and then get rid of the column:</p>
<pre class="lang-py prettyprint-override"><code>new_df = df.loc[~(df["unknown"] == 1), :].drop(columns="unknown")
</code></pre>
|
pandas
| 0
|
377,482
| 67,141,529
|
I want to scrape the reviews through Selenium web scraping and make a csv file of them, but I'm finding it hard to remove the error. How do I remove the error?
|
<pre><code>import pandas as pd
from bs4 import BeautifulSoup
from time import sleep
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import sqlite3 as sql

urls = []
product_urls = []
list_of_reviews = []

# Each page's url
for i in range(1, 252):
    urls.append(f"https://www.etsy.com/in-en/c/jewelry/earrings/ear-jackets-and-climbers?ref=pagination&explicit=1&page={i}")

# Scraping each product's urls | 16,064 products
for url in urls:
    driver = webdriver.Chrome(executable_path=r"C:\Users\dell\Downloads\chromedriver_win32\chromedriver.exe")
    driver.get(url)
    sleep(5)
    for i in range(1, 65):
        product = WebDriverWait(driver, 20).until(EC.presence_of_element_located(By.XPATH, f'//*[@id="content"]/div/div[1]/div/div[3]/div[2]/div[2]/div[1]/div/div/ul/li[{i}]/div/a'))
        product_urls.append(product.get_attribute('href'))

# Scraping each product's reviews
driver = webdriver.Chrome(executable_path='chromedriver.exe')
for product_url in product_urls[15:]:
    try:
        driver.get(product_url)
        sleep(5)
        html = driver.page_source
        soup = BeautifulSoup(html, 'html')
        for i in range(4):
            try:
                list_of_reviews.append(soup.select(f'#review-preview-toggle-{i}')[0].getText().strip())
            except:
                continue
        while True:
            try:
                next_button = driver.find_element_by_xpath('//*[@id="reviews"]/div[2]/nav/ul/li[position() = last()]/a[contains(@href, "https")]')
                if next_button != None:
                    next_button.click()
                    sleep(5)
                    html = driver.page_source
                    soup = BeautifulSoup(html, 'html')
                    for i in range(4):
                        try:
                            list_of_reviews.append(soup.select(f'#review-preview-toggle-{i}')[0].getText().strip())
                        except:
                            continue
            except Exception as e:
                print('finsish : ', e)
                break
    except:
        continue

scrapedReviewsAll = pd.DataFrame(list_of_reviews, index=None, columns=['reviews'])
scrapedReviewsAll.to_csv('scrapedReviewsAll.csv')
df = pd.read_csv('scrapedReviewsAll.csv')
conn = sql.connect('scrapedReviewsAll.db')
df.to_sql('scrapedReviewsAllTable', conn)
</code></pre>
<p><strong>TypeError: __init__() takes 2 positional arguments but 3 were given</strong>,
raised by the line <code>product = WebDriverWait(driver, 20).until(EC.presence_of_element_located(By.XPATH, f'//*[@id="content"]/div/div[1]/div/div[3]/div[2]/div[2]/div[1]/div/div/ul/li[{i}]/div/a'))</code>.
<strong>I keep getting this error while running the program. How can this TypeError be solved?</strong></p>
|
<p>Try changing the code that scrapes the href to be as follows:</p>
<pre><code># Scraping each product's urls | 16,064 products
for url in urls:
    driver = webdriver.Chrome(executable_path=r"C:\Users\dell\Downloads\chromedriver_win32\chromedriver.exe")
    driver.get(url)
    sleep(5)
    for i in range(1, 65):
        product = driver.find_element_by_xpath("//div[@id='content']//div[contains(@class,'search-listings-group')]//ul//li[" + str(i) + "]/div/a")
        product_urls.append(product.get_attribute('href'))
</code></pre>
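<p>As for the <code>TypeError</code> itself: it comes from passing <code>By.XPATH</code> and the XPath string as two separate arguments. The <code>expected_conditions</code> classes take a single locator <em>tuple</em>, so the original wait should look like this (XPath shortened here for illustration):</p>
<pre><code>product = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.XPATH, f'//ul/li[{i}]/div/a'))  # note the inner parentheses
)
</code></pre>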
|
python|pandas|selenium|selenium-webdriver|nlp
| 1
|
377,483
| 66,767,157
|
How to Append Unique Rows and Take Its Values and put in a Dataframe
|
<p>This is my initial df:</p>
<pre><code> Date Items Stocks Sold
11/07/2020 Item1 10 40
11/08/2020 Item1 20 50
11/09/2020 Item1 30 90
11/10/2020 Item1 30 30
11/07/2020 Item2 10 10
11/08/2020 Item2 20 100
11/09/2020 Item2 30 70
11/10/2020 Item2 40 80
</code></pre>
<p>I want to create a new df that is grouped per item, like so:</p>
<pre><code>Items Stocks Sold
Item1 90 210
Item2 100 260
</code></pre>
<p>Here's my code:</p>
<pre><code>item = df['Items'].unique()
stocks = df.groupby('Items')['Stocks'].sum()
sold = df.groupby('Items')['Sold'].sum()
dff = pd.DataFrame({'Item': item, 'Stocks': stocks, 'Sold': sold})
</code></pre>
<p>It's working; however, the numbers are all mixed up. I'm getting this result:</p>
<pre><code>Items Stocks Sold
Item1 100 260
Item2 90 210
</code></pre>
<p>How do I make sure that each value has the right number in my df when I groupby?</p>
|
<p>Try this. Doing the aggregation in a single <code>groupby</code> keeps every value aligned with its group; your version mixed rows up because <code>unique()</code> returns items in order of appearance while <code>groupby</code> sorts its keys:</p>
<pre><code>df.groupby('Items', as_index=False)[['Stocks', 'Sold']].sum()
</code></pre>
<p>Output:</p>
<pre><code> Items Stocks Sold
0 Item1 90 210
1 Item2 100 260
</code></pre>
<p>And if you want different agg functions for different columns:</p>
<pre><code>df.groupby('Items', as_index=False)[['Stocks', 'Sold']]\
.agg(stocks_mean=('Stocks', 'mean'), sold_sum=('Sold', 'sum'))
</code></pre>
<p>Output:</p>
<pre><code> Items stocks_mean sold_sum
0 Item1 22.5 210
1 Item2 25.0 260
</code></pre>
|
python|pandas
| 0
|
377,484
| 47,337,195
|
selecting not None value from a dataframe column
|
<p>I would like to use the <code>fillna</code> function to fill the <code>None</code> values of a column with that column's most frequent value that is not <code>None</code> or <code>nan</code>.</p>
<p>Input DF:</p>
<pre><code>Col_A
a
None
None
c
c
d
d
</code></pre>
<p>The output DataFrame could be, for example:</p>
<pre><code>Col_A
a
c
c
c
c
d
d
</code></pre>
<p>Any suggestion would be very appreciated.
Many Thanks, Best Regards,
Carlo</p>
|
<p>Prelude: If your <code>None</code> is actually a <em>string</em>, you can simplify any headaches by getting rid of them first-up. Use <code>replace</code>:</p>
<pre><code>df = df.replace('None', np.nan)
</code></pre>
<hr>
<p>I believe you could use <code>fillna</code> + <code>value_counts</code>:</p>
<pre><code>df
Col_A
0 a
1 NaN
2 NaN
3 c
4 c
5 d
6 d
df.fillna(df.Col_A.value_counts(sort=False).index[0])
Col_A
0 a
1 c
2 c
3 c
4 c
5 d
6 d
</code></pre>
<p>Or, with Vaishali's suggestion, use <code>idxmax</code> to pick <code>c</code>:</p>
<pre><code>df.fillna(df.Col_A.value_counts(sort=False).idxmax())
Col_A
0 a
1 c
2 c
3 c
4 c
5 d
6 d
</code></pre>
<p>The fill-values could either be <code>c</code> or <code>d</code>, depending on whether you include <code>sort=False</code> or not.</p>
<p><strong>Details</strong></p>
<pre><code>df.Col_A.value_counts(sort=False)
c 2
a 1
d 2
Name: Col_A, dtype: int64
</code></pre>
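<p>Equivalently, a sketch using <code>mode()</code>, which returns the most frequent value(s) directly (ties come back sorted, so <code>[0]</code> picks <code>c</code> here):</p>
<pre><code>df['Col_A'] = df['Col_A'].fillna(df['Col_A'].mode()[0])
</code></pre>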
|
python|pandas|dataframe
| 6
|
377,485
| 47,473,996
|
Variable substitution without duplicating the tensor (or having the graph accept two different inputs)
|
<p>I think it is easier to clarify what I need with a MWE (question is in the comment).</p>
<pre><code>import tensorflow as tf
import numpy as np

class MLP:
    def __init__(self, sizes, activations):
        self.input = last_out = tf.placeholder(dtype=tf.float32, shape=[None, sizes[0]])
        self.layers = []
        for l, size in enumerate(sizes[1:]):
            self.layers.append(last_out)
            last_out = tf.layers.dense(last_out, size, activation=activations[l], kernel_initializer=tf.glorot_uniform_initializer())
        self.layers.append(last_out)

def main():
    session = tf.Session()
    dim = 3
    nn_sizes = [dim, 15, 1]
    nn_activations = [tf.nn.tanh, tf.nn.tanh, tf.identity]
    mynet = MLP(nn_sizes, nn_activations)
    w = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='mynet')
    x1 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x1')
    x2 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x2')
    x3 = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='x3')
    myfun = tf.reduce_sum(tf.multiply(x3, new_tensor))
    # new_tensor has to be the difference myfun(x2)-myfun(x3).
    # However, the network is the same and its input variable has a different name.
    # I would like to have something like:
    #   substitute(myfun, input, x1)
    #   substitute(myfun, input, x2)
    # without duplicating the network.
    optimizer = tf.contrib.opt.ScipyOptimizerInterface(myfun, var_list=w)
    n = 1000
    x1_samples = np.asmatrix(np.random.rand(n, dim))
    x2_samples = np.asmatrix(np.random.rand(n, dim))
    x3_samples = np.asmatrix(np.random.rand(n, 1))
    print(session.run(myfun, {x1: x1_samples, x2: x2_samples, x3: x3_samples}))
    optimizer.minimize(session, {x1: x1_samples, x2: x2_samples, x3: x3_samples})
    print(session.run(myfun, {x1: x1_samples, x2: x2_samples, x3: x3_samples}))

if __name__ == '__main__':
    main()
</code></pre>
|
<p>Here's one approach: concatenate <code>x1</code> and <code>x2</code> into a single batch so the very same dense layers (and hence the same weights) process both inputs, then use <code>tf.split</code> to recover the two evaluations afterwards (I assume there is a typo and what you want is <code>x3 * (mynet(x2) - mynet(x1))</code>?):</p>
<pre><code>import tensorflow as tf
import numpy as np

class MLP:
    def __init__(self, x1, x2, sizes, activations):
        x_sizes = [tf.shape(x1)[0], tf.shape(x2)[0]]
        last_out = tf.concat([x1, x2], axis=0)
        self.layers = []
        for l, size in enumerate(sizes[1:]):
            self.layers.append(last_out)
            last_out = tf.layers.dense(last_out, size, activation=activations[l], kernel_initializer=tf.glorot_uniform_initializer())
        self.layers.append(last_out)
        self.x1_eval, self.x2_eval = tf.split(last_out, x_sizes, axis=0)

def main():
    session = tf.Session()
    dim = 3
    nn_sizes = [dim, 15, 1]
    nn_activations = [tf.nn.tanh, tf.nn.tanh, tf.identity]
    x1 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x1')
    x2 = tf.placeholder(dtype=tf.float32, shape=[None, dim], name='x2')
    x3 = tf.placeholder(dtype=tf.float32, shape=[None, 1], name='x3')
    mynet = MLP(x1, x2, nn_sizes, nn_activations)
    # Collect the trainable variables only after the network has been built.
    w = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
    myfun = tf.reduce_sum(tf.multiply(x3, (mynet.x2_eval - mynet.x1_eval)))
    optimizer = tf.contrib.opt.ScipyOptimizerInterface(myfun, var_list=w)
    n = 1000
    x1_samples = np.asmatrix(np.random.rand(n, dim))
    x2_samples = np.asmatrix(np.random.rand(n, dim))
    x3_samples = np.asmatrix(np.random.rand(n, 1))
    session.run(tf.global_variables_initializer())
    print(session.run(myfun, {x1: x1_samples, x2: x2_samples, x3: x3_samples}))
    optimizer.minimize(session, {x1: x1_samples, x2: x2_samples, x3: x3_samples})
    print(session.run(myfun, {x1: x1_samples, x2: x2_samples, x3: x3_samples}))

if __name__ == '__main__':
    main()
</code></pre>
|
tensorflow
| 1
|
377,486
| 47,251,621
|
Python pandas/numpy fill 1 to cells with value, and zero to cells of nan
|
<p>I have an array with cells of different types of data (String, float, Integer, ...) . </p>
<p>e.g. </p>
<pre><code>[[18 '1/4/11' 73.0 'Male' 4.0]
[18 nan 73.0 'Male' nan]
[18 '7/5/11' 73.0 'Male' 7.0]]
</code></pre>
<p>And I want to assign 0 to cells with value <code>nan</code>, and 1 to all others</p>
<p>expected outcome: </p>
<pre><code>[[1 1 1 1 1
1 0 1 1 0
1 1 1 1 1]]
</code></pre>
<p>With pandas's <code>fillna(0)</code>, I'm able to fill <code>nan</code> with 0, but how do I assign 1 to all the cells with available values, given that the data is of different types?</p>
|
<p>Whether it's a dataframe or an ndarray, you can use <code>pd.notnull</code>:</p>
<pre><code>>>> arr = np.array([[18, '1/4/11', 73.0, 'Male', 4.0],
... [18, np.nan, 73.0, 'Male', np.nan],
... [18, '7/5/11', 73.0, 'Male', 7.0]], dtype=object)
>>> pd.notnull(arr)
array([[ True, True, True, True, True],
[ True, False, True, True, False],
[ True, True, True, True, True]], dtype=bool)
</code></pre>
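<p>And to get the exact 0/1 output asked for, cast the boolean mask to integers:</p>
<pre><code>>>> pd.notnull(arr).astype(int)
array([[1, 1, 1, 1, 1],
       [1, 0, 1, 1, 0],
       [1, 1, 1, 1, 1]])
</code></pre>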
|
python|arrays|pandas|numpy|dataframe
| 1
|
377,487
| 47,405,628
|
Bokeh 'utf8' codec can't decode byte 0xe9 : unexpected end of data
|
<p>I'm using Bokeh to plot a pandas DataFrame. Following is the code:</p>
<pre><code>map_options = GMapOptions(lat=19.075984, lng=72.877656, map_type="roadmap", zoom=11)
plot = GMapPlot(x_range=DataRange1d(), y_range=DataRange1d(), map_options=map_options)
plot.api_key = "xxxxx"
source = ColumnDataSource(
data=dict(
lat=[float(i) for i in data.lat],
lon=[float(i) for i in data.lon],
size=[int(i)/1000 for i in data['count']],
ID = [i for i in data.merchant_id],
Merchant = [str(i) for i in data.merchant_name],
count = [float(i) for i in data['count']]
)
)
hover = HoverTool(tooltips=[
("(x,y)", "($lat, $lon)"),
("ID", "$ID"),
("Name", "@Merchant"),
("count","$count")
])
# hover.renderers.append(circle_glyph)
plot.tools.append(hover)
circle = Circle(x="lon", y="lat", size='size', fill_color="blue", fill_alpha=0.8, line_color=None)
plot.add_glyph(source, circle)
# plot.add_layout(labels)
plot.add_tools(PanTool(), WheelZoomTool(), BoxSelectTool())
output_file("gmap_plot.html")
show(plot)
</code></pre>
<p>In the Hovertool using the "Name" field throws the following error:</p>
<blockquote>
<p>UnicodeDecodeError: 'utf8' codec can't decode byte 0xe9 in position 6:
unexpected end of data</p>
</blockquote>
<p>Also, commenting out the "Name" field still gives me the error, but an output plot is produced.</p>
<p>Following is the dataframe I'm using:</p>
<pre><code> lat lon merchant_id count merchant_name
0 18.539971 73.893963 757 777 Portobello
1 18.565766 73.910980 745 10193 The Wok Box
2 18.815427 76.775143 1058 2354 Burrito Factory
3 18.914633 72.817916 87 1985 Flamboyante
4 18.915794 72.824370 94 1116 Butterfly Pond
5 18.916473 72.826868 145 1010 Leo's Boulangerie
6 18.918923 72.828325 115 517 Brijwasi Sweets
7 18.928063 72.832888 973 613 Pandora's Box
8 18.928562 72.832353 101 64 La Folie Patisserie
9 18.929516 72.831860 961 6673 Burma Burma
</code></pre>
<p>From what I know, the merchant name contains characters that cause the error. I've tried encoding the column with 'utf-8', 'ascii', etc., but I get the following error:</p>
<pre><code>data['merchant_name'] = data['merchant_name'].str.encode('utf-8')
</code></pre>
<blockquote>
<p>UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 6: ordinal not in range(128)</p>
</blockquote>
<p>Any Idea on how to proceed ?</p>
|
<p>The byte 0xe9 is not valid ASCII, because it is 233 in decimal and ASCII defines only 128 symbols (0&ndash;127). In UTF-8 it is a special lead byte, which introduces a character occupying the next two bytes as well. Thus the string is probably in another encoding; for example, in latin1 and latin2 the byte 0xe9 represents the letter é.</p>
<p>And remember: you must decode the string first. You tried to encode a <code>str</code> (a Python 2 byte string), which does not make sense, so Python attempted its default implicit <code>decode('ascii')</code> first and you got the <code>UnicodeDecodeError</code> from the <code>encode</code> call.</p>
<p>I didn't manage to replicate the error and also I don't see any special characters in the data you provided (especially I don't see the 0xe9 byte). So I can only guess. I would try something like this:</p>
<pre><code>data['merchant_name'] = data['merchant_name'].str.decode('latin1').encode('utf-8')
</code></pre>
<p>And last but not least: please, when you post your code, post the complete code with all imports and everything. I never used Bokeh, and when I tried to replicate your error, it was time consuming to reconstruct the setup. (Anyway, in the end I managed to import everything, but I didn't get your error.)</p>
|
python|pandas|encoding|bokeh
| 9
|
377,488
| 47,295,566
|
How to use pandas to shift the last row to the first
|
<p>So I have a dataframe that looks like this:</p>
<pre><code> #1 #2
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
</code></pre>
<p>What I want to do is to shift the first row of the first column (11.6985) down 1 row, and then the last row of the first column (72.4399) would be shifted to the first row, first column, like so:</p>
<pre><code> #1 #2
1980-01-01 72.4399 126.0
1980-01-02 11.6985 134.0
1980-01-03 43.6431 130.0
1980-01-04 54.9089 126.0
1980-01-05 63.1225 120.0
</code></pre>
<p>The idea is that I want to use these dataframes to find an R^2 value for every shift, so I need to use all the data or it might not work. I have tried to use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html" rel="noreferrer">pandas.Dataframe.shift()</a>:</p>
<pre><code>print(data)
#Output
1980-01-01 11.6985 126.0
1980-01-02 43.6431 134.0
1980-01-03 54.9089 130.0
1980-01-04 63.1225 126.0
1980-01-05 72.4399 120.0
print(data.shift(1,axis = 0))
1980-01-01 NaN NaN
1980-01-02 11.6985 126.0
1980-01-03 43.6431 134.0
1980-01-04 54.9089 130.0
1980-01-05 63.1225 126.0
</code></pre>
<p>So it just shifts both columns down and gets rid of the last row of data, which is not what I want.</p>
<p>Any advice?</p>
|
<p>Not sure about the performance, but you could try <code>numpy.roll</code>:</p>
<pre><code>import numpy as np
print(df.apply(np.roll, shift=1))
# #1 #2
#1980-01-01 72.4399 120.0
#1980-01-02 11.6985 126.0
#1980-01-03 43.6431 134.0
#1980-01-04 54.9089 130.0
#1980-01-05 63.1225 126.0
</code></pre>
<hr>
<p>To shift column <code>#1</code> only:</p>
<pre><code>df['#1'] = np.roll(df['#1'], shift=1)
print(df)
# #1 #2
#1980-01-01 72.4399 126.0
#1980-01-02 11.6985 134.0
#1980-01-03 43.6431 130.0
#1980-01-04 54.9089 126.0
#1980-01-05 63.1225 120.0
</code></pre>
|
python|python-3.x|pandas
| 12
|
377,489
| 47,192,221
|
Numpy list of matrices as dataset in TensorFlow
|
<p>Hi there!</p>
<p>I have a list of n 30x7 matrices (so I guess it's a 3D array) and another list of n labels (basically TRUE or FALSE). How can I use this as a dataset for TensorFlow? Most tutorials use images, so I can't find how to do it for my case.</p>
<p>Thanks a lot!</p>
|
<p>That's effectively an image, any of the image based tutorials should help you out. If you want to follow tutorials for images, e.g. convolutional neural networks you'll want to <code>reshape</code> your 30x7 matrix to include 1 "color" channel. For example: <code>tf.reshape(data_matrix, shape=[30, 7, 1])</code> will just change the dimensionality to <code>[30, 7, 1]</code> without affecting the actual shape of the data any.</p>
<p>If you batch these together you'll have a dataset of size <code>[batch_size, 30, 7, 1]</code> now.</p>
<p>Now you've got things in the right format for working through some tutorials on convolutional networks.</p>
<p>The bigger question you should ask is whether you want to treat your data as an image and apply a convolution network to it. The most fundamental question to ask of your data is whether the patterns you are expecting the network to extract have some 2D spatial locality. For example, an image is effectively processed by looking for patterns over a small window typically of size [3x3] or [5x5] at each layer of the network. Does it make sense to look at your data in such a way?</p>
<p>If so, great, convolutional networks are your thing. If the data doesn't have any kind of 2D spatial meaning, then you might start by simply flattening the data to a <code>[batchsize, 210]</code> size vector and feeding it through a fully connected neural network.</p>
<p>Perhaps, instead each row of the matrix represents some data, and there is a logical ordering to the sequence of rows from 1 to 30 (or alternatively by column from columns 1 to 7). If the data is most logically represented as a sequence of rows (or columns) then you will probably be better off looking into RNNs, recurrent neural networks, which operate on sequences more effectively.</p>
<p>In any case look into <code>tf.reshape</code>, and <code>tf.stack</code>. And consider doing most of this pre-processing work in numpy, it's always easier to debug in numpy.</p>
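<p>Concretely, a minimal sketch of turning your list of matrices and labels into a TensorFlow input pipeline (<code>matrix_list</code> and <code>label_list</code> stand in for your data):</p>
<pre><code>import numpy as np
import tensorflow as tf

data = np.asarray(matrix_list, dtype=np.float32)   # shape [n, 30, 7]
labels = np.asarray(label_list, dtype=np.int32)    # shape [n], 0/1 for FALSE/TRUE

dataset = tf.data.Dataset.from_tensor_slices((data, labels))
dataset = dataset.shuffle(buffer_size=1000).batch(32)
</code></pre>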
|
numpy|tensorflow
| 0
|
377,490
| 47,394,761
|
checkpoint file for different instances of tensorflow program
|
<p>I'm tweaking parameters in my tensorflow script to determine best performance.</p>
<p>Basically, I'm running different instances of the same script at the same time with different parameters, but saving the model under different names.</p>
<p>I thought that changing the name in the <code>saver</code> would be enough. So if I run 3 independent instances of the script <strong>each one</strong> would have a different filename, e.g.,</p>
<pre><code>saver.save(session,"./whatever_param1.ckpt"),
saver.save(session,"./whatever_param2.ckpt"),
saver.save(session,"./whatever_param3.ckpt")
</code></pre>
<p>I'm actually getting different <code>.meta</code>, <code>.index</code> and <code>.data-00000-of-00001</code> files.</p>
<p>What I don't understand is the file named <code>checkpoint</code>. All instances of my script running concurrently seem to write to the same <code>checkpoint</code> file.</p>
<p>I think I'm messing up the results for the different instances of the script running at the same time.</p>
<p>Could you please let me know why only one <code>checkpoint</code> file is created even if a pass a different name to the <code>saver</code>?</p>
<p>Thanks. </p>
|
<p>I found the answer in the <a href="https://www.tensorflow.org/api_docs/python/tf/train/Saver" rel="nofollow noreferrer">tensorflow</a> documentation</p>
<p>In particular this part:</p>
<blockquote>
<p>That protocol buffer is stored in a file named 'checkpoint' next to the checkpoint files.</p>
<p>If you create several savers, you can specify a different filename for the protocol buffer file in the call to save().</p>
</blockquote>
<p>And in <code>save()</code>
the argument to specify is:</p>
<blockquote>
<p>latest_filename: Optional name for the protocol buffer file that will contains the list of most recent checkpoint filenames. That file, kept in the same directory as the checkpoint files, is automatically managed by the saver to keep track of recent checkpoints. Defaults to 'checkpoint'.</p>
</blockquote>
<p>so the solution is:</p>
<pre><code>saver.save(session, "./whatever_param1.ckpt", latest_filename='checkpoint_param1')
</code></pre>
|
python|tensorflow|artificial-intelligence
| 0
|
377,491
| 47,368,382
|
Representations of the binary string in the tree graph paths
|
<p>I'm trying to write an algorithm in which a graph is searched for possible node paths representing binary strings, where even-numbered nodes correspond to the digit '0' and odd-numbered nodes to '1'. The following code is, for the time being, inelegant and not optimized. In the code comments I put some explanations of its actions.</p>
<pre><code>import networkx as nx
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("graph.csv", sep=';', encoding='utf-8')
df1 = df.astype(int)
g = nx.Graph()
g = nx.from_pandas_dataframe(df1, 'nodes_1', 'nodes_2')
plt.show()

# I load any binary string.
# Example: '01'
z = input('Write a binary number. \n')
z1 = list(z)
l1 = df1['nodes_2'].tolist()
# I add 0 to the list, because the node 0 is missing from df1['nodes_2'].
l1[:0] = [0]

# I check whether the first digit of the entered variable 'z' is 0 or 1,
# and from the matching values I create the list 'a'.
a = []
if int(z1[0]) == 0:
    for i in l1:
        if i % 2 == 0:
            num1 = int(i)
            a.append(num1)
elif int(z1[0]) == 1:
    for i in l1:
        if i % 2 == 1:
            num1 = int(i)
            a.append(num1)
else:
    print('...')

# I create 'b', a list of neighbor lists for the nodes from list 'a'.
b = []
c = []
for i in a:
    c.append(i)
    x4 = g.neighbors(i)
    b.append(x4)

# For neighbors I choose only those that are odd in this case,
# because the second digit of the entered 'z' is 1,
# and then I create a list 'e' of matching pairs representing the possible graph paths.
e = []
if int(z1[1]) == 0:
    for j in range(len(b)):
        for k in range(len(b[j])):
            if b[j][k] % 2 == 0:
                d = [a[j], b[j][k]]
                e.append(d)
elif int(z1[1]) == 1:
    for j in range(len(b)):
        for k in range(len(b[j])):
            if b[j][k] % 2 == 1:
                d = [a[j], b[j][k]]
                e.append(d)

print(a)
# Output:
# [0, 2, 4, 6, 8, 10, 12, 14]
print(b)
# Output:
# [[1, 2], [0, 5, 6], [1, 9, 10], [2, 13, 14], [3], [4], [5], [6]]
print(e)
# Output:
# [[0, 1], [2, 5], [4, 1], [4, 9], [6, 13], [8, 3], [12, 5]]
</code></pre>
<p>csv data format:</p>
<pre><code> nodes_1 nodes_2
0 0 1
1 0 2
2 1 3
3 1 4
4 2 5
5 2 6
6 3 7
7 3 8
8 4 9
9 4 10
10 5 11
11 5 12
12 6 13
13 6 14
</code></pre>
<p>At present, I have a problem adjusting the code to work with a binary string of any length, because in the above example only a 2-bit string can be used. So I would be very grateful for any tips on simplifying and generalizing the code.</p>
|
<p>All the code can be reduced to a few lines, i.e. it can be vectorized, so you can get rid of the for loops:</p>
<pre><code>a = pd.Series([0] + df['nodes_2'][df['nodes_2']%2==0].values.tolist())
# Creating series to make use of apply
b = a.apply(g.neighbors)
n1e, n2e = df['nodes_1'] % 2 == 0, df['nodes_2'] % 2 == 0
n1o, n2o = df['nodes_1'] % 2 == 1, df['nodes_2'] % 2 == 1
# Now you want either the nodes_1 be to odd or nodes_2 to be odd but not both, same for even.
# Use that as a boolean mask for selecting the data
e = df[~((n1e == n2e) & (n1o == n2o))]
</code></pre>
<p>Output : </p>
<pre><code>a.values.tolist()
[0, 2, 4, 6, 8, 10, 12, 14]
b.values.tolist()
[[1, 2], [0, 5, 6], [1, 10, 9], [2, 13, 14], [3], [4], [5], [6]]
e.values.tolist()
[[0, 1], [1, 4], [2, 5], [3, 8], [4, 9], [5, 12], [6, 13]]
</code></pre>
<p>You can take the vectorized code and place it under the respective conditions (boolean values) given by the user.</p>
<p>Updating <code>e</code> so that each pair keeps the even node at the beginning and the odd node at the end, i.e.:</p>
<pre><code>e = [[i[0],i[1]] if i[0]%2 == 0 else [i[1],i[0]] for i in e ]
e = pd.DataFrame(e).sort_values(0).values.tolist()
[[0, 1], [2, 5], [4, 1], [4, 9], [6, 13], [8, 3], [12, 5]]
</code></pre>
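<p>To generalize this to a binary string of any length (the question at the end), a sketch: start from the nodes whose parity matches the first bit, then repeatedly extend every partial path with neighbors whose parity matches the next bit:</p>
<pre><code>def paths_for_bits(g, z):
    bits = [int(ch) for ch in z]           # e.g. '011' -> [0, 1, 1]
    paths = [[n] for n in g.nodes() if n % 2 == bits[0]]
    for bit in bits[1:]:
        paths = [p + [nb] for p in paths
                 for nb in g.neighbors(p[-1]) if nb % 2 == bit]
    return paths
</code></pre>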
|
python|algorithm|list|pandas|networkx
| 2
|
377,492
| 47,430,467
|
Dimensionality error after applying a dense layer
|
<p>I am trying to add a dense layer after applying dropout to the max pooled convolutional layer output.</p>
<p>I have the following TensorFlow code written in Python. Number of filters is 128 and len(filter_sizes) is 3</p>
<pre><code>pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
    with tf.name_scope("conv-maxpool-%s" % filter_size):
        # Convolution Layer
        filter_shape = [filter_size, embedding_size, 1, num_filters]
        W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
        b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
        conv = tf.nn.conv2d(
            self.embedded_chars_expanded,
            W,
            strides=[1, 1, 1, 1],
            padding="VALID",
            name="conv")
        # Applying batch normalization
        # h = tf.contrib.layers.batch_norm(conv, center=True, scale=True, is_training=True)
        # Apply nonlinearity
        h1 = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
        # Maxpooling over the outputs
        pooled = tf.nn.max_pool(
            h1,
            ksize=[1, sequence_length - filter_size + 1, 1, 1],
            strides=[1, 1, 1, 1],
            padding='VALID',
            name="pool")
        pooled_outputs.append(pooled)

# Combine all the pooled features
num_filters_total = num_filters * len(filter_sizes)
self.h_pool = tf.concat(pooled_outputs, 3)
self.h_pool_flat = tf.reshape(self.h_pool, [-1, num_filters_total])

# Add dropout
with tf.name_scope("dropout"):
    # self.h_drop = tf.nn.dropout(dense, self.dropout_keep_prob)
    self.h_drop = tf.nn.dropout(self.h_pool_flat, self.dropout_keep_prob)

# Adding dense layer
dense = tf.layers.dense(self.h_drop, units=num_classes, activation=tf.nn.relu)
</code></pre>
<p>I'm facing issues after applying the dense layer.</p>
<p>Following is the error:</p>
<pre><code>Dimensions must be equal, but are 11 and 384 for 'output/scores/MatMul' (op: 'MatMul') with input shapes: [?,11], [384,11]
</code></pre>
<p>Could someone please help me with it?</p>
|
<p>The error was with the shapes of the matrices: I was calling TensorFlow's <code>xw_plus_b</code> for the output layer with a weight matrix whose dimensions no longer matched. After inserting the dense layer, the output layer's input is <code>num_classes</code>-dimensional, but its weights were still sized for the 384-dimensional pooled vector (<code>num_filters_total</code>), hence the <code>[?,11]</code> vs <code>[384,11]</code> mismatch.</p>
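<p>For reference, <code>tf.nn.xw_plus_b(x, W, b)</code> computes <code>x @ W + b</code>, so <code>W</code> must be shaped <code>[input_dim, output_dim]</code>. A sketch of an output layer sized for the new dense output:</p>
<pre><code># `dense` now has shape [batch, num_classes], so the output layer's weights
# must be [num_classes, num_classes], not [num_filters_total, num_classes].
W_out = tf.get_variable("W_out", shape=[num_classes, num_classes])
b_out = tf.Variable(tf.constant(0.1, shape=[num_classes]), name="b_out")
scores = tf.nn.xw_plus_b(dense, W_out, b_out, name="scores")
</code></pre>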
|
python|tensorflow|deep-learning|conv-neural-network
| 0
|
377,493
| 47,431,777
|
Reinitializable iterators for simultaneous training and validation
|
<p>I want to use <code>Dataset</code> and <code>Iterator</code>s to evaluate on a validation set during training. I want to evaluate on one (or a few) validation batches every now and then — that every now and then is typically <em>not</em> an epoch.</p>
<p>However reinitializable iterators start all over again when reinitalized to switch their input. E.g.</p>
<pre><code>import tensorflow as tf
dataset_trn = tf.data.Dataset.range(10)
dataset_tst = tf.data.Dataset.range(10).map(lambda i: i + 1000)
iterator = tf.data.Iterator.from_structure(dataset_trn.output_types,
dataset_trn.output_shapes)
batch = iterator.get_next()
trn_init_op = iterator.make_initializer(dataset_trn)
tst_init_op = iterator.make_initializer(dataset_tst)
sess = tf.InteractiveSession()
for _ in range(2):
    sess.run(trn_init_op)
    for _ in range(5):
        print(batch.eval())
    sess.run(tst_init_op)
    print(batch.eval())
</code></pre>
<p>returns</p>
<pre><code>0
1
2
3
4
1000
0
1
2
3
4
1000
</code></pre>
<p>but I would like it to resume training like that:</p>
<pre><code>0
1
2
3
4
1000
5
6
7
8
9
1001
</code></pre>
<p>Is there a way to achieve this? Note that in practice, batches are shuffled, and I would like it to resume at the same pseudo-random point.</p>
|
<p><a href="https://www.tensorflow.org/programmers_guide/datasets" rel="nofollow noreferrer">Feedable iterators</a> should help, but they're tough to work with. You need to create a placeholder and string handles:</p>
<pre><code>dataset_trn = tf.data.Dataset.range(10)
dataset_tst = tf.data.Dataset.range(10).map(lambda i: i + 1000)
holder = tf.placeholder(tf.string, shape=[])
iterator = tf.data.Iterator.from_string_handle(
holder, dataset_trn.output_types, dataset_trn.output_shapes)
batch = iterator.get_next()
trn_iter = dataset_trn.make_one_shot_iterator()
trn_handle = trn_iter.string_handle()
tst_iter = dataset_tst.make_one_shot_iterator()
tst_handle = tst_iter.string_handle()
with tf.Session() as sess:
for _ in range(2):
trn_string = sess.run(trn_handle)
tst_string = sess.run(tst_handle)
for _ in range(5):
print(sess.run(batch, feed_dict={holder: trn_string}))
print(sess.run(batch, feed_dict={holder: tst_string}))
</code></pre>
|
python|tensorflow
| 3
|
377,494
| 47,364,379
|
I want to use variable length input with dynamic RNN of tensorflow, but I don't know how to padding
|
<p>As an example, the input tensor has batch size 2, a maximum time sequence length of 4, and a feature dimension of 2:</p>
<pre><code>[ [ [1, 2], [3, 4], [5, 6], [7, 8] ],
  [ [9, 10] ] ]
</code></pre>
<p>In order to make it an input to the RNN, I would like to pad it into this form:</p>
<pre><code>[ [ [1, 2], [3, 4], [5, 6], [7, 8] ],
  [ [9, 10], [0, 0], [0, 0], [0, 0] ] ]
</code></pre>
<p>Is there a good way to do it? Since the tensor fed into the RNN is a feature extracted by another neural network, I would like a method that does not break the computation graph.</p>
|
<p>I think you may need <strong>tf.train.batch</strong> (<a href="https://www.tensorflow.org/api_docs/python/tf/train/batch" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/train/batch</a>) with <strong>dynamic_pad=True</strong>.</p>
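<p>For illustration, a minimal in-graph sketch using <code>tf.pad</code> (the helper name <code>pad_to_max_len</code> is my own, not part of the question). <code>tf.pad</code> is an ordinary differentiable op, so gradients flow through it and the computation graph is not broken:</p>
<pre><code>import tensorflow as tf

def pad_to_max_len(seq, max_len):
    # seq: [time, features]; pad with zeros along the time axis only
    pad_rows = max_len - tf.shape(seq)[0]
    return tf.pad(seq, [[0, pad_rows], [0, 0]])

seq = tf.constant([[9.0, 10.0]])         # shape [1, 2]
padded = pad_to_max_len(seq, max_len=4)  # shape [4, 2]

with tf.Session() as sess:
    print(sess.run(padded))
    # [[ 9. 10.]
    #  [ 0.  0.]
    #  [ 0.  0.]
    #  [ 0.  0.]]
</code></pre>
<p>When the padded batch is fed to <code>tf.nn.dynamic_rnn</code>, the true lengths can also be passed via its <code>sequence_length</code> argument so the RNN ignores the padded steps.</p>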
|
python|tensorflow|neural-network|deep-learning|rnn
| 0
|
377,495
| 47,403,696
|
Apply Feature Hashing to specific columns from a DataFrame
|
<p>I'm a bit lost with the use of feature hashing in Python pandas.</p>
<p>I have a DataFrame with multiple columns containing information of different types. One of the columns represents a class for the data.</p>
<p>Example:</p>
<pre><code> col1 col2 colType
1 1 2 'A'
2 1 1 'B'
3 2 4 'C'
</code></pre>
<p>My goal is to apply feature hashing to <code>colType</code>, in order to be able to apply a machine learning algorithm.</p>
<p>I have created a separate DataFrame for the colType, having something like this:</p>
<pre><code> colType value
1 'A' 1
2 'B' 2
3 'C' 3
4 'D' 4
</code></pre>
<p>Then I applied feature hashing to this class DataFrame. But I don't understand how to add the result of the feature hashing back to my DataFrame with the info, in order to use it as input to a machine learning algorithm.</p>
<p>This is how I use FeatureHashing: </p>
<pre><code> from sklearn.feature_extraction import FeatureHasher
fh = FeatureHasher(n_features=10, input_type='string')
result = fh.fit_transform(categoriesDF)
</code></pre>
<p>How do I insert this FeatureHasher result, to my DataFrame? How bad is my approach? Is there any better way to achieve what I am doing?</p>
<p>Thanks!</p>
|
<p>I know this answer comes in late, but I stumbled upon the same problem and found this works:</p>
<pre><code>fh = FeatureHasher(n_features=8, input_type='string')
sp = fh.fit_transform(df['colType'])  # sparse matrix of hashed features
hashed = pd.DataFrame(sp.toarray(),
                      columns=['fh1', 'fh2', 'fh3', 'fh4', 'fh5', 'fh6', 'fh7', 'fh8'])
df = pd.concat([df, hashed], axis=1)
</code></pre>
<p>This creates a dataframe out of the sparse matrix returned by the FeatureHasher and concatenates it column-wise to the existing dataframe, so the hashed features sit next to the original columns.</p>
|
python|pandas|scikit-learn|data-science
| 3
|
377,496
| 47,525,425
|
Convert numpy array of Datetime objects to UTC strings
|
<p>I have a large array of datetime objects in a numpy array. However, I am trying to export them as a JSON object attribute and need them represented as UTC strings.</p>
<p>Here is my array (a small chunk of it):</p>
<pre><code>datetimes = [datetime.datetime(2015, 7, 12, 18, 33, 14, tzinfo=<UTC>), datetime.datetime(2015, 7, 12, 18, 33, 32, tzinfo=<UTC>), datetime.datetime(2015, 7, 12, 18, 33, 50, tzinfo=<UTC>)]
json = {
'datetimes': []
};
</code></pre>
<p>I know I can iterate over the list and convert them however I was hoping there was an efficient pandas or numpy technique for this.</p>
|
<p>I think you can create a <code>DataFrame</code>, convert to <code>iso</code> format and save to a <code>dict</code>, because <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_json.html" rel="nofollow noreferrer"><code>DataFrame.to_json</code></a> with <code>orient='list'</code> is <a href="https://stackoverflow.com/a/43134775/2901002">not implemented yet</a>:</p>
<pre><code>datetimes = [datetime.datetime(2015, 7, 12, 18, 33, 14, tzinfo=datetime.timezone.utc),
datetime.datetime(2015, 7, 12, 18, 33, 32, tzinfo=datetime.timezone.utc),
datetime.datetime(2015, 7, 12, 18, 33, 50, tzinfo=datetime.timezone.utc)]
df = pd.DataFrame({'datetimes': datetimes})
#native convert to iso, but not support lists yet
print (df.to_json(date_format='iso'))
{"datetimes":{"0":"2015-07-12T18:33:14.000Z",
"1":"2015-07-12T18:33:32.000Z",
"2":"2015-07-12T18:33:50.000Z"}}
df = pd.DataFrame({'datetimes': datetimes})
df['datetimes'] = df['datetimes'].map(lambda x: x.isoformat())
print (json.dumps(df.to_dict(orient='l')))
{"datetimes": ["2015-07-12T18:33:14+00:00",
"2015-07-12T18:33:32+00:00",
"2015-07-12T18:33:50+00:00"]}
print(json.dumps({'datetimes': [x.isoformat() for x in datetimes]}))
{"datetimes": ["2015-07-12T18:33:14+00:00",
"2015-07-12T18:33:32+00:00",
"2015-07-12T18:33:50+00:00"]}
</code></pre>
<p>I tested it more, and the list comprehension with <code>isoformat</code> is fastest:</p>
<pre><code>datetimes = [datetime.datetime(2015, 7, 12, 18, 33, 14, tzinfo=datetime.timezone.utc),
datetime.datetime(2015, 7, 12, 18, 33, 32, tzinfo=datetime.timezone.utc),
datetime.datetime(2015, 7, 12, 18, 33, 50, tzinfo=datetime.timezone.utc)]*10000
In [116]: %%timeit
...: df = pd.DataFrame({'datetimes': datetimes})
...: df['datetimes'] = df['datetimes'].map(lambda x: x.isoformat())
...: json.dumps(df.to_dict(orient='l'))
...:
1 loop, best of 3: 552 ms per loop
#wrong output format, dictionaries not lists
In [117]: %%timeit
...: df = pd.DataFrame({'datetimes': datetimes})
...: df.to_json(date_format='iso')
...:
10 loops, best of 3: 104 ms per loop
In [118]: %%timeit
...: json.dumps({'datetimes': [x.isoformat() for x in datetimes]})
...:
10 loops, best of 3: 67.5 ms per loop
</code></pre>
|
pandas|numpy
| 1
|
377,497
| 47,445,099
|
Inserting data to impala table using Ibis python
|
<p>I'm trying to insert a df into an ibis-created Impala table with a partition. I am running this on a remote kernel using Spyder 3.2.4 on a Windows 10 machine, with Python 3.6.2 on an edge node machine running CentOS.</p>
<p>I get following error:</p>
<pre><code>Writing DataFrame to temporary file
Writing CSV to: /tmp/ibis/pandas_0032f9dd1916426da62c8b4d8f4dfb92/0.csv
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "", line 1, in
insert = target_table.insert(df3)
File "/usr/local/lib/python3.6/site-packages/ibis/impala/client.py", line 1674, in insert
writer, expr = write_temp_dataframe(self._client, obj)
File "/usr/local/lib/python3.6/site-packages/ibis/impala/pandas_interop.py", line 225, in write_temp_dataframe
return writer, writer.delimited_table(path)
File "/usr/local/lib/python3.6/site-packages/ibis/impala/pandas_interop.py", line 188, in delimited_table
schema = self.get_schema()
File "/usr/local/lib/python3.6/site-packages/ibis/impala/pandas_interop.py", line 184, in get_schema
return pandas_to_ibis_schema(self.df)
File "/usr/local/lib/python3.6/site-packages/ibis/impala/pandas_interop.py", line 219, in pandas_to_ibis_schema
return schema(pairs)
File "/usr/local/lib/python3.6/site-packages/ibis/expr/api.py", line 105, in schema
return Schema.from_tuples(pairs)
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 109, in from_tuples
return Schema(names, types)
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 55, in init
self.types = [validate_type(typ) for typ in types]
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 55, in
self.types = [validate_type(typ) for typ in types]
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 1040, in validate_type
return TypeParser(t).parse()
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 901, in parse
t = self.type()
File "/usr/local/lib/python3.6/site-packages/ibis/expr/datatypes.py", line 1033, in type
raise SyntaxError('Type cannot be parsed: {}'.format(self.text))
File "", line unknown
SyntaxError: Type cannot be parsed: integer
</code></pre>
|
<p>Instead of editing the <code>config_init.py</code> mentioned in</p>
<blockquote>
<p><a href="https://stackoverflow.com/a/47543691/5485370">https://stackoverflow.com/a/47543691/5485370</a></p>
</blockquote>
<p>it is easier to assign the temp database and path using <code>ibis.options</code>:</p>
<pre><code>ibis.options.impala.temp_db = 'your_temp_db'
ibis.options.impala.temp_hdfs_path = 'your_temp_hdfs_path'
</code></pre>
|
python|pandas|impala|ibis
| 0
|
377,498
| 47,377,372
|
Python numpy how to join numbers from array
|
<p>I am a newbie in Python. I think I'm looking for something easy, but can't find it.
I have a numpy binary array, e.g.:</p>
<pre><code> [1,0,1,1,0,0,0,1,1,1,1,0]
</code></pre>
<p>And I want to do 2 things:</p>
<ol>
<li><p>Join (?) all elements into one number, so the result will be:</p>
<pre><code>x=101100011110
</code></pre></li>
<li><p>Next, interpret it as binary and convert it to decimal, so:</p>
<pre><code>xx=2846
</code></pre></li>
</ol>
<p>I have an algorithm to do 2., but I don't know how to do 1. I can do it using a loop, but is it possible to do it with numpy, without a loop? My array will be huge, so I need the best option.</p>
|
<pre><code>>>> int(''.join(map(str, [1,0,1,1,0,0,0,1,1,1,1,0])))
101100011110
</code></pre>
<p>Or with a little numpy:</p>
<pre><code>>>> int(''.join(np.array([1,0,1,1,0,0,0,1,1,1,1,0]).astype('|S1')))
101100011110
</code></pre>
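<p>If the end goal is the decimal value from step 2, a fully vectorized sketch that skips the intermediate string: treat the array as base-2 digits and take a dot product with powers of two. (Caveat: this overflows 64-bit integers for arrays longer than 63 bits, so the string route is safer for very long arrays.)</p>
<pre><code>import numpy as np

a = np.array([1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 1, 0])
# powers of two, highest bit first: [2048, 1024, ..., 2, 1]
powers = 2 ** np.arange(len(a) - 1, -1, -1)
print(a.dot(powers))  # 2846
</code></pre>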
|
python|arrays|numpy
| 2
|
377,499
| 47,362,481
|
TensorFlow boolean indexing
|
<p>To get straight to the point: I want to index a tensor and change the non-zero terms to -1 and the zero terms to 1, but I don't know how to do it in TensorFlow.</p>
<p>Here is my code:</p>
<pre><code>y_[y_ != 0].assign(-1)
y_[y_ == 0].assign(1)
</code></pre>
<p>The reason is TensorFlow doesn't seem to support boolean indexing. How can I fix it?</p>
<p>Btw, it seems <code>boolean_mask</code> doesn't work for me, because I don't want to return a slice of y_, I just want y_ to change its values.</p>
<p>Thanks!</p>
|
<p>You can use <strong>tf.cond()</strong> for the conditional assignment on scalar variables. I have given example code below:</p>
<pre><code>import tensorflow as tf

x_ = tf.Variable(5)  # non-zero variable
y_ = tf.Variable(0)  # variable equal to 0

# assign 1 if the variable equals zero, else -1
y_ = tf.cond(tf.equal(y_, 0), lambda: y_.assign(1), lambda: y_.assign(-1))
x_ = tf.cond(tf.equal(x_, 0), lambda: x_.assign(1), lambda: x_.assign(-1))

sess = tf.Session()
sess.run(tf.global_variables_initializer())
with sess.as_default():
    print(y_.eval())  # prints 1
    print(x_.eval())  # prints -1
</code></pre>
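<p>For an element-wise version over a whole tensor (closer to the boolean indexing in the question), a sketch using <code>tf.where</code>, which selects between two tensors per element:</p>
<pre><code>import tensorflow as tf

y_ = tf.constant([0, 3, 0, -2, 7])
# 1 where the entry is zero, -1 everywhere else
result = tf.where(tf.equal(y_, 0),
                  tf.ones_like(y_),
                  -tf.ones_like(y_))

with tf.Session() as sess:
    print(sess.run(result))  # [ 1 -1  1 -1 -1]
</code></pre>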
<p>Hope this helps.</p>
|
python|tensorflow|tensor
| 0
|