Unnamed: 0
| id
| title
| question
| answer
| tags
| score
|
|---|---|---|---|---|---|---|
375,500
| 61,000,954
|
parallelize loop over dataframe itertuples() rows using joblib
|
<p>I want to iterate over a data frame using <code>itertuples()</code>, the common way to do this:</p>
<pre><code>for row in df.itertuples():
    my_function(row) # do something with row
</code></pre>
<p>However now I wish to do the loop in parallel using joblib like this (which seems very straightforward to me):</p>
<pre><code>num_cores = multiprocessing.cpu_count()
processed_list = Parallel(n_jobs=num_cores)(delayed(my_function(row) for row in df.itertuples()))
</code></pre>
<p>However I got the following error:</p>
<blockquote>
<p>File "/home/anaconda3/envs/pytorch/lib/python3.7/site-packages/joblib/parallel.py", line 885, in <strong>call</strong>
iterator = iter(iterable)
TypeError: 'function' object is not iterable</p>
</blockquote>
<p>Please, any idea what could be the problem?</p>
|
<p>I think that dask.org satisfies my needs related to this post (following @monkut's suggestion). This is an example:</p>
<pre><code>import dask.dataframe as dd
sd = dd.from_pandas(some_df, npartitions=40)
sr = pd.Series([1, 1.8, 2.8, 3.8, 4.8], index=['col1','col2','col3','col4','col5']) # a meta sample of the output to help dask infer the output shape
df_out = sd.apply(my_function,axis=1,meta=sr).compute(scheduler='processes')
</code></pre>
<p>This solution applies my_function to every row of the whole dataframe in <strong>31</strong> seconds, as measured by timeit. I was able to see multiple ZMQbg Jupyter processes (up to 16) running during the execution, so I guess this means it is executing in parallel.</p>
<p>The alternative solution: </p>
<pre><code>df_out = df.apply(my_function,axis=1,result_type="expand")
</code></pre>
<p>produces the same result, but in <strong>325</strong> seconds, approximately 10 times slower. With this solution I don't see multiple running processes in top.</p>
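For what it's worth, the TypeError in the question comes from wrapping the entire generator expression in delayed(...); delayed must wrap the function itself. A sketch of the corrected joblib call, with a hypothetical my_function and a small sample frame:

```python
import multiprocessing

import pandas as pd
from joblib import Parallel, delayed

def my_function(row):
    # stand-in for the real per-row work
    return row.a + row.b

df = pd.DataFrame({'a': [1, 2, 3], 'b': [10, 20, 30]})

num_cores = multiprocessing.cpu_count()
# delayed wraps the *function*; its arguments go in a second pair of
# parentheses, and the generator as a whole is passed to Parallel
processed_list = Parallel(n_jobs=num_cores)(
    delayed(my_function)(row) for row in df.itertuples())
```

Note the parentheses: `delayed(my_function)(row)`, not `delayed(my_function(row) for ...)`.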
|
python|pandas|joblib
| 1
|
375,501
| 60,806,301
|
Check if multi index is in two dataframes
|
<p>I have two dataframes with columns State and RegionName, and I'm trying to check whether each (State, RegionName) pair of df1 appears in df2, adding the result as a column in df3</p>
<pre><code>df1=
+--------------+------------+------+
| State | RegionName | Data |
+--------------+------------+------+
| New York | New York | 123 |
| Jacksonville | Florida | ABC |
+--------------+------------+------+
df2=
+--------------+------------+------+
| State | RegionName | Data |
+--------------+------------+------+
| New York | New York | 456 |
+--------------+------------+------+
Output would be df3=
+--------------+------------+------+-------+
| State | RegionName | Data | IsIn2 |
+--------------+------------+------+-------+
| New York | New York | 123 | 1 |
| Jacksonville | Florida | ABC | 0 |
+--------------+------------+------+-------+
</code></pre>
|
<p>Let us do </p>
<pre><code>df1['IsIn2']=df1[['State','RegionName']].apply(tuple, axis=1).\
isin(df2[['State','RegionName']].apply(tuple, axis=1)).\
astype(int)
</code></pre>
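An alternative to building tuples, assuming the frames shown above, is a left merge with `indicator=True`, which marks whether each df1 row found a match in df2:

```python
import pandas as pd

df1 = pd.DataFrame({'State': ['New York', 'Jacksonville'],
                    'RegionName': ['New York', 'Florida'],
                    'Data': ['123', 'ABC']})
df2 = pd.DataFrame({'State': ['New York'],
                    'RegionName': ['New York'],
                    'Data': ['456']})

# left-merge on the key columns only; indicator=True adds a '_merge'
# column saying whether each df1 row was matched in df2
df3 = df1.merge(df2[['State', 'RegionName']].drop_duplicates(),
                on=['State', 'RegionName'], how='left', indicator=True)
df3['IsIn2'] = (df3.pop('_merge') == 'both').astype(int)
```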
|
python|pandas|dataframe|isin
| 0
|
375,502
| 60,976,819
|
Beautiful soup extracting rows and data
|
<p>I am using Beautiful Soup to pull some data from an internal site. The code provided in the link below works for 4 columns of my data, but there is one more value tagged as <code>th</code>. How can I get the <code>th</code> in the same row with all the <code>td</code>s?
<a href="https://stackoverflow.com/questions/50633050/scrape-tables-into-dataframe-with-beautifulsoup">Scrape tables into dataframe with BeautifulSoup</a> </p>
<pre><code>Manager ID Defect Count Transactions DPMO
bedfli 155 2215 898
elojr 26 745 480
torse 0 16 0
jogsn 115 1767 6783
res = []
for tr in rows:
td = tr.find_all('td')
row = [tr.text.strip() for tr in td if tr.text.strip()]
if row:
res.append(row)
df = pd.DataFrame(res,columns = ['Manager ID','Defect Count','Transactions','DPMO'])
print(df)
<tr role = 'row'>
<td>bedfli</td>
<th>Receive</th>
<td>155</td>
<td>2215</td>
<td>898</td>
</code></pre>
<p>I want th(receive) on my dataframe with 5 columns including receive so the final output looks like</p>
<pre><code>Manager ID Process Defect Count Transaction DPMO
bedfli Receive 155 2215 898
</code></pre>
|
<p>A solution using the SimplifiedDoc library, which offers several ways to extract the table:</p>
<pre><code>from simplified_scrapy import SimplifiedDoc
html = '''
<table>
<tr>
<td>Manager ID</td>
<th>Process</th>
<td>Defect Count</td>
<td>Transaction</td>
<td>DPMO</td>
</tr>
<tr role = 'row'>
<td>bedfli</td>
<th>Receive</th>
<td>155</td>
<td>2215</td>
<td>898</td>
</tr>
</table>
'''
doc = SimplifiedDoc(html)
# First way
table = doc.getTable(body='table')
# Second way
table = doc.selects('table>tr').children.text
# Third way
table = doc.selects('table>tr').selects('td|th').text
print (table)
</code></pre>
<p>Result:</p>
<pre><code>[['Manager ID', 'Process', 'Defect Count', 'Transaction', 'DPMO'], ['bedfli', 'Receive', '155', '2215', '898']]
</code></pre>
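Since the question is tagged beautifulsoup, the same result can also be had there: passing a list of tag names to `find_all` collects both `td` and `th` cells in document order. A sketch against the sample HTML from the question:

```python
import pandas as pd
from bs4 import BeautifulSoup

html = """<table>
<tr><td>Manager ID</td><th>Process</th><td>Defect Count</td><td>Transaction</td><td>DPMO</td></tr>
<tr role="row"><td>bedfli</td><th>Receive</th><td>155</td><td>2215</td><td>898</td></tr>
</table>"""

soup = BeautifulSoup(html, 'html.parser')
rows = []
for tr in soup.find_all('tr'):
    # find_all with a list matches both td and th, preserving document order
    rows.append([cell.get_text(strip=True) for cell in tr.find_all(['td', 'th'])])

df = pd.DataFrame(rows[1:], columns=rows[0])
```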
|
pandas|beautifulsoup
| 0
|
375,503
| 61,039,197
|
Convert one dimensional arrays in a pandas dataframe to numbers
|
<p>The values of a pandas dataframe contain one-dimensional arrays, and I would like to convert them into floats
without the "[]". I tried the code below, but it does not work. How can [0.5142399408894116] be converted to 0.5142399408894116?</p>
<pre><code>dfPredictions = pd.DataFrame(data = dff, dtype='float')
</code></pre>
|
<p>Use the index operator [] on the array:</p>
<pre><code>[0.5142399408894116][0]
= 0.5142399408894116
</code></pre>
<p>If you need to apply this to a dataframe column, use the explode method:</p>
<pre><code>df = pd.DataFrame({'col': [[0.5142399408894116], [0.1423994088941165], [0.4239940889411651]]})
df
col
0 [0.5142399408894116]
1 [0.1423994088941165]
2 [0.4239940889411651]
df.explode('col')
col
0 0.51424
1 0.142399
2 0.423994
</code></pre>
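If every cell holds a one-element list, the `.str` accessor (which also performs positional indexing on list-like values) gives the scalar directly. A sketch with the same data:

```python
import pandas as pd

df = pd.DataFrame({'col': [[0.5142399408894116],
                           [0.1423994088941165],
                           [0.4239940889411651]]})
# .str[0] indexes into each list, pulling out the single float
df['col'] = df['col'].str[0]
```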
|
arrays|pandas|dataframe|scalar
| 0
|
375,504
| 61,052,538
|
Renaming multiple column values in Pandas
|
<p>I have customer reviews stored in a Pandas column 'Sentiment'. This is the result of <code>data['Sentiment'].unique()</code>:</p>
<pre><code>array(['Negative', 'Positive', '?', 'Neutral', 'nan', 'positive',
'neutral', 'negative', 'Neg', 'ppos', 'ne'], dtype=object)
</code></pre>
<p>I am trying to group the values into 'positive', 'negative', and 'neutral' and created the three mapping lists:</p>
<pre><code>positive = ['Positive','positive', 'ppos']
negative = ['Negative', 'negative', 'Neg']
neutral = ['Neutral', 'neutral', 'ne']
</code></pre>
<p>Everything else should be NaN. I had a try with <code>iterrows()</code> along the lines of:</p>
<pre><code>for idx, row in data.iterrows():
if row['Sentiment'].isin(positive):
row['Sentiment'] == 'positive'
...
</code></pre>
<p>This doesn't work, and it does not seem efficient either. I tried with Series and booleans, which seems a promising approach, but I wonder whether there is a more succinct way.</p>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.select.html" rel="nofollow noreferrer">numpy.select</a>. Pass the conditions as the first argument, the values corresponding to those conditions as the second, and the default value for rows matching no condition as the third.</p>
<pre><code>import numpy as np
conditions = [
df['Sentiment'].isin(positive),
df['Sentiment'].isin(neutral),
df['Sentiment'].isin(negative)
]
values = ['positive', 'neutral', 'negative']
df['Sentiment'] = np.select(conditions, values, np.nan)
</code></pre>
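A `Series.map` with a flat lookup dict is an equally succinct alternative, and it sidesteps a subtlety of `np.select` on string arrays (where the NaN default can get coerced to the string `'nan'`). A sketch with a small sample column:

```python
import pandas as pd

positive = ['Positive', 'positive', 'ppos']
negative = ['Negative', 'negative', 'Neg']
neutral = ['Neutral', 'neutral', 'ne']

# build one lookup dict; anything absent from it (e.g. '?', 'nan') maps to NaN
mapping = {v: 'positive' for v in positive}
mapping.update({v: 'negative' for v in negative})
mapping.update({v: 'neutral' for v in neutral})

data = pd.DataFrame({'Sentiment': ['Negative', 'ppos', '?', 'ne']})
data['Sentiment'] = data['Sentiment'].map(mapping)
```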
|
python|pandas|replace|rename
| 3
|
375,505
| 61,046,262
|
How can I element-wise multiply 2 Tensors with different dimensions with Broadcasting?
|
<p>I have a tensor called <code>inputs</code> with a <code>size</code> of <code>torch.Size([20, 1, 161, 199])</code> and another <code>mask</code> with a size of <code>torch.Size([20, 1, 199])</code>. I want to multiply them together.</p>
<p>I tried:</p>
<pre><code>masked_inputs = inputs * mask[..., None]
</code></pre>
<p>but get an error:</p>
<pre><code>RuntimeError: The size of tensor a (161) must match the size of tensor b (199) at non-singleton dimension 2
</code></pre>
<p>I'm not quite sure what to do?</p>
|
<p>This did it:</p>
<pre><code>masked_inputs = inputs * mask.unsqueeze(2)
</code></pre>
<p><code>unsqueeze(2)</code> inserts a singleton dimension at position 2, turning the mask into <code>torch.Size([20, 1, 1, 199])</code>, which broadcasts cleanly against <code>torch.Size([20, 1, 161, 199])</code>.</p>
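The reason `mask[..., None]` failed is that it appends the new axis at the end, giving `(20, 1, 199, 1)`, so dimension 2 pits 199 against 161. The broadcasting rule can be checked shape-for-shape in NumPy (a sketch with all-ones stand-ins for the tensors):

```python
import numpy as np

inputs = np.ones((20, 1, 161, 199))
mask = np.ones((20, 1, 199))

# inserting a singleton axis at position 2 mirrors torch's unsqueeze(2):
# (20, 1, 199) -> (20, 1, 1, 199), which broadcasts against (20, 1, 161, 199)
masked = inputs * mask[:, :, None, :]
```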
|
python|pytorch|tensor
| 0
|
375,506
| 60,812,563
|
Modifying the values of one column adding a word after the original value
|
<p>I have a pandas dataframe and I would like to modify the values of my first column, adding the string "name" before or after the original value. What is the best way to do that?</p>
<p>Here my dataframe :</p>
<pre><code>df
protein_name LEN Start End
0 Ribosomal_S9: 121 0 121
1 Ribosomal_S8: 129 121 250
2 Ribosomal_L10: 100 250 350
3 GrpE: 166 350 516
4 DUF150: 141 516 657
.. ... ... ... ...
115 TIGR03632: 117 40149 40266
116 TIGR03654: 175 40266 40441
117 TIGR03723: 314 40441 40755
118 TIGR03725: 212 40755 40967
119 TIGR03953: 188 40967 41155
</code></pre>
<p>and here what I want to add :</p>
<pre><code>name = "name"
</code></pre>
<p>I am not very particular about the exact approach; I just want something like:</p>
<pre><code>df
protein_name LEN Start End
0 Name Ribosomal_S9: 121 0 121
1 Name Ribosomal_S8: 129 121 250
2 Name Ribosomal_L10: 100 250 350
3 Name GrpE: 166 350 516
4 Name DUF150: 141 516 657
.. ... ... ... ...
115 Name TIGR03632: 117 40149 40266
116 Name TIGR03654: 175 40266 40441
117 Name TIGR03723: 314 40441 40755
118 Name TIGR03725: 212 40755 40967
119 Name TIGR03953: 188 40967 41155
</code></pre>
|
<p>Add the string plus a space using <code>+</code> and the variable <code>name</code>:</p>
<pre><code>name = 'Name'
df['protein_name'] = name + ' ' + df['protein_name']
print (df)
protein_name LEN Start End
0 Name Ribosomal_S9: 121 0 121
1 Name Ribosomal_S8: 129 121 250
2 Name Ribosomal_L10: 100 250 350
3 Name GrpE: 166 350 516
4 Name DUF150: 141 516 657
115 Name TIGR03632: 117 40149 40266
116 Name TIGR03654: 175 40266 40441
117 Name TIGR03723: 314 40441 40755
118 Name TIGR03725: 212 40755 40967
119 Name TIGR03953: 188 40967 41155
</code></pre>
<p>Or with <code>f-string</code>s:</p>
<pre><code>df['protein_name'] = df['protein_name'].map(lambda x: f'{name} {x}')
</code></pre>
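Since the question also asks about adding the word after the value, the suffix case is the same concatenation reversed; `Series.radd` is another way to prepend. A sketch with a couple of sample rows:

```python
import pandas as pd

name = 'Name'
df = pd.DataFrame({'protein_name': ['Ribosomal_S9:', 'GrpE:']})

# appending instead of prepending is the same concatenation, reversed
df['with_suffix'] = df['protein_name'] + ' ' + name
# Series.radd computes (other + self), i.e. it prepends the string
df['with_prefix'] = df['protein_name'].radd(name + ' ')
```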
|
python|pandas|dataframe
| 4
|
375,507
| 60,876,053
|
equivalent python and pandas operation for group_by + mutate + indexing column vectors within mutate in R
|
<p>Sample data frame in Python:</p>
<pre><code>d = {'col1': ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
'col2': [3, 4, 5, 1, 3, 9, 5, 7, 23]}
df = pd.DataFrame(data=d)
</code></pre>
<p>Now I want to get the same output in Python with pandas as I get in R with the code below. So I want to get the percentage change in col2 within each group defined by col1. </p>
<pre><code>data.frame(col1 = c("a", "a", "a", "b", "b", "b", "c", "c", "c"),
col2 = c(3, 4, 5, 1, 3, 9, 16, 18, 23)) -> df
df %>%
dplyr::group_by(col1) %>%
dplyr::mutate(perc = (dplyr::last(col2) - col2[1]) / col2[1])
</code></pre>
<p>In python, I tried:</p>
<pre><code>def perc_change(column):
index_1 = tu_in[column].iloc[0]
index_2 = tu_in[column].iloc[-1]
perc_change = (index_2 - index_1) / index_1
return(perc_change)
d = {'col1': ["a", "a", "a", "b", "b", "b", "c", "c", "c"],
'col2': [3, 4, 5, 1, 3, 9, 5, 7, 23]}
df = pd.DataFrame(data=d)
df.assign(perc_change = lambda x: x.groupby["col1"]["col2"].transform(perc_change))
</code></pre>
<p>But it gives me an error saying: 'method' object is not subscriptable.</p>
<p>I am new to python and trying to convert some R code into python. How can I solve this in an elegant way? Thank you!</p>
|
<p>You don't want <code>transform</code> here. <code>transform</code> is typically used when your aggregation returns a scalar value per group and you want to broadcast that result to all rows that belong to that group in the original DataFrame. Because <code>GroupBy.pct_change</code> already returns a result indexed like the original, you aggregate and assign back. </p>
<pre><code>df['perc_change'] = df.groupby('col1')['col2'].pct_change()
# col1 col2 perc_change
#0 a 3 NaN
#1 a 4 0.333333
#2 a 5 0.250000
#3 b 1 NaN
#4 b 3 2.000000
#5 b 9 2.000000
#6 c 5 NaN
#7 c 7 0.400000
#8 c 23 2.285714
</code></pre>
<p>But if what you need instead is the overall percentage change within a group, that is, the difference between the last and first values divided by the first value, then you do want <code>transform</code>.</p>
<pre><code>df.groupby('col1')['col2'].transform(lambda x: (x.iloc[-1] - x.iloc[0])/x.iloc[0])
0 0.666667
1 0.666667
2 0.666667
3 8.000000
4 8.000000
5 8.000000
6 3.600000
7 3.600000
8 3.600000
Name: col2, dtype: float64
</code></pre>
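The same overall change can also be computed with the built-in `'first'`/`'last'` transforms, which stay vectorized instead of calling a Python lambda per group. A sketch on the sample data:

```python
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'a', 'a', 'b', 'b', 'b', 'c', 'c', 'c'],
                   'col2': [3, 4, 5, 1, 3, 9, 5, 7, 23]})

# (last - first) / first per group, broadcast back to every row
g = df.groupby('col1')['col2']
df['perc_change'] = (g.transform('last') - g.transform('first')) / g.transform('first')
```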
|
python|pandas|pandas-groupby
| 2
|
375,508
| 61,031,425
|
Python Dataframe column of dates - change format to yyyy-mm-dd
|
<p>df['date'] is the column I am working with. </p>
<p>The data is in the 'date' column is in the day/month/year format, like this:
7/12/2019. How would I modify this column to give me Year-month-day, or 2019-07-12? </p>
<p>This is what I have tried, but it is still not working:
df['date'] = pd.to_datetime(str(df['date']))</p>
|
<p>Try:</p>
<pre><code>df.loc[:,'date'] = pd.to_datetime(df.loc[:,'date'], format="%d/%m/%Y")
</code></pre>
<p>Let me know if it works!</p>
<p>EDIT #1</p>
<p>Since the "date" column has mixed formats, try:</p>
<pre><code>def date_format(df):
    for index, row in df.iterrows():
        try:
            df.loc[index, 'date'] = pd.to_datetime(row['date'], format="%d/%m/%Y")
        except ValueError:
            df.loc[index, 'date'] = pd.to_datetime(row['date'], format="%m/%d/%Y")
    return df
</code></pre>
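If the column is consistently day-first, `pd.to_datetime` handles the whole Series in one vectorized call without `iterrows`, and `dt.strftime` renders the yyyy-mm-dd strings the question asks for. A sketch assuming 7/12/2019-style input:

```python
import pandas as pd

df = pd.DataFrame({'date': ['7/12/2019', '25/03/2020']})

# parse day-first strings in one call, then render as yyyy-mm-dd
df['date'] = pd.to_datetime(df['date'], dayfirst=True).dt.strftime('%Y-%m-%d')
```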
|
python|pandas
| 1
|
375,509
| 61,067,132
|
Why is my custom loss (categorical cross-entropy) not working?
|
<p>I am working on some kind of framework for myself built on top of Tensorflow and Keras. As a start, I wrote just the core of the framework and implemented a first toy example. This toy example is just a classic feed-forward network solving XOR.</p>
<p>It's probably not necessary to explain everything around it but I implemented the loss function like this:</p>
<pre class="lang-py prettyprint-override"><code>class MeanSquaredError(Modality):
def loss(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, dtype=y_pred.dtype)
loss = tf.keras.losses.MeanSquaredError(reduction=tf.keras.losses.Reduction.NONE)(y_true, y_pred)
return tf.reduce_sum(loss) / self.model_hparams.model.batch_size
</code></pre>
<p>This will be used in the actual model class like this:</p>
<pre class="lang-py prettyprint-override"><code>class Model(keras.Model):
def loss(self, y_true, y_pred, weights=None):
target_modality = self.modalities['targets'](self.problem.hparams, self.hparams)
return target_modality.loss(y_true, y_pred)
</code></pre>
<p>Now, when it comes to training, I can train the model like this:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(
optimizer=keras.optimizers.Adam(0.001),
loss=model.loss, # Simply setting 'mse' works as well here
metrics=['accuracy']
)
</code></pre>
<p><em>or</em> I can just set <code>loss=mse</code>. Both cases work as expected without any problems.</p>
<p>However, I have another <code>Modality</code> class which I am using for sequence-to-sequence (e.g. translation) tasks. It looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class CategoricalCrossentropy(Modality):
"""Simple SymbolModality with one hot as embeddings."""
def loss(self, y_true, y_pred, sample_weight=None):
labels = tf.reshape(y_true, shape=(tf.shape(y_true)[0], tf.reduce_prod(tf.shape(y_true)[1:])))
y_pred = tf.reshape(y_pred, shape=(tf.shape(y_pred)[0], tf.reduce_prod(tf.shape(y_pred)[1:])))
loss = tf.keras.losses.CategoricalCrossentropy(reduction=tf.keras.losses.Reduction.NONE, from_logits=True)(labels, y_pred)
return tf.reduce_mean(loss) / self.model_hparams.model.batch_size
</code></pre>
<p>What this does is just reshaping the <code>y_true</code> and <code>y_pred</code> tensors <code>[batch_size, seq_len, embedding_size]</code> to <code>[seq_len * batch_size, embedding_size]</code> - effectively stacking all examples. From this, the categorical cross-entropy is calculated and normalized.</p>
<p>Now, the model I am using is a very simple LSTM - this isn't important though. As I am training the model like this:</p>
<pre class="lang-py prettyprint-override"><code>model.compile(
optimizer=keras.optimizers.Adam(0.001),
loss='categorical_crossentropy', # <-- Setting the loss via string argument (works)
metrics=['accuracy']
)
</code></pre>
<p>The model does learn the task as expected. However, if I use the <code>CategoricalCrossentropy</code>-modality from above, setting <code>loss=model.loss</code>, the model does not converge at all. The loss oscillates randomly but does not converge. </p>
<p>And <em>this</em> is where I am scratching my head. Since the simple XOR example works both ways, and since setting <code>categorical_crossentropy</code> works as well, I do not quite see why using said modality doesn't work.</p>
<p>Am I doing something obviously wrong?</p>
<p>I am sorry that I cannot provide a small example here, but this is not possible since the framework already consists of quite a few lines of code. Empirically speaking, everything <em>should</em> work.</p>
<p>Any ideas how I could track down the issue or what might be causing this?</p>
|
<p>You're creating a tuple of tensors for shape. That might not work. </p>
<p>Why not just this?</p>
<pre><code>labels = tf.keras.backend.batch_flatten(y_true)
y_pred = tf.keras.backend.batch_flatten(y_pred)
</code></pre>
<p>The standard <code>'categorical_crossentropy'</code> loss does not perform any kind of flattening, and it considers as classes <strong>the last axis</strong>.</p>
<p>Are you sure you want to flatten your data? If you flatten, you will multiply the number of classes by the number of steps, this doesn't seem to make much sense.</p>
<p>Also, the standard <code>'categorical_crossentropy'</code> loss uses <code>from_logits=False</code>!</p>
<p>The standard loss expects outputs from a <code>"softmax"</code> activation, while <code>from_logits=True</code> expects outputs <strong>without</strong> that activation. </p>
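To make the `from_logits` point concrete, here is a small NumPy sketch of the quantity each setting computes; feeding softmax outputs into a `from_logits=True` loss effectively applies softmax twice and gives a different number:

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])   # raw, unnormalized scores
y_true = np.array([1.0, 0.0, 0.0])   # one-hot target

# from_logits=True: cross-entropy computed via log-softmax of raw scores
log_softmax = logits - np.log(np.exp(logits).sum())
ce_from_logits = -(y_true * log_softmax).sum()

# from_logits=False: the inputs are assumed to already be probabilities
probs = np.exp(logits) / np.exp(logits).sum()
ce_from_probs = -(y_true * np.log(probs)).sum()

# mismatched usage: treating the probabilities as if they were logits
ce_mismatched = -(y_true * (probs - np.log(np.exp(probs).sum()))).sum()
```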
|
tensorflow|keras
| 1
|
375,510
| 61,081,125
|
Opening txt data files from url in Python
|
<p>I'm trying to open some data from a URL but it gives me problems and errors I don't know how to deal with. My goal is to get two arrays of data, one for time input and another for some kind of variable. This is what I've tried:</p>
<pre><code>url = "https://github.com/giulio99/Relazione-FFT/blob/master/dati%20giulio/datilunghiquad_b.txt"
df = pd.read_csv(url, sep="\t")
df.columns =['time', 'voltage']
t1 = list(df.time)
V1 = list(df.voltage)
</code></pre>
<p>The error that I get is</p>
<pre><code>pandas.errors.ParserError: Error tokenizing data. C error: Expected 1 fields in line 823, saw 2
</code></pre>
<p>Can someone help me?</p>
|
<p>You are downloading the wrong file: you want the raw file, not the HTML page.<br>
Notice the <code>Raw</code> button on that page; it gives you this address:<br>
<code>https://raw.githubusercontent.com/giulio99/Relazione-FFT/master/dati%20giulio/datilunghiquad_b.txt</code></p>
|
python-3.x|pandas|url
| 1
|
375,511
| 61,167,710
|
Extracting table from a website using Pandas
|
<p>Hi I wanted to extract a table from the url = '<a href="http://www.nativeplant.com/plants/search/input" rel="nofollow noreferrer">http://www.nativeplant.com/plants/search/input</a>'
I proceeded with using Pandas in Python 3</p>
<pre><code>import requests
import pandas as pd
url = 'http://www.nativeplant.com/plants/search/input'
df_list = pd.read_html(html)
df = df_list[0]
print(df)
df.to_csv('my data.csv')
</code></pre>
<p>however when i am calling the read_html function it is throwing an error saying : Name "html" is not defined.
On replacing the inside of the function with "url" I am getting an error saying : No Tables found</p>
<p>I don't understand where I am going wrong. </p>
|
<p>I don't know how to make this work with pd.read_html(). Moreover, when I try a simple GET request with your URL, no table elements are returned; I believe this is why plain pd.read_html doesn't work.</p>
<p>However,</p>
<p>when you click on the "Update List" button, it triggers a GET request for the data (F12 -&gt; Network -&gt; HTML).</p>
<p>You need to specify the elements in the <code>params</code> dict. Each element corresponds to the drop-downs and checkboxes in the search form. Use the browser inspector to make the connection.</p>
<pre><code>import requests as rq
import pandas as pd
from bs4 import BeautifulSoup as bs
params = {"check":"1","flower_color":"","Ssite":"","searchstr":"","bloom_time":"","Wsite":"","H_Oper":"gt","HTSEL":"","sel_region":"","SUBMIT":"Update List","sortkey":"Scientific_Name","C_Oper":"eq","C":"","Origin":"N","show_latin":"1","show_blooms":"1","W_Oper":"eq","W":"","show_common":"1","S_Oper":"eq","S":"","show_light":"1","show_moisture":"1","show_Thumbs":"1","Thumb_size":"50","show_height":"1","show_price":"1"}
headers = {"User-Agent" : "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:76.0) Gecko/20100101 Firefox/76.0"}
url = 'http://www.nativeplant.com/plants/search/report?'
resp = rq.get(url, headers=headers, params=params)
soup = bs(resp.content)
## c = colnames ; d = data
c = [[th.text.strip() for th in tr.find_all("th")] for tr in soup.find_all("table")[0].find_all('tr')][0]
d = [[td.text.strip() for td in tr.find_all('td')] for tr in soup.find_all("table")[0].find_all('tr')][1:] # this is the second table of the page
df = pd.DataFrame(d, columns=c)
</code></pre>
|
python|pandas|web|screen-scraping
| 0
|
375,512
| 61,147,694
|
No module named 'tensorflow.python.keras.engine.base_layer_v1' in python code with tensor flow keras
|
<p>Hi, I'm running this code in Google Colab and I get this error: <strong>No module named 'tensorflow.python.keras.engine.base_layer_v1' in python code with tensor flow keras</strong></p>
<p>I did use tensorflow.keras instead of keras since I use tensorflow v2.1.0 and keras v2.3.0-tf. I tried both tensorflow v2.1.0 and v2.2.0-rc2.</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.layers import Dense, Embedding, LSTM, SpatialDropout1D
from sklearn.model_selection import train_test_split
MAX_NB_WORDS=50000
EMBEDDING_DIM=100
model = tf.keras.Sequential()
model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=train.shape[1]))
model.add(SpatialDropout1D(0.2))
model.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model.add(Dense(13, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())
epochs = 5
batch_size = 64
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, validation_split=0.1, callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
accr = model.evaluate(x_test,y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]))
</code></pre>
|
<p>I had a similar error while working with GaborNet-CNN. I tried the following and it worked in my case.</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
from tqdm import tqdm
import keras
from keras import backend as K
from keras import activations, initializers, regularizers, constraints, metrics
from keras.datasets import cifar10
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential, Model
from keras.layers import (Dense, Dropout, Activation, Flatten, Reshape, Layer,
BatchNormalization, LocallyConnected2D,
ZeroPadding2D, Conv2D, MaxPooling2D, Conv2DTranspose,
GaussianNoise, UpSampling2D, Input)
from keras.utils import conv_utils, multi_gpu_model
from keras.layers import Lambda
from keras.engine import Layer, InputSpec
from keras.legacy import interfaces
</code></pre>
|
python|tensorflow|deep-learning|google-colaboratory|tf.keras
| 2
|
375,513
| 61,080,410
|
CNN accuracy y-axis range
|
<p>I have trained my CNN model and obtained the accuracy graph, where I saved the training history using pickle.</p>
<p>When I plot the graph, the y-axis ranges from 0 to 1. How is it possible to get a range of 0-100 with the already-saved pickle values?</p>
<pre><code>from keras.models import Sequential
from keras.layers import Conv2D,Activation,MaxPooling2D,Dense,Flatten,Dropout
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from IPython.display import display
import matplotlib.pyplot as plt
from PIL import Image
from sklearn.metrics import classification_report, confusion_matrix
import keras
from keras.layers import BatchNormalization
from keras.optimizers import Adam
import pickle
from keras.models import load_model
f = open('32_With_Dropout_rl_001_1_layer', 'rb')
history = pickle.load(f)
f = open('32_With_Dropout_rl_001_2_layers', 'rb')
history1 = pickle.load(f)
f = open('32_With_Dropout_rl_001_3_layers', 'rb')
history2 = pickle.load(f)
# summarize history for accuracy
plt.plot(history['val_accuracy'])
plt.plot(history1['val_accuracy'])
plt.plot(history2['val_accuracy'])
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['CNN_1', 'CNN_2', 'CNN_3'], loc='lower right')
plt.show()
# summarize history for loss
plt.plot(history['val_loss'])
plt.plot(history1['val_loss'])
plt.plot(history2['val_loss'])
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['CNN_1', 'CNN_2', 'CNN_3'], loc='upper left')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/LukOE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LukOE.png" alt="enter image description here"></a></p>
|
<p>You can multiply each value in the <code>'val_accuracy'</code> list by 100. Since your pickled <code>history</code> is already a plain dict, index it directly (for a Keras <code>History</code> object you would use <code>history.history['val_accuracy']</code> instead):</p>
<pre><code>val_accuracy = [i * 100 for i in history['val_accuracy']]
plt.plot(val_accuracy)
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['Val Accuracy'], loc='upper left')
plt.show()
</code></pre>
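If you would rather leave the pickled values untouched, matplotlib can do the scaling at display time with `PercentFormatter`. A sketch with a hypothetical list of fractional accuracies:

```python
import matplotlib
matplotlib.use('Agg')  # headless-safe backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter

val_accuracy = [0.50, 0.61, 0.70]  # hypothetical pickled fractions
fig, ax = plt.subplots()
ax.plot(val_accuracy)
# xmax=1 tells the formatter that 1.0 corresponds to 100%
ax.yaxis.set_major_formatter(PercentFormatter(xmax=1, decimals=0))
ax.set_ylabel('accuracy')
ax.set_xlabel('epoch')
```

This keeps the stored data as fractions and only changes the tick labels.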
<p><strong>Model and Plot Example -</strong></p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Conv2D, Flatten, Dropout, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
import os
import numpy as np
import matplotlib.pyplot as plt
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
path_to_zip = tf.keras.utils.get_file('cats_and_dogs.zip', origin=_URL, extract=True)
PATH = os.path.join(os.path.dirname(path_to_zip), 'cats_and_dogs_filtered')
train_dir = os.path.join(PATH, 'train')
validation_dir = os.path.join(PATH, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
batch_size = 128
epochs = 15
IMG_HEIGHT = 150
IMG_WIDTH = 150
train_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our training data
validation_image_generator = ImageDataGenerator(rescale=1./255,brightness_range=[0.5,1.5]) # Generator for our validation data
train_data_gen = train_image_generator.flow_from_directory(batch_size=batch_size,
directory=train_dir,
shuffle=True,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
val_data_gen = validation_image_generator.flow_from_directory(batch_size=batch_size,
directory=validation_dir,
target_size=(IMG_HEIGHT, IMG_WIDTH),
class_mode='binary')
model = Sequential([
Conv2D(16, 3, padding='same', activation='relu', input_shape=(IMG_HEIGHT, IMG_WIDTH ,3)),
MaxPooling2D(),
Conv2D(32, 3, padding='same', activation='relu'),
MaxPooling2D(),
Conv2D(64, 3, padding='same', activation='relu'),
MaxPooling2D(),
Flatten(),
Dense(512, activation='relu'),
Dense(1)
])
model.compile(optimizer="adam",
loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit_generator(
train_data_gen,
steps_per_epoch=total_train // batch_size,
epochs=epochs,
validation_data=val_data_gen,
validation_steps=total_val // batch_size)
val_accuracy = [i * 100 for i in history.history['val_accuracy']]
plt.plot(val_accuracy)
plt.title('Model Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['Val Accuracy'], loc='upper left')
plt.show()
</code></pre>
<p>Output -</p>
<pre><code>Found 2000 images belonging to 2 classes.
Found 1000 images belonging to 2 classes.
Epoch 1/15
15/15 [==============================] - 11s 763ms/step - loss: 0.8592 - accuracy: 0.5036 - val_loss: 0.6932 - val_accuracy: 0.4989
Epoch 2/15
15/15 [==============================] - 12s 767ms/step - loss: 0.6926 - accuracy: 0.5021 - val_loss: 0.6927 - val_accuracy: 0.5000
Epoch 3/15
15/15 [==============================] - 11s 740ms/step - loss: 0.6908 - accuracy: 0.4989 - val_loss: 0.6830 - val_accuracy: 0.5000
Epoch 4/15
15/15 [==============================] - 11s 746ms/step - loss: 0.6752 - accuracy: 0.5235 - val_loss: 0.6534 - val_accuracy: 0.5580
Epoch 5/15
15/15 [==============================] - 11s 748ms/step - loss: 0.6401 - accuracy: 0.5865 - val_loss: 0.6111 - val_accuracy: 0.6127
Epoch 6/15
15/15 [==============================] - 11s 747ms/step - loss: 0.5673 - accuracy: 0.6779 - val_loss: 0.5867 - val_accuracy: 0.6786
Epoch 7/15
15/15 [==============================] - 11s 747ms/step - loss: 0.5347 - accuracy: 0.7196 - val_loss: 0.5962 - val_accuracy: 0.6964
Epoch 8/15
15/15 [==============================] - 11s 748ms/step - loss: 0.4618 - accuracy: 0.7879 - val_loss: 0.6002 - val_accuracy: 0.6897
Epoch 9/15
15/15 [==============================] - 11s 745ms/step - loss: 0.4271 - accuracy: 0.7906 - val_loss: 0.5649 - val_accuracy: 0.6931
Epoch 10/15
15/15 [==============================] - 11s 753ms/step - loss: 0.3839 - accuracy: 0.8125 - val_loss: 0.5892 - val_accuracy: 0.7042
Epoch 11/15
15/15 [==============================] - 11s 750ms/step - loss: 0.3151 - accuracy: 0.8558 - val_loss: 0.6658 - val_accuracy: 0.6629
Epoch 12/15
15/15 [==============================] - 11s 751ms/step - loss: 0.2736 - accuracy: 0.8686 - val_loss: 0.6635 - val_accuracy: 0.7188
Epoch 13/15
15/15 [==============================] - 11s 748ms/step - loss: 0.2423 - accuracy: 0.8868 - val_loss: 0.7478 - val_accuracy: 0.7054
Epoch 14/15
15/15 [==============================] - 11s 749ms/step - loss: 0.2192 - accuracy: 0.9092 - val_loss: 0.8924 - val_accuracy: 0.6719
Epoch 15/15
15/15 [==============================] - 11s 751ms/step - loss: 0.1754 - accuracy: 0.9215 - val_loss: 0.7900 - val_accuracy: 0.7087
</code></pre>
<p>Plot Output -</p>
<p><a href="https://i.stack.imgur.com/pgEZ8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pgEZ8.png" alt="enter image description here"></a></p>
|
python-3.x|tensorflow|keras|conv-neural-network
| 1
|
375,514
| 61,065,013
|
Loop through rows and columns of different data sets in Python
|
<p>I am new to Python and am struggling to loop through the rows and columns of two different datasets, to generate an array of values.</p>
<p>I have two dataframes (parameterMatrix and growthRates); one shows an array of species and the strengths of their interactions, and the other shows the growth rate of each species. </p>
<p><strong><em>parameterMatrix</em></strong>:</p>
<pre><code> herbivores youngScrub matureScrub sapling matureTree grassHerbs
herbivores 0.0 0.02 0 0.05 0 0.5
youngScrub -0.2 0.00 0 0.00 0 0.0
matureScrub 0.0 0.00 0 0.00 0 0.0
sapling -0.2 0.00 0 0.00 0 0.0
matureTree 0.0 0.00 0 0.00 0 0.0
grassHerbs -5.0 0.00 0 0.00 0 0.0
</code></pre>
<p><strong><em>growthRates</em></strong>:</p>
<pre><code>herbivores youngScrub matureScrub sapling matureTree grassHerbs
0 0.1 0 0.1 0.1 0 0.2
</code></pre>
<p>I am trying to generate an array of values for each of the six species, that can ultimately be used to calculate the rate of change in each species over time. I have manually written out each of the equations (see below), but I am wondering if there is a faster way to do this, e.g. by looping through each of these data frames.</p>
<pre><code>def ecoNetwork(X, t=0):
num_herbivores = X[0]
num_youngScrub = X[1]
num_matureScrub = X[2]
num_sapling = X[3]
num_matureTree = X[4]
num_grassHerb = X[5]
return np.array([
# herbivores
(growthRates['herbivores'][0])*num_herbivores + (parameterMatrix['herbivores']['herbivores']*num_herbivores*num_herbivores)
+ (parameterMatrix['herbivores']['youngScrub']*num_herbivores*num_youngScrub)
+ (parameterMatrix['herbivores']['matureScrub']*num_herbivores*num_matureScrub)
+ (parameterMatrix['herbivores']['sapling']*num_herbivores*num_sapling)
+ (parameterMatrix['herbivores']['matureTree']*num_herbivores*num_matureTree)
+ (parameterMatrix['herbivores']['grassHerbs']*num_herbivores*num_grassHerb)
,
# young scrub (X1)
(growthRates['youngScrub'][0])*num_youngScrub + (parameterMatrix['youngScrub']['herbivores']*num_youngScrub*num_herbivores)
+ (parameterMatrix['youngScrub']['youngScrub']*num_youngScrub*num_youngScrub)
+ (parameterMatrix['youngScrub']['matureScrub']*num_youngScrub*num_matureScrub)
+ (parameterMatrix['youngScrub']['sapling']*num_youngScrub*num_sapling)
+ (parameterMatrix['youngScrub']['matureTree']*num_youngScrub*num_matureTree)
+ (parameterMatrix['youngScrub']['grassHerbs']*num_youngScrub*num_grassHerb)
,
# mature scrub
(growthRates['matureScrub'][0])*num_matureScrub + (parameterMatrix['matureScrub']['herbivores']*num_matureScrub*num_herbivores)
+ (parameterMatrix['matureScrub']['youngScrub']*num_matureScrub*num_youngScrub)
+ (parameterMatrix['matureScrub']['matureScrub']*num_matureScrub*num_matureScrub)
+ (parameterMatrix['matureScrub']['sapling']*num_matureScrub*num_sapling)
+ (parameterMatrix['matureScrub']['matureTree']*num_matureScrub*num_matureTree)
+ (parameterMatrix['matureScrub']['grassHerbs']*num_matureScrub*num_grassHerb)
,
# saplings
(growthRates['sapling'][0])*num_sapling + (parameterMatrix['sapling']['herbivores']*num_sapling*num_herbivores)
+ (parameterMatrix['sapling']['youngScrub']*num_sapling*num_youngScrub)
+ (parameterMatrix['sapling']['matureScrub']*num_sapling*num_matureScrub)
+ (parameterMatrix['sapling']['sapling']*num_sapling*num_sapling)
+ (parameterMatrix['sapling']['matureTree']*num_sapling*num_matureTree)
+ (parameterMatrix['sapling']['grassHerbs']*num_sapling*num_grassHerb)
,
# mature trees
(growthRates['matureTree'][0])*num_matureTree + (parameterMatrix['matureTree']['herbivores']*num_matureTree*num_herbivores)
+ (parameterMatrix['matureTree']['youngScrub']*num_matureTree*num_youngScrub)
+ (parameterMatrix['matureTree']['matureScrub']*num_matureTree*num_matureScrub)
+ (parameterMatrix['matureTree']['sapling']*num_matureTree*num_sapling)
+ (parameterMatrix['matureTree']['matureTree']*num_matureTree*num_matureTree)
+ (parameterMatrix['matureTree']['grassHerbs']*num_matureTree*num_grassHerb)
,
# grass & herbaceous plants
(growthRates['grassHerbs'][0])*num_grassHerb + (parameterMatrix['grassHerbs']['herbivores']*num_grassHerb*num_herbivores)
+ (parameterMatrix['grassHerbs']['youngScrub']*num_grassHerb*num_youngScrub)
+ (parameterMatrix['grassHerbs']['matureScrub']*num_grassHerb*num_matureScrub)
+ (parameterMatrix['grassHerbs']['sapling']*num_grassHerb*num_sapling)
+ (parameterMatrix['grassHerbs']['matureTree']*num_grassHerb*num_matureTree)
+ (parameterMatrix['grassHerbs']['grassHerbs']*num_grassHerb*num_grassHerb)
])
# time points
t = np.linspace(0, 50)
# Initial conditions
X0=np.empty(6)
X0[0]= 10
X0[1] = 30
X0[2] = 50
X0[3] = 70
X0[4] = 90
X0[5] = 110
X0 = np.array([X0[0], X0[1], X0[2], X0[3], X0[4], X0[5]])
# Integrate the ODEs
X = integrate.odeint(ecoNetwork, X0, t)
</code></pre>
<p>Is this possible and if so, what's the best way to do it?</p>
|
<p>Start by storing your species in a list:</p>
<pre><code>species = ["herbivores", "youngScrub", "matureScrub", "sapling", "matureTree", "grassHerbs"]
</code></pre>
<p>Then you can loop over this list instead of manually typing out each one:</p>
<pre><code>new_array = []
for outer_index, outer_animal in enumerate(species):
    new_sum = growthRates[outer_animal][0] * X[outer_index]
    for inner_index, inner_animal in enumerate(species):
        new_sum += parameterMatrix[outer_animal][inner_animal] * X[outer_index] * X[inner_index]
    new_array.append(new_sum)
</code></pre>
<p>Going further, I'd encourage you to dig deeper into <code>pandas</code>; it's a great Python library for working with dataframes.</p>
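The two loops can also be collapsed into a single matrix expression. Below is a sketch (the <code>parameterMatrix</code> and <code>growthRates</code> values are made-up placeholders standing in for the real frames) using the identity that each right-hand side equals <code>X * (r + A.T @ X)</code>, where <code>A[inner, outer]</code> corresponds to <code>parameterMatrix[outer][inner]</code>:

```python
import numpy as np
import pandas as pd

species = ["herbivores", "youngScrub", "matureScrub",
           "sapling", "matureTree", "grassHerbs"]

# placeholder frames standing in for the real parameterMatrix / growthRates
parameterMatrix = pd.DataFrame(np.arange(36).reshape(6, 6) / 100.0,
                               index=species, columns=species)
growthRates = pd.DataFrame([[0.1, 0.0, 0.1, 0.1, 0.0, 0.2]], columns=species)

A = parameterMatrix.values   # A[inner, outer] == parameterMatrix[outer][inner]
r = growthRates.iloc[0].values

def ecoNetwork(X, t=0):
    X = np.asarray(X, dtype=float)
    # dX_i/dt = r_i*X_i + sum_j parameterMatrix[i][j]*X_i*X_j  ==  X * (r + A.T @ X)
    return X * (r + A.T @ X)
```

This removes both explicit loops and is typically much faster inside <code>odeint</code>.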
|
python|pandas|loops|dataframe
| 2
|
375,515
| 60,965,602
|
Pandas assign group numbers for each time bin
|
<p>I have a pandas dataframe that looks like below.</p>
<pre><code>Key Name Val1 Val2 Timestamp
101 A 10 1 01-10-2019 00:20:21
102 A 12 2 01-10-2019 00:20:21
103 B 10 1 01-10-2019 00:20:26
104 C 20 2 01-10-2019 14:40:45
105 B 21 3 02-10-2019 09:04:06
106 D 24 3 02-10-2019 09:04:12
107 A 24 3 02-10-2019 09:04:14
108 E 32 2 02-10-2019 09:04:20
109 A 10 1 02-10-2019 09:04:22
110 B 10 1 02-10-2019 10:40:49
</code></pre>
<p>Starting from the earliest timestamp, that is, '01-10-2019 00:20:21', I need to create time bins of 10 seconds each and assign the same group number to all rows whose timestamps fall within the same bin.
The output should look as below.</p>
<pre><code>Key Name Val1 Val2 Timestamp Group
101 A 10 1 01-10-2019 00:20:21 1
102 A 12 2 01-10-2019 00:20:21 1
103 B 10 1 01-10-2019 00:20:26 1
104 C 20 2 01-10-2019 14:40:45 2
105 B 21 3 02-10-2019 09:04:06 3
106 D 24 3 02-10-2019 09:04:12 4
107 A 24 3 02-10-2019 09:04:14 4
108 E 32 2 02-10-2019 09:04:20 4
109 A 10 1 02-10-2019 09:04:22 5
110 B 10 1 02-10-2019 10:40:49 6
</code></pre>
<p>First time bin: '01-10-2019 00:20:21' to '01-10-2019 00:20:30',
Next time bin: '01-10-2019 00:20:31' to '01-10-2019 00:20:40',
Next time bin: '01-10-2019 00:20:41' to '01-10-2019 00:20:50',
Next time bin: '01-10-2019 00:20:51' to '01-10-2019 00:21:00',
Next time bin: '01-10-2019 00:21:01' to '01-10-2019 00:21:10'
and so on.. Based on these time bins, 'Group' is assigned for each row.
It is not mandatory to have consecutive group numbers(If a time bin is not present, it's ok to skip that group number).</p>
<p>I have generated this using a for loop, but it takes a lot of time if the data spans several months.
Please let me know if this can be done as a pandas operation in a single line of code. Thanks.</p>
|
<p>Here is an example without an explicit loop. The main approach is to map each timestamp's seconds onto the start of its bin and then use <code>ngroup()</code>.</p>
<pre><code>02-10-2019 09:04:12 -> 02-10-2019 09:04:11
02-10-2019 09:04:14 -> 02-10-2019 09:04:11
02-10-2019 09:04:20 -> 02-10-2019 09:04:11
02-10-2019 09:04:21 -> 02-10-2019 09:04:21
02-10-2019 09:04:25 -> 02-10-2019 09:04:21
...
</code></pre>
<p>I use a new temporary column to hold the bin each timestamp falls into.</p>
<pre><code>df = pd.DataFrame.from_dict({
'Name': ('A', 'A', 'B', 'C', 'B', 'D', 'A', 'E', 'A', 'B'),
'Val1': (1, 2, 1, 2, 3, 3, 3, 2, 1, 1),
'Timestamp': (
'2019-01-10 00:20:21',
'2019-01-10 00:20:21',
'2019-01-10 00:20:26',
'2019-01-10 14:40:45',
'2019-02-10 09:04:06',
'2019-02-10 09:04:12',
'2019-02-10 09:04:14',
'2019-02-10 09:04:20',
'2019-02-10 09:04:22',
'2019-02-10 10:40:49',
)
})
# convert str to Timestamp
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
# your specific ranges. customize if you need
def sec_to_group(x):
if 0 <= x.second <= 10:
x = x.replace(second=0)
elif 11 <= x.second <= 20:
x = x.replace(second=11)
elif 21 <= x.second <= 30:
x = x.replace(second=21)
elif 31 <= x.second <= 40:
x = x.replace(second=31)
elif 41 <= x.second <= 50:
x = x.replace(second=41)
elif 51 <= x.second <= 59:
x = x.replace(second=51)
return x
# new column formated_dt(temporary) with formatted seconds
df['formated_dt'] = df['Timestamp'].apply(sec_to_group)
# group by new column + ngroup() and drop
df['Group'] = df.groupby('formated_dt').ngroup()
df.drop(columns=['formated_dt'], inplace=True)
print(df)
</code></pre>
<p>Output:</p>
<pre><code># Name Val1 Timestamp Group
# 0 A 1 2019-01-10 00:20:21 0 <- ngroup() calculates from 0
# 1 A 2 2019-01-10 00:20:21 0
# 2 B 1 2019-01-10 00:20:26 0
# 3 C 2 2019-01-10 14:40:45 1
# 4 B 3 2019-02-10 09:04:06 2
# ....
</code></pre>
<p>Also you can try to use <a href="https://stackoverflow.com/questions/20095593/pandas-group-by-n-seconds-and-apply-arbitrary-rolling-function">TimeGrouper or resample</a>. </p>
<p>Hope this helps.</p>
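If bins anchored at whole multiples of 10 seconds are acceptable (rather than the exact :x1-to-:x0 ranges in the question), this collapses to essentially one line with <code>dt.floor</code>. A sketch with made-up timestamps:

```python
import pandas as pd

df = pd.DataFrame({'Timestamp': pd.to_datetime([
    '2019-10-01 00:20:21', '2019-10-01 00:20:21', '2019-10-01 00:20:26',
    '2019-10-01 14:40:45', '2019-10-02 09:04:06'])})

# floor each timestamp to the start of its 10-second bin, then number the bins
df['Group'] = df.groupby(df['Timestamp'].dt.floor('10s')).ngroup() + 1
```

Note the bin boundaries differ slightly from the question's (here [:20, :30) instead of [:21, :30]), so adapt if the exact edges matter.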
|
python|pandas|numpy|timestamp|grouping
| 1
|
375,516
| 60,987,813
|
Python: SettingWithCopyWarning
|
<p>Having read this <a href="https://www.stackoverflow.com/a/42773096/4487805">answer</a>, I tried to do the following to avoid <code>SettingWithCopyWarning</code>. </p>
<p>So I did below. Yet it still generates the warning below. What have I done wrong ? </p>
<pre><code>df_filtered.loc[:,'MY_DT'] = pd.to_datetime(df_filtered['MY_DT'])
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>My column was originally a string</p>
<pre><code>df_filtered['MY_DT']
Out[3]:
0 4/24/2020
1 4/24/2020
2 4/24/2020
3 4/24/2020
10 4/24/2020
...
1937 4/30/2020
1938 4/30/2020
1939 4/30/2020
1940 4/30/2020
1941 4/30/2020
Name: MY_DT, Length: 1896, dtype: object
</code></pre>
|
<p>Probably <code>df_filtered</code> is a slice of another dataframe (<code>df</code>?).</p>
<p>This warning means that you try to change <code>df_filtered</code> which is a slice of <code>df</code>, and it will not change <code>df</code>.</p>
<p>In order to avoid this warning you can try to copy the slice:</p>
<pre><code>df_filtered = df_filtered.copy()
</code></pre>
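A minimal sketch of the pattern, with a made-up <code>flag</code> column used to produce the filtered slice:

```python
import pandas as pd

df = pd.DataFrame({'MY_DT': ['4/24/2020', '4/30/2020'], 'flag': [1, 0]})

# .copy() makes the slice an independent frame, so later assignments
# neither warn nor silently fail to propagate back to df
df_filtered = df[df['flag'] == 1].copy()
df_filtered['MY_DT'] = pd.to_datetime(df_filtered['MY_DT'])
```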
|
python|pandas
| 1
|
375,517
| 60,911,532
|
Pandas could not be able to read given CSV file correctly
|
<p>I tried the code below but could not get a correct dataframe. Please let me know the correct code to read this CSV file using pandas.</p>
<p>CSV file:<a href="https://drive.google.com/file/d/1cxnRl9Jz7RTWg5hddZdT7eLExs5dq_Cf/view" rel="nofollow noreferrer">CSV File</a></p>
<p>My Code:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_csv("Data 8199 2391 6_6_2019 13_39_02.csv",sep="\r\t",skiprows=68,encoding = "ISO-8859-1",index_col=0)
df.head()
</code></pre>
<p>Unsatisfactory output:
<a href="https://i.stack.imgur.com/Lgq0Y.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lgq0Y.jpg" alt="enter image description here"></a></p>
|
<p>Try this:</p>
<pre><code>df = pd.read_csv("Data 8199 2391 6_6_2019 13_39_02.csv", delimiter="\t", skiprows=68,
encoding="utf-16", index_col=0)
print(df.head())
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Time 101 <RoomTemperature> (C) ... 319 <DU5> (C) 320 (C)
Scan ...
1 06-06-2019 13:39:02:392 21.170 ... 49.767 42.857
2 06-06-2019 13:39:32:376 21.138 ... 49.944 43.253
3 06-06-2019 13:40:02:376 21.116 ... 50.095 43.675
4 06-06-2019 13:40:32:376 21.215 ... 50.227 44.085
5 06-06-2019 13:41:02:376 21.234 ... 50.385 44.561
</code></pre>
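The key change is the encoding: this file is UTF-16, not ISO-8859-1. If you ever need to check a file yourself, a UTF-16 file usually starts with a byte-order mark, which can be sniffed with the standard library (the helper name below is my own):

```python
import codecs

def looks_like_utf16(path):
    # UTF-16 files typically begin with a BOM: b'\xff\xfe' (LE) or b'\xfe\xff' (BE)
    with open(path, 'rb') as f:
        head = f.read(2)
    return head in (codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)
```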
|
python|pandas
| 1
|
375,518
| 61,017,835
|
How can I speed up this nested loop using pandas?
|
<p>I am new to python and pandas. I am trying to assign new session IDs for around 2270 users, based on the time difference between the timestamps. If the time difference exceeds 4 hours, I want a new session ID. Otherwise, it would have to remain the same. In the end, I want a modified data frame with the new session ID column. Here is what I have so far: </p>
<pre><code>Eh_2016["NewSessionID"] = 1 #Initialize 'NewSessionID' column in df with 1
Eh_2016['elapsed'] = datetime.time(0,0,0,0) #Create an empty elapsed to calculate Time diff later
users = Eh_2016['Username'].unique() #find the number of unique Usernames
for user in users: #start of the loop
idx = Eh_2016.index[Eh_2016.Username == user] #Slice the original df
temp = Eh_2016[Eh_2016.Username == user] #Create a temp placeholder for the slice
counter = 1 # Initialize counter for NewSessionIDs
for i,t in enumerate(temp['Timestamp']): #Looping for each timestamp value
if i != 0 :
temp['elapsed'].iloc[i] = (t - temp['Timestamp'].iloc[i-1]) #Calculate diff btwn timestamps
if temp['elapsed'].iloc[i] > datetime.timedelta(hours = 4): #If time diff>4
counter +=1 #Increase counter value
temp['NewSessionID'].iloc[i]=counter #Assign new counter value as NewSessionID
else:
temp['NewSessionID'].iloc[i] = counter #Retain previous sessionID
Eh_2016.loc[idx,:]= temp #Replace original df with the updated slice
</code></pre>
<p>Any help on how to make this faster would be greatly appreciated! Let me know if you need more details. Thanks in advance. </p>
<p>Edit: Sample DF</p>
<pre><code> Username Timestamp NewSessionID Elapsed
126842 1095513 2016-06-30 20:58:30.477 1 00:00:00
126843 1095513 2016-07-16 07:54:47.986 2 15 days 10:56:17.509000
126844 1095513 2016-07-16 07:54:47.986 2 0 days 00:00:00
126845 1095513 2016-07-16 07:55:10.986 2 0 days 00:00:23
126846 1095513 2016-07-16 07:55:13.456 2 0 days 00:00:02.470000
... ... ... ...
146920 8641894 2016-08-11 22:26:14.051 31 0 days 04:50:23.415000
146921 8641894 2016-08-11 22:26:14.488 31 0 days 00:00:00.437000
146922 8641894 2016-08-12 20:01:02.419 32 0 days 21:34:47.931000
146923 8641894 2016-08-23 10:19:05.973 33 10 days 14:18:03.554000
146924 8641894 2016-09-25 11:30:35.540 34 33 days 01:11:29.567000
</code></pre>
|
<p>Filtering the whole dataframe for each user is <code>O(users*sessions)</code>, and it's not needed since you need to iterate over the whole thing anyway.</p>
<p>A more efficient approach would be to instead iterate over the dataframe in one pass, and store the temporary variables (counter, location of previous row, etc) in a separate dataframe indexed by users.</p>
<pre><code>Eh_2016["NewSessionID"] = 1  # Initialize 'NewSessionID' column in df with 1
Eh_2016['elapsed'] = pd.Timedelta(0)  # empty elapsed column for the time diffs
# create a new dataframe of unique users to hold the per-user state
users = pd.DataFrame({'Username': Eh_2016['Username'].unique()}).set_index('Username')
# one column for the index of the previous row seen for each user
users['Previous'] = -1
# one column for the session counter
users['Counter'] = 1
# iterate over each row in a single pass
for index, row in Eh_2016.iterrows():
    user = row['Username']
    previous = users.loc[user, 'Previous']
    if previous >= 0:  # if this is not the first row for this user
        Eh_2016.loc[index, 'elapsed'] = row['Timestamp'] - Eh_2016.loc[previous, 'Timestamp']  # diff btwn timestamps
        if Eh_2016.loc[index, 'elapsed'] > datetime.timedelta(hours=4):  # if time diff > 4h
            users.loc[user, 'Counter'] += 1  # increase the counter
    Eh_2016.loc[index, 'NewSessionID'] = users.loc[user, 'Counter']  # assign current counter as NewSessionID
    users.loc[user, 'Previous'] = index  # remember this row as the latest row for this user
</code></pre>
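If the rows are (or can be) sorted by user and timestamp, the whole computation also vectorizes with <code>groupby</code>, <code>diff</code>, and a cumulative sum over the gap flags, with no Python-level loop at all. A sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({
    'Username': [1095513, 1095513, 1095513, 8641894, 8641894, 8641894],
    'Timestamp': pd.to_datetime([
        '2016-06-30 20:58:30', '2016-07-16 07:54:47', '2016-07-16 07:55:10',
        '2016-08-11 22:26:14', '2016-08-12 20:01:02', '2016-08-23 10:19:05'])})
df = df.sort_values(['Username', 'Timestamp'])

# 1 wherever a user's gap to their previous row exceeds 4 hours
gap = (df.groupby('Username')['Timestamp'].diff() > pd.Timedelta(hours=4)).astype(int)
# each gap starts a new session; a running count per user numbers them
df['NewSessionID'] = gap.groupby(df['Username']).cumsum() + 1
```

Note this numbers sessions per user (restarting at 1 for each user) rather than continuing one global counter; adapt if needed.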
|
python|pandas|nested-loops
| 0
|
375,519
| 60,819,387
|
Is there any way to merge csv files directly when import into pandas?
|
<p>I have 35 csv files and I want to merge all the files together on the 'Id' column.
Is there any way to merge them all? I can do it manually like this, by loading each file and defining it as a dataframe:</p>
<pre><code>pd.merge(df_c1, df_c2, on='uuid')
</code></pre>
<p>But curious if there is any smart way?</p>
|
<p>credit to @cs95 for <a href="http://stackoverflow.com/questions/53645882/pandas-merging-101">Pandas Merging 101</a></p>
<pre><code>### read / create data frames
df_c1 = pd.DataFrame({'uuid': ['A', 'B', 'C', 'D'], 'valueA': np.random.randn(4)})
df_c2 = pd.DataFrame({'uuid': ['B', 'D', 'E', 'F'], 'valueB': np.random.randn(4)})
df_c3 = pd.DataFrame({'uuid': ['D', 'E', 'J', 'C'], 'valueC': np.ones(4)})
### list of data frames
dfs = [df_c1, df_c2, df_c3]
</code></pre>
<p>The following could then be used to <code>concat</code>: </p>
<pre><code>pd.concat([df.set_index('uuid') for df in dfs], axis = 1) #.reset_index() could be used to make uuid a column again
</code></pre>
<p>Lastly, I could add to the solution by reading in multiple csv with something like this:</p>
<pre><code>import pandas as pd
import glob
import os
df_list = []
# note: this method assumes all of your csv files are in a single folder
path = '<insert your file path here>'
all_files = glob.glob(os.path.join(path, '*.csv'))
for file in all_files:
    df1 = pd.read_csv(file).set_index('uuid')  # index on the merge key so concat aligns rows
    df_list.append(df1)
concatenated_df = pd.concat(df_list, axis=1)  # note: use axis=0 to append row-wise instead
</code></pre>
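If you specifically want <code>merge</code> semantics (e.g. outer joins with control over how keys combine) rather than index-aligned <code>concat</code>, <code>functools.reduce</code> folds the pairwise merge over the whole list:

```python
from functools import reduce
import pandas as pd

# stand-ins for the 35 frames read from csv
dfs = [pd.DataFrame({'uuid': ['A', 'B'], 'v1': [1, 2]}),
       pd.DataFrame({'uuid': ['B', 'C'], 'v2': [3, 4]}),
       pd.DataFrame({'uuid': ['A', 'C'], 'v3': [5, 6]})]

# fold: merge(merge(df1, df2), df3), ... keeping every uuid seen anywhere
merged = reduce(lambda left, right: pd.merge(left, right, on='uuid', how='outer'), dfs)
```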
|
python|pandas|csv|dataframe|merge
| 1
|
375,520
| 61,078,946
|
How to Get Reproducible Results (Keras, Tensorflow):
|
<p>To make the results reproducible I've read more than 20 articles and added the maximum number of seeding functions to my script ... but failed.</p>
<p>In the official source I read that there are 2 kinds of seeds - global and operation-level. Maybe the key to solving my problem is setting the operation seed, but I don't understand where to apply it.</p>
<p>Would you, please, help me to achieve reproducible results with tensorflow (version > 2.0)? Thank you very much.</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from keras.optimizers import adam
from sklearn.preprocessing import MinMaxScaler
np.random.seed(7)
import tensorflow as tf
tf.random.set_seed(7) #analogue of set_random_seed(seed_value)
import random
random.seed(7)
tf.random.uniform([1], seed=1)
tf.Graph.as_default #analogue of tf.get_default_graph().finalize()
rng = tf.random.experimental.Generator.from_seed(1234)
rng.uniform((), 5, 10, tf.int64) # draw a random scalar (0-D tensor) between 5 and 10
df = pd.read_csv("s54.csv",
delimiter = ';',
decimal=',',
dtype = object).apply(pd.to_numeric).fillna(0)
#data normalization
scaler = MinMaxScaler()
scaled_values = scaler.fit_transform(df)
df.loc[:,:] = scaled_values
X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,1:],
df.iloc[:,:1],
test_size=0.2,
random_state=7,
stratify = df.iloc[:,:1])
model = Sequential()
model.add(Dense(1200, input_dim=len(X_train.columns), activation='relu'))
model.add(Dense(150, activation='relu'))
model.add(Dense(80, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
loss="binary_crossentropy"
optimizer=adam(lr=0.01)
metrics=['accuracy']
epochs = 2
batch_size = 32
verbose = 0
model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
model.fit(X_train, y_train, epochs = epochs, batch_size=batch_size, verbose = verbose)
predictions = model.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, predictions>.5).ravel()
</code></pre>
|
<p>As a reference from the documentation:<br>
operations that rely on a random seed actually derive it from two seeds, the global and operation-level seeds. <code>tf.random.set_seed</code> sets the global seed.</p>
<p>Its interactions with operation-level seeds are as follows:</p>
<ol>
<li>If neither the global seed nor the operation seed is set: A randomly picked seed is used for this op.</li>
<li>If the operation seed is not set but the global seed is set: The system picks an operation seed from a stream of seeds determined by the global seed.</li>
<li>If the operation seed is set, but the global seed is not set: A default global seed and the specified operation seed are used to determine the random sequence.</li>
<li>If both the global and the operation seed are set: Both seeds are used in conjunction to determine the random sequence.</li>
</ol>
<h2>1st Scenario</h2>
<p>A random seed is picked by default, which is easy to notice from the results:
they will have different values every time you re-run the program or call the code multiple times.</p>
<pre><code>x_train = tf.random.normal((10,1), 1, 1, dtype=tf.float32)
print(x_train)
</code></pre>
<h2>2nd Scenario</h2>
<p>The global seed is set but the operation seed is not.
The first and second draws produce different values, but if you re-run or restart the code,
the seed for both stays the same: each draw generates the same result over and over again.</p>
<pre><code>tf.random.set_seed(2)
first = tf.random.normal((10,1), 1, 1, dtype=tf.float32)
print(first)
sec = tf.random.normal((10,1), 1, 1, dtype=tf.float32)
print(sec)
</code></pre>
<h2>3rd Scenario</h2>
<p>In this scenario the operation seed is set but the global seed is not.
If you re-run the code it will give you different results, but if you restart the runtime it will give you the same sequence of results as the previous run.</p>
<pre><code>x_train = tf.random.normal((10,1), 1, 1, dtype=tf.float32, seed=2)
print(x_train)
</code></pre>
<h2>4th scenario</h2>
<p>Both seeds will be used to determine the random sequence.
Changing the global and operation seed will give different results but restarting the runtime with the same seed will still give the same results.</p>
<pre><code>tf.random.set_seed(3)
x_train = tf.random.normal((10,1), 1, 1, dtype=tf.float32, seed=1)
print(x_train)
</code></pre>
<p><strong><em>I created a reproducible piece of code as a reference.<br> By setting the global seed, it always gives the same results.</em></strong></p>
<pre><code>import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
import numpy as np
import pandas as pd
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
## GLOBAL SEED ##
tf.random.set_seed(3)
x_train = tf.random.normal((10,1), 1, 1, dtype=tf.float32)
y_train = tf.math.sin(x_train)
x_test = tf.random.normal((10,1), 2, 3, dtype=tf.float32)
y_test = tf.math.sin(x_test)
model = Sequential()
model.add(Dense(1200, input_shape=(1,), activation='relu'))
model.add(Dense(150, activation='relu'))
model.add(Dense(80, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
loss="binary_crossentropy"
optimizer=tf.keras.optimizers.Adam(lr=0.01)
metrics=['mse']
epochs = 5
batch_size = 32
verbose = 1
model.compile(loss=loss,
optimizer=optimizer,
metrics=metrics)
history = model.fit(x_train, y_train, epochs = epochs, batch_size=batch_size, verbose = verbose)
predictions = model.predict(x_test)
print(predictions)
</code></pre>
<p><a href="https://i.stack.imgur.com/cFbTJ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/cFbTJ.jpg" alt="enter image description here"></a>
<br/>
Note: if you are using TensorFlow 2 or higher, Keras is already part of the API, so you should use <code>tf.keras</code> rather than the standalone Keras package.<br> All of these were run on Google Colab.</p>
|
tensorflow|keras|neural-network|tensorflow2.0
| 8
|
375,521
| 60,843,310
|
Bug in numpy.shape()?
|
<p>I am playing with day 10 of the <a href="https://adventofcode.com/2019/day/10" rel="nofollow noreferrer">2019 advent of code challenge</a> and found <code>np.shape()</code> behaving weirdly right at the start:</p>
<pre><code>In [45]:
import numpy as np
file = open('Day10.data')
astermap = file.read()
print(astermap)
Out [46]:
##.##..#.####...#.#.####
##.###..##.#######..##..
..######.###.#.##.######
.#######.####.##.#.###.#
[..total of 25 lines]
# astermap is a str, which I make a list of lists
In [46]:
astermap = astermap.split('\n')
astermap = list(map(list, astermap))
Here I found some strange behavior for numpy.shape: the shape should be (25,24)
In [48]:
np.shape(astermap)
Out[48]:
(25,)
In [49]:
np.shape(astermap[0])
Out[49]:
(24,)
In [51]:
type(astermap)
Out[51]:
list
In [52]:
type(astermap[0])
Out[52]:
list
</code></pre>
<p>When I run the same code with a smaller example <code>astermap = '.#..#\n.....\n#####\n....#\n...##'</code>, it works as expected and <code>np.shape(astermap)</code> returns <code>(5, 5)</code>. So I strongly expect that the reason must lie in the format of the str after reading the file. However, I can't see a difference of types between the str I import and the str I create by hand.</p>
<p>Can somebody explain?</p>
|
<p>When you convert a list of lists into a numpy array, NumPy only builds a multi-dimensional array if the inner lists all have the same length. Otherwise, it makes a 1-D array of Python list objects.</p>
<p>Example:</p>
<p>Running <code>np.array(list(map(list,(" asdf fdsa".split()))))</code> returns:</p>
<pre><code>array([['a', 's', 'd', 'f'],
['f', 'd', 's', 'a']], dtype='<U1')
</code></pre>
<p>Which has shape (2,4). But <code>np.array(list(map(list,(" asdf fdsafda".split()))))</code> returns:</p>
<pre class="lang-py prettyprint-override"><code>array([list(['a', 's', 'd', 'f']),
list(['f', 'd', 's', 'a', 'f', 'd', 'a'])], dtype=object)
</code></pre>
<p>Which has shape (2,).</p>
<p>Even if you convert the inner lists to numpy arrays, their lengths still won't show up in <code>np.shape()</code>, because they're not all the same length.</p>
<p>I'd assume it's very likely one of your lists is a different length, which could be due to splitting by newlines <code>'\n'</code>. It's possible one of your line values is an empty string <code>['']</code>.</p>
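A quick way to confirm and fix this: a file read with <code>.read()</code> typically ends with a newline, so <code>split('\n')</code> leaves a trailing empty string (one row of length 0), while <code>str.splitlines()</code> handles the trailing newline for you. A sketch with the small example from the question:

```python
import numpy as np

# toy input with the trailing newline a file read usually has
text = '.#..#\n.....\n#####\n....#\n...##\n'

rows_split = text.split('\n')   # last element is '' -> breaks the 2-D shape
rows_lines = text.splitlines()  # trailing newline handled for us

grid = [list(line) for line in rows_lines]
```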
|
python|numpy
| 0
|
375,522
| 61,024,276
|
How to implement a custom cost function in keras?
|
<p>I have the following cost function: argmin L1+L2, where L1 is the Mean Squared Error and L2 is
-λ Summation( Square((y) x (z) )), where y is the predicted output image and z is the given input image to the model: elementwise multiplication of y and z, then taking the square of it. λ is a trade-off parameter between L1 and L2. I am not sure how to implement this in Keras; I did it as follows:</p>
<pre><code>def custom_loss(i):
def loss(y_true, y_pred):
y_true=K.cast(y_true, dtype='float32')
y_pred=K.cast(y_pred, dtype='float32')
input_image=K.cast(i, dtype='float32')
mul=tf.math.multiply(input_image,y_pred)
L1=K.mean(K.square(mul),axis=1)
L2=K.mean(K.square(y_pred - y_true), axis=-1)
closs=L1-L2
return closs
return loss
</code></pre>
|
<p>To break your question down part by part:</p>
<blockquote>
<p>where L1 is Mean Squared Error </p>
</blockquote>
<p>Thus, <code>L1 = np.square(np.subtract(y_true,y_pred)).mean()</code></p>
<blockquote>
<p>L2 is -λ Summation( Square((y) x (z) )) where y is the predicted
output image and z is the given input image to model. Elementwise
multiplication of y and z and then taking square of it</p>
</blockquote>
<p>Thus, <code>L2 = np.sum(np.square(np.multiply(y_true,y_pred)))</code>. <strong><em>Notice that L2 will be a very large number for a loss.</em></strong></p>
<p>To Summarize this is how your loss function looks like - </p>
<pre><code>def loss(y_true, y_pred):
    y_true = img_to_array(y_true)
    y_pred = img_to_array(y_pred)
    L1 = np.square(np.subtract(y_true, y_pred)).mean()
    L2 = np.sum(np.square(np.multiply(y_true, y_pred)))
    return L1 - L2
</code></pre>
<p>I have written a simple code here to load a image as y_true and crop central part for y_pred and perform the loss you mentioned <strong>(doesn't make much meaning as the value is to big)</strong>.</p>
<p><strong>Code -</strong> </p>
<pre><code>import tensorflow as tf
from keras.preprocessing.image import load_img
from keras.preprocessing.image import img_to_array, array_to_img
from matplotlib import pyplot
# Load the image
y_true = load_img('/content/bird.jpg', target_size=(224, 224))
# Convert Image to array
image = img_to_array(y_true)
# Central crop image
image = tf.image.central_crop(image, np.random.uniform(0.50, 1.00))
# Resize to original size of image
image = tf.image.resize(image, (224, 224))
# convert the image to an array
y_pred = array_to_img(image)
def loss(y_true, y_pred):
    y_true = img_to_array(y_true)
    y_pred = img_to_array(y_pred)
    L1 = np.square(np.subtract(y_true, y_pred)).mean()
    L2 = np.sum(np.square(np.multiply(y_true, y_pred)))
    return L1 - L2
x = loss(y_true,y_pred)
print(x)
</code></pre>
<p>Output -</p>
<pre><code>-251577020000000.0
</code></pre>
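For the loss to be usable in <code>model.compile</code> it has to stay in tensor operations rather than NumPy. Below is a sketch of the closure pattern from the question with the trade-off λ included; the factory name and the default λ value are my own choices, not from the original:

```python
import tensorflow as tf

def make_custom_loss(input_image, lam=0.1):
    """L = MSE(y_true, y_pred) - lam * sum((y_pred * z)**2), z = model input."""
    z = tf.cast(input_image, tf.float32)
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, tf.float32)
        y_pred = tf.cast(y_pred, tf.float32)
        l1 = tf.reduce_mean(tf.square(y_pred - y_true))
        l2 = lam * tf.reduce_sum(tf.square(y_pred * z))
        return l1 - l2
    return loss
```

You would pass <code>make_custom_loss(input_tensor)</code> to <code>model.compile(loss=...)</code>; note that closing over a symbolic input tensor only works when the loss is built against the same model graph.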
|
python|python-3.x|tensorflow|keras|deep-learning
| 0
|
375,523
| 61,134,924
|
train on multiple devices
|
<p>I know that TensorFlow offers a Distributed Training API that can train on multiple devices such as multiple GPUs, CPUs, TPUs, or multiple computers (workers).
Following this doc: <a href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras</a> </p>
<p>But my question is: is there any way to split the training using data parallelism across multiple machines (including both mobile devices and computers)?</p>
<p>I would be really grateful if you have any tutorial/instruction.</p>
|
<p>As far as I know, TensorFlow only supports CPUs, TPUs, and GPUs for distributed training, provided all the devices are on the same network.</p>
<p>For connecting multiple devices, as you mentioned you can follow <a href="https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras" rel="nofollow noreferrer">Multi-worker training</a>.</p>
<p><code>tf.distribute.Strategy</code> is integrated into <code>tf.keras</code>, so when <code>model.fit</code> is used with a <code>tf.distribute.Strategy</code> instance and the model is created inside <code>strategy.scope()</code>, distributed variables are created. This allows the input data to be divided equally across your devices.
You can follow <a href="https://www.tensorflow.org/guide/distributed_training#using_tfdistributestrategy_with_tfkerasmodelfit" rel="nofollow noreferrer">this</a> tutorial for more details.<br />
Also <a href="https://www.tensorflow.org/tutorials/distribute/input" rel="nofollow noreferrer">Distributed input</a> could help you.</p>
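A minimal sketch of the <code>strategy.scope()</code> pattern described above; with no <code>TF_CONFIG</code> environment variable set this degenerates to a single worker, while on a real cluster each machine would define <code>TF_CONFIG</code> with the full worker list:

```python
import tensorflow as tf

# MirroredStrategy for one machine; swap in MultiWorkerMirroredStrategy()
# (plus TF_CONFIG on each host) for the multi-machine case
strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    # variables created in this scope are distributed variables
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer='adam', loss='mse')
```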
|
tensorflow|machine-learning|distributed-training
| 1
|
375,524
| 60,767,017
|
ImportError: DLL load failed while importing aggregations: The specified module could not be found
|
<p>I am new to Python and currently having trouble when <strong>importing</strong> some libraries.</p>
<p>I am using Python 3.8.</p>
<p>I have installed Pandas in the CMD using "pip install pandas"</p>
<p>If i go to Python folder i see that Pandas is installed:</p>
<p>C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\Lib\site-packages</p>
<p>But then i get this error message when trying to import Pandas in my script:</p>
<pre class="lang-none prettyprint-override"><code>Traceback (most recent call last):
File "<pyshell#1>", line 1, in <module>
import pandas as pd
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\__init__.py", line 55, in <module>
from pandas.core.api import (
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\api.py", line 29, in <module>
from pandas.core.groupby import Grouper, NamedAgg
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\groupby\__init__.py", line 1, in <module>
from pandas.core.groupby.generic import DataFrameGroupBy, NamedAgg, SeriesGroupBy
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\groupby\generic.py", line 60, in <module>
from pandas.core.frame import DataFrame
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\frame.py", line 124, in <module>
from pandas.core.series import Series
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\series.py", line 4572, in <module>
Series._add_series_or_dataframe_operations()
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\generic.py", line 10349, in _add_series_or_dataframe_operations
from pandas.core.window import EWM, Expanding, Rolling, Window
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\window\__init__.py", line 1, in <module>
from pandas.core.window.ewm import EWM # noqa:F401
File "C:\Users\VALENTINA\AppData\Local\Programs\Python\Python38-32\lib\site-packages\pandas\core\window\ewm.py", line 5, in <module>
import pandas._libs.window.aggregations as window_aggregations
ImportError: DLL load failed while importing aggregations: The specified module could not be found.
</code></pre>
<p>I get this error message when running my script in Visual Studio Code and also in IDLE.</p>
<p>I'd appreciate it if someone can help me.</p>
<p>Thanks</p>
|
<p>I was facing the same problem. I am using <code>python 3.7.5</code>. By default the <code>pip install pandas</code> command installs version 1.0.3, so I reverted to version 1.0.1.</p>
<pre><code>pip uninstall pandas
pip install pandas==1.0.1
</code></pre>
<p>Now it is working without error. You may try it.</p>
|
python|pandas|visual-studio-code
| 24
|
375,525
| 60,977,414
|
Getting complex coefficients in nearest SPD matrices
|
<p>I am writing a python 3.7 program and I need to get symmetric positive-definite matrices.</p>
<p>I used this code to get the nearest SPD (all eigenvalues have to be > 0) :</p>
<p><a href="https://stackoverflow.com/questions/43238173/python-convert-matrix-to-positive-semi-definite?noredirect=1&lq=1">Python: convert matrix to positive semi-definite</a></p>
<p>The fact is that I have to compute riemannian exponentials of symmetric matrices: <a href="https://i.stack.imgur.com/3dyOE.png" rel="nofollow noreferrer">Definition of the riemannian exponential</a>.</p>
<p>But I get matrices with complex coefficients. How could I get rid of that?</p>
<p>Note: Even with the aforementioned implementation, I get matrices with complex numbers.</p>
<p>I also tried to explore the <code>geomstats</code> package, but I do not know how to use it :</p>
<p><a href="https://geomstats.github.io/geometry.html#module-geomstats.geometry.spd_matrices" rel="nofollow noreferrer">https://geomstats.github.io/geometry.html#module-geomstats.geometry.spd_matrices</a></p>
<p>Thanks a lot</p>
<p><strong>Edit 1: my code and what I expect:</strong></p>
<p>This is my function:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.linalg import inv, sqrtm, expm  # nearestPD comes from the linked answer

def expNearestSPD(Y, Delta):
"""
Implementation of riemannian exponential using the nearestPD function
I try to always keep SPD matrices
But at the end it does not work
"""
Y2 = nearestPD(Y)
mul = np.matmul( np.matmul(inv(sqrtm(Y2)), Delta), inv(sqrtm(Y2)) )
mul = nearestPD(mul)
z1 = expm( mul )
z1 = nearestPD(z1)
z = np.matmul( np.matmul(sqrtm(Y2), z1), sqrtm(Y2) )
return nearestPD(z)
</code></pre>
<p>For example, here is <code>P</code>, a SPD matrix:</p>
<pre><code>P
array([[0.1, 0. , 0. , ..., 0. , 0. , 0. ],
[0. , 0.1, 0. , ..., 0. , 0. , 0. ],
[0. , 0. , 0.1, ..., 0. , 0. , 0. ],
...,
[0. , 0. , 0. , ..., 0.1, 0. , 0. ],
[0. , 0. , 0. , ..., 0. , 0.1, 0. ],
[0. , 0. , 0. , ..., 0. , 0. , 0.1]])
</code></pre>
<p>One can check:</p>
<pre><code>np.isrealobj(P)
True
</code></pre>
<p>But when I compute <code>expNearestSPD(P, P)</code> for example, I get:</p>
<pre><code>expNearestSPD(P, P)
array([[0.27182818+0.j, 0. +0.j, 0. +0.j, ...,
0. +0.j, 0. +0.j, 0. +0.j],
[0. +0.j, 0.27182818+0.j, 0. +0.j, ...,
0. +0.j, 0. +0.j, 0. +0.j],
[0. +0.j, 0. +0.j, 0.27182818+0.j, ...,
0. +0.j, 0. +0.j, 0. +0.j],
...,
[0. +0.j, 0. +0.j, 0. +0.j, ...,
0.27182818+0.j, 0. +0.j, 0. +0.j],
[0. +0.j, 0. +0.j, 0. +0.j, ...,
0. +0.j, 0.27182818+0.j, 0. +0.j],
[0. +0.j, 0. +0.j, 0. +0.j, ...,
0. +0.j, 0. +0.j, 0.27182818+0.j]])
</code></pre>
<p>I get complex, but very little, coefficients.</p>
|
<p>In fact, using the package geomstats, I found a solution.</p>
<p>This is exclusively for SPD matrices.</p>
<p>There is a function to directly compute the riemannian exponential, and even one to compute <code>A**t</code> for <code>A</code> SPD and <code>t</code> real.</p>
<p>I recommend using the latest version of geomstats (from GitHub).</p>
<p>Here, <code>N</code> is the order of the square matrices (so that their sizes are <code>N x N</code>).</p>
<pre class="lang-py prettyprint-override"><code>import geomstats.backend as gs
from geomstats.geometry.spd_matrices import SPDMatrices, SPDMetricAffine
gs.random.seed(0)
space = SPDMatrices(N)
metric = SPDMetricAffine(N)
def Exp(Y, Delta):
return metric.exp(Delta, Y)
# power M**t of an SPD matrix M, for real t
gs.linalg.powerm(M, t)
</code></pre>
|
python|numpy|matrix
| 0
|
375,526
| 60,908,529
|
Pulling data from a queue in background thread in python process
|
<p>Assuming you are processing a live stream of data like this:</p>
<p><a href="https://i.stack.imgur.com/YsP92.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YsP92.png" alt="async_loading_of_queued_data_in_python_process"></a></p>
<p>What would be the best way to have a background <code>Thread</code> to update the <code>data</code> variable while the main logic can <code>do_some_logic</code> within the endless loop?</p>
<p>I have some experience with clear start and end points of parallelization using <code>multiprocessing/multithreading</code>, but I am unsure how to continuously execute a background Thread updating an internal variable. Any advice would be helpful - Thanks!</p>
|
<p>Write an update function and periodically run a background thread. </p>
<pre><code>def update_data(data):
pass
</code></pre>
<pre><code>import threading
def my_inline_function(some_args):
# do some stuff
t = threading.Thread(target=update_data, args=some_args)
t.start()
# continue doing stuff
</code></pre>
<p>Understand the constraints of the <a href="https://wiki.python.org/moin/GlobalInterpreterLock" rel="nofollow noreferrer">GIL</a> so you know if threading is really what you need. </p>
<p>I'd suggest you look into async/await to get a better idea of how threading actually works. It's a similar model to JavaScript: your main program is single-threaded and context-switches into different parts of your application during IO-bound tasks. </p>
<p>If this doesn't meet your requirements, look into <a href="https://docs.python.org/2/library/multiprocessing.html" rel="nofollow noreferrer">multiprocessing</a> - specifically, how to spin a new process and how to share variables between processes </p>
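<p>Since the question asks for a variable that is refreshed <em>continuously</em> (rather than spawning a thread per update), a minimal sketch of that pattern is shown below; the shared dict, the sleep intervals, and the counter payload are all illustrative assumptions:</p>

```python
import threading
import time

latest = {"data": None}   # shared state, written by the background thread
stop = threading.Event()

def update_data():
    # keep refreshing the shared variable until asked to stop
    n = 0
    while not stop.is_set():
        n += 1
        latest["data"] = n
        time.sleep(0.01)

t = threading.Thread(target=update_data, daemon=True)
t.start()

for _ in range(5):        # stand-in for the endless do_some_logic loop
    time.sleep(0.05)
    print("current data:", latest["data"])

stop.set()
t.join()
```

<p>Single dict-item assignment is effectively atomic under the GIL, so the main loop always reads a consistent value without an explicit lock.</p>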
|
python|pandas|multithreading|asynchronous|queue
| 2
|
375,527
| 71,737,532
|
python pandas : building a recommender (question)
|
<p><em><strong>Hello and welcome to this post, I really appreciate your help</strong></em></p>
<p>I'm building a food recommender, and I came across two issues that have me stuck:</p>
<p>As you can see my dataset has a column of "Ingredients", and columns for nutritional values such as sodium, proteins.. ect.</p>
<p><strong>Here is an example :</strong></p>
<p><a href="https://i.stack.imgur.com/3xT5o.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3xT5o.png" alt="enter image description here" /></a></p>
<p>I then created a variable full of non-vegan foods; if any of them match the ingredients of a dish, my food recommender will inform us whether it is vegan free or not.</p>
<p><strong>code :</strong>
<a href="https://i.stack.imgur.com/l7R1x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l7R1x.png" alt="enter image description here" /></a></p>
<p>My problem is that the ingredients in the original dataset are set with quotes and my code doesn't take this into account, so all meals come out as "vegan free". How could I fix that to take into consideration <strong>'eggs'</strong> and not <strong>eggs</strong>? Also, this bunch of code takes approximately 4 hours for me to run, so could you tell me if I am doing anything else wrong in the meantime before it's too late.</p>
<p>My second question is about distinguishing between low and high calories:
at this point I get an error and don't know how to solve it at all.
<a href="https://i.stack.imgur.com/FXLhM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FXLhM.png" alt="enter image description here" /></a></p>
<p><strong>here is the error :</strong></p>
<p><a href="https://i.stack.imgur.com/lXxR8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lXxR8.png" alt="enter image description here" /></a></p>
<p><strong>Thank you so much in advance; here is the code so you can correct me easily:</strong></p>
<pre><code>vegan = ['eggs','Castoreum','cream','cheese','Lactose','Fish','turkey','horse','MeatBeef','lamb','Gelatin','eggs','Whey']#Ect...
#It is now time to make our vegan friends happy (part 2):
for i in raw_rec_na['ingredients'].index:
for v in vegan:
if(v not in raw_rec_na['ingredients'][i]):
raw_rec_na['food types'][i]='Vegan free!'
elif(v in raw_rec_na['ingredients'][i]):
raw_rec_na['food types'][i]='NOT Vegan free!'
#Let's now make the difference between low/high calories
raw_rec_na['calories_info'] = np.nan #creating new variable (NULL)
raw_rec_na['calories_info'] = raw_rec_na['calories_info'].astype('str')
for y in raw_rec_na['calories'].index:
if(v < 300):
raw_rec_na['calories_info'][y]='low in calories!'
elif(v > 300):
raw_rec_na['calories_info'][y]='high in calories!'
</code></pre>
|
<p>In row 3 of the Ingredients column the value is a list, so you have to unpack it first.</p>
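<p>A minimal sketch of that idea, assuming the <code>ingredients</code> column holds lists of strings (the sample rows and the <code>non_vegan</code> set are illustrative): stripping quotes and comparing whole elements against a set sidesteps the quoting problem, and a single <code>apply</code> over the column should also run far faster than the nested loop:</p>

```python
import pandas as pd

# note: this set contains NON-vegan ingredients
non_vegan = {"eggs", "cream", "cheese", "fish", "gelatin", "whey"}

df = pd.DataFrame({
    "ingredients": [["flour", "water"], ["'eggs'", "milk"], ["rice", "beans"]],
})

# strip surrounding quotes so "'eggs'" matches "eggs", then intersect with the set
df["food types"] = df["ingredients"].apply(
    lambda ing: "NOT Vegan free!"
    if non_vegan & {str(i).strip("'\" ").lower() for i in ing}
    else "Vegan free!"
)
print(df)
```

<p>Because the whole row is classified by a single set intersection, the result no longer depends on which ingredient happens to be checked last.</p>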
|
python|pandas|dataframe|indexing
| 0
|
375,528
| 71,753,163
|
Can't manage to open TensorFlow SavedModel for usage in Keras
|
<p>I'm kinda new to TensorFlow and Keras, so please excuse any accidental stupidity, but I have an issue. I've been trying to load in models from the <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md" rel="nofollow noreferrer">TensorFlow Detection Zoo</a>, but haven't had much success.</p>
<p>I can't figure out how to read these saved_model folders (they contain a saved_model.pb file, and an assets and variables folder) so that they're accepted by Keras. Nor can I figure out a way to convert these models so that they may be loaded in. I've tried converting the SavedModel to ONNX and then converting the ONNX model to Keras, but that didn't work. Trying to load the original model as a saved_model, and then trying to save this loaded model in another format, gave me no success either.</p>
|
<p>Since you are new to Tensorflow (and I guess deep learning) I would suggest you stick with the Object Detection API, because the detection zoo models interface best with it. If you have already downloaded the model, you just need to export it using the exporter_main_v2.py script. This article explains it very well: <a href="https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/auto_examples/plot_object_detection_saved_model.html#sphx-glr-auto-examples-plot-object-detection-saved-model-py" rel="nofollow noreferrer">link</a>.</p>
|
tensorflow|keras
| 0
|
375,529
| 71,606,580
|
Append row in pandas without iterrows()
|
<p>Given the pandas dataframe with three loans, I need to add the payments to the dataframe, where the payment amount is total loan amount / number of payments. If seq = 0, the row holds the loan amount; otherwise seq is the payment number. I can do this with <code>iterrows()</code>, however the dataframe is very large and I would like to find a different solution.</p>
<p>This is my attempt:</p>
<pre><code>import pandas as pd
rows = [{'id': 'x1', 'seq': 0, 'amount': 2000, 'payments': 4 },
{'id': 'x2', 'seq': 0, 'amount': 4000, 'payments': 2 },
{'id': 'x3', 'seq': 0, 'amount': 9000, 'payments': 3 }]
df = pd.DataFrame(rows)
df2 = df.copy()
for index, row in df.iterrows():
num = range(row['payments'])
for i in num:
payment_amount = row['amount'] / row['payments']
row2 = {'id': row['id'], 'seq': i + 1 , 'amount': payment_amount, 'payments': 0 }
df2 = df2.append(row2, ignore_index=True)
</code></pre>
<p>The result should be:</p>
<pre><code> id seq amount payments
0 x1 0 2000.0 4
1 x2 0 4000.0 2
2 x3 0 9000.0 3
3 x1 1 500.0 0
4 x1 2 500.0 0
5 x1 3 500.0 0
6 x1 4 500.0 0
7 x2 1 2000.0 0
8 x2 2 2000.0 0
9 x3 1 3000.0 0
10 x3 2 3000.0 0
11 x3 3 3000.0 0
</code></pre>
<p>However without using <code>iterrows()</code>. Is this possible?</p>
|
<p>Update:</p>
<pre><code>df_pay = df.iloc[df.index.repeat(df['payments'])]\
.eval('amount = amount / payments')\
.assign(payments=0)
df_pay['seq'] = df_pay.groupby('id').cumcount() + 1
pd.concat([df, df_pay], ignore_index=True)
</code></pre>
<p>Output:</p>
<pre><code> id seq amount payments
0 x1 0 2000.0 4
1 x2 0 4000.0 2
2 x3 0 9000.0 3
3 x1 1 500.0 0
4 x1 2 500.0 0
5 x1 3 500.0 0
6 x1 4 500.0 0
7 x2 1 2000.0 0
8 x2 2 2000.0 0
9 x3 1 3000.0 0
10 x3 2 3000.0 0
11 x3 3 3000.0 0
</code></pre>
<hr />
<p>Try this:</p>
<pre><code>pd.concat([df,
df.iloc[df.index.repeat(df['payments'])]\
.eval('amount = amount / payments')\
.assign(payments=0)])
</code></pre>
<p>Output:</p>
<pre><code> id seq amount payments
0 x1 0 2000.0 4
1 x2 0 4000.0 2
2 x3 0 9000.0 3
0 x1 0 500.0 0
0 x1 0 500.0 0
0 x1 0 500.0 0
0 x1 0 500.0 0
1 x2 0 2000.0 0
1 x2 0 2000.0 0
2 x3 0 3000.0 0
2 x3 0 3000.0 0
2 x3 0 3000.0 0
</code></pre>
<p>Trick using <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.repeat.html" rel="nofollow noreferrer"><code>pd.Index.repeat</code></a> to generate payment records.</p>
|
python|pandas
| 1
|
375,530
| 71,560,363
|
Pandas: Plotting / annotating from DataFrame
|
<p>There is this boring dataframe with stock data I have:</p>
<pre><code>date close MA100 buy sell
2022-02-14 324.95 320.12 0 0
2022-02-13 324.87 320.11 1 0
2022-02-12 327.20 321.50 0 0
2022-02-11 319.61 320.71 0 1
</code></pre>
<p>Then I am plotting the prices</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
df = ...
df['close'].plot()
df['MA100'].plot()
plt.show()
</code></pre>
<p>So far so good...
Then I'd like to show a marker on the chart if there was buy (green) or sell (red) on that day.
It's just to highlight if there was a transaction on that day. The exact intraday price at which the trade happened is not important.</p>
<p>So the x/y-coordinates could be the date and the close if there is a 1 in column buy (sell).</p>
<p>I am not sure how to implement this.
Would I need a loop to iterate over all rows where buy = 1 (sell = 1) and then somehow add these matches to the plot (probably with annotate?)</p>
<p>I'd really appreciate it if someone could point me in the right direction!</p>
|
<p>You can query the data frame for sell/buy and scatter plot:</p>
<pre><code>fig, ax = plt.subplots()
df.plot(x='date', y=['close', 'MA100'], ax=ax)
df.query("buy==1").plot.scatter(x='date', y='close', c='g', ax=ax)
df.query("sell==1").plot.scatter(x='date', y='close', c='r', ax=ax)
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/ottcM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ottcM.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib
| 1
|
375,531
| 71,508,824
|
How to handle hidden-cell output of 2-layer LSTM in PyTorch?
|
<p>I have made a network with a LSTM and a fully connected layer in PyTorch. I want to test how an increase in the LSTM layers affects my performance.</p>
<p>Say my input is (6, 9, 14), meaning batch size 6, sequence size 9, and feature size 14, and I'm working on a task that has 6 classes, so I expect a 6-element one-hot-encoded tensor as the prediction for a single sequence. The output of this network after the FC layer should be (6, 6), however, if I use 2 LSTM layers it becomes (12, 6).</p>
<p>I don't understand how I should handle the output of the LSTM layer to decrease the number of batches from [2 * batch_size] to [batch_size]. Also, I know I'm using the hidden state as the input to the FC layer, I want to try it this way for now.</p>
<p>Should I sum or concatenate every two batches or anything else?? Cheers!</p>
<pre><code> def forward(self, x):
hidden_0 = torch.zeros((self.lstm_layers, x.size(0), self.hidden_size), dtype=torch.double, device=self.device)
cell_0 = torch.zeros((self.lstm_layers, x.size(0), self.hidden_size), dtype=torch.double, device=self.device)
y1, (hidden_1, cell_1) = self.lstm(x, (hidden_0, cell_0))
hidden_1 = hidden_1.view(-1, self.hidden_size)
y = self.linear(hidden_1)
return y
</code></pre>
|
<p>The hidden state shape of a multi-layer LSTM is <code>(layers, batch_size, hidden_size)</code>; <a href="https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html" rel="nofollow noreferrer">see the LSTM outputs</a>. It contains the hidden state for each layer along the 0th dimension.</p>
<p>In your example you convert the shape into two dimensions here:</p>
<pre><code>hidden_1 = hidden_1.view(-1, self.hidden_size)
</code></pre>
<p>this transforms the shape into <code>(batch_size * layers, hidden_size)</code>.</p>
<p>What you would want to do is only use the hidden state of the last layer:</p>
<pre><code>hidden = hidden_1[-1,:,:].view(-1, self.hidden_size) # (1, bs, hidden) -> (bs, hidden)
y = self.linear(hidden)
return y
</code></pre>
|
python|pytorch|time-series|lstm|sequence
| 1
|
375,532
| 71,460,510
|
What is TensorFlow's row reduction algorithm?
|
<p>I'm wondering what TensorFlow uses to perform row reduction. Specifically, when I call <code>tf.linalg.inv</code>, what algorithm runs? TensorFlow is open source so I figured it would be easy enough to find, but I find myself a little lost in the code base. If I could just get a pointer to the implementation of the aforementioned function, that would be great. If there is a name for the Gauss-Jordan elimination implementation they used, that would be even better.
<a href="https://github.com/tensorflow/tensorflow" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow</a></p>
|
<p>The op uses LU decomposition with partial pivoting to compute the inverses.
For more insight on the tf.linalg.inv algorithm, please refer to this link: <a href="https://www.tensorflow.org/api_docs/python/tf/linalg/inv" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/linalg/inv</a></p>
<p>If you wish to experiment with something similar please refer to this stackoverflow link <a href="https://stackoverflow.com/questions/28441509/how-to-implement-lu-decomposition-with-partial-pivoting-in-python">here</a></p>
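<p>To experiment with the idea locally, here is a small illustrative sketch of matrix inversion via Gauss-Jordan elimination with partial pivoting (a close relative of the LU approach the op uses; this is <em>not</em> TensorFlow's actual kernel, and the helper name is made up):</p>

```python
import numpy as np

def inv_partial_pivot(a):
    """Invert a square matrix by Gauss-Jordan elimination with partial pivoting."""
    a = a.astype(float)
    n = a.shape[0]
    aug = np.hstack([a, np.eye(n)])        # augmented matrix [A | I]
    for col in range(n):
        # partial pivoting: bring the largest |entry| in this column up
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        aug[[col, pivot]] = aug[[pivot, col]]
        aug[col] /= aug[col, col]          # scale pivot row to make pivot 1
        for row in range(n):               # eliminate this column elsewhere
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                      # right half is now A**-1

a = np.array([[4.0, 3.0], [6.0, 3.0]])
print(np.allclose(inv_partial_pivot(a) @ a, np.eye(2)))  # True
```

<p>Swapping to the row with the largest pivot keeps the division numerically stable, which is the same reason the LU kernel pivots.</p>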
|
algorithm|tensorflow|open-source
| 0
|
375,533
| 71,570,565
|
Python apply element-wise operation between two series where every element is a list
|
<p>I have to series, where every element of the series is a list:</p>
<pre><code>s1 = pd.Series([[1,2,3],[4,5,6],[7,8,9]])
s2 = pd.Series([[1,2,3],[1,1,1],[7,8,8]])
</code></pre>
<p>And I want to calculate element-wise <code>sklearn.metrics.mean_squared_error</code>, so I will get:</p>
<pre><code>[0, 16.666, 0.33]
</code></pre>
<p>What is the best way to do it?</p>
<p>Thanks</p>
|
<p>First of all, you can't construct the Series like that, it will throw an error. What you probably meant was this:</p>
<pre><code>s1 = pd.Series([[1,2,3],[4,5,6],[7,8,9]])
s2 = pd.Series([[1,2,3],[1,1,1],[7,8,8]])
</code></pre>
<p>With these Series, you have a few options. You can use <code>zip</code> to create an object in which the corresponding elements are chunked together and use <code>map</code> to apply the function to these chunks. The <code>*</code> is needed to pass each chunk as two separate arguments:</p>
<pre><code>from sklearn.metrics import mean_squared_error
list(map(lambda x: mean_squared_error(*x), zip(s1, s2)))
</code></pre>
<p>Or, more simply, you can use a list comprehension to loop over the elements (again using <code>zip</code>):</p>
<pre><code>[mean_squared_error(x, y) for x, y in zip(s1, s2)]
</code></pre>
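<p>If all the inner lists have the same length, a vectorized NumPy variant (no scikit-learn needed) computes the same row-wise MSE in one shot:</p>

```python
import numpy as np
import pandas as pd

s1 = pd.Series([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
s2 = pd.Series([[1, 2, 3], [1, 1, 1], [7, 8, 8]])

# stack the lists into 2-D arrays and average the squared differences per row
a, b = np.array(s1.tolist()), np.array(s2.tolist())
mse = ((a - b) ** 2).mean(axis=1)
print(mse)
```

<p>This avoids the Python-level loop entirely, which matters once the Series get long.</p>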
|
python|pandas|dataframe|data-science|series
| 1
|
375,534
| 71,486,368
|
Build a pandas Dataframe from multiple "Counter" Collection objects
|
<p>I am working with DNA sequence data, and I would like to count the frequency of each letter (A, C, G, T) in each sequence in my dataset.</p>
<p>To do so, I have tried the following using the <code>Counter</code> class from the <code>collections</code> package, with good results:</p>
<pre><code>df = []
for seq in pseudomona.sequence_DNA:
    df.append(Counter(seq))
</code></pre>
<p>which gives:</p>
<pre><code>[Counter({'C': 2156779, 'A': 1091782, 'G': 2143630, 'T': 1090617}),
Counter({'T': 1050880, 'G': 2083283, 'C': 2101448, 'A': 1055877}),
Counter({'C': 2180966, 'A': 1111267, 'G': 2176873, 'T': 1108010}),
Counter({'C': 2196325, 'G': 2204478, 'A': 1128017, 'T': 1123038}),
Counter({'T': 1117153, 'C': 2176409, 'A': 1115003, 'G': 2194606}),
Counter({'G': 2054304, 'A': 1026830, 'T': 1044090, 'C': 2020029})]
</code></pre>
<p>However, I do obtain a list of Counter instances (sorry if that's not the right terminology) and I would like to have a sorted data frame with those frequencies like, for instance:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>C</th>
<th>G</th>
<th>T</th>
</tr>
</thead>
<tbody>
<tr>
<td>2237</td>
<td>4415</td>
<td>124</td>
<td>324</td>
</tr>
<tr>
<td>4565</td>
<td>8567</td>
<td>3776</td>
<td>623</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried to convert it into a list of lists, but then I cannot figure out how to transform it into a pandas DataFrame:</p>
<pre><code>[list(items.items()) for items in df]
[[('C', 2156779), ('A', 1091782), ('G', 2143630), ('T', 1090617)],
[('T', 1050880), ('G', 2083283), ('C', 2101448), ('A', 1055877)],
[('C', 2180966), ('A', 1111267), ('G', 2176873), ('T', 1108010)],
[('C', 2196325), ('G', 2204478), ('A', 1128017), ('T', 1123038)],
[('T', 1117153), ('C', 2176409), ('A', 1115003), ('G', 2194606)],
[('G', 2054304), ('A', 1026830), ('T', 1044090), ('C', 2020029)]]
</code></pre>
<p>It might be something foolish, but I can't figure out how to do it properly. Hope someone has the right clue! :)</p>
|
<p>Make a Series out of each counter, use <code>pd.concat</code> with <code>axis=1</code>, and transpose:</p>
<pre><code>df = pd.concat([pd.Series(c) for c in l], axis=1).T
</code></pre>
<p>Output:</p>
<pre><code>>>> df
C A G T
0 2156779 1091782 2143630 1090617
1 2101448 1055877 2083283 1050880
2 2180966 1111267 2176873 1108010
3 2196325 1128017 2204478 1123038
4 2176409 1115003 2194606 1117153
5 2020029 1026830 2054304 1044090
</code></pre>
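<p>As a side note, since <code>Counter</code> is dict-like, the <code>DataFrame</code> constructor also accepts the list of counters directly; a small sketch with made-up sequences:</p>

```python
from collections import Counter
import pandas as pd

counts = [Counter("ACGTAC"), Counter("GGTTAA")]

# each Counter becomes one row; letters missing from a row come out as NaN
df = pd.DataFrame(counts).fillna(0).astype(int)
print(df)
```

<p>Missing letters need the <code>fillna(0)</code> because a Counter simply omits keys it never saw.</p>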
|
python|pandas|dataframe|counter
| 2
|
375,535
| 71,634,307
|
Why does it shows "fatal: Too many arguments."?
|
<p>I am trying to clone the repo into my folder but I am getting this error. Although I succeeded in creating the labelimg folder inside the folder named Tensorflow, it then gives me <strong>fatal: Too many arguments.</strong> instead of cloning the repo.</p>
<pre><code>LABELIMG_PATH = os.path.join('Tensorflow', 'labelimg')
if not os.path.exists(LABELIMG_PATH):
!mkdir {LABELIMG_PATH}
!git clone https: // github.com/tzutalin/labelImg {LABELIMG_PATH}
</code></pre>
<p>I am trying to clone the repo using</p>
<pre><code>!git clone https: // github.com/tzutalin/labelImg {LABELIMG_PATH}
</code></pre>
<p>I cannot do much to solve this as I am a beginner at this.</p>
|
<p>I don't know that syntax for executing a shell command from Python (it looks like Jupyter's <code>!</code> shell escape).</p>
<p>But the URL should definitely be ONE argument, not three:</p>
<pre><code>https: // github.com/tzutalin/labelImg // not good
https://github.com/tzutalin/labelImg // should be better :-)
</code></pre>
|
python-3.x|git|tensorflow|jupyter-notebook
| 1
|
375,536
| 71,676,189
|
pandas, update dataframe values with a not in the same format dataframe
|
<p>I have two dataframes. The second dataframe contains the values to be updated in the first dataframe. df1:</p>
<pre><code>data=[[1,"potential"],[2,"lost"],[3,"at risk"],[4,"promising"]]
df=pd.DataFrame(data,columns=['id','class'])
id class
1 potential
2 lost
3 at risk
4 promising
</code></pre>
<p>df2:</p>
<pre><code>data2=[[2,"new"],[4,"loyal"]]
df2=pd.DataFrame(data2,columns=['id','class'])
id class
2 new
4 loyal
</code></pre>
<p>expected output:</p>
<pre><code>data3=[[1,"potential"],[2,"new"],[3,"at risk"],[4,"loyal"]]
df3=pd.DataFrame(data3,columns=['id','class'])
id class
1 potential
2 new
3 at risk
4 loyal
</code></pre>
<p>The code below seems to be working, but I believe there is a more effective solution.</p>
<pre><code>final=df.append([df2])
final = final.drop_duplicates(subset='id', keep="last")
</code></pre>
<p>Addition:
is there a way for me to write the previous value in a new column?
Like this:</p>
<pre><code>id class prev_class modified date
1 potential nan nan
2 new lost 2022.xx.xx
3 at risk nan nan
4 loyal promising 2022.xx.xx
</code></pre>
|
<p>We can use <code>DataFrame.update</code></p>
<pre><code>df = df.set_index('id')
df.update(df2.set_index('id'))
df = df.reset_index()
</code></pre>
<p>Result</p>
<pre><code>print(df)
id class
0 1 potential
1 2 new
2 3 at risk
3 4 loyal
</code></pre>
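<p>For the addition about keeping the previous value, one possible sketch (the merge-based approach is an assumption, not the only way; column names follow the question, and today's date stands in for the real modification timestamp) is to merge the update back in and compare:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, "potential"], [2, "lost"], [3, "at risk"], [4, "promising"]],
                  columns=["id", "class"])
df2 = pd.DataFrame([[2, "new"], [4, "loyal"]], columns=["id", "class"])

# left-join the updates: the old class gets the "_prev" suffix
out = df.merge(df2, on="id", how="left", suffixes=("_prev", ""))
changed = out["class"].notna()                        # rows that received an update
out["prev_class"] = out["class_prev"].where(changed)  # keep old value only if updated
out["modified date"] = pd.Timestamp.today().normalize()  # today as a stand-in
out.loc[~changed, "modified date"] = pd.NaT
out["class"] = out["class"].fillna(out["class_prev"]) # unchanged rows keep old class
out = out.drop(columns="class_prev")
print(out)
```

<p>Rows without an update keep their original class and get <code>NaN</code>/<code>NaT</code> in the two new columns, matching the expected table.</p>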
|
python|pandas|dataframe
| 1
|
375,537
| 71,638,571
|
Extract a subset given two dates from a python dataframe with timezone date format
|
<p>I have the following dataframe:</p>
<pre><code>| ID | date |
|---------------------|--------------------------------|
| 1 | 2022-02-03 22:01:12+01:00 |
| 2 | 2022-02-04 21:11:21+01:00 |
| 3 | 2022-02-05 11:11:21+01:00 |
| 4 | 2022-02-07 23:01:12+01:00 |
| 5 | 2022-02-07 14:31:14+02:00 |
| 6 | 2022-02-08 18:12:01+02:00 |
| 7 | 2022-02-09 20:21:02+02:00 |
| 8 | 2022-02-11 15:41:25+02:00 |
| 9 | 2022-02-15 11:21:27+02:00 |
</code></pre>
<p>I have made a function that, given two dates in the following format (YYYY-MM-DD HH:MM:SS), obtains the subset of data between that interval. The code is as follows:</p>
<pre><code># Selects a subset of the dataset from a given time interval
def select_interval(df, start_date, end_date):
# Confirm the given format and convert to datetime
start_date = pd.to_datetime(start_date, format='%Y-%m-%d %H:%M:%S')
end_date = pd.to_datetime(end_date, format='%Y-%m-%d %H:%M:%S')
# Create a copy of the original df
subset = df.copy()
# Creates a temporary column to store the values related to the specific date
subset['tmp_date'] = subset['date'].apply(lambda a: pd.to_datetime(str(a.date()) + " " + str(a.time())))
if start_date < end_date:
mask = (subset['tmp_date'] >= start_date) & (subset['tmp_date'] <= end_date)
df = df.loc[mask]
return df
</code></pre>
<p>I need to make the additional column constructed from the date and time because if I directly compare the dates passed by parameter with the values of the date column (which contain the timezone) it gives the following error: <strong>TypeError: can't compare offset-naive and offset-aware datetimes</strong></p>
<p>I would like to know if there is a way to solve this problem in a more optimal way, because I think that creating the <code>tmp_date</code> column makes my function less efficient. Thank you for your help.</p>
|
<p>You can change <code>start_date</code> and <code>end_date</code> to timezone-aware values before passing the parameters to the function, as below.</p>
<pre><code>import pytz
start_date = pytz.utc.localize(start_date)
end_date = pytz.utc.localize(end_date)
</code></pre>
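<p>A pandas-only variant of the same idea (the timezone name <code>Europe/Madrid</code> and the sample rows are assumptions): localize the naive bounds once with <code>tz_localize</code> and compare them directly against the aware column, with no temporary column:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3],
    "date": pd.to_datetime(["2022-02-03 22:01:12+01:00",
                            "2022-02-05 11:11:21+01:00",
                            "2022-02-07 23:01:12+01:00"]),
})

def select_interval(df, start, end, tz="Europe/Madrid"):
    # naive -> aware; two aware timestamps compare fine across offsets
    start = pd.to_datetime(start).tz_localize(tz)
    end = pd.to_datetime(end).tz_localize(tz)
    return df[(df["date"] >= start) & (df["date"] <= end)]

out = select_interval(df, "2022-02-04 00:00:00", "2022-02-06 00:00:00")
print(out)
```

<p>Since both sides of the comparison are timezone-aware, the <code>TypeError</code> about offset-naive vs offset-aware datetimes goes away, and no per-row <code>apply</code> is needed.</p>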
|
python|pandas|datetime
| 0
|
375,538
| 71,722,041
|
Different array dimensions causing failure to merge two images into one
|
<p><a href="https://i.stack.imgur.com/0w1TF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0w1TF.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/ICQ8A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ICQ8A.png" alt="enter image description here" /></a></p>
<p>When trying to join two images to create one:</p>
<pre><code>img3 = imread('image_home.png')
img4 = imread('image_away.png')
result = np.hstack((img3,img4))
imwrite('Home_vs_Away.png', result)
</code></pre>
<p>This error sometimes appears:</p>
<pre><code>all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 192 and the array at index 1 has size 191
</code></pre>
<p>How should I proceed to generate the image when there is this difference in array size when <code>np.hstack</code> does not work?</p>
<p>Note:<br />
I use several images, so the largest image is not always the first one nor always the second; it can be quite random which of the two is the smaller or larger.</p>
|
<p>You can manually add a row/column with a color of your choice to match the shapes. Or you can simply let cv2.resize handle the resizing for you. In this code I show how to use both methods.</p>
<pre><code>import numpy as np
import cv2
img1 = cv2.imread("image_home.png")
img2 = cv2.imread("image_away.png")
# Method 1 (add a column and a row to the smallest image)
padded_img = np.ones(img1.shape, dtype="uint8")
color = np.array(img2[-1, -1]) # take the border color
padded_img[:-1, :-1, :] = img2
padded_img[-1, :, :] = color
padded_img[:, -1, :] = color
# Method 2 (let OpenCV handle the resizing)
padded_img = cv2.resize(img2, img1.shape[:2][::-1])
result = np.hstack((img1, padded_img))
cv2.imwrite("Home_vs_Away.png", result)
</code></pre>
|
python|arrays|numpy
| 2
|
375,539
| 71,624,121
|
Remove rows from DataFrame when X, Y coordinates are within a threshold distance of another row
|
<p>I'm trying to remove rows from a DataFrame that are within a Euclidean distance threshold of other points listed in the DataFrame. So for example, in the small DataFrame provided below, two rows would be removed if a <code>threshold</code> value was set equal to 0.001 (1 mm: <code>thresh = 0.001</code>), where <code>X</code> and <code>Y</code> are spatial coordinates:</p>
<pre><code>import pandas as pd
data = {'X': [0.075, 0.0791667,0.0749543,0.0791184,0.075,0.0833333, 0.0749543],
'Y': [1e-15, 0,-0.00261746,-0.00276288, -1e-15,0,-0.00261756],
'T': [12.57,12.302,12.56,12.292,12.57,12.052,12.56]}
df = pd.DataFrame(data)
df
# X Y T
# 0 0.075000 1.000000e-15 12.570
# 1 0.079167 0.000000e+00 12.302
# 2 0.074954 -2.617460e-03 12.560
# 3 0.079118 -2.762880e-03 12.292
# 4 0.075000 -1.000000e-15 12.570
# 5 0.083333 0.000000e+00 12.052
# 6 0.074954 -2.617560e-03 12.560
</code></pre>
<p>The rows with indices 4 and 6 need to be removed because they are spatial duplicates of rows 0 and 2, respectively, since they are within the specified threshold distance of previously listed points. Also, I always want to remove the 2nd occurrence of a point that is within the threshold distance of a previous point. What's the best way to approach this?</p>
|
<p>Let's try it with this one. Calculate the Euclidean distance for each pair of (X,Y), which creates a symmetric matrix. Then mask the upper half; then for the lower half, filter out the rows where there is a value less than <code>thresh</code>:</p>
<pre><code>import numpy as np
m = np.tril(np.sqrt(np.power(df[['X']].to_numpy() - df['X'].to_numpy(), 2) +
np.power(df[['Y']].to_numpy() - df['Y'].to_numpy(), 2)))
m[np.triu_indices(m.shape[0])] = np.nan
out = df[~np.any(m < thresh, axis=1)]
</code></pre>
<p>We could also write it a bit more concisely and legibly (taking a leaf out of <a href="https://stackoverflow.com/a/71624243/17521785">@BENY's elegant solution</a>) by using <code>k</code> parameter in <code>numpy.tril</code> to directly mask the upper half of the symmetric matrix:</p>
<pre><code>distances = np.sqrt(np.sum([(df[[c]].to_numpy() - df[c].to_numpy())**2
for c in ('X','Y')], axis=0))
msk = np.tril(distances < thresh, k=-1).any(axis=1)
out = df[~msk]
</code></pre>
<p>Output:</p>
<pre><code> X Y T
0 0.075000 1.000000e-15 12.570
1 0.079167 0.000000e+00 12.302
2 0.074954 -2.617460e-03 12.560
3 0.079118 -2.762880e-03 12.292
5 0.083333 0.000000e+00 12.052
</code></pre>
|
python|pandas|dataframe
| 2
|
375,540
| 71,731,100
|
Pandas Groupby Syntax explanation
|
<p>I am confused about why a Pandas groupby operation can be written both of the ways below and yield the same result. The specific code is not really the question; both give the same result. I would like someone to break down the syntax of both.</p>
<pre><code>df.groupby(['gender'])['age'].mean()
df.groupby(['gender']).mean()['age']
</code></pre>
<p>In the first instance, it reads as if you are calling the .mean() function on the age column specifically. The second looks like you are calling .mean() on the whole groupby object and selecting the age column after. Are there runtime considerations?</p>
|
<blockquote>
<p>It reads as if you are calling the <code>.mean()</code> function on the age column specifically. The second appears like you are calling <code>.mean()</code> on the whole groupby object and selecting the age column after?</p>
</blockquote>
<p>This is exactly what's happening. <code>df.groupby()</code> returns a <code>DataFrameGroupBy</code> object. The <code>.mean()</code> method is applied column-wise by default, so the mean of each column is calculated independently of the other columns; run on the whole groupby object, the results come back as a <code>DataFrame</code> (one row per group) from which you can then select the <code>age</code> column.</p>
<p>Reversing the order produces a single column as a <code>Series</code> and then calculates the mean. If you know you only want the mean for a single column, it will be faster to isolate that first, rather than calculate the mean for every column (especially if you have a very large dataframe).</p>
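<p>A quick sketch (with made-up data) confirming both spellings produce the same numbers:</p>

```python
import pandas as pd

# Toy data, purely for illustration
df = pd.DataFrame({'gender': ['F', 'M', 'F', 'M'],
                   'age': [30, 40, 50, 60],
                   'height': [160, 175, 165, 180]})

a = df.groupby(['gender'])['age'].mean()   # select column first, then reduce
b = df.groupby(['gender']).mean()['age']   # reduce every column, then select
print(a.equals(b))  # True
```

<p>The second form also had to compute the mean of <code>height</code> before throwing it away, which is the runtime cost mentioned above.</p>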
|
python|pandas|syntax|pandas-groupby
| 2
|
375,541
| 71,512,400
|
sentiment analysis of a dataframe
|
<p>I have a project that involves determining the sentiment of a text based on its adjectives. The dataframe to be used is the adjectives column, which I derived like so:</p>
<pre><code>def getAdjectives(text):
    blob = TextBlob(text)
    return [word for (word, tag) in blob.tags if tag == "JJ"]

dataset['adjectives'] = dataset['text'].apply(getAdjectives)
</code></pre>
<p>I obtained the dataframe from a json file using this code:</p>
<pre><code>with open('reviews.json') as project_file:
    data = json.load(project_file)

dataset = pd.json_normalize(data)
print(dataset.head())
</code></pre>
<p>I have done the sentiment analysis for the dataframe using this code:</p>
<pre><code>dataset[['polarity', 'subjectivity']] = dataset['text'].apply(lambda text: pd.Series(TextBlob(text).sentiment))
print(dataset[['adjectives', 'polarity']])
</code></pre>
<p>this is the output:</p>
<pre><code> adjectives polarity
0 [] 0.333333
1 [right, mad, full, full, iPad, iPad, bad, diff... 0.209881
2 [stop, great, awesome] 0.633333
3 [awesome] 0.437143
4 [max, high, high, Gorgeous] 0.398333
5 [decent, easy] 0.466667
6 [it’s, bright, wonderful, amazing, full, few... 0.265146
7 [same, same] 0.000000
8 [old, little, Easy, daily, that’s, late] 0.161979
9 [few, huge, storage.If, few] 0.084762
</code></pre>
<p>The code works, except that I want it to output the polarity of each adjective alongside the adjective itself, for example right, 0.00127, mad, -0.9888, even though they are in the same row of the dataframe.</p>
|
<p>Try this:</p>
<pre><code>dataset = dataset.explode("adjectives")
</code></pre>
<p>Note that <code>[]</code> will result in a <code>np.NaN</code> row which you might want to remove beforehand/afterwards.</p>
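<p>For instance, on a toy frame shaped like the one above (the per-word polarity step is left as a comment since TextBlob needs its corpora downloaded):</p>

```python
import pandas as pd

# Toy stand-in for the 'adjectives'/'polarity' frame above
dataset = pd.DataFrame({'adjectives': [[], ['right', 'mad'], ['great']],
                        'polarity': [0.33, 0.21, 0.63]})

# explode gives one row per adjective; the empty list becomes a NaN row
exploded = dataset.explode('adjectives').dropna(subset=['adjectives'])
print(exploded['adjectives'].tolist())  # ['right', 'mad', 'great']

# Per-word polarity could then be added with something like:
# exploded['word_polarity'] = exploded['adjectives'].map(
#     lambda w: TextBlob(w).sentiment.polarity)
```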
|
python|pandas
| 0
|
375,542
| 71,513,521
|
How to get reverse diagonal from a certain point in a 2d numpy array
|
<p>Let's say I have a n x m numpy array. For example:</p>
<pre><code>array([[ 1, 2, 3, 4, 5],
[ 6, 7, 8, 9, 10],
[11, 12, 13, 14, 15],
[16, 17, 18, 19, 20]])
</code></pre>
<p>Now I want both diagonals that intersect at a certain point (for example (1,2), which is 8). I already know that I can get the diagonal from top to bottom like so:</p>
<pre><code>row = 1
col = 2
a = np.array(
[[1, 2, 3, 4, 5], [6, 7, 8, 9, 10], [11, 12, 13, 14, 15], [16, 17, 18, 19, 20]]
)
diagonal_1 = a.diagonal(col - row)
</code></pre>
<p>Where the result of <code>col - row</code> gives the offset for the diagonal.</p>
<p>Now I want to also have the reverse diagonal (from bottom to top) intersecting with the first diagonal in a random point (in this case (1,2), but it can be any point). But for this example it would be:</p>
<pre><code>[16, 12, 8, 4]
</code></pre>
<p>I already tried a bunch with rotating and flipping the matrix. But I can't get a hold on the offset which I should use after rotating or flipping the matrix.</p>
|
<p>You can use <code>np.eye</code> to create a diagonal line of 1's, and use that as a mask:</p>
<pre><code>x, y = np.nonzero(a == 8)
k = y[0] - a.shape[0] + x[0] + 1
nums = a[np.eye(*a.shape, k=k)[::-1].astype(bool)][::-1]
</code></pre>
<p>Output:</p>
<pre><code>>>> nums
array([16, 12, 8, 4])
</code></pre>
<p>If you need to move the position of the line, increment/decrement the <code>k</code> parameter passed to <code>np.eye</code>.</p>
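<p>An alternative sketch, since the question mentions trying to flip the matrix: <code>np.fliplr</code> turns the anti-diagonal into an ordinary diagonal, and for an n x m array the offset works out to <code>(m - 1 - col) - row</code>:</p>

```python
import numpy as np

a = np.array([[ 1,  2,  3,  4,  5],
              [ 6,  7,  8,  9, 10],
              [11, 12, 13, 14, 15],
              [16, 17, 18, 19, 20]])
row, col = 1, 2

# Flipping left-right maps column col to column m-1-col, so the
# anti-diagonal through (row, col) becomes diagonal offset (m-1-col) - row
m = a.shape[1]
anti = np.fliplr(a).diagonal(m - 1 - col - row)[::-1]
print(anti)  # [16 12  8  4]
```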
|
python|numpy
| 1
|
375,543
| 71,580,656
|
How to merge the results of an API call inside a loop to a pandas data frame?
|
<p>Hi apologies for the noob question...</p>
<p>I have written some code:</p>
<pre><code>with RESTClient(key) as client:
    from_ = "2020-01-09"
    to = "2021-01-10"
    for i in all_tickers:
        ticker = i['ticker']
        r = client.stocks_equities_aggregates(ticker, 1, "day", from_, to, unadjusted=False)
        print(f"Daily aggregates for {r.ticker} between {from_} and {to}.")
        try:
            df = pd.DataFrame(r.results, columns=["t", "v", "vw", "o", "c", "h", "l", "n"])
            df['ticker'] = ticker
            df = df.append(df)
        except:
            print('nothing')
</code></pre>
<p>Which outputs:</p>
<pre><code>Minute aggregates for A between 2022-01-01 and 2022-02-01.
{'v': 1606323.0, 'vw': 155.8021, 'o': 159, 'c': 156.48, 'h': 159.44, 'l': 153.93, 't': 1641186000000, 'n': 24318}
{'v': 2233958.0, 'vw': 151.518, 'o': 155.49, 'c': 151.19, 'h': 155.63, 'l': 149.7, 't': 1641272400000, 'n': 34707}
{'v': 2370529.0, 'vw': 149.9716, 'o': 150.83, 'c': 148.6, 'h': 153.1, 'l': 148.53, 't': 1641358800000, 'n': 27421}
{'v': 2298277.0, 'vw': 148.4397, 'o': 148.85, 'c': 149.12, 'h': 149.96, 'l': 145.58, 't': 1641445200000, 'n': 34441}
{'v': 2058658.0, 'vw': 146.4352, 'o': 149.12, 'c': 145.15, 'h': 149.73, 'l': 145.09, 't': 1641531600000, 'n': 28611}
{'v': 2548145.0, 'vw': 143.2162, 'o': 143.29, 'c': 145.16, 'h': 145.31, 'l': 140.86, 't': 1641790800000, 'n': 37241}
{'v': 2194208.0, 'vw': 146.0091, 'o': 145, 'c': 146.64, 'h': 146.94, 'l': 143.81, 't': 1641877200000, 'n': 22781}
{'v': 2250847.0, 'vw': 149.3025, 'o': 147.8, 'c': 149.51, 'h': 150.39, 'l': 147.55, 't': 1641963600000, 'n': 27392}
{'v': 1741764.0, 'vw': 145.7333, 'o': 149.46, 'c': 145.17, 'h': 149.54, 'l': 144.85, 't': 1642050000000, 'n': 23437}
{'v': 2225442.0, 'vw': 143.9446, 'o': 144.04, 'c': 144.68, 'h': 145.15, 'l': 142.36, 't': 1642136400000, 'n': 28295}
{'v': 1907368.0, 'vw': 141.2762, 'o': 142.42, 'c': 140.47, 'h': 143.24, 'l': 140.34, 't': 1642482000000, 'n': 27031}
{'v': 1472206.0, 'vw': 141.538, 'o': 140.67, 'c': 140.43, 'h': 143.6, 'l': 140.26, 't': 1642568400000, 'n': 23595}
{'v': 1861384.0, 'vw': 140.9367, 'o': 141.38, 'c': 139.48, 'h': 143.14, 'l': 139.05, 't': 1642654800000, 'n': 26612}
{'v': 1878663.0, 'vw': 138.4591, 'o': 139.54, 'c': 137.51, 'h': 140.49, 'l': 137.49, 't': 1642741200000, 'n': 27133}
{'v': 2155299.0, 'vw': 135.3192, 'o': 136.38, 'c': 138.12, 'h': 138.49, 'l': 131.28, 't': 1643000400000, 'n': 32745}
{'v': 1705313.0, 'vw': 134.473, 'o': 135.36, 'c': 134.57, 'h': 136.62, 'l': 132.65, 't': 1643086800000, 'n': 24457}
{'v': 1999575.0, 'vw': 134.5836, 'o': 135.54, 'c': 133.51, 'h': 138.0454, 'l': 132.27, 't': 1643173200000, 'n': 28088}
{'v': 1715819.0, 'vw': 133.1775, 'o': 135.28, 'c': 132.09, 'h': 136.36, 'l': 131.68, 't': 1643259600000, 'n': 25556}
{'v': 2174805.0, 'vw': 135.3363, 'o': 133, 'c': 137.06, 'h': 137.4, 'l': 131.215, 't': 1643346000000, 'n': 21446}
{'v': 1702950.0, 'vw': 138.8672, 'o': 137.32, 'c': 139.32, 'h': 139.47, 'l': 136.9729, 't': 1643605200000, 'n': 21984}
{'v': 1655987.0, 'vw': 140.2601, 'o': 140.53, 'c': 141.03, 'h': 141.27, 'l': 138.45, 't': 1643691600000, 'n': 25755}
Minute aggregates for AA between 2022-01-01 and 2022-02-01.
{'v': 6208442.0, 'vw': 60.9882, 'o': 60.24, 'c': 60.36, 'h': 62.61, 'l': 60.09, 't': 1641186000000, 'n': 49209}
{'v': 7943653.0, 'vw': 58.135, 'o': 60.68, 'c': 57.53, 'h': 61.15, 'l': 57.21, 't': 1641272400000, 'n': 59085}
{'v': 7599751.0, 'vw': 60.0291, 'o': 58.95, 'c': 58.55, 'h': 61.79, 'l': 58.445, 't': 1641358800000, 'n': 65793}
{'v': 4363058.0, 'vw': 58.3964, 'o': 58.94, 'c': 58.45, 'h': 59.4911, 'l': 57.25, 't': 1641445200000, 'n': 39017}
{'v': 8071270.0, 'vw': 61.7246, 'o': 60.14, 'c': 62.37, 'h': 62.89, 'l': 59.65, 't': 1641531600000, 'n': 62728}
{'v': 5653472.0, 'vw': 61.268, 'o': 61.62, 'c': 61.54, 'h': 62.71, 'l': 60.441, 't': 1641790800000, 'n': 43504}
{'v': 6003582.0, 'vw': 61.0876, 'o': 60.71, 'c': 62.2, 'h': 62.25, 'l': 59.12, 't': 1641877200000, 'n': 50028}
{'v': 6434989.0, 'vw': 62.1745, 'o': 63.66, 'c': 61.88, 'h': 64.37, 'l': 60.86, 't': 1641963600000, 'n': 53063}
{'v': 5769838.0, 'vw': 61.5961, 'o': 61.75, 'c': 60.51, 'h': 63.26, 'l': 60.37, 't': 1642050000000, 'n': 46975}
{'v': 4397108.0, 'vw': 60.5607, 'o': 60.27, 'c': 61.39, 'h': 61.44, 'l': 59.34, 't': 1642136400000, 'n': 41558}
{'v': 5994091.0, 'vw': 60.0163, 'o': 60.5, 'c': 60.05, 'h': 61.56, 'l': 58.8, 't': 1642482000000, 'n': 55228}
{'v': 7851084.0, 'vw': 60.0468, 'o': 61.39, 'c': 59.63, 'h': 61.93, 'l': 58.885, 't': 1642568400000, 'n': 62775}
{'v': 15925959.0, 'vw': 62.0662, 'o': 62.1, 'c': 61.25, 'h': 64.25, 'l': 59.97, 't': 1642654800000, 'n': 127187}
{'v': 11024982.0, 'vw': 57.5373, 'o': 60.02, 'c': 56.21, 'h': 60.15, 'l': 56.04, 't': 1642741200000, 'n': 99811}
{'v': 9209629.0, 'vw': 56.091, 'o': 53.81, 'c': 58.02, 'h': 58.2, 'l': 53.26, 't': 1643000400000, 'n': 83494}
{'v': 7780587.0, 'vw': 59.9004, 'o': 57.51, 'c': 61.21, 'h': 61.6, 'l': 56.7608, 't': 1643086800000, 'n': 66222}
{'v': 9267426.0, 'vw': 61.8703, 'o': 61.54, 'c': 60.75, 'h': 63.64, 'l': 59.88, 't': 1643173200000, 'n': 76224}
{'v': 6445290.0, 'vw': 59.0007, 'o': 60.6, 'c': 58.03, 'h': 61.6599, 'l': 57.47, 't': 1643259600000, 'n': 55506}
{'v': 6987869.0, 'vw': 56.9155, 'o': 58, 'c': 57.4, 'h': 58.39, 'l': 55.58, 't': 1643346000000, 'n': 62130}
{'v': 7206053.0, 'vw': 56.1436, 'o': 56.94, 'c': 56.71, 'h': 57.02, 'l': 55.02, 't': 1643605200000, 'n': 59567}
{'v': 5939348.0, 'vw': 57.6426, 'o': 57.99, 'c': 58.17, 'h': 58.44, 'l': 56.73, 't': 1643691600000, 'n': 55371}
...
</code></pre>
<p>When I call the DF to display it:</p>
<pre><code>df.rename(columns={'t': 'date'}, inplace=True) # why can't I change this above?
df.drop(columns=['vw', 'n'], inplace=True)
df.rename(columns={'v': 'volume', 'o': 'open', 'c': 'close', 'h': 'high', 'l': 'low'}, inplace=True)
df['date'] = pd.to_datetime(df['date'],unit='ms')
df['date'] = df['date'].dt.date
df
</code></pre>
<p>It is only storing the last symbol in the loop:</p>
<pre><code>date volume open close high low ticker
0 2020-01-09 1048301.0 14.84 14.88 15.0400 14.7100 ZUO
1 2020-01-10 1143633.0 14.88 15.03 15.0800 14.7600 ZUO
2 2020-01-13 1609506.0 15.10 15.45 15.5650 14.9300 ZUO
3 2020-01-14 956361.0 15.50 15.34 15.6000 15.2100 ZUO
4 2020-01-15 1152240.0 15.55 15.50 15.9000 15.4600 ZUO
...
</code></pre>
<p>How do I append each bit of information from <code>r.results</code> to a single dataframe?</p>
|
<p>With the line below, you're overwriting the variable <code>df</code> with every loop in your <code>for i in all_tickers</code> for loop:</p>
<pre><code>df = pd.DataFrame(r.results, columns=["t", "v", "vw", "o", "c", "h", "l", "n"])
</code></pre>
<p>It looks like <code>r.results</code> is a list of dictionaries. If so, it would make more sense to create a list of Pandas dataframes, which you can then combine into one at the end. Something like this:</p>
<pre><code>with RESTClient(key) as client:
    from_ = "2020-01-09"
    to = "2021-01-10"
    df_list = []
    for i in all_tickers:
        ticker = i['ticker']
        r = client.stocks_equities_aggregates(ticker, 1, "day", from_, to, unadjusted=False)
        print(f"Daily aggregates for {r.ticker} between {from_} and {to}.")
        try:
            df = pd.DataFrame(r.results, columns=["t", "v", "vw", "o", "c", "h", "l", "n"])
            df['ticker'] = ticker
            df_list.append(df)
        except:
            print('nothing')

combined_df = pd.concat(df_list, ignore_index=True)
</code></pre>
<p>You can then make the final edits afterwards:</p>
<pre><code>## Select just the columns you want (copy to avoid SettingWithCopyWarning)
final_df = combined_df[['t','v','o','c','h','l','ticker']].copy()
## Rename them
final_df.columns = ['date','volume','open','close','high','low','ticker']
## Format the date column
final_df['date'] = pd.to_datetime(final_df['date'],unit='ms').dt.date
</code></pre>
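<p>The list-then-concat pattern itself can be sketched with dummy data in place of the API (the <code>fake_results</code> dict below is purely illustrative):</p>

```python
import pandas as pd

# Hypothetical stand-in for r.results, one entry per ticker
fake_results = {
    'AAA': [{'t': 1641186000000, 'v': 100.0, 'o': 1.0, 'c': 1.1, 'h': 1.2, 'l': 0.9}],
    'BBB': [{'t': 1641186000000, 'v': 200.0, 'o': 2.0, 'c': 2.1, 'h': 2.2, 'l': 1.9}],
}

df_list = []
for ticker, results in fake_results.items():
    df = pd.DataFrame(results)   # a fresh frame per ticker...
    df['ticker'] = ticker
    df_list.append(df)           # ...collected instead of overwritten

combined_df = pd.concat(df_list, ignore_index=True)
print(combined_df['ticker'].tolist())  # ['AAA', 'BBB']
```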
|
python|pandas|jupyter-notebook|polygon.io
| 1
|
375,544
| 71,646,596
|
Compute metrics/loss every n batches Pytorch Lightning
|
<p>I'm trying to use PyTorch Lightning, but I'm not yet clear on all the steps. I want to calculate the train_loss (for example) not only for each step (= batch) but every n batches (e.g. 500), but I'm not sure how to handle it (compute, reset, etc.). I tried the approach below, but it is not working. Can you help me? Thanks.</p>
<pre><code>def training_step(self, batch: tuple, batch_nb: int, *args, **kwargs) -> dict:
    """
    Runs one training step. This usually consists in the forward function followed
    by the loss function.
    :param batch: The output of your dataloader.
    :param batch_nb: Integer displaying which batch this is
    Returns:
        - dictionary containing the loss and the metrics to be added to the lightning logger.
    """
    inputs, targets = batch
    model_out = self.forward(**inputs)
    loss_val = self.loss(model_out, targets)
    y = targets["labels"]
    y_hat = model_out["logits"]
    labels_hat = torch.argmax(y_hat, dim=1)
    val_acc = self.metric_acc(labels_hat, y)
    tqdm_dict = {"train_loss": loss_val, 'batch_nb': batch_nb}
    self.log('train_loss', loss_val, on_step=True, on_epoch=True, prog_bar=True)
    self.log('train_acc', val_acc, on_step=True, prog_bar=True, on_epoch=True)
    # reset the metric to restart accumulating
    self.loss_val_bn = self.loss(model_out, targets)  # accumulate state
    if batch_nb % 500 == 0:
        self.log("x batches test loss_train", self.loss_val_bn.compute(), batch_nb)  # perform a compute every 500 batches
        self.loss_val_bn.reset()
    # output = OrderedDict(
    #     {"loss": loss_val, "progress_bar": tqdm_dict, "log": tqdm_dict})
    # can also return just a scalar instead of a dict (return loss_val)
    # return output
    return loss_val
</code></pre>
|
<ol>
<li>Write your custom logger following (<a href="https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#make-a-custom-logger" rel="nofollow noreferrer">https://pytorch-lightning.readthedocs.io/en/stable/extensions/logging.html#make-a-custom-logger</a>). The one I present here stores the values generated in every step in a dict where each metric name is a key.</li>
</ol>
<pre class="lang-py prettyprint-override"><code># imports assume pytorch-lightning 1.x
import collections

from pytorch_lightning.loggers import LightningLoggerBase
from pytorch_lightning.loggers.base import rank_zero_experiment
from pytorch_lightning.utilities import rank_zero_only


class History_dict(LightningLoggerBase):
    def __init__(self):
        super().__init__()
        # a defaultdict simply creates any item that you try to access
        self.history = collections.defaultdict(list)  # copy not necessary here

    @property
    def name(self):
        return "Logger_custom_plot"

    @property
    def version(self):
        return "1.0"

    @property
    @rank_zero_experiment
    def experiment(self):
        # Return the experiment object associated with this logger.
        pass

    @rank_zero_only
    def log_metrics(self, metrics, step):
        # metrics is a dictionary of metric names and values
        # your code to record metrics goes here
        for metric_name, metric_value in metrics.items():
            if metric_name != 'epoch':
                self.history[metric_name].append(metric_value)
            else:  # epoch case: avoid adding the same value multiple times (happens with multiple losses)
                if (not len(self.history['epoch']) or  # len == 0
                        not self.history['epoch'][-1] == metric_value):  # the last epoch value is not the one being added
                    self.history['epoch'].append(metric_value)
                else:
                    pass
        return

    def log_hyperparams(self, params):
        pass
</code></pre>
<ol start="2">
<li>Make the model reduce the stored metrics every n steps. I assume that taking the mean is how you want to reduce here. Empty the list after reducing.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>class MNISTModel(LightningModule):
    def __init__(self):
        super().__init__()
        self.l1 = torch.nn.Linear(28 * 28, 10)

    def forward(self, x):
        return torch.relu(self.l1(x.view(x.size(0), -1)))

    def training_step(self, batch, batch_nb):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        self.log('loss_epoch', loss, on_step=False, on_epoch=True)
        self.log('loss_step', loss, on_step=True, on_epoch=False)
        print(self.global_step)
        if batch_nb % 50 == 0 and self.global_step != 0:
            step_metrics = self.logger.history['loss_step']
            # I am assuming that the reduction you want over the saved step values is the mean
            reduced = sum(step_metrics) / len(step_metrics)
            print(reduced)
            # Empty the loss list
            self.logger.history['loss_step'] = []
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=0.02)
</code></pre>
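<p>Stripped of the Lightning machinery, the accumulate-and-reduce bookkeeping above boils down to this (plain Python, purely for illustration):</p>

```python
import collections

# Accumulate a value per step; every n steps reduce (mean) and reset,
# mirroring the history['loss_step'] handling in training_step above
history = collections.defaultdict(list)
reduced_values = []
n = 5
for step in range(1, 16):
    history['loss_step'].append(float(step))  # pretend this is the step loss
    if step % n == 0:
        vals = history['loss_step']
        reduced_values.append(sum(vals) / len(vals))
        history['loss_step'] = []  # empty the list after reducing
print(reduced_values)  # [3.0, 8.0, 13.0]
```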
<ol start="3">
<li>Pass this custom logger to your trainer</li>
</ol>
<pre class="lang-py prettyprint-override"><code>hd = History_dict()

# Initialize a trainer
trainer = Trainer(
    accelerator="auto",
    devices=1 if torch.cuda.is_available() else None,  # limiting for iPython runs
    max_epochs=3,
    callbacks=[TQDMProgressBar(refresh_rate=20)],
    log_every_n_steps=10,
    logger=[hd],
)
</code></pre>
<p>You should see the following being printed with this exact configuration:</p>
<pre class="lang-bash prettyprint-override"><code>0
1
2
...
49
50
0.9309097290039062
51
52
...
99
100
0.9098988473415375
101
...
149
150
0.8920584758122762
151
...
199
200
0.8698084503412247
201
...
249
250
0.8622475385665893
251
...
299
300
0.8433656434218089
301
...
349
350
0.8188161773341043
...
</code></pre>
|
python|pytorch|pytorch-lightning
| 1
|
375,545
| 71,545,135
|
How to append rows with concat to a Pandas DataFrame
|
<p>I have defined an empty data frame with:</p>
<pre><code>insert_row = {
"Date": dtStr,
"Index": IndexVal,
"Change": IndexChnge,
}
data = {
"Date": [],
"Index": [],
"Change": [],
}
df = pd.DataFrame(data)
df = df.append(insert_row, ignore_index=True)
df.to_csv(r"C:\Result.csv", index=False)
driver.close()
</code></pre>
<p>But every time I run the code, I get the deprecation warning below telling me not to use <code>df.append</code>:</p>
<p><a href="https://i.stack.imgur.com/2bzpZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2bzpZ.png" alt="enter image description here" /></a></p>
<p>Can anyone suggest how to get rid of this warning by using <code>pandas.concat</code>?</p>
|
<p>Create a dataframe then <code>concat</code>:</p>
<pre><code>insert_row = {
"Date": '2022-03-20',
"Index": 1,
"Change": -2,
}
df = pd.concat([df, pd.DataFrame([insert_row])])
print(df)
# Output
Date Index Change
0 2022-03-20 1.0 -2.0
</code></pre>
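<p>If rows arrive one at a time in a loop, a common pattern is to collect plain dicts and build the frame once at the end, which avoids repeated concatenation entirely (sketch with made-up rows):</p>

```python
import pandas as pd

rows = []
for i in range(3):  # stand-in for whatever loop produces insert_row
    rows.append({'Date': f'2022-03-2{i}', 'Index': i, 'Change': -i})

# One DataFrame construction instead of many appends/concats
df = pd.DataFrame(rows)
print(len(df))  # 3
```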
|
python|pandas|dataframe
| 7
|
375,546
| 71,668,874
|
Protobuf compatibility error when running Kedro pipeline
|
<p>I have a Kedro pipeline that I want to run from a Python script. I think I have the minimum necessary code to do this, but every time I try to run the pipeline through the script, I get a compatibility error regarding the protobuf version; when I run the pipeline from the terminal, it runs without problems. It is important to say that I am running everything inside a Docker container, and the image is based on PyTorch (version 1.9.0 and CUDA 11.1).</p>
<p>This is the code I am using to call the pipeline:</p>
<pre><code>from kedro.framework.context import load_context

class TBE():
    def run_inference(self):
        context = load_context('./')
        output = context.run(pipeline='inf')
        return output
</code></pre>
<p>And here is the error that I get when I run it:</p>
<pre><code>[libprotobuf FATAL google/protobuf/stubs/common.cc:83] This program was compiled against
version 3.9.2 of the Protocol Buffer runtime library, which is not compatible with the
installed version (3.19.4). Contact the program author for an update. If you compiled
the program yourself, make sure that your headers are from the same version of Protocol
Buffers as your link-time library. (Version verification failed in "bazel-out/k8-
opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 3.9.2 of the Protocol Buffer runtime
library, which is not compatible with the installed version (3.19.4). Contact the
program author for an update. If you compiled the program yourself, make sure that your
headers are from the same version of Protocol Buffers as your link-time library.
(Version verification failed in "bazel-out/k8-
opt/bin/tensorflow/core/framework/tensor_shape.pb.cc".)
Aborted
</code></pre>
<p>I have already tried changing the protobuf version, but I cannot find a compatible one. What can I do to solve this problem?</p>
|
<p>I faced a similar problem with kedro. This helped:</p>
<pre><code>pip install --upgrade "protobuf<=3.20.1"
</code></pre>
|
python|docker|tensorflow|protocol-buffers|kedro
| 1
|
375,547
| 71,468,110
|
pandas extract first row column value equal to 1 for each group
|
<p>I have df:</p>
<pre><code>date id label pred
1/1 1 0 0.2
2/1 1 1 0.5
1/1 2 1 0.9
2/1 2 1 0.3
</code></pre>
<p>For each id, I want to get the first row where the label column equals 1. For example, the desired df:</p>
<pre><code>date id label pred
2/1 1 1 0.3
1/1 2 1 0.9
</code></pre>
<p>thx!</p>
|
<p>First filter only rows with <code>label=1</code> and then remove duplicates per <code>id</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html" rel="noreferrer"><code>DataFrame.drop_duplicates</code></a>:</p>
<pre><code>df1 = df[df['label'].eq(1)].drop_duplicates('id')
</code></pre>
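<p>A quick check on the data from the question:</p>

```python
import pandas as pd

df = pd.DataFrame({'date': ['1/1', '2/1', '1/1', '2/1'],
                   'id': [1, 1, 2, 2],
                   'label': [0, 1, 1, 1],
                   'pred': [0.2, 0.5, 0.9, 0.3]})

# Keep rows with label == 1, then keep only the first occurrence per id
df1 = df[df['label'].eq(1)].drop_duplicates('id')
print(df1['id'].tolist())  # [1, 2]
```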
|
python|pandas
| 5
|
375,548
| 71,789,096
|
how to pad sequences in a tensor slice dataset in TensorFlow?
|
<p>I have a tensor slice dataset made from two ragged tensors.</p>
<p>tensor_a is like: <code><tf.RaggedTensor [[3, 3, 5], [3, 3, 14, 4, 17, 20], [3, 14, 22, 17]]></code></p>
<p>tensor_b is like: <code><tf.RaggedTensor [[-1, 1, -1], [-1, -1, 1, -1, -1, -1], [-1, 1, -1, 2]]></code></p>
<p>(Same index, same length for tensor_a and tensor_b.)</p>
<p>I made the dataset by</p>
<pre><code>dataset = tf.data.Dataset.from_tensor_slices((tensor_a, tensor_b))
dataset
<TensorSliceDataset element_spec=(RaggedTensorSpec(TensorShape([None]), tf.int64, 0, tf.int64), RaggedTensorSpec(TensorShape([None]), tf.int32, 0, tf.int64))>
</code></pre>
<p>How to pad the sequences in my dataset? I've tried <code>tf.pad</code> and <code>tf.keras.preprocessing.sequence.pad_sequences</code> but haven't found the right way.</p>
|
<p>You could try something like this:</p>
<pre><code>import tensorflow as tf

tensor_a = tf.ragged.constant([[3, 3, 5], [3, 3, 14, 4, 17, 20], [3, 14, 22, 17]])
tensor_b = tf.ragged.constant([[-1, 1, -1], [-1, -1, 1, -1, -1, -1], [-1, 1, -1, 2]])
dataset = tf.data.Dataset.from_tensor_slices((tensor_a, tensor_b))

max_length = max(list(dataset.map(lambda x, y: tf.shape(x)[0])))

def pad(x, y):
    x = tf.concat([x, tf.zeros((int(max_length - tf.shape(x)[0]),), dtype=tf.int32)], axis=0)
    y = tf.concat([y, tf.zeros((int(max_length - tf.shape(y)[0]),), dtype=tf.int32)], axis=0)
    return x, y

dataset = dataset.map(pad)

for x, y in dataset:
    print(x, y)
</code></pre>
<pre><code>tf.Tensor([3 3 5 0 0 0], shape=(6,), dtype=int32) tf.Tensor([-1 1 -1 0 0 0], shape=(6,), dtype=int32)
tf.Tensor([ 3 3 14 4 17 20], shape=(6,), dtype=int32) tf.Tensor([-1 -1 1 -1 -1 -1], shape=(6,), dtype=int32)
tf.Tensor([ 3 14 22 17 0 0], shape=(6,), dtype=int32) tf.Tensor([-1 1 -1 2 0 0], shape=(6,), dtype=int32)
</code></pre>
<p>For pre-padding, just adjust the <code>pad</code> function:</p>
<pre><code>def pad(x, y):
    x = tf.concat([tf.zeros((int(max_length - tf.shape(x)[0]),), dtype=tf.int32), x], axis=0)
    y = tf.concat([tf.zeros((int(max_length - tf.shape(y)[0]),), dtype=tf.int32), y], axis=0)
    return x, y
</code></pre>
<pre><code>tf.Tensor([0 0 0 3 3 5], shape=(6,), dtype=int32) tf.Tensor([ 0 0 0 -1 1 -1], shape=(6,), dtype=int32)
tf.Tensor([ 3 3 14 4 17 20], shape=(6,), dtype=int32) tf.Tensor([-1 -1 1 -1 -1 -1], shape=(6,), dtype=int32)
tf.Tensor([ 0 0 3 14 22 17], shape=(6,), dtype=int32) tf.Tensor([ 0 0 -1 1 -1 2], shape=(6,), dtype=int32)
</code></pre>
|
python|tensorflow|padding|ragged-tensors
| 0
|
375,549
| 71,481,088
|
How to use rolling function to compare the elements
|
<p>I want to use <code>pandas</code> <code>rolling</code> function to compare whether the first element is smaller than the second one. I think the following codes should work:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(data=np.random.randint(0,10,10), columns=['temperature'])
df.rolling(window=2).apply(lambda x: x[0] < x[1])
</code></pre>
<p>but it does not work. Instead, I got an error message:</p>
<pre><code>ValueError: 0 is not in range
</code></pre>
<p>Does anyone know what caused the issue?</p>
<p><strong>Update:</strong>
I know I can use the <code>diff</code> function, but what I really want to do is something like this</p>
<pre><code>df.rolling(window=3).apply(lambda x: x[0] < x[1] < x[2])
</code></pre>
|
<p>Replacing the x[n] with x.iloc[n] should work (using positional indexing)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(data=np.random.randint(0,10,10), columns=['temperature'])
df['increasing'] = df.rolling(window=2).apply(lambda x: x.iloc[0] < x.iloc[1])
</code></pre>
<pre><code> temperature increasing
0 8 NaN
1 9 1.0
2 0 0.0
3 3 1.0
4 8 1.0
5 7 0.0
6 7 0.0
7 8 1.0
8 7 0.0
9 6 0.0
</code></pre>
<p>Why?</p>
<p>The value of 'x' in your lambda function looks something like this:</p>
<p><strong>first iteration:</strong></p>
<p>index temperature<br />
0 8<br />
1 9</p>
<p><strong>second iteration:</strong></p>
<p>index temperature<br />
1 9<br />
2 0</p>
<p><strong>third iteration:</strong></p>
<p>index temperature<br />
2 0<br />
3 3</p>
<p>The first iteration works because the index 0 and 1 are available (so <code>x[0] < x[1]</code> works fine). However, in the second iteration, the index 0 isn't available and x[0] fails with your ValueError. My solution uses positional indexing (with .iloc) and ignores those index values (see <a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/user_guide/indexing.html</a>).</p>
<p>This is also why your code works fine with two rows e.g.</p>
<pre><code>df = pd.DataFrame(data=np.random.randint(0,10,2), columns=['temperature'])
df.rolling(window=2).apply(lambda x: x[0] < x[1])
</code></pre>
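<p>The same positional-indexing fix carries over to the three-element window from the update (fixed data here so the result is reproducible):</p>

```python
import pandas as pd

df = pd.DataFrame({'temperature': [8, 9, 0, 3, 8, 7, 7, 8, 7, 6]})

# 1.0 where three consecutive values are strictly increasing
df['rising'] = df['temperature'].rolling(window=3).apply(
    lambda x: float(x.iloc[0] < x.iloc[1] < x.iloc[2]))
print(df['rising'].tolist()[2:])  # [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```

<p>The first two entries are NaN, as with any rolling window of size 3.</p>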
|
python|pandas
| 1
|
375,550
| 71,561,374
|
Pandas df get previous row value
|
<p>I have a pandas dataframe with null values:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>fecha</th>
<th>code</th>
<th>Place</th>
<th>dato1</th>
<th>porcentaje_dato1</th>
<th>dato2</th>
<th>dato3</th>
<th>porcentaje_dato3</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>2021-01-04</td>
<td>1</td>
<td>Place1</td>
<td>25809</td>
<td>0.3</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>1</td>
<td>2021-01-04</td>
<td>2</td>
<td>Place2</td>
<td>2004</td>
<td>0.15</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>2</td>
<td>2021-01-04</td>
<td>3</td>
<td>Place3</td>
<td>9380</td>
<td>0.92</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>3</td>
<td>2021-01-04</td>
<td>4</td>
<td>Place4</td>
<td>153</td>
<td>0.01</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>20</td>
<td>2021-01-05</td>
<td>1</td>
<td>Place1</td>
<td>40263</td>
<td>0.47</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>21</td>
<td>2021-01-05</td>
<td>2</td>
<td>Place2</td>
<td>2985</td>
<td>0.22</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>22</td>
<td>2021-01-05</td>
<td>3</td>
<td>Place3</td>
<td>12929</td>
<td>1.27</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>23</td>
<td>2021-01-05</td>
<td>4</td>
<td>Place4</td>
<td>2656</td>
<td>0.22</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>40</td>
<td>2021-01-07</td>
<td>1</td>
<td>Place1</td>
<td>53934</td>
<td>0.64</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>41</td>
<td>2021-01-07</td>
<td>2</td>
<td>Place2</td>
<td>6186</td>
<td>0.46</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>42</td>
<td>2021-01-07</td>
<td>3</td>
<td>Place3</td>
<td>14406</td>
<td>1.42</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>43</td>
<td>2021-01-07</td>
<td>4</td>
<td>Place4</td>
<td>3190</td>
<td>0.26</td>
<td>NaN</td>
<td>NaN</td>
<td>0.0</td>
</tr>
<tr>
<td>1415</td>
<td>2021-04-14</td>
<td>1</td>
<td>Place1</td>
<td>1970183</td>
<td>23.23</td>
<td>1419209.0</td>
<td>550974.0</td>
<td>6.5</td>
</tr>
<tr>
<td>1416</td>
<td>2021-04-14</td>
<td>2</td>
<td>Place2</td>
<td>331419</td>
<td>24.89</td>
<td>228547.0</td>
<td>102872.0</td>
<td>7.73</td>
</tr>
<tr>
<td>1417</td>
<td>2021-04-14</td>
<td>3</td>
<td>Place3</td>
<td>317019</td>
<td>31.22</td>
<td>216006.0</td>
<td>101013.0</td>
<td>9.95</td>
</tr>
<tr>
<td>1418</td>
<td>2021-04-14</td>
<td>4</td>
<td>Place4</td>
<td>233042</td>
<td>19.18</td>
<td>175460.0</td>
<td>57582.0</td>
<td>4.74</td>
</tr>
<tr>
<td>1436</td>
<td>2021-04-15</td>
<td>1</td>
<td>Place1</td>
<td>2041844</td>
<td>24.07</td>
<td>1481837.0</td>
<td>560007.0</td>
<td>6.6</td>
</tr>
<tr>
<td>1437</td>
<td>2021-04-15</td>
<td>2</td>
<td>Place2</td>
<td>347963</td>
<td>26.14</td>
<td>243497.0</td>
<td>104466.0</td>
<td>7.85</td>
</tr>
<tr>
<td>1438</td>
<td>2021-04-15</td>
<td>3</td>
<td>Place3</td>
<td>330038</td>
<td>32.5</td>
<td>225213.0</td>
<td>104825.0</td>
<td>10.32</td>
</tr>
<tr>
<td>1439</td>
<td>2021-04-15</td>
<td>4</td>
<td>Place4</td>
<td>240488</td>
<td>19.79</td>
<td>180775.0</td>
<td>59713.0</td>
<td>4.91</td>
</tr>
</tbody>
</table>
</div>
<p>If the value of dato2 is null, I need to fill it with the dato1 value plus the previous day's dato2 value for the same place. The steps to implement are:</p>
<ul>
<li>first order by place and date</li>
<li>iterate over the dataframe. For each row:
<ul>
<li>Check if it is first row of entire df. If so, dato2 = dato1</li>
<li>check if place has change (if place of actual row is different than place of previous row). Then dato2 = dato1</li>
<li>else: dato2 = dato2 previous row + dato1 actual row</li>
</ul>
</li>
</ul>
<p>The code I have is:</p>
<pre class="lang-py prettyprint-override"><code>df = df.sort_values(by=['place', 'fecha'])
for i, row in df.iterrows():
    if pd.isnull(row['dato2']):
        if i == 0:
            df['dato2'][i] = df['dato1'][i]
        elif df['place'][i] != df['place'][i-1]:
            df['dato2'][i] = df['dato1'][i]
        else:
            df['dato2'][i] = df['dato2'][i-1] + df_vac['dato1'][i]
    else:
        df['dato2'][i]
</code></pre>
<p>But with this code the indexes are not valid.</p>
|
<p>Here's my approach.</p>
<pre class="lang-py prettyprint-override"><code># Sort dataframe
df = (pd.read_csv(data)
        .sort_values(['Place', 'fecha'])
        .reset_index())
# Fill missing values for dato2 with dato1
df['dato2'] = df.dato2.fillna(df.dato1)
# Calculate the aggregate, store in separate df
df_agg = (df[['Place','fecha','dato2']].groupby(['Place','fecha']).sum()
.groupby('Place').cumsum()
.reset_index())
# Update original data
df.update(df_agg)
</code></pre>
<p>Result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>index</th>
<th>fecha</th>
<th>code</th>
<th>Place</th>
<th>dato1</th>
<th>porcentaje_dato1</th>
<th>dato2</th>
<th>dato3</th>
<th>porcentaje_dato3</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>2021-01-04</td>
<td>1</td>
<td>Place1</td>
<td>25809</td>
<td>0.30</td>
<td>25809.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>4</td>
<td>2021-01-05</td>
<td>1</td>
<td>Place1</td>
<td>40263</td>
<td>0.47</td>
<td>66072.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>8</td>
<td>2021-01-07</td>
<td>1</td>
<td>Place1</td>
<td>53934</td>
<td>0.64</td>
<td>120006.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>12</td>
<td>2021-04-14</td>
<td>1</td>
<td>Place1</td>
<td>1970183</td>
<td>23.23</td>
<td>1539215.0</td>
<td>550974.0</td>
<td>6.50</td>
</tr>
<tr>
<td>16</td>
<td>2021-04-15</td>
<td>1</td>
<td>Place1</td>
<td>2041844</td>
<td>24.07</td>
<td>3021052.0</td>
<td>560007.0</td>
<td>6.60</td>
</tr>
<tr>
<td>1</td>
<td>2021-01-04</td>
<td>2</td>
<td>Place2</td>
<td>2004</td>
<td>0.15</td>
<td>2004.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>5</td>
<td>2021-01-05</td>
<td>2</td>
<td>Place2</td>
<td>2985</td>
<td>0.22</td>
<td>4989.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>9</td>
<td>2021-01-07</td>
<td>2</td>
<td>Place2</td>
<td>6186</td>
<td>0.46</td>
<td>11175.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>13</td>
<td>2021-04-14</td>
<td>2</td>
<td>Place2</td>
<td>331419</td>
<td>24.89</td>
<td>239722.0</td>
<td>102872.0</td>
<td>7.73</td>
</tr>
<tr>
<td>17</td>
<td>2021-04-15</td>
<td>2</td>
<td>Place2</td>
<td>347963</td>
<td>26.14</td>
<td>483219.0</td>
<td>104466.0</td>
<td>7.85</td>
</tr>
<tr>
<td>2</td>
<td>2021-01-04</td>
<td>3</td>
<td>Place3</td>
<td>9380</td>
<td>0.92</td>
<td>9380.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>6</td>
<td>2021-01-05</td>
<td>3</td>
<td>Place3</td>
<td>12929</td>
<td>1.27</td>
<td>22309.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>10</td>
<td>2021-01-07</td>
<td>3</td>
<td>Place3</td>
<td>14406</td>
<td>1.42</td>
<td>36715.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>14</td>
<td>2021-04-14</td>
<td>3</td>
<td>Place3</td>
<td>317019</td>
<td>31.22</td>
<td>252721.0</td>
<td>101013.0</td>
<td>9.95</td>
</tr>
<tr>
<td>18</td>
<td>2021-04-15</td>
<td>3</td>
<td>Place3</td>
<td>330038</td>
<td>32.50</td>
<td>477934.0</td>
<td>104825.0</td>
<td>10.32</td>
</tr>
<tr>
<td>3</td>
<td>2021-01-04</td>
<td>4</td>
<td>Place4</td>
<td>153</td>
<td>0.01</td>
<td>153.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>7</td>
<td>2021-01-05</td>
<td>4</td>
<td>Place4</td>
<td>2656</td>
<td>0.22</td>
<td>2809.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>11</td>
<td>2021-01-07</td>
<td>4</td>
<td>Place4</td>
<td>3190</td>
<td>0.26</td>
<td>5999.0</td>
<td>NaN</td>
<td>0.00</td>
</tr>
<tr>
<td>15</td>
<td>2021-04-14</td>
<td>4</td>
<td>Place4</td>
<td>233042</td>
<td>19.18</td>
<td>181459.0</td>
<td>57582.0</td>
<td>4.74</td>
</tr>
<tr>
<td>19</td>
<td>2021-04-15</td>
<td>4</td>
<td>Place4</td>
<td>240488</td>
<td>19.79</td>
<td>362234.0</td>
<td>59713.0</td>
<td>4.91</td>
</tr>
</tbody>
</table>
</div>
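<p>Since each (Place, fecha) pair occurs once, the two groupby steps above collapse to a single per-place cumulative sum. A minimal self-contained sketch, with toy data standing in for the CSV:</p>

```python
import numpy as np
import pandas as pd

# toy data standing in for the CSV in the answer
df = pd.DataFrame({
    'Place': ['A', 'A', 'A', 'B', 'B'],
    'fecha': pd.to_datetime(['2021-01-04', '2021-01-05', '2021-01-07',
                             '2021-01-04', '2021-01-05']),
    'dato1': [10, 20, 30, 5, 15],
    'dato2': [np.nan, np.nan, np.nan, np.nan, np.nan],
})

df = df.sort_values(['Place', 'fecha'])
df['dato2'] = df['dato2'].fillna(df['dato1'])        # missing dato2 -> dato1
df['dato2'] = df.groupby('Place')['dato2'].cumsum()  # running total per place
print(df)
```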
|
python|pandas|dataframe
| 0
|
375,551
| 71,774,433
|
Im trying to plot 2 lines in one graph that exist in the same column using matplotlib
|
<p>The values exist in one column and I would like to display both lines on the same graph rather than on two separate ones: <a href="https://i.stack.imgur.com/g1YB0.png" rel="nofollow noreferrer">https://i.stack.imgur.com/g1YB0.png</a> <a href="https://i.stack.imgur.com/Uyal5.png" rel="nofollow noreferrer">https://i.stack.imgur.com/Uyal5.png</a></p>
|
<pre class="lang-py prettyprint-override"><code>plt.plot(x, y)
plt.plot(x2, y2)
</code></pre>
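<p>If both series live in the same value column, one sketch (assuming a hypothetical grouping column named <code>series</code> distinguishes the two lines) groups the data and plots each group onto the same axes:</p>

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    'x':      [1, 2, 3, 1, 2, 3],
    'series': ['a', 'a', 'a', 'b', 'b', 'b'],  # hypothetical column telling the lines apart
    'y':      [10, 12, 11, 5, 7, 6],
})

fig, ax = plt.subplots()
for name, group in df.groupby('series'):
    ax.plot(group['x'], group['y'], label=name)  # both lines land on the same Axes
ax.legend()
```

Calling <code>plt.show()</code> afterwards displays the single figure with both lines.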
|
python|pandas|matplotlib
| 0
|
375,552
| 71,675,197
|
building mask for 2d array by index
|
<p>Consider the following mask:</p>
<pre class="lang-py prettyprint-override"><code>def maskA(n):
assert((n % 2) == 0)
sample_arr = [False, False]
bool_arr = np.random.choice(sample_arr, size=(n, n))
# print(bool_arr.shape)
for i in range(n):
for j in range(n):
if (i >= n//2) and (j < n//2):
bool_arr[i, j] = True
else:
bool_arr[i, j] = False
for i in range(n):
for j in range(n):
if (i < n//2) and (j >= n//2):
bool_arr[i, j] = True or bool_arr[i, j]
else:
bool_arr[i, j] = False or bool_arr[i, j]
return bool_arr
</code></pre>
<p>which selects elements between clusters (2 x n/2 subnetworks of True elements) in a network.</p>
<pre class="lang-py prettyprint-override"><code>[[False False False True True True]
[False False False True True True]
[False False False True True True]
[ True True True False False False]
[ True True True False False False]
[ True True True False False False]]
</code></pre>
<p>Is it possible to make it better (shorter, cleaner, faster)?</p>
|
<p>You can use indexing to assign values instead of using for-loops.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def maskA(n):
    assert (n % 2) == 0
    bool_arr = np.full((n, n), True)
    bool_arr[:n//2, :n//2] = False  # top-left block
    bool_arr[n//2:, n//2:] = False  # bottom-right block
    return bool_arr

print(maskA(8))
</code></pre>
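<p>An even shorter variant (a sketch) builds the same pattern by broadcasting a boolean vector against itself: element (i, j) is True exactly when i and j fall in different halves:</p>

```python
import numpy as np

def maskA(n):
    assert n % 2 == 0
    half = np.arange(n) < n // 2          # True for indices in the first half
    return half[:, None] ^ half[None, :]  # True where row half and column half differ

print(maskA(6))
```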
|
python|numpy
| 1
|
375,553
| 71,763,939
|
typecast/transform each element of list of lists to appropriate type
|
<p>I am looking for a way to typecast or transform each element of this list of lists, to the appropriate type.</p>
<p>The list needs to be inserted in a SQL database, so every first element of the list of lists might be typecasted to a <code>str</code>, the second to a <code>str</code>, the third to a <code>float</code> etc. (depending on the type of the column)</p>
<pre><code>[["myfirstcolumn", "second", "3", "False", "20200102"],
["myfirstcolumn", "second", "2", "True", "20200101"],
["myfirstcolumn", "second", "1", "False", "20200104"],
["myfirstcolumn", "second", "5", "True", "20200106"],
["myfirstcolumn", "second", "6", "True", "20200107"],
["myfirstcolumn", "second", "7", "True", "20200108"]...]
</code></pre>
<p>I already looped over every element and typecasted each individually, but that takes time if you have like 2.000.000 items. Is there a way to just transform all items based on their position in each list?</p>
<p>So my idea: transform every first element of each list to a <code>str</code> at once, every second to a <code>str</code>, every third to a <code>float</code> etc.</p>
<p>I thought of numpy, but numpy wants every subarray to be a single type. Or is there a way to rotate the numpy array so each subarray becomes a column, transform them, and rotate them back (and transform them back to a list of lists or list of tuples) before insertion to the database???</p>
|
<p>I think you could get it to work faster if you use Pandas:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd

arr = [["myfirstcolumn", "second", "3", "False", "20200102"],
       ["myfirstcolumn", "second", "2", "True", "20200101"],
       ["myfirstcolumn", "second", "1", "False", "20200104"]]

df = pd.DataFrame(arr)
df[2] = df[2].astype("float")
# note: .astype("bool") maps every non-empty string (including "False") to True,
# so compare against the literal instead
df[3] = df[3] == "True"
df[4] = df[4].astype("int")
values = df.values.tolist()
</code></pre>
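<p>If you'd rather stay in plain Python, a per-column converter tuple combined with <code>zip</code> avoids writing the index logic by hand. A sketch — the converter order is an assumption matching the example rows:</p>

```python
rows = [["myfirstcolumn", "second", "3", "False", "20200102"],
        ["myfirstcolumn", "second", "2", "True", "20200101"]]

# one converter per column position; bool("False") is True, so compare explicitly
converters = (str, str, float, lambda s: s == "True", int)

typed = [tuple(conv(val) for conv, val in zip(converters, row)) for row in rows]
print(typed[0])
```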
|
python|algorithm|numpy
| 3
|
375,554
| 71,672,635
|
Automate creating subtables based on values of one column
|
<p>Suppose i have a df with a column X with the following unique values ('A', 'B', 'C')</p>
<p>I want to create a function that will create dataframes containing only the items for such unique value of column X. How best to do this?</p>
<p>I would usually write line of codes by filtering it but I want to know how best to manage this.</p>
|
<p>Try this:</p>
<pre><code>sub_dfs = [df[df['X'] == i] for i in df['X'].unique()]
</code></pre>
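<p>An alternative sketch using <code>groupby</code>, which walks the data once and keys each sub-dataframe by its value of X:</p>

```python
import pandas as pd

df = pd.DataFrame({'X': ['A', 'B', 'A', 'C'], 'val': [1, 2, 3, 4]})

# one sub-dataframe per unique value of X, keyed by that value
sub_dfs = {key: group for key, group in df.groupby('X')}
print(sub_dfs['A'])
```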
|
python|pandas
| 1
|
375,555
| 71,634,924
|
Copy and split row by if cell condition it met - Pandas Python
|
<p>I am trying to handle the case where a cell contains a separator character (';'): I would like to duplicate the row once for each value obtained by splitting that cell in a specific column.
For example:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Name</th>
<th>Age</th>
<th>Car</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>David</td>
<td>45</td>
<td>Honda;Subaru</td>
</tr>
<tr>
<td>2</td>
<td>Oshir</td>
<td>32</td>
<td>BMW</td>
</tr>
</tbody>
</table>
</div>
<p>The result that I am trying to get is the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Index</th>
<th>Name</th>
<th>Age</th>
<th>Car</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>David</td>
<td>45</td>
<td>Honda</td>
</tr>
<tr>
<td>2</td>
<td>David</td>
<td>45</td>
<td>Subaru</td>
</tr>
<tr>
<td>3</td>
<td>Oshir</td>
<td>32</td>
<td>BMW</td>
</tr>
</tbody>
</table>
</div>
<p>Thanks!</p>
|
<p>Possible solution is the following:</p>
<pre><code>import pandas as pd
# set data and create dataframe
data = {"Name": ["David", "Oshir"], "Age": [45, 32], "Car": ["Honda;Subaru", "BMW"]}
df = pd.DataFrame(data)
df = df.assign(Car=df['Car'].str.split(';')).explode('Car').reset_index(drop=True)
df
</code></pre>
<p>Returns</p>
<p><a href="https://i.stack.imgur.com/H3in9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/H3in9.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|duplicates
| 1
|
375,556
| 71,443,753
|
Pandas Drop duplicates, reverse of subset
|
<p>I want to drop duplicates on my dataframe. I know I can use <code>subset</code> to type out all columns I want to perform it on, however I have 50+ columns. Is there a way to include all columns and exclude a subset?</p>
<p>For example include column B,C,D,E,G,H,I, etc. and exclude A and F.</p>
<p>Something like:
<code>df.drop_duplicates(subset_to_exclude=['A', 'F'])</code></p>
<p>Thanks.</p>
|
<p>Maybe this could be an approach for you (List comprehension)?</p>
<pre><code>df = pd.DataFrame({
    'A': ['Yum Yum', 'Yum Yum', 'Indomie', 'Indomie', 'Indomie'],
    'B': ['cup', 'cup', 'cup', 'pack', 'pack'],
    'F': [4, 4, 3.5, 15, 5]
})

# keep every column except A and F in the subset
print(df.drop_duplicates(subset=[val for val in df.columns if val not in ("A", "F")]))
#          A     B     F
# 0  Yum Yum   cup   4.0
# 3  Indomie  pack  15.0

# with this data that is equivalent to deduplicating on B alone
print(df.drop_duplicates(subset=["B"]))
#          A     B     F
# 0  Yum Yum   cup   4.0
# 3  Indomie  pack  15.0
</code></pre>
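<p>A related sketch that lets pandas compute the complement for you via <code>Index.difference</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'A': ['Yum Yum', 'Yum Yum', 'Indomie'],
    'B': ['cup', 'cup', 'pack'],
    'F': [4, 4, 15],
})

# subset = every column except A and F
deduped = df.drop_duplicates(subset=df.columns.difference(['A', 'F']))
print(deduped)
```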
|
pandas
| 1
|
375,557
| 71,511,129
|
Write a function that takes as input a list of these datetime strings and returns only the date in 'yyyy-mm-dd' format
|
<p>I want to write a function that takes as input a list of these datetime strings and returns only the date in 'yyyy-mm-dd' format.</p>
<p>This is the dataframe</p>
<pre><code>twitter_url = 'https://raw.githubusercontent.com/Explore-AI/Public-Data/master/Data/twitter_nov_2019.csv'
twitter_df = pd.read_csv(twitter_url)
twitter_df.head()
</code></pre>
<p>This is the date variable</p>
<pre><code>dates = twitter_df['Date'].to_list()
</code></pre>
<p>This is the code i have written</p>
<pre><code>
def date_parser(dates):
"""
This is a function that takes as input a list of the datetime strings
and returns only the date in 'yyyy-mm-dd' format.
"""
# for every figure in the date string
for date in dates:
# the function should return only the dates and neglect the time
return ([date[:4] + "-" + date[5:7] + "-" + date[8:-9]])
date_parser(dates[:3])
</code></pre>
<p>This is the output i get</p>
<pre><code>['2019-11-29']
</code></pre>
<p>This is the expected output</p>
<pre><code>['2019-11-29', '2019-11-28', '2019-11-28', '2019-11-28', '2019-11-28']
</code></pre>
<p>How do I do this?</p>
|
<p>You can try using regex, with two fixes to your function: the <code>return</code> sits inside the loop, so it exits after the first element — collect the results in a list instead — and a fully anchored pattern (<code>^...$</code>) will never match a string that also contains a time. First create the list:</p>
<pre><code>import re

new_date_list = []
</code></pre>
<p>and then inside the loop (assuming each string starts with the date):</p>
<pre><code>new_date_list.append(re.match(r"\d{4}-\d{2}-\d{2}", date).group())
</code></pre>
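<p>A regex-free sketch: since the timestamps are fixed-width ISO-like strings, slicing the first 10 characters (or parsing with pandas) yields the full list:</p>

```python
import pandas as pd

dates = ['2019-11-29 12:50:54', '2019-11-28 13:29:36', '2019-11-28 09:15:00']

sliced = [d[:10] for d in dates]                                   # plain slicing
parsed = pd.to_datetime(pd.Series(dates)).dt.strftime('%Y-%m-%d')  # via pandas
print(sliced)
```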
|
python|pandas|datetime
| 1
|
375,558
| 71,590,935
|
Sample Pandas dataframe based on multiple values in column
|
<p>I'm trying to even up a dataset for machine learning. <a href="https://stackoverflow.com/questions/56191448/sample-pandas-dataframe-based-on-values-in-column">There are great answers</a> for how to sample a dataframe with two values in a column (a binary choice).</p>
<p>In my case I have many values in column <code>x</code>. I want an equal number of records in the dataframe where</p>
<ul>
<li><code>x</code> is <code>0</code> or <code>not 0</code></li>
<li>or in a more complicated example the value in <code>x</code> is <code>0</code>, <code>5</code> or <code> other value</code></li>
</ul>
<p>Examples</p>
<pre><code> x
0 5
1 5
2 5
3 0
4 0
5 9
6 18
7 3
8 5
</code></pre>
<p><strong>For the first:</strong>
I have 2 rows where <code>x = 0</code> and 7 where <code>x != 0</code>. The result should balance this up and be 4 rows: the two with <code>x = 0</code> and 2 where <code>x != 0</code> (randomly selected). Preserving the same index for the sake of illustration:</p>
<pre><code>1 5
3 0
4 0
6 18
</code></pre>
<p><strong>For the second:</strong>
I have 2 rows where <code>x = 0</code>, 4 rows where <code>x = 5</code> and 3 rows where <code>x != 0 && x != 5</code>. The result should balance this up and be 6 rows in total: two for each condition. Preserving the same index for the sake of illustration:</p>
<pre><code>1 5
3 0
4 0
5 9
6 18
8 5
</code></pre>
<p>I've done examples with 2 conditions & 3 conditions. A solution that generalises to more would be good. It is better if it detects the minimum number of rows (for <code>0</code> in this example) so I don't need to work this out first before writing the condition.</p>
<p>How do I do this with pandas? Can I pass a custom function to <code>.groupby()</code> to do this?</p>
|
<p>IIUC, you could <code>groupby</code> on the condition whether "x" is 0 or not and <code>sample</code> the smallest-group-size number of entries from each group:</p>
<pre><code>g = df.groupby(df['x']==0)['x']
out = g.sample(n=g.count().min()).sort_index()
</code></pre>
<p>(An example) output:</p>
<pre><code>1 5
3 0
4 0
5 9
Name: x, dtype: int64
</code></pre>
<hr />
<p>For the second case, we could use <code>numpy.select</code> and <code>numpy.unique</code> to get the groups (the rest are essentially the same as above):</p>
<pre><code>import numpy as np
groups = np.select([df['x']==0, df['x']==5], [1,2], 3)
g = df.groupby(groups)['x']
out = g.sample(n=np.unique(groups, return_counts=True)[1].min()).sort_index()
</code></pre>
<p>An example output:</p>
<pre><code>2 5
3 0
4 0
5 9
7 3
8 5
Name: x, dtype: int64
</code></pre>
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 2
|
375,559
| 71,612,775
|
if condition not meet, leave blank python code
|
<p>How should we write code that tells Python to leave an empty cell in a dataframe when a condition is not met?</p>
<p>I tried <code>" "</code> like in Excel but it does not work. I also tried <code>'space'</code>, which does not work either.</p>
<p>e.g. <code>np.where((df['Adj Close'] > df['signal']), 1, 'what should be the sign here?')</code></p>
<p>Thanks in advance.</p>
<p>I am expecting the code to return a blank cell in the pandas dataframe when I run it.</p>
|
<p>If you need an empty numeric value, use the missing value <code>NaN</code>:</p>
<pre><code>np.where(df['Adj Close']> df['signal'], 1, np.nan)
</code></pre>
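<p>A minimal sketch with toy data:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Adj Close': [10, 5], 'signal': [7, 8]})

# 1 where the condition holds, NaN (blank) where it does not
df['flag'] = np.where(df['Adj Close'] > df['signal'], 1, np.nan)
print(df)
```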
|
python-3.x|pandas|dataframe
| 1
|
375,560
| 71,491,932
|
Why I get "RuntimeError: CUDA error: the launch timed out and was terminated" when using Google Cloud compute engine
|
<p>I have a Google Cloud compute engine instance with 4 Nvidia K80 GPUs and Ubuntu 20.04 (Python 3.8). When I try to train the yolov5 model, I get the following error:</p>
<pre><code>RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
[W CUDAGuardImpl.h:113] Warning: CUDA warning: the launch timed out and was terminated (function destroyEvent)
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:1230 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f62be2c17d2 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x239de (0x7f62f6ea69de in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x22d (0x7f62f6ea857d in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x300568 (0x7f63736d9568 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: c10::TensorImpl::release_resources() + 0x175 (0x7f62be2aa005 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #5: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x2e9 (0x7f62fa8ca5e9 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::Reducer::~Reducer() + 0x205 (0x7f62fa8bcd25 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #7: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f6373bb7212 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #8: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f63735c7506 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x7e182f (0x7f6373bba82f in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x1f5b20 (0x7f63735ceb20 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x1f6cce (0x7f63735cfcce in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #12: /usr/bin/python3() [0x5d1ec4]
frame #13: /usr/bin/python3() [0x5a958d]
frame #14: /usr/bin/python3() [0x5ed1a0]
frame #15: /usr/bin/python3() [0x544188]
frame #16: /usr/bin/python3() [0x5441da]
frame #17: /usr/bin/python3() [0x5441da]
frame #18: PyDict_SetItemString + 0x538 (0x5ce7c8 in /usr/bin/python3)
frame #19: PyImport_Cleanup + 0x79 (0x685179 in /usr/bin/python3)
frame #20: Py_FinalizeEx + 0x7f (0x68040f in /usr/bin/python3)
frame #21: Py_RunMain + 0x32d (0x6b7a1d in /usr/bin/python3)
frame #22: Py_BytesMain + 0x2d (0x6b7c8d in /usr/bin/python3)
frame #23: __libc_start_main + 0xf3 (0x7f6378be40b3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #24: _start + 0x2e (0x5fb12e in /usr/bin/python3)
</code></pre>
<p>I am training this model with this command:</p>
<pre><code>python3 -m torch.distributed.run --nproc_per_node 4 train.py --batch 16 --data coco128.yaml --weights yolov5s.pt --device 0,1,2,3
</code></pre>
<p>Am I missing something here?</p>
<p>Thanks</p>
|
<p>We are also running CUDA in the Google Cloud and our server restarted roughly when you posted your question. While we couldn't detect any changes, our service couldn't start due to "RuntimeError: No CUDA GPUs are available".
So there are some similarities, but also some differences.</p>
<p>Anyway, we opted for the good ol' uninstall and reinstall and that fixed it:</p>
<p>Uninstall:</p>
<pre><code>sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*"
sudo apt-get --purge remove "*nvidia*"
</code></pre>
<p>Plus deleting anything in /usr/local/*cuda*</p>
<p>Install:</p>
<pre><code>sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/ /"
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda-11-3
</code></pre>
<p>We also reinstalled CUDNN, but that may or may not be part of your stack.</p>
|
google-cloud-platform|pytorch|google-compute-engine|nvidia|yolo
| 1
|
375,561
| 71,488,685
|
How can I properly format a pandas dataframe into JSON?
|
<p>I have this function that takes in a JSON, transforms it to a pandas dataframe, does a calculation and attempts to return it in proper json form.</p>
<p>Here's what the function looks like:</p>
<pre><code>def run(data):
try:
start_time = datetime.datetime.now()
ret_columns = ["operat_flight_nbr", "schd_leg_dep_dt", \
"schd_leg_dep_tm", "dep_airprt_cd", "predictions"]
df = pd.DataFrame.from_records(json.loads(data)["data"])
predictions = predict(df)
df["predictions"] = predictions
elapsed_time_ms = (datetime.datetime.now() - start_time).total_seconds() * 1000
return {"data" : json.loads(df[ret_columns].to_json(date_format="iso")), "predictions" : predictions, "elapsed_time_ms" : elapsed_time_ms }
</code></pre>
<p>Here's the current undesirable output:</p>
<pre><code>{
"data": {
"operat_flight_nbr": {
"0": 2825,
"1": 3701
},
"schd_leg_dep_dt": {
"0": "2021-06-04",
"1": "2021-08-09"
},
"schd_leg_dep_tm": {
"0": "09:41:00",
"1": "13:03:00"
},
"dep_airprt_cd": {
"0": "CLT",
"1": "MYT"
},
"predictions": {
"0": 18.1041783139,
"1": 2.947184921
}
},
"predictions": [
18.104178313869596,
2.947184920966057
],
"elapsed_time_ms": 59.61000000000001
}
</code></pre>
<p>and here's the desired output:</p>
<pre><code>{
"data": [
{
"operat_flight_nbr": 2825
"schd_leg_dep_dt": "2021-06-04"
"schd_leg_dep_tm": "09:41:00"
"dep_airprt_cd": "CLT",
"predictions": 18.1041783139
},
{
"operat_flight_nbr": 3701
"schd_leg_dep_dt": "2021-08-09"
"schd_leg_dep_tm": "13:03:00"
"dep_airprt_cd": "MYT",
"predictions" : 2.947184921
}
],
"predictions": [
18.104178313869596,
2.947184920966057
],
"elapsed_time_ms": 59.61000000000001
}
</code></pre>
<p>I think I need to use json.dumps() somewhere, but when I remove the .to_json() function, the dates come out in the wrong format.</p>
|
<p>Add <code>orient='records'</code> to your <code>to_json()</code> call:</p>
<pre><code> return {"data" : json.loads(df[ret_columns].to_json(date_format="iso", orient="records")), "predictions" : predictions, "elapsed_time_ms" : elapsed_time_ms }
</code></pre>
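<p>A small sketch showing the effect (dates kept in ISO form, rows emitted as a list of objects — toy values mirroring the question):</p>

```python
import json
import pandas as pd

df = pd.DataFrame({
    'operat_flight_nbr': [2825, 3701],
    'schd_leg_dep_dt': pd.to_datetime(['2021-06-04', '2021-08-09']),
})

# orient='records' produces one JSON object per row instead of per column
records = json.loads(df.to_json(date_format='iso', orient='records'))
print(records)
```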
|
json|pandas
| 0
|
375,562
| 71,577,370
|
Convert pandas column of json-like strings to DataFrame
|
<p>I have the following DataFrame that I get "as-is" from an API:</p>
<pre><code>df = pd.DataFrame({'keys': {0: "[{'contract': 'G'}, {'contract_type': 'C'}, {'strike': '560'}, {'strip': '10/1/2022'}]",
1: "[{'contract': 'G'}, {'contract_type': 'P'}, {'strike': '585'}, {'strip': '10/1/2022'}]",
2: "[{'contract': 'G'}, {'contract_type': 'C'}, {'strike': '580'}, {'strip': '10/1/2022'}]",
3: "[{'contract': 'G'}, {'contract_type': 'C'}, {'strike': '545'}, {'strip': '10/1/2022'}]",
4: "[{'contract': 'G'}, {'contract_type': 'P'}, {'strike': '555'}, {'strip': '10/1/2022'}]"},
'value': {0: 353.3, 1: 25.8, 2: 336.65, 3: 366.05, 4: 20.8}})
>>> df
keys value
0 [{'contract': 'G'}, {'contract_type': 'C'}, {'... 353.30
1 [{'contract': 'G'}, {'contract_type': 'P'}, {'... 25.80
2 [{'contract': 'G'}, {'contract_type': 'C'}, {'... 336.65
3 [{'contract': 'G'}, {'contract_type': 'C'}, {'... 366.05
4 [{'contract': 'G'}, {'contract_type': 'P'}, {'... 20.80
</code></pre>
<p>Each row of the "keys" column is a string (not JSON, as the values are enclosed in single quotes instead of double quotes). For example:</p>
<pre><code>>>> df.at[0, keys]
"[{'contract': 'G'}, {'contract_type': 'C'}, {'strike': '560'}, {'strip': '10/1/2022'}]"
</code></pre>
<p>I would like to convert the "keys" column to a DataFrame and append it to <code>df</code> as new columns.</p>
<p>I am currently doing:</p>
<ol>
<li>Replacing single quotes with double quotes and passing to <code>json.loads</code> to read into a list of dictionaries with the below structure:</li>
</ol>
<pre><code>[{'contract': 'G'}, {'contract_type': 'C'}, {'strike': '560'}, {'strip': '10/1/2022'}]
</code></pre>
<ol start="2">
<li>Combining the dictionaries into a single dictionary with dictionary comprehension:</li>
</ol>
<pre><code>{'contract': 'G', 'contract_type': 'C', 'strike': '560', 'strip': '10/1/2022'}
</code></pre>
<ol start="3">
<li><code>apply</code>-ing this to every row and calling the <code>pd.DataFrame</code> constructor on the result.</li>
<li><code>join</code> back to original <code>df</code></li>
</ol>
<p>In a single line, my code is:</p>
<pre><code>>>> df.drop("keys", axis=1).join(pd.DataFrame(df["keys"].apply(lambda x: {k: v for d in json.loads(x.replace("'","\"")) for k, v in d.items()}).tolist()))
value contract contract_type strike strip
0 353.30 G C 560 10/1/2022
1 25.80 G P 585 10/1/2022
2 336.65 G C 580 10/1/2022
3 366.05 G C 545 10/1/2022
4 20.80 G P 555 10/1/2022
</code></pre>
<p>I was wondering if there is a better way to do this.</p>
|
<p>You could use <code>ast.literal_eval</code> (built-in) to convert the dict strings to actual dicts, and then use <code>pd.json_normalize</code> with <code>record_path=[[]]</code> to get the objects into a table format:</p>
<pre><code>import ast
new_df = pd.json_normalize(df['keys'].apply(ast.literal_eval), record_path=[[]]).apply(lambda col: col.dropna().tolist())
</code></pre>
<p>Output:</p>
<pre><code>>>> new_df
contract contract_type strike strip
0 G C 560 10/1/2022
1 G P 585 10/1/2022
2 G C 580 10/1/2022
3 G C 545 10/1/2022
4 G P 555 10/1/2022
</code></pre>
<hr />
<p>An alternate solution would be to use string replacement to merge the separate dicts into one:</p>
<pre><code>import ast
new_df = pd.DataFrame(df['keys'].str.replace("'}, {'", "', '", regex=False).apply(ast.literal_eval).str[0].tolist())
</code></pre>
<p>(The output is the same as above.)</p>
<hr />
<p>Yet another option, this one using <code>functools.reduce</code> (built in):</p>
<pre><code>import ast
import functools

# note: merging dicts with the | operator requires Python 3.9+
new_df = pd.DataFrame(df['keys'].apply(ast.literal_eval).apply(lambda row: functools.reduce(lambda x, y: x | y, row)).tolist())
</code></pre>
|
python|pandas
| 2
|
375,563
| 71,664,909
|
How to load a model using Tensorflow Hub and make a prediction?
|
<p>This should be a simple task: download a model saved in tensorflow_hub format, load it using tensorflow_hub, and use it.</p>
<p>This is the model I am trying to use (simCLR stored in Google Cloud): <a href="https://console.cloud.google.com/storage/browser/simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0;tab=objects?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false" rel="nofollow noreferrer">https://console.cloud.google.com/storage/browser/simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0;tab=objects?pageState=(%22StorageObjectListTable%22:(%22f%22:%22%255B%255D%22))&prefix=&forceOnObjectsSortingFiltering=false</a></p>
<p>I downloaded the /hub folder as they say, using</p>
<pre><code>gsutil -m cp -r \
"gs://simclr-checkpoints/simclrv2/pretrained/r50_1x_sk0/hub" \
</code></pre>
<p>.</p>
<p>The /hub folder contains the files:</p>
<pre><code>/saved_model.pb
/tfhub_module.pb
/variables/variables.index
/variables/variables.data-00000-of-00001
</code></pre>
<p>So far so good.
Now in python3, tensorflow2, tensorflow_hub 0.12 I run the following code:</p>
<pre><code>import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
path_to_hub = '/home/my_name/my_path/simclr/hub'
# Attempt 1
m = tf.keras.models.Sequential([hub.KerasLayer(path_to_hub, input_shape=(224,224,3))])
# Attempt 2
m = tf.keras.models.Sequential(hub.KerasLayer(hubmod))
m.build(input_shape=[None,224,224,3])
# Attempt 3
m = hub.KerasLayer(hub.load(hubmod))
# Toy Data Test
X = np.random.random((1,244,244,3)).astype(np.float32)
y = m.predict(X)
</code></pre>
<p>None of these 3 options to load the hub model work, with the following errors:</p>
<pre><code>Attempt 1 :
ValueError: Error when checking input: expected keras_layer_2_input to have shape (224, 224, 3) but got array with shape (244, 244, 3)
Attempt 2:
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
[[{{node sequential_3/keras_layer_3/StatefulPartitionedCall/base_model/conv2d/Conv2D}}]] [Op:__inference_keras_scratch_graph_46402]
Function call stack:
keras_scratch_graph
Attempt 3:
ValueError: Expected a string, got <tensorflow.python.training.tracking.tracking.AutoTrackable object at 0x7fa71c7a2dd0>
</code></pre>
<p>These 3 attempts are all code taken from tensorflow_hub tutorials and are repeated in other answers in stackoverflow, but none works, and I don't know how to continue from those error messages.</p>
<p>Appreciate any help, thanks.</p>
<p>Update 1:
Same issues happen if I try with this ResNet50 hub/
<a href="https://storage.cloud.google.com/simclr-gcs/checkpoints/ResNet50_1x.zip" rel="nofollow noreferrer">https://storage.cloud.google.com/simclr-gcs/checkpoints/ResNet50_1x.zip</a></p>
|
<p>As @Frightera pointed out, there was an error with the input shapes. Also the error on "Attempt 2" was solved by allowing for memory growth on the selected GPU. "Attempt 3" still does not work, but at least there are two methods for loading and using a model saved in /hub format:</p>
<pre><code>import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
gpus = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
tf.config.experimental.set_memory_growth(gpus[0], True)
hubmod = 'https://tfhub.dev/google/imagenet/mobilenet_v2_035_96/feature_vector/5'
# Alternative 1 - Works!
m = tf.keras.models.Sequential([hub.KerasLayer(hubmod, input_shape=(96,96,3))])
print(m.summary())
# Alternative 2 - Works!
m = tf.keras.models.Sequential(hub.KerasLayer(hubmod))
m.build(input_shape=[None, 96,96,3])
print(m.summary())
# Alternative 3 - Doesnt work
#m = hub.KerasLayer(hub.load(hubmod))
#m.build(input_shape=[None, 96,96,3])
#print(m.summary())
# Test
X = np.random.random((1,96,96,3)).astype(np.float32)
y = m.predict(X)
print(y.shape)
</code></pre>
|
tensorflow|deep-learning|tensorflow-hub
| 0
|
375,564
| 71,616,860
|
Passing pandas subset of dataframe to lambda function
|
<p>I am trying to pass a subset of my dataframe rows — conditioned with <code>'rating_count' > m</code> — to the 'weighted_rating' function. However, the passed data contains only the 'user_id' column while it's expected to contain several other columns. As a result, I receive the <code>KeyError</code> on the line <code>v = xx['rating_count']</code> (see the log below).</p>
<p>So, I need <code>xx['rating_count']</code> and <code>xx['rating']</code> to be present inside the function.</p>
<pre class="lang-py prettyprint-override"><code>def weighted_rating(xx):
print(xx)
v = xx['rating_count']
R = xx['rating']
return (v/(v+m) * R) + (m/(m+v) * C)
final_data['weighted_rating'] = final_data.loc[final_data['rating_count'] >= m].apply(lambda x: weighted_rating(x))
</code></pre>
<p><strong>Output:</strong></p>
<pre class="lang-sh prettyprint-override"><code>659 user_97032@domain.com
660 user_97032@domain.com
662 user_97032@domain.com
663 user_97032@domain.com
664 user_97032@domain.com
...
1653167 user_80312@domain.com
1653169 user_80312@domain.com
1653178 user_80312@domain.com
1653179 user_80312@domain.com
1653190 user_80312@domain.com
Name: user_id, Length: 88446, dtype: object
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3360 try:
-> 3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:
~\anaconda3\lib\site-packages\pandas\_libs\index.pyx in pandas._libs.index.IndexEngine.get_loc()
pandas\_libs\index_class_helper.pxi in pandas._libs.index.Int64Engine._check_type()
pandas\_libs\index_class_helper.pxi in pandas._libs.index.Int64Engine._check_type()
KeyError: 'rating_count'
</code></pre>
<p>The above exception was the direct cause of the following exception:</p>
<pre class="lang-sh prettyprint-override"><code>KeyError Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_3296/835381681.py in <module>
----> 1 final_data.loc[final_data['rating_count'] >= m].apply(lambda x: weighted_rating(x))
~\anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, raw, result_type, args, **kwargs)
8738 kwargs=kwargs,
8739 )
-> 8740 return op.apply()
8741
8742 def applymap(
~\anaconda3\lib\site-packages\pandas\core\apply.py in apply(self)
686 return self.apply_raw()
687
--> 688 return self.apply_standard()
689
690 def agg(self):
~\anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self)
810
811 def apply_standard(self):
--> 812 results, res_index = self.apply_series_generator()
813
814 # wrap results
~\anaconda3\lib\site-packages\pandas\core\apply.py in apply_series_generator(self)
826 for i, v in enumerate(series_gen):
827 # ignore SettingWithCopy here in case the user mutates
--> 828 results[i] = self.f(v)
829 if isinstance(results[i], ABCSeries):
830 # If we have a view on v, we need to make a copy because
~\AppData\Local\Temp/ipykernel_3296/835381681.py in <lambda>(x)
----> 1 final_data.loc[final_data['rating_count'] >= m].apply(lambda x: weighted_rating(x))
~\AppData\Local\Temp/ipykernel_3296/3170994745.py in weighted_rating(xx)
1 def weighted_rating(xx):
2 print(xx)
----> 3 v = xx['rating_count']
4 R = xx['rating']
5 return (v/(v+m) * R) + (m/(m+v) * C)
~\anaconda3\lib\site-packages\pandas\core\series.py in __getitem__(self, key)
940
941 elif key_is_scalar:
--> 942 return self._get_value(key)
943
944 if is_hashable(key):
~\anaconda3\lib\site-packages\pandas\core\series.py in _get_value(self, label, takeable)
1049
1050 # Similar to Index.get_value, but we do not fall back to positional
-> 1051 loc = self.index.get_loc(label)
1052 return self.index._get_values_for_loc(self, loc, label)
1053
~\anaconda3\lib\site-packages\pandas\core\indexes\base.py in get_loc(self, key, method, tolerance)
3361 return self._engine.get_loc(casted_key)
3362 except KeyError as err:
-> 3363 raise KeyError(key) from err
3364
3365 if is_scalar(key) and isna(key) and not self.hasnans:
KeyError: 'rating_count'
</code></pre>
<p>I also tried the following code with no luck:</p>
<pre class="lang-py prettyprint-override"><code>final_data['weighted_rating'] = final_data[final_data['rating_count'] >= m].apply(lambda x: weighted_rating(x))
</code></pre>
<p>Am I doing something wrong? Please help</p>
<p><strong>Edit: Adding sample data</strong></p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code><div>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>user_id</th>
<th>user_age</th>
<th>gender</th>
<th>location</th>
<th>joining_date</th>
<th>content_id</th>
<th>duration_user</th>
<th>date</th>
<th>start_time</th>
<th>end_time</th>
<th>content_type</th>
<th>language</th>
<th>genre</th>
<th>duration_content</th>
<th>release_date</th>
<th>rating</th>
<th>episode_count</th>
<th>season_count</th>
<th>rating_count</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>user_44289@domain.com</td>
<td>38</td>
<td>F</td>
<td>Goa</td>
<td>2018-09-03</td>
<td>cont_3375_16_10</td>
<td>2220000.0</td>
<td>2020-06-03</td>
<td>18:47:17</td>
<td>19:24:17</td>
<td>series</td>
<td>english</td>
<td>action</td>
<td>3060000.0</td>
<td>2015-11-16</td>
<td>5.0</td>
<td>10.0</td>
<td>16.0</td>
<td>64</td>
</tr>
<tr>
<th>1</th>
<td>user_44289@domain.com</td>
<td>38</td>
<td>F</td>
<td>Goa</td>
<td>2018-09-03</td>
<td>cont_1195_1_8</td>
<td>900000.0</td>
<td>2019-04-18</td>
<td>11:12:40</td>
<td>11:27:40</td>
<td>sports</td>
<td>english</td>
<td>football</td>
<td>5400000.0</td>
<td>2017-03-09</td>
<td>0.0</td>
<td>8.0</td>
<td>1.0</td>
<td>66</td>
</tr>
<tr>
<th>2</th>
<td>user_44289@domain.com</td>
<td>38</td>
<td>F</td>
<td>Goa</td>
<td>2018-09-03</td>
<td>cont_3470_2_15</td>
<td>1620000.0</td>
<td>2021-09-18</td>
<td>11:55:34</td>
<td>12:22:34</td>
<td>series</td>
<td>english</td>
<td>horror</td>
<td>2820000.0</td>
<td>1997-08-05</td>
<td>8.0</td>
<td>15.0</td>
<td>2.0</td>
<td>63</td>
</tr>
<tr>
<th>3</th>
<td>user_44289@domain.com</td>
<td>38</td>
<td>F</td>
<td>Goa</td>
<td>2018-09-03</td>
<td>cont_310_25_9</td>
<td>780000.0</td>
<td>2020-08-09</td>
<td>11:38:44</td>
<td>11:51:44</td>
<td>series</td>
<td>english</td>
<td>comedy</td>
<td>3960000.0</td>
<td>2019-06-29</td>
<td>4.0</td>
<td>9.0</td>
<td>25.0</td>
<td>62</td>
</tr>
<tr>
<th>4</th>
<td>user_44289@domain.com</td>
<td>38</td>
<td>F</td>
<td>Goa</td>
<td>2018-09-03</td>
<td>cont_4350_1_3</td>
<td>3480000.0</td>
<td>2021-06-25</td>
<td>23:42:44</td>
<td>00:40:44</td>
<td>sports</td>
<td>english</td>
<td>cricket</td>
<td>3840000.0</td>
<td>2002-10-21</td>
<td>0.0</td>
<td>3.0</td>
<td>1.0</td>
<td>66</td>
</tr>
</tbody>
</table>
</div></code></pre>
</div>
</div>
</p>
|
<p>I assume you want to apply <code>weighted_rating()</code> to each row of the dataframe <code>final_data</code>. In order to do that, you need to pass <code>axis=1</code> to apply() method.</p>
<pre><code>final_data['weighted_rating'] = final_data[final_data['rating_count'] >= m].apply(lambda x: weighted_rating(x), axis=1)
</code></pre>
<p>ref: <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html</a></p>
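<p>To see why <code>axis</code> matters, here is a minimal, self-contained sketch (the tiny frame and the constants <code>m</code> and <code>C</code> are made-up illustrations, not the asker's data): with <code>axis=1</code> each row arrives as a Series keyed by column name, so <code>row['rating_count']</code> resolves; with the default <code>axis=0</code> the function would receive whole columns instead, which is exactly the error in the question.</p>

```python
import pandas as pd

# Hypothetical miniature of final_data; m and C are illustrative constants.
df = pd.DataFrame({'rating': [4.0, 8.0], 'rating_count': [64, 63]})
m, C = 60, 5.0

def weighted_rating(row):
    # row is a Series indexed by the column names because of axis=1
    v, R = row['rating_count'], row['rating']
    return (v / (v + m) * R) + (m / (m + v) * C)

scores = df.apply(weighted_rating, axis=1)
print(scores.round(2).tolist())  # [4.48, 6.54]
```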
|
python|pandas|dataframe
| 0
|
375,565
| 71,640,405
|
I'm unable to convert my Src IP column into type integer using Python
|
<p>I am currently attempting to apply <code>int</code> to this column type, but it's throwing me an error.</p>
<pre><code>int(ipaddress.IPv4Address(df['Src IP']))
</code></pre>
<p>My error traceback is:</p>
<pre><code>AddressValueError: Expected 4 octets in '0
172.27.224.251\n1
172.27.224.251\n2
172.27.224.250\n3
172.27.224.251\n4
172.27.224.250\n
...
\n22619
172.27.224.251\n22620
172.27.224.251\n22621
172.27.224.251\n22622
172.27.224.251\n22623
172.27.224.251\n
Name: Src IP, Length: 22624, dtype: object
</code></pre>
|
<p>Use:</p>
<pre><code>df['new'] = df['Src IP'].map(ipaddress.IPv4Address).astype(int)
print(df)
# Output
Src IP new
0 172.27.224.251 2887508219
1 172.27.224.251 2887508219
2 172.27.224.250 2887508218
3 172.27.224.251 2887508219
4 172.27.224.250 2887508218
22619 172.27.224.251 2887508219
22620 172.27.224.251 2887508219
22621 172.27.224.251 2887508219
22622 172.27.224.251 2887508219
22623 172.27.224.251 2887508219
</code></pre>
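<p>For context, the original error happened because <code>ipaddress.IPv4Address(df['Src IP'])</code> stringifies the whole Series at once; converting one address at a time is what <code>map</code> does above. A quick pure-Python sketch of the per-element conversion:</p>

```python
import ipaddress

# int() of an IPv4Address yields its 32-bit integer representation.
ips = ['172.27.224.251', '172.27.224.250']
ints = [int(ipaddress.IPv4Address(ip)) for ip in ips]
print(ints)  # [2887508219, 2887508218]
```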
|
python|pandas|ip-address|data-conversion
| 0
|
375,566
| 71,546,900
|
Weird `glibc==2.17` conflict when trying to conda install tensorflow 1.4.1
|
<p>I'm trying to create a new conda enviornment with tensorflow (GPU), version 1.4.1 with the following command <code>conda create -n parsim_1.4.1 python=3 tensorflow-gpu=1.4.1</code>.</p>
<p>However, it prints a weird conflict:</p>
<pre><code>$ conda create -n parsim_1.4.1 python=3 tensorflow-gpu=1.4.1
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: \
Found conflicts! Looking for incompatible packages.
This can take several minutes. Press CTRL-C to abort.
failed
UnsatisfiableError: The following specifications were found to be incompatible with each other:
Output in format: Requested package -> Available versions
Package python conflicts for:
python=3
tensorflow-gpu=1.4.1 -> tensorflow-gpu-base==1.4.1 -> python[version='>=2.7,<2.8.0a0|>=3.5,<3.6.0a0|>=3.6,<3.7.0a0']The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.17=0
- python=3 -> libgcc-ng[version='>=9.3.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.17
</code></pre>
<p>My OS is CentOS7, and</p>
<pre><code>$ uname -a
Linux cpu-s-master 3.10.0-1160.42.2.el7.x86_64 #1 SMP Tue Sep 7 14:49:57 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>What's wrong here? How can I fix it?</p>
<h1>EDIT</h1>
<p>Thanks to @merv's comment, I've tried Mamba, and indeed it gave a better error message (and was much, much faster). If anyone's interested, this is the command that successfully installed my required versions:</p>
<pre><code>mamba create -n parsim python=3 "tensorflow-gpu=1.4" pillow opencv -c shuangnan -c anaconda
</code></pre>
|
<p>Conda's error reporting <a href="https://stackoverflow.com/a/69137255/570918">isn't always helpful</a>. Mamba is sometimes better, and in this particular case it gives:</p>
<pre class="lang-bash prettyprint-override"><code>Looking for: ['python=3', 'tensorflow-gpu=1.4.1']
conda-forge/linux-64 Using cache
conda-forge/noarch Using cache
pkgs/main/linux-64 No change
pkgs/main/noarch No change
pkgs/r/linux-64 No change
pkgs/r/noarch No change
Encountered problems while solving:
- nothing provides cudatoolkit 8.0.* needed by tensorflow-gpu-base-1.4.1-py27h01caf0a_0
</code></pre>
<p>Even here, that <code>py27</code> in the build string is weird, but it at least directs us to <code>cudatoolkit 8.0</code>, which is no longer hosted in the <strong>main</strong> channel. Instead, you need to <a href="https://stackoverflow.com/a/61483314/570918">include the <strong>free</strong> channel</a>. The following works for me:</p>
<pre class="lang-bash prettyprint-override"><code>$ CONDA_SUBDIR=linux-64 CONDA_CHANNEL_PRIORITY=flexible \
mamba create -n foo \
-c anaconda -c free \
python=3 tensorflow-gpu=1.4.1
__ __ __ __
/ \ / \ / \ / \
/ \/ \/ \/ \
███████████████/ /██/ /██/ /██/ /████████████████████████
/ / \ / \ / \ / \ \____
/ / \_/ \_/ \_/ \ o \__,
/ _/ \_____/ `
|/
███╗ ███╗ █████╗ ███╗ ███╗██████╗ █████╗
████╗ ████║██╔══██╗████╗ ████║██╔══██╗██╔══██╗
██╔████╔██║███████║██╔████╔██║██████╔╝███████║
██║╚██╔╝██║██╔══██║██║╚██╔╝██║██╔══██╗██╔══██║
██║ ╚═╝ ██║██║ ██║██║ ╚═╝ ██║██████╔╝██║ ██║
╚═╝ ╚═╝╚═╝ ╚═╝╚═╝ ╚═╝╚═════╝ ╚═╝ ╚═╝
mamba (0.21.1) supported by @QuantStack
GitHub: https://github.com/mamba-org/mamba
Twitter: https://twitter.com/QuantStack
█████████████████████████████████████████████████████████████
Looking for: ['python=3', 'tensorflow-gpu=1.4.1']
anaconda/linux-64 Using cache
anaconda/noarch Using cache
conda-forge/linux-64 Using cache
conda-forge/noarch Using cache
pkgs/main/noarch No change
pkgs/main/linux-64 No change
pkgs/r/linux-64 No change
pkgs/r/noarch No change
free/linux-64 No change
free/noarch No change
Transaction
Prefix: /Users/mfansler/miniconda3/envs/foo
Updating specs:
- python=3
- tensorflow-gpu=1.4.1
Package Version Build Channel Size
───────────────────────────────────────────────────────────────────────────────────────
Install:
───────────────────────────────────────────────────────────────────────────────────────
+ blas 1.0 openblas anaconda/linux-64 49kB
+ bleach 1.5.0 py36_0 free/linux-64 22kB
+ ca-certificates 2020.10.14 0 anaconda/linux-64 131kB
+ certifi 2020.6.20 py36_0 anaconda/linux-64 163kB
+ cudatoolkit 8.0 3 free/linux-64 338MB
+ cudnn 7.1.3 cuda8.0_0 anaconda/linux-64 241MB
+ html5lib 0.9999999 py36_0 free/linux-64 181kB
+ importlib-metadata 2.0.0 py_1 anaconda/noarch 36kB
+ ld_impl_linux-64 2.33.1 h53a641e_7 anaconda/linux-64 660kB
+ libedit 3.1.20191231 h14c3975_1 anaconda/linux-64 124kB
+ libffi 3.3 he6710b0_2 anaconda/linux-64 55kB
+ libgcc-ng 9.1.0 hdf63c60_0 anaconda/linux-64 8MB
+ libgfortran-ng 7.3.0 hdf63c60_0 anaconda/linux-64 1MB
+ libopenblas 0.3.10 h5a2b251_0 anaconda/linux-64 8MB
+ libprotobuf 3.13.0.1 hd408876_0 anaconda/linux-64 2MB
+ libstdcxx-ng 9.1.0 hdf63c60_0 anaconda/linux-64 4MB
+ markdown 3.3.2 py36_0 anaconda/linux-64 126kB
+ ncurses 6.2 he6710b0_1 anaconda/linux-64 1MB
+ numpy 1.19.1 py36h30dfecb_0 anaconda/linux-64 21kB
+ numpy-base 1.19.1 py36h75fe3a5_0 anaconda/linux-64 5MB
+ openssl 1.1.1h h7b6447c_0 anaconda/linux-64 4MB
+ pip 20.2.4 py36_0 anaconda/linux-64 2MB
+ protobuf 3.13.0.1 py36he6710b0_1 anaconda/linux-64 715kB
+ python 3.6.12 hcff3b4d_2 anaconda/linux-64 36MB
+ readline 8.0 h7b6447c_0 anaconda/linux-64 438kB
+ setuptools 50.3.0 py36hb0f4dca_1 anaconda/linux-64 913kB
+ six 1.15.0 py_0 anaconda/noarch 13kB
+ sqlite 3.33.0 h62c20be_0 anaconda/linux-64 2MB
+ tensorflow-gpu 1.4.1 0 anaconda/linux-64 3kB
+ tensorflow-gpu-base 1.4.1 py36h01caf0a_0 anaconda/linux-64 119MB
+ tensorflow-tensorboard 1.5.1 py36hf484d3e_1 anaconda/linux-64 3MB
+ tk 8.6.10 hbc83047_0 anaconda/linux-64 3MB
+ werkzeug 1.0.1 py_0 anaconda/noarch 249kB
+ wheel 0.35.1 py_0 anaconda/noarch 37kB
+ xz 5.2.5 h7b6447c_0 anaconda/linux-64 449kB
+ zipp 3.3.1 py_0 anaconda/noarch 12kB
+ zlib 1.2.11 h7b6447c_3 anaconda/linux-64 122kB
Summary:
Install: 37 packages
Total download: 784MB
───────────────────────────────────────────────────────────────────────────────────────
</code></pre>
|
python|tensorflow|conda
| 4
|
375,567
| 71,709,830
|
Pandas function for showing aggfunc at every level
|
<p>Let's suppose I have a pivot table that looks like this:</p>
<pre><code>pd.pivot_table(
data,
columns=['A','B','C'],
values='widgets',
aggfunc='count'
).T
</code></pre>
<pre><code>[Column] Count
A B C 1
D 2
E F 3
G 4
H I J 5
K L 6
</code></pre>
<p>What I want is:</p>
<pre><code>A 10 B 3 C 1
D 2
E 7 F 3
G 4
H 11 I 11 J 5
L 6
</code></pre>
<p>with intermediate sums for each category shown alongside the final count.</p>
|
<p>Make sure index levels are named:</p>
<pre><code>df = pd.DataFrame(
{'Count': [1, 2, 3, 4, 5, 6]},
pd.MultiIndex.from_tuples([
('A', 'B', 'C'),
('A', 'B', 'D'),
('A', 'E', 'F'),
('A', 'E', 'G'),
('H', 'I', 'J'),
('H', 'K', 'L')
], names=['One', 'Two', 'Three'])
)
df
Count
One Two Three
A B C 1
D 2
E F 3
G 4
H I J 5
K L 6
</code></pre>
<hr />
<pre><code>from functools import reduce
import pandas as pd
names = df.index.names
reduce(
pd.DataFrame.join,
[df.groupby(level=names[:i+1]).sum().add_suffix(f'_{names[i]}')
for i in range(df.index.nlevels)]
)
Count_One Count_Two Count_Three
One Two Three
A B C 10 3 1
D 10 3 2
E F 10 7 3
G 10 7 4
H I J 11 5 5
K L 11 6 6
</code></pre>
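<p>An equivalent sketch (same toy frame as above) that avoids the explicit join by using <code>groupby(...).transform('sum')</code>, which broadcasts each level's sum back onto every row:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {'Count': [1, 2, 3, 4, 5, 6]},
    pd.MultiIndex.from_tuples([
        ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'E', 'F'),
        ('A', 'E', 'G'), ('H', 'I', 'J'), ('H', 'K', 'L')
    ], names=['One', 'Two', 'Three'])
)

out = df.copy()
for i in range(df.index.nlevels):
    # transform('sum') keeps the original shape, aligning each group sum to its rows
    levels = list(range(i + 1))
    out[f'Count_{df.index.names[i]}'] = df.groupby(level=levels)['Count'].transform('sum')
print(out)
```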
|
pandas|pandas-groupby|data-visualization|pivot-table
| 2
|
375,568
| 71,549,342
|
How to read .dta into Python
|
<p>I want to read data from <a href="http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta" rel="nofollow noreferrer">http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta</a>. I tried below,</p>
<pre><code>import pandas as pd
import pyreadstat as pyreadstat
dataframe, meta = pyreadstat.read_dta("http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta")
</code></pre>
<p>With this I am getting below error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyreadstat/pyreadstat.pyx", line 260, in pyreadstat.pyreadstat.read_dta
File "pyreadstat/_readstat_parser.pyx", line 1012, in pyreadstat._readstat_parser.run_conversion
pyreadstat._readstat_parser.PyreadstatError: File http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta does not exist!
</code></pre>
<p>I also tried using <code>pandas</code>, but failed</p>
<pre><code>>>> Data = pd.read_stata("http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/pandas/io/stata.py", line 1898, in read_stata
reader = StataReader(
File "/usr/local/lib/python3.9/site-packages/pandas/io/stata.py", line 1066, in __init__
self._read_header()
File "/usr/local/lib/python3.9/site-packages/pandas/io/stata.py", line 1095, in _read_header
self._read_old_header(first_char)
File "/usr/local/lib/python3.9/site-packages/pandas/io/stata.py", line 1299, in _read_old_header
raise ValueError(_version_error.format(version=self.format_version))
ValueError: Version of given Stata file is 110. pandas supports importing versions 105, 108, 111 (Stata 7SE), 113 (Stata 8/9), 114 (Stata 10/11), 115 (Stata 12), 117 (Stata 13), 118 (Stata 14/15/16),and 119 (Stata 15/16, over 32,767 variables).
</code></pre>
<p>However with <code>R</code> I could download this using data without any problem,</p>
<pre><code>> head(read.dta("http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta"))
prate mrate totpart totelg age totemp sole ltotemp
1 26.1 0.21 1653 6322 8 8709 0 9.072112
2 100.0 1.42 262 262 6 315 1 5.752573
3 97.6 0.91 166 170 10 275 1 5.616771
4 100.0 0.42 257 257 7 500 0 6.214608
5 82.5 0.53 591 716 28 933 1 6.838405
6 100.0 1.82 92 92 7 143 1 4.962845
</code></pre>
<p>Could you please help me to download this data with <code>Python</code>?</p>
|
<pre><code>import requests
import pyreadstat
url = 'http://fmwww.bc.edu/ec-p/data/wooldridge/401k.dta'
def download_file(url):
local_filename = url.split('/')[-1]
with requests.get(url, stream=True) as r:
r.raise_for_status()
with open(local_filename, 'wb') as f:
for chunk in r.iter_content(chunk_size=8192):
f.write(chunk)
return local_filename
# download_file(url)
df, meta = pyreadstat.read_dta(download_file(url))
</code></pre>
|
python|python-3.x|pandas
| 1
|
375,569
| 71,592,500
|
performing function in for loop help? python
|
<p>Hello everyone, I'm in the final step of my program to send to the calendar, but I can't get my for loop to perform the function over the whole list: it only processes 1 of the 3 entries. Here is my code below.</p>
<pre><code>Order List = ['0730049','4291200','1830470']
for eachId in Order_List:
**Code to create orders into excel sheet.**
df_list = [f'OrderId_{eachId}.xlsx']
print(df_list)
***everything works fine up until here ***
for eachOrder in df_list:
df = pd.read_excel (eachOrder,engine='openpyxl')
New_Order_Date= df.iat[0,3]
Order_obj_date= datetime.strptime(New_Order_Date,'%m/%d/%Y').date()
print(Order_obj_date)
</code></pre>
<p>When I print df_list, it shows me the list of .xlsx files created. Output:</p>
<pre><code>'0730049.xlsx','4291200.xlsx','1830470.xlsx'
</code></pre>
<p>But now I need New_Order_Date to be read from each Excel file in turn, finding the date at cell (D,2). Currently my formula for one Excel file, <code>df.iat[0,3]</code>, works perfectly. How can I get it to perform that same lookup for each file in df_list?</p>
<p>Hopefully this makes sense. To give everyone an idea, the current code works for one file; when printed it gives:</p>
<pre><code>New_Order_Date=df.iat[0,3]
output: 07-01-2022
</code></pre>
<p>expected output;</p>
<pre><code>0730049.xlsx= 07-01-2022
4291200.xlsx= 07-04-2022
1830470.xlsx= 07-27-2022
</code></pre>
|
<p><code>df_list</code> isn't a list of all the filenames. You're overwriting it with just one filename each time through the loop.</p>
<p>Use a list comprehension to get all of them.</p>
<pre><code>df_list = [f'OrderId_{eachId}.xlsx' for eachId in Order_List]
</code></pre>
<p>But you may not even need that variable, just do everything in the loop over <code>Order_List</code></p>
<pre><code>for eachId in Order_List:
df = pd.read_excel (f'OrderId_{eachId}.xlsx', engine='openpyxl')
New_Order_Date= df.iat[0,3]
Order_obj_date= datetime.strptime(New_Order_Date,'%m/%d/%Y').date()
print(Order_obj_date)
</code></pre>
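<p>A quick sketch of the difference (pure Python, using the question's IDs): reassigning inside the loop keeps only the last filename, while a comprehension collects all of them.</p>

```python
order_list = ['0730049', '4291200', '1830470']

# Reassignment inside the loop: df_list ends up holding only the last file.
for each_id in order_list:
    df_list = [f'OrderId_{each_id}.xlsx']
print(df_list)  # ['OrderId_1830470.xlsx']

# List comprehension: one entry per id.
df_list = [f'OrderId_{each_id}.xlsx' for each_id in order_list]
print(df_list)
```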
|
python|pandas|dataframe|for-loop
| 2
|
375,570
| 71,465,356
|
Making a column with differences between 2 other column
|
<p>I already started a similar topic, but a few essential novelties are brought in here. We have two columns, "333, 444, 555" and "333A, 444, 555B", and we need to get a column showing "A, n/a, B", i.e. the difference in values between the two.</p>
<pre><code>one= ''
for h in str(column1):
if h not in str(column2):
one += h
</code></pre>
<p>which gives us a single string of differences. But is there a way to delimit the outcome and place each piece at its corresponding row? Making</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">Col1</th>
<th style="text-align: left;">Col2</th>
<th style="text-align: left;">Col3</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">333</td>
<td style="text-align: left;">333A</td>
<td style="text-align: left;">A</td>
</tr>
<tr>
<td style="text-align: left;">444</td>
<td style="text-align: left;">444</td>
<td style="text-align: left;">n/a</td>
</tr>
<tr>
<td style="text-align: left;">555</td>
<td style="text-align: left;">555B</td>
<td style="text-align: left;">B</td>
</tr>
</tbody>
</table>
</div>
<p>? Thank you</p>
|
<p>IIUC, use <code>difflib</code>:</p>
<pre><code>from difflib import ndiff
diff = lambda x: ''.join(c[-1] for c in ndiff(x['Col1'], x['Col2']) if c[0] == '+')
df['Col3'] = df.astype({'Col1': str, 'Col2': str}).apply(diff, axis=1)
print(df)
# Output
Col1 Col2 Col3
0 333 333A A
1 444 444
2 555 555B B
3 777 787C 8C
</code></pre>
<p>Using <code>astype({'Col1': str, 'Col2': str})</code> is not mandatory if you already have strings.</p>
<p><strong>Update</strong></p>
<p>Try this version:</p>
<pre><code>def diff(x):
s1 = str(x['Col1'])
s2 = str(x['Col2'])
l = [c[-1] for c in ndiff(s1, s2) if c[0] == '+']
return ''.join(l)
df['Col3'] = df.apply(diff, axis=1)
</code></pre>
<p>Explanation:</p>
<p>Suppose the strings <code>s1 = '567'</code> and <code>s2 = '597C'</code>. The expected result is '9C'.</p>
<pre><code># Without a comprehension
for c in ndiff(s1, s2):
print(c)
# Output
5 # character in both strings
+ 6 # character in s1 only
+ 9 # character in s2 only
7 # character in both strings
+ C # character in s2 only
</code></pre>
<ul>
<li><code>c[0]</code> is the first character (the sign '+' or '-' or ' ')</li>
<li><code>c[-1]</code> is the last character (the current letter)</li>
</ul>
<p>So with the comprehension, we want to extract the current character <code>c[-1]</code> only if the sign <code>c[0]</code> is <code>'+'</code>.</p>
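<p>A runnable sketch of that extraction on two of the sample pairs (no DataFrame needed, since <code>ndiff</code> works directly on strings):</p>

```python
from difflib import ndiff

def added_chars(a, b):
    # keep only characters marked '+' (present in b but not matched in a)
    return ''.join(c[-1] for c in ndiff(a, b) if c[0] == '+')

print(added_chars('333', '333A'))  # A
print(added_chars('777', '787C'))  # 8C
```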
|
python|pandas
| 0
|
375,571
| 71,585,905
|
Python - pandas: create a separate row for each recurrence of a record
|
<p>I have date-interval data with a "periodicity" column representing how frequently each date interval recurs:</p>
<ul>
<li>Weekly: same weekdays every week</li>
<li>Biweekly: same weekdays every other week</li>
<li>Monthly: Same DATES every month</li>
</ul>
<p>Moreover I have a "recurring_until"-column specifying when the recurrence should stop.
What I need to accomplish is:</p>
<ul>
<li>creating a separate row for each recurring record until the "recurring_until" has been reached.</li>
</ul>
<p>What I have:
<a href="https://i.stack.imgur.com/HnmZB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HnmZB.png" alt="enter image description here" /></a></p>
<p>What I need:
<a href="https://i.stack.imgur.com/SCl1V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SCl1V.png" alt="enter image description here" /></a></p>
<p>I have been trying with various for loops without much success. Here is the sample data:</p>
<pre><code>import pandas as pd
data = {'id':['1','2','3','4'],'from':['5/31/2020','6/3/2020','6/18/2020','6/10/2020'],'to':['6/5/2020','6/3/2020','6/19/2020','6/10/2020'],'periodicity':['weekly','weekly','biweekly','monthly'],'recurring_until':['7/25/2020','6/9/2020','12/30/2020','7/9/2020']}
df = pd.DataFrame(data)
</code></pre>
|
<p>First of all preprocess:</p>
<pre><code>df.set_index("id", inplace=True)
df["from"], df["to"], df["recurring_until"] = pd.to_datetime(df["from"]), pd.to_datetime(df.to), pd.to_datetime(df.recurring_until)
</code></pre>
<p>Next compute all the periodic <code>from</code>:</p>
<pre><code>new_from = df.apply(lambda x: pd.date_range(x["from"], x.recurring_until), axis=1) #generate all days between from and recurring_until
new_from[df.periodicity=="weekly"] = new_from[df.periodicity=="weekly"].apply(lambda x:x[::7]) #slicing by week
new_from[df.periodicity=="biweekly"] = new_from[df.periodicity=="biweekly"].apply(lambda x:x[::14]) #slicing by biweek
new_from[df.periodicity=="monthly"] = new_from[df.periodicity=="monthly"].apply(lambda x:x[x.day==x.day[0]]) #selecting only days equal to the first day
new_from = new_from.explode() #explode to obtain a series
new_from.name = "from" #naming the series
</code></pre>
<p>after this we have <code>new_from</code> like this:</p>
<pre><code>id
1 2020-05-31
1 2020-06-07
1 2020-06-14
1 2020-06-21
1 2020-06-28
1 2020-07-05
1 2020-07-12
1 2020-07-19
2 2020-06-03
3 2020-06-18
3 2020-07-02
3 2020-07-16
3 2020-07-30
3 2020-08-13
3 2020-08-27
3 2020-09-10
3 2020-09-24
3 2020-10-08
3 2020-10-22
3 2020-11-05
3 2020-11-19
3 2020-12-03
3 2020-12-17
4 2020-06-10
Name: from, dtype: datetime64[ns]
</code></pre>
<p>Now lets compute all the periodic <code>to</code> as:</p>
<pre><code>new_to = new_from+(df.to-df["from"]).loc[new_from.index]
new_to.name = "to"
</code></pre>
<p>and we have <code>new_to</code> like this:</p>
<pre><code>id
1 2020-06-05
1 2020-06-12
1 2020-06-19
1 2020-06-26
1 2020-07-03
1 2020-07-10
1 2020-07-17
1 2020-07-24
2 2020-06-03
3 2020-06-19
3 2020-07-03
3 2020-07-17
3 2020-07-31
3 2020-08-14
3 2020-08-28
3 2020-09-11
3 2020-09-25
3 2020-10-09
3 2020-10-23
3 2020-11-06
3 2020-11-20
3 2020-12-04
3 2020-12-18
4 2020-06-10
Name: to, dtype: datetime64[ns]
</code></pre>
<p>We can finally concatenate these two series and join them to the initial dataframe:</p>
<pre><code>periodic_df = pd.concat([new_from, new_to], axis=1).join(df[["periodicity", "recurring_until"]]).reset_index()
</code></pre>
<p>result:</p>
<p><a href="https://i.stack.imgur.com/G79bA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G79bA.png" alt="enter image description here" /></a></p>
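<p>For intuition, the same expansion for a single weekly interval can be sketched with the standard library alone (using the question's id 1 dates); the vectorized version above does this for every row at once:</p>

```python
from datetime import date, timedelta

# Expand one weekly interval: repeat (from, to) every 7 days
# while `from` stays on or before recurring_until.
start, end = date(2020, 5, 31), date(2020, 6, 5)
recurring_until = date(2020, 7, 25)

rows = []
current = start
while current <= recurring_until:
    rows.append((current, current + (end - start)))
    current += timedelta(days=7)

print(len(rows))  # 8 occurrences, matching the answer's output for id 1
print(rows[0], rows[-1])
```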
|
python|pandas
| 0
|
375,572
| 71,573,477
|
Import numpy can't be resolved ERROR When I already have numpy installed
|
<p>I am trying to run the chatbot that I created with Python, but it keeps failing as if numpy were not installed, even though it is: whenever I try to install it, pip tells me it is already installed. The error reads <code>"ModuleNotFoundError: No module named 'numpy'"</code></p>
<p>I don't understand what the problem is, why is it always throwing this error? even for nltk and tensorflow even though I have them all installed.</p>
<p>How can I resolve this issue?</p>
<p>Here is a screen shot when i install numpy:</p>
<p><a href="https://i.stack.imgur.com/y1CBM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y1CBM.png" alt="enter image description here" /></a></p>
<p>Here is a screen shot of the error:</p>
<p><a href="https://i.stack.imgur.com/xnRAP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xnRAP.png" alt="enter image description here" /></a></p>
|
<p>I fixed the problem by deleting a python folder that was in the root directory of C:/ which caused installing the package to be ignored and not be installed in the correct directory which is in C:/Users/</p>
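<p>For anyone hitting this in the future, a quick diagnostic sketch: the usual cause is that <code>pip</code> installs into a different Python installation than the one running the script. Printing the interpreter's own paths and comparing them with the path shown by <code>pip --version</code> exposes the mismatch.</p>

```python
import sys

# The interpreter actually executing your script:
print(sys.executable)
# Its installation root; packages live under here in site-packages:
print(sys.prefix)
# Compare with the path printed by `pip --version`; if they differ,
# pip installed numpy into another Python installation.
```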
|
python|numpy
| 0
|
375,573
| 71,752,250
|
How could I generate a 2D array from a known slope and aspect value?
|
<p>Given a dummy heightmap (or digital elevation model) stored as a Numpy array like this:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
line = np.flip(np.arange(0, 10))
dem = np.tile(line, (10, 1))
</code></pre>
<p>I can calculate its slope and aspect like this:</p>
<pre><code>x, y = np.gradient(dem)
slope = np.degrees(np.arctan(np.sqrt(x**2 + y**2)))
aspect = np.degrees(np.arctan2(x, -y))
</code></pre>
<p>And visualise it:</p>
<pre><code>fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
y, x = np.mgrid[:10, :10]
ax.scatter(x, y, dem)
ax.set_title(f"Slope={np.mean(slope)}, Aspect={np.mean(aspect)}")
</code></pre>
<p><a href="https://i.stack.imgur.com/fRvxD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fRvxD.png" alt="enter image description here" /></a></p>
<p>But how would I go the other way?</p>
<p>I'd like to generate a blank 2D Numpy array of a fixed size, then fill it with values that follow a known slope and aspect (starting from an arbitrary elevation e.g. 0).</p>
|
<p>Since <code>gradient</code> assumes a step size of 1, the general formula for making a line with <code>N</code> points and a given <code>slope</code> and <code>offset</code> is</p>
<pre><code>slope * np.arange(N) + offset
</code></pre>
<p>What you call Slope is the magnitude of the gradient, given as an angle. What you call Aspect is the ratio of partial slopes in the x- and y-directions, also given as an angle. You have the following system of non-linear equations:</p>
<pre><code>np.tan(np.radians(slope))**2 = sx**2 + sy**2
np.tan(np.radians(aspect)) = -sx / sy
</code></pre>
<p>Luckily, you can solve this pretty easily using substitution:</p>
<pre><code>p = np.tan(np.radians(slope))**2
q = np.tan(np.radians(aspect))
sy = np.sqrt(p / (q**2 + 1))
sx = -q * sy
</code></pre>
<p>Now all you need to do is take the outer sum of two lines with slopes <code>sx</code> and <code>sy</code>:</p>
<pre><code>dem = offset + sx * np.arange(NX)[::-1, None] + sy * np.arange(NY)
</code></pre>
<p>Here is an example:</p>
<p>Inputs:</p>
<pre><code>aspect = -30
slope = 45
offset = 1
NX = 12
NY = 15
</code></pre>
<p>Gradient:</p>
<pre><code>p = np.tan(np.radians(slope))**2
q = np.tan(np.radians(aspect))
sy = np.sqrt(p / (q**2 + 1)) # np.sqrt(3) / 2
sx = -q * sy # 0.5
</code></pre>
<p>Result:</p>
<pre><code>dem = offset + sx * np.arange(NX)[::-1, None] + sy * np.arange(NY)
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})  # already a 3D axes; no extra add_subplot needed
ax.scatter(*np.mgrid[:NX, :NY], dem)
</code></pre>
<p><a href="https://i.stack.imgur.com/Brbey.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Brbey.png" alt="enter image description here" /></a></p>
<p>Your conventions may be off by a sign, which you should be able to fix easily by looking at the plot.</p>
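<p>A quick round-trip check of those formulas with plain <code>math</code> (same example values; the sign convention assumed for the aspect here is the one derived above, <code>q = -sx/sy</code>):</p>

```python
import math

aspect, slope = -30.0, 45.0

# Forward: partial slopes from the angles (as derived above).
p = math.tan(math.radians(slope)) ** 2
q = math.tan(math.radians(aspect))
sy = math.sqrt(p / (q ** 2 + 1))
sx = -q * sy

# Backward: recover the angles from sx, sy.
slope_back = math.degrees(math.atan(math.sqrt(sx ** 2 + sy ** 2)))
aspect_back = math.degrees(math.atan2(-sx, sy))  # from q = -sx / sy

print(round(slope_back, 6), round(aspect_back, 6))  # 45.0 -30.0
```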
|
python|numpy|matplotlib|numpy-ndarray|heightmap
| 1
|
375,574
| 71,569,202
|
For loop optimization to create an adjacency matrix
|
<p>I am currently working with a graph with labeled edges.
The original adjacency matrix has shape [n_nodes, n_nodes, n_edges], where each cell [i,j,k] is 1 if nodes i and j are connected via edge k.</p>
<p>I need to create the reverse of the original graph, where nodes become edges and edges become nodes, so I need a new matrix of shape [n_edges, n_edges, n_nodes], where each cell [i,j,k] is 1 if edges i and j have node k as a common vertex.</p>
<p><a href="https://i.stack.imgur.com/OfS9E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OfS9E.png" alt="enter image description here" /></a></p>
<p>The following code correctly completes the task, but the 5 nested for-loops make it far too slow: processing the number of graphs I have to work with would take about 700 hours.</p>
<p>Is there a better way to implement this?</p>
<pre><code>n_nodes = extended_adj.shape[0]
n_edges = extended_adj.shape[2]
reversed_graph = torch.zeros(n_edges, n_edges, n_nodes, 1)
for i in range(n_nodes):
for j in range(n_nodes):
for k in range(n_edges):
#If adj_mat[i][j][k] == 1 nodes i and j are connected with edge k
#For this reason the edge k must be connected via node j to every outcoming edge of j
if extended_adj[i][j][k] == 1:
#Given node j, we need to loop through every other possible node (l)
for l in range(n_nodes):
#For every other node, we need to check if they are connected by an edge (m)
for m in range(n_edges):
if extended_adj[j][l][m] == 1:
reversed_graph[k][m][j] = 1
</code></pre>
<p>Thanks in advance.</p>
|
<p>Echoing the comments above, this graph representation is almost certainly cumbersome and inefficient. But that notwithstanding, let's define a vectorized solution without loops and that uses tensor views whenever possible, which should be fairly efficient to compute for larger graphs.</p>
<p>For clarity let's use <code>[i,j,k]</code> to index <code>G</code> (original graph) and <code>[i',j',k']</code> to index <code>G'</code> (new graph). And let's shorten <code>n_edges</code> to <code>e</code> and <code>n_nodes</code> to <code>n</code>.</p>
<p>Consider the 2D matrix <code>slice = torch.max(G, dim=1).values</code> (<code>torch.max</code> along a dimension returns a <code>(values, indices)</code> pair, so we keep only the values). At each coordinate <code>[a,b]</code> of this slice, a 1 indicates that node <code>a</code> is connected by edge <code>b</code> to some other node (we don't care which).</p>
<pre><code>slice = torch.max(G, dim=1).values # dimension [n,e]
</code></pre>
<p>We're well on our way to the solution, but we need an expression that tells us whether <code>a</code> is connected to edge <code>b</code> and another edge <code>c</code>, for all edges <code>c</code>. We can map all combinations <code>b,c</code> by expanding <code>slice</code>, copying it and transposing it, and looking for intersections between the two.</p>
<pre><code>expanded_dim = [slice.shape[0],slice.shape[1],slice.shape[1]] # value [n,e,e]
# two copies of slice, expanded on different dimensions
expanded_slice = slice.unsqueeze(1).expand(expanded_dim) # dimension [n,e,e]
transpose_slice = slice.unsqueeze(2).expand(expanded_dim) # dimension [n,e,e]
G = torch.bitwise_and(expanded_slice,transpose_slice).int() # dimension [n,e,e]
</code></pre>
<p><code>G[i',j',k']</code> now equals 1 iff node <code>i'</code> is connected by edge <code>j'</code> to some other node, AND node <code>i'</code> is connected by edge <code>k'</code> to some other node. If <code>j' = k'</code> the value is 1 as long as one of the endpoints of that edge is <code>i'</code>.</p>
<p>Lastly, we reorder dimensions to get to your desired form.</p>
<pre><code>G = torch.permute(G,(1,2,0)) # dimension [e,e,n]
</code></pre>
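<p>As a sanity check, the same steps can be run against the question's loop version on a tiny hypothetical graph. Numpy is used here only as a stand-in so the sketch runs without torch installed; each torch op maps one-to-one (<code>unsqueeze</code>/<code>expand</code> ↔ broadcasting, <code>bitwise_and</code> ↔ <code>&amp;</code>, <code>permute</code> ↔ <code>transpose</code>):</p>

```python
import numpy as np

# Tiny undirected path graph: nodes 0-1 joined by edge 0, nodes 1-2 by edge 1.
n, e = 3, 2
G = np.zeros((n, n, e), dtype=np.int64)
G[0, 1, 0] = G[1, 0, 0] = 1
G[1, 2, 1] = G[2, 1, 1] = 1

# Vectorized version (slice_ avoids shadowing the builtin `slice`).
slice_ = G.max(axis=1)                           # [n, e]: node a touches edge b
vec = slice_[:, :, None] & slice_[:, None, :]    # [n, e, e]: both edges touch a
vec = vec.transpose(1, 2, 0)                     # [e, e, n]

# Reference: the question's nested loops.
ref = np.zeros((e, e, n), dtype=np.int64)
for i in range(n):
    for j in range(n):
        for k in range(e):
            if G[i, j, k] == 1:
                for l in range(n):
                    for m in range(e):
                        if G[j, l, m] == 1:
                            ref[k, m, j] = 1

print(np.array_equal(vec, ref))
```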
|
python|for-loop|optimization|graph|pytorch
| 2
|
375,575
| 71,630,995
|
How to boxplot different columns from a dataframe (y axis) vs groupby a range of hours (x axis) using plotly
|
<p>Good morning,</p>
<p>I'm trying to boxplot the 'columns' from 1 to 6 vs the 'ElapsedTime(hours)' column with the use of plotly library.</p>
<p>Here is my dataframe :</p>
<pre><code>+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| Date | Time | Column1 | Column2 | Column3 | Column4 | Column5 | Column6 | ElapsedTime(hours) |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 29/07/2021 | 06:48:37 | 0,011535 | 8,4021 | 0,00027 | 0,027806 | 8,431 | 0,000362 | 0 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 29/07/2021 | 06:59:37 | 0,013458 | 8,4421 | 0,000314 | 0,032214 | 8,4738 | 0,000416 | 0,183333333 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 29/07/2021 | 07:14:37 | 0,017793 | 8,4993 | 0,000384 | 0,038288 | 8,5372 | 0,000486 | 0,433333333 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 30/07/2021 | 08:12:50 | 0,018808 | 8,545 | 0,000414 | 0,042341 | 8,5891 | 0,000539 | 24,9702778 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 30/07/2021 | 08:42:50 | 0,025931 | 8,3627 | 0,000534 | 0,032379 | 8,3556 | 0,000557 | 25,9036111 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 30/07/2021 | 08:57:50 | 0,025164 | 8,5518 | 0,000505 | 0,041134 | 8,6516 | 0,000254 | 26,1536111 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 31/07/2021 | 05:45:28 | 0,026561 | 8,6266 | 0,000533 | 0,050387 | 8,6718 | 0,00065 | 46,9475 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 31/07/2021 | 05:55:28 | 0,027744 | 8,6455 | 0,000543 | 0,051511 | 8,6916 | 0,000664 | 47,11416667 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
| 31/07/2021 | 06:05:28 | 0,028854 | 8,485 | 0,000342 | 0,05693 | 8,6934 | 0,000695 | 47,28083333 |
+------------+----------+----------+---------+----------+----------+---------+----------+--------------------+
</code></pre>
<p>for now, i just know how to boxplot each column vs nothing using these lines of <strong>code</strong> :</p>
<pre><code>import warnings
import pandas as pd
import plotly.graph_objects as go
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category= UserWarning)
da = pd.DataFrame()
da['Date'] = ["29/07/2021", "29/07/2021", "29/07/2021", "30/07/2021", "30/07/2021", "30/07/2021", "31/07/2021", "31/07/2021", "31/07/2021"]
da['Time'] = ["06:48:37", "06:59:37", "07:14:37", "08:12:50", "08:42:50", "08:57:50", "05:45:28", "05:55:28", "06:05:28"]
da["Column1"] = [0.011534891, 0.013458399, 0.017792937, 0.018807581, 0.025931434, 0.025163517, 0.026561283, 0.027743659, 0.028854]
da["Column2"] = [8.4021, 8.4421, 8.4993, 8.545, 8.3627, 8.5518, 8.6266, 8.6455, 8.485]
da["Column3"] = [0.000270475, 0.000313769, 0.000383506, 0.000414331, 0.000533619, 0.000505081, 0.000533131, 0.000543031, 0.000342]
da["Column4"] = [0.027806399, 0.032213984, 0.038287754, 0.042340721, 0.032378571, 0.041134106, 0.050387029, 0.051511238, 0.05693]
da["Column5"] = [8.431, 8.4738, 8.5372, 8.5891, 8.3556, 8.6516, 8.6718, 8.6916, 8.6934]
da["Column6"] = [0.000362081, 0.000416463, 0.000486275, 0.000539244, 0.000556613, 0.000253831, 0.00064975, 0.000664063, 0.000695]
da["ElapsedTime(hours)"] = [0, 0.183333333, 0.433333333, 24.9702778, 25.9036111, 26.1536111, 46.9475, 47.11416667, 47.28083333]
fig = go.Figure()
fig.add_trace(
go.Box(y=da['Column1'], name='Column1'))
fig.add_trace(
go.Box(y=da['Column2'], name='Column2'))
fig.add_trace(
go.Box(y=da['Column3'], name='Column3'))
fig.add_trace(
go.Box(y=da['Column4'], name='Column4'))
fig.add_trace(
go.Box(y=da['Column5'], name='Column5'))
fig.add_trace(
go.Box(y=da['Column6'], name='Column6'))
fig.update_layout(legend=dict(
yanchor="top",
y=1.24,
xanchor="left",
x=0.15
))
from plotly import offline
offline.plot(fig)
</code></pre>
<p>Output :
<img src="https://i.stack.imgur.com/Gy0d7.png" alt="enter image description here" /></p>
<p>I can choose to show one column :
<img src="https://i.stack.imgur.com/Tet6b.png" alt="enter image description here" /></p>
<p><strong>What I want (if possible)</strong>: plot my columns 1 to 6 versus a range of ElapsedTime(hours). For example, if I choose a range of 10 hours, each boxplot should take all the values falling inside that range and plot them as one box.</p>
<p><strong>PS</strong>: if I add x=da['ElapsedTime(hours)'] inside the go.Box(), I will be plotting each value of columns 1 to 6 versus one value from the ElapsedTime column, and I don't want that; I want one box per ElapsedTime range.</p>
<p><strong>Extra</strong>: if possible, I want columns 1 to 6 in a dropdown button so that I can click and choose which column I want to see over the ElapsedTime range I chose.</p>
<p>Thank you for your time, and have a great day !</p>
<p><strong>EDIT :</strong>#################################################</p>
<p>I tried the lines below. The problem is that I get an error saying a DataFrame doesn't have a <code>name</code> attribute (<code>name=data.name</code>), and if I drop that argument, I get a plot that is not a Box. Do you have any idea how to overcome this problem?</p>
<pre><code>da["DateTime"] = pd.to_datetime(da.Date + " " + da.Time)
columns = [c for c in da.columns if c.startswith("Column")]
da.set_index("DateTime")[columns].resample("1D")
fig = go.Figure()
for start_datetime, data in da.set_index("DateTime")[columns].resample("1D"):
fig.add_trace(
go.Box(x=data.index, y=data.values, name=data.name))
fig.update_layout(legend=dict(
yanchor="top",
y=1.24,
xanchor="left",
x=0.15
))
fig.update_layout(boxmode='group')
from plotly import offline
offline.plot(fig)
</code></pre>
|
<p>Here are some suggestions.</p>
<ol>
<li><p>Merge the Date and Time columns into a DateTime column:</p>
<pre><code>import pandas as pd
da = pd.DataFrame()
da['Date'] = ["29/07/2021", "29/07/2021", "29/07/2021", "30/07/2021", "30/07/2021", "30/07/2021", "31/07/2021", "31/07/2021", "31/07/2021"]
da['Time'] = ["06:48:37", "06:59:37", "07:14:37", "08:12:50", "08:42:50", "08:57:50", "05:45:28", "05:55:28", "06:05:28"]
da["Column1"] = [0.011534891, 0.013458399, 0.017792937, 0.018807581, 0.025931434, 0.025163517, 0.026561283, 0.027743659, 0.028854]
da["Column2"] = [8.4021, 8.4421, 8.4993, 8.545, 8.3627, 8.5518, 8.6266, 8.6455, 8.485]
da["Column3"] = [0.000270475, 0.000313769, 0.000383506, 0.000414331, 0.000533619, 0.000505081, 0.000533131, 0.000543031, 0.000342]
da["Column4"] = [0.027806399, 0.032213984, 0.038287754, 0.042340721, 0.032378571, 0.041134106, 0.050387029, 0.051511238, 0.05693]
da["Column5"] = [8.431, 8.4738, 8.5372, 8.5891, 8.3556, 8.6516, 8.6718, 8.6916, 8.6934]
da["Column6"] = [0.000362081, 0.000416463, 0.000486275, 0.000539244, 0.000556613, 0.000253831, 0.00064975, 0.000664063, 0.000695]
da["ElapsedTime(hours)"] = [0, 0.183333333, 0.433333333, 24.9702778, 25.9036111, 26.1536111, 46.9475, 47.11416667, 47.28083333]
da["DateTime"] = pd.to_datetime(da.Date + " " + da.Time)
df = da # df is more natural for me ;)
</code></pre>
<p>I use the following to mark the interesting columns:</p>
<pre><code>columns = [c for c in df.columns if c.startswith("Column")]
</code></pre>
</li>
<li><p>Use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.aggregate.html" rel="nofollow noreferrer"><code>aggregate</code></a> method, to aggregate data over some time range.</p>
<p>For example to aggregate over one day, use</p>
<pre><code>df.set_index("DateTime")[columns].resample("1D")
</code></pre>
<p>The result is an object, that you can either run some aggregations on, e.g. compute the mean for each such sample:</p>
<pre><code>df.set_index("DateTime")[columns].resample("1D").mean()
</code></pre>
<p>If you want to leverage plotly's functionality to create the boxplot, I would use a loop though:</p>
<pre><code>for start_datetime, data in df.set_index("DateTime")[columns].resample("1D"):
print(start_datetime)
print(data)
print()
</code></pre>
<p>Instead of the <code>print</code> functions, use the plotly commands to create a box in the boxplot.</p>
</li>
</ol>
|
python|pandas|dataframe|plotly|boxplot
| 0
|
375,576
| 42,349,903
|
Combine layer chrominance with image luminance
|
<p>I have read this <a href="http://hi.cs.waseda.ac.jp/~iizuka/projects/colorization/data/colorization_sig2016.pdf" rel="nofollow noreferrer">Colorization paper</a> and it said:</p>
<blockquote>
<p>The output layer of the colorization network consists of a
convolutional layer with a Sigmoid transfer function that outputs the
chrominance of the input grayscale image.</p>
</blockquote>
<p>and in order to get the colored image they said:</p>
<blockquote>
<p>the computed chrominance is combined with the input intensity image to
produce the resulting color image.</p>
</blockquote>
<p>So I have implemented it and get the output layer with depth two, but how can I get the color image? How can I combine the greyscale image luminance values with the output layer of depth 2 (a*b colors) to get the final image?</p>
<p>I use tensorflow and python.</p>
|
<p>OK, I implemented it by using the <strong>skimage</strong> library to build a tensor of the image chrominance values and combine it with the luminance by the same method.</p>
|
python|tensorflow
| 0
|
375,577
| 42,247,104
|
How to create graphs of relative frequency from pandas dataframe
|
<p>I know that it's possible to create a histogram from a pandas dataframe column with matplotlib.pyplot using the code:</p>
<pre><code>df.plot.hist(y='Distance')
</code></pre>
<p>Which creates a graph like this:</p>
<p><a href="https://i.stack.imgur.com/rP4E2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rP4E2.png" alt="Graph one"></a></p>
<p>However what I'm looking for is a plot of relative frequency, expressed as a percentage of the total. I'd also like for the graph to have an overflow bin at 300 so that it looks something along the lines of:
<a href="https://i.stack.imgur.com/4YaXW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4YaXW.png" alt="Graph two"></a></p>
|
<p>Try this: </p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt

orders = [{'number': 1029,'brand':'XPTO','qty':50},
{'number': 3233,'brand':'ABCD','qty':50},
{'number': 5455,'brand':'XPTO','qty':50},
{'number': 1234,'brand':'ABCD','qty':50},
{'number': 7654,'brand':'TXWZ','qty':50},
{'number': 8765,'brand':'XPTO','qty':50},
{'number': 4354,'brand':'TXWZ','qty':50},
{'number': 9089,'brand':'XPTO','qty':50},
{'number': 1031,'brand':'XPTO','qty':50}]
orders_df = pd.DataFrame(orders)
series = orders_df['brand'].value_counts() / len(orders_df)
indx = [0,1,2]
plt.bar(indx, series*100)
plt.ylabel('%')
plt.title('Relative fequency')
plt.xticks(indx, series.index)
</code></pre>
<p><a href="https://i.stack.imgur.com/QJ3Ov.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QJ3Ov.png" alt="Relative frequency"></a></p>
|
python|python-3.x|pandas|matplotlib
| 3
|
375,578
| 42,449,469
|
Applying a Month End Trading Calendar to Yahoo API data
|
<p>This is my first post, and I am new to Python and Pandas. I have been working on piecing together the code below based on many questions and answers I have viewed on this website. My next challenge is how to apply a month end trading calendar to the code below so that the output consists of month end "Adj Close" values for the two ETFs listed "VTI and BND". The "100ma" 100 day moving average must still be calculated based on the previous 100 trading days.</p>
<p>@ryan sheftel appears to have something on this site that would work, but I can't seem to implement it with my code to give me what I want.</p>
<p><a href="https://stackoverflow.com/questions/33094297/create-trading-holiday-calendar-with-pandas">Create trading holiday calendar with Pandas</a></p>
<p>Code I have put together so far:</p>
<pre><code>import datetime as dt #set start and end dates for data we are using
import pandas as pd
import numpy as np
import pandas_datareader.data as web # how I grab data from Yahoo Finance API. Pandas is popular data analysis library.
start = dt.datetime(2007,1,1)
end = dt.datetime(2017,2,18)
vti = web.DataReader('vti', 'yahoo',start, end)# data frame, stock ticker symbol, where getting from, start time, end time
bnd = web.DataReader('bnd', 'yahoo', start, end)
vti["100ma"] = vti["Adj Close"].rolling(window=100).mean()
bnd["100ma"] = bnd["Adj Close"].rolling(window=100).mean()
# Below I create a DataFrame consisting of the adjusted closing price of these stocks, first by making a list of these objects and using the join method
stocks = pd.DataFrame({'VTI': vti["Adj Close"],
'VTI 100ma': vti["100ma"],
'BND': bnd["Adj Close"],
'BND 100ma': bnd["100ma"],
})
print (stocks.head())
stocks.to_csv('Stock ETFs.csv')
</code></pre>
|
<p>I'd use <code>asfreq</code> to sample down to business month</p>
<pre><code>import datetime as dt #set start and end dates for data we are using
import pandas as pd
import numpy as np
import pandas_datareader.data as web # how I grab data from Yahoo Finance API. Pandas is popular data analysis library.
start = dt.datetime(2007,1,1)
end = dt.datetime(2017,2,18)
ids = ['vti', 'bnd']
data = web.DataReader(ids, 'yahoo', start, end)
ac = data['Adj Close']
ac.join(ac.rolling(100).mean(), rsuffix=' 100ma').asfreq('BM')
bnd vti bnd 100ma vti 100ma
Date
2007-01-31 NaN 58.453726 NaN NaN
2007-02-28 NaN 57.504188 NaN NaN
2007-03-30 NaN 58.148760 NaN NaN
2007-04-30 54.632232 60.487535 NaN NaN
2007-05-31 54.202353 62.739991 NaN 59.207899
2007-06-29 54.033591 61.634027 NaN 60.057136
2007-07-31 54.531996 59.455505 NaN 60.902113
2007-08-31 55.340892 60.330213 54.335640 61.227386
2007-09-28 55.674840 62.650936 54.542452 61.363872
2007-10-31 56.186500 63.773849 54.942038 61.675567
</code></pre>
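<p>Since the Yahoo endpoint needs network access, the same down-sampling can be checked on synthetic daily data (the values here are made up purely for illustration):</p>

```python
import pandas as pd

# Synthetic daily series standing in for the Yahoo download.
idx = pd.date_range("2007-01-01", "2007-03-31", freq="D")
s = pd.Series(range(len(idx)), index=idx, name="Adj Close")

# Rolling mean computed over every day, then sampled at business month-end.
monthly = s.to_frame().join(s.rolling(10).mean().rename("10ma")).asfreq("BM")
print(monthly.index.tolist())
```

Newer pandas versions spell this frequency alias <code>"BME"</code>; <code>"BM"</code> still works but may emit a deprecation warning.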
|
python-3.x|pandas
| 0
|
375,579
| 42,543,602
|
How to use python to pivot the data in a table from many rows to only 4 rows
|
<p>I have data in a csv like this : </p>
<pre>
Month YEAR AZ-Phoenix CA-Los Angeles CA-San Diego CA-San Francisco CO-Denver DC-Washington
January 1987 59.33 54.67 46.61 50.20
February 1987 59.65 54.89 46.87 49.96 64.77
</pre>
<p>I want to convert this to 4 column csv instead of x columns like : </p>
<pre> Month YEAR State Values
January 1987 AZ-Phoenix
January 1987 CA-Los Angeles 59.33
January 1987 CA-San Diego 54.67
January 1987 CA-San Francisco 46.61
January 1987 CO-Denver 50.20..... so on
</pre>
<p>So far the code written works for only 1 column and can't be extrapolated to 2 columns. How do I keep Month and YEAR fixed while pivoting the state columns and their values into rows?</p>
<p>Code so far : </p>
<pre><code> df = df.set_index('YEAR').stack(dropna=False).reset_index()
df.columns = ['YEAR','A','B']
</code></pre>
<p>Can't I just add Month somewhere and achieve this?</p>
|
<p>You can simply add the columns you want to preserve to the index, stack, then reset the index. </p>
<pre><code>df.set_index(['Month','YEAR']).stack(dropna=False).reset_index()
</code></pre>
<p><strong>Demo</strong></p>
<pre><code>>>> df
Month YEAR AZ-Phoenix CA-Los Angeles CA-San Diego CA-San.1 \
0 January 1987 59.33 54.67 46.61 50.20 NaN NaN
1 February 1987 59.65 54.89 46.87 49.96 64.77 NaN
Francisco CO-Denver DC-Washington
0 NaN NaN NaN
1 NaN NaN NaN
>>> df.set_index(['Month','YEAR']).stack(dropna=False).reset_index()
Month YEAR level_2 0
0 January 1987 AZ-Phoenix 59.33
1 January 1987 CA-Los 54.67
2 January 1987 Angeles 46.61
3 January 1987 CA-San 50.20
4 January 1987 Diego NaN
5 January 1987 CA-San.1 NaN
6 January 1987 Francisco NaN
7 January 1987 CO-Denver NaN
8 January 1987 DC-Washington NaN
9 February 1987 AZ-Phoenix 59.65
10 February 1987 CA-Los 54.89
11 February 1987 Angeles 46.87
12 February 1987 CA-San 49.96
13 February 1987 Diego 64.77
14 February 1987 CA-San.1 NaN
15 February 1987 Francisco NaN
16 February 1987 CO-Denver NaN
17 February 1987 DC-Washington NaN
</code></pre>
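<p>A possible follow-up sketch, renaming the generated columns to the <code>State</code>/<code>Values</code> headers from the desired output (the two-row frame below is a cleaned-up stand-in for the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Month": ["January", "February"],
    "YEAR": [1987, 1987],
    "AZ-Phoenix": [59.33, 59.65],
    "CA-Los Angeles": [54.67, 54.89],
})

# Stack the state columns into rows, then give the new columns real names.
out = (df.set_index(["Month", "YEAR"])
         .stack(dropna=False)
         .reset_index()
         .rename(columns={"level_2": "State", 0: "Values"}))
print(out.columns.tolist())   # ['Month', 'YEAR', 'State', 'Values']
```

Depending on your pandas version, <code>stack(dropna=...)</code> may emit a deprecation warning; with no all-NaN combinations a plain <code>.stack()</code> behaves the same.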
|
python|csv|pandas|dataframe
| 3
|
375,580
| 42,255,729
|
Tensorflow Error: ValueError: Shapes must be equal rank, but are 2 and 1 From merging shape 1 with other shapes
|
<p>I am trying to use tensorflow for implementing a dcgan and have run into this error:</p>
<pre><code>ValueError: Shapes must be equal rank, but are 2 and 1
From merging shape 1 with other shapes. for 'generator/Reshape/packed' (op: 'Pack') with input shapes: [?,2048], [100,2048], [2048].
</code></pre>
<p>As far as iv gathered it indicates that my tensor shapes are different, but i cannot see what i need to change in order to fix this error. I believe the mistake hangs somewhere in between these methods:</p>
<p>First i create a placeholder in a method using:</p>
<pre><code>self.z = tf.placeholder(tf.float32, [None,self.z_dimension], name='z')
self.z_sum = tf.histogram_summary("z", self.z)
self.G = self.generator(self.z)
</code></pre>
<p>Then the last statement calls the generator method, this method uses reshape to change the tensor via:</p>
<pre><code> self.z_ = linear(z,self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True)
self.h0 = tf.reshape(self.z_,[-1, sample_H16, sample_W16,self.gen_dimension * 8])
h0 = tf.nn.relu(self.gen_batchnorm1(self.h0))
</code></pre>
<p>If it helps here is my linear method:</p>
<pre><code>def linear(input_, output_size, scope=None, stddev=0.02, bias_start=0.0, with_w=False):
shape = input_.get_shape().as_list()
with tf.variable_scope(scope or "Linear"):
matrix = tf.get_variable("Matrix", [shape[1], output_size], tf.float32,tf.random_normal_initializer(stddev=stddev))
bias = tf.get_variable("bias", [output_size],initializer=tf.constant_initializer(bias_start))
if with_w:
return tf.matmul(input_, matrix) + bias, matrix, bias
else:
return tf.matmul(input_, matrix) + bias
</code></pre>
<p>EDIT:</p>
<p>I also use these placeholders:</p>
<pre><code> self.inputs = tf.placeholder(tf.float32, shape=[self.batch_size] + image_dimension, name='real_images')
self.gen_inputs = tf.placeholder(tf.float32, shape=[self.sample_size] + image_dimension, name='sample_inputs')
inputs = self.inputs
sample_inputs = self.gen_inputs
</code></pre>
|
<p><code>linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True)</code> returns the tuple <code>(tf.matmul(input_, matrix) + bias, matrix, bias)</code>.</p>
<p>Therefore, <code>self.z_</code> is assigned the whole tuple, not a single tf tensor.</p>
<p>Just change <code>linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=True)</code> to <code>linear(z, self.gen_dimension * 8 * sample_H16 * sample_W16, 'gen_h0_lin', with_w=False)</code>.</p>
|
python|tensorflow|artificial-intelligence
| 6
|
375,581
| 42,400,962
|
how to filter rows that satisfy a regular expression via pandas
|
<p>I'm trying to figure out a way to to select only the rows that satisfy my regular expression via Pandas. My actual dataset, data.csv, has one column(the heading is not labeled) and millions of row. The first four rows look like:</p>
<pre><code>5;4Z13H;;L
5;346;4567;;O
5;342;4563;;P
5;3LPH14;4567;;O
</code></pre>
<p>and I wrote the following regular expression </p>
<pre><code>([1-9][A-Z](.*?);|[A-Z][A-Z](.*?);|[A-Z][1-9](.*?);)
</code></pre>
<p>which would identify <code>4Z13H;</code> from row 1 and <code>3LPH14;</code> from row 4. Basically I would like pandas to filter my data and select rows 1 and 4.
So my desired output would be </p>
<pre><code>5;4Z13H;;L
5;3LPH14;4567;;O
</code></pre>
<p>I would then like to save the subset of filter rows into a new csv, filteredData.csv. So far I only have this:</p>
<pre><code>import pandas as pd
import numpy as np
import sys
import re
sys.stdout=open("filteredData.csv","w")
def Process(filename, chunksize):
for chunk in pd.read_csv(filename, chunksize=chunksize):
df[0] = df[0].re.compile(r"([1-9][A-Z]|[A-Z][A-Z]|[A-Z][1-9])(.*?);")
sys.stdout.close()
if __name__ == "__main__":
Process('data.csv', 10 ** 4)
</code></pre>
<p>I'm still relatively new to python so the code above has some syntax issues(I'm still trying to figure out how to use pandas chunksize). However the main issue is filtering the rows by the regular expression. I'd greatly appreciate anyone's advice</p>
|
<p>One way is to read the csv as pandas dataframe and then use str.contains to create a mask column</p>
<pre><code>df['mask'] = df[0].str.contains(r'\d+[A-Z]+\d+') # 0 is the column name
df = df[df['mask']].drop('mask', axis = 1)
</code></pre>
<p>You get the desired dataframe, if you wish, you can reset index using df = df.reset_index()</p>
<pre><code> 0
0 5;4Z13H;;L
3 5;3LPH14;4567;;O
</code></pre>
<p>Second is to first read the csv and create an edit file with only the filtered rows and then read the filtered csv to create the dataframe</p>
<pre><code>import csv
import re

import pandas as pd

with open('data.csv', 'r') as f_in:
    with open('filteredData_edit.csv', 'w') as f_outfile:
        f_out = csv.writer(f_outfile)
        for line in f_in:
            line = line.strip()
            if re.search(r"\d+[A-Z]+\d+", line):
                f_out.writerow([line])

df = pd.read_csv('filteredData_edit.csv', header = None)
</code></pre>
<p>You get</p>
<pre><code> 0
0 5;4Z13H;;L
1 5;3LPH14;4567;;O
</code></pre>
<p>From my experience, I would prefer the second method as it would be more efficient to filter out the undesired rows before creating the dataframe.</p>
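<p>For the chunked processing the question asks about, both ideas combine into a few lines. In this sketch <code>io.StringIO</code> stands in for <code>data.csv</code> so it is self-contained; with real files you would pass the paths and open the output in append mode instead:</p>

```python
import io
import pandas as pd

# Stand-in for data.csv, using the four sample rows from the question.
raw = io.StringIO("5;4Z13H;;L\n5;346;4567;;O\n5;342;4563;;P\n5;3LPH14;4567;;O\n")

out = io.StringIO()  # stand-in for filteredData.csv
for chunk in pd.read_csv(raw, header=None, chunksize=2):
    # Keep only rows containing digits-letters-digits, e.g. 4Z13H or 3LPH14.
    kept = chunk[chunk[0].str.contains(r"\d+[A-Z]+\d+")]
    kept.to_csv(out, header=False, index=False)

print(out.getvalue())
```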
|
python|regex|pandas|nlp
| 3
|
375,582
| 42,406,724
|
numexpr: temporary variables or repeated sub-expressions?
|
<p>If the same sub-expression appears in multiple places within one <em>numexpr</em> expression, will it be recalculated multiple times (or is numexpr clever enough to detect this and reuse the result)? </p>
<p>Is there any way to declare temporary variables within a numexpr expression? This would have two aims: </p>
<ol>
<li>encourage numexpr to consider caching and re-using, rather than re-calculating, the result;</li>
<li>simplify the expression (making the source code easier to read and maintain).</li>
</ol>
<p>I am trying to calculate <em>f(g(x))</em> where <em>f</em> and <em>g</em> are themselves both complicated expressions (e.g. for pixel-based thematic classification, <em>f</em> is a nested decision tree involving multiple thresholds, <em>g</em> is a set of normalised difference ratios, and <em>x</em> is a multi-band raster image).</p>
|
<p>Yes, if a sub-expression is repeated within a numexpr expression, it will not be recalculated. </p>
<p>This can be verified by replacing <code>numexpr.evaluate(expr)</code> with <code>numexpr.disassemble(numexpr.NumExpr(expr))</code>.</p>
<p>For example, the expression <code>"where(x**2 > 0.5, 0, x**2 + 10)"</code> is compiled into something like:</p>
<pre><code>y = x*x
t = y>0.5
y = y+10
y[t] = 0
</code></pre>
<p>(Note the multiplication only appears once, not twice.)</p>
<p>For this reason, it is best if the entire computation can be input as a single numexpression. Avoid performing sub-calculations in python (assigning intermediate results or temporary variables into numpy arrays), as this will only increase memory usage and undermine numexpr's optimisations/speedups (which relate to performing this full sequence of computations in CPU-cache sized chunks to evade memory latency).</p>
<p>Nonetheless, more readable code can be formatted by using string substitution:</p>
<pre><code>f = """where({g} > 0.5,
0,
{g} + 10)"""
g = "x**2"
expr = f.format(g=g)
</code></pre>
|
python|numpy|optimization|refactoring|numexpr
| 3
|
375,583
| 42,570,498
|
Pandas conditions across multiple series
|
<p>Lets say I have some data like this:</p>
<pre><code>category = pd.Series(np.ones(4))
job1_days = pd.Series([1, 2, 1, 2])
job1_time = pd.Series([30, 35, 50, 10])
job2_days = pd.Series([1, 3, 1, 3])
job2_time = pd.Series([10, 40, 60, 10])
job3_days = pd.Series([1, 2, 1, 3])
job3_time = pd.Series([30, 15, 50, 15])
</code></pre>
<p>Each entry represents an individual (so 4 people total). <code>xxx_days</code> represents the number of days an individual did something and <code>xxx_time</code> represents the number of minutes spent doing that job on a single day</p>
<p>I want to assign a <code>2</code> to <code>category</code> for an individual, if <em>across all jobs</em> they spent at least 3 days of 20 minutes each. So for example, person 1 does not meet the criteria because they only spent 2 total days with at least 20 minutes (their job 2 day count does not count toward the total because time is < 20). Person 2 does meet the criteria as they spent 5 total days (jobs 1 and 2).</p>
<p>After replacement, <code>category</code> should look like this:
<code>[1, 2, 2, 1]</code></p>
<p>My current attempt to do this requires a for loop and manually indexing into each series and calculating the total days where time is greater than 20. However, this approach doesn't scale well to my actual dataset. I haven't included the code here as i'd like to approach it from a Pandas perspective instead</p>
<p>Whats the most efficient way to do this in Pandas? The thing that stumps me is checking conditions across multiple series and act accordingly after summation of days</p>
|
<p>Put <em>days</em> and <em>time</em> in two data frames, keeping the column positions in correspondence, then do the calculation in a vectorized way:</p>
<pre><code>import pandas as pd
time = pd.concat([job1_time, job2_time, job3_time], axis = 1)
days = pd.concat([job1_days, job2_days, job3_days], axis = 1)
((days * (time >= 20)).sum(1) >= 3) + 1
#0 1
#1 2
#2 2
#3 1
#dtype: int64
</code></pre>
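<p>Run end-to-end on the question's data, the result can be written straight back into <code>category</code>:</p>

```python
import numpy as np
import pandas as pd

category = pd.Series(np.ones(4))
job1_days = pd.Series([1, 2, 1, 2]); job1_time = pd.Series([30, 35, 50, 10])
job2_days = pd.Series([1, 3, 1, 3]); job2_time = pd.Series([10, 40, 60, 10])
job3_days = pd.Series([1, 2, 1, 3]); job3_time = pd.Series([30, 15, 50, 15])

time = pd.concat([job1_time, job2_time, job3_time], axis=1)
days = pd.concat([job1_days, job2_days, job3_days], axis=1)

# Days only count when that job's daily time is at least 20 minutes;
# 3+ qualifying days across all jobs maps True/False onto 2/1.
category = ((days * (time >= 20)).sum(axis=1) >= 3) + 1
print(category.tolist())   # [1, 2, 2, 1]
```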
|
python|pandas|series
| 1
|
375,584
| 42,579,731
|
Tensorflow Convnet Strange Softmax Output
|
<p>The output of my convnet has been very unusual. When I printed out the output vector of the forward propagation results, it was a constant [0, 0, 0, 1] for an entire label in the dataset. I suspect there's an error in my construction.</p>
<pre><code>import os
import sys
import tensorflow as tf
import Input
import os, re
"""
This is a model based on the CIFAR10 Model.
The general structure of the program and a few functions are
borrowed from Tensorflow example of the CIFAR10 model.
https://github.com/tensorflow/tensorflow/tree/r0.7/tensorflow/models/image/cifar10/
As quoted:
"If you are now interested in developing and training your own image classification
system, we recommend forking this tutorial and replacing components to address your
image classification problem."
Source:
https://www.tensorflow.org/tutorials/deep_cnn/
"""
FLAGS = tf.app.flags.FLAGS
TOWER_NAME = 'tower'
tf.app.flags.DEFINE_integer('batch_size', 1, "hello")
tf.app.flags.DEFINE_string('data_dir', 'data', "hello")
def _activation_summary(x):
with tf.device('/cpu:0'):
tensor_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', x.op.name)
tf.histogram_summary(tensor_name + '/activations', x)
tf.scalar_summary(tensor_name + '/sparsity', tf.nn.zero_fraction(x))
def inputs():
if not FLAGS.data_dir:
raise ValueError('Source Data Missing')
data_dir = FLAGS.data_dir
images, labels = Input.inputs(data_dir = data_dir, batch_size = FLAGS.batch_size)
return images, labels
def eval_inputs():
data_dir = FLAGS.data_dir
    images, labels = Input.eval_inputs(data_dir = data_dir, batch_size = 1)
    return images, labels

def weight_variable(shape):
    with tf.device('/gpu:0'):
        initial = tf.random_normal(shape, stddev=0.1)
        return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape = shape)
    return tf.Variable(initial)

def conv(images, W):
    with tf.device('/gpu:0'):
        return tf.nn.conv2d(images, W, strides = [1, 1, 1, 1], padding = 'SAME')

def forward_propagation(images):
    with tf.variable_scope('conv1') as scope:
        conv1_feature = weight_variable([20, 20, 3, 20])
        conv1_bias = bias_variable([20])
        image_matrix = tf.reshape(images, [-1, 1686, 1686, 3])
        conv1_result = tf.nn.relu(conv(image_matrix, conv1_feature) + conv1_bias)
        _activation_summary(conv1_result)
    with tf.variable_scope('conv2') as scope:
        conv2_feature = weight_variable([10, 10, 20, 40])
        conv2_bias = bias_variable([40])
        conv2_result = tf.nn.relu(conv(conv1_result, conv2_feature) + conv2_bias)
        _activation_summary(conv2_result)
    conv2_pool = tf.nn.max_pool(conv2_result, ksize = [1, 281, 281, 1], strides = [1, 281, 281, 1], padding = 'SAME')
    with tf.variable_scope('conv3') as scope:
        conv3_feature = weight_variable([5, 5, 40, 80])
        conv3_bias = bias_variable([80])
        conv3_result = tf.nn.relu(conv(conv2_pool, conv3_feature) + conv3_bias)
        _activation_summary(conv3_result)
    conv3_pool = tf.nn.max_pool(conv3_result, ksize = [1, 2, 2, 1], strides = [1, 2, 2, 1], padding = 'SAME')
    with tf.variable_scope('local3') as scope:
        perceptron1_weight = weight_variable([3 * 3 * 80, 10])
        perceptron1_bias = bias_variable([10])
        flatten_dense_connect = tf.reshape(conv3_pool, [1, -1])
        compute_perceptron1_layer = tf.nn.relu(tf.matmul(flatten_dense_connect, perceptron1_weight) + perceptron1_bias)
        _activation_summary(compute_perceptron1_layer)
    with tf.variable_scope('softmax_connect') as scope:
        perceptron3_weight = weight_variable([10, 4])
        perceptron3_bias = bias_variable([4])
        y_conv = tf.nn.softmax(tf.matmul(compute_perceptron1_layer, perceptron3_weight) + perceptron3_bias)
        _activation_summary(y_conv)
    return y_conv

def error(forward_propagation_results, labels):
    with tf.device('/cpu:0'):
        labels = tf.cast(labels, tf.int64)
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(forward_propagation_results, labels)
        cost = tf.reduce_mean(cross_entropy)
        tf.add_to_collection('losses', cost)
        tf.scalar_summary('LOSS', cost)
    return cost

def train(cost):
    with tf.device('/gpu:0'):
        train_loss = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)
    return train_loss
</code></pre>
|
<p>The main issue is that softmax is applied twice.</p>
<p><code>forward_propagation</code> already applies <code>tf.nn.softmax</code> to its output, and that output is then fed to <code>tf.nn.sparse_softmax_cross_entropy_with_logits</code>, which applies its own softmax internally. The loss function expects raw logits, so passing already-softmaxed values distorts the outputs; <code>forward_propagation</code> should return the unnormalized logits instead.</p>
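<p>The effect of the double softmax can be illustrated with a small NumPy sketch (the logits here are made up for illustration):</p>

```python
import numpy as np

def softmax(x):
    # shift by the max for numerical stability
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # hypothetical raw network outputs
once = softmax(logits)               # what the loss computes internally from logits
twice = softmax(once)                # what happens when the net already applied softmax
print(once)
print(twice)
```

<p>The twice-softmaxed distribution is noticeably flatter than the single softmax, which weakens the training signal.</p>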
|
python|machine-learning|tensorflow|neural-network
| 0
|
375,585
| 42,245,818
|
Calculate percentages for subgroups in pandas dataframe
|
<p>I have a dataframe df:</p>
<pre><code> VID SFID SFReps
0 0000F0DD 000C5AF6 9
1 0000F0DD 000E701F 16
2 0000F0DD 00481C04 1
3 0000F0DD 004DCD04 1
4 0000F0DD 006CD213 1
5 0000F0DD 00889D31 9
6 0000AAAA 00F8733A 4
7 0000AAAA 00FDD591 1
8 0000AAAA 01243458 4
9 0000AAAA 01292867 16
10 0000AAAA 0131445A 9
11 0000AAAA 013CB69F 1
</code></pre>
<p>I want to calculate the percentage that each SFReps represents for each VID group. </p>
<p>So the result should be something like:</p>
<pre><code> VID SFID SFReps SFPercent
0 0000F0DD 000C5AF6 9 0.24
1 0000F0DD 000E701F 16 0.43
2 0000F0DD 00481C04 1 0.03
3 0000F0DD 004DCD04 1 0.03
4 0000F0DD 006CD213 1 0.03
5 0000F0DD 00889D31 9 0.24
6 0000AAAA 00F8733A 4 0.11
7 0000AAAA 00FDD591 1 0.03
8 0000AAAA 01243458 4 0.11
9 0000AAAA 01292867 16 0.46
10 0000AAAA 0131445A 9 0.26
11 0000AAAA 013CB69F 1 0.03
</code></pre>
<p>I know I can group each VID values using <code>groupby</code> but after that I'm stumped. </p>
<p>Looping through each row is an option, but I know there is a better way to do this.</p>
|
<p>You can divide by a new <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>transform</code></a> with <code>sum</code>, which is aligned to the same index as the original <code>df</code>:</p>
<pre><code>print (df.groupby('VID')['SFReps'].transform('sum'))
0 37
1 37
2 37
3 37
4 37
5 37
6 35
7 35
8 35
9 35
10 35
11 35
Name: SFReps, dtype: int64
df['SFPercent'] = df.SFReps / df.groupby('VID')['SFReps'].transform('sum')
print (df)
VID SFID SFReps SFPercent
0 0000F0DD 000C5AF6 9 0.243243
1 0000F0DD 000E701F 16 0.432432
2 0000F0DD 00481C04 1 0.027027
3 0000F0DD 004DCD04 1 0.027027
4 0000F0DD 006CD213 1 0.027027
5 0000F0DD 00889D31 9 0.243243
6 0000AAAA 00F8733A 4 0.114286
7 0000AAAA 00FDD591 1 0.028571
8 0000AAAA 01243458 4 0.114286
9 0000AAAA 01292867 16 0.457143
10 0000AAAA 0131445A 9 0.257143
11 0000AAAA 013CB69F 1 0.028571
</code></pre>
<hr>
<pre><code>df['SFPercent'] = df.SFReps.div(df.groupby('VID')['SFReps'].transform('sum'))
print (df)
VID SFID SFReps SFPercent
0 0000F0DD 000C5AF6 9 0.243243
1 0000F0DD 000E701F 16 0.432432
2 0000F0DD 00481C04 1 0.027027
3 0000F0DD 004DCD04 1 0.027027
4 0000F0DD 006CD213 1 0.027027
5 0000F0DD 00889D31 9 0.243243
6 0000AAAA 00F8733A 4 0.114286
7 0000AAAA 00FDD591 1 0.028571
8 0000AAAA 01243458 4 0.114286
9 0000AAAA 01292867 16 0.457143
10 0000AAAA 0131445A 9 0.257143
11 0000AAAA 013CB69F 1 0.028571
</code></pre>
<p>Last if necessary add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.round.html" rel="nofollow noreferrer"><code>round</code></a>:</p>
<pre><code>df['SFPercent'] = df.SFReps.div(df.groupby('VID')['SFReps'].transform('sum')).round(2)
print (df)
VID SFID SFReps SFPercent
0 0000F0DD 000C5AF6 9 0.24
1 0000F0DD 000E701F 16 0.43
2 0000F0DD 00481C04 1 0.03
3 0000F0DD 004DCD04 1 0.03
4 0000F0DD 006CD213 1 0.03
5 0000F0DD 00889D31 9 0.24
6 0000AAAA 00F8733A 4 0.11
7 0000AAAA 00FDD591 1 0.03
8 0000AAAA 01243458 4 0.11
9 0000AAAA 01292867 16 0.46
10 0000AAAA 0131445A 9 0.26
11 0000AAAA 013CB69F 1 0.03
</code></pre>
|
python|pandas|dataframe
| 4
|
375,586
| 42,136,290
|
Counting the occurrence of one dataframe column as a substring in another?
|
<p>I'm new to python and have found answers for counting hardcoded substrings in a df column but am unable to find an answer when using another df column as input. Is this possible with pandas? </p>
<p>It's quite messy but essentially my dataframe is:</p>
<pre><code>ID Info
3457 <type1><stats></id>3457<type2></id>3457<type2></id>45
234 <type2><stats></id>234
4555 <type2><stats></id>604555<type1></id>4555<type2></id>4555
2378 <stats></id>555
</code></pre>
<p>I've managed to count the occurrences of specific strings e.g</p>
<pre><code>df['Type1_Count']=df['Info'].apply((lambda string: string.count("<type1>")))
df['Type2_Count']=df['Info'].apply((lambda string: string.count("<type2>")))
</code></pre>
<p>However, I also need to count the occurrences of the IDs from the first column, and since these can match falsely, it really needs to be a count of the string "/id>" plus the <strong>ID</strong> column value. </p>
<p>Hope this makes sense, appreciate any help.</p>
|
<p>You can try one of these:</p>
<pre><code>df = pd.DataFrame({'name': ['bernard', 'Samy', 'yyy'], 'digit': [2, 3, 3], 'SearchID': ['be', 'xx', 'Sam']})
print df

# count each SearchID across the whole name column
for ID in df['SearchID']:
    print ID, '\n', df.name.str.count(ID)

# or join all IDs into one regex alternation and count matches per row
Searchstr = df['SearchID'].str.cat(sep='|')
print df.name.str.count(Searchstr)

# or count each row's own SearchID inside that row's name
print df.apply(lambda x: x['name'].count(x['SearchID']), axis=1)
</code></pre>
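<p>For the second part of the question — counting <code>"/id>"</code> followed by each row's own <strong>ID</strong> — a row-wise <code>apply</code> works (the sample data below is made up from the question; Python 3 syntax):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ID': [3457, 234],
    'Info': ['<type1></id>3457<type2></id>3457</id>45',
             '<type2></id>234'],
})

# count occurrences of "/id>" immediately followed by the row's own ID,
# so "</id>45" does not falsely match ID 3457
df['ID_Count'] = df.apply(lambda r: r['Info'].count('/id>' + str(r['ID'])), axis=1)
print(df)
```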
|
python|python-2.7|pandas|substring
| 0
|
375,587
| 42,509,878
|
what is the difference between sampled_softmax_loss and nce_loss in tensorflow?
|
<p>I notice there are two functions for negative sampling in TensorFlow to compute the loss (<strong>sampled_softmax_loss</strong> and <strong>nce_loss</strong>). The parameters of these two functions are similar, but I really want to know what the difference is between the two.</p>
|
<p>Sampled softmax selects a sample of the given size and computes the softmax loss over just that sample. The main objective is to make the sampled result approximate the true softmax, so the algorithm concentrates on how those samples are selected from the given distribution.
NCE loss, on the other hand, draws noise samples and tries to mimic the true softmax: it takes only the one true class and K noise classes. </p>
|
tensorflow|sampling
| 3
|
375,588
| 42,248,341
|
How to count and sum entries per each group?
|
<p>This is the dataframe:</p>
<pre><code>GROUP TIME EVAL
AAA 20 0
AAA 22 0
AAA 21 1
AAA 20 0
BBB 20 0
</code></pre>
<p>I want to see how many entries belong to each grouping and how many entries have <code>EVAL</code> equal to 1 in each grouping. I have almost finished the code, but I am not sure how to count entries per group. It seems to look for an existing column <code>TOTAL_NUM</code>, while I want to create it.</p>
<pre><code>final = df.groupby(['GROUP']).agg({'TIME':'mean','EVAL':'sum','TOTAL_NUM':'count'}).reset_index()
</code></pre>
|
<p>Using the <code>Time</code> column itself, we can calculate both number of records and mean time for each group. This can be achieved by sending a list ['mean','count'] for the aggregation. we could find the sum of <code>Eval</code> for each group as well.</p>
<pre><code> print(data.groupby(['Group']).agg({'Time':['mean','count'],'Eval' : 'sum'}).reset_index())
Group Eval Time
sum mean count
0 AAA 1 20.75 4
1 BBB 0 20.00 1
</code></pre>
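<p>On pandas 0.25+, named aggregation gives flat column names directly, avoiding the <code>MultiIndex</code> columns above (a sketch using the question's sample values; the output column names are my own choices):</p>

```python
import pandas as pd

df = pd.DataFrame({'Group': ['AAA', 'AAA', 'AAA', 'AAA', 'BBB'],
                   'Time':  [20, 22, 21, 20, 20],
                   'Eval':  [0, 0, 1, 0, 0]})

out = (df.groupby('Group')
         .agg(Time_mean=('Time', 'mean'),   # mean time per group
              Total_num=('Time', 'count'),  # number of entries per group
              Eval_sum=('Eval', 'sum'))     # how many rows have Eval == 1
         .reset_index())
print(out)
```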
|
python|pandas
| 1
|
375,589
| 42,357,499
|
Scatter plot of Multiindex GroupBy()
|
<p>I'm trying to make a scatter plot of a GroupBy() with Multiindex (<a href="http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-with-multiindex" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/groupby.html#groupby-with-multiindex</a>). That is, I want to plot one of the labels on the x-axis, another label on the y-axis, and the mean() as the size of each point.</p>
<p><code>df['RMSD'].groupby([df['Sigma'],df['Epsilon']]).mean()</code> returns:</p>
<pre><code>Sigma_ang Epsilon_K
3.4 30 0.647000
40 0.602071
50 0.619786
3.6 30 0.646538
40 0.591833
50 0.607769
3.8 30 0.616833
40 0.590714
50 0.578364
Name: RMSD, dtype: float64
</code></pre>
<p>And I'd like to to plot something like: <code>plt.scatter(x=Sigma, y=Epsilon, s=RMSD)</code></p>
<p>What's the best way to do this? I'm having trouble getting the proper Sigma and Epsilon values for each RMSD value.</p>
|
<p>+1 to Vaishali Garg. Based on his comment, the following works:</p>
<pre><code>df_mean = df['RMSD'].groupby([df['Sigma'],df['Epsilon']]).mean().reset_index()
plt.scatter(df_mean['Sigma'], df_mean['Epsilon'], s=100.*df_mean['RMSD'])
</code></pre>
|
pandas
| 1
|
375,590
| 42,215,933
|
Apply 'wrap_text' to all cells using openpyxl
|
<p>I have a Pandas dataframe that I am writing out to an XLSX using openpyxl. Many of the cells in the spreadsheet contain long sentences, and i want to set 'wrap_text' on all the contents of the sheet (i.e. every cell).</p>
<p>Is there a way to do this? I have seen openpyxl has an 'Alignment' option for 'wrap_text', but I cannot see how to apply this to all cells.</p>
<p>Edit:</p>
<p>Thanks to feedback, the following does the trick. Note - copy due to styles being immutable.</p>
<pre><code>for row in ws.iter_rows():
    for cell in row:
        cell.alignment = cell.alignment.copy(wrapText=True)
</code></pre>
|
<p>I have been using openpyxl>=2.5.6. Let us say we want to wrap text for cell A1, then we can use the below code.</p>
<pre><code>from openpyxl.styles import Alignment
ws['A1'].alignment = Alignment(wrap_text=True)
</code></pre>
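<p>To cover every populated cell, the same <code>Alignment</code> can be assigned in a loop over the worksheet (a minimal sketch; on openpyxl 2.6+ styles are assigned directly rather than copied):</p>

```python
from openpyxl import Workbook
from openpyxl.styles import Alignment

wb = Workbook()
ws = wb.active
ws['A1'] = 'A long sentence that should wrap inside its cell.'
ws['B2'] = 'Another long sentence that should also wrap.'

# assign wrap_text to every cell in the used range
for row in ws.iter_rows():
    for cell in row:
        cell.alignment = Alignment(wrap_text=True)
```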
|
python|pandas|openpyxl
| 33
|
375,591
| 42,273,078
|
Python Function does not work
|
<p>I have got the following df:</p>
<pre><code>df = pd.DataFrame(columns=['mbs','Wholesale Data Usage'], index=['x','y','z'])
df.loc['x'] = pd.Series({'mbs':32, 'Wholesale Data Usage':36})
df.loc['y'] = pd.Series({'mbs':64, 'Wholesale Data Usage':62})
df.loc['z'] = pd.Series({'mbs':256, 'Wholesale Data Usage':277})
</code></pre>
<p>Moreover I have defined the following function:</p>
<pre><code>def calculate_costs(row):
    mbs = row.loc['mbs']
    wdu = row.loc['Wholesale Data Usage']
    ac = 0
    if wdu >= mbs:
        if mbs == 32 | mbs == 64:
            ac = (wdu - mbs) * 0.05
        elif mbs == 128 | mbs == 256:
            ac = (wdu - mbs) * 0.036
        elif mbs == 512 | mbs == 1024:
            ac = (wdu - mbs) * 0.018
    return ac
</code></pre>
<p>For some reason if I apply the function to my df all of the row values are 0:</p>
<pre><code>df['Additional Charge'] = df.apply(lambda r: calculate_costs(r), axis=1)
</code></pre>
<p>Could you please advise what I am doing wrong?</p>
|
<p>I am not sure if this is what you want, but the result is no longer zero. (I changed the <code>|</code> operator to <code>or</code>.)</p>
<pre><code>def calculate_costs(row):
    mbs = row.loc['mbs']
    wdu = row.loc['Wholesale Data Usage']
    ac = 0
    if wdu >= mbs:
        if mbs == 32 or mbs == 64:
            ac = (wdu - mbs) * 0.05
        elif mbs == 128 or mbs == 256:
            ac = (wdu - mbs) * 0.036
        elif mbs == 512 or mbs == 1024:
            ac = (wdu - mbs) * 0.018
    return ac
</code></pre>
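<p>Why <code>|</code> fails here: it binds more tightly than <code>==</code>, so <code>mbs == 32 | mbs == 64</code> parses as the chained comparison <code>mbs == (32 | mbs) == 64</code>, which is <code>False</code> for every value in the data, leaving <code>ac</code> at 0. A quick check:</p>

```python
mbs = 32

wrong = mbs == 32 | mbs == 64           # parsed as mbs == (32 | mbs) == 64
right_or = mbs == 32 or mbs == 64       # boolean `or`, as in the fixed answer
right_bits = (mbs == 32) | (mbs == 64)  # parenthesised bitwise version also works

print(wrong, right_or, right_bits)
```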
|
python|function|pandas
| 1
|
375,592
| 42,514,444
|
How do you Merge 2 Series in Pandas
|
<p>I have the following:</p>
<pre><code>s1 = pd.Series([1, 2], index=['A', 'B'])
s2 = pd.Series([3, 4], index=['C', 'D'])
</code></pre>
<p>I want to combine <code>s1</code> and <code>s2</code> to create <code>s3</code> which is:</p>
<pre><code>s3 = pd.Series([1, 2, 3, 4], index=['A', 'B', 'C', 'D'])
</code></pre>
<p>NB: There is no index overlap</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="noreferrer"><code>concat()</code></a>, which simply stacks the two Series one after the other (with no index overlap there is nothing to join):</p>
<pre><code>pd.concat([s1, s2])
</code></pre>
<p>result:</p>
<pre><code>A 1
B 2
C 3
D 4
dtype: int64
</code></pre>
|
python|pandas|series
| 5
|
375,593
| 69,897,012
|
how to stack two columns
|
<p>I have a df as this:</p>
<pre><code> C CF
NO FROMNODENO TONODENO
1 1 2 582.551074 0
2 1 809.018213 0
</code></pre>
<p>and I would like to obtain this:</p>
<pre><code> new value
NO FROMNODENO TONODENO new index
1 1 2 C 582.551074
2 1 C 809.018213
1 2 CF 0.000000
2 1 CF 0.000000
</code></pre>
<p>Note that there could be more columns at the beginning and some of them should not change. How to do it? And how to set the name of the new index the values of which come from the labels of the stack indices?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>DataFrame.rename_axis</code></a> to rename the columns axis, then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>DataFrame.stack</code></a>, and convert the resulting Series to a one-column <code>DataFrame</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>Series.to_frame</code></a>:</p>
<pre><code>df = df.rename_axis('new index', axis=1).stack().to_frame('new value')
print (df)
new value
NO FROMNODENO TONODENO new index
1 1 2 C 582.551074
CF 0.000000
2 1 C 809.018213
CF 0.000000
</code></pre>
|
pandas|stack|pivot-table
| 2
|
375,594
| 69,886,734
|
Pandas replacing the entries and NaNs with the average of first two entries
|
<p>This is a very strange dataset that I do not know how to preprocess, as in the following example:</p>
<pre><code>Year, ID, feature1, feature2, target1
2008, 1, 10, 20, 5
2008, 1, 12, 25, 6
2008, 1, NaN, NaN, 4
2008, 1, NaN, NaN, 7
2008, 1, NaN, NaN, 3
2008, 1, NaN, NaN, 5
2008, 2, 22, 16, 7
2008, 2, 24, 14, 3
2008, 2, NaN, NaN, 5
2008, 2, NaN, NaN, 6
2008, 2, NaN, NaN, 9
2008, 3, 12, 15, 6
2008, 3, NaN, NaN, 1
....
</code></pre>
<p>I would like to replace the first two non-NaN entries with their average, AND also fill the NaNs with that same average, for columns <code>feature1</code> and <code>feature2</code>. If a group has only one entry, like <code>ID == 3</code>, I will just ffill.</p>
<p>Example output:</p>
<pre><code>Year, ID, feature1, feature2, target1
2008, 1, 11, 22.5, 5
2008, 1, 11, 22.5, 6
2008, 1, 11, 22.5, 4
2008, 1, 11, 22.5, 7
2008, 1, 11, 22.5, 3
2008, 1, 11, 22.5, 5
2008, 2, 23, 15, 7
2008, 2, 23, 15, 3
2008, 2, 23, 15, 5
2008, 2, 23, 15, 6
2008, 2, 23, 15, 9
2008, 3, 12, 15, 6
2008, 3, 12, 15, 1
....
</code></pre>
<p>Is there a way I can do that?</p>
|
<p>Try <code>transform</code> with <code>'mean'</code>:</p>
<pre><code>g = df.groupby(['Year','ID'])
df['feature1'] = g['feature1'].transform('mean')
df['feature2'] = g['feature2'].transform('mean')
</code></pre>
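<p>A runnable check on a slice of the question's data (column names assumed as shown there). Since <code>'mean'</code> skips NaN, the transform both averages the leading entries and fills the gaps in one step:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Year': [2008] * 6,
                   'ID':   [1, 1, 1, 1, 3, 3],
                   'feature1': [10, 12, np.nan, np.nan, 12, np.nan]})

g = df.groupby(['Year', 'ID'])
df['feature1'] = g['feature1'].transform('mean')  # NaNs are ignored by 'mean'
print(df)
```

<p>Every row of group <code>(2008, 1)</code> becomes 11.0 (the mean of 10 and 12), and the single-entry group <code>(2008, 3)</code> keeps 12.0, which also covers the ffill case.</p>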
|
python|pandas
| 1
|
375,595
| 69,757,795
|
How can I use an if statement on for-loop output from Excel data
|
<p>I am new to Pandas. I want to apply an if condition to the loop output printed from this Excel data:</p>
<pre><code>for i in range(0,10,3):
    line = df.loc[i].to_numpy()
    print(line[0], line[1], line[2], line[3], line[4], line[5], line[6], line[7], line[8])
</code></pre>
<p>Output:</p>
<pre><code>Year N1 N2 N3 N4 N5 N6 N7 N8
77 13 23 26 31 35 43 58 88
80 3 13 16 23 24 35 78 99
83 2 29 10 14 22 44 66 90
</code></pre>
<p>For example, only print the row if <code>Year</code> has a value of 58.</p>
|
<p>The idea when working with pandas is not to use for loops to go through the rows, nor to convert the rows to NumPy arrays. Rather, use pandas functionality to do so. In this case, we can get only the rows where Year is 58 by doing:</p>
<pre><code>df[df.Year == 58]
</code></pre>
<p>or</p>
<pre><code>df.query('Year == 58')
</code></pre>
<p>If Year was part of the index, you should instead do:</p>
<pre><code>df.loc[58]
</code></pre>
<p>That will return the row with the Year equal to what you want. To print it you have two possibilities: <code>print(df)</code> or <code>display(df)</code>. I recommend <code>display</code> because the formatting is prettier if you are working in a notebook.</p>
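<p>A minimal runnable sketch with made-up rows (the sample years above don't include 58, so 80 is filtered here):</p>

```python
import pandas as pd

df = pd.DataFrame({'Year': [77, 80, 83],
                   'N1':   [13, 3, 2],
                   'N2':   [23, 13, 29]})

hit = df[df.Year == 80]        # boolean-mask version
same = df.query('Year == 80')  # query-string version, identical result
print(hit)
```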
|
python|pandas|numpy
| 1
|
375,596
| 69,846,953
|
I'm trying to implement np.maximum.outer in Python 3.x but I'm getting this error: NotImplementedError
|
<p>I have a matrix like this:</p>
<pre><code>RCA = pd.DataFrame(
data=[
(1,0,0,0),
(1,1,1,0),
(0,0,1,0),
(0,1,0,1),
(1,0,1,0)],
columns=['ct1','ct2','ct3','ct4'],
index=['ind_1','ind_2','ind_3','ind_4','ind_5'])
</code></pre>
<p>I'm trying to calculate:</p>
<pre><code>norms = RCA.sum()
norm = np.maximum.outer(norms, norms)
</code></pre>
<p>And I'm getting this error:</p>
<pre><code>---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-9-4fd04a55ad8c> in <module>
4
5 norms = RCA.sum()
----> 6 norm = np.maximum.outer(norms, norms)
7 proximity = RCA.T.dot(RCA).div(norm)
8
~/opt/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/series.py in __array_ufunc__(self, ufunc, method, *inputs, **kwargs)
746 return None
747 else:
--> 748 return construct_return(result)
749
750 def __array__(self, dtype=None) -> np.ndarray:
~/opt/anaconda3/envs/py37/lib/python3.7/site-packages/pandas/core/series.py in construct_return(result)
735 if method == "outer":
736 # GH#27198
--> 737 raise NotImplementedError
738 return result
739 return self._constructor(result, index=index, name=name, copy=False)
NotImplementedError:
</code></pre>
<p>This works perfectly in Python 2.7, but I need to run it in Python 3.x.</p>
<p>I need to find a way around this issue. Thanks a lot.</p>
|
<pre><code>In [181]: RCA
Out[181]:
ct1 ct2 ct3 ct4
ind_1 1 0 0 0
ind_2 1 1 1 0
ind_3 0 0 1 0
ind_4 0 1 0 1
ind_5 1 0 1 0
In [182]: norms = RCA.sum()
In [183]: norms
Out[183]:
ct1 3
ct2 2
ct3 3
ct4 1
dtype: int64
In [184]: np.maximum.outer(norms,norms)
Traceback (most recent call last):
File "<ipython-input-184-d24a173874f6>", line 1, in <module>
np.maximum.outer(norms,norms)
File "/usr/local/lib/python3.8/dist-packages/pandas/core/generic.py", line 2032, in __array_ufunc__
return arraylike.array_ufunc(self, ufunc, method, *inputs, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/pandas/core/arraylike.py", line 381, in array_ufunc
result = reconstruct(result)
File "/usr/local/lib/python3.8/dist-packages/pandas/core/arraylike.py", line 334, in reconstruct
raise NotImplementedError
NotImplementedError
</code></pre>
<p>Sometimes passing a dataframe (or Series) to a numpy function works ok, but apparently here we need to explicitly use the array values:</p>
<pre><code>In [185]: norms.values
Out[185]: array([3, 2, 3, 1])
In [186]: np.maximum.outer(norms.values,norms.values)
Out[186]:
array([[3, 3, 3, 3],
[3, 2, 3, 2],
[3, 3, 3, 3],
[3, 2, 3, 1]])
</code></pre>
<p>Looking at the traceback, <code>pandas</code> adapts <code>ufunc</code>s to its own uses. <code>np.maximum(norms,norms)</code> works, but pandas has not adapted the <code>outer</code> method. [186] is pure <code>numpy</code>, returning an array.</p>
<p>Plain <code>np.maximum</code> returns a Series:</p>
<pre><code>In [192]: np.maximum(norms,norms)
Out[192]:
ct1 3
ct2 2
ct3 3
ct4 1
dtype: int64
</code></pre>
<p><code>outer</code> returns a 2d array, which in <code>pandas</code> terms would be a dataframe, not a Series. That could explain why pandas does not implement <code>outer</code>.</p>
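<p>Since the question's next line divides <code>RCA.T.dot(RCA)</code> by <code>norm</code>, the labels matter downstream; the plain array from [186] can be wrapped back into a labelled DataFrame:</p>

```python
import numpy as np
import pandas as pd

norms = pd.Series([3, 2, 3, 1], index=['ct1', 'ct2', 'ct3', 'ct4'])

# compute on the underlying array, then restore index and columns
norm = pd.DataFrame(np.maximum.outer(norms.values, norms.values),
                    index=norms.index, columns=norms.index)
print(norm)
```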
|
python|python-3.x|numpy|matrix
| 1
|
375,597
| 69,815,642
|
Add a suffix number after each iteration when writing pandas data frame to excel file
|
<p>I'm performing calculations in a double for loop over a unique list of products and a unique list of customers. I want to write out each product/customer combo's pandas data frame to an Excel file, incrementing a number in the filename each time. So essentially something like:</p>
<pre><code> for product in product_list:
    for customer in customer_list:
        dataframe = data[(data.Product==product) & (data.Customer==customer)]
        # write to excel file:
        dataframe.to_excel('df1.xlsx')
</code></pre>
<p>where the code would write out the first dataframe and call it 'df1.xlsx', then 'df2.xlsx', 'df3.xlsx', etc.</p>
<p>thanks!</p>
|
<p>You could easily add some sort of counter like this.</p>
<pre class="lang-py prettyprint-override"><code>cnt = 0
for product in product_list:
    for customer in customer_list:
        dataframe = data[(data.Product==product) & (data.Customer==customer)]
        # write to excel file:
        cnt += 1
        dataframe.to_excel(f'df{cnt}.xlsx')
</code></pre>
<p>However, why not add the product and customer to the filename(s) so you can they be more easily identifiable?</p>
<pre class="lang-py prettyprint-override"><code>dataframe.to_excel(f'{product}-{customer}.xlsx')
</code></pre>
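<p>The manual counter can also be folded into the loop with <code>itertools.product</code> and <code>enumerate</code> (a sketch; the two lists below are hypothetical stand-ins for the question's variables, and the Excel write is left as a comment):</p>

```python
from itertools import product

product_list = ['widget', 'gadget']   # hypothetical products
customer_list = ['acme', 'globex']    # hypothetical customers

filenames = []
for cnt, (prod, cust) in enumerate(product(product_list, customer_list), start=1):
    filenames.append(f'df{cnt}.xlsx')
    # dataframe = data[(data.Product == prod) & (data.Customer == cust)]
    # dataframe.to_excel(filenames[-1])
print(filenames)
```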
|
python|pandas
| 0
|
375,598
| 69,666,481
|
How to check string in string with not exact same values?
|
<p>I'm working with Pandas. I need to create a new column in a dataframe according to conditions in other columns. For each value in a Series, I try to check whether it contains a given value (a condition to return text). This works when the values are exactly the same, but not when the value is only part of the value in the Series.</p>
<p>I also tried with <code>str.contains</code> but never succeeded.
The code that works with exact values:</p>
<pre><code>if ("something" in df2["Symptom"].values):
print("yes")
else:
print("no")
</code></pre>
<p>And I get "no" when the values are not exactly the same.</p>
|
<p>IIUC:</p>
<pre><code>df2 = pd.DataFrame(data={'Symptom':["I am something", 'I am not', 'Something 2']})
if df2["Symptom"].str.contains('Something').any():
print("yes")
else:
print("no")
</code></pre>
<p>Output: <code>yes</code></p>
<p>And If you do the following:</p>
<pre><code>if df2["Symptom"].str.contains('Something').all():
print("yes")
else:
print("no")
</code></pre>
<p>Output: <code>no</code></p>
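<p>If the mismatch in the question is just letter case, <code>str.contains</code> also takes <code>case=False</code> (and <code>regex=False</code> for literal matching):</p>

```python
import pandas as pd

df2 = pd.DataFrame({'Symptom': ['I am something', 'I am not', 'Something 2']})

hits = df2['Symptom'].str.contains('something', case=False, regex=False)
print(hits.any())  # at least one row matches, ignoring case
print(hits.sum())  # number of matching rows
```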
|
python|pandas|string|dataframe|conditional-statements
| 1
|
375,599
| 69,789,339
|
How to count the number of days since a column flag?
|
<p>I have a dataframe defined as follows. I'd like to count the number of days (or rows) when the <code>input</code> column changes from 1 to 0:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'input': [1,1,1,0,0,0,1,1,1,0,0,0]},
index=pd.date_range('2021-10-01', periods=12))
# I can mark the points of interest, i.e. when it goes from 1 to 0
df['change'] = 0
df.loc[(df['input'].shift(1) - df['input']) > 0, 'change'] = 1
print(df)
</code></pre>
<p>I end up with the following:</p>
<pre><code> input change
2021-10-01 1 0
2021-10-02 1 0
2021-10-03 1 0
2021-10-04 0 1
2021-10-05 0 0
2021-10-06 0 0
2021-10-07 1 0
2021-10-08 1 0
2021-10-09 1 0
2021-10-10 0 1
2021-10-11 0 0
2021-10-12 0 0
</code></pre>
<p>What I want is a <code>res</code> output:</p>
<pre><code> input change res
2021-10-01 1 0 0
2021-10-02 1 0 0
2021-10-03 1 0 0
2021-10-04 0 1 1
2021-10-05 0 0 2
2021-10-06 0 0 3
2021-10-07 1 0 0
2021-10-08 1 0 0
2021-10-09 1 0 0
2021-10-10 0 1 1
2021-10-11 0 0 2
2021-10-12 0 0 3
</code></pre>
<p>I know I can use a <code>cumsum</code> but don't find a way to "reset it" at the appropriate points:</p>
<pre><code>df['res'] = (1 - df['input']).cumsum()*(1 - df['input'])
</code></pre>
<p>but this above will continue accumulating and not reset where <code>change == 1</code></p>
|
<p>We can create a boolean Series that is True only where <code>input</code> <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>eq</code></a> <code>0</code>, then <a href="https://stackoverflow.com/q/40802800/15497888">group by consecutive values</a> and take the <a href="https://pandas.pydata.org/docs/reference/api/pandas.core.groupby.DataFrameGroupBy.cumsum.html" rel="nofollow noreferrer"><code>groupby cumsum</code></a> of that boolean Series. This essentially enumerates the rows within each run of 0s in <code>input</code>:</p>
<pre><code>m = df['input'].eq(0)
df['res'] = m.groupby(m.ne(m.shift()).cumsum()).cumsum()
</code></pre>
<p><code>df</code>:</p>
<pre><code> input change res
2021-10-01 1 0 0
2021-10-02 1 0 0
2021-10-03 1 0 0
2021-10-04 0 1 1
2021-10-05 0 0 2
2021-10-06 0 0 3
2021-10-07 1 0 0
2021-10-08 1 0 0
2021-10-09 1 0 0
2021-10-10 0 1 1
2021-10-11 0 0 2
2021-10-12 0 0 3
</code></pre>
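<p>The grouped cumsum can be verified end to end on the question's input:</p>

```python
import pandas as pd

df = pd.DataFrame({'input': [1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0]},
                  index=pd.date_range('2021-10-01', periods=12))

m = df['input'].eq(0)                   # True while input is 0
groups = m.ne(m.shift()).cumsum()       # label each consecutive run
df['res'] = m.groupby(groups).cumsum()  # the count restarts at every new run
print(df)
```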
|
python|pandas
| 0
|