Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
7,900
| 56,186,710
|
Does Tensorflow automatically use multiple CPUs?
|
<p>I have programmed some code doing an inference with Tensorflow's C API (CPU only). It is running on a cluster node, where I have access to 24 CPUs and 1 GPU. I do not make use of the GPU as I will need to do the task CPU-only later on.</p>
<p>Somehow, every time I call the TensorFlow code from the other program (OpenFOAM), TensorFlow seems to run parallelized across all CPUs, although I have not done anything to cause this behavior. Now I would like to know: does TensorFlow do this parallelization by default?</p>
<p>Greets and thanks in advance!</p>
|
<p>I am not sure how you are using TensorFlow. But a typical TensorFlow training job has an input pipeline which can be thought of as an ETL process. These are the main activities involved: </p>
<p><strong>Extract</strong>: Read data from persistent storage</p>
<p><strong>Transform</strong>: Use CPU cores to parse and perform preprocessing operations on the data such as image decompression, data augmentation transformations (such as random crop, flips, and color distortions), shuffling, and batching.</p>
<p><strong>Load</strong>: Load the transformed data onto the accelerator device(s) (for example, GPU(s) or TPU(s)) that execute the machine learning model.</p>
<p>CPUs are generally used during the data transformation. During the transformation, the input data elements are preprocessed. To improve performance, this pre-processing can be parallelized across multiple CPU cores.</p>
<p>Tensorflow provides the tf.data API which offers the tf.data.Dataset.map transformation. To control the parallelism, the map provides the num_parallel_calls argument. </p>
<p>Read more on this from here:
<a href="https://www.tensorflow.org/guide/performance/datasets" rel="nofollow noreferrer">https://www.tensorflow.org/guide/performance/datasets</a></p>
|
tensorflow|deep-learning|c-api
| 2
|
7,901
| 56,077,273
|
Replacing NAN value in a pandas dataframe from values in other records of same group
|
<p>I have a dataframe <code>df</code> </p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'A': [np.nan, 1, 2,np.nan,2,np.nan,np.nan],
'B': [10, np.nan, np.nan,5,np.nan,np.nan,7],
'C': [1,1,2,2,3,3,3]})
</code></pre>
<p>which looks like :</p>
<pre><code> A B C
0 NaN 10.0 1
1 1.0 NaN 1
2 2.0 NaN 2
3 NaN 5.0 2
4 2.0 NaN 3
5 NaN NaN 3
6 NaN 7.0 3
</code></pre>
<p>I want to replace all the NAN values in column <code>A</code> and <code>B</code> with the value from other records which are from the same group as mentioned in column <code>C</code>.</p>
<p>My expected output is :</p>
<pre><code> A B C
0 1.0 10.0 1
1 1.0 10.0 1
2 2.0 5.0 2
3 2.0 5.0 2
4 2.0 7.0 3
5 2.0 7.0 3
6 2.0 7.0 3
</code></pre>
<p>How can I do the same in pandas dataframe ?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a> with forward and back filling missing values:</p>
<pre><code>df[['A','B']] = df.groupby('C')['A','B'].apply(lambda x: x.ffill().bfill())
print (df)
A B C
0 1.0 10.0 1
1 1.0 10.0 1
2 2.0 5.0 2
3 2.0 5.0 2
4 2.0 7.0 3
5 2.0 7.0 3
6 2.0 7.0 3
</code></pre>
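<p><code>ffill</code> and <code>bfill</code> are also available directly on the <code>GroupBy</code> object, which avoids <code>apply</code>; a possible sketch (note the double-bracket column selection required by newer pandas versions):</p>
<pre><code>df[['A','B']] = df.groupby('C')[['A','B']].ffill()   # fill forward within each group
df[['A','B']] = df.groupby('C')[['A','B']].bfill()   # then fill backward within each group
</code></pre>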
|
python|pandas|nan
| 3
|
7,902
| 55,922,703
|
Multiplying a certain row of a dataframe by the values of another row of another dataframe of the same dimensions
|
<p>I have a dataframe that I am populating by iterating through it by rows and trying to multiply the previous row by the values contained in another row of another dataframe of the same dimensions and then insert the resulting row into the first dataframe.</p>
<p>I've used .loc to filter the row of each dataframe and then I tried to use .mul, but I'm getting TypeError. Also for some reason, the first dataframe.loc results in a transposed series while the second doesn't and I'm forced to add a .transpose() to it</p>
<pre><code>dfPortW.loc[i,"NAV":] = dfPortW.loc[i-1,"NAV":].mul(dfReturns1.loc[dfReturns1["Dates"] == date, "NAV":].transpose()
</code></pre>
|
<p>OK, I found a solution... not an elegant one, but it works.</p>
<p>I suspect the issue was caused by the fact that pandas was not reading the single dfReturns1 row as a Series, while the dfPortW row was one. What solved it was:</p>
<pre><code> j = dfReturns1["Dates"][dfReturns1["Dates"] == date]
j = j.index[0]
dfPortW.loc[i,"NAV":] = dfPortW.loc[i-1,"NAV":].mul(dfReturns1.loc[j,"NAV":], axis = 0)
</code></pre>
<p>this made sure both were series and it was a series .mul which worked just fine... again, not elegant but it works.</p>
|
python|pandas
| 0
|
7,903
| 65,044,179
|
PyTorch is giving me a different value for a scalar
|
<p>When I create a tensor from float using PyTorch, then cast it back to a float, it produces a different result. Why is this, and how can I fix it to return the same value?</p>
<pre><code>num = 0.9
float(torch.tensor(num))
</code></pre>
<p>Output:</p>
<pre><code>0.8999999761581421
</code></pre>
|
<p>This is a floating-point "issue" and you can read more about how Python 3 handles those <a href="https://docs.python.org/3/tutorial/floatingpoint.html" rel="nofollow noreferrer">here</a>.</p>
<p>Essentially, not even <code>num</code> is actually storing 0.9. Anyway, the print issue in your case comes from the fact that <code>num</code> is actually double-precision and <code>torch.tensor</code> uses single-precision by default. If you try:</p>
<pre class="lang-py prettyprint-override"><code>num = 0.9
float(torch.tensor(num, dtype=torch.float64))
</code></pre>
<p>you'll get <code>0.9</code>.</p>
|
floating-point|pytorch|tensor
| 2
|
7,904
| 64,720,445
|
Remove the weight_orig in Pytorch after Pruning a model
|
<p>After a model is pruned in Pytorch, the saved model contains both the pruned weights and weight_orig. This causes the pruned model size to be greater than the unpruned model.
Is there a way to remove the weight_orig and reduce the pruned model size?</p>
|
<p>As explained in the <a href="https://pytorch.org/tutorials/intermediate/pruning_tutorial.html#remove-pruning-re-parametrization" rel="nofollow noreferrer">official documentation</a>, you can use <code>torch.nn.utils.prune.remove()</code> for this very purpose.<br />
<code>remove()</code> removes the re-parametrization in terms of <code>weight_orig</code> and <code>weight_mask</code>, and removes the <code>forward_pre_hook</code>.
You'd use it like this:</p>
<pre class="lang-py prettyprint-override"><code>for module in model.modules():
if isinstance(module, torch.nn.Conv2d):
prune.remove(module,'weight')
# etc...
</code></pre>
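<p>Once the re-parametrization is removed, re-saving the model should no longer include the <code>weight_orig</code>/<code>weight_mask</code> tensors (the file name below is just an example):</p>
<pre class="lang-py prettyprint-override"><code># after prune.remove(...) the state_dict only holds the plain 'weight' tensors
torch.save(model.state_dict(), 'pruned_model.pth')
</code></pre>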
|
deep-learning|pytorch|pruning
| 0
|
7,905
| 64,776,592
|
Transpose multiple columns in pairs of two - pandas python
|
<p>I would like to transpose multiple columns in pairs of two</p>
<p>I have the following columns:</p>
<pre><code>user_id', 'fullname', 'email', 'handle', 'audience_ethnicities_code0', 'audience_ethnicities_weight0', 'audience_ethnicities_code1', 'audience_ethnicities_weight1', 'audience_ethnicities_code2', 'audience_ethnicities_weight2', 'audience_ethnicities_code3', 'audience_ethnicities_weight3'
</code></pre>
<p>where code and weight are related, for example:</p>
<p>user_id = ABCD</p>
<pre><code>'audience_ethnicities_code0' = asian;
'audience_ethnicities_weight0' = 0.4
'audience_ethnicities_code1' = african;
'audience_ethnicities_weight1' = 0.2
'audience_ethnicities_code2' = white;
'audience_ethnicities_weight2' = 0.2
'audience_ethnicities_code3' = hispanic;
'audience_ethnicities_weight3' = 0.2
</code></pre>
<p>The total weight is 1, so the audience of user ABCD is 40% Asian, 20% African, etc. What I want is to have the ethnicity (<code>audience_ethnicities_code_n </code>) as the column and its weight (<code>audience_ethnicities_weight_n </code>) as the row value for each user.</p>
<p>I tried this query but it gave me a messy result:</p>
<pre><code>df1 = df.pivot_table(index=['user_id', 'fullname', 'email', 'handle'],
columns=['audience_ethnicities_code0', 'audience_ethnicities_code1', 'audience_ethnicities_code2', 'audience_ethnicities_code3'],
values=['audience_ethnicities_weight0', 'audience_ethnicities_weight1', 'audience_ethnicities_weight2', 'audience_ethnicities_weigh3'], aggfunc=lambda x: ' '.join(str(v) for v in x))
df1
</code></pre>
<p>any ideas?</p>
|
<p>I would iteratively do the pivot for each column and then merge the dataframes by their index.</p>
<p>Here an example:</p>
<pre><code>import pandas as pd
from functools import reduce
index = ['user_id', 'fullname', 'email', 'handle']
dfList = []
for i in range(4):  # one pivot per code/weight pair (codes 0 to 3)
dfList.append(df.pivot_table(index=index,
columns='audience_ethnicities_code{}'.format(i),
values='audience_ethnicities_weight{}'.format(i))
.rename_axis(None, axis=1)
.reset_index())
reduce(lambda x, y: pd.merge(x, y, on=index), dfList)
</code></pre>
|
python|pandas|pivot|transpose
| 0
|
7,906
| 39,883,656
|
Combining dataframes in pandas with the same rows and columns, but different cell values
|
<p>I'm interested in combining two dataframes in pandas that have the same row indices and column names, but different cell values. See the example below:</p>
<pre><code>import pandas as pd
import numpy as np
df1 = pd.DataFrame({'A':[22,2,np.NaN,np.NaN],
'B':[23,4,np.NaN,np.NaN],
'C':[24,6,np.NaN,np.NaN],
'D':[25,8,np.NaN,np.NaN]})
df2 = pd.DataFrame({'A':[np.NaN,np.NaN,56,100],
'B':[np.NaN,np.NaN,58,101],
'C':[np.NaN,np.NaN,59,102],
'D':[np.NaN,np.NaN,60,103]})
In[6]: print(df1)
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
In[7]: print(df2)
A B C D
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
</code></pre>
<p>I would like the resulting frame to look like this:</p>
<pre><code> A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
</code></pre>
<p>I have tried different ways of pd.concat and pd.merge but some of the data always gets replaced with NaNs. Any pointers in the right direction would be greatly appreciated.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="nofollow"><code>combine_first</code></a>:</p>
<pre><code>print (df1.combine_first(df2))
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow"><code>fillna</code></a>:</p>
<pre><code>print (df1.fillna(df2))
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.update.html" rel="nofollow"><code>update</code></a>:</p>
<pre><code>df1.update(df2)
print (df1)
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
</code></pre>
|
python|pandas|dataframe|merge|concat
| 2
|
7,907
| 39,957,815
|
Not a JPEG file: starts with 0xc3 0xbf
|
<p>I am trying to decode a JPEG file using tf.image.decode_jpeg but it says it is not a JPEG file. I don't know what the problem is. Can anyone help me solve this problem?</p>
<p>This is my test code.</p>
<pre><code>import tensorflow as tf
path = "/root/PycharmProjects/mscoco/train2014/COCO_train2014_000000291797.jpg"
with open(path, "r", encoding="latin-1") as f:
image = f.read()
encoded_jpeg = tf.placeholder(dtype=tf.string)
decoded_jpeg = tf.image.decode_jpeg(encoded_jpeg, channels=3)
sess = tf.InteractiveSession()
sess.run(decoded_jpeg, feed_dict={encoded_jpeg: image})
</code></pre>
<p>And This is the error:</p>
<pre><code>Not a JPEG file: starts with 0xc3 0xbf
Traceback (most recent call last):
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 965, in _do_call
return fn(*args)
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 947, in _run_fn
status, run_metadata)
File "/usr/lib64/python3.4/contextlib.py", line 66, in __exit__
next(self.gen)
File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/errors.py", line 450, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.InvalidArgumentError: Invalid JPEG data, size 165886
[[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0)]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/root/PycharmProjects/mytf/models/im2txt/im2txt/data/test.py", line 14, in <module>
sess.run(decoded_jpeg, feed_dict={encoded_jpeg: image})
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 710, in run
run_metadata_ptr)
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 908, in _run
feed_dict_string, options, run_metadata)
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 958, in _do_run
target_list, options, run_metadata)
File "/usr/lib/python3.4/site-packages/tensorflow/python/client/session.py", line 978, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors.InvalidArgumentError: Invalid JPEG data, size 165886
[[Node: DecodeJpeg = DecodeJpeg[acceptable_fraction=1, channels=3, fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/cpu:0"](_recv_Placeholder_0)]]
Caused by op 'DecodeJpeg', defined at:
File "/root/PycharmProjects/mytf/models/im2txt/im2txt/data/test.py", line 10, in <module>
decoded_jpeg = tf.image.decode_jpeg(encoded_jpeg, channels=3)
File "/usr/lib/python3.4/site-packages/tensorflow/python/ops/gen_image_ops.py", line 283, in decode_jpeg
name=name)
File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/op_def_library.py", line 703, in apply_op
op_def=op_def)
File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 2317, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/usr/lib/python3.4/site-packages/tensorflow/python/framework/ops.py", line 1239, in __init__
self._traceback = _extract_stack()
</code></pre>
<p>I cannot </p>
|
<p>You're reading an image file as if it were a text file.</p>
<p>Just change the line:</p>
<pre><code>with open(path, "r", encoding="latin-1") as f:
</code></pre>
<p>with</p>
<pre><code>with open(path, "rb") as f:
</code></pre>
<p>To read the image as a binary ("rb" = Read Binary) file.</p>
|
python|tensorflow
| 3
|
7,908
| 44,228,539
|
Delete based on index, without knowing where it is
|
<p>Say I have a <code>pandas.DataFrame</code> with a <code>MultiIndex</code> and I know it has two levels and <code>year</code> is in the first one, and I want to keep particular years, I can do</p>
<pre><code>df = df.loc[yearStart:, :]
</code></pre>
<p>If I know the index has only two levels, but not which one <code>year</code> is in, I can hack something dirty like</p>
<pre><code>if df.index.names[0] == 'year':
df = df.loc[yearStart:, :]
else:
df = df.loc[:, yearStart:]
</code></pre>
<p>What if I know it is in the index, but not which level, nor how many levels the index has? If <code>year</code> is not in the index, but a regular column, I can do</p>
<pre><code>df = df.loc[df.year >= yearStart]
</code></pre>
<p>Is there something similar generic for the index?</p>
|
<p>You can use <code>get_level_values</code> to get a column-like view of an index level.</p>
<pre><code>df = pd.DataFrame({'a': range(100)}, index=pd.MultiIndex.from_product([range(10), range(2010,2020)], names=['idx1', 'year']))
df.head()
Out[41]:
a
idx1 year
0 2010 0
2011 1
2012 2
2013 3
2014 4
df[df.index.get_level_values('year') >= 2015].head()
Out[42]:
a
idx1 year
0 2015 5
2016 6
2017 7
2018 8
2019 9
</code></pre>
|
python|pandas
| 4
|
7,909
| 44,226,065
|
TF-slim layers count
|
<p>Would the code below represent one or two layers? I'm confused because isn't there also supposed to be an input layer in a neural net?</p>
<pre class="lang-python3 prettyprint-override"><code>input_layer = slim.fully_connected(input, 6000, activation_fn=tf.nn.relu)
output = slim.fully_connected(input_layer, num_output)
</code></pre>
<p>Does that contain a hidden layer? I'm just trying to be able to visualize the net. Thanks in advance!</p>
|
<p><a href="https://i.stack.imgur.com/6O50G.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6O50G.png" alt="enter image description here"></a></p>
<p>You have a neural network with one hidden layer. In your code, <code>input</code> corresponds to the 'Input' layer in the above image. <code>input_layer</code> is what the image calls 'Hidden'. <code>output</code> is what the image calls 'Output'.</p>
<p>Remember that the "input layer" of a neural network isn't a traditional fully-connected layer since it's just raw data without an activation. It's a bit of a misnomer. Those neurons in the picture above in the input layer are not the same as the neurons in the hidden layer or output layer.</p>
|
tensorflow|neural-network|tf-slim
| 1
|
7,910
| 44,163,155
|
Python - Pulling Indices data from Google Finance
|
<p>I am trying to pull historical data of indices from Google Finance, but it's not working, while I am able to pull historical data of an individual stock easily. Am I doing something wrong with indices?</p>
<p>My code for Stock</p>
<pre><code>from pandas_datareader import data
from dateutil.relativedelta import relativedelta
import datetime as dt
enddate = dt.datetime.today()
begdate = enddate + relativedelta(years=-1)
x= data.get_data_google("GOOGL",begdate,enddate)
print(x.head())
</code></pre>
<p>Output</p>
<pre><code> Open High Low Close Volume
Date
2016-05-24 719.85 734.20 719.64 733.03 1890195
2016-05-25 735.00 739.89 732.60 738.10 1610773
2016-05-26 736.00 741.10 733.00 736.93 1298295
2016-05-27 737.51 747.91 737.01 747.60 1738913
2016-05-31 748.76 753.48 745.57 748.85 2124248
</code></pre>
<p>My code for Index</p>
<pre><code>x= data.get_data_google(".DJI",begdate,enddate)
</code></pre>
<p>Error</p>
<pre><code>RemoteDataError: Unable to read URL: http://www.google.com/finance/historical
</code></pre>
|
<p>I am not sure where the issue is; however, there is a difference on the Google Finance website.</p>
<p>When you try to see historical data for GOOGL:<br>
<a href="https://finance.google.com/finance/historical?q=NASDAQ:GOOGL" rel="nofollow noreferrer">https://finance.google.com/finance/historical?q=NASDAQ:GOOGL</a></p>
<p>On right hand side of the website (under chart) you will see Export section with link to CSV. </p>
<p>However for DJI:<br>
<a href="https://finance.google.com/finance/historical?q=INDEXDJX:.DJI" rel="nofollow noreferrer">https://finance.google.com/finance/historical?q=INDEXDJX:.DJI</a></p>
<p>There is no such link.</p>
<p>It could be that the implementation of pandas_datareader uses that link to get the data. I changed the CSV download link for INDEXDJX:.DJI and it returned an error.</p>
<p><strong>Update:</strong><br>
I see that the function is trying to reach<br>
<a href="http://www.google.com/finance/historical?q=INDEXDJX%3A.DJI&startdate=Oct+20%2C+2016&enddate=Oct+20%2C+2017&output=csv" rel="nofollow noreferrer">http://www.google.com/finance/historical?q=INDEXDJX%3A.DJI&startdate=Oct+20%2C+2016&enddate=Oct+20%2C+2017&output=csv</a></p>
<p>This does not exist. When I replace above for google ticker it downloads the file.</p>
<p>In the meantime I found <a href="https://stackoverflow.com/questions/44235964/unable-to-read-sp500-using-pandas-datareader-on-google-finance">this comment</a> that seems to confirm the above i.e. export to csv is not supported for all exchanges <a href="https://support.google.com/finance/answer/71913?hl=en" rel="nofollow noreferrer">see google doc</a> for more info.</p>
|
python|pandas|google-finance
| 2
|
7,911
| 69,540,950
|
Count observations per date and per date & category
|
<p>I need an explanation of one behavior of pandas.
Suppose this dataframe:</p>
<pre class="lang-py prettyprint-override"><code>index;day;id;value
0;2020-01-03;1;14
1;2020-01-03;1;2
2;2020-01-03;2;5
3;2020-01-05;1;7
4;2020-01-05;1;9
</code></pre>
<p>When I want to compute the number of observations per day and id I can simply do:</p>
<pre><code>df["frequency_per_id"] = df(["id", "day"])["id"].transform("count")
</code></pre>
<p>But when I want to compute the number of observations per day using the same formula:</p>
<pre><code>df["frequency"] = df(["day"])["day"].transform("count")
</code></pre>
<p>I got an error <code><ipython-input-16-3a624d98b3b5>:1: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame. Try using .loc[row_indexer,col_indexer] = value instead</code></p>
<p>Can you explain why? I am doing the same process. Thanks a lot.</p>
|
<p>It's a <em>warning</em>, not an error. I think the code will have done what you want it to do.</p>
<p>And it's a very common warning; Googling "SettingWithCopyWarning" returns hundreds of articles and StackOverflow posts.</p>
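<p>The warning usually means that <code>df</code> was itself created as a slice of another DataFrame. A minimal sketch of one common way to avoid it, assuming that is the case, is to work on an explicit copy:</p>
<pre><code>df = df.copy()  # work on an independent copy rather than a slice/view
df["frequency_per_id"] = df.groupby(["id", "day"])["id"].transform("count")
df["frequency"] = df.groupby("day")["day"].transform("count")
</code></pre>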
|
python|pandas|date
| 0
|
7,912
| 69,621,513
|
Extract strings from pandas df using regex
|
<p>I need help with a regex for a Python pandas dataframe.
The test strings would be:</p>
<pre><code>s = pd.Series(['xslF345X03/was-form4_163347386959085.xml', 'xslF345X03/wf-form4_163347386959085.xmlasdf', 'xslF345/X03/wf-form4_163347386959085.xml'])
</code></pre>
<p>I would like to:</p>
<ul>
<li><strong>extract starting from the last '/' till the '.xml' at the end</strong></li>
<li><strong>extract only when the string ends with '.xml'</strong></li>
</ul>
<p>so that I get something like this:</p>
<pre><code>xslF345X03/was-form4_163347386959085.xml Extract: /was-form4_163347386959085.xml
xslF345X03/wf-form4_163347386959085.xmlasdf Do not extract because the ending is not .xml
xslF345/X03/wf-form4_163347386959085.xml Extract starting from the last '/' character: /wf-form4_163347386959085.xml
</code></pre>
<p>I figured I need following pandas code to extract using regex:</p>
<pre><code>s.str.extract(...)
</code></pre>
<p>Thank you in advance :-)</p>
|
<p>Use <code>str.extract</code>:</p>
<pre><code>>>> s.str.extract(r'.*/(.*\.xml)$')
0
0 was-form4_163347386959085.xml
1 NaN
2 wf-form4_163347386959085.xml
</code></pre>
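<p>If the leading <code>/</code> shown in the expected output should be kept in the match, a slightly different pattern (a sketch) does that; the <code>.xmlasdf</code> row still comes out as NaN:</p>
<pre><code>>>> s.str.extract(r'(/[^/]*\.xml)$')
</code></pre>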
|
python|regex|pandas
| 1
|
7,913
| 69,590,456
|
I am trying to pivot a json file using pandas to be in a specific format. I want to pivot it on certain columns
|
<p>So I tried searching on the web to find a way in which I can convert the following json.</p>
<pre><code>{
"eTask_ID": "100",
"Organization": "Power",
"BidID": "2.00",
"Project": "IPP - C",
"Forecast%": "67",
"Sponsor": "Jon R",
"IsActive": "1",
"InternalOrder": "null",
"Forecast": "null",
"BidStatus": "null",
"ProjectNotes": "null",
"EstimateTypeCode": "null",
"Start": "null",
"SponsoringDistrict": "null",
"LocationState": "null",
"Finish": "null",
"AreaManager": "null",
"CTG Vendor": "null"
}
</code></pre>
<p>to the one like below.</p>
<pre><code>{
"eTask_ID": "100",
"Organization": "Power",
"BidID": "2.00",
"Project": "IPP - C",
"Attribute":"Forecast%",
"AttrValue":"67",
},
{
"eTask_ID": "100",
"Organization": "Power",
"BidID": "2.00",
"Project": "IPP - C",
"Attribute":"Sponsor",
"AttrValue":"Jon R",
},
{
"eTask_ID": "100",
"Organization": "Power",
"BidID": "2.00",
"Project": "IPP - C",
"Attribute":"IsActive",
"AttrValue":"1",
},
...
</code></pre>
<p>Here, as you can see, all the attributes apart from the first four are converted into Attribute/AttrValue pairs, each getting its own record.</p>
<p>I have searched the web for a solution but have not found one yet.</p>
<p>Please help if anyone can.</p>
<p>Thank you in advance.</p>
|
<p>Use <code>pd.melt</code>:</p>
<pre><code>import json
import pandas as pd
with open('data.json') as json_data:
data = json.load(json_data)
out = pd.DataFrame.from_dict(data, orient='index').T \
.melt(['eTask_ID', 'Organization', 'BidID', 'Project'],
var_name='Attribute', value_name='AttrValue') \
.to_json(orient='records', indent=4)
</code></pre>
<p>Output:</p>
<pre><code>>>> print(out)
[
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"Forecast%",
"AttrValue":67
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"Sponsor",
"AttrValue":"Jon R"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"IsActive",
"AttrValue":1
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"InternalOrder",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"Forecast",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"BidStatus",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"ProjectNotes",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"EstimateTypeCode",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"Start",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"SponsoringDistrict",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"LocationState",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"Finish",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"AreaManager",
"AttrValue":"null"
},
{
"eTask_ID":100,
"Organization":"Power",
"BidID":2,
"Project":"IPP - C",
"Attribute":"CTG Vendor",
"AttrValue":"null"
}
]
</code></pre>
|
python|json|pandas|dataframe
| 0
|
7,914
| 40,904,392
|
Easy pythonic way to classify columns in groups and store it in Dictionary?
|
<pre><code> Machine_number Machine_Running_Hours
0 1.0 424.0
1 2.0 458.0
2 3.0 465.0
3 4.0 446.0
4 5.0 466.0
5 6.0 466.0
6 7.0 445.0
7 8.0 466.0
8 9.0 447.0
9 10.0 469.0
10 11.0 467.0
11 12.0 449.0
12 13.0 436.0
13 14.0 465.0
14 15.0 463.0
15 16.0 372.0
16 17.0 460.0
17 18.0 450.0
18 19.0 467.0
19 20.0 463.0
20 21.0 205.0
</code></pre>
<p>I am trying to classify according to machine number. Like Machine_number 1 to 5 will be one group. Then 6 to 10 in one group and so on.</p>
|
<p>I think you need to subtract <code>1</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sub.html" rel="nofollow noreferrer"><code>sub</code></a> and then use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.floordiv.html" rel="nofollow noreferrer"><code>floordiv</code></a>:</p>
<pre><code>df['g'] = df.Machine_number.sub(1).floordiv(5)
#same as //
#df['g'] = df.Machine_number.sub(1) // 5
print (df)
Machine_number Machine_Running_Hours g
0 1.0 424.0 -0.0
1 2.0 458.0 0.0
2 3.0 465.0 0.0
3 4.0 446.0 0.0
4 5.0 466.0 0.0
5 6.0 466.0 1.0
6 7.0 445.0 1.0
7 8.0 466.0 1.0
8 9.0 447.0 1.0
9 10.0 469.0 1.0
10 11.0 467.0 2.0
11 12.0 449.0 2.0
12 13.0 436.0 2.0
13 14.0 465.0 2.0
14 15.0 463.0 2.0
15 16.0 372.0 3.0
16 17.0 460.0 3.0
17 18.0 450.0 3.0
18 19.0 467.0 3.0
19 20.0 463.0 3.0
20 21.0 205.0 4.0
</code></pre>
<p>If need store in dictionary use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <code>dict comprehension</code>:</p>
<pre><code>dfs = {i:g for i, g in df.groupby(df.Machine_number.astype(int).sub(1).floordiv(5))}
print (dfs)
{0: Machine_number Machine_Running_Hours
0 1.0 424.0
1 2.0 458.0
2 3.0 465.0
3 4.0 446.0
4 5.0 466.0, 1: Machine_number Machine_Running_Hours
5 6.0 466.0
6 7.0 445.0
7 8.0 466.0
8 9.0 447.0
9 10.0 469.0, 2: Machine_number Machine_Running_Hours
10 11.0 467.0
11 12.0 449.0
12 13.0 436.0
13 14.0 465.0
14 15.0 463.0, 3: Machine_number Machine_Running_Hours
15 16.0 372.0
16 17.0 460.0
17 18.0 450.0
18 19.0 467.0
19 20.0 463.0, 4: Machine_number Machine_Running_Hours
20 21.0 205.0}
</code></pre>
<pre><code>print (dfs[0])
Machine_number Machine_Running_Hours
0 1.0 424.0
1 2.0 458.0
2 3.0 465.0
3 4.0 446.0
4 5.0 466.0
print (dfs[1])
Machine_number Machine_Running_Hours
5 6.0 466.0
6 7.0 445.0
7 8.0 466.0
8 9.0 447.0
9 10.0 469.0
</code></pre>
|
python|pandas|numpy
| 2
|
7,915
| 41,150,846
|
One-hot encoding
|
<p>I have a csv file like this:</p>
<pre><code>text short_text category
... ... ...
</code></pre>
<p>I have opened the file and stored it in a Pandas data frame like so:</p>
<pre><code>filepath = 'path/data.csv'
train = pd.read_csv(filepath, header=0, delimiter=",")
</code></pre>
<p>The category field for each record contains a list of categories as a single string, with each category in single quotes, like so:</p>
<pre><code>['Adult' 'Aged' 'Aged 80 and over' 'Benzhydryl Compounds/*therapeutic use' 'Cresols/*therapeutic use' 'Double-Blind Method' 'Female' 'Humans' 'Male' 'Middle Aged' 'Muscarinic Antagonists/*therapeutic use' '*Phenylpropanolamine' 'Tolterodine Tartrate' 'Urinary Incontinence/*drug therapy']
</code></pre>
<p>I wish to use this for machine learning by using one-hot encoding. I understand I can implement this using scikit-learn's sklearn.preprocessing package but am unsure how to do this.</p>
<p>Note: I don't have a list of all possible categories.</p>
|
<p>You can use <code>pd.value_counts</code> to help:</p>
<pre><code>df = pd.DataFrame(dict(
text=list('ABC'),
short_text=list('XYZ'),
category=[list('abc'), list('def'), list('abefxy')]
))
df.category.apply(pd.value_counts).fillna(0).astype(int)
</code></pre>
<p><a href="https://i.stack.imgur.com/eSUCl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eSUCl.png" alt="enter image description here"></a></p>
<p>or everything together</p>
<pre><code>pd.concat(
[df.drop('category', 1),
df.category.apply(pd.value_counts).fillna(0).astype(int)],
axis=1
)
</code></pre>
<p><a href="https://i.stack.imgur.com/RIzAv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RIzAv.png" alt="enter image description here"></a></p>
|
python|pandas|scikit-learn|one-hot-encoding
| 0
|
7,916
| 41,164,473
|
Write Pandas Dataframe via SQLAlchemy in MySQL database
|
<p>I am trying to write a pandas dataframe into a MySQL database using pandas. This works perfectly for me using </p>
<blockquote>
<p>to_sql</p>
</blockquote>
<p>But what I want is to write the date, ticker and adj_close into the table test on my own using SQLAlchemy. I tried doing it, but it is not working, using the following:</p>
<pre><code>ins = test.insert()
ins = test.insert().values(date=DataLevels['Date'], ticker=DataLevels['ticker'], adj_close=DataLevels['adj_close'])
ins.compile().params
mysql_engine.execute(ins)
</code></pre>
<p>Using it I receive the following error message:</p>
<blockquote>
<p>(mysql.connector.errors.ProgrammingError) Failed processing
pyformat-parameters; Python 'series' cannot be converted to a MySQL
type [SQL: 'INSERT INTO test (date, ticker, adj_close) VALUES
(%(date)s, %(ticker)s, %(adj_close)s)'] [parameters: {'ticker': 0<br>
AAPL</p>
</blockquote>
<p>Does anybody have a clue how to make this work? The code is below, without the code parts mentioned above:</p>
<pre><code>import sqlalchemy as sqlal
import pandas_datareader.data as pdr
import pandas as pd
mysql_engine = sqlal.create_engine('mysql+mysqlconnector://xxx')
mysql_engine.raw_connection()
metadata = sqlal.MetaData()
test = sqlal.Table('test', metadata,
sqlal.Column('date', sqlal.DateTime, nullable=True),
sqlal.Column('ticker', sqlal.String(10), sqlal.ForeignKey("product.ticker"), nullable=True),
sqlal.Column('adj_close', sqlal.Float, nullable=True),
)
metadata.create_all(mysql_engine)
DataLevels = pdr.DataReader(['ACN','AAPL','^GDAXI'], 'yahoo', '2016-11-19', '2016-12-1')
DataLevels = pd.melt(DataLevels['Adj Close'].reset_index(), id_vars='Date', value_name='adj_close', var_name='minor').rename(columns={'minor': 'ticker'})
</code></pre>
|
<p>To insert multiple rows in a single <code>INSERT</code>, don't pass a <code>Series</code> for each column, pass a list of records.</p>
<pre><code>test.insert().values(DataLevels[['Date', 'ticker', 'adj_close']].to_dict('records'))
</code></pre>
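<p>Note that the table columns are lower-case (<code>date</code>) while the DataFrame column is <code>Date</code>, so the keys may need renaming before the insert is executed; a rough sketch (using the question's pre-2.0 <code>engine.execute</code> style):</p>
<pre><code>records = (DataLevels[['Date', 'ticker', 'adj_close']]
           .rename(columns={'Date': 'date'})   # match the table's column name
           .to_dict('records'))
ins = test.insert().values(records)
mysql_engine.execute(ins)
</code></pre>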
|
python|mysql|pandas|dataframe|sqlalchemy
| 2
|
7,917
| 40,822,250
|
how to save Python pandas data into excel file?
|
<p>I am trying to load data from a web source and save it as an Excel file, but I am not sure how to do it. What should I do? The original dataframe has different columns. Let's say that I am trying to save the 'Open' column.</p>
<pre><code>import matplotlib.pyplot as plt
import pandas_datareader.data as web
import datetime
import pandas as pd
def ViewStockTrend(compcode):
start = datetime.datetime(2015,2,2)
end = datetime.datetime(2016,7,13)
stock = web.DataReader(compcode,'yahoo',start,end)
print(stock['Open'])
compcode = ['FDX','GOOGL','FB']
aa= ViewStockTrend(compcode)
</code></pre>
|
<p>Once you have made the pandas dataframe just use <code>to_excel</code> on the entire thing if you want:</p>
<p><code>aa.to_excel('output/filename.xlsx')</code></p>
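<p>In the question's snippet <code>ViewStockTrend</code> only prints and returns nothing, so <code>aa</code> would be <code>None</code>. A rough sketch that returns the frame before exporting, keeping the question's own data source:</p>
<pre><code>def ViewStockTrend(compcode):
    start = datetime.datetime(2015, 2, 2)
    end = datetime.datetime(2016, 7, 13)
    stock = web.DataReader(compcode, 'yahoo', start, end)
    return stock['Open']          # return the data instead of only printing it

aa = ViewStockTrend(['FDX', 'GOOGL', 'FB'])
aa.to_excel('output/filename.xlsx')
</code></pre>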
|
python|pandas|dataframe|pandas-datareader
| 3
|
7,918
| 54,244,613
|
How to transpose and transform to "one-hot-encode" style from a pandas column containing a set?
|
<p>I want to break down a pandas column similarly to this <a href="https://stackoverflow.com/questions/45312377/how-to-one-hot-encode-from-a-pandas-column-containing-a-list">question</a>:</p>
<p>I want to transpose and then "one-hot-encode" style. For example, taking dataframe <strong>df</strong></p>
<pre><code>Col1 Col2
C {Apple, Orange, Banana}
A {Apple, Grape}
B {Banana}
</code></pre>
<p>I would like to convert this and get:</p>
<p><strong>df</strong> </p>
<pre><code>Col1 C A B
Apple 1 1 0
Orange 1 0 0
Banana 1 0 1
Grape 0 1 0
</code></pre>
<p>How can I use pandas/Sklearn to achieve this?</p>
|
<p>Here is a possible answer (assuming Col1 is your index):</p>
<pre><code>from sklearn.preprocessing import MultiLabelBinarizer
mlb = MultiLabelBinarizer()
one_hot_encoded = pd.DataFrame(mlb.fit_transform(df['Col2']), columns=mlb.classes_, index=df.index)
one_hot_encoded.T
</code></pre>
|
python|pandas|numpy|scikit-learn|sklearn-pandas
| 2
|
7,919
| 66,299,739
|
Append a tensor vector to tensor matrix
|
<p>I have a tensor matrix to which I simply want to append a tensor vector as another column.</p>
<p>For example:</p>
<pre><code> X = torch.randint(100, (100,5))
x1 = torch.from_numpy(np.array(range(0, 100)))
</code></pre>
<p>I've tried <code>torch.cat([x1, X])</code> with various numbers for both <code>axis</code> and <code>dim</code> but it always says that the dimensions don't match.</p>
|
<p>You can also use <a href="https://pytorch.org/docs/stable/generated/torch.hstack.html#torch-hstack" rel="nofollow noreferrer"><code>torch.hstack</code></a> to combine and <a href="https://pytorch.org/docs/stable/generated/torch.unsqueeze.html" rel="nofollow noreferrer"><code>unsqueeze</code></a> for reshape <code>x1</code></p>
<pre class="lang-py prettyprint-override"><code>torch.hstack([X, x1.unsqueeze(1)])
</code></pre>
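<p>The original <code>torch.cat</code> attempt works as well once the vector is given a second dimension, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>torch.cat([X, x1.unsqueeze(1)], dim=1)   # result has shape (100, 6)
</code></pre>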
|
python|pytorch
| 3
|
7,920
| 66,047,584
|
Replacing specific values in a Pandas dataframe basing on the values of another column
|
<p>I have a DataFrame similar to this:</p>
<pre><code>Chr Start_Position End_Position Type
1 10000 10001 SNP
5 45321 45327 INS
12 44700 44710 DEL
</code></pre>
<p>I need to change the values of some cells depending on what <code>Type</code> is:</p>
<ul>
<li><code>SNP</code> needs <code>Start_Position</code> + 1</li>
<li><code>INS</code> needs <code>End_Position</code> + 1</li>
<li><code>DEL</code> needs <code>Start_Position</code> + 1</li>
</ul>
<p>My issue is that my current solutions are extremely verbose. What I've tried (<code>dataframe</code> is the original data source):</p>
<pre><code>snp_records = dataframe.loc[dataframe["Type"] == "SNP", :]
del_records = dataframe.loc[dataframe["Type"] == "DEL", :]
ins_records = dataframe.loc[dataframe["Type"] == "INS", :]
snp_records.loc[:, "Start_Position"] = snp_records["Start_Position"].add(1)
del_records.loc[:, "Start_Position"] = del_records["Start_Position"].add(1)
ins_records.loc[:, "End_Position"] = ins_records["End_Position"].add(1)
dataframe.loc[snp_records.index, "Start_Position"] = snp_records["Start_Position"]
dataframe.loc[del_records.index, "Start_Position"] = del_records["Start_Position"]
dataframe.loc[ins_records.index, "End_Position"] = ins_records["End_Position"]
</code></pre>
<p>As I have to do this for more columns than the example (similar concept, though) this becomes very long and verbose, and possibly error prone (in fact, I've made several mistakes just typing down the example) due to all the duplicated lines.</p>
<p><a href="https://stackoverflow.com/questions/57089020/set-values-of-a-column-in-pandas-dataframe-based-on-values-in-other-columns">This question is similar to mine</a>, but there the values are predefined, while I need to get them from the data themselves.</p>
|
<p>You can just do:</p>
<pre><code>df.loc[df['Type'].isin(['SNP','DEL']), 'Start_Position'] += 1
df.loc[df['Type'].eq('INS'), 'End_Position'] += 1
</code></pre>
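<p>If more columns follow the same pattern, one way (a sketch) is to map each <code>Type</code> to its per-column offset and add everything in one pass:</p>
<pre><code>start_bump = df['Type'].map({'SNP': 1, 'DEL': 1, 'INS': 0})
end_bump   = df['Type'].map({'SNP': 0, 'DEL': 0, 'INS': 1})
df['Start_Position'] += start_bump
df['End_Position']   += end_bump
</code></pre>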
|
python|pandas|dataframe
| 4
|
7,921
| 52,811,542
|
when trying to run tf.confusion matrix it gives end of sequence error
|
<p>I am preparing my dataset using new tensoflow input pipeline, here is my code:</p>
<pre><code>train_data = tf.data.Dataset.from_tensor_slices(train_images)
train_labels = tf.data.Dataset.from_tensor_slices(train_labels)
train_set = tf.data.Dataset.zip((train_data,train_labels)).shuffle(500).batch(30)
valid_data = tf.data.Dataset.from_tensor_slices(valid_images)
valid_labels = tf.data.Dataset.from_tensor_slices(valid_labels)
valid_set = tf.data.Dataset.zip((valid_data,valid_labels)).shuffle(200).batch(20)
test_data = tf.data.Dataset.from_tensor_slices(test_images)
test_labels = tf.data.Dataset.from_tensor_slices(test_labels)
test_set = tf.data.Dataset.zip((test_data, test_labels)).shuffle(200).batch(20)
# create general iterator
iterator = tf.data.Iterator.from_structure(train_set.output_types, train_set.output_shapes)
next_element = iterator.get_next()
train_init_op = iterator.make_initializer(train_set)
valid_init_op = iterator.make_initializer(valid_set)
test_init_op = iterator.make_initializer(test_set)
</code></pre>
<p>Now I want to create a confusion matrix for the validation set of my CNN model after training. Here is what I tried:</p>
<pre><code>sess.run(valid_init_op)
valid_img, valid_label = next_element
finalprediction = tf.argmax(train_predict, 1)
actualprediction = tf.argmax(valid_label, 1)
confusion_matrix = tf.confusion_matrix(labels=actualprediction,predictions=finalprediction,
num_classes=num_classes,dtype=tf.int32,name=None, weights=None)
print(sess.run(confusion_matrix, feed_dict={keep_prob: 1.0}))
</code></pre>
<p>In this way it creates the confusion matrix, but only for one batch of the validation set. So I tried to collect all validation-set batches in a list and then use the list to create the confusion matrix:</p>
<pre><code>val_label_list = []
sess.run(valid_init_op)
for i in range(valid_iters):
while True:
try:
elem = sess.run(next_element[1])
val_label_list.append(elem)
except tf.errors.OutOfRangeError:
print("End of append.")
break
val_label_list = np.array(val_label_list)
val_label_list = val_label_list.reshape(40,2)
</code></pre>
<p>and now the <code>val_label_list</code> contain the labels for all batches of my validation set and I can use it to create confusion matrix:</p>
<pre><code>finalprediction = tf.argmax(train_predict, 1)
actualprediction = tf.argmax(val_label_list, 1)
confusion = tf.confusion_matrix(labels=actualprediction,predictions=finalprediction,
num_classes=num_classes, dtype=tf.int32,name="Confusion_Matrix")
</code></pre>
<p>But now when I want to run the confusion matrix and print it:</p>
<pre><code>print(sess.run(confusion, feed_dict={keep_prob: 1.0}))
</code></pre>
<p>it gives me an error:</p>
<pre><code>OutOfRangeError: End of sequence
[[Node: IteratorGetNext_5 = IteratorGetNext[output_shapes=[[?,10,32,32], [?,2]], output_types=[DT_FLOAT, DT_FLOAT], _device="/job:localhost/replica:0/task:0/device:CPU:0"](Iterator_5)]]
</code></pre>
<p>Can anyone tell me how to deal with this error, or suggest another solution that solves my original problem?</p>
|
<p>The problem is related to the graph execution flow.
Look at this line:</p>
<pre><code>print(sess.run(confusion, feed_dict={keep_prob: 1.0}))
</code></pre>
<p>You are running the graph to get the 'confusion' value, so all dependent nodes will also be executed. Then:</p>
<pre><code>finalprediction = tf.argmax(train_predict, 1)
actualprediction = tf.argmax(val_label_list, 1)
confusion = tf.confusion_matrix(...)
</code></pre>
<p>I guess that your call to train_predict tries to obtain a new element from the training iterator, which has already been completely consumed, and that is when the error is triggered.</p>
<p>You should compute the confusion matrix directly in the loop and accumulate the results in a variable.</p>
<pre><code>sess.run(valid_init_op)
confusion_matrix = np.zeros((n_labels, n_labels))
while True:
try:
conf_matrix = sess.run(confusion)
confusion_matrix += conf_matrix
except tf.errors.OutOfRangeError:
print("End of append.")
break
</code></pre>
|
python-3.x|tensorflow
| 1
|
7,922
| 52,835,711
|
add numeric prefix to pandas dataframe column names
|
<p>How would I add a variable numeric prefix to dataframe column names?</p>
<p>If I have a DataFrame df</p>
<pre><code> colA colB
0 A X
1 B Y
2 C Z
</code></pre>
<p>How would I rename the columns according to their position? Something like this:</p>
<pre><code> 1_colA 2_colB
0 A X
1 B Y
2 C Z
</code></pre>
<p>The actual number of columns is too large to rename manually.</p>
<p>Thanks for the help</p>
|
<p>Use <code>enumerate</code> for count with <code>f-string</code>s and list comprehension:</p>
<pre><code>#python 3.6+
df.columns = [f'{i}_{x}' for i, x in enumerate(df.columns, 1)]
#python below 3.6
#df.columns = ['{}_{}'.format(i, x) for i, x in enumerate(df.columns, 1)]
print (df)
1_colA 2_colB
0 A X
1 B Y
2 C Z
</code></pre>
|
python-3.x|pandas
| 4
|
7,923
| 58,584,908
|
How to make pandas pivot table look like excel pivot table
|
<p><a href="https://i.stack.imgur.com/pbxS6.png" rel="nofollow noreferrer">pivot table in excel</a></p>
<pre><code>df=pd.DataFrame({'Fruit':['Apple', 'Orange', 'Apple', 'Apple', 'Orange', 'Orange'],
'Variety':['Fuji', 'Navel', 'Honeycrisp', 'Gala', 'Tangerine', 'Clementine'],
'Count':[2, 5, 5, 1, 8, 4]})
df_pvt=pd.pivot_table(df, index=['Fruit', 'Variety'], values=['Count'], aggfunc=np.sum)
df_final=pd.concat([
d.append(d.sum().rename((k, 'SubTotal')))
for k, d in df_pvt.groupby(level=0)
]).append(df_pvt.sum().rename(('','GrandTotal')))
</code></pre>
<p><a href="https://stackoverflow.com/questions/41383302/pivot-table-subtotals-in-pandas">subtotal</a></p>
<pre><code>df_final.to_excel('pvt.xlsx')
</code></pre>
<p>yield this <a href="https://i.stack.imgur.com/iaMpC.png" rel="nofollow noreferrer">exported to excel</a></p>
<p>1) How can I get the pivot table generated in pandas to look like the Excel one?
2) How do I get the subtotals in each of the top rows like Excel does?</p>
<p>Thank you.</p>
|
<p>IIUC,</p>
<pre><code>df_grand = df[['Count']].sum().T.to_frame(name='Count').assign(Fruit='Grand Total', Variety='').set_index(['Fruit','Variety'])
df_sub = df.groupby('Fruit')[['Count']].sum().assign(Variety='').set_index('Variety', append=True)
df_excel = df.set_index(['Fruit','Variety']).append(df_sub).sort_index().append(df_grand)
</code></pre>
<p>Output:</p>
<pre><code> Count
Fruit Variety
Apple 8
Fuji 2
Gala 1
Honeycrisp 5
Orange 17
Clementine 4
Navel 5
Tangerine 8
Grand Total 25
</code></pre>
|
python|excel|pandas
| 0
|
7,924
| 58,374,112
|
Cannot remove 'Unnamed:' row
|
<p>I am reading an Excel sheet and got the result shown in the image below,</p>
<p><a href="https://i.stack.imgur.com/BbuN9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BbuN9.png" alt="enter image description here"></a></p>
<pre><code>df_result = pd.read_excel(file_path, sheet_name='Person', index=False)
</code></pre>
<p>I tried to remove the 'Unnamed:' columns by using</p>
<pre><code>df_result.drop([0], axis=0, inplace=True)
</code></pre>
<p>But this does not remove them.</p>
<p><strong>Edit:</strong></p>
<p>Tried this too,</p>
<pre><code>df_result.drop([0], inplace=True)
</code></pre>
<p>But that removes the 3rd row (given <code>'0'</code> only).</p>
|
<p>You can skip a number of rows before the header, here 1 row (this works because empty rows are excluded by default):</p>
<pre><code>df_result = pd.read_excel(file_path, sheet_name='Person', index=False, skiprows=1)
</code></pre>
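<p>Alternatively, if the header row itself is correct and only the extra <code>Unnamed: n</code> columns need removing, they can be dropped by name after reading (a sketch):</p>
<pre><code>df_result = df_result.loc[:, ~df_result.columns.str.startswith('Unnamed')]
</code></pre>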
|
python|excel|pandas
| 1
|
7,925
| 69,069,041
|
Finding string length and append with another column in dictionary format from csv in python
|
<p>My dataframe -
<a href="https://i.stack.imgur.com/FrrJO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FrrJO.png" alt="enter image description here" /></a></p>
<p>Basically, I want to append the house and district columns.</p>
<ul>
<li>Then find the string length for both columns; in this case House 263 --> (0, 8), dhaka (10,14)</li>
<li>also, attach their associated labels (label1 column and label2)</li>
<li>doing it for all the rows</li>
</ul>
<p>My expected output format is: <code>[('House 263 dhaka', {'entities': [[(0, 8)], 'holding_number'], [(10,14), 'district']})</code></p>
<p>How do I do it?</p>
|
<p>Try using this list comprehension:</p>
<pre><code>>>> [(k, {'entities': [[[0, len(k.rpartition(' ')[0]) - 1], v['label1']], [(k.rfind(' ') + 1, len(k) - 1), v['label2']]]}) for k, v in df.set_index(['house', 'district']).set_axis(df[['house', 'district']].agg(' '.join, axis=1)).to_dict('index').items()]
[('House 163 dhaka', {'entities': [[[0, 8], 'holding_number'], [(10, 14), 'district']]}), ('House 31 comilla', {'entities': [[[0, 7], 'holding_number'], [(9, 15), 'district']]}), ('House 193/A chittagong', {'entities': [[[0, 10], 'holding_number'], [(12, 21), 'district']]})]
>>>
</code></pre>
|
python|pandas|list|dataframe|dictionary
| 1
|
7,926
| 68,898,150
|
Speeding up a complex python dataframe transposition
|
<p>I've got three tables. I need to manipulate table 1 and 2 to get table 3.</p>
<p>Table 1</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>GENEID</th>
<th>person_1</th>
<th>person_2</th>
</tr>
</thead>
<tbody>
<tr>
<td>ENSG001</td>
<td>0.01</td>
<td>1.6</td>
</tr>
<tr>
<td>ENSG002</td>
<td>1.25</td>
<td>-2.2</td>
</tr>
</tbody>
</table>
</div>
<p>Table 2</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ENSG</th>
<th>Chromosome</th>
<th>EntrezGene</th>
<th>GeneSymbol</th>
</tr>
</thead>
<tbody>
<tr>
<td>ENSG001</td>
<td>1</td>
<td>001</td>
<td>Symbol1</td>
</tr>
<tr>
<td>ENSG002</td>
<td>2</td>
<td>002</td>
<td>Symbol2</td>
</tr>
</tbody>
</table>
</div>
<p>Table 3</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>GENEID</th>
<th>GeneSymbol</th>
<th>EntrezGene</th>
<th>person_id</th>
<th>expression</th>
</tr>
</thead>
<tbody>
<tr>
<td>ENSG001</td>
<td>Symbol1</td>
<td>001</td>
<td>person_1</td>
<td>0.01</td>
</tr>
<tr>
<td>ENSG002</td>
<td>Symbol2</td>
<td>002</td>
<td>person_1</td>
<td>1.25</td>
</tr>
<tr>
<td>ENSG001</td>
<td>Symbol1</td>
<td>001</td>
<td>person_2</td>
<td>1.6</td>
</tr>
<tr>
<td>ENSG002</td>
<td>Symbol2</td>
<td>002</td>
<td>person_2</td>
<td>-2.2</td>
</tr>
</tbody>
</table>
</div>
<p>I've got code that does this. But it is unacceptably slow on large files. I'm dealing with files averaging about 800 columns wide and 60,000 rows deep. I'm not sure how to speed it up.</p>
<pre><code>import pandas as pd
import tqdm
filenames = ["filename1", "filename2"]
merge_df = pd.read_csv("mergefile_123.tsv", sep="\t", dtype='str')
def rearrange_dataframe(df):
"""Transpose the dataframe so that the person_ids are the index, and the columns are the ensembleIds"""
df_new_index = df.set_index('GENEID').copy()
transpose_df = df_new_index.T
return transpose_df
def create_fixed_gene_df(trans_df):
concat_df = pd.DataFrame()
for index, rows in trans_df.iterrows():
new_dataframe = pd.DataFrame(data = {"GENEID":rows.index.tolist(), "person_id":[rows.name] * len(rows.index), "expression":rows.values})
if concat_df.empty:
concat_df = new_dataframe
else:
concat_df = pd.concat([concat_df, new_dataframe])
return concat_df
for f in tqdm.tqdm(filenames):
df = pd.read_csv(f, sep="\t", dtype='str')
transposed_df = rearrange_dataframe(df)
fixed_gene_df = create_fixed_gene_df(transposed_df)
merge_symbol_df = fixed_gene_df.merge(merge_df[["ENSG","EntrezGene ID","HGNC symbol"]],
left_on="GENEID", right_on="ENSG",how="left")
renamed_df = merge_symbol_df.rename(columns={"EntrezGene ID":"locus", "HGNC symbol":"geneSymbol"})
final_df = renamed_df[["GENEID","geneSymbol","locus","person_id","expression"]]
final_df.to_csv("{}_transposed_file.tsv".format(f.split(".tsv")[0]),sep="\t",index=False)
</code></pre>
<p>Any tips on how to optimize the manipulations would be really helpful. And any resources I can read up on so I can get better at these would be great too. Thanks for reading!</p>
|
<p>You can use <code>melt</code> on <code>df1</code> before merging:</p>
<pre><code>>>> df1.melt(id_vars='GENEID', value_vars=['person_1', 'person_2'],
var_name='person_id', value_name='expression') \
.merge(df2, left_on='GENEID', right_on='ENSG') \
.drop(columns='ENSG')
GENEID person_id expression Chromosome EntrezGene GeneSymbol
0 ENSG001 person_1 0.01 1 1 Symbol1
1 ENSG001 person_2 1.60 1 1 Symbol1
2 ENSG002 person_1 1.25 2 2 Symbol2
3 ENSG002 person_2 -2.20 2 2 Symbol2
</code></pre>
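<p>For the real, roughly 800-column files you would not list <code>value_vars</code> explicitly; omitting it melts every column except the id column, something like:</p>
<pre><code>long_df = (df1.melt(id_vars='GENEID', var_name='person_id', value_name='expression')
              .merge(df2, left_on='GENEID', right_on='ENSG', how='left')
              .drop(columns='ENSG'))
long_df.to_csv('transposed_file.tsv', sep='\t', index=False)  # illustrative output name
</code></pre>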
|
python|pandas|dataframe|optimization
| 4
|
7,927
| 69,134,379
|
How to make prediction based on model Tensorflow lite?
|
<p>I would like to make a prediction with my TensorFlow Lite model. For that, I have already trained my model and saved it as <code>tflite</code>. Now I would like to make a prediction with my trained model. How can I do that? I've tried something but it is showing an error message</p>
<blockquote>
<p>hand = model_hands.predict(X)[0] - 'str' object has no attribute 'predict'</p>
</blockquote>
<pre><code>model_hands = 'converted_model.tflite'
with open(model_hands, 'rb') as fid:
tflite_model = fid.read()
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
</code></pre>
<pre><code>right = result.rightHand.hand
row = list(np.array([[res.x, res.y, res.z] for res in right]).flatten())
</code></pre>
<pre><code>X = pd.DataFrame([row])
hand = model_hands.predict(X)[0]
e_result = np.argmax(hand)
prob = str(round(hand[np.argmax(hand)], 2))
</code></pre>
|
<p>The problem is in the line <code>hand = model_hands.predict(X)[0]</code>. You are trying to call function <code>predict</code> on a string you defined above as <code>model_hands = 'converted_model.tflite'</code>.</p>
<p>I believe what you want to do is load the model using an Interpreter, set the input tensor, and invoke it. Take a look at the following tutorial for more information: <a href="https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python" rel="nofollow noreferrer">https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python</a></p>
<pre><code>import numpy as np
import tensorflow as tf

# Load the TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_path='converted_model.tflite')
interpreter.allocate_tensors()
# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# Set up your input data.
right = result.rightHand.hand
input_data = np.array([[res.x, res.y, res.z] for res in right]).flatten()
# Invoke the model on the input data
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
# Get the result
output_data = interpreter.get_tensor(output_details[0]['index'])
print(output_data)

# use the interpreter's output (first batch element) instead of the undefined `hand`
hand = output_data[0]
e_result = np.argmax(hand)
prob = str(round(hand[np.argmax(hand)], 2))
</code></pre>
<p>Note that you may have to modify the code snippet. I did not test it. But the gist of it is that you have to use <code>set_tensor</code>, <code>invoke</code>, and <code>get_tensor</code> on the interpreter.</p>
|
python|tensorflow
| 0
|
7,928
| 68,998,096
|
Why is tf.GradientTape.jacobian giving None?
|
<p>I'm using the IRIS dataset, and am following this official tutorial: <a href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/customization/custom_training_walkthrough.ipynb" rel="nofollow noreferrer">Custom training: walkthrough</a></p>
<p>In the Training loop, I am trying to gather the model outputs and weights in each <code>epoch%50==0</code> in the lists <code>m_outputs_mod50, gather_weights</code> respectively:</p>
<pre><code># Keep results for plotting
train_loss_results = []
train_accuracy_results = []
m_outputs_mod50 = []
gather_weights = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# gather_kernel(model)
# Training loop - using batches of 32
for x, y in train_dataset:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
# Compare predicted label to actual label
# training=True is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
# pred_hist.append(model.predict(x))
if epoch % 50 == 0:
m_outputs_mod50.append(model(x))
gather_weights.append(model.weights)
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
</code></pre>
<p>Running the above and trying to even get the jacobian at epoch 0 (using <code>m_outputs_mod50[0]</code> and <code>gather_weights[0]</code>) using</p>
<pre><code>with tf.GradientTape() as tape:
print(tape.jacobian(target = m_outputs_mod50[0], sources = gather_weights[0]))`
</code></pre>
<p>I get a list of None as the output.</p>
<p>Why?</p>
|
<p>You need to understand how the GradientTape operates. For that, you can follow the guide: <a href="https://www.tensorflow.org/guide/autodiff" rel="nofollow noreferrer">Introduction to gradients and automatic differentiation</a>. Here is an excerpt:</p>
<blockquote>
<p>TensorFlow provides the <code>tf.GradientTape</code> API for automatic
differentiation; that is, computing the gradient of a computation with
respect to some inputs, usually <code>tf.Variables</code>. TensorFlow "records"
relevant operations executed inside the context of a <code>tf.GradientTape</code>
onto a "tape". TensorFlow then uses that tape to compute the gradients
of a "recorded" computation using reverse mode differentiation.</p>
</blockquote>
<p>To compute a gradient (or a jacobian), the tape needs to record the operations that are executed in its context. Then, outside its context, once the <em>forward pass</em> has been executed, it's possible to use the tape to compute the gradient/jacobian.</p>
<p>You could use something like that:</p>
<pre><code>if epoch % 50 == 0:
with tf.GradientTape() as tape:
out = model(x)
jacobian = tape.jacobian(out, model.weights)
</code></pre>
|
tensorflow|keras|gradient|gradienttape
| 0
|
7,929
| 44,394,346
|
Matplotlib - plot wont show up
|
<p>I want to plot a function graph (using matplotlib) when a button is pressed, to do so I wrote the following code:</p>
<pre><code>##--IMPORT
#Tkinter
from tkinter import Tk, ttk
from tkinter import Frame, LabelFrame, Button
from tkinter import FALSE
#Numpy
from numpy import linspace
#Sympy
from sympy import symbols,sympify,diff,N,log
#MathPlotLib
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg, NavigationToolbar2TkAgg
from matplotlib.figure import Figure
_x = symbols("x")
_sympyFunction = None
_SP_mainSubPlot = None
def pr_draw(plotToDrawTo):
_sympyFunction = sympify("log(x) + x")
valuesRange = linspace(0.01, 3, 100)
x = []
y = []
#Calculate y and x values
for i in range(0, len(valuesRange)):
tempValue = N(_sympyFunction.subs(_x,valuesRange[i]))
x.append(float(valuesRange[i]))
y.append(float(tempValue))
#Draw function graph
plotToDrawTo.plot(x,y)
##--MAIN
if __name__== "__main__":
_root = Tk()
_root.title("Grafico Approsimativo")
_root.resizable(width = FALSE, height = FALSE)
_mainFrame = Frame(_root, bg = "black")
_mainFrame.pack(fill = "both", expand = True)
#Frames
#Main Left
_F_LeftMainFrame = Frame(_mainFrame)
_F_LeftMainFrame.grid(row = 0, column = 0, sticky = "nw")
_F_RightMainFrame = Frame(_mainFrame, bg = "violet")
_F_RightMainFrame.grid(row = 0, column = 2, sticky = "ne")
#Left Content--------------------------
_B_calculate = Button(_F_LeftMainFrame, text = "Draw", command = lambda: pr_draw(_SP_mainSubPlot))
_B_calculate.grid(row = 0, column = 0, padx = 5, pady = 5, sticky = "w")
#Right Content--------------------------
_F_mainPlotWindow = Figure(figsize = None, dpi = 100)
_SP_mainSubPlot = _F_mainPlotWindow.add_subplot(111)
_SP_mainSubPlot.grid(True)
#HERE
#Set master frame for Figure Obj
canvas = FigureCanvasTkAgg(_F_mainPlotWindow, master = _F_RightMainFrame)
canvas.get_tk_widget().pack()
</code></pre>
<p>The problem is that when the button is pressed, nothing shows up in the plot window. The only way I could get this to work is by calling pr_draw(_SP_mainSubPlot) where I inserted the #HERE line: if the function is called there it works, but not from the button. Why?</p>
|
<p>You would need to redraw the canvas after you have plotted to it.</p>
<p>Adding the line </p>
<pre><code>plotToDrawTo.figure.canvas.draw_idle()
</code></pre>
<p>at the end of your <code>pr_draw</code> function should do that. </p>
<p>Note that I also had to add <code>_root.mainloop()</code> at the end of the script to actually show the window.</p>
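<p>For clarity, here is how the whole <code>pr_draw</code> function could look with the redraw added (a sketch based on the code in the question, relying on the imports and the global <code>_x</code> defined there):</p>
<pre><code>def pr_draw(plotToDrawTo):
    # Same computation as in the question, just condensed
    func = sympify("log(x) + x")
    values_range = linspace(0.01, 3, 100)
    xs = [float(v) for v in values_range]
    ys = [float(N(func.subs(_x, v))) for v in values_range]
    plotToDrawTo.plot(xs, ys)
    # Redraw the Tk-embedded canvas so the new curve actually appears
    plotToDrawTo.figure.canvas.draw_idle()
</code></pre>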
|
numpy|matplotlib|python-3.4|sympy
| 0
|
7,930
| 44,516,609
|
Tensorflow : What is the relationship between .ckpt file and .ckpt.meta and .ckpt.index , and .pb file
|
<p>I used <code>saver=tf.train.Saver()</code> to save the model that I trained, and I get three kinds of files named:</p>
<ul>
<li>.ckpt.meta </li>
<li>.ckpt.index</li>
<li>.ckpt.data</li>
</ul>
<p>And a file called:</p>
<ul>
<li>checkpoint</li>
</ul>
<p>What is the connection with the <strong>.ckpt</strong> file? </p>
<p>I saw someone save a model with only a .ckpt file, but I don't know how to do that.
How can I save the model as a .pb file?</p>
|
<ul>
<li><p>the .ckpt file is the old version output of <code>saver.save(sess)</code>, which is the equivalent of your <code>.ckpt-data</code> (see below)</p></li>
<li><p>the "checkpoint" file is only here to tell some TF functions which is the latest checkpoint file.</p></li>
<li><p><code>.ckpt-meta</code> contains the metagraph, i.e. the structure of your computation graph, without the values of the variables (basically what you can see in tensorboard/graph).</p></li>
<li><p><code>.ckpt-data</code> contains the values for all the variables, without the structure. To restore a model in python, you'll usually use the meta and data files with (but you can also use the <code>.pb</code> file):</p>
<pre><code>saver = tf.train.import_meta_graph(path_to_ckpt_meta)
saver.restore(sess, path_to_ckpt_data)
</code></pre></li>
<li><p>I don't know exactly for <code>.ckpt-index</code>, I guess it's some kind of index needed internally to map the two previous files correctly. Anyway it's not really necessary usually, you can restore a model with only <code>.ckpt-meta</code> and <code>.ckpt-data</code>.</p></li>
<li><p>the <code>.pb</code> file can save your whole graph (meta + data). To load and use (but not train) a graph in C++ you'll usually use a <code>.pb</code> created with <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/tools/freeze_graph.py" rel="noreferrer"><code>freeze_graph</code></a>, which builds the <code>.pb</code> file from the meta and data (a rough sketch of the same idea is shown after this list). Be careful: (at least in previous TF versions and for some people) the Python function provided by <code>freeze_graph</code> did not work properly, so you'd have to use the script version. Tensorflow also provides a <code>tf.train.Saver.to_proto()</code> method, but I don't know what it does exactly.</p></li>
</ul>
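<p>As a rough sketch of that freezing step (TF 1.x style; the file names and the <code>'output_node'</code> name here are placeholders, not something taken from your model), creating a <code>.pb</code> from the meta and data files can look like this:</p>
<pre><code>import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt.meta')   # graph structure
    saver.restore(sess, 'model.ckpt')                        # variable values
    # Replace 'output_node' with the actual name of your graph's output op
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, tf.get_default_graph().as_graph_def(), ['output_node'])
    with tf.gfile.GFile('model.pb', 'wb') as f:
        f.write(frozen.SerializeToString())
</code></pre>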
<p>There are a lot of questions here about how to save and restore a graph. See the answer <a href="https://stackoverflow.com/a/42103559/7456923">here</a> for instance, but be careful that the two cited tutorials, though really helpful, are far from perfect, and a lot of people still seem to struggle to import a model in c++.</p>
<p><strong>EDIT</strong>:
it looks like <a href="https://stackoverflow.com/a/43639305/7456923">you can also use the .ckpt files in c++ now,</a> so I guess you don't necessarily need the .pb file any more.</p>
|
python|tensorflow
| 44
|
7,931
| 60,828,618
|
Is there a way to make custom function in pandas aggregation function?
|
<p>I want to apply a custom function when aggregating a DataFrame,
e.g. for the following DataFrame:</p>
<pre><code> index City Age
0 1 A 50
1 2 A 24
2 3 B 65
3 4 A 40
4 5 B 68
5 6 B 48
</code></pre>
<p>Function to apply</p>
<pre><code>def count_people_above_60(age):
** *** #i dont know if the age can or can't be passed as series or list to perform any operation later
return count_people_above_60
</code></pre>
<p>I'm expecting to do something like:</p>
<pre><code>df.groupby(['City']).agg({"Age" : ["mean", count_people_above_60]})
</code></pre>
<p>expected Output</p>
<pre><code>City Mean People_Above_60
A 38 0
B 60.33 2
</code></pre>
<hr>
|
<p>If performance is important, create a new column filled with the compared values converted to <code>integer</code>s, so that the count can be computed with the <code>sum</code> aggregation:</p>
<pre><code>df = (df.assign(new = df['Age'].gt(60).astype(int))
.groupby(['City'])
.agg(Mean= ("Age" , "mean"), People_Above_60= ('new',"sum")))
print (df)
Mean People_Above_60
City
A 38.000000 0
B 60.333333 2
</code></pre>
<p>Your solution should be changed to compare the values and <code>sum</code> them, but it is slow if there are many groups or a large <code>DataFrame</code>:</p>
<pre><code>def count_people_above_60(age):
return (age > 60).sum()
df = (df.groupby(['City']).agg(Mean=("Age" , "mean"),
People_Above_60=('Age',count_people_above_60)))
print (df)
Mean People_Above_60
City
A 38.000000 0
B 60.333333 2
</code></pre>
|
python|python-3.x|pandas|aggregate|pandas-groupby
| 2
|
7,932
| 71,608,700
|
Count occurrences of a string ocurring in multiple columns at the same time
|
<p>I have a data frame that looks like this:</p>
<pre><code> data_ID col1 col2
0 001 Word1 Word1
1 002 Word2 Word2
2 003 Word1 Word3
3 004 Word1 Word1
</code></pre>
<p>I would like to count the number of times <code>Word1</code> appears in both <code>col1</code> and <code>col2</code>. For this dataset, the answer would be 2, since <code>Word1</code> appears in both <code>col1</code> and <code>col2</code> twice.</p>
|
<p>Just compare with <code>==</code> or <code>.eq()</code>, and then use <code>all</code> across the rows with <code>axis=1</code>. That'll give a True for each row where <code>col1</code> and <code>col2</code> are both <code>Word1</code>. Then just use <code>sum</code>:</p>
<pre><code>count = df[['col1', 'col2']].eq('Word1').all(axis=1).sum()
</code></pre>
<p>Output:</p>
<pre><code>>>> count
2
</code></pre>
<p>If you want to count all the combinations, an easy solution would be to use <code>value_counts</code>:</p>
<pre><code>all_counts = df[['col1','col2']].value_counts().reset_index()
</code></pre>
<p>Output:</p>
<pre><code>>>> all_counts
col1 col2 0
0 Word1 Word1 2
1 Word1 Word3 1
2 Word2 Word2 1
</code></pre>
<p>Or, if you need a mapping, you could make a <code>MultiIndex</code> and then use <code>value_counts</code>:</p>
<pre><code>all_counts = pd.MultiIndex.from_arrays(df[['col1','col2']].to_numpy().T).value_counts()
</code></pre>
<p>Output:</p>
<pre><code>>>> all_counts
(Word1, Word1) 2
(Word2, Word2) 1
(Word1, Word3) 1
dtype: int64
>>> all_counts[('Word1', 'Word1')]
2
</code></pre>
|
python|pandas
| 3
|
7,933
| 71,564,992
|
Aggregating in pandas with two different identification columns
|
<p>I am trying to aggregate a dataset with purchases; I have shortened the example in this post to keep it simple. The purchases are distinguished based on two different columns used to identify both customer and transaction. The Reference refers to the same transaction, while the ID refers to the type of transaction.</p>
<p>I want to sum these records based on ID, while keeping the Reference in mind so that the Size is not double-counted. The example I provide clears it up.</p>
<p>What I tried so far is:</p>
<ul>
<li>df_new = df.groupby(by = ['id'], as_index=False).agg(aggregate)</li>
<li>df_new = df.groupby(by = ['id','ref'], as_index=False).agg(aggregate)</li>
</ul>
<p>Let me know if you have any idea what I can do in pandas, or otherwise in Python.</p>
<p>This is basically what I have,</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Reference</th>
<th>Side</th>
<th>Size</th>
<th>ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alex</td>
<td>0</td>
<td>BUY</td>
<td>2400</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>0</td>
<td>BUY</td>
<td>2400</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>0</td>
<td>BUY</td>
<td>2400</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>1</td>
<td>BUY</td>
<td>3000</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>1</td>
<td>BUY</td>
<td>3000</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>1</td>
<td>BUY</td>
<td>3000</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>2</td>
<td>SELL</td>
<td>4500</td>
<td>1</td>
</tr>
<tr>
<td>Alex</td>
<td>2</td>
<td>SELL</td>
<td>4500</td>
<td>1</td>
</tr>
<tr>
<td>Sam</td>
<td>3</td>
<td>BUY</td>
<td>1500</td>
<td>2</td>
</tr>
<tr>
<td>Sam</td>
<td>3</td>
<td>BUY</td>
<td>1500</td>
<td>2</td>
</tr>
<tr>
<td>Sam</td>
<td>3</td>
<td>BUY</td>
<td>1500</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>What I am trying to achieve is the following,</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Side</th>
<th>Size</th>
<th>ID</th>
</tr>
</thead>
<tbody>
<tr>
<td>Alex</td>
<td>BUY</td>
<td>5400</td>
<td>0</td>
</tr>
<tr>
<td>Alex</td>
<td>SELL</td>
<td>4500</td>
<td>1</td>
</tr>
<tr>
<td>Sam</td>
<td>BUY</td>
<td>1500</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>P.S. The records are not duplicates of each other; what I provide is a simplified version, but in reality 'Name' stands for 20 more columns identifying each row.</p>
<p>P.P.S. My solution was to first aggregate by Reference and then by ID.</p>
|
<p>Use <code>drop_duplicates</code>, <code>groupby</code>, and <code>agg</code>:</p>
<pre><code>new_df = df.drop_duplicates().groupby(['Name', 'Side']).agg({'Size': 'sum', 'ID': 'first'}).reset_index()
</code></pre>
<p>Output:</p>
<pre><code>>>> new_df
Name Side Size ID
0 Alex BUY 5400 0
1 Alex SELL 4500 1
2 Sam BUY 1500 2
</code></pre>
|
python|pandas|sum|logic|aggregate
| 1
|
7,934
| 71,648,305
|
3-dimensional array reshaping? HDF5 dataset type?
|
<p>I have data in the following <strong>shape: (127260, 2, 1250)</strong></p>
<p>The type of this data is <code><HDF5 dataset "data": shape (127260, 2, 1250), type "<f8"></code></p>
<p>The first dimension (127260) is the number of signals, the second dimension (2) is the type of signal, and the third dimension (1250) is the amount of points in each of the signals.</p>
<p>What I wanted to do is <strong>reduce the amount of points for each signal, cut them in half, leave 625 points on each signal, and then have double the amount of signals</strong>.</p>
<p>How to convert HDF5 dataset to something like numpy array and how to do this reshape?</p>
|
<p>If I understand, you want a new dataset with shape: <code>(2*127260, 2, 625)</code>. If so, it's fairly simple to read 2 slices of the dataset into 2 NumPy arrays, create a new array from the slices, then write to a new dataset. Note: reading slices is simple and fast. I would leave the data as-is and do this on-the-fly unless you have a compelling reason to create a new dataset.</p>
<p>Code to do this (where <code>h5f</code> is the h5py file object):</p>
<pre><code>new_arr = np.empty((2*127260, 2, 625))
arr1 = h5f['dataset_name'][:,:, :625]
arr2 = h5f['dataset_name'][:,:, 625:]
new_arr[:127260,:,:] = arr1
new_arr[127260:,:,:] = arr2
h5f.create_dataset('new_dataset_name',data=new_arr)
</code></pre>
<p>Alternately you can do this (and combine 2 steps):</p>
<pre><code>new_arr = np.empty((2*127260, 2, 625))
new_arr[:127260,:,:] = h5f['dataset_name'][:,:, :625]
new_arr[127260:,:,:] = h5f['dataset_name'][:,:, 625:]
h5f.create_dataset('new_dataset_name',data=new_arr)
</code></pre>
<p>Here is a 3rd method. It is the most direct way, and reduces the memory overhead. This is important when you have very large datasets that won't fit in memory.</p>
<pre><code>h5f.create_dataset('new_dataset_name',shape=(2*127260, 2, 625),dtype=float)
h5f['new_dataset_name'][:127260,:,:] = h5f['dataset_name'][:,:, :625]
h5f['new_dataset_name'][127260:,:,:] = h5f['dataset_name'][:,:, 625:]
</code></pre>
<p>Whichever method you choose, I suggest adding an attribute to note the data source for future reference:</p>
<pre><code>h5f['new_dataset_name'].attrs['Data Source'] = 'data sliced from dataset_name'
</code></pre>
|
python|multidimensional-array|reshape|numpy-ndarray|hdf5
| 1
|
7,935
| 42,516,341
|
Why there are missing nodes after graph intersection - NetworkX, igraph, python and r
|
<p>I'm experiencing something strange while trying to obtain the intersection between two networks/graphs. I found missing nodes when I check the resulting intersection and I wish to understand why this is happening.</p>
<p>Originally I'm working with Python 3.5.2 / pandas 0.17.1 on Linux Mint 18, and the dataset and code to reproduce the problem are at the link:
<a href="https://www.dropbox.com/s/m7ur0fhlmw19od0/Dataset_and_code.zip?dl=0" rel="nofollow noreferrer">Dataset and code</a></p>
<p>Both tables (Test_01.ncol and Test_02.ncol attached in the link) are edge lists.</p>
<p>First I try to get the intersection of two graph tables with pandas, with merge function:</p>
<pre><code>import pandas as pd
# Load graphs
test_01 = pd.read_csv("Test_01.ncol",sep=" ") # Load Net 1
test_02 = pd.read_csv("Test_02.ncol",sep=" ") # Load Net 2
pandas_intersect = pd.merge(test_01, test_02, how='inner', on=['i1', 'i2']) # Intersection by column
pandas_nodes = len(set(pandas_intersect['i1'].tolist() + pandas_intersect['i2'].tolist())) # Store the number of nodes
</code></pre>
<p>And then to check if the merge was done without problems I compared the resulting number of nodes with the resulting nodes of NetworkX intersection as follows:</p>
<pre><code># Now test with NetworkX
import networkx as nx
n1 = nx.from_pandas_dataframe(test_01, source="i1", target="i2") # Transform net 1 in NetworkX Graph
n2 = nx.from_pandas_dataframe(test_02, source="i1", target="i2") # Transform net 2 in NetworkX Graph
fn = nx.intersection(n1,n2) # NetworkX Intersection
networkx_nodes = len(fn.nodes()) # Store the number of nodes
# The number of nodes are different!!!
pandas_nodes == networkx_nodes
</code></pre>
<p>I thought it might be something with the order of the nodes, which is not canonical in the attached tables, but even when I put the two datasets in canonical order there are missing nodes.</p>
<p>My next hypothesis was that it might be a bug in Pandas or NetworkX, so I tried it in R (version 3.3.2) and igraph (version 1.0.1):</p>
<pre><code>library("igraph")
# Read Tables
g1 <- read.table("Test_01.ncol",header=TRUE)
g2 <- read.table("Test_02.ncol",header=TRUE)
# Transform Tables in Graphs
g1 <- graph_from_data_frame(g1, directed=FALSE)
g2 <- graph_from_data_frame(g2, directed=FALSE)
# Create igraph interssection
gi <- graph.intersection(g1,g2)
# Save graph intersection
write.graph(gi,"Test_igraph_intersection.ncol", format="ncol")
# Reload graph intersection
gi_r <- read.graph("Test_igraph_intersection.ncol",format="ncol")
# Prepare result summary
Methods <- c("igraph_intersection","pandas_table_intersection")
Vertex_counts <- c(vcount(gi),vcount(gi_r))
Edge_counts <- c(ecount(gi),ecount(gi_r))
# Create Summary Table
info_data = data.frame(Methods, Vertex_counts, Edge_counts)
colnames(info_data) <- c("Method","Vertices","Edges")
# Check info_data
info_data
</code></pre>
<p>But when I take a look at info_data the result is the same.</p>
<p>I understand that the number of nodes may decrease because of the intersection procedure, but why does this happen right after I convert the result back to table format in Python, and after I save the file and then load it again with igraph? Or am I doing something wrong?</p>
<p>If someone can explain what's happening in Python or R, I'd appreciate it. I really need to understand why this is happening and whether I can trust these intersections to continue my work.</p>
|
<p>The reason is that the graphs are undirected. <code>intersection</code> in <code>igraph</code> and <code>networkx</code> treats an <em>I--J</em> tie and a <em>J--I</em> tie as equivalent. The pandas merge-based intersection will only treat exact matches as common (i.e. column 1 in data frame A matches column 1 in data frame B <em>and</em> column 2 in data frame A matches column 2 in data frame B).</p>
<pre><code>library(igraph); library(dplyr)
set.seed(1034)
g1 <- sample_gnp(20, 0.25, directed = F)
set.seed(1646)
g2 <- sample_gnp(20, 0.25, directed = F)
V(g1)$name <- sample(LETTERS, 20)
V(g2)$name <- sample(LETTERS, 20)
g1_el <- as.data.frame(as_edgelist(g1), stringsAsFactors = F)
g2_el <- as.data.frame(as_edgelist(g2), stringsAsFactors = F)
g1g2_inter <- as.data.frame(as_edgelist(intersection(g1,g2)))
ij <- inner_join(g1_el, g2_el)
</code></pre>
<p>At this point, the two data frames show differing numbers of nodes:</p>
<pre><code>> g1g2_inter
V1 V2
1 X E
2 J Y
3 N J
4 O F
5 H Y
6 T J
7 K N
8 K T
9 P F
10 Q N
> ij
V1 V2
1 T J
2 N J
3 J Y
4 X E
</code></pre>
<p>We can get the data frames equal by reversing the order of the columns in one data frame and using <code>inner_join</code> again. This gets the <em>J--I</em> ties that were missed before. Then <code>full_join</code> joins the two partial intersections:</p>
<pre><code>g1g2_fj <- g1_el %>%
     rename(V1 = V2, V2 = V1) %>%  # reverse the column order
inner_join(., g2_el) %>% rename(V1 = V2, V2 = V1) %>%
full_join(., ij) %>% #join with other 'partial' intersection
arrange(V1, V2)
</code></pre>
<p>Now, the <code>igraph</code> intersection matches the fully joined partial intersection:</p>
<pre><code>> g1g2_inter[order(g1g2_inter[,1]),] == g1g2_fj
V1 V2
5 TRUE TRUE
2 TRUE TRUE
7 TRUE TRUE
8 TRUE TRUE
3 TRUE TRUE
4 TRUE TRUE
9 TRUE TRUE
10 TRUE TRUE
6 TRUE TRUE
1 TRUE TRUE
</code></pre>
<p>In essence, yes you can trust the intersection methods of <code>networkx</code> and <code>igraph</code>. They are doing something a bit different to deal with undirected ties.</p>
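<p>If you also want the pandas merge from the question to agree with the graph intersections, one option (a sketch, assuming the node labels in <code>i1</code>/<code>i2</code> are mutually comparable) is to canonicalize each undirected edge so the smaller endpoint always comes first before merging:</p>
<pre><code>import pandas as pd

def canonicalize(df):
    # Put each edge in a fixed order so I--J and J--I become the same row
    swap = df['i1'] > df['i2']
    out = df.copy()
    out.loc[swap, ['i1', 'i2']] = df.loc[swap, ['i2', 'i1']].values
    return out.drop_duplicates(subset=['i1', 'i2'])

pandas_intersect = pd.merge(canonicalize(test_01), canonicalize(test_02),
                            how='inner', on=['i1', 'i2'])
</code></pre>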
|
python|r|pandas|igraph|networkx
| 1
|
7,936
| 42,300,123
|
Combining values from multiple rows into a single row
|
<p>I am working with several tables which have many-to-many relationships. What is the most efficient way to transform this data to ensure that the category column is unique and that all of the corresponding units are combined into a single row?</p>
<pre><code>category unit
A01 97337
A01 97333
A01 97334
A01 97343
A01 26223
A01 26226
A01 22722
A01 93397
A01 97332
A01 97342
A01 97369
A01 97734
A01 97332
P76 97343
P76 26223
P76 27399
P76 27277
P76 27234
P76 27297
P76 27292
P76 22723
P76 93622
P76 27343
P76 27234
P98 97337
</code></pre>
<p>Into this:</p>
<pre><code>category category_units
A01 97337, 97333, 97334, 97343, 26223, 26226, 22722, 93397, 97332, 97342, 97369, 97734, 97332
P76 97343, 26223, 93622, 99733, 27399, 27277, 27234, 27297, 27292
P98 97337
</code></pre>
<p>One row per category (serves as a primary key) where each of the corresponding units are concatenated into a single column with values separated by a comma.</p>
<p>I would be joining this data back to another fact table and eventually the end user would filter for category_units where it 'contains' some value so it would pull up all rows which are associated with that value.</p>
|
<p>You can use <code>groupby</code> with <code>apply</code> and <code>join</code>; if the <code>unit</code> column is numeric, it is necessary to cast it to <code>string</code>:</p>
<pre><code>df1 = (df.groupby('category')['unit']
         .apply(lambda x: ', '.join(x.astype(str)))
         .reset_index())
print (df1)
category unit
0 A01 97337, 97333, 97334, 97343, 26223, 26226, 2272...
1 P76 97343, 26223, 27399, 27277, 27234, 27297, 2729...
2 P98 97337
</code></pre>
<p>Another solution with casting first:</p>
<pre><code>df.unit = df.unit.astype(str)
df1 = df.groupby('category')['unit'].apply(', '.join).reset_index()
print (df1)
category unit
0 A01 97337, 97333, 97334, 97343, 26223, 26226, 2272...
1 P76 97343, 26223, 27399, 27277, 27234, 27297, 2729...
2 P98 97337
</code></pre>
|
pandas
| 4
|
7,937
| 69,875,550
|
Python Flatten Deep Nested JSON
|
<p>I have the following JSON structure:</p>
<pre><code>{
"comments_v2": [
{
"timestamp": 1196272984,
"data": [
{
"comment": {
"timestamp": 1196272984,
"comment": "OSI Beach Party Weekend, CA",
"author": "xxxx"
}
}
],
"title": "xxxx commented on his own photo."
},
{
"timestamp": 1232918783,
"data": [
{
"comment": {
"timestamp": 1232918783,
"comment": "We'll see about that.",
"author": "xxxx"
}
}
]
}
]
}
</code></pre>
<p>I'm trying to flatten this JSON into a pandas dataframe and here is my solution:</p>
<pre><code># Read file
df = pd.read_json(codecs.open(infile, "r", "utf-8-sig"))
# Normalize
df = pd.json_normalize(df["comments_v2"])
child_column = pd.json_normalize(df["data"])
child_column = pd.concat([child_column.drop([0], axis=1), child_column[0].apply(pd.Series)], axis=1)
df_merge = df.join(child_column)
df_merge.drop(["data"], axis=1, inplace=True)
</code></pre>
<p>The resulting dataframe is as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">timestamp</th>
<th style="text-align: left;">title</th>
<th style="text-align: left;">comment.timestamp</th>
<th style="text-align: left;">comment.comment</th>
<th style="text-align: left;">comment.author</th>
<th style="text-align: left;">comment.group</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1196272984</td>
<td style="text-align: left;">xxxx commented on his own photo</td>
<td style="text-align: left;">1196272984</td>
<td style="text-align: left;">OSI Beach Party Weekend, CA</td>
<td style="text-align: left;">XXXXX</td>
<td style="text-align: left;">NaN</td>
</tr>
</tbody>
</table>
</div>
<p>Is there a simpler way to flatten the JSON to obtain the result shown above?</p>
<p>Thank you!</p>
|
<p>Use <code>record_path='data'</code> as argument of <code>pd.json_normalize</code>:</p>
<pre><code>import json
import codecs
with codecs.open(infile, 'r', 'utf-8-sig') as jsonfile:
data = json.load(jsonfile)
df = pd.json_normalize(data['comments_v2'], 'data')
</code></pre>
<p>Output:</p>
<pre><code>>>> df
comment.timestamp comment.comment comment.author
0 1196272984 OSI Beach Party Weekend, CA xxxx
1 1232918783 We'll see about that. xxxx
</code></pre>
|
python|json|pandas|json-flattener
| 1
|
7,938
| 69,997,327
|
Tensorflow: ValueError: Input 0 is incompatible with layer model: expected shape=(None, 99), found shape=(None, 3)
|
<p>I am trying to predict with an ANN classification model made in TensorFlow to classify pose keypoints from MediaPipe. The MediaPipe pose tracker has 33 keypoints, each with x, y and z coordinates, for a total of 99 data points.</p>
<p>I am training for 4 classes.</p>
<p>This is the code that builds the pose embedding:</p>
<pre><code>import mediapipe as mp
import numpy as np
import tensorflow as tf
from tensorflow import keras
mp_pose = mp.solutions.pose
def get_center_point(landmarks, left_bodypart, right_bodypart):
"""Calculates the center point of the two given landmarks."""
left = tf.gather(landmarks, left_bodypart.value, axis=1)
right = tf.gather(landmarks, right_bodypart.value, axis=1)
center = left * 0.5 + right * 0.5
return center
def get_pose_size(landmarks, torso_size_multiplier=2.5):
"""Calculates pose size.
It is the maximum of two values:
* Torso size multiplied by `torso_size_multiplier`
* Maximum distance from pose center to any pose landmark
"""
# Hips center
hips_center = get_center_point(landmarks, mp_pose.PoseLandmark.LEFT_HIP,
mp_pose.PoseLandmark.RIGHT_HIP)
# Shoulders center
shoulders_center = get_center_point(landmarks,mp_pose.PoseLandmark.LEFT_SHOULDER,
mp_pose.PoseLandmark.RIGHT_SHOULDER)
# Torso size as the minimum body size
torso_size = tf.linalg.norm(shoulders_center - hips_center)
# Pose center
pose_center_new = get_center_point(landmarks,mp_pose.PoseLandmark.LEFT_HIP,
mp_pose.PoseLandmark.RIGHT_HIP)
pose_center_new = tf.expand_dims(pose_center_new, axis=1)
# Broadcast the pose center to the same size as the landmark vector to
# perform substraction
pose_center_new = tf.broadcast_to(pose_center_new,
[tf.size(landmarks) // (33*3), 33, 3])
# Dist to pose center
d = tf.gather(landmarks - pose_center_new, 0, axis=0,
name="dist_to_pose_center")
# Max dist to pose center
max_dist = tf.reduce_max(tf.linalg.norm(d, axis=0))
# Normalize scale
pose_size = tf.maximum(torso_size * torso_size_multiplier, max_dist)
return pose_size
def normalize_pose_landmarks(landmarks):
"""Normalizes the landmarks translation by moving the pose center to (0,0) and
scaling it to a constant pose size.
"""
# Move landmarks so that the pose center becomes (0,0)
pose_center = get_center_point(landmarks, mp_pose.PoseLandmark.LEFT_HIP,
mp_pose.PoseLandmark.RIGHT_HIP)
pose_center = tf.expand_dims(pose_center, axis=1)
# Broadcast the pose center to the same size as the landmark vector to perform
# substraction
pose_center = tf.broadcast_to(pose_center,
[tf.size(landmarks) // (33*3), 33, 3])
landmarks = landmarks - pose_center
# Scale the landmarks to a constant pose size
pose_size = get_pose_size(landmarks)
landmarks /= pose_size
return landmarks
def landmarks_to_embedding(landmarks_and_scores):
"""Converts the input landmarks into a pose embedding."""
# Reshape the flat input into a matrix with shape=(33, 3)
reshaped_inputs = keras.layers.Reshape((33, 3))(landmarks_and_scores)
# Normalize landmarks 3D
landmarks = normalize_pose_landmarks(reshaped_inputs[:, :, :3])
# Flatten the normalized landmark coordinates into a vector
embedding = keras.layers.Flatten()(landmarks)
return embedding
</code></pre>
<p>Then I create the model and feed the embedding inputs to it</p>
<pre><code>import csv
import cv2
import itertools
import numpy as np
import pandas as pd
import os
import sys
import tempfile
import tqdm
import mediapipe as mp
from matplotlib import pyplot as plt
from matplotlib.collections import LineCollection
import tensorflow as tf
from tensorflow import keras
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from poseEmbedding import get_center_point, get_pose_size, normalize_pose_landmarks, landmarks_to_embedding
def load_pose_landmarks(csv_path):
#load CSV file
dataframe = pd.read_csv(csv_path)
df_to_process = dataframe.copy()
#extract the list of class names
classes = df_to_process.pop('class_name').unique()
#extract the labels
y = df_to_process.pop('class_no')
#convert the input features and labels into float64 format for training
X = df_to_process.astype('float64')
y = keras.utils.to_categorical(y)
return X,y, classes, dataframe
csvs_out_train_path = 'train_data.csv'
csvs_out_test_path = 'test_data.csv'
#Load training data
X, y, class_names, _ = load_pose_landmarks(csvs_out_train_path)
#split training data(X,y) into (X_train, y_train) and (X_val, y_val)
X_train, X_val, y_train, y_val = train_test_split(X,y, test_size=0.15)
X_test, y_test, _, df_test = load_pose_landmarks(csvs_out_test_path)
mp_pose = mp.solutions.pose
inputs = tf.keras.Input(shape=(99))
embedding = landmarks_to_embedding(inputs)
layer = keras.layers.Dense(128, activation=tf.nn.relu6)(embedding)
layer = keras.layers.Dropout(0.5)(layer)
layer = keras.layers.Dense(64, activation=tf.nn.relu6)(layer)
layer = keras.layers.Dropout(0.5)(layer)
outputs = keras.layers.Dense(4, activation="softmax")(layer)
model = keras.Model(inputs, outputs)
#model.summary()
model.compile(
optimizer = 'adam',
loss = 'categorical_crossentropy',
metrics=['accuracy']
)
# Start training
history = model.fit(X_train, y_train,
epochs=200,
batch_size=16,
validation_data=(X_val, y_val))
model.save("complete_epoch_model")
# Visualize the training history to see whether you're overfitting.
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('Model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['TRAIN', 'VAL'], loc='lower right')
plt.show()
loss, accuracy = model.evaluate(X_test, y_test)
</code></pre>
<p>The model summary prints this out:</p>
<pre><code> Layer (type) Output Shape Param # Connected to
==================================================================================================
input_18 (InputLayer) [(None, 99)] 0 []
reshape_17 (Reshape) (None, 33, 3) 0 ['input_18[0][0]']
tf.__operators__.getitem_10 (S (None, 33, 3) 0 ['reshape_17[0][0]']
licingOpLambda)
tf.compat.v1.gather_69 (TFOpLa (None, 3) 0 ['tf.__operators__.getitem_10[0][
mbda) 0]']
tf.compat.v1.gather_70 (TFOpLa (None, 3) 0 ['tf.__operators__.getitem_10[0][
mbda) 0]']
tf.math.multiply_69 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_69[0][0]']
a)
tf.math.multiply_70 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_70[0][0]']
a)
tf.__operators__.add_31 (TFOpL (None, 3) 0 ['tf.math.multiply_69[0][0]',
ambda) 'tf.math.multiply_70[0][0]']
tf.compat.v1.size_17 (TFOpLamb () 0 ['tf.__operators__.getitem_10[0][
da) 0]']
tf.expand_dims_17 (TFOpLambda) (None, 1, 3) 0 ['tf.__operators__.add_31[0][0]']
tf.compat.v1.floor_div_17 (TFO () 0 ['tf.compat.v1.size_17[0][0]']
pLambda)
tf.broadcast_to_17 (TFOpLambda (None, 33, 3) 0 ['tf.expand_dims_17[0][0]',
) 'tf.compat.v1.floor_div_17[0][0]
']
tf.math.subtract_23 (TFOpLambd (None, 33, 3) 0 ['tf.__operators__.getitem_10[0][
a) 0]',
'tf.broadcast_to_17[0][0]']
tf.compat.v1.gather_75 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.compat.v1.gather_76 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.math.multiply_75 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_75[0][0]']
a)
tf.math.multiply_76 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_76[0][0]']
a)
tf.__operators__.add_34 (TFOpL (None, 3) 0 ['tf.math.multiply_75[0][0]',
ambda) 'tf.math.multiply_76[0][0]']
tf.compat.v1.size_18 (TFOpLamb () 0 ['tf.math.subtract_23[0][0]']
da)
tf.compat.v1.gather_73 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.compat.v1.gather_74 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.compat.v1.gather_71 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.compat.v1.gather_72 (TFOpLa (None, 3) 0 ['tf.math.subtract_23[0][0]']
mbda)
tf.expand_dims_18 (TFOpLambda) (None, 1, 3) 0 ['tf.__operators__.add_34[0][0]']
tf.compat.v1.floor_div_18 (TFO () 0 ['tf.compat.v1.size_18[0][0]']
pLambda)
tf.math.multiply_73 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_73[0][0]']
a)
tf.math.multiply_74 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_74[0][0]']
a)
tf.math.multiply_71 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_71[0][0]']
a)
tf.math.multiply_72 (TFOpLambd (None, 3) 0 ['tf.compat.v1.gather_72[0][0]']
a)
tf.broadcast_to_18 (TFOpLambda (None, 33, 3) 0 ['tf.expand_dims_18[0][0]',
) 'tf.compat.v1.floor_div_18[0][0]
']
tf.__operators__.add_33 (TFOpL (None, 3) 0 ['tf.math.multiply_73[0][0]',
ambda) 'tf.math.multiply_74[0][0]']
tf.__operators__.add_32 (TFOpL (None, 3) 0 ['tf.math.multiply_71[0][0]',
ambda) 'tf.math.multiply_72[0][0]']
tf.math.subtract_25 (TFOpLambd (None, 33, 3) 0 ['tf.math.subtract_23[0][0]',
a) 'tf.broadcast_to_18[0][0]']
tf.math.subtract_24 (TFOpLambd (None, 3) 0 ['tf.__operators__.add_33[0][0]',
a) 'tf.__operators__.add_32[0][0]']
tf.compat.v1.gather_77 (TFOpLa (33, 3) 0 ['tf.math.subtract_25[0][0]']
mbda)
tf.compat.v1.norm_14 (TFOpLamb () 0 ['tf.math.subtract_24[0][0]']
da)
tf.compat.v1.norm_15 (TFOpLamb (3,) 0 ['tf.compat.v1.gather_77[0][0]']
da)
tf.math.multiply_77 (TFOpLambd () 0 ['tf.compat.v1.norm_14[0][0]']
a)
tf.math.reduce_max_7 (TFOpLamb () 0 ['tf.compat.v1.norm_15[0][0]']
da)
tf.math.maximum_7 (TFOpLambda) () 0 ['tf.math.multiply_77[0][0]',
'tf.math.reduce_max_7[0][0]']
tf.math.truediv_7 (TFOpLambda) (None, 33, 3) 0 ['tf.math.subtract_23[0][0]',
'tf.math.maximum_7[0][0]']
flatten_7 (Flatten) (None, 99) 0 ['tf.math.truediv_7[0][0]']
dense_21 (Dense) (None, 128) 12800 ['flatten_7[0][0]']
dropout_14 (Dropout) (None, 128) 0 ['dense_21[0][0]']
dense_22 (Dense) (None, 64) 8256 ['dropout_14[0][0]']
dropout_15 (Dropout) (None, 64) 0 ['dense_22[0][0]']
dense_23 (Dense) (None, 4) 260 ['dropout_15[0][0]']
==================================================================================================
Total params: 21,316
Trainable params: 21,316
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>Now when I try to run inference on my webcam, I get the following error from mediapipe and Tensorflow:</p>
<pre><code>ValueError: Input 0 is incompatible with layer model: expected shape=(None, 99), found shape=(None, 3)
</code></pre>
<p>I am not sure how to fix this error, since I could only train with an input shape of 99; TF gave me errors when I tried to compile with a shape of 3. How do I fix this?</p>
<p>This is my inference code:</p>
<pre><code>import cv2
import os
import tqdm
import numpy as np
import logging
from mediapipe.python.solutions import pose as mp_pose
from mediapipe.python.solutions import drawing_utils as mp_drawing
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.utils import CustomObjectScope
def relu6(x):
return K.relu(x, max_value=6)
logging.getLogger().setLevel(logging.CRITICAL)
cap = cv2.VideoCapture(0)
model = tf.keras.models.load_model('weights_best.hdf5', compile = True,
custom_objects = {"relu6": relu6})
with mp_pose.Pose() as pose_tracker:
while cap.isOpened():
# Get next frame of the video.
ret, frame = cap.read()
# Run pose tracker.
imagefirst = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
image = cv2.flip(imagefirst,1)
result = pose_tracker.process(image)
pose_landmarks = result.pose_landmarks
# Draw pose prediction.
if pose_landmarks is not None:
mp_drawing.draw_landmarks(
image,
landmark_list=pose_landmarks,
connections=mp_pose.POSE_CONNECTIONS)
if pose_landmarks is not None:
# Get landmarks.
frame_height, frame_width = frame.shape[0], frame.shape[1]
pose_landmarks = np.array([[lmk.x * frame_width, lmk.y * frame_height, lmk.z * frame_width]
for lmk in pose_landmarks.landmark], dtype=np.float32)
assert pose_landmarks.shape == (33, 3), 'Unexpected landmarks shape: {}'.format(pose_landmarks.shape)
prediction = model.predict(pose_landmarks)
# Save the output frame.
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
cv2.imshow('Raw Webcam Feed', image)
if cv2.waitKey(10) & 0xFF == ord('q'):
break
# Close output video.
cap.release()
cv2.destroyAllWindows()
# Release MediaPipe resources.
pose_tracker.close()
</code></pre>
|
<p>Maybe try changing the shape of <code>pose_landmarks</code> from <code>(33, 3)</code> to <code>(1, 99)</code> after your assertion and before you make a prediction:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
pose_landmarks = tf.random.normal((33, 3))
assert pose_landmarks.shape == (33, 3), 'Unexpected landmarks shape: {}'.format(pose_landmarks.shape)
pose_landmarks = tf.expand_dims(pose_landmarks, axis=0)
shape = tf.shape(pose_landmarks)
pose_landmarks = tf.reshape(pose_landmarks, (shape[0], shape[1] * shape[2]))
tf.print(pose_landmarks.shape)
</code></pre>
<pre><code>TensorShape([1, 99])
</code></pre>
|
python|tensorflow|opencv|keras|neural-network
| 1
|
7,939
| 43,391,009
|
Pandas DataFrame to Seaborn
|
<p>I'm attempting to draw a seaborn heatmap using a pandas DataFrame.
My data format is as below:</p>
<pre><code>visit_table
yyyymm visit_cnt
0 201101 91252
1 201102 140571
2 201103 141457
3 201104 147680
4 201105 154066
...
68 201609 591242
69 201610 650174
70 201611 507579
71 201612 465218
</code></pre>
<p>How can I change the DataFrame into the seaborn-friendly format shown below?</p>
<pre><code> 2011 2012 2013 2015
1 91252
2 14057
3 147680
4 154066
...
11 123455
12 1234456
</code></pre>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> for converting the column <code>yyyymm</code> and then create new <code>Series</code> (columns) with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.month.html" rel="nofollow noreferrer"><code>dt.month</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.year.html" rel="nofollow noreferrer"><code>dt.year</code></a>. Last reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html" rel="nofollow noreferrer"><code>pivot</code></a> and replace <code>NaN</code> to <code>0</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html" rel="nofollow noreferrer"><code>fillna</code></a> if necessary.</p>
<pre><code>df['yyyymm'] = pd.to_datetime(df['yyyymm'], format='%Y%m')
df1 = pd.pivot(index=df['yyyymm'].dt.month,
               columns=df['yyyymm'].dt.year,
               values=df.visit_cnt).fillna(0)
print (df1)
yyyymm 2011 2016
yyyymm
1 91252.0 0.0
2 140571.0 0.0
3 141457.0 0.0
4 147680.0 0.0
5 154066.0 0.0
9 0.0 591242.0
10 0.0 650174.0
11 0.0 507579.0
12 0.0 465218.0
</code></pre>
<p>Another solution is similar, only reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a>:</p>
<pre><code>df['yyyymm'] = pd.to_datetime(df['yyyymm'], format='%Y%m')
df['year'] = df['yyyymm'].dt.year
df['month'] = df['yyyymm'].dt.month
df1 = df.set_index(['month','year'])['visit_cnt'].unstack(fill_value=0)
print (df1)
year 2011 2016
month
1 91252 0
2 140571 0
3 141457 0
4 147680 0
5 154066 0
9 0 591242
10 0 650174
11 0 507579
12 0 465218
</code></pre>
<p>Finally, use <a href="http://seaborn.pydata.org/generated/seaborn.heatmap.html" rel="nofollow noreferrer"><code>seaborn.heatmap</code></a>:</p>
<pre><code>import seaborn as sns
ax = sns.heatmap(df1)
</code></pre>
<p><a href="https://i.stack.imgur.com/8RxCY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8RxCY.png" alt="graph"></a></p>
|
pandas|dataframe|seaborn
| 2
|
7,940
| 43,402,017
|
Tensorflow LSTM - Matrix multiplication on LSTM cell
|
<p>I'm making a LSTM neural network in Tensorflow.</p>
<p>The input tensor size is 92.</p>
<pre><code>import tensorflow as tf
from tensorflow.contrib import rnn
import data
test_x, train_x, test_y, train_y = data.get()
# Parameters
learning_rate = 0.001
epochs = 100
batch_size = 64
display_step = 10
# Network Parameters
n_input = 28 # input size
n_hidden = 128 # number of hidden layers
n_classes = 20 # output size
# Placeholders
x = tf.placeholder(dtype=tf.float32, shape=[None, n_input])
y = tf.placeholder(dtype=tf.float32, shape=[None, n_classes])
# Network
def LSTM(x):
W = tf.Variable(tf.random_normal([n_hidden, n_classes]), dtype=tf.float32) # weights
b = tf.Variable(tf.random_normal([n_classes]), dtype=tf.float32) # biases
x_shape = 92
x = tf.transpose(x)
x = tf.reshape(x, [-1, n_input])
x = tf.split(x, x_shape)
lstm = rnn.BasicLSTMCell(
num_units=n_hidden,
forget_bias=1.0
)
outputs, states = rnn.static_rnn(
cell=lstm,
inputs=x,
dtype=tf.float32
)
output = tf.matmul( outputs[-1], W ) + b
return output
# Train Network
def train(x):
prediction = LSTM(x)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
output = sess.run(prediction, feed_dict={"x": train_x})
print(output)
train(x)
</code></pre>
<p>I'm not getting any errors, but I'm feeding an input tensor of size 92, and the matrix multiplication in the LSTM function returns a list containing one result vector, when the desired amount is 92, one result vector per input. </p>
<p>Is the problem that I'm matrix multiplying only the last item in the outputs array? Like this:</p>
<pre><code>output = tf.matmul( outputs[-1], W ) + b
</code></pre>
<p>instead of:</p>
<pre><code>output = tf.matmul( outputs, W ) + b
</code></pre>
<p>This is the error I get when I do the latter:</p>
<pre><code>ValueError: Shape must be rank 2 but is rank 3 for 'MatMul' (op: 'MatMul') with input shapes: [92,?,128], [128,20].
</code></pre>
|
<p><code>static_rnn</code> is for building the simplest recurrent neural net. <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/static_rnn" rel="nofollow noreferrer">Here's the tf documentation</a>. The input to it should be a sequence of tensors. Let's say you want to input 4 words called &quot;Hi&quot;, &quot;how&quot;, &quot;are&quot;, &quot;you&quot;: your input placeholder should then consist of four n-dimensional vectors (where n is the size of each input vector), one for each word.</p>
<p>I think there's something wrong with your placeholder. You should initialize it with the number of inputs to the RNN. 28 is the number of dimensions in each vector; I believe 92 is the length of the sequence (in other words, 92 LSTM cells unrolled over time).</p>
<p>In the output list you will get a set of vectors equal in number to the length of the sequence, each of a size equal to the number of hidden units.</p>
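<p>As a small sketch of that last point (not necessarily the exact fix for the code above): <code>outputs</code> returned by <code>rnn.static_rnn</code> is a Python list with one <code>[batch_size, n_hidden]</code> tensor per time step, so to get one result vector per input step you apply the projection to every element instead of only <code>outputs[-1]</code>:</p>
<pre><code># outputs: list of seq_len tensors, each of shape [batch_size, n_hidden]
outputs_per_step = [tf.matmul(o, W) + b for o in outputs]
# Stack into a single tensor of shape [seq_len, batch_size, n_classes]
logits = tf.stack(outputs_per_step, axis=0)
</code></pre>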
|
python|tensorflow|lstm
| 1
|
7,941
| 43,104,965
|
Does groupby automatically group over all non numeric columns in pandas?
|
<p>I have a sample of a dataset below (only showing the first couple rows but there are 193 rows):</p>
<pre><code>country,beer_servings,spirit_servings,wine_servings,total_litres_of_pure_alcohol,continent
Afghanistan,0,0,0,0.0,Asia
Albania,89,132,54,4.9,Europe
Algeria,25,0,14,0.7,Africa
Andorra,245,138,312,12.4,Europe
Angola,217,57,45,5.9,Africa
Antigua & Barbuda,102,128,45,4.9,North America
...
</code></pre>
<p>When I run this: <code>drinks.groupby('continent').head()</code></p>
<p>I get back a dataframe with 30 rows. But in those 30 rows I still have duplicate names for the <code>continent</code>. For example in the image below you can see that <code>Europe</code> is repeated two times (at rows 1 and 3):</p>
<p><a href="https://i.stack.imgur.com/VISVL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VISVL.png" alt="enter image description here"></a> </p>
<p>I am not able to understand why I still have multiple rows with the same continent when I grouped by continent originally.</p>
<p>In this case, is the <code>groupby</code> operation also grouping by <code>country</code> even though I never specified it in the <code>groupby</code> function? I know that in SQL you are supposed to use an aggregate function like <code>max</code>, <code>min</code>, <code>sum</code>, etc., but in this case I don't have to pass in an aggregate function and I still get the result above.</p>
|
<p>No!</p>
<p>What is happening is that <code>head</code> is a method on the <code>groupby</code> object and behaves a little differently than <code>pd.DataFrame.head</code>.</p>
<p>What the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.head.html" rel="nofollow noreferrer"><strong><code>groupby</code> version of <code>head</code></strong></a> does is return the head of each group.</p>
<p>We can see this more clearly by passing <code>1</code> to the <code>head</code> method and seeing it return the first row of each subset:</p>
<pre><code>df.groupby('continent').head(1)
</code></pre>
<p><a href="https://i.stack.imgur.com/JgJo0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JgJo0.png" alt="enter image description here"></a></p>
|
python|pandas
| 2
|
7,942
| 45,408,544
|
Zip in python does not work properly with lists
|
<p>Here is what I tried: </p>
<pre><code>>>> d
array([ 0.71428573, 0.69230771, 0.69999999], dtype=float32)
>>> f
[('name', 999), ('ddd', 33), ('mm', 112)]
>>> for n1,s1,normal in zip(d,f):
... print(n1,s1,normal)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: need more than 2 values to unpack
</code></pre>
<p>Then I tried this: </p>
<pre><code>>>> for (name,confidence),normal in zip(d,f):
... print(name,confidence,normal)
...
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'numpy.float32' object is not iterable
</code></pre>
<p>Where,</p>
<pre><code>d = ['Jonathan Walsh','Patrick Walsh','John Welsh']
array = np.array(d)
from pyxdameraulevenshtein import damerau_levenshtein_distance_ndarray, normalized_damerau_levenshtein_distance_ndarray
d = normalized_damerau_levenshtein_distance_ndarray('jwalsh', array)
</code></pre>
<p>Kindly let me know what I need to do to print the values simultaneously. I am using Python 2.7.13 on Windows 10.</p>
|
<p><code>f</code> is a nested list, hence to unpack its items into individual variables you need to do:</p>
<pre><code>>>> for n1, (s1, normal) in zip(d, f):
... print(n1, s1, normal)
...
(0.71428573, 'name', 999)
(0.69230771, 'ddd', 33)
(0.69999999, 'mm', 112)
</code></pre>
<p>This is basically equivalent to:</p>
<pre><code>>>> a, (b, c) = [1, (2, 3)]
>>> a, b, c
(1, 2, 3)
</code></pre>
<p>This, however, will fail, because <code>a</code> can be assigned to <code>1</code>, but for <code>b</code> and <code>c</code> there's only one item left, and Python complains that it needs one more item in the list on the RHS, or the same nested structure on the LHS.</p>
<pre><code>>>> a, b, c = [1, (2, 3)]
Traceback (most recent call last):
File "<ipython-input-9-c8a9ecc8f325>", line 1, in <module>
a, b, c = [1, (2, 3)]
ValueError: need more than 2 values to unpack
</code></pre>
<p>From <a href="https://docs.python.org/2/reference/simple_stmts.html#assignment-statements" rel="nofollow noreferrer">docs:</a></p>
<blockquote>
<p>If the target list is a comma-separated list of targets: The object
must be an iterable with the same number of items as there are targets
in the target list, and the items are assigned, from left to right, to
the corresponding targets.</p>
</blockquote>
|
python|python-2.7|list|numpy|zip
| 8
|
7,943
| 62,844,737
|
How to calculate monthly annual average from daily dataframe and plot it by abbreviated month
|
<p>I have daily values of precipitation and temperature for a period of several years. I would like to compute the average of the precipitation and temperature for each month of the year (January to December). For precipitation I first need to calculate the summation of daily precipitation for each month, and then compute the average for the same month for all the years of data. For temperature I need to average the monthly averages of the values (so an average of all the data for all the months gives the exact same result). Once this is done I need to plot both sets of data (precipitation and temperature) using abbreviated months.</p>
<p>I cannot find a way to compute the precipitation values so that I obtain the sum for each month and then average it over all years. Furthermore, I am having trouble displaying the x-axis labels as abbreviated month names.</p>
<p>This is what I have tried so far (unsuccessfully):</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
example = [['01.10.1965 00:00', 13.88099957, 5.375],
['02.10.1965 00:00', 5.802999973, 3.154999971],
['03.10.1965 00:00', 9.605699539, 0.564999998],
['14.10.1965 00:00', 0.410299987, 1.11500001],
['31.10.1965 00:00', 6.184500217, -0.935000002],
['01.11.1965 00:00', 0.347299993, -5.235000134],
['02.11.1965 00:00', 0.158299997, -8.244999886],
['03.11.1965 00:00', 1.626199961, -3.980000019],
['24.10.1966 00:00', 0, 3.88499999],
['25.10.1966 00:00', 0.055100001, 1.279999971],
['30.10.1966 00:00', 0.25940001, -5.554999828]]
names = ["date","Pobs","Tobs"]
data = pd.DataFrame(example, columns=names)
data['date'] = pd.to_datetime(data['date'], format='%d.%m.%Y %H:%M')
#I think the average of temperature is well computed but the precipitation would give the complete summation for all years!
tempT = data.groupby([data['date'].dt.month_name()], sort=False).mean().eval('Tobs')
tempP = data.groupby([data['date'].dt.month_name()], sort=False).sum().eval('Pobs')
fig = plt.figure(); ax1 = fig.add_subplot(1,1,1); ax2 = ax1.twinx();
ax1.bar(tempP.index.tolist(), tempP.values, color='blue')
ax2.plot(tempT.index.tolist(), tempT.values, color='red')
ax1.set_ylabel('Precipitation [mm]', fontsize=10)
ax2.set_ylabel('Temperature [°C]', fontsize=10)
#ax1.xaxis.set_major_formatter(DateFormatter("%b")) #this line does not work properly!
plt.show()
</code></pre>
|
<p>Here's working code for your problem:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import matplotlib.dates as mdates
example = [['01.10.1965 00:00',13.88099957,5.375], ...]
names = ["date","Pobs","Tobs"]
data = pd.DataFrame(example, columns=names)
data['date'] = pd.to_datetime(data['date'], format='%d.%m.%Y %H:%M')
# Temperature:
tempT = data.groupby([data['date'].dt.month_name()], sort=False).mean().eval('Tobs')
# Precipitation:
df_sum = data.groupby([data['date'].dt.month_name(), data['date'].dt.year], sort=False).sum() # get sum for each individual month
df_sum.index.rename(['month','year'], inplace=True) # just renaming the index
df_sum.reset_index(level=0, inplace=True) # make the month-index to a column
tempP = df_sum.groupby([df_sum['month']], sort=False).mean().eval('Pobs') # get mean over all years
fig = plt.figure();
ax1 = fig.add_subplot(1,1,1);
ax2 = ax1.twinx();
xticks = pd.to_datetime(tempP.index.tolist(), format='%B').sort_values() # must work for both axes
ax1.bar(xticks, tempP.values, color='blue')
ax2.plot(xticks, tempT.values, color='red')
plt.xticks(pd.to_datetime(tempP.index.tolist(), format='%B').sort_values()) # to show all ticks
ax1.xaxis.set_major_formatter(mdates.DateFormatter("%b")) # must be called after plotting both axes
ax1.set_ylabel('Precipitation [mm]', fontsize=10)
ax2.set_ylabel('Temperature [°C]', fontsize=10)
plt.show()
</code></pre>
<p>Explanation:
<a href="https://stackoverflow.com/a/33746112/3692004">As of this StackOverflow answer, DateFormatter uses mdates.</a>
For this to work, <strong>you need to make a DatetimeIndex-Array</strong> from the month names, which the DateFormatter can then re-format.</p>
<p>As for the calculation, I understood the solution to your problem to be that we take the <strong>sum within each individual month and then take the average of these sums over all years</strong>. This leaves you with the average total precipitation per month over all years.</p>
|
python|pandas|matplotlib|pandas-groupby
| 1
|
7,944
| 62,572,369
|
How to calculate mean in a particular subset and replace the value
|
<p>csv table :</p>
<p><img src="https://i.stack.imgur.com/GkgrS.png" alt="csv table" /></p>
<p>So I have a csv file that has different columns like nodeVolt, Temperature1, temperature2, temperature3, pressure and luminosity. In the temperature columns there are various cells where the value is wrong (i.e. 220). I want to replace such a value by the mean of the previous 10 cells in that column. I want this to run dynamically, finding all the cells with value 220 in that particular column and replacing each with the mean of the previous 10 values in the same column.</p>
<p>I was able to find the cells containing 220 in that particular column, but I am unable to take the mean and replace them.</p>
<pre><code>import pandas as pd
import numpy as np
data = pd.read_csv(r"108e.csv")
data = data.drop(['timeStamp','nodeRSSI','packetID', 'solarPanelVolt', 'solarPanelBattVolt',
'solarPanelCurr','temperature2','nodeVolt','nodeAddress'], axis = 1)
df = pd.DataFrame(data)
df1 = df.loc[lambda df: df['temperature3'] == 220]
print(df1)
for i in df1:
df1["temperature3"][i] == df["temperature3"][i-11:i-1, 'temperature3'].mean()
</code></pre>
|
<p>Here you go:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"something": 3.37,
"temperature3": [
31.94,
31.93,
31.85,
31.91,
31.92,
31.89,
31.9,
31.94,
32.06,
32.16,
32.3,
220,
32.1,
32.5,
32.2,
32.3,
],
}
)
# replace all 220 values by NaN
df["temperature3"] = df["temperature3"].replace({220: np.nan})
# fill all NaNs with an shifted rolling average of the last 10 rows
df["temperature3"] = df["temperature3"].fillna(
df["temperature3"].rolling(10, min_periods=1).mean().shift(1)
)
</code></pre>
<p>Result:</p>
<pre><code> something temperature3
0 3.37 31.940
1 3.37 31.930
2 3.37 31.850
3 3.37 31.910
4 3.37 31.920
5 3.37 31.890
6 3.37 31.900
7 3.37 31.940
8 3.37 32.060
9 3.37 32.160
10 3.37 32.300
11 3.37 31.986
12 3.37 32.100
13 3.37 32.500
14 3.37 32.200
15 3.37 32.300
</code></pre>
<p>(please provide next time some sample data as code, not as an image)</p>
|
python|pandas|dataframe|csv|numpy-ndarray
| 1
|
7,945
| 54,488,667
|
.py file not running properly in py.exe, works in eclipse
|
<p>Hi, I have a simple Python script that uses an ODBC driver to connect to a database, fetch a dataframe, and store/overwrite an Excel file with it. When I run the script using Eclipse, it works just fine. However, when I run it by right-clicking the .py file and opening it with py.exe, the Excel file is not being overwritten/saved.</p>
<p>Ultimately, I want other users to be able to install Python and just double-click a .py script to update an Excel file. Does anyone know why it is not working with the right-click method? As far as I checked, both methods should be using the same interpreter.</p>
<pre><code>import pyodbc
import pandas as pd
cnxn = pyodbc.connect('Driver={ODBC Driver (x64)};'
'DSN=MyDSN;'
'Server=ServerAddress;'
'Database=Stuff;')
t1 = "table1"
sql = ("select * " + "from " + t1)
writer = pd.ExcelWriter("MyExcelFile.xlsx")
dframe = pd.read_sql(sql,cnxn)
aggDf = dframe.groupby(['DEPARTMENT']).sum()
dframe.to_excel(writer,"RawSalesData", index = False)
aggDf.to_excel(writer, "SalesStats")
writer.save()
writer.close()
</code></pre>
<hr>
<p>Below are the results of running the sys code suggested by Jacob in the comments. It seems like both methods match.</p>
<pre><code>sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0)
[
'C:\\Users\\persona\\PythonWorkSpace\\TestPython',
'C:\\Users\\persona\\anaconda3',
'C:\\Users\\persona\\anaconda3\\DLLs',
'C:\\Users\\persona\\anaconda3\\libs',
'C:\\Users\\persona\\anaconda3\\pkgs',
'C:\\Users\\persona\\anaconda3\\conda-meta',
'C:\\Users\\persona\\anaconda3\\envs',
'C:\\Users\\persona\\anaconda3\\etc',
'C:\\Users\\persona\\anaconda3\\include',
'C:\\Users\\persona\\anaconda3\\Lib',
'C:\\Users\\persona\\anaconda3\\Library',
'C:\\Users\\persona\\anaconda3\\man',
'C:\\Users\\persona\\anaconda3\\Menu',
'C:\\Users\\persona\\anaconda3\\Scripts',
'C:\\Users\\persona\\anaconda3\\share',
'C:\\Users\\persona\\anaconda3\\sip',
'C:\\Users\\persona\\anaconda3\\tcl',
'C:\\Users\\persona\\anaconda3\\Tools',
'C:\\WINDOWS\\system32',
'C:\\Users\\persona\\Anaconda3\\python36.zip',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\win32',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\win32\\lib',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\Pythonwin'
]
^ right click method
--------------------
sys.version_info(major=3, minor=6, micro=5, releaselevel='final', serial=0)
[
'C:\\Users\\persona\\PythonWorkSpace\\TestPython',
'C:\\Users\\persona\\PythonWorkSpace\\TestPython',
'C:\\Users\\persona\\anaconda3',
'C:\\Users\\persona\\anaconda3\\DLLs',
'C:\\Users\\persona\\anaconda3\\libs',
'C:\\Users\\persona\\anaconda3\\pkgs',
'C:\\Users\\persona\\anaconda3\\conda-meta',
'C:\\Users\\persona\\anaconda3\\envs',
'C:\\Users\\persona\\anaconda3\\etc',
'C:\\Users\\persona\\anaconda3\\include',
'C:\\Users\\persona\\anaconda3\\Lib',
'C:\\Users\\persona\\anaconda3\\Library',
'C:\\Users\\persona\\anaconda3\\man',
'C:\\Users\\persona\\anaconda3\\Menu',
'C:\\Users\\persona\\anaconda3\\Scripts',
'C:\\Users\\persona\\anaconda3\\share',
'C:\\Users\\persona\\anaconda3\\sip',
'C:\\Users\\persona\\anaconda3\\tcl',
'C:\\Users\\persona\\anaconda3\\Tools',
'C:\\Program Files\\eclipse',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\win32',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\win32\\lib',
'C:\\Users\\persona\\Anaconda3\\lib\\site-packages\\Pythonwin',
'C:\\Users\\persona\\Anaconda3\\python36.zip'
]
</code></pre>
|
<p>Make a python file like this:</p>
<pre><code>from sys import version_info, path
print(version_info)
print(path)
</code></pre>
<p>Then try opening with both methods you mentioned. The results should match, and if they don't then there is your issue. If they do match, you should add more logging/exception handling so we can see why this fails.</p>
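<p>For example, a minimal way to surface errors when the script is launched by double-click is to wrap the script body in a function (here called <code>main()</code>, a name introduced only for this sketch — adjust to your own layout) and keep the console window open on failure:</p>
<pre class="lang-py prettyprint-override"><code>import traceback

def main():
    # your existing pyodbc / pandas / ExcelWriter logic goes here
    ...

if __name__ == "__main__":
    try:
        main()
    except Exception:
        traceback.print_exc()   # show the full traceback instead of silently closing
    finally:
        input("Press Enter to close...")  # keep the console window open so the output stays visible
</code></pre>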
|
python|excel|database|pandas|pyodbc
| 0
|
7,946
| 73,811,474
|
Numpy: Is there any simple way to solve an equation of the form Ax = b such that some x's take fixed values
|
<p>So basically I want to solve Ax = b, but I want the value of x1 to always be equal to, say, 4.</p>
<p>For example, if A is 3x3 and x is 3x1, then the solution of the above equation should be of the form x = [4, x2, x3].</p>
|
<p>If x1 is always 4, then x1 is no longer an unknown: substitute x1 = 4 wherever it appears in the system and simplify the equations algebraically (by hand). You end up with a reduced system in which A is 2x2 and x is 2x1. A numpy sketch of the same idea is shown below.</p>
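<p>This is a minimal numerical sketch of that substitution; the matrix and right-hand side below are made up for illustration, and the reduced system is solved with least squares in case the remaining equations are overdetermined:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

# made-up 3x3 system whose exact solution is x = [4, 2, 3]
A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
b = np.array([17., 44., 74.])

x1 = 4.0
# move the known x1 contribution to the right-hand side
b_reduced = b - A[:, 0] * x1
# solve the remaining 3-equations / 2-unknowns system; least squares handles
# the extra equation, and if the system is consistent this is the exact answer
x_rest, *_ = np.linalg.lstsq(A[:, 1:], b_reduced, rcond=None)
x = np.concatenate(([x1], x_rest))
print(x)  # [4. 2. 3.]
</code></pre>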
|
numpy|linear-equation
| 0
|
7,947
| 73,682,132
|
Read and tabulate table-like data from website
|
<p>I want to tabulate and store into Pandas this linked data from the <a href="https://water.weather.gov/ahps2/crests.php?wfo=jan&gage=jacm6&crest_type=historic" rel="nofollow noreferrer">U.S. Weather Service</a>.</p>
<p>Here are the first four lines of the webpage.</p>
<blockquote>
<p>Historic Crests</p>
<p>(1) 43.28 ft on 04/17/1979</p>
<p>(2) 39.58 ft on 05/25/1983</p>
<p>(3) 36.67 ft on 02/17/2020</p>
</blockquote>
<p>You can access the data from an IDE or notebook using the following code.</p>
<pre class="lang-py prettyprint-override"><code>import bs4
import urllib.request
link = "https://water.weather.gov/ahps2/crests.php?wfo=jan&gage=jacm6&crest_type=historic"
webpage=str(urllib.request.urlopen(link).read())
soup = bs4.BeautifulSoup(webpage)
print(soup.get_text())
</code></pre>
<p>The data is already in a table-like structure, and I think you could tabulate it by parsing it into a dictionary of lists and then loading it into a Pandas DataFrame. However, I imagine that there is a simpler, more <em>pythonic</em> approach.</p>
<p>Here's a snippet of the desired table structure.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>No.</th>
<th>Crest</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>43.28</td>
<td>04/17/1979</td>
</tr>
<tr>
<td>2</td>
<td>39.58</td>
<td>05/25/1983</td>
</tr>
<tr>
<td>3</td>
<td>36.67</td>
<td>02/17/2020</td>
</tr>
</tbody>
</table>
</div>
|
<p>You could create a <em>regex</em> pattern and feed it to <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extractall.html" rel="nofollow noreferrer"><code>Series.str.extractall</code></a>. E.g.:</p>
<pre><code>import pandas as pd

tbl = soup.find('div', class_='water_information')
vals = tbl.get_text().split(r'\n')
df = pd.Series(vals).str.extractall(r'\((?P<No>\d+)\)\s(?P<Crest>\d+.\d+)\sft\son\s(?P<Date>\d{2}\/\d{2}\/\d{4})')\
.reset_index(drop=True)
print(df)
No Crest Date
0 1 43.28 04/17/1979
1 2 39.58 05/25/1983
2 3 36.67 02/17/2020
3 4 36.30 03/31/1902
4 5 36.30 12/05/1880
.. .. ... ...
86 87 30.30 04/20/1955
87 88 30.26 01/25/1983
88 89 30.26 04/07/1981
89 90 30.25 01/24/2020
90 91 30.23 12/23/1967
</code></pre>
<p>And maybe change the dtypes at this point:</p>
<pre><code>df['No'] = df.No.astype(int)
df['Crest'] = df.Crest.astype(float)
df['Date'] = pd.to_datetime(df.Date)
</code></pre>
|
python|html|pandas|dataframe
| 2
|
7,948
| 73,685,342
|
Efficient approach to append zeroes for product ids which are not present on a particular date
|
<p>I'm trying to append zeroes for product ids whose data is not present on a particular date, and my code takes a lot of time to do it.
I'm looking for an alternate approach in pandas/numpy.</p>
<p>Here is the sample data:</p>
<pre class="lang-none prettyprint-override"><code>rpt_date product_id total_views total_cart_adds total_order_unit total_gmv
30-07-2022 mp000000006243574 7 1 0 0
30-07-2022 mp000000006292285 1 0 0 0
30-07-2022 mp000000006294016 18 1 0 0
31-07-2022 mp000000006243574 8 2 0 0
31-07-2022 mp000000006292285 5 0 0 0
</code></pre>
<p>For example, if data for product id 'mp000000006294016' is not present on the 31st or any other day, then it should append 0 for the respective columns.</p>
<p>Below is my code</p>
<pre><code>df_ans=pd.DataFrame()
for x in prod_df['product_id']:
df2=pd.DataFrame()
print(prod_df[prod_df['product_id']==x])
df2 = prod_df[prod_df['product_id']==x]
if df2.shape[0] == 1:
formatted_date1 = time.strptime(prod_df['rpt_date'][0], "%d-%m-%Y")
formatted_date2 = time.strptime('30-07-2022', "%d-%m-%Y")
if formatted_date1==formatted_date2:
df2.loc[-1] = ['31-07-2022', x, '0', '0','0','0'] # adding a row
df2.index = df2.index + 1 # shifting index
df2 = df2.sort_index()
else:
df2.loc[-1] = ['30-07-2022', x, '0', '0','0','0'] # adding a row
df2.index = df2.index + 1 # shifting index
df2 = df2.sort_index()
print(df2)
df_ans= pd.concat([df_ans,df2])
print("***************************************")
</code></pre>
|
<p>You should almost certainly be solving this problem in the query that generated the data to begin with. But one way to work around it is to create a "blank" DataFrame with all the <code>rpt_date</code> and <code>product_id</code> combinations, then <code>concat</code> that to the original data and drop all the duplicate rows:</p>
<pre class="lang-py prettyprint-override"><code>df_prid = prod_df[['product_id']].drop_duplicates()
df_date = prod_df[['rpt_date']].drop_duplicates()
df_blank = df_date.merge(df_prid, how='cross')
df_blank[['total_views', 'total_cart_adds', 'total_order_unit', 'total_gmv']] = [0,0,0,0]
df_final = pd.concat([prod_df, df_blank]).drop_duplicates(subset=['rpt_date', 'product_id'], keep='first').reset_index()
</code></pre>
<p>Output:</p>
<pre><code> index rpt_date product_id total_views total_cart_adds total_order_unit total_gmv
0 0 30-07-2022 mp000000006243574 7 1 0 0
1 1 30-07-2022 mp000000006292285 1 0 0 0
2 2 30-07-2022 mp000000006294016 18 1 0 0
3 3 31-07-2022 mp000000006243574 8 2 0 0
4 4 31-07-2022 mp000000006292285 5 0 0 0
5 5 31-07-2022 mp000000006294016 0 0 0 0
</code></pre>
|
python-3.x|pandas|numpy
| 2
|
7,949
| 73,719,673
|
Skipping one specific excel tab from multiple tabs in multiple excel sheets (Pandas Python)
|
<p>I have a routine in place to convert my multiple excel files, with multiple tabs and multiple columns (<strong>some tabs are present in the excel sheets, some are not, but the column structuring inside all the tabs is the same for all the sheets</strong>) to a dictionary of dictionaries. I'm facing an issue while skipping one specific tab from some of the excel sheets. I know we define the name of the sheets which we want to include in the data structure in the <strong>sheet_name</strong> parameter in the <strong>read_excel</strong> function of <strong>pandas</strong>. But, the problem here is that I want to skip one specific tab (<strong>Sheet1</strong>) from all the excel sheets, and also, the tab names I'm defining other than that in the sheet_name parameter are not present in each of the excel sheets. Please let me know if there are any workarounds here. Thank you!!</p>
<pre><code>#Assigning the path to the folder variable
folder = r'specified_path'
#Changing the directory to the database directory
os.chdir(folder)
#Getting the list of files from the assigned path
files = os.listdir(folder)
#Joining the list of files to the assigned path
for archivedlist in files:
local_path = os.path.join(folder, archivedlist)
print("Joined Path: ", local_path)
#Reading the data from the files in the dictionary data structure
main_dict = {}
def readdataframe(files):
df_dict = {}
for element in files:
df_dict[element] = pd.read_excel(element, sheet_name = ["Sheet2", "Sheet3", "Sheet4",
"Sheet5", "Sheet6", "Sheet7",
"Sheet8"])
print(df_dict[element].keys)
return df_dict
print(readdataframe(files))
</code></pre>
<p>I want to skip sheet1 from all the excel files wherever it is present and want to extract the sheets[2-8] from all the excel files if they are present there. Also, a side note is that I could extract all the data from all the excel files when I was using <strong>sheet_name = None</strong>, but that is not the expected result.</p>
<p><strong>Lastly, all the tabs which are extracted from all the excel sheets should be a pandas data frame.</strong></p>
|
<p>I was able to resolve this query by creating two functions. The first function I created takes the input as the sheet name I want to skip/delete and the master dictionary (df_dict). Below is the code for the function:</p>
<pre><code>def delete_key(rm_key, df_dict):
'''This routine is used to delete any tab from a nested dictionary '''
#Checking for the tab name if it is present in the master dictionary. If yes, delete it directly from there
if rm_key in df_dict:
del df_dict[rm_key]
#Looping in the master dictionary to check for the tab name to be deleted
for val in df_dict.values():
if isinstance(val, dict):
df_dict = delete_key(rm_key, val) #Deleting the whole tab with its value from the master dictionary using a recursive routine
return df_dict
</code></pre>
<p>We need to call this function once we get our data structure from the routine mentioned in the question. The changes in that routine are as follows:</p>
<pre><code>folder = r'specified_path'
files = os.listdir(folder)
def readdataframe(files):
    '''This routine is used to read multiple excel files into a nested
    dictionary of data frames'''
    df_dict = {}
    for element in files:
        df_dict[element] = pd.read_excel(element, sheet_name = None)
        for num in df_dict[element]:
            df_dict[element][num] = pd.DataFrame.from_dict(df_dict[element][num])
            print("Filename: ", element, "Tab Name: ", num, "Type: ", type(df_dict[element][num]))
    return df_dict
</code></pre>
<p>When we execute both of these functions, we get the output as a dictionary of data frames which does not contain the sheet we want to skip.</p>
<p>Please follow these routines, and they will work. Let me know if you face any issues.</p>
<p>For simplicity, I have created three excel files with the same number of tabs inside them (Sheet1, Sheet2, Sheet3). The columns inside the tabs are also the same. Please check below the output. We get this output by running the readdataframe(files) function.</p>
<pre><code>Output:
Joined Path: specified_path\1.xlsx
Joined Path: specified_path\2.xlsx
Joined Path: specified_path\3.xlsx
Filename: 1.xlsx Tab Name: Sheet1 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 1.xlsx Tab Name: Sheet2 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 1.xlsx Tab Name: Sheet3 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 2.xlsx Tab Name: Sheet1 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 2.xlsx Tab Name: Sheet2 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 2.xlsx Tab Name: Sheet3 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 3.xlsx Tab Name: Sheet1 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 3.xlsx Tab Name: Sheet2 Type: <class 'pandas.core.frame.DataFrame'>
Filename: 3.xlsx Tab Name: Sheet3 Type: <class 'pandas.core.frame.DataFrame'>
{'1.xlsx': {'Sheet1': A B C D
0 1 1 1 2
1 2 2 4 2
2 3 3 2 4
3 4 1 3 3, 'Sheet2': A
0 1
1 2
2 3
3 4, 'Sheet3': B
0 3
1 4
2 5
3 6}, '2.xlsx': {'Sheet1': A B C D
0 1 1 1 2
1 2 2 4 2
2 3 3 2 4
3 4 1 3 3, 'Sheet2': A
0 1
1 2
2 3
3 4, 'Sheet3': B
0 3
1 4
2 5
3 6}, '3.xlsx': {'Sheet1': A B C D
0 1 1 1 2
1 2 2 4 2
2 3 3 2 4
3 4 1 3 3, 'Sheet2': A
0 1
1 2
2 3
3 4, 'Sheet3': B
0 3
1 4
2 5
3 6}}
</code></pre>
<p>Once we get this output, we can delete Sheet1 using delete_key('Sheet1', df_dict) function. The output after running this function is as follows:</p>
<pre><code>Output:
{'1.xlsx': {'Sheet2': A
0 1
1 2
2 3
3 4,
'Sheet3': B
0 3
1 4
2 5
3 6},
'2.xlsx': {'Sheet2': A
0 1
1 2
2 3
3 4,
'Sheet3': B
0 3
1 4
2 5
3 6},
'3.xlsx': {'Sheet2': A
0 1
1 2
2 3
3 4,
'Sheet3': B
0 3
1 4
2 5
3 6}}
</code></pre>
<p>This is how we can see that Sheet one was removed from all the excel files.</p>
|
python|excel|pandas|dataframe|data-extraction
| 0
|
7,950
| 71,236,651
|
Tensorflow importing custom layers, runs training of custom model
|
<p>My use case is the following: I am creating a dimensionality reducing AutoEncoder with Tensorflow. I have implemented three custom layers and with that a model</p>
<pre><code>class ConvLayer(Layer):
def __init__(self, filter, kernel, act, **kwargs):
super().__init__()
self.filter = filter
self.kernel = kernel
self.act = act
super(ConvLayer, self).__init__(**kwargs)
def build(self, input_shape):
self.conv = Conv1D(self.filter, self.kernel, padding='same')
self.norm = BatchNormalization()
self.acti = Activation(self.act)
def get_config(self):
config = super(ConvLayer, self).get_config()
config.update({
"filter": self.filter,
"kernel": self.kernel,
"act" : self.act
})
return config
def call(self, inputs):
x = self.conv(inputs)
x = self.norm(x)
return self.acti(x)
class _Conv1DTranspose(Layer):
def __init__(self, filter, kernel, **kwargs):
super().__init__()
self.filter = filter
self.kernel = kernel
super(_Conv1DTranspose, self).__init__(**kwargs)
def build(self, input_shape):
self.first = Lambda(lambda x: K.expand_dims(x, axis=2))
self.conv = Conv2DTranspose(self.filter, (self.kernel, 1), padding='same')
self.second = Lambda(lambda x: K.squeeze(x, axis=2))
def get_config(self):
config = super(_Conv1DTranspose, self).get_config()
config.update({
"filter": self.filter,
"kernel": self.kernel
})
return config
def call(self, inputs):
x = self.first(inputs)
x = self.conv(x)
return self.second(x)
class DeconvLayer(Layer):
def __init__(self, filter, kernel, act, **kwargs):
super().__init__()
self.filter = filter
self.kernel = kernel
self.act = act
super(DeconvLayer, self).__init__(**kwargs)
def build(self, input_shape):
self.conv = _Conv1DTranspose(self.filter, self.kernel)
self.norm = BatchNormalization()
self.acti = Activation(self.act)
def get_config(self):
config = super(DeconvLayer, self).get_config()
config.update({
"filter": self.filter,
"kernel": self.kernel,
"act" : self.act
})
return config
def call(self, inputs):
x = self.conv(inputs)
x = self.norm(x)
return self.acti(x)
def create_model(latent_dim):
encoder = Sequential([
ConvLayer(128, 2, 'selu'),
ConvLayer(128, 2, 'selu'),
ConvLayer(128, 2, 'selu'),
ConvLayer(128, 2, 'selu'),
MaxPooling1D(5),
ConvLayer(64, 2, 'selu'),
ConvLayer(64, 2, 'selu'),
ConvLayer(64, 2, 'selu'),
ConvLayer(64, 2, 'selu'),
MaxPooling1D(2),
ConvLayer(32, 2, 'selu'),
ConvLayer(32, 2, 'selu'),
ConvLayer(32, 2, 'selu'),
ConvLayer(32, 2, 'selu'),
MaxPooling1D(2),
Flatten(),
Dense(latent_dim, activation='selu') ], name='Encoder')
decoder = Sequential([
Dense((latent_dim * 32), activation='selu'),
Reshape((50, 32)),
UpSampling1D(2),
DeconvLayer(32, 2, 'selu'),
DeconvLayer(32, 2, 'selu'),
DeconvLayer(32, 2, 'selu'),
DeconvLayer(32, 2, 'selu'),
UpSampling1D(2),
DeconvLayer(64, 2, 'selu'),
DeconvLayer(64, 2, 'selu'),
DeconvLayer(64, 2, 'selu'),
DeconvLayer(64, 2, 'selu'),
UpSampling1D(5),
DeconvLayer(128, 2, 'selu'),
DeconvLayer(128, 2, 'selu'),
DeconvLayer(128, 2, 'selu'),
DeconvLayer(128, 2, 'selu'),
DeconvLayer(1, 2, 'sigmoid') ], name='Decoder')
return encoder, decoder
</code></pre>
<p>I am training and saving the encoder part in a separate file, <em>autoencoder.py</em>, with <code>encoder.save("encoder_dim_50.h5")</code> Now in my <em>main.py</em> I want to load in my model and use it to reduce some dimensions.</p>
<p>Here is where my issue starts: I am importing the custom layers with <code>from autoencoder import _Conv1DTranspose, ConvLayer, DeconvLayer</code>, and while importing, it starts to run the whole training sequence again?!</p>
<p>The code does not even reach the loading of the model</p>
<pre><code>self.Encoder = tf.keras.models.load_model(model_path, custom_objects={'_Conv1DTranspose': _Conv1DTranspose,
'ConvLayer' : ConvLayer,
'DeconvLayer' : DeconvLayer})
</code></pre>
<p>Am I missing some glaring issue here, or should I implement my custom layers in the <em>main.py</em> as well?</p>
<p>Thank you for your time</p>
|
<p>To stop your code from running automatically when the module is imported, put the module-level code (in your case, the training and saving code in <em>autoencoder.py</em>) behind a main guard:</p>
<pre><code>def main():
    # do whatever you want in this function; it only runs when this file is executed directly
    print('hello world')

if __name__ == "__main__":
    main()
</code></pre>
<p>This way, importing the custom layers from <em>main.py</em> no longer triggers the training, while running the file directly still trains and saves the model.</p>
|
python|tensorflow
| 1
|
7,951
| 71,238,264
|
Python Pandas drop a specific range of columns that are all NaN
|
<p>I'm attempting to drop a range of columns in a pandas dataframe that have all NaN. I know the following code:</p>
<pre><code>df.dropna(axis=1, how='all', inplace = True)
</code></pre>
<p>Will search all the columns in the dataframe and drop the ones that have all NaN.</p>
<p>However, when I extend this code to a specific range of columns:</p>
<pre><code>df[df.columns[48:179]].dropna(axis=1, how='all', inplace = True)
</code></pre>
<p>The result is the original dataframe with no columns removed. I also know for a fact that the selected range has multiple columns with all NaN's.</p>
<p>Any idea what I might be doing wrong here?</p>
|
<p>Don't use <code>inplace=True</code> here: calling <code>dropna</code> on a column slice like <code>df[df.columns[48:179]]</code> operates on a temporary copy, so the original dataframe is never modified. Instead, find the all-NaN columns within that range and drop them explicitly:</p>
<pre><code>cols = df.columns[48:179]
all_nan = df[cols].isna().all()
df = df.drop(columns=all_nan[all_nan].index)
</code></pre>
|
python|pandas|dataframe
| 1
|
7,952
| 52,301,279
|
Error loading TensorFlow graph with C API
|
<p>I'm trying to use the TensorFlow C API to load and execute a graph. It keeps failing and I can't figure out why.</p>
<p>I first use this Python script to create a very simple graph and save it to a file.</p>
<pre><code>import tensorflow as tf
graph = tf.Graph()
with graph.as_default():
input = tf.placeholder(tf.float32, [10, 3], name='input')
output = tf.reduce_sum(input**2, name='output')
tf.train.write_graph(graph, '.', 'test.pbtxt')
</code></pre>
<p>Then I use this C++ code to load it in.</p>
<pre><code>#include <fstream>
#include <iostream>
#include <string>
#include <c_api.h>
using namespace std;
int main() {
ifstream graphFile("test.pbtxt");
string graphText((istreambuf_iterator<char>(graphFile)), istreambuf_iterator<char>());
TF_Buffer* buffer = TF_NewBufferFromString(graphText.c_str(), graphText.size());
TF_Graph* graph = TF_NewGraph();
TF_ImportGraphDefOptions* importOptions = TF_NewImportGraphDefOptions();
TF_Status* status = TF_NewStatus();
TF_GraphImportGraphDef(graph, buffer, importOptions, status);
cout<<TF_GetCode(status)<<endl;
return 0;
}
</code></pre>
<p>The status code it prints is 3, or <code>TF_INVALID_ARGUMENT</code>. Which argument is invalid and why? I verified the file contents are loaded correctly into <code>graphText</code>, and all the other arguments are trivial.</p>
|
<p>First of all, I think you should write the Graph with <code>as_graph_def()</code>, in your case:</p>
<pre class="lang-py prettyprint-override"><code>with open('test.pb', 'wb') as f:
f.write(graph.as_graph_def().SerializeToString())
</code></pre>
<p>Apart from that, I recommend not using the C API directly, as it is error-prone and it is easy to leak memory with it. Instead I have tried your code using <a href="https://github.com/serizba/cppflow" rel="nofollow noreferrer">cppflow</a>, a C++ wrapper, and it works like a charm. I used the following code:</p>
<pre class="lang-cpp prettyprint-override"><code># Load model
Model model("../test.pb");
# Declare tensors by name
auto input = new Tensor(model, "input");
auto output = new Tensor(model, "output");
# Feed data
std::vector<float> data(30, 1);
input->set_data(data);
# Run and show
model.run(input, output);
std::cout << output->get_data<float>()[0] << std::endl;
</code></pre>
|
tensorflow
| 1
|
7,953
| 60,671,246
|
Change type Column with 2 format Python
|
<p><strong>Hello World,</strong></p>
<p>I'm new to Python and am trying to convert a column (YEAR) into one format.</p>
<p>The column contains 2 formats:
- yy
- yyyy</p>
<p>Here is an example</p>
<pre><code>year
2007
07
1999
2001
99
</code></pre>
<p>What I have tried is to prepend 20 to the year column, but what about when the year is 99?</p>
<p>The desire output will be something like</p>
<pre><code>year new_year
2007 2007
07 2007
1999 1999
2001 2001
99 1999
</code></pre>
<p>Thanks for anyone helping</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html" rel="nofollow noreferrer"><code>to_datetime</code></a> with format for <code>YY</code> - <code>%y</code> and <code>errors='coerce'</code> and with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.year.html" rel="nofollow noreferrer"><code>Series.dt.year</code></a>, so non format <code>YY</code> return missing values replaced by original with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a>: </p>
<pre><code>df['new_year'] = (pd.to_datetime(df['year'], format='%y', errors='coerce').dt.year
.fillna(df['year'])
.astype(int))
print (df)
year new_year
0 2007 2007
1 07 2007
2 1999 1999
3 2001 2001
4 99 1999
</code></pre>
|
python|pandas
| 4
|
7,954
| 60,720,213
|
pandas: get rows by comparing two columns of dataframe to list of tuples
|
<p>Say I have a pandas DataFrame with four columns: A,B,C,D.</p>
<pre class="lang-py prettyprint-override"><code>my_df = pd.DataFrame({'A': [0,1,4,9], 'B': [1,7,5,7],'C':[1,1,1,1],'D':[2,2,2,2]})
</code></pre>
<p>I also have a list of tuples:</p>
<pre class="lang-py prettyprint-override"><code>my_tuples = [(0,1),(4,5),(9,9)]
</code></pre>
<p>I want to keep only the rows of the dataframe where the value of <code>(my_df['A'],my_df['B'])</code> is equal to one of the tuples in my_tuples.</p>
<p>In this example, this would be row#0 and row#2.</p>
<p>Is there a good way to do this? I'd appreciate any help.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="noreferrer"><code>DataFrame.merge</code></a> with <code>DataFrame</code> created by tuples, there is no <code>on</code> parameter for default interecton of all columns in both <code>DataFrames</code>, here <code>A</code> and <code>B</code>:</p>
<pre><code>df = my_df.merge(pd.DataFrame(my_tuples, columns=['A','B']))
print (df)
A B C D
0 0 1 1 2
1 4 5 1 2
</code></pre>
<p>Or:</p>
<pre><code>df = my_df[my_df.set_index(['A','B']).index.isin(my_tuples)]
print (df)
A B C D
0 0 1 1 2
2 4 5 1 2
</code></pre>
|
python|pandas
| 7
|
7,955
| 72,595,974
|
How do I divide across all columns in a pandas DF
|
<p>I have a large data set with over 25 columns in a pandas df and I would like to calculate the growth rate from one month to the other.</p>
<p>How do I get from this:</p>
<p><a href="https://i.stack.imgur.com/FOYzr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FOYzr.png" alt="enter image description here" /></a></p>
<p>To this:</p>
<p><a href="https://i.stack.imgur.com/ChLrP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ChLrP.png" alt="enter image description here" /></a></p>
<p>without having to repeat the formula 3 times like this:</p>
<pre><code>growth_rate1 = (df['august'] / df['july'] - 1)
growth_rate2 = (df['september'] / df['august'] - 1)
growth_rate3 = (df['october'] / df['september'] - 1)
</code></pre>
|
<p>You need to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pct_change.html" rel="nofollow noreferrer"><code>pct_change</code></a>:</p>
<pre><code>out = df.pct_change(axis=1).iloc[:,1:]
</code></pre>
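<p>For example, with a made-up frame of three monthly columns:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'july': [100, 200], 'august': [110, 180], 'september': [121, 90]})
out = df.pct_change(axis=1).iloc[:, 1:]
print(out)
#    august  september
# 0     0.1        0.1
# 1    -0.1       -0.5
</code></pre>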
|
python-3.x|pandas|dataframe
| 2
|
7,956
| 72,581,847
|
How to take tf.Tensor type to number type in Typescript
|
<p>I'm working with TensorFlow.js in TypeScript and I want to get the cosine similarity of two 1D tensors, but I am having trouble dealing with the types that TensorFlow uses.
When I calculate the cosine similarity using the function below, I should be getting a number, but the declared return type instead covers a bunch of other types in addition to number.</p>
<pre><code>function calculateCosineSimilarity(abstractEmbedding: tf.Tensor1D | Array<number>, queryEmbedding: tf.Tensor1D | Array<number>): number{
const dotProd: tf.Tensor = tf.dot(abstractEmbedding, queryEmbedding);
const lenAbstractEmbedding: tf.Tensor = tf.dot(abstractEmbedding, abstractEmbedding);
const lenQueryEmbedding: tf.Tensor = tf.dot(queryEmbedding, queryEmbedding);
const similarityScore: tf.Tensor = tf.div(dotProd, tf.mul(lenAbstractEmbedding,lenQueryEmbedding));
return similarityScore.arraySync();
}
</code></pre>
<p>I get this error at the return statement:</p>
<pre><code>Type 'number | number[] | number[][] | number[][][] | number[][][][] | number[][][][][] | number[][][][][][]' is not assignable to type 'number'.
</code></pre>
<p>I know when you take dot products of multidimensional arrays, the dimensions/types of the resulting array will vary, but for my case I know I will be getting a single value back, so I just want to return a single number. Is there a way to resolve this issue without having to change the return type of the function?</p>
|
<p>I don't believe it is the best practice, but if you do not want the return type of the function to list all of those type options, or be Any, then you can force the coercion of the response variable to unknown, if necessary, and then to the type you are expecting.</p>
<pre><code>return similarityScore.arraySync() as number;
</code></pre>
<p>or if the types do not sufficiently overlap</p>
<pre><code>return similarityScore.arraySync() as unknown as number;
</code></pre>
|
javascript|typescript|tensorflow|tensorflow.js
| 0
|
7,957
| 72,806,145
|
Keep columns if column contains string
|
<p>This has been answered similarly at <a href="https://stackoverflow.com/questions/66715267/how-to-keep-a-row-if-any-column-contains-a-certain-substring">How to keep a row if any column contains a certain substring?</a>. However, my problem involves multiple dataframes within a list which is a different set-up to the other post. Additionally, I want to keep columns rather than rows.</p>
<p>I have tried all alternatives to the answers in that post and cannot get my problem to successfully work. Here's what I am working with:</p>
<pre><code>import pandas as pd
nm = ["Sepal.Length" ,"Sepal.Width" , "Petal.Length", "Petal.Width", "Species"]
def tbl(data):
data = [data[x:] + data[:x] for x in range(1, len(data)+1)]
df = pd.DataFrame(data)
return df
df_tbl = tbl(nm)
ls_comb = [df_tbl.loc[0:i] for i in range(0, len(df_tbl))]
reply_pred=[i.apply(lambda x: x.str.replace('Species', 'log(Species)')) for i in ls_comb]
</code></pre>
<p>Here is what I have tried:</p>
<pre><code>[i[i.apply(lambda x: x.str.contains('Sepal.Width', na=False))] for i in reply_pred]
[ 0 1 2 3 4
0 Sepal.Width NaN NaN NaN NaN,
0 1 2 3 4
0 Sepal.Width NaN NaN NaN NaN
1 NaN NaN NaN NaN Sepal.Width,
0 1 2 3 4
0 Sepal.Width NaN NaN NaN NaN
1 NaN NaN NaN NaN Sepal.Width
2 NaN NaN NaN Sepal.Width NaN,
0 1 2 3 4
0 Sepal.Width NaN NaN NaN NaN
1 NaN NaN NaN NaN Sepal.Width
2 NaN NaN NaN Sepal.Width NaN
3 NaN NaN Sepal.Width NaN NaN,
0 1 2 3 4
0 Sepal.Width NaN NaN NaN NaN
1 NaN NaN NaN NaN Sepal.Width
2 NaN NaN NaN Sepal.Width NaN
3 NaN NaN Sepal.Width NaN NaN
4 NaN Sepal.Width NaN NaN NaN]
</code></pre>
<p>However, the expected output should return the entire column, for example:</p>
<pre><code>[ 0
0 Sepal.Width
0 4
0 Sepal.Width Sepal.Length
1 Petal.Length Sepal.Width,
0 3 4
0 Sepal.Width log(Species) Sepal.Length
1 Petal.Length Sepal.Length Sepal.Width
2 Petal.Width Sepal.Width Petal.Length,
.
.
.
</code></pre>
|
<p>You can boolean-mask <code>df.columns</code> and then use <code>df.loc</code> to select the remaining columns.</p>
<pre class="lang-py prettyprint-override"><code>dfs = [df.loc[:, df.columns[df.apply(lambda col: col.str.contains('Sepal.Width')).any()]]
for df in reply_pred]
</code></pre>
<pre><code>for df in dfs:
print(df, '\n')
0
0 Sepal.Width
0 4
0 Sepal.Width Sepal.Length
1 Petal.Length Sepal.Width
0 3 4
0 Sepal.Width log(Species) Sepal.Length
1 Petal.Length Sepal.Length Sepal.Width
2 Petal.Width Sepal.Width Petal.Length
0 2 3 4
0 Sepal.Width Petal.Width log(Species) Sepal.Length
1 Petal.Length log(Species) Sepal.Length Sepal.Width
2 Petal.Width Sepal.Length Sepal.Width Petal.Length
3 log(Species) Sepal.Width Petal.Length Petal.Width
0 1 2 3 4
0 Sepal.Width Petal.Length Petal.Width log(Species) Sepal.Length
1 Petal.Length Petal.Width log(Species) Sepal.Length Sepal.Width
2 Petal.Width log(Species) Sepal.Length Sepal.Width Petal.Length
3 log(Species) Sepal.Length Sepal.Width Petal.Length Petal.Width
4 Sepal.Length Sepal.Width Petal.Length Petal.Width log(Species)
</code></pre>
|
python|pandas
| 1
|
7,958
| 59,523,972
|
Dimension mismatch between input data and trained data when using conv1D
|
<p>I have tried to build my first CNN using Conv1D, as I deal with time series data. My goal is to compress input data of shape 1501. The x_train shape is (550, 1501) and I increased its dimension to fit the model.</p>
<p>However, the compiler complains:</p>
<blockquote>
<p>ValueError: A target array with shape (550, 1501, 1) was passed for an output of shape (None, 1500, 1) while using as loss <code>mean_squared_error</code>. This loss expects targets to have the same shape as the output.</p>
</blockquote>
<p>This is the code</p>
<pre><code>import numpy as np
from tensorflow.keras.layers import Input,Dense, Conv1D, MaxPooling1D, UpSampling1D, Flatten, Input
from tensorflow.keras import optimizers, Model
import matplotlib.pyplot as plt
from tensorflow.keras import backend as K
#(1,128,1)
input_data = Input(shape=(1501,1))
fil_ord = 3
# Encode
encode = Conv1D(2000, fil_ord, activation='relu', padding='same')(input_data)
encode = MaxPooling1D( 2 )(encode)
encode = Conv1D(750, fil_ord, activation='relu', padding='same')(encode)
# Decode
decode = Conv1D(750, fil_ord, activation='relu', padding='same')(encode)
decode = UpSampling1D( 2)(decode)
decode = Conv1D(1, fil_ord, activation='sigmoid', padding='same')(decode)
model = Model(input_data, decode)
model.summary()
from numpy import zeros, newaxis
x_train1=x_train[:,:,None]
batch_size = 128
epochs = 10
# Optimizer
sgd = optimizers.Adam(lr=0.001)
# compile
model.compile(loss='mse', optimizer=sgd)
# train
history = model.fit(x_train1, x_train1, batch_size=batch_size, epochs=epochs, verbose=2,shuffle=True)
</code></pre>
<p>The model.summary() output:</p>
<p><a href="https://i.stack.imgur.com/TbGNy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TbGNy.png" alt="enter image description here"></a></p>
|
<p>The error is in the <code>decode</code> output dimension at <code>axis=1</code>, i.e., <code>1500</code> which is different from the target <code>x_train1</code> dimension of <code>1501</code>.</p>
<p>This is happening due to <em>this chain</em> of <em>max-pooling</em> and <em>upsampling</em> operations: <code>1501 -> 750 -> 1500</code> where <code>MaxPooling1D</code> ignores one additional element while down-sampling and outputs features of dimension <code>750</code> at <code>axis=1</code> which is not recovered from up-sampling operation with <code>UpSampling1D</code>. </p>
<p>Hence, the target (<code>x_train1</code>) and the predicted (<code>decode</code>) outputs differ in shape because of which we cannot compute the loss.</p>
<p>Two approaches that could be used to solve this are:</p>
<ul>
<li><strong>Crop</strong> the target (<code>x_train</code>'s) dimension in <code>axis=1</code> to match that of the <code>decode</code>'s, i.e., <code>1500</code>. Here's one way to do this:
<code>history = model.fit(x_train1, x_train1[:,:-1,], batch_size=batch_size, ...)</code></li>
<li><strong>Pad</strong> the output obtained from <code>decode</code> with (say) <code>0</code>'s, to match the dimension of <code>x_train</code>, i.e., <code>1501</code>. Since <code>decode</code> is a 3-D tensor, one way to do this is by using a <a href="https://keras.io/layers/convolutional/#zeropadding1d" rel="nofollow noreferrer">ZeroPadding1D</a> layer on <code>decode</code>:
<code>ZeroPadding1D(padding=(0, 1))(decode)</code></li>
</ul>
|
python|tensorflow|keras|conv-neural-network|convolution
| 0
|
7,959
| 59,851,196
|
How to have a batch size greater than 1 in a Keras LSTM network?
|
<p>I've been training an LSTM using Keras with a batch size of 1 and it's been very slow. I'd like to increase the batch size to speed up training, but I can't work out how to do it.</p>
<p>Below is code (my minimum, reproducible example) that shows my problem. It works with a batch size of 1, but how could I use a batch size of 2?</p>
<pre><code>import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import TimeDistributed
#the demo data contains 10 instances each with 7 features and 3 targets
#the features are 0 or 1, the targets are the sum of the features in binary (with 3 bits)
raw_data = [[1,1,1,0,0,0,1,1,0,0], #has features 1,1,1,0,0,0,1, which contains 4 (100 in binary) 1's
[0,0,0,0,0,0,0,0,0,0],[0,1,1,0,0,0,0,0,1,0],[1,0,0,1,1,1,1,1,0,1],[0,0,0,0,0,1,0,0,0,1],
[1,1,1,1,1,1,0,1,1,0],[1,1,1,0,0,0,0,0,1,1],[1,1,1,1,1,1,1,1,1,1],[0,1,0,0,1,0,1,0,1,1],
[1,1,1,1,0,1,1,1,1,0]]
#how can I use a batch_size of 2?
batch_size = 1
epochs = 10
df = pd.DataFrame(raw_data)
train_x, train_y = df.values[:,0:-3], df.values[:,-3:]
#reshape <batch_size, time_steps, seq_len> https://mc.ai/understanding-input-and-output-shape-in-lstm-keras/
#but can't reshape to a batch size of 2 as I get the ValueError below,
#ValueError: cannot reshape array of size 70 into shape (2,10,7) - which makes sense
#if I remove the batch_size from the reshape I get the ValueError below,
#ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (10, 7)
train_x = np.reshape(train_x, (batch_size, train_x.shape[0], train_x.shape[1]))
train_y = np.reshape(train_y, (batch_size, train_y.shape[0], train_y.shape[1]))
model = Sequential()
model.add(LSTM(batch_input_shape=(batch_size, 10, 7), return_sequences=True, units=7))
model.add(TimeDistributed(Dense(activation='linear', units=3)))
model.compile(loss='mse', optimizer='rmsprop')
#training and testing on the same data here, but it's only example code to demonstrate my batch_size problem
history = model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, validation_data=(train_x, train_y))
yhat = model.predict(train_x, batch_size)
print(yhat)
</code></pre>
<p>If I were training a model using just Dense layers, I'd be able to keep my training/testing data in 2 dimensions and the batch size would be handled for me. However, the LSTM requires 3 dimensions. Would I have to manually create my batches to present to the model by doing something like,</p>
<pre><code>train_x = np.reshape(train_x, (batch_size, int(train_x.shape[0]/batch_size), train_x.shape[1]))
train_y = np.reshape(train_y, (batch_size, int(train_y.shape[0]/batch_size), train_y.shape[1]))
...
model.add(LSTM(batch_input_shape=(batch_size, int(train_x.shape[0]/batch_size), 7), return_sequences=True, units=7))
</code></pre>
<p>but this gives a ValueError of</p>
<pre><code>ValueError: Error when checking input: expected lstm_1_input to have shape (1, 7) but got array with shape (5, 7)
</code></pre>
<p>when applying <code>model.fit</code></p>
<p>In my minimum, reproducible example above how could I modify it to use a batch size of 2? </p>
|
<p>Only use <code>batch_size</code> in <code>model.fit</code>. The following code works for me:</p>
<pre><code>import pandas as pd
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import TimeDistributed
#the demo data contains 10 instances each with 7 features and 3 targets
#the features are 0 or 1, the targets are the sum of the features in binary (with 3 bits)
raw_data = [[1,1,1,0,0,0,1,1,0,0], #has features 1,1,1,0,0,0,1, which contains 4 (100 in binary) 1's
[0,0,0,0,0,0,0,0,0,0],[0,1,1,0,0,0,0,0,1,0],[1,0,0,1,1,1,1,1,0,1],[0,0,0,0,0,1,0,0,0,1],
[1,1,1,1,1,1,0,1,1,0],[1,1,1,0,0,0,0,0,1,1],[1,1,1,1,1,1,1,1,1,1],[0,1,0,0,1,0,1,0,1,1],
[1,1,1,1,0,1,1,1,1,0]]
#how can I use a batch_size of 2?
batch_size = 2
epochs = 10
df = pd.DataFrame(raw_data)
train_x, train_y = df.values[:,0:-3], df.values[:,-3:]
#reshape <batch_size, time_steps, seq_len> https://mc.ai/understanding-input-and-output-shape-in-lstm-keras/
#but can't reshape to a batch size of 2 as I get the ValueError below,
#ValueError: cannot reshape array of size 70 into shape (2,10,7) - which makes sense
#if I remove the batch_size from the reshape I get the ValueError below,
#ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (10, 7)
train_x = np.reshape(train_x, (1, train_x.shape[0], train_x.shape[1]))
train_y = np.reshape(train_y, (1, train_y.shape[0], train_y.shape[1]))
model = Sequential()
model.add(LSTM(batch_input_shape=(1, 10, 7), return_sequences=True, units=7))
model.add(TimeDistributed(Dense(activation='linear', units=3)))
model.compile(loss='mse', optimizer='rmsprop')
#training and testing on the same data here, but it's only example code to demonstrate my batch_size problem
history = model.fit(train_x, train_y, epochs=epochs, batch_size=batch_size, validation_data=(train_x, train_y))
yhat = model.predict(train_x, 1)
print(yhat)
</code></pre>
|
python|tensorflow|keras|lstm
| 2
|
7,960
| 59,746,744
|
parsing a list of dictionaries in a pandas data frame rows
|
<p>I have a list of dictionaries in a pandas dataframe column. I want to parse it and create new rows from it, even though the other column values will repeat.</p>
<p>Here is the dataframe:</p>
<pre><code>event_date event_timestamp event_name event_params
20191118 1.57401E+15 user_engagement [{'key': 'firebase_event_origin', 'value': {'string_value': 'auto', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None}}, {'key': 'engagement_time_msec', 'value': {'string_value': None, 'int_value': 17167, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': 9065232440298470924, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_class', 'value': {'string_value': 'SplashActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'engaged_session_event', 'value': {'string_value': None, 'int_value': 1, 'float_value': None, 'double_value': None}}]
20191119 1.57401E+15 screen_view [{'key': 'firebase_previous_id', 'value': {'string_value': None, 'int_value': 9065232440298470924, 'float_value': None, 'double_value': None}}, {'key': 'firebase_event_origin', 'value': {'string_value': 'auto', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': 9065232440298470925, 'float_value': None, 'double_value': None}}, {'key': 'firebase_previous_class', 'value': {'string_value': 'SplashActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_class', 'value': {'string_value': 'AuthenticationActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'engaged_session_event', 'value': {'string_value': None, 'int_value': 1, 'float_value': None, 'double_value': None}}]
20191120 1.57401E+15 user_engagement [{'key': 'firebase_event_origin', 'value': {'string_value': 'auto', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None}}, {'key': 'engagement_time_msec', 'value': {'string_value': None, 'int_value': 13271, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': 9065232440298470925, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_class', 'value': {'string_value': 'AuthenticationActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'engaged_session_event', 'value': {'string_value': None, 'int_value': 1, 'float_value': None, 'double_value': None}}]
</code></pre>
<p>This is what I want</p>
<pre><code>event_date event_timestamp event_name key value
20191118 1.57401E+15 user_engagement firebase_event_origin auto
20191118 1.57401E+15 user_engagement ga_session_number 5
20191118 1.57401E+15 user_engagement engagement_time_msec 17167
20191119 1.57401E+15 screen_view firebase_previous_id 9.06523E+18
20191119 1.57401E+15 screen_view engaged_session_event 1
</code></pre>
<p>This is what I have tried:</p>
<pre><code>pd.DataFrame(data['event_params'].apply(ast.literal_eval).values.tolist()) \
.stack() \
.reset_index(level=0,drop=True) \
.reset_index()
</code></pre>
<p>It gives me following output:</p>
<pre><code>index 0
0 {'key': 'firebase_event_origin', 'value': {'st...
1 {'key': 'ga_session_number', 'value': {'string...
2 {'key': 'engagement_time_msec', 'value': {'str...
</code></pre>
<p>How do I parse it further to populate the 'key' and 'value' columns, duplicating the other column values as needed? Kindly assist me.</p>
<p>UPDATE:
Solution tried</p>
<p><a href="https://i.stack.imgur.com/RCiqr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RCiqr.png" alt="Solution tried as per @bharath"></a></p>
|
<p>You have to explode your DataFrame first using pandas.Series.explode()
and then write a couple of for loops to get the expected result.
Here is the answer.</p>
<pre><code>import pandas as pd
d = {'event_date': [1, 2], 'event_name': [3, 4] ,'event_params': [[{'key': 'firebase_event_origin', 'value': {'string_value': 'auto', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None}}, {'key': 'engagement_time_msec', 'value': {'string_value': None, 'int_value': 17167, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': 9065232440298470924, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_class', 'value': {'string_value': 'SplashActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'engaged_session_event', 'value': {'string_value': None, 'int_value': 1, 'float_value': None, 'double_value': None}}], [{'key': 'firebase_previous_id', 'value': {'string_value': None, 'int_value': 9065232440298470924, 'float_value': None, 'double_value': None}}, {'key': 'firebase_event_origin', 'value': {'string_value': 'auto', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_number', 'value': {'string_value': None, 'int_value': 5, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_id', 'value': {'string_value': None, 'int_value': 9065232440298470925, 'float_value': None, 'double_value': None}}, {'key': 'firebase_previous_class', 'value': {'string_value': 'SplashActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'ga_session_id', 'value': {'string_value': None, 'int_value': 1574005142, 'float_value': None, 'double_value': None}}, {'key': 'firebase_screen_class', 'value': {'string_value': 'AuthenticationActivity', 'int_value': None, 'float_value': None, 'double_value': None}}, {'key': 'engaged_session_event', 'value': {'string_value': None, 'int_value': 1, 'float_value': None, 'double_value': None}}]]}
df = pd.DataFrame(d)
df = df.explode('event_params').reset_index(drop = True)
df['key'] = None
for i in range(len(df)):
df.loc[i, 'key'] = df.loc[i, 'event_params']['key']
df['value'] = None
for i in range(len(df)):
for k in df.loc[i, 'event_params']['value']:
if df.loc[i, 'event_params']['value'][k]!=None:
df.loc[i, 'value'] = df.loc[i, 'event_params']['value'][k]
df.drop(columns = 'event_params', inplace = True)
</code></pre>
|
python-3.x|pandas|dataframe|dictionary
| 2
|
7,961
| 61,823,039
|
How to create pandas dataframe and fill it from function?
|
<p>I have this function:</p>
<pre><code>def count (a,b):
x = a*b
</code></pre>
<p>Values must be 1...99 for 'a' and 100...800 for 'b'. So the question is: how do I create a pandas dataframe with the a-values vertically, the b-values horizontally, and the x-values inside, computed with the 'count' function for all combinations of a and b? It should look like this:
<a href="https://i.stack.imgur.com/482Lc.png" rel="nofollow noreferrer">example</a></p>
|
<p>Hope this may help</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def count(a,b):
x = a*b
return x
a = list(range(1,100))
b = list(range(100,801))
data = []
for i in a:
temp = [i]
for j in b:
temp.append(count(i,j))
data.append(temp)
df = pd.DataFrame(data, columns=["a/b"]+b)
# to save as csv
df.to_csv("data.csv", index=False)
</code></pre>
|
python|pandas
| 2
|
7,962
| 61,821,288
|
Multiindex "get_level_values"-function for arbitrarily many levels
|
<p>Is there a way to construct a function that uses "get_level_values" an arbitrary number of times and returns the sliced dataframe? An example can explain my need.</p>
<p>Multiindex:</p>
<pre><code>arrays = [['bar', 'bar', 'bar', 'baz', 'baz', 'foo', 'foo','foo','qux', 'qux'],
['one', 'two', 'three', 'one', 'four', 'one', 'two', 'eight','one', 'two'],
['green', 'green', 'blue', 'blue', 'black', 'black', 'orange', 'green','blue', 'black'] ]
s = pd.DataFrame(np.random.randn(10), index=arrays)
s.index.names = ['p1','p2','p3']
s
0
p1 p2 p3
bar one green -0.676472
two green -0.030377
three blue -0.957517
baz one blue 0.710764
four black 0.404377
foo one black -0.286358
two orange -1.620832
eight green 0.316170
qux one blue -0.433310
two black 1.127754
</code></pre>
<p>Now, this is the function I want to create: </p>
<pre><code>def my_func(df,levels, values):
# Code using get_level_values
return ret
# Example use
my_func(s, ['p1'],['bar'])
p1 p2 p3
bar one green -0.676472
two green -0.030377
three blue -0.957517
my_func(s, ['p1','p2'],['bar','one'])
p1 p2 p3
bar one green -0.676472
</code></pre>
<p>Above <code>my_func(['p1'],['bar'])</code> returns <code>s.loc[s.index.get_level_values('p1')=='bar']</code> and <code>my_func(['p1','p2'],['bar','one'])</code> returns <code>s.loc[(s.index.get_level_values('p1')=='bar') & (s.index.get_level_values('p2')=='one')]</code></p>
<p>So, I want to put a list of arbitrarily many levels and a list of the same number of values and return the sliced dataframe.</p>
<p>Any help is much appreciated!</p>
|
<p>Try this and see if it works for you: since your multiindex has names, it is easier to use <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#multiindex-query-syntax" rel="nofollow noreferrer">query</a> for your function:</p>
<pre><code>def my_func(df,levels, values):
# Code using query
m = dict(zip(levels,values))
#create expression to use in the query method
expr = " and ".join(f"{k}=={v!r}" for k,v in m.items())
ret = df.query(expr)
return ret
#function application
my_func(s, ['p1'],['bar'])
0
p1 p2 p3
bar one green -0.087366
two green 1.126620
three blue 0.868515
my_func(s, ['p1','p2'],['bar','one'])
0
p1 p2 p3
bar one green -0.087366
</code></pre>
|
python|pandas|function|dataframe|multi-index
| 1
|
7,963
| 58,023,598
|
Filter DataFrame rows based on groups
|
<p>I'm learning Python/Pandas with a DataFrame having the following structure:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"cus_id" : ["2370", "2370", "5100", "5100", "8450", "8450", "1630", "1630", "1630"],
"cus_group" : ["A", "A", "A", "B", "B", "B", "A", "A", "B"]})
print(df)
cus_id cus_group
0 2370 A
1 2370 A
2 5100 A
3 5100 B
4 8450 B
5 8450 B
6 1630 A
7 1630 A
8 1630 B
</code></pre>
<p>My goal is to filter the rows of the above DataFrame. Specifically, I want to keep only the rows where a customer belongs to different groups. Here's my attempt:</p>
<pre><code>print(df.drop_duplicates(subset = ["cus_id", "cus_group"], keep = False))
cus_id cus_group
2 5100 A
3 5100 B
8 1630 B
</code></pre>
<p>Unfortunately, this is not the exact output I'm looking for. Note that <code>cus_id</code> = <code>1630</code> appears three times in the original DataFrame: two times in group <code>A</code> and one time in group <code>B</code>. Since it belongs to two distinct groups (<code>A</code> and <code>B</code>), I don't want to drop any of the rows for this customer. That is, the output I'm looking for is the following: </p>
<pre><code> cus_id cus_group
2 5100 A
3 5100 B
6 1630 A
7 1630 A
8 1630 B
</code></pre>
<p>I'm not sure what functionality I'm missing to achieve my goal. Any additional help would be appreciated. </p>
|
<p>You can simply change your keep value to first. It would give you the desired result. </p>
<pre><code>import pandas as pd
df = pd.DataFrame({"cus_id" : ["2370", "2370", "5100", "5100", "8450", "8450", "1630", "1630", "1630"],
"cus_group" : ["A", "A", "A", "B", "B", "B", "A", "A", "B"]})
print(df.drop_duplicates(subset = ["cus_id", "cus_group"], keep = "first"))
</code></pre>
<p><strong>EDIT</strong></p>
<p>My apologies, I misunderstood the question</p>
<p>You can use <code>groupby</code> on <code>cus_id</code> and then <code>transform('nunique')</code> on <code>cus_group</code> to keep only the customers that appear in more than one distinct group, as shown in the sketch below.</p>
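<p>For example, with the DataFrame from the question this keeps exactly the rows you expect (both rows of <code>5100</code> and all three rows of <code>1630</code>):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"cus_id" : ["2370", "2370", "5100", "5100", "8450", "8450", "1630", "1630", "1630"],
                   "cus_group" : ["A", "A", "A", "B", "B", "B", "A", "A", "B"]})

# keep rows of customers that belong to more than one distinct group
out = df[df.groupby("cus_id")["cus_group"].transform("nunique") > 1]
print(out)
</code></pre>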
|
python|pandas|dataframe
| 1
|
7,964
| 49,458,902
|
Does applying a Dropout Layer after the Embedding Layer have the same effect as applying the dropout through the LSTM dropout parameter?
|
<p>I am slightly confused on the different ways to apply dropout to my Sequential model in Keras.</p>
<p>My model is the following:</p>
<pre><code>model = Sequential()
model.add(Embedding(input_dim=64,output_dim=64, input_length=498))
model.add(LSTM(units=100,dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
</code></pre>
<p>Assume that I added an extra Dropout layer after the Embedding layer in the below manner:</p>
<pre><code>model = Sequential()
model.add(Embedding(input_dim=64,output_dim=64, input_length=498))
model.add(Dropout(0.25))
model.add(LSTM(units=100,dropout=0.5, recurrent_dropout=0.5))
model.add(Dense(units=1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
</code></pre>
<p>Will this make any difference since I then specified that the dropout should be 0.5 in the LSTM parameter specifically, or am I getting this all wrong?</p>
|
<p>When you add a dropout layer you're adding dropout to the output of the previous layer only, in your case you are adding dropout to your embedding layer.</p>
<p>An LSTM cell is more complex than a single layer neural network, when you specify the dropout in the LSTM cell you are actually applying dropout to 4 different sub neural network operations in the LSTM cell. </p>
<p>Below is a visualization of an LSMT cell from Colah's blog on LSTMs (the best visualization of LSTM/RNNs out there, <a href="http://colah.github.io/posts/2015-08-Understanding-LSTMs/" rel="noreferrer">http://colah.github.io/posts/2015-08-Understanding-LSTMs/</a>). The yellow boxes represent 4 fully connected network operations (each with their own weights) which occur under the hood of the LSTM - this is neatly wrapped up in the LSTM cell wrapper, though it's not really so hard to code by hand.</p>
<p><a href="https://i.stack.imgur.com/GmEXu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/GmEXu.png" alt="enter image description here"></a></p>
<p>When you specify <code>dropout=0.5</code> in the LSTM cell, what you are doing under the hood is applying dropout to each of these 4 neural network operations. This is effectively adding <code>model.add(Dropout(0.25))</code> 4 times, once after each of the 4 yellow blocks you see in the diagram, within the internals of the LSTM cell.</p>
<p>I hope that short discussion makes it more clear how the dropout applied in the LSTM wrapper, which is applied to effectively 4 sub networks within the LSTM, is different from the dropout you applied once in the sequence after your embedding layer. And to answer your question directly, yes, these two dropout definitions are very much different.</p>
<p>Notice, as a further example to help elucidate the point: if you were to define a simple 5 layer fully connected neural network you would need to define dropout after each layer, not once. <code>model.add(Dropout(0.25))</code> is <em>not</em> some kind of global setting, it's adding the dropout operation to a pipeline of operations. If you have 5 layers, you need to add 5 dropout operations.</p>
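<p>For instance, a minimal sketch of that last point (the layer sizes here are made up; only the structure matters):</p>
<pre><code>from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(64, activation='relu', input_dim=32))
model.add(Dropout(0.25))  # dropout for layer 1
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))  # dropout for layer 2
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))  # dropout for layer 3
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))  # dropout for layer 4
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.25))  # dropout for layer 5
model.add(Dense(1, activation='sigmoid'))
</code></pre>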
|
python|tensorflow|machine-learning|keras|lstm
| 24
|
7,965
| 67,546,315
|
One-hot encode labels in keras
|
<p>I have a set of integers from a label column in a CSV file - <code>[1,2,4,3,5,2,..]</code>. The number of classes is <code>5</code> ie range of <code>1</code> to <code>6</code>. I want to one-hot encode them using the below code.</p>
<pre><code>y = df.iloc[:,10].values
y = tf.keras.utils.to_categorical(y, num_classes = 5)
y
</code></pre>
<p>But this code gives me an error</p>
<pre><code>IndexError: index 5 is out of bounds for axis 1 with size 5
</code></pre>
<p>How can I fix this?</p>
|
<p>If you use <code>tf.keras.utils.to_categorical</code> to one-hot the label vector, the integers should be in the range <code>0</code> to <code>num_classes - 1</code>, <a href="https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical#args" rel="noreferrer">source</a>. In your case, you should do as follows</p>
<pre><code>import tensorflow as tf
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
y_tf = tf.keras.utils.to_categorical(a-1, num_classes = 5)
y_tf
array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0.]], dtype=float32)
</code></pre>
<p>or, you can use <code>pd.get_dummies</code>,</p>
<pre><code>import pandas as pd
import numpy as np
a = np.array([1,2,4,3,5,2,4,2,1])
a_pd = pd.get_dummies(a).astype('float32').values
a_pd
array([[1., 0., 0., 0., 0.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 0., 1., 0., 0.],
[0., 0., 0., 0., 1.],
[0., 1., 0., 0., 0.],
[0., 0., 0., 1., 0.],
[0., 1., 0., 0., 0.],
[1., 0., 0., 0., 0.]], dtype=float32)
</code></pre>
|
tensorflow|keras|one-hot-encoding
| 5
|
7,966
| 67,461,808
|
Is there a way to loop through pandas dataframe and drop windows of rows dependent on condition?
|
<ol>
<li><strong>Problem Summary</strong> - I have a dataframe of ~10,000 rows. Some rows contain data aberrations that I would like to get rid of, and those aberrations are tied to observations made at certain temperatures (one of the data columns).</li>
<li><strong>What I've tried</strong> - My thought is that the easiest way to remove the rows of bad data is to loop through the temperature intervals, find the maximum index that is less than each of the temperature interval observations, and use the df.drop function to get rid of a window of rows around that index. Between every temperature interval at which bad data is observed, I reset the index of the dataframe. <em>However, it seems to be completely unstable!!</em> Sometimes it nearly works, other times it throws key errors. I think my problem may be in working with the data frame "in place," but I don't see another way to do it.</li>
<li><strong>Example Code:</strong>
Here is an example with a synthetic dataframe and a function that uses the same principles that I've tried. Note that I've tried different renditions with .loc and .iloc (commented out below).</li>
</ol>
<pre><code>#Create synthetic dataframe
import pandas as pd
import numpy as np
temp_series = pd.Series(range(25, 126, 1))
temp_noise = np.random.rand(len(temp_series))*3
df = pd.DataFrame({'temp':(temp_series+temp_noise), 'data':(np.random.rand(len(temp_series)))*400})
#calculate length of original and copy original because function works in place.
before_length = len(df)
df_dup = df
temp_intervals = [50, 70, 92.7]
window = 5
</code></pre>
<p>From here, run a function based on the dataframe (df), the temperature observations (temp_intervals) and the window size (window):</p>
<pre><code>def remove_window(df, intervals, window):
'''Loop through the temperature intervals to define a window of indices around given temperatures in the dataframe to drop. Drop the window of indices in place and reset the index prior to moving to the next interval.
'''
for temp in intervals[0:len(intervals)]:
#Find index where temperature first crosses the interval input
cent_index = max(df.index[df['temp']<=temp].tolist())
#Define window of indices to remove from the df
drop_indices = list(range(cent_index-window, cent_index+window))
#Use df.drop
df.drop(drop_indices, inplace=True)
df.reset_index(drop=True)
return df
</code></pre>
<p>So, is this a problem with the function I've defined or is there a problem with df.drop?</p>
<p>Thank you,
Brad</p>
|
<p>It can be tricky to repeatedly delete parts of the dataframe and keep track of what you're doing. A cleaner approach is to keep track of which rows you want to delete within the loop, but only delete them outside of the loop, all at once. This should also be faster.</p>
<pre class="lang-py prettyprint-override"><code>def remove_window(df, intervals, window):
# Create a Boolean array indicating which rows to keep
keep_row = np.repeat(True, len(df))
for temp in intervals[0:len(intervals)]:
# Find index where temperature first crosses the interval input
cent_index = max(df.index[df['temp']<=temp].tolist())
# Define window of indices to remove from the df
keep_row[range(cent_index - window, cent_index + window)] = False
# Delete all unwanted rows at once, outside the loop
df = df[keep_row]
df.reset_index(drop=True, inplace=True)
return df
</code></pre>
|
python|pandas|dataframe|drop
| 1
|
7,967
| 67,380,650
|
Stacking with CONV2D + LSTM
|
<p>I'm trying to use 2 methods, such as <em>Conv2D</em> and LSTM. I have already run ImageDataGenerator code.<a href="https://i.stack.imgur.com/F3RXS.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p>it shows that:
Found 1312 images belonging to 3 classes.
Found 876 images belonging to 3 classes.</p>
<p>My training and validation shapes are (150,150,1) and (150,150,1).
Here is the code for combining Conv2D + LSTM
<a href="https://i.stack.imgur.com/jqGFn.jpg" rel="nofollow noreferrer">enter image description here</a></p>
<p>After I run the <em>model_image</em> program, it shows:
"Input 0 of layer conv2d_76 is incompatible with the layer: : expected min_ndim=4, found ndim=3. Full shape received: (3000, 150, 1)"</p>
<p>I have already tried many ways to handle that error (such as Flatten and Reshape layers), but the result is the same. I don't know how to solve this. Please help me,
it would be an honour. Thank you</p>
|
<p>Try using the expand_dims() function from TF.</p>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/expand_dims" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/expand_dims</a></p>
|
tensorflow|image-processing|deep-learning|conv-neural-network|lstm
| 1
|
7,968
| 67,502,981
|
AttributeError: 'float' object has no attribute '_get_axis_number'
|
<p>I am trying to get the percentage change in value of today compared to yesterday, for every day in the dataframe. This is the line that throws the error-</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'new_cases':[368060.0,
357316.0,
382146.0,
412431.0,
414188.0,
401078.0,
403405.0,
366494.0,
329942.0]})
df['percent_increase_cases'] = df['new_cases'].apply(pd.Series.pct_change)
</code></pre>
<p>The formula I am using is</p>
<blockquote>
<p>percent_increase = (today's cases - yesterday's cases) / yesterday's
cases * 100</p>
</blockquote>
<p>It works if I use the code below but I wanted to make it cleaner.</p>
<pre><code>df['percent_increase_cases'] = (df['new_cases'].diff(1)) / df['new_cases'].shift(1) * 100
</code></pre>
|
<p>Another, simpler method of doing this would be</p>
<pre><code>df['percent_increase_cases'] = df[['new_cases']].apply(pd.Series.pct_change)
</code></pre>
<p>Notice the extra pair of <code>[]</code> when selecting columns.</p>
<p>Selecting a single column from a dataframe returns a series, which would run in to the problems described by <a href="https://stackoverflow.com/a/67504452/913098">@jjramsey</a>, but selecting a list of columns keeps the dataframe as a dataframe, not running into trouble.</p>
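<p>As a further note (not required for the fix above), <code>pct_change</code> can also be called directly on the Series, which avoids <code>apply</code> altogether; multiply by 100 to match your formula:</p>
<pre><code>df['percent_increase_cases'] = df['new_cases'].pct_change() * 100
</code></pre>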
|
python|pandas
| 1
|
7,969
| 60,324,226
|
Pandas combine multiple pivot tables
|
<p>Suppose I have <code>df1</code>, <code>piv1</code>, and, <code>piv2</code> below:</p>
<pre><code>df1 = pd.DataFrame({'R': [1, 2], 'S': ['s1', 's2'], 'G1': ['g1a', 'g1b'], 'G2': ['g2a', 'g2b']})
df1
R S G1 G2
0 1 s1 g1a g2a
1 2 s2 g1b g2b
piv1 = df1.pivot_table(index=(['S']), columns=(['G1']), aggfunc=({'R': 'mean'}))
piv1
R
G1 g1a g1b
S
s1 1.0 NaN
s2 NaN 2.0
piv2 = df1.pivot_table(index=(['S']), columns=(['G2']), aggfunc=({'R': 'mean'}))
piv2
R
G2 g2a g2b
S
s1 1.0 NaN
s2 NaN 2.0
</code></pre>
<p>Instead of <code>piv1</code> and <code>piv2</code>, I am trying to make <code>piv3</code> which would look like below. Any ideas? Ideally I'd like to create <code>piv3</code> directly from <code>df1</code> (i.e. not needing to create <code>piv1</code> and <code>piv2</code> and then combine them).</p>
<pre><code>piv3
S g1a g1b g2a g2b
s1 1.0 NaN 1.0 NaN
s2 NaN 2.0 NaN 2.0
</code></pre>
|
<p>IIUC</p>
<pre><code>s=df1.melt(['R','S']).groupby(['S','value']).R.mean().unstack()
Out[63]:
value g1a g1b g2a g2b
S
s1 1.0 NaN 1.0 NaN
s2 NaN 2.0 NaN 2.0
</code></pre>
|
python|pandas|dataframe|pivot-table|pandas-groupby
| 4
|
7,970
| 65,257,150
|
Select rows and sort them values and add result as new string column
|
<p>Hello, how can I simplify this code? I take 5 values per row and need them as a sorted string in a new column:</p>
<p><code>nba_data</code> is the data frame and 'lineup' is its last column</p>
<pre><code>nba_data['lineup'] = 0
for i in range(len(nba_data.index)):
single_lineup = []
df_single_lineup = nba_data.iloc[i, 59:64]
single_lineup = df_single_lineup.values.tolist()
single_lineup.sort()
nba_data.iloc[i, -1] = str(single_lineup[0]) + '_' + str(single_lineup[1]) + '_' + str(single_lineup[2]) + '_' + \
                           str(single_lineup[3]) + '_' + str(single_lineup[4])
</code></pre>
|
<p>Data:</p>
<pre><code>np.random.seed(4)
nba_data = pd.DataFrame(np.random.randint(1,1000, 25).reshape(5,5))
</code></pre>
<p>nba_data:</p>
<pre><code> 0 1 2 3 4
0 123 175 440 710 898
1 361 600 457 819 394
2 59 765 872 110 607
3 824 952 314 873 677
4 947 45 295 127 565
</code></pre>
<p>You used 5 columns (<code>59:64</code>); since I created my own data here, I used 2 columns (<code>2:4</code>).</p>
<p>The actual code is just one line</p>
<pre><code>nba_data['new_column'] = nba_data.apply(lambda x:"_".join(sorted(x.iloc[2:4].astype(str).to_list())) , axis=1)
</code></pre>
<p>nba_data:</p>
<pre><code> 0 1 2 3 4 new_column
0 123 175 440 710 898 440_710
1 361 600 457 819 394 457_819
2 59 765 872 110 607 110_872
3 824 952 314 873 677 314_873
4 947 45 295 127 565 127_295
</code></pre>
|
python|pandas|dataframe
| 0
|
7,971
| 65,203,203
|
Pandas: counting unique combinations from four columns that have NaN values
|
<p>I've been scratching my head with the this.. I have a dataframe with four columns</p>
<pre><code> a b c d
0 1 1 Nan NaN
1 2 1 1 NaN
2 1 1 Nan NaN
3 3 2 1 3
</code></pre>
<p>and I want the count of unique combinations from the columns to a new column</p>
<pre><code> a b c d count
0 1 1 Nan NaN 2
1 2 1 1 NaN 1
3 3 2 1 3 1
</code></pre>
<p>I've been using:</p>
<pre><code>df.groupby(['a', 'b', 'c', 'd']).size().reset_index().rename(columns={0:'count'})
</code></pre>
<p>but this only gives me rows where every column has a value that is not Nan:</p>
<pre><code> a b c d count
0 3 2 1 3 1
</code></pre>
<p>How can I get all combinations?</p>
|
<p>One way to work around this is to replace <code>NaN</code> values with the string <code>'NaN'</code>:</p>
<pre><code>(df.fillna('NaN')
.groupby(list(df.columns))['a'].size()
.reset_index(name='count')
)
</code></pre>
<p>Output:</p>
<pre><code> a b c d count
0 1 1 NaN NaN 2
1 2 1 1 NaN 1
2 3 2 1 3 1
</code></pre>
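<p>Alternatively, if your pandas version is 1.1 or newer (an assumption about your setup), <code>groupby</code> accepts <code>dropna=False</code>, so the fill step can be skipped:</p>
<pre><code>df.groupby(['a', 'b', 'c', 'd'], dropna=False).size().reset_index(name='count')
</code></pre>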
|
pandas
| 1
|
7,972
| 65,190,217
|
How to process TransformerEncoderLayer output in pytorch
|
<p>I am trying to use bio-bert sentence embeddings for text classification of longer pieces of text.</p>
<p>As it currently stands, I standardize the number of sentences in each piece of text (some sentences consist only of "[PAD]") and run each sentence through biobert to get sentence vectors as they do here:
<a href="https://pypi.org/project/biobert-embedding/" rel="nofollow noreferrer">https://pypi.org/project/biobert-embedding/</a></p>
<p>I then run those embeddings through a TransformerEncoder with 8 layers and 16 attention heads.</p>
<p>The TransformerEncoder outputs something of shape (batch_size, num_sentences, embedding_size).</p>
<p>I then try to decode this with a Linear layer and map it to my classes (of which there are 7) and softmax the output to get probabilities.</p>
<p>My loss function is simply nn.CrossEntropyLoss().</p>
<p>At first, I summed over dimensions 1 of the TransformerEncoder output to get something of size (batch_size, embedding_size). This invariable led to my network converging on always predicting one of the labels with absolute certainty. Usually the most common label in the dataset.</p>
<p>I then tried only taking the output for the last sentence of the TransformerEncoder output. i.e. TransformerEncoderOutput[:, -1, :].</p>
<p>This resulted in something similar.</p>
<p>I then tried running my Linear layer on each of the outputs of TransformerEncoder output to produce a tensor of size (batch_size, num_sentences, 7). I then sum over dim 1 (makes a tensor of size (batch_size, 7) and softmax as usual. The idea here is that every sentence gets to vote for the label after being informed about its place in the sequence.</p>
<p>This converged even more quickly to just predicting 1 for one of the labels and vanishingly small values for the others.</p>
<p>I feel like I am misunderstanding how to use the output of a pytorch Transformer somehow.
My learning rate is very low, 0.00001, and that helped delay the convergence but it converged eventually anyway.</p>
<p>What this suggests to me is that my network is incapable of figuring anything out about the text and is just learning to find the most common labels. I would guess that this is either a problem with my loss function or a problem with how I am using the Transformer.</p>
<p>Is there a glaring flaw in the architecture that I have laid out?</p>
|
<p>So the input and output shape of the transformer-encoder is <code>(batch-size, sequence-length, embedding-size)</code>.
There are three possibilities to process the output of the transformer encoder (when not using the decoder).</p>
<ol>
<li>you take the mean of the <code>sequence-length</code> dimension:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_size, embedding_size)
x = x.mean(1)
</code></pre>
<ol start="2">
<li>sum it up as you said:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_size, embedding_size)
x = x.sum(1)
</code></pre>
<ol start="3">
<li>using a recurrent neural network to combine the information along the <code>sequence-length</code> dimension:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>x = self.transformer_encoder(x)
x = x.reshape(batch_size, seq_len, embedding_size)
# init hidden state
hidden = torch.zeros(layers, batch_size, embedding_size).to(device=device)
x, hidden = self.rnn(x, hidden)
x = x.reshape(batch_size, seq_size, embedding_size)
# take last output
x = x[:, -1]
</code></pre>
<p>Taking the last element of the Transformer output isn't a good idea, I think, because then you only keep 1/seq-len of the information. But with an RNN, the last output still carries information from every other position.</p>
<p>I'd say that taking the mean is the best idea.</p>
<p>As for the learning rate: for me it always worked much better when I used warmup training. If you don't know what that is: you start at a low learning rate, for example 0.00001, and increase it until you reach some target lr, for example 0.002. From then on you just decay the lr as usual.</p>
|
pytorch|bert-language-model|transformer-model
| 1
|
7,973
| 65,317,427
|
how to map two dataframes on condition while having different rows
|
<p>I have two dataframes that need to be mapped (or joined?) based on some condition. These are the dataframes:</p>
<p><code>df_1</code></p>
<pre><code> img_names img_array
0 1_rel 253
1 1_rel_right 255
2 1_rel_top 250
3 4_rel 180
4 4_rel_right 182
5 4_rel_top 189
6 7_rel 217
7 7_rel_right 183
8 7_rel_top 196
</code></pre>
<p><code>df_2</code></p>
<pre><code> List_No time
0 1 38
1 4 23
2 7 32
</code></pre>
<p>After mapping I would like to get the following dataframe:</p>
<p><code>df_3</code></p>
<pre><code> img_names img_array List_No time
0 1_rel 253 1 38
1 1_rel_right 255 1 38
2 1_rel_top 250 1 38
3 4_rel 180 4 23
4 4_rel_right 182 4 23
5 4_rel_top 189 4 23
6 7_rel 217 7 32
7 7_rel_right 183 7 32
8 7_rel_top 196 7 32
</code></pre>
<p>Basically, each row of <code>df_2</code> is populated 3 times to match the number of rows in <code>df_1</code>, and the mapping (if we can say so) is done by the split string in each row of <code>df_1</code>'s <code>img_names</code> column. The row elements in <code>img_names</code> may have different names, but each of them always starts with some number (<code>1,4,7</code> in this case) and an underscore, etc. So I need to split off the corresponding number in each row and map it with the row elements of <code>List_No</code>.</p>
<p>I hope the example above is clear.</p>
<p>Thank you.</p>
|
<p>Looks like you can just extract the digit parts and merge:</p>
<pre><code>df_1['List_No'] = df_1['img_names'].str.split('_').str[0].astype(int)
df_3 = df_1.merge(df_2, on='List_No')
</code></pre>
<p>Output:</p>
<pre><code> img_names img_array List_No time
0 1_rel 253 1 38
1 1_rel_right 255 1 38
2 1_rel_top 250 1 38
3 4_rel 180 4 23
4 4_rel_right 182 4 23
5 4_rel_top 189 4 23
6 7_rel 217 7 32
7 7_rel_right 183 7 32
8 7_rel_top 196 7 32
</code></pre>
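<p>If the prefix is always digits, <code>str.extract</code> is an equivalent way to build the merge key (just a variation on the first line above):</p>
<pre><code>df_1['List_No'] = df_1['img_names'].str.extract(r'^(\d+)', expand=False).astype(int)
</code></pre>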
|
python-3.x|pandas|dataframe|mapping
| 3
|
7,974
| 65,097,963
|
Efficiently update rows of a postgres table from another table in another database based on a condition in a common column
|
<p>I have two <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a> <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">DataFrames</a>:</p>
<p><code>df1</code> from database A with connection parameters <code>{"host":"hostname_a","port": "5432", "dbname":"database_a", "user": "user_a", "password": "secret_a"}</code>. The column <code>key</code> is the primary key.</p>
<p><code>df1</code>:</p>
<pre><code>| | key | create_date | update_date |
|---:|------:|:-------------|:--------------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 |
| 1 | 57248 | | 2018-01-21 |
| 2 | 57249 | 1992-12-22 | 2016-01-31 |
| 3 | 57250 | | 2015-01-21 |
| 4 | 57251 | 1991-12-23 | 2015-01-21 |
| 5 | 57262 | | 2015-01-21 |
| 6 | 57263 | | 2014-01-21 |
</code></pre>
<p><code>df2</code> from database B with connection parameters <code>{"host": "hostname_b","port": "5433", "dbname":"database_b", "user": "user_b", "password": "secret_b"}</code>. The column <code>id</code> is the primary key (these values are originally the same than the one in the column <code>key</code> in <code>df1</code>; it's only a renaming of the primary key column of <code>df1</code>).</p>
<p><code>df2</code>:</p>
<pre><code>| | id | create_date | update_date | user |
|---:|------:|:-------------|:--------------|:------|
| 0 | 57247 | 1976-07-29 | 2018-01-21 | |
| 1 | 57248 | | 2018-01-21 | |
| 2 | 57249 | 1992-12-24 | 2020-10-11 | klm |
| 3 | 57250 | 2001-07-14 | 2019-21-11 | ptl |
| 4 | 57251 | 1991-12-23 | 2015-01-21 | |
| 5 | 57262 | | 2015-01-21 | |
| 6 | 57263 | | 2014-01-21 | |
</code></pre>
<p>Notice that the row[2] and row[3] in <code>df2</code> have more recent <code>update_date</code> values (<code>2020-10-11</code> and <code>2019-21-11</code> respectively) than their counterpart in <code>df1</code> (where <code>id</code> = <code>key</code>) because their <code>creation_date</code> have been modified (by the given users).</p>
<p>I would like to update rows (i.e. in concrete terms; <code>create_date</code> and <code>update_date</code> values) of <code>df1</code> where <code>update_date</code> in <code>df2</code> is more recent than its original value in <code>df1</code> (for the same primary keys).</p>
<p>This is how I'm tackling this for the moment, using <a href="https://www.sqlalchemy.org/" rel="nofollow noreferrer">sqlalchemy</a> and <a href="https://pypi.org/project/psycopg2/" rel="nofollow noreferrer">psycopg2</a> + the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer"><code>.to_sql()</code></a> method of <a href="https://pandas.pydata.org/" rel="nofollow noreferrer">pandas</a>' <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html" rel="nofollow noreferrer">DataFrame</a>:</p>
<pre><code>import psycopg2
from sqlalchemy import create_engine
connector = psycopg2.connect(**database_parameters_dictionary)
engine = create_engine('postgresql+psycopg2://', creator=connector)
df1.update(df2) # 1) maybe there is something better to do here?
with engine.connect() as connection:
df1.to_sql(
name="database_table_name",
con=connection,
schema="public",
if_exists="replace", # 2) maybe there is also something better to do here?
index=True
)
</code></pre>
<p>The problem I have is that, according to the documentation, the <code>if_exists</code> argument can only do three things:</p>
<blockquote>
<p>if_exists{‘fail’, ‘replace’, ‘append’}, default ‘fail’</p>
</blockquote>
<p>Therefore, to update these two rows, I have to;<br />
1) use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.update.html" rel="nofollow noreferrer"><code>.update()</code></a> method on <code>df1</code> using <code>df2</code> as an argument, together with<br />
2) replacing the whole table inside the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer"><code>.to_sql()</code></a> method, which means "drop+recreate".<br />
As the tables are really large (more than 500'000 entries), I have the feeling that this will need a lot of unnecessary work!</p>
<p>How could I efficiently update only those two newly updated rows? Do I have to generate some custom SQL queries to compares the dates for each rows and only take the ones that have really changed? But here again, I have the intuition that, looping through all rows to compare the update dates will take "a lot" of time. How is the more efficient way to do that? (It would have been easier in pure SQL if the two tables were on the same host/database but it's unfortunately not the case).</p>
|
<p>Pandas can't do partial updates of a table, no. There is a <a href="https://github.com/pandas-dev/pandas/issues/14553" rel="nofollow noreferrer">longstanding open bug</a> for supporting sub-whole-table-granularity updates in <code>.to_sql()</code>, but you can see from the discussion there that it's a very complex feature to support in the general case.</p>
<p>However, limiting it to just your situation, I think there's a reasonable approach you could take.</p>
<p>Instead of using <code>df1.update(df2)</code>, put together an expression that yields only the changed records with their new values (I don't use pandas often so I don't know this offhand); then iterate over the resulting dataframe and build the UPDATE statements yourself (or with the SQLAlchemy expression layer, if you're using that). Then, use the connection to DB A to issue all the UPDATEs as one transaction. With an indexed PK, it should be as fast as this would ever be expected to be.</p>
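<p>A rough sketch of that idea, using the column and table names from your example (treat it as an assumption-laden outline rather than tested code, and make sure the date columns are actually comparable, e.g. parsed as datetimes):</p>
<pre><code>from psycopg2.extras import execute_batch

# Keep only the rows whose update_date in df2 is newer than in df1
merged = df1.merge(df2, left_on='key', right_on='id', suffixes=('_old', '_new'))
changed = merged[merged['update_date_new'] > merged['update_date_old']]

sql = """UPDATE database_table_name
            SET create_date = %s, update_date = %s
          WHERE key = %s"""

with connector, connector.cursor() as cur:   # psycopg2 connection to database A
    execute_batch(cur, sql,
                  [(row.create_date_new, row.update_date_new, row.key)
                   for row in changed.itertuples()])
</code></pre>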
<p>BTW, I don't think df1.update(df2) is exactly correct - from my reading, that would update all rows with any differing fields, not just when updated_date > prev updated_date. But it's a moot point if updated_date in df2 is only ever more recent than those in df1.</p>
|
python-3.x|pandas|postgresql|dataframe|sqlalchemy
| 0
|
7,975
| 65,092,912
|
Python Pandas Filter but results are inversed
|
<p>Hi I've built a filter where I expect the results to only show 'New'. However the result shows everything but new?</p>
<pre><code>filt = (export['jiraStatus'] == 'New')
print(export.loc[~filt])
</code></pre>
<p>Thoughts?</p>
<p>TIA</p>
<p>Neil</p>
|
<p>The <code>~</code> negates/inverts the filter. Just use <code>.loc[filt]</code> instead of <code>.loc[~filt]</code> to get the un-negated result.</p>
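<p>In other words, with your own snippet:</p>
<pre><code>filt = (export['jiraStatus'] == 'New')
print(export.loc[filt])    # only the rows where jiraStatus is 'New'
</code></pre>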
|
python|pandas
| 0
|
7,976
| 65,261,873
|
find geometry distance between rows x y z in pandas
|
<p>I have such dataframe</p>
<pre><code> x y z
0 1202.3235 541.05555 2.835000e+01
1 1202.3235 541.05555 2.835000e+01
</code></pre>
<p>It is necessary to find the rows which have a very small distance from other rows.
Distance should be calculated by analytical geometry rules:</p>
<pre><code>np.sqrt((x1-x0)*(x1-x0)+(y1-y0)*(y1-y0)+(z1-z0)*(z1-z0))
</code></pre>
|
<p>Grouping similar data points is normally done using cluster analysis.</p>
<p>You will find suitable methods in Scipy (<a href="https://docs.scipy.org/doc/scipy/reference/cluster.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/cluster.html</a>) or scikit-learn (<a href="https://scikit-learn.org/stable/modules/clustering.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/clustering.html</a>) which can be applied directly on pandas dataframes.</p>
<p>For example, this code:</p>
<pre><code>from scipy.cluster.hierarchy import linkage, fcluster
tree = linkage(my_data, method='ward', metric='euclidean')
clusters = fcluster(tree, t=some_threshold, criterion='distance')
</code></pre>
<p>or this one:</p>
<pre><code>from sklearn.cluster import AgglomerativeClustering
clusters = AgglomerativeClustering(n_clusters=None, distance_threshold=some_threshold).fit(my_data).labels_
</code></pre>
<p>will group together the points below a threshold distance and the <code>clusters</code> variable will contain the cluster number assigned to each data point.</p>
|
pandas|geometry
| 0
|
7,977
| 65,474,427
|
How can pandas str.extract method returns more match from my list?
|
<p>I have rows like this in a pandas series object:</p>
<pre><code>['Blazic M.', 'Boli F.', 'Botka E.', 'Civic E.', 'Dibusz D. (K)', 'Kharatin I.', 'N. Tokmac', 'Otigba K.', 'Sigér D.', 'Vécsei B.', 'Zubkov O.']`
</code></pre>
<p>it is a <code>&lt;class 'str'&gt;</code></p>
<p>With .str.extract('[\w,]') I want to match only the alphabetic characters and commas, but I only get the first letter from each row. Where did I make the mistake?</p>
<p>Here is my full code:</p>
<pre><code>import pandas as pd

df = pd.read_csv('output.csv', encoding='latin', names=['Csapat','Játékosok'])
jatekosok = df['Játékosok'].str.extract('[\w,]')
print(jatekosok)
</code></pre>
<p>here is the original series which i work with before the extraction:</p>
<pre><code>0 ['Blazic M.', 'Boli F.', 'Botka E.', 'Civic E....
1 ['Berecz Zs.', 'Cseri T.', 'Farkas D.', 'Jurin...
2 ['Deutsch L.', 'Gyurcsó Á.', 'Hadzhiev K.', 'K...
3 ['Batik B.', 'Gazdag D.', 'George M.', 'Hidi P...
4 ['Adeniji T.', 'Bényei B.', 'Ferenczi J.', 'Ki...
...
391 ['Böde D.', 'Fejes A.', 'Fejõs Á.', 'Hahn J.',...
392 ['Cseri T.', 'Farkas D.', 'Karnitskiy A.', 'Ka...
393 ['Babati B.', 'Barczi D.', 'Bedi B.', 'Demjén ...
394 ['B. Pauljevic', 'Burekovic D.', 'Koszta M.', ...
395 ['Hadzhiev K.', 'Hegedûs L. (K)', 'Henty E.', ...
</code></pre>
|
<p>You can use <code>findall</code>:</p>
<pre><code>>> pd.Series(['Blazic M., 123 Boli F.']).str.findall('([a-zA-Z,])')
0 [B, l, a, z, i, c, M, ,, B, o, l, i, F]
dtype: object
</code></pre>
|
python|regex|pandas
| 1
|
7,978
| 64,134,929
|
Fishers Exact Test from Pandas Dataframe
|
<p>I'm trying to work out the best way to create a p-value using Fisher's Exact test from four columns in a dataframe. I have already extracted the four parts of a contingency table, with 'a' being top-left, 'b' being top-right, 'c' being bottom-left and 'd' being bottom-right. I have started including additional calculated columns via simple pandas calculations, but these aren't necessary if there's an easier way to just use the 4 initial columns. I have over 1 million rows when including an additional set (x.type = high), so want to use an efficient method. So far this is my code:</p>
<pre><code>import pandas as pd
import glob
import math
path = r'directory_path'
all_files = glob.glob(path + "/*.csv")
li = []
for filename in all_files:
df = pd.read_csv(filename, index_col=None, header=0)
li.append(df)
frame = pd.concat(li, axis=0, ignore_index=True)
frame['a+b'] = frame['a'] + frame['b']
frame['c+d'] = frame['c'] + frame['d']
frame['a+c'] = frame['a'] + frame['c']
frame['b+d'] = frame['b'] + frame['d']
</code></pre>
<p>As an example of this data, 'frame' currently shows:</p>
<pre><code> ID(n) a b c d i x.name x.type a+b c+d a+c b+d
0 1258065 5 28 31 1690 1754 Albumin low 33 1721 36 1718
1 1132105 4 19 32 1699 1754 Albumin low 23 1731 36 1718
2 898621 4 30 32 1688 1754 Albumin low 34 1720 36 1718
3 573158 4 30 32 1688 1754 Albumin low 34 1720 36 1718
4 572975 4 23 32 1695 1754 Albumin low 27 1727 36 1718
... ... ... ... ... ... ... ... ... ... ... ... ...
666646 12435 1 0 27 1726 1754 WHR low 1 1753 28 1726
666647 15119 1 0 27 1726 1754 WHR low 1 1753 28 1726
666648 17053 1 2 27 1724 1754 WHR low 3 1751 28 1726
666649 24765 1 3 27 1723 1754 WHR low 4 1750 28 1726
666650 8733 1 1 27 1725 1754 WHR low 2 1752 28 1726
</code></pre>
<p>Is the best way to convert these to a numpy array and process it through iteration, or keep it in pandas? I assume that I can't use math functions within a dataframe (I've tried <code>math.comb()</code>, which didn't work in a dataframe). I've also tried using <a href="https://github.com/biocore-ntnu/pyranges" rel="nofollow noreferrer">pyranges</a> for its fisher method but it seems it doesn't work with my environment (python 3.8).</p>
<p>Any help would be much appreciated!</p>
|
<p>Following the <a href="https://stackoverflow.com/questions/34947578/how-to-vectorize-fishers-exact-test">answer here</a> which came from the author of pyranges (i think), let's say you data is something like:</p>
<pre><code>import pandas as pd
import scipy.stats as stats
import numpy as np
np.random.seed(111)
df = pd.DataFrame(np.random.randint(1,100,(1000000,4)))
df.columns=['a','b','c','d']
df['ID'] = range(1000000)
df.head()
a b c d ID
0 85 85 85 87 0
1 20 42 67 83 1
2 41 72 58 8 2
3 13 11 66 89 3
4 29 15 35 22 4
</code></pre>
<p>You convert it into a numpy array and did it like in the post:</p>
<pre><code>c = df[['a','b','c','d']].to_numpy(dtype='uint64')
from fisher import pvalue_npy
_, _, twosided = pvalue_npy(c[:, 0], c[:, 1], c[:, 2], c[:, 3])
df['odds'] = (c[:, 0] * c[:, 3]) / (c[:, 1] * c[:, 2])
df['pvalue'] = twosided
</code></pre>
<p>Or you can fit it directly:</p>
<pre><code>_, _, twosided = pvalue_npy(df['a'].to_numpy(np.uint), df['b'].to_numpy(np.uint),
df['c'].to_numpy(np.uint), df['d'].to_numpy(np.uint))
df['odds'] = (df['a'] * df['d']) / (df['b'] * df['c'])
df['pvalue'] = twosided
</code></pre>
|
python|python-3.x|pandas|statistics|combinations
| 1
|
7,979
| 46,730,272
|
Pandas: Difference of of two datetime64 objects yields NaT rather than correct timedelta value
|
<p>This question "gets asked a lot" - but after looking carefully at the other answers I haven't found a solution that works in my case. It's a shame this is still such a sticking point.</p>
<p>I have a <code>pandas</code> dataframe with a column <code>datetime</code> and I simply want to calculate the time range covered by the data, in seconds (say).</p>
<pre><code>from datetime import datetime
# You can create fake datetime entries any way you like, e.g.
df = pd.DataFrame({'datetime': pd.date_range('10/1/2001 10:00:00', \
periods=3, freq='10H'),'B':[4,5,6]})
# (a) This yields NaT:
timespan_a=df['datetime'][-1:]-df['datetime'][:1]
print timespan_a
# 0 NaT
# 2 NaT
# Name: datetime, dtype: timedelta64[ns]
# (b) This does work - but why?
timespan_b=df['datetime'][-1:].values.astype("timedelta64")-\
df['datetime'][:1].values.astype("timedelta64")
print timespan_b
# [72000000000000]
</code></pre>
<ol>
<li><p>Why doesn't (a) work?</p></li>
<li><p>Why is (b) required rather? (it also gives a one-element <code>numpy</code> array rather than a <code>timedelta</code> object)</p></li>
</ol>
<p>My pandas is at version <code>0.20.3</code>, which rules out an earlier known bug.</p>
<p>Is this a dynamic-range issue?</p>
|
<p>The problem is different indexes, so the one-item Series cannot align and you get <code>NaT</code>.</p>
<p>Solution is convert first or second values to numpy array by <code>values</code>:</p>
<pre><code>timespan_a = df['datetime'][-1:]-df['datetime'][:1].values
print (timespan_a)
2 20:00:00
Name: datetime, dtype: timedelta64[ns]
</code></pre>
<p>Or set both index values to same:</p>
<pre><code>a = df['datetime'][-1:]
b = df['datetime'][:1]
print (a)
2 2001-10-02 06:00:00
Name: datetime, dtype: datetime64[ns]
a.index = b.index
print (a)
0 2001-10-02 06:00:00
Name: datetime, dtype: datetime64[ns]
print (b)
0 2001-10-01 10:00:00
Name: datetime, dtype: datetime64[ns]
timespan_a = a - b
print (timespan_a)
0 20:00:00
Name: datetime, dtype: timedelta64[ns]
</code></pre>
<p>If want working with scalars:</p>
<pre><code>a = df.loc[df.index[-1], 'datetime']
b = df.loc[0, 'datetime']
print (a)
2001-10-02 06:00:00
print (b)
2001-10-01 10:00:00
timespan_a = a - b
print (timespan_a)
0 days 20:00:00
</code></pre>
<p>Another solution, thank you <a href="https://stackoverflow.com/questions/46730272/pandas-difference-of-of-two-datetime64-objects-yields-nat-rather-than-correct-t/46730329?noredirect=1#comment80407024_46730329">Anton vBR</a>:</p>
<pre><code>timespan_a = df.get_value(len(df)-1,'datetime')- df.get_value(0,'datetime')
print (timespan_a)
0 days 20:00:00
</code></pre>
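<p>Note that <code>get_value</code> has since been deprecated and removed in newer pandas releases; there the scalar version shown earlier does the same job:</p>
<pre><code>timespan_a = df.loc[df.index[-1], 'datetime'] - df.loc[0, 'datetime']
</code></pre>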
|
python|pandas|datetime
| 3
|
7,980
| 63,069,276
|
Pandas to_sql ignoring dtype when column contains null values
|
<p>First SO Question. I hope this is descriptive enough.</p>
<p>Pandas 0.25, Oracle 11g</p>
<p>I have a dataframe read from a csv. It contains a mix of numeric, string and date data.</p>
<p>I force data types within the dataframe using <code>.astype(str)</code>, <code>.astype(int)</code> and <code>.to_datetime</code>.</p>
<p>I then create a dtype dictionary to select the data types I want.</p>
<p>When there are some nulls in the numeric columns <code>types.NUMBER</code> and <code>types.INTEGER</code> creates a <code>FLOAT</code> in the Oracle table. It should be <code>NUMBER(38,0)</code>, especially if I use <code>types.INTEGER</code>. The key column that is defined as <code>types.NUMBER</code> and contains all non-null integers is created as a <code>NUMBER(38,0)</code> as expected.</p>
<p>When there are columns with all nulls, but have had <code>.astype(str)</code> applied and dtype of <code>types.VARCHAR(300)</code> the columns are also created as <code>FLOAT</code> in Oracle.</p>
<p>I need to use <code>if_exists='append'</code> in to_sql as the table collects history, so I can't wait for the VARCHAR columns to recieve data. Though I have been using <code>if_exists='replace</code> during testing to ensure the table is dropped and recreated.</p>
<p>Is there a way to resolve these issues, caused by nulls in the data, resulting in the datatype selection being incorrect? I shouldn't need to use a blank (ie '') in the strings and 0 for integers, I need nulls to come through as nulls.</p>
<p>Nulls in date columns, even when the entire column is null values works, and creates a <code>DATE</code> in Oracle as requested.</p>
<p>EDIT: String to VARCHAR Issue was actually an issue with a trapped and incorrectly handled exception.</p>
<p>Numbers were still an issue that had to be handled separately I will add an answer with the solution.</p>
|
<p>The issue with the numeric fields with some null and some non-null values was due to Pandas using NaN for null and numpy treating NaN as float.</p>
<p><code>.astype(int)</code> doesn't handle NaN's and actually raises an exception due to the NaNs (which a try block had caught and handled incorrectly in my case).</p>
<p>The solution is: <code>df['pref1'] = df['pref1'].astype('Int64')</code></p>
<p>The 'Int64' needs the capitalised 'I'. <code>.astype('int64')</code> also doesn't work.</p>
<p><a href="https://stackoverflow.com/a/54194908/7140431">This answer was helpful</a></p>
|
python|pandas|oracle|dtype|pandas-to-sql
| 0
|
7,981
| 63,189,077
|
SimpleITK reading a slice of an image
|
<p>Good day to all.</p>
<p>I happen to have a very large .mha file on my HDD (9.7 Gb) which is a 3D image of a brain. I know this image's shape and for the needs of a report, I would like to extract a slice of it, in order to get a 2D image that I can save as a .png.</p>
<p>The problem is that my 16 Gb RAM computer does not allow me to load the complete image and therefore I would like to know if there is a way to extract a 2D slice of this image without loading the whole image in RAM. For smaller images, I used <code>sitk.GetArrayFromImage(sitk.ReadImage(fileName))</code> and fed it to <code>pyplot.imshow()</code> but this implies to load the whole image which I want to avoid.</p>
<p>I thought for a time to use <code>numpy.memmap</code> in order to create a temporary <code>.npy</code> file in which I could store the whole array and then get a slice of it but I am having trouble allocating the array from image to it using <code>sitk.GetArrayFromImage(sitk.ReadImage(fileName))</code>.</p>
<p>Does anyone has an idea on how to do it?</p>
<p>Thanks in advance !</p>
|
<p>With SimpleITK you can read the header of an image, to get it's size information (and other meta-data). Then you can tell the ImageFileReader to only read in a sub-section of the image.</p>
<p>The following example demonstrates how to do it:</p>
<p><a href="https://simpleitk.readthedocs.io/en/master/link_AdvancedImageReading_docs.html" rel="nofollow noreferrer">https://simpleitk.readthedocs.io/en/master/link_AdvancedImageReading_docs.html</a></p>
<p>The key is calling ImageFileReader's ReadImageInformation method first. That gets all the header info. Then calling SetExtractIndex and SetExtractSize to specify the sub-region to load before calling Execute to read the image.</p>
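<p>A minimal sketch of that workflow (the filename and slice index below are placeholders):</p>
<pre><code>import SimpleITK as sitk

reader = sitk.ImageFileReader()
reader.SetFileName("brain.mha")
reader.ReadImageInformation()            # reads the header only, image not loaded yet
size = reader.GetSize()                  # e.g. (x, y, z)

z = size[2] // 2                         # middle axial slice, as an example
reader.SetExtractSize([size[0], size[1], 1])
reader.SetExtractIndex([0, 0, z])
slice_image = reader.Execute()           # loads only this sub-region

slice_array = sitk.GetArrayFromImage(slice_image)[0]   # 2D array, ready for pyplot.imshow
</code></pre>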
|
python|numpy|memory-management|simpleitk|numpy-memmap
| 1
|
7,982
| 63,234,851
|
How to extract data from a dictionary?
|
<p>I am trying to get the asset/free/locked fields along with the corresponding data to populate into columns. Currently, I can only get the balances column where these fields fall.</p>
<p>Here is the data format. I don't need anything before 'balances'. Thinking if I could remove this part of the string maybe then the columns would be created? Or if there is a another way to do this?</p>
<pre><code>'{'makerCommission': 10, 'takerCommission': 10, 'buyerCommission': 0, 'sellerCommission': 0, 'canTrade': True, 'canWithdraw': True, 'canDeposit': True, 'updateTime': 1595872633345, 'accountType': 'MARGIN', 'balances': [{'asset': 'BTC', 'free': '0.00000000', 'locked': '0.00000000'}, {'asset': 'LTC', 'free': '0.00000000', 'locked': '0.00000000'}, {'asset': 'ETH', 'free': '0.00000000', 'locked': '0.00000000'}...'
</code></pre>
<p>The code so far to get the balances is:</p>
<pre><code>account = client.get_account()
assets = pd.DataFrame(account, columns = ['balances'])
</code></pre>
<p>Any help appreciated. Got me stumped.</p>
|
<ul>
<li>If <code>account</code> is a <code>string</code>, it must be converted to a <code>dict</code> with <a href="https://docs.python.org/3/library/ast.html#ast.literal_eval" rel="nofollow noreferrer"><code>ast.literal_eval</code></a>.</li>
<li>With <code>account</code> as a <code>dict</code>, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.json_normalize.html" rel="nofollow noreferrer"><code>pandas.json_normalize</code></a> to extract the nested <code>keys</code> and <code>values</code></li>
</ul>
<pre class="lang-py prettyprint-override"><code>from ast import literal_eval
import pandas as pd
# if account is a string
assets = pd.json_normalize(literal_eval(account), 'balances')
# if account is a dict
assets = pd.json_normalize(account, 'balances')
# display(assets)
asset free locked
0 BTC 0.00000000 0.00000000
1 LTC 0.00000000 0.00000000
2 ETH 0.00000000 0.00000000
</code></pre>
<h2>Sample Data as a <code>str</code></h2>
<pre class="lang-py prettyprint-override"><code>data = "{'makerCommission': 10, 'takerCommission': 10, 'buyerCommission': 0, 'sellerCommission': 0, 'canTrade': True, 'canWithdraw': True, 'canDeposit': True, 'updateTime': 1595872633345, 'accountType': 'MARGIN', 'balances': [{'asset': 'BTC', 'free': '0.00000000', 'locked': '0.00000000'}, {'asset': 'LTC', 'free': '0.00000000', 'locked': '0.00000000'}, {'asset': 'ETH', 'free': '0.00000000', 'locked': '0.00000000'}]}"
</code></pre>
|
python|pandas|dataframe|dictionary|json-normalize
| 2
|
7,983
| 62,960,690
|
Difference in rounding - float64 vs. float32
|
<p>This scenario is a simplification of an ETL scenario involving multiple sets of data pulled from MySQL tables. I have a merged dataframe where one price column is type <code>float64</code> and the other is type <code>object</code>.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'price1': [0.066055],
'price2': ['0.066055'],
})
>>> df.dtypes
price1 float64
price2 object
dtype: object
</code></pre>
<p>When these two columns are converted to <code>float64</code>, the column <code>price1</code> is rounded incorrectly when rounded to 5 digits.</p>
<pre><code>float64_df = df[price_cols].apply(lambda x: pd.to_numeric(x))
>>> float64_df.dtypes
price1 float64
price2 float64
dtype: object
>>> float64_df[price_cols].apply(lambda x: x.round(5))
price1 price2
0 0.06606 0.06605
</code></pre>
<p>However, when the columns are converted to <code>float32</code> using <code>downcast='float'</code>, the rounding works as expected.</p>
<pre><code>float32_df = df[price_cols].apply(lambda x: pd.to_numeric(x, downcast='float'))
>>> float32_df.dtypes
price1 float32
price2 float32
dtype: object
>>> float32_df[price_cols].apply(lambda x: x.round(5))
price1 price2
0 0.06606 0.06606
</code></pre>
<p>Any ideas why the rounding doesn't work properly when both columns are of type <code>float64</code>?</p>
|
<p>Printing the floats with higher precision shows that <code>pd.to_numeric</code> converted <code>'.066055'</code> to <code>0.06605499999999998872</code>.</p>
<pre class="lang-py prettyprint-override"><code>with pd.option_context('display.float_format', '{:0.20f}'.format):
print(float64_df)
</code></pre>
<p>Output:</p>
<pre><code> price1 price2
0 0.06605500000000000260 0.06605499999999998872
</code></pre>
|
python|pandas
| 2
|
7,984
| 63,055,152
|
np.genfromtxt returns string with 'b'
|
<p>I am learning about different functions of NumPy, and I have a dummy dataset <a href="http://eforexcel.com/wp/downloads-18-sample-csv-files-data-sets-for-testing-sales/" rel="nofollow noreferrer">here</a> named 100-Sales-Records.</p>
<p>Now I want to read it using <code>np.genfromtxt</code>. My code to read it is</p>
<pre><code>df3 = np.genfromtxt('100 Sales Records.csv', delimiter=',',names=True, dtype=None)
</code></pre>
<p>Because it is a CSV file and has strings as well as floats.
Now the output of</p>
<p><code>pd.DataFrame(df3).head()</code>
is
<a href="https://i.stack.imgur.com/zuCEQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zuCEQ.png" alt="enter image description here" /></a></p>
<p>As you can see, all strings have <code>b</code> in front of them. What is this <code>b</code> and how do I remove it?</p>
|
<p>The answer is that the <code>b</code> before the strings means they are <code>bytes</code> objects rather than decoded (unicode) strings.</p>
<p>To remove it, there is a parameter in <code>genfromtxt</code> that is <code>encoding</code>, set it to <code>utf-8</code></p>
<p>i.e</p>
<pre class="lang-py prettyprint-override"><code>df3 = np.genfromtxt('100 Sales Records.csv', delimiter=',',names=True, dtype=None, encoding='utf-8')
</code></pre>
<p>This will give you the desired results.</p>
|
python|pandas|numpy|genfromtxt
| 3
|
7,985
| 67,675,174
|
Check if at least one column contains a string in pandas
|
<p>I would like to check whether several columns contain a string, and generate a Boolean column with the result. This is easy to do for a single column, but generates an Attribute Error (<code>AttributeError: 'DataFrame' object has no attribute 'str'</code>) when this method is applied to multiple columns.</p>
<p>Example:</p>
<pre><code>import pandas as pd
c1=[x+'x' for x in 'abcabc']
c2=['Y'+x+'m' for x in 'CABABC']
cols=['A','B']
df=pd.DataFrame(list(zip(c1,c2)),columns=cols)
df
</code></pre>
<p>Returns:</p>
<pre><code> A B
0 ax YCm
1 bx YAm
2 cx YBm
3 ax YAm
4 bx YBm
5 cx YCm
</code></pre>
<p>The following code works when applied to a single column, but does not work when applied to several columns. I'd like something that fits in here and gives the desired result:</p>
<pre><code>df['C']=df[cols].str.contains('c',case=False)
</code></pre>
<p>Thus the desired output is:</p>
<pre><code> A B C
0 ax YCm True
1 bx YAm False
2 cx YBm True
3 ax YAm False
4 bx YBm False
5 cx YCm True
</code></pre>
<p>Edit: I updated my example to reflect the desire to actually search for whether the column "contains" a value, rather than "is equivalent to" that value.</p>
<p>Edit: in terms of timings, here's the benchmark I'd like to be able to match or beat, without creating the new columns (using a <code>*1000</code> to the columns in my toy example):</p>
<pre><code>newcols=['temp_'+x for x in cols]
for col in cols:
df['temp_'+col]=df[col].str.contains('c',case=False)
df['C']=df[newcols].any(axis=1)
df=df[['A','B','C']]
</code></pre>
|
<p>An option via <code>applymap</code> :</p>
<pre><code>df['C'] = df.applymap(lambda x: 'c' in str(x).lower()).any(1)
</code></pre>
<p>Via <code>stack/unstack</code>:</p>
<pre><code>df['C'] = df.stack().str.contains('c', case=False).unstack().any(1)
df['C'] = df.stack().str.lower().str.contains('c').unstack().any(1)
</code></pre>
<p>OUTPUT:</p>
<pre><code> A B C
0 ax YCm True
1 bx YAm False
2 cx YBm True
3 ax YAm False
4 bx YBm False
5 cx YCm True
</code></pre>
|
python|pandas
| 4
|
7,986
| 68,003,064
|
Pandas - optimize percentile calculation
|
<p>I have a dataset like this:</p>
<pre><code>id type score
a1 ball 15
a2 ball 12
a1 pencil 10
a3 ball 8
a2 pencil 6
</code></pre>
<p>I want to find out the rank for each type for each id. As I later would translate the rank into percentiles, I prefer using <code>rank</code>.</p>
<p>the output should be something like this:</p>
<pre><code>id type score rank
a1 ball 15 1
a2 ball 12 2
a1 pencil 10 1
a3 ball 8 3
a2 pencil 6 2
</code></pre>
<p>So far, what I did, was getting unique set of <code>type</code> and iterating over it with this:</p>
<pre><code>test_data['percentile_from_all'] = 0
for i in unique_type_list:
loc_i = test_data['type']==i
percentiles = test_data.loc[loc_i,['score']].rank(pct = True)*100
test_data.loc[loc_i,'percentile_from_all'] = percentiles.values
</code></pre>
<p>This approach works well for small datasets, but for even 10k iterations, it becomes too slow. Is there a way to do it simultaneously like with <code>apply</code> or so?</p>
<p>Thank you!</p>
|
<p>Check with <code>groupby</code></p>
<pre><code>df['rnk'] = df.groupby('type').score.rank(ascending=False)
Out[67]:
0 1.0
1 2.0
2 1.0
3 3.0
4 2.0
Name: score, dtype: float64
</code></pre>
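<p>And since you ultimately want percentiles, <code>rank</code> inside the groupby also accepts <code>pct=True</code>, matching your original formula directly:</p>
<pre><code>df['percentile_from_all'] = df.groupby('type').score.rank(pct=True) * 100
</code></pre>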
|
python|pandas|rank
| 2
|
7,987
| 67,905,777
|
how to use the input with pandas to get all the value.count linked to this input
|
<p>my dataframe looks like this:</p>
<pre><code>Index(['#Organism/Name', 'TaxID', 'BioProject Accession', 'BioProject ID', 'Group', 'SubGroup', 'Size (Mb)', 'GC%', 'Replicons', 'WGS',
'Scaffolds', 'Genes', 'Proteins', 'Release Date', 'Modify Date',
'Status', 'Center', 'BioSample Accession', 'Assembly Accession',
'Reference', 'FTP Path', 'Pubmed ID', 'Strain'],
dtype='object')
</code></pre>
<p>I ask the user to enter the name of the species with this script :</p>
<pre><code>print("bacterie species?")
species=input()
</code></pre>
<p>I want to look for the rows with "Organism/Name" equal to the species written by the user (input) then to calculate with "values.count" of the status column and finally to retrieve 'FTP Path'.
Here is the code that I could do but that does not work:</p>
<pre><code>if (data.loc[(data["Organism/Name"]==species)
print(Data['Status'].value_counts())
else:
print("This species not found")
if (data.loc[(data["Organism/Name"]==species)
print(Data['Status'].value_counts())
else:
print(Data.get["FTP Path"]
</code></pre>
|
<p>If I understand your question correctly, this is what you're trying to achieve:</p>
<pre><code>import wget
import numpy as np
import pandas as pd
URL='https://ftp.ncbi.nlm.nih.gov/genomes/GENOME_REPORTS/prokaryotes.txt'
data = pd.read_csv(wget.download(URL) , sep = '\t', header = 0)
species = input("Enter the bacteria species: ")
if data["#Organism/Name"].str.contains(species, case = False).any():
print(data.loc[data["#Organism/Name"].str.contains(species, case = False)]['Status'].value_counts())
FTP_list = data.loc[data["#Organism/Name"].str.contains(species, case = False)]["FTP Path"].values
else:
print("This species not found")
</code></pre>
<p>To wite all the <code>FTP_Path</code> urls into a txt file, you can do this:</p>
<pre><code>with open('/path/urls.txt', mode='wt') as file:
file.write('\n'.join(FTP_list))
</code></pre>
|
python|pandas|dataframe
| 1
|
7,988
| 67,728,773
|
Flip a Data Frame in Pandas and keep one column's values as the new row's values
|
<p>I currently have a Data Frame that looks like so when I read it in:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Country</th>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>01/01/2020</td>
<td>AFG</td>
<td>0</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>01/02/2020</td>
<td>AFG</td>
<td>2</td>
<td>5</td>
<td>0</td>
</tr>
<tr>
<td>01/03/2020</td>
<td>AFG</td>
<td>1</td>
<td>4</td>
<td>1</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>01/01/2020</td>
<td>USA</td>
<td>2</td>
<td>3</td>
<td>7</td>
</tr>
<tr>
<td>01/02/2020</td>
<td>USA</td>
<td>4</td>
<td>5</td>
<td>6</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to transform it into the form below, whereby the country becomes the row's index, date replaces the columns, and the values of Column A go onto fill the date's respective value for each country.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Country</th>
<th>01/01/2020</th>
<th>01/02/2020</th>
<th>01/03/2020</th>
<th>...</th>
<th>04/25/2021</th>
</tr>
</thead>
<tbody>
<tr>
<td>AFG</td>
<td>0</td>
<td>2</td>
<td>1</td>
<td>...</td>
<td>5</td>
</tr>
<tr>
<td>USA</td>
<td>2</td>
<td>4</td>
<td>9</td>
<td>...</td>
<td>15</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried to use group-by before but nothing appears to be working quite in the way shown above. Am I forgetting a command or is there some way this can be done?</p>
|
<p>You can do it in this way:</p>
<ol>
<li><p>TRY <code>pivot_table</code> to get the required.</p>
</li>
<li><p>Use <code>rename_axis</code> to remove the axis name.</p>
</li>
<li><p>Finally reset the index via <code>reset_index()</code>.</p>
</li>
</ol>
<pre><code>df = df.pivot_table(index='Country', columns='Date', values='A', fill_value=0).rename_axis(None, axis=1).reset_index()
</code></pre>
<p>OUTPUT:</p>
<pre><code> Country 01/01/2020 01/02/2020 01/03/2020
0 AFG 0 2 1
1 USA 2 4 0
</code></pre>
|
python|pandas|dataframe|group-by|time-series
| 1
|
7,989
| 53,235,718
|
Change column values in pandas based on condition
|
<p>df:</p>
<pre><code> A
0 219
1 590
2 272
3 945
4 175
5 930
6 662
7 472
8 251
9 130
</code></pre>
<p>I am trying to create a new column quantile based on which quantile the value falls in, for example:</p>
<pre><code>if value > 1st quantile : value = 1
if value > 2nd quantile : value = 2
if value > 3rd quantile : value = 3
if value > 4th quantile : value = 4
</code></pre>
<p>Code:</p>
<pre><code>f_q = df['A'] .quantile (0.25)
s_q = df['A'] .quantile (0.5)
t_q = df['A'] .quantile (0.75)
fo_q = df['A'] .quantile (1)
index = 0
for i in range(len(test_df)):
value = df.at[index,"A"]
if value > 0 and value <= f_q:
df.at[index,"A"] = 1
elif value > f_q and value <= s_q:
df.at[index,"A"] = 2
elif value > s_q and value <= t_q:
df.at[index,"A"] = 3
elif value > t_q and value <= fo_q:
df.at[index,"A"] = 4
index += 1
</code></pre>
<p>The code works fine. But I would like to know if there is a more efficient pandas way of doing this. Any suggestions are helpful.</p>
|
<p>Yes, using <a href="http://pandas.pydata.org/pandas-docs/version/0.15.0/generated/pandas.qcut.html" rel="nofollow noreferrer"><code>pd.qcut</code></a>:</p>
<pre><code>>>> pd.qcut(df.A, 4).cat.codes + 1
0 1
1 3
2 2
3 4
4 1
5 4
6 4
7 3
8 2
9 1
dtype: int8
</code></pre>
<p>(Gives me exactly the same result your code does.)</p>
<p>You could also call <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.unique.html" rel="nofollow noreferrer"><code>np.unique</code></a> on the <code>qcut</code> result:</p>
<pre><code>>>> np.unique(pd.qcut(df.A, 4), return_inverse=True)[1] + 1
array([1, 3, 2, 4, 1, 4, 4, 3, 2, 1])
</code></pre>
<p>Or, using <a href="http://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.factorize.html" rel="nofollow noreferrer"><code>pd.factorize</code></a> (note the slight difference in the output):</p>
<pre><code>>>> pd.factorize(pd.qcut(df.A, 4))[0] + 1
array([1, 2, 3, 4, 1, 4, 4, 2, 3, 1])
</code></pre>
|
python|pandas
| 2
|
7,990
| 52,908,588
|
Vectorization and optimization of matrix subtraction
|
<p>Is it possible to vectorize / optimize the following loop?</p>
<pre><code>In [33]: a = np.arange(10000 * 700).reshape([10000, 700])
In [34]: b = np.arange(1000 * 700).reshape([1000, 700])
In [35]: c = np.empty([b.shape[0], a.shape[0]])
In [36]: for i in range(b.shape[0]):
...: c[i] = np.argsort(np.linalg.norm(a - b[i], axis=1))
...:
</code></pre>
<hr>
<p><strong>Edit:</strong></p>
<p>I believe the following should work:</p>
<pre><code>d = np.argsort(np.linalg.norm(a[:, None] - b, axis=2), axis=1)
</code></pre>
<p>But I'm getting <code>MemoryError</code> for <code>a[:, None] - b</code>. Am I in the right direction? What can be done regarding the <code>MemoryError</code>?</p>
|
<p>Simplest way would be with <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html" rel="nofollow noreferrer"><code>cdist</code></a> -</p>
<pre><code>from scipy.spatial.distance import cdist
cdist(b,a).argsort(axis=1)
</code></pre>
<p>Equivalent one with <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.pairwise_distances.html" rel="nofollow noreferrer"><code>pairwise_distances</code></a> -</p>
<pre><code>from sklearn.metrics import pairwise_distances
pairwise_distances(b,a).argsort(1)
</code></pre>
<p>Timings for given sample data -</p>
<pre><code>In [201]: %%timeit # original solution
...: c = np.empty([b.shape[0], a.shape[0]],dtype=int)
...: for i in range(b.shape[0]):
...: c[i] = np.argsort(np.linalg.norm(a - b[i], axis=1))
1 loop, best of 3: 40.6 s per loop
In [202]: %timeit pairwise_distances(b,a).argsort(1)
1 loop, best of 3: 384 ms per loop
</code></pre>
<p><strong><code>100x+</code></strong> speedup!</p>
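<p>Regarding the <code>MemoryError</code> in your edit: <code>a[:, None] - b</code> materialises an intermediate array of shape <code>(10000, 1000, 700)</code>, i.e. roughly <code>10000 * 1000 * 700 * 8 bytes ≈ 56 GB</code> of float64 data, which is why it fails. <code>cdist</code>/<code>pairwise_distances</code> avoid building that full difference tensor.</p>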
|
numpy|vectorization
| 2
|
7,991
| 53,300,353
|
Python every three rows to columns using pandas
|
<p>I have a text file with data that repeats every 3 rows. Let's say it is <code>hash</code>, <code>directory</code>, <code>sub directory</code>. The data looks like the following:</p>
<pre><code>a3s2d1f32a1sdf321asdf
Dir_321321
Dir2_asdf
s21a3s21d3f21as32d1f
Dir_65465
Dir2_werq
asd21231asdfa3s21d
Dir_76541
Dir2_wbzxc
....
</code></pre>
<p>I have created a python script that takes the data and every 3 rows creates columns:</p>
<pre><code>import pandas as pd
df1 = pd.read_csv('RogTest/RogTest.txt', delimiter = "\t", header=None)
df2 = df1[df1.index % 3 == 0]
df2 = df2.reset_index(drop=True)
df3 = df1[df1.index % 3 == 1]
df3 = df3.reset_index(drop=True)
df4 = df1[df1.index % 3 == 2]
df4 = df4.reset_index(drop=True)
df5 = pd.concat([df2, df3], axis=1)
df6 = pd.concat([df5, df4], axis=1)
#Rename columns
df6.columns = ['Hash', 'Dir_1', 'Dir_2']
#Write to csv
df6.to_csv('RogTest/RogTest.csv', index=False, header=True)
</code></pre>
<p>This works fine but I am curious if there is a more efficient way to do this aka less code?</p>
|
<p>You can use:</p>
<pre><code>import numpy as np

df_final = pd.DataFrame(np.reshape(df.values, (-1, 3)))  # one row per group of 3 input lines
df_final.columns = ['Hash', 'Dir_1', 'Dir_2']
</code></pre>
<p>Output:</p>
<pre><code> Hash Dir_1 Dir_2
0 a3s2d1f32a1sdf321asdf Dir_321321 Dir2_asdf
1 s21a3s21d3f21as32d1f Dir_65465 Dir2_werq
2 asd21231asdfa3s21d Dir_76541 Dir2_wbzxc
</code></pre>
|
python|pandas
| 2
|
7,992
| 65,672,181
|
how many epoch for training 1k images
|
<p>I am training an object detector with YOLOv3, but I face a problem: when I set batch_size > 1 it causes a CUDA out-of-memory error, so I searched Google for another solution and found that it depends on my GPU (GTX 1070 8 GB).</p>
<p>Maybe the number of epochs is high and needs to be optimized.</p>
<p>Maybe the epoch number should be decreased? I am training on 1k images with 200 pictures for validation.</p>
<p>What is the best number of epochs to set to avoid overfitting?</p>
|
<p>Your model's overfitting won't depend on the number of epochs you set.<br />
Since you have made a validation split in your data, make sure that your <strong>train loss - val loss OR train acc - val acc</strong> stay nearly the same. This will assure you that your model is not overfitting.</p>
|
python|machine-learning|deep-learning|pytorch
| 0
|
7,993
| 65,513,530
|
Create interactive plot of the Continuous Uniform Distribution with sliders for parameter values
|
<p>How can I create an interactive plot of the pdf and the cdf of the <a href="https://en.wikipedia.org/wiki/Continuous_uniform_distribution" rel="nofollow noreferrer">Continuous Uniform Distribution</a> using python?</p>
<p><a href="https://i.stack.imgur.com/k7AnJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k7AnJ.png" alt="uniform distribution" /></a></p>
<p>I would like to have interactive sliders with which I can adjust the parameters of the distribution.</p>
<p>This is handy for getting a better insight in the behavior of this distribution for different values of its parameters.</p>
|
<p><strong>1. The simplest way is to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.uniform.html" rel="nofollow noreferrer">scipy.stats.uniform()</a> to get the pdf and the cdf of the distribution and then using <a href="https://panel.holoviz.org/user_guide/Interact.html" rel="nofollow noreferrer">pn.interact()</a> to get the interactive sliders for the parameters:</strong></p>
<pre><code># import libraries
import numpy as np
from scipy import stats
import pandas as pd
import hvplot.pandas
import panel as pn
pn.extension()
import panel.widgets as pnw
import holoviews as hv
hv.extension('bokeh')
# define pdf and cdf for cont uniform distr and return plots
def get_interactive_plot_cont_uniform(loc=1, scale=5):
continuous_uniform = stats.uniform(loc=loc, scale=scale)
x_values = np.linspace(0, 10, num=1000)
fx_values = continuous_uniform.pdf(x_values)
Fx_values = continuous_uniform.cdf(x_values)
interactive_plot = (
hv.Curve((x_values, fx_values), label='PDF')
+ hv.Curve((x_values, Fx_values), label='CDF'))
return interactive_plot
# use pn.interact() to get interactive sliders
# and define the sliders yourself for more flexibility
pn.interact(
get_interactive_plot_cont_uniform,
loc=pnw.FloatSlider(
name='Value for loc', value=1.,
start=0, step=0.5, end=10),
scale=pnw.FloatSlider(
name='Value for scale', value=5.,
start=0, step=0.5, end=10),
)
</code></pre>
<p><strong>Resulting interactive plot with sliders:</strong></p>
<p><a href="https://i.stack.imgur.com/FrTwl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FrTwl.png" alt="interactive plot with sliders of continuous uniform distribution" /></a></p>
<p><strong>2. A more extensive and flexible example which allows setting parameter a and b more intuitively:</strong></p>
<pre><code># create sliders to adjust parameter a and b
param_a = pnw.FloatSlider(name='Value for parameter a', value=1., start=0, step=0.5, end=10)
param_b = pnw.FloatSlider(name='Value for parameter b', value=6., start=0, step=0.5, end=10)

# get interactivity by using following decorator
@pn.depends(param_a, param_b)
def get_interactive_cont_uniform(param_a, param_b):
    # define the uniform distribution
    # scale in scipy.stats.uniform is b - a
    loc = param_a
    scale = param_b - param_a
    continuous_uniform = stats.uniform(loc=loc, scale=scale)

    # calculate x and y values for pdf and cdf and put in dataframe
    x_values = np.linspace(0, 10, num=1000)
    fx_values = continuous_uniform.pdf(x_values)
    Fx_values = continuous_uniform.cdf(x_values)
    df = pd.DataFrame({
        'x': x_values,
        'f(x)': fx_values,
        'F(x)': Fx_values,
    })

    # create pdf and cdf plot
    pdf_plot = df.hvplot.line(
        x='x',
        y='f(x)',
        ylim=(0, 1.02),
        ylabel='f(x) - pdf values', xlabel='',
        title=f'pdf where a={param_a} and b={param_b}',
        height=225,
    ).opts(fontscale=1.25)

    cdf_plot = df.hvplot.line(
        x='x',
        y='F(x)',
        ylim=(0, 1.02),
        ylabel='F(x) - cdf values',
        title=f'cdf where a={param_a} and b={param_b}',
        height=225,
    ).opts(fontscale=1.25)

    return (pdf_plot + cdf_plot).cols(1)

# use pyviz panel to get a nice view of sliders and the plots
pn.Column(
    pn.pane.Markdown("## Continuous Uniform Distribution"),
    pn.Row(param_a, param_b),
    pn.Row(get_interactive_cont_uniform)
)
</code></pre>
|
python|pandas|scipy|holoviews|panel-pyviz
| 2
|
7,994
| 65,485,987
|
How do I convert a .dbf file into a Pandas DataFrame?
|
<p>I have a <code>.dbf</code> file that I would like to convert into a pandas <code>DataFrame</code>, but pandas cannot read that format directly.</p>
|
<p>Using <a href="https://pypi.org/project/dbf/" rel="nofollow noreferrer">my <code>dbf</code> library</a>, the following function will do the job:</p>
<pre><code>def dbf_to_dataframe(filename):
    """
    converts the dbf table at filename into a pandas DataFrame
    data types and field names are preserved
    """
    import dbf
    import numpy as np
    import pandas as pd
    from datetime import date, datetime, time

    names = []
    types = []
    table = dbf.Table(filename)
    for name in table.field_names:
        ftype, size, decimals, _ = table.field_info(name)
        ftype = chr(ftype)
        if ftype in 'GP':
            continue            # skip General and Picture (binary) fields
        if ftype == 'N' and decimals:
            ftype = 'F'         # numeric fields with decimals map to float
        # map dbf field types to pandas dtypes
        dtype = {
            'B': 'float64',
            'C': 'string',
            'D': 'datetime64[ns]',
            'F': 'float64',
            'I': 'int64',
            'L': 'boolean',
            'M': 'string',
            'N': 'int64',
            'T': 'datetime64[ns]',
            'Y': 'float64',
        }[ftype]
        names.append(name)
        types.append(dtype)

    with table:
        series = [[] for _ in names]
        for rec in table:
            for i, value in enumerate(rec):
                if isinstance(value, date):
                    value = datetime.combine(value, time())
                elif value is None:
                    value = np.nan
                series[i].append(value)

    data_recs = dict(
        (n, pd.Series(s, dtype=t))
        for n, s, t in zip(names, series, types)
    )
    return pd.DataFrame(data_recs)
</code></pre>
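<p>A minimal usage sketch (assuming a table named <code>example.dbf</code> exists in the working directory):</p>
<pre><code>df = dbf_to_dataframe('example.dbf')
print(df.dtypes)    # field types preserved as pandas dtypes
print(df.head())
</code></pre>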
|
python|pandas|dataframe|dbf
| 2
|
7,995
| 65,671,476
|
How to implement Forward Selection using KNN?
|
<p>I am trying to use a wrapper method in Python to implement a simple forward selection using KNN from the data I have.</p>
<p>My data:</p>
<pre><code>ID S_LENGTH S_WIDTH P_LENGTH P_WIDTH SPECIES
------------------------------------------------------------------
1 3.5 2.5 5.6 1.7 VIRGINICA
2 4.5 5.6 3.4 8.7 SETOSA
</code></pre>
<p>This is where I have defined <code>X</code> and <code>y</code>:</p>
<pre><code>X = df[['S_LENGTH', 'S_WIDTH', 'P_LENGTH', 'P_WIDTH']].values
y = df['SPECIES'].values
</code></pre>
<p>This is a simple KNN model:</p>
<pre><code>clf = neighbors.KNeighborsClassifier()
clf.fit(X_fs,y)
predictions = clf.predict(X_fs)
metrics.accuracy_score(y, predictions)
</code></pre>
<p>Therefore, how would I implement a KNN model using forward selection?</p>
<p>Thanks!</p>
|
<p>I do <a href="https://stats.stackexchange.com/questions/363662/can-you-derive-variable-importance-from-a-nearest-neighbor-algorithm">not believe that KNN has a built-in feature importance</a>, so you basically have three options. First, you can use a model-agnostic version of feature importance, such as permutation importance.</p>
<p>Second, you can try adding one feature at a time at each step and pick the model that most increases performance; this is forward selection proper.</p>
<p>Third (closely related to the second), just try every combination! Since you only have 4 features, assuming you don't have too much data, you could simply try all combinations of features. There are 4 models with one feature, 6 (4 choose 2) with two, 4 with three, and 1 with all four, for 15 in total. That's probably less computation than the two ideas above.</p>
<p>So something like this:</p>
<pre><code>feat_lists = [
    ['S_LENGTH'],
    ['S_WIDTH'],
    ...
    ['S_LENGTH', 'S_WIDTH', 'P_LENGTH'],
    ['S_LENGTH', 'S_WIDTH', 'P_WIDTH'],
    ...
    ['S_LENGTH', 'S_WIDTH', 'P_LENGTH', 'P_WIDTH']
]

for feats in feat_lists:
    X = df[feats].values
    y = df['SPECIES'].values
    ...all your other code...
    print(feats)
    print(metrics.accuracy_score(y, predictions))
</code></pre>
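<p>Rather than typing all 15 subsets by hand, you could generate them with <code>itertools.combinations</code>. A sketch, assuming <code>df</code>, <code>neighbors</code>, and <code>metrics</code> are set up as in the question:</p>
<pre><code>from itertools import combinations

features = ['S_LENGTH', 'S_WIDTH', 'P_LENGTH', 'P_WIDTH']
y = df['SPECIES'].values

# try every non-empty subset of the four features
for k in range(1, len(features) + 1):
    for feats in combinations(features, k):
        X = df[list(feats)].values
        clf = neighbors.KNeighborsClassifier()
        clf.fit(X, y)
        predictions = clf.predict(X)
        print(feats, metrics.accuracy_score(y, predictions))
</code></pre>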
<p>To clarify, I'm assuming that's not actually your data, but only the first two rows, correct? If you only have two rows, you have bigger problems :)</p>
|
python|pandas|scikit-learn|knn|feature-selection
| 0
|
7,996
| 63,638,082
|
Accessing pandas groupby results and taking the first n quantities
|
<p>I have a data frame looking like this:</p>
<pre><code>Date Product Quantity Price Buy/Sell
8/11 Apple 5 5 b
8/11 Apple 5 4 b
8/12 Pear 11 4 b
8/13 Pear 4 3 b
8/13 Pear 5 6 s
</code></pre>
<p>I am trying to distribute them according to a split. In this case, say 60% and 40%.</p>
<p>The top 60% would go to one df, the lower 40% goes to another.</p>
<p>So the result would be</p>
<pre><code>output df1
8/11 Apple 5 5 b
8/11 Apple 1 4 b
8/12 Pear 10 4 b
8/13 Pear 3 6 s
output df2
8/11 Apple 4 4 b
8/12 Pear 1 4 b
8/13 Pear 4 3 b
8/13 Pear 2 6 s
</code></pre>
<p>I have grouped them with <code>df.groupby(["Product", "Buy/Sell"])</code>, but I'm not sure how to access the result to obtain the individual groups.</p>
<p>I'm thinking that after I group them, I can have a counter moving entries to the 60% side until it can't move whole entries anymore, then split the next one. After that, the rest goes to the 40% side.</p>
<p>How would I go about accessing the groupby elements?</p>
<p>Is this a good way to go about it?</p>
|
<p>I think you are OK going this route. You will have to devise some quick function to split up the ones that need to be divided, which will be a bit of work.</p>
<p>You can access the groups out of the grouped object as below. The "GroupBy" object is iterable, and when you iterate on it, you get back a tuple with the group name (key) and a dataframe of that group.</p>
<pre><code>In [43]: df
Out[43]:
Date Product Quantity Price Buy/Sell
0 8/11 Apple 5 5 b
1 8/11 Apple 5 4 b
2 8/12 Pear 11 4 b
3 8/13 Pear 4 3 b
4 8/13 Pear 5 6 s
In [44]: grouped = df.groupby(['Product', 'Buy/Sell'])
In [45]: type(grouped)
Out[45]: pandas.core.groupby.generic.DataFrameGroupBy
In [46]: for group_name, group in grouped:
    ...:     print(group_name)
    ...:     print(type(group))
    ...:     print(group)
    ...:     grp_tot = group['Quantity'].sum()
    ...:     print(f'Total quantity within this group is {grp_tot}')
    ...:     print('\n')
    ...:
('Apple', 'b')
<class 'pandas.core.frame.DataFrame'>
Date Product Quantity Price Buy/Sell
0 8/11 Apple 5 5 b
1 8/11 Apple 5 4 b
Total quantity within this group is 10
('Pear', 'b')
<class 'pandas.core.frame.DataFrame'>
Date Product Quantity Price Buy/Sell
2 8/12 Pear 11 4 b
3 8/13 Pear 4 3 b
Total quantity within this group is 15
('Pear', 's')
<class 'pandas.core.frame.DataFrame'>
Date Product Quantity Price Buy/Sell
4 8/13 Pear 5 6 s
Total quantity within this group is 5
</code></pre>
<p>I would use something like this: add a "destination" column to the overall dataframe and mark each row with "1" or "2" for the split it should go into, then use that column as a selection. This avoids row-by-row appends, which are slow. Afterwards you will have to go back, find the rows that straddle the boundary, and split them up.</p>
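<p>A rough sketch of that splitting step, using an exact 60% cut on each group's total quantity (note the worked example in the question rounds the boundary row slightly differently, so adjust the boundary handling to taste):</p>
<pre><code>rows_1, rows_2 = [], []

for _, group in df.groupby(['Product', 'Buy/Sell']):
    target = 0.6 * group['Quantity'].sum()   # quantity destined for df1
    taken = 0
    for _, row in group.iterrows():
        if taken + row['Quantity'] <= target:
            rows_1.append(row)
            taken += row['Quantity']
        elif taken < target:
            # this row straddles the 60% boundary: split it in two
            first, second = row.copy(), row.copy()
            first['Quantity'] = target - taken
            second['Quantity'] = row['Quantity'] - first['Quantity']
            rows_1.append(first)
            rows_2.append(second)
            taken = target
        else:
            rows_2.append(row)

df1 = pd.DataFrame(rows_1).reset_index(drop=True)
df2 = pd.DataFrame(rows_2).reset_index(drop=True)
</code></pre>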
|
python-3.x|pandas
| 1
|
7,997
| 53,613,006
|
How to sum all values with one index greater than X in a MultiIndexed DataFrame, grouping on the other indices?
|
<p>I am trying to do the exact same thing as described in this <a href="https://stackoverflow.com/questions/39125695/how-to-sum-all-values-with-index-greater-than-x">post</a> but with a MultiIndexed Pandas DataFrame. I've been trying to adapt the answer to the other post so that it would work with my DataFrame but without any luck.</p>
<p>Currently I have the following DataFrame where <code>target</code>, <code>wt</code> and <code>ms</code> are in the index:</p>
<pre><code> percent
target wt ms
g1 2 1 2
2 5
... ...
620 0.003
630 0.005
... ... ... ... ...
g9 8 1 4
2 8
... ...
470 0.005
480 0.004
</code></pre>
<p>I need to limit the range of <code>ms</code> to some number, say 12, and sum up the values in the <code>percent</code> column where <code>ms>12</code>, grouped on the indices <code>target</code> and <code>wt</code>.</p>
<p>The outcome I want would look something like this:</p>
<pre><code> percent
target wt ms
g1 2 1 2
2 5
... ...
>12 5.4
... ... ... ... ...
g9 8 1 4
2 8
... ...
>12 7.3
</code></pre>
<p>How can I do this?</p>
|
<p>First create a boolean mask on level <code>ms</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow noreferrer"><code>get_level_values</code></a> compared against a scalar. Then filter the rows by <a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> and <code>sum</code> over the first 2 levels. This loses the <code>ms</code> level, so it is re-added with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.assign.html" rel="nofollow noreferrer"><code>assign</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a>.</p>
<p>Finally, join everything back together with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>, keeping the other rows via the inverted mask (<code>~</code>), and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_index.html" rel="nofollow noreferrer"><code>sort_index</code></a>:</p>
<pre><code>mask = df.index.get_level_values('ms') > 12
df1 = df[mask].sum(level=[0,1]).assign(ms='>12').set_index('ms', append=True)
df = pd.concat([df[~mask], df1]).sort_index()
print (df)
percent
target wt ms
g1 2 1 2.000
2 5.000
>12 0.008
g9 8 1 4.000
2 8.000
>12 0.009
</code></pre>
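<p>Note: in recent pandas versions the <code>level</code> argument of <code>sum</code> is deprecated and later removed; the equivalent using an explicit <code>groupby</code> looks like this:</p>
<pre><code>df1 = (df[mask].groupby(level=[0, 1]).sum()
               .assign(ms='>12')
               .set_index('ms', append=True))
</code></pre>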
|
python|pandas|dataframe|aggregate|pandas-groupby
| 1
|
7,998
| 71,943,413
|
Roll over 1 day when you have repeated dates
|
<p>I have this set of data :</p>
<pre><code>df = pd.DataFrame()
df['Date'] = ["29/07/2021", "29/07/2021", "29/07/2021", "29/07/2021", "30/07/2021", "30/07/2021", "30/07/2021", "30/07/2021", "31/07/2021", "31/07/2021", "01/08/2021", "01/08/2021", "02/08/2021"]
df['Time'] = ["06:48:00", "06:53:00", "06:56:00", "06:59:00", "07:14:00", "07:18:00", "07:40:00", "08:12:00", "08:42:00", "08:57:00", "05:45:00", "05:55:00", "06:05:00"]
df["Column1"] = [0.011534891, 0.013458399, 0.017792937, 0.018807581, 0.025931434, 0.025163517, 0.026561283, 0.027743659, 0.028854, 0.000383506, 0.000543031, 0.000342, 0.000313769]
</code></pre>
<p>I want to roll over 1 day and find the mean value of "Column1".</p>
<p>I tried this code :</p>
<pre><code>df.index = df["Date"]
df['mean'] = df["Column1"].rolling(1440, min_periods=1).mean()
print(df['mean'])
output:
Date
29/07/2021 0.011535
29/07/2021 0.012497
29/07/2021 0.014262
29/07/2021 0.015398
30/07/2021 0.017505
30/07/2021 0.018781
30/07/2021 0.019893
30/07/2021 0.020874
31/07/2021 0.021761
31/07/2021 0.019623
01/08/2021 0.017889
01/08/2021 0.016426
02/08/2021 0.015187
</code></pre>
<p>As you can see, the dates are repeating and that is not correct.</p>
<p><strong>I expected to have something like this</strong> (I want the mean of those values; I wrote <code>mean(...)</code> below just to show what I need, but the result will be a single mean value per date):</p>
<pre><code>Date
29/07/2021 mean(0.011535, 0.012497, 0.014262, 0.015398)
30/07/2021 mean(0.017505, 0.018781, 0.019893, 0.020874)
31/07/2021 mean(0.021761, 0.019623)
01/08/2021 mean(0.017889, 0.016426)
02/08/2021 0.015187
</code></pre>
<p>I also tried this code :</p>
<pre><code>df['DateTime'] = pd.to_datetime(df['Date'], dayfirst=True) + pd.to_timedelta(df['Time'])
df.index = df['DateTime']
df['mean'] = df["Column1"].rolling('1D').mean()
print(df['mean'])
output :
DateTime
2021-07-29 06:48:00 0.011535
2021-07-29 06:53:00 0.012497
2021-07-29 06:56:00 0.014262
2021-07-29 06:59:00 0.015398
2021-07-30 07:14:00 0.025931
2021-07-30 07:18:00 0.025547
2021-07-30 07:40:00 0.025885
2021-07-30 08:12:00 0.026350
2021-07-31 08:42:00 0.028854
2021-07-31 08:57:00 0.014619
2021-08-01 05:45:00 0.009927
2021-08-01 05:55:00 0.007531
2021-08-02 06:05:00 0.000314
</code></pre>
<p>But I get a different result and still not the one desired.</p>
|
<p>Rather than <code>rolling</code>, the desired output calls for <code>groupby</code> + <code>mean</code>:</p>
<pre><code>out = df.groupby('Date', as_index=False)['Column1'].mean()
</code></pre>
<p>Output:</p>
<pre><code> Date Column1
0 01/08/2021 0.000443
1 02/08/2021 0.000314
2 29/07/2021 0.015398
3 30/07/2021 0.026350
4 31/07/2021 0.014619
</code></pre>
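<p>One caveat: with string dates the result is ordered lexicographically ('01/08/2021' before '29/07/2021'). If chronological order matters, convert the column first:</p>
<pre><code>df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)
out = df.groupby('Date', as_index=False)['Column1'].mean()   # now sorts chronologically
</code></pre>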
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 2
|
7,999
| 71,899,250
|
Python: reading the first entry of each parenthesis in a series of parentheses
|
<p>I have thousands of lines of the following sample in a csv file. The header of the file is as follows:</p>
<p><strong>File Header:</strong></p>
<blockquote>
<p>field1, field2, field3, field4</p>
</blockquote>
<p><strong>Sample Data:</strong></p>
<blockquote>
<p>field1, field2, 1, "[('entryA', 'typeA'), ('entryB', 'typeB'), ('entryC', 'typeC'),
('entryD', 'typeD')]"</p>
</blockquote>
<p>My question is: how do I extract field3, pair it with the first entry of each parenthesis, and put it in the following format?</p>
<p><strong>What I need:</strong></p>
<blockquote>
<p>{
"id": "field3",
"entries" : ["entryA", "entryB", "entryC", "entryD"]
}</p>
</blockquote>
<p><strong>My code:</strong></p>
<pre><code>import pandas as pd
import re

df = pd.read_csv('file.csv')
id = df['field3']
entries = df['field4']

for row in entries:
    result = entries.str.findall("(?<=\(').*?(?=',)")
</code></pre>
<p><strong>Current output:</strong></p>
<p>The current regex works, but I just noticed that I have special symbols like '(' inside my entries, which corrupts the matches (unwanted matches).</p>
|
<p>You don't need <code>re</code>; use <code>ast.literal_eval</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> s = "[('entryA', 'typeA'), ('entryB', 'typeB'), ('entryC', 'typeC'), ('entryD', 'typeD')]"
>>> from ast import literal_eval
>>> literal_eval(s)
[('entryA', 'typeA'), ('entryB', 'typeB'), ('entryC', 'typeC'), ('entryD', 'typeD')]
>>> out = [i[0] for i in literal_eval(s)]
>>> out
['entryA', 'entryB', 'entryC', 'entryD']
</code></pre>
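<p>To produce the requested structure for every row, you can apply the same idea over the frame. A sketch, assuming the columns are named <code>field3</code> and <code>field4</code> as in the header (<code>skipinitialspace=True</code> handles the spaces after the commas):</p>
<pre><code>from ast import literal_eval

import pandas as pd

df = pd.read_csv('file.csv', skipinitialspace=True)
records = [
    {'id': row['field3'],
     'entries': [entry for entry, _ in literal_eval(row['field4'])]}
    for _, row in df.iterrows()
]
</code></pre>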
|
python|pandas|dataframe
| 1
|