Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
7,500
| 55,908,567
|
How can I replace any value with an NAN that is not within a certain range of the previous value in a pandas series?
|
<p>I have a pandas series and I want to find out if a value is within a certain range of the previous value (say 10% above or below) and replace it with NAN if not. I am not sure how to proceed. The standard outlier removal techniques mostly deal with overall standard deviation etc.</p>
<p>How can I access the previous value at every step and operate on it?</p>
<pre><code>2018-09-06 NaN
2018-09-07 NaN
2018-09-08 NaN
2018-09-09 662.105
2018-09-10 651.010
2018-09-11 454.870
2018-09-12 597.840
2018-09-13 662.405
2018-09-14 660.735
2018-09-15 671.065
2018-09-16 668.485
2018-09-17 666.205
2018-09-18 663.620
2018-09-19 663.320
2018-09-20 662.715
2018-09-21 665.145
2018-09-22 663.015
2018-09-23 663.775
2018-09-24 662.860
2018-09-25 663.315
2018-09-26 665.600
2018-09-27 664.080
2018-09-28 661.800
2018-09-29 659.825
2018-09-30 659.370
2018-10-01 NaN
2018-10-02 NaN
2018-10-03 NaN
2018-10-04 NaN
</code></pre>
|
<p>You can use <code>pct_change</code> as @ALollz mentioned in the comment. Use <code>Series.loc</code> to set the values where the change from the previous value exceeds 10% to NaN.</p>
<pre><code>ts.loc[ts.pct_change().abs() > 0.1] = np.nan
2018-09-06 NaN
2018-09-07 NaN
2018-09-08 NaN
2018-09-09 662.105
2018-09-10 651.010
2018-09-11 NaN
2018-09-12 NaN
2018-09-13 NaN
2018-09-14 660.735
2018-09-15 671.065
2018-09-16 668.485
2018-09-17 666.205
2018-09-18 663.620
2018-09-19 663.320
2018-09-20 662.715
2018-09-21 665.145
2018-09-22 663.015
2018-09-23 663.775
2018-09-24 662.860
2018-09-25 663.315
2018-09-26 665.600
2018-09-27 664.080
2018-09-28 661.800
2018-09-29 659.825
2018-09-30 659.370
2018-10-01 NaN
2018-10-02 NaN
2018-10-03 NaN
2018-10-04 NaN
</code></pre>
|
python|pandas
| 4
|
7,501
| 64,806,982
|
Pivot Pandas Column of Lists
|
<p>I have a pandas dataframe that has a column whose values are lists and where another column is a date. I would like to create a dataframe that counts the elements of the lists by date.</p>
<p>The dataframe looks like:</p>
<p><img src="https://i.stack.imgur.com/zBZtq.png" alt="image of dataframe. I'm not yet awesome enough to post pictures directly" /></p>
<pre><code>pd.DataFrame(
data={
"col1": ["['a','b']", "['b','c']", "['a','c']", "", "['b']"],
"col2": ["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05"],
},
index=[0, 1, 2, 3, 4],
)
</code></pre>
<p>What I would like the dataframe to look like is:</p>
<p><img src="https://i.stack.imgur.com/ghIKv.png" alt="Image of desired dataframe" /></p>
<pre><code>pd.DataFrame(
data={"a": [1, 0, 1, 0, 0], "b": [1, 1, 0, 0, 1], "c": [0, 1, 1, 0, 0]},
index=["2020-01-01", "2020-01-02", "2020-01-03", "2020-01-04", "2020-01-05"],
)
</code></pre>
<p>Any thoughts on how to do this kind of transformation?</p>
|
<p>You can use <code>extractall</code> to extract the values inside <code>''</code>, then counts the values with <code>groupby</code>:</p>
<pre><code>out= (df.col1.str.extractall("'([^']*)'")
.groupby(level=0)[0].value_counts()
.unstack(level=1,fill_value=0)
.reindex(df.index, fill_value=0)
)
out.index= df['col2']
print(out)
</code></pre>
<p>Output:</p>
<pre><code>0 a b c
col2
2020-01-01 1 1 0
2020-01-02 0 1 1
2020-01-03 1 0 1
2020-01-04 0 0 0
2020-01-05 0 1 0
</code></pre>
|
python|pandas|list|dataframe|pivot-table
| 2
|
7,502
| 64,762,595
|
Apply a function from package to one column in Python
|
<p>Given a small dataset as follows:</p>
<pre><code> id floor room company
0 1 1 101.0 NaN
1 2 1 102.0 繁簡轉換器 ---> need to convert
2 3 2 201.0 缔美诗药妆皮肤管理中心
3 4 2 201.0 TT潮牌造型设计(上海)
4 5 2 202.0 TT潮牌造型设计(北京)
5 6 3 NaN 繁簡轉換器 ---> need to convert
6 7 3 201.0 NaN
7 8 3 301.0 湖南杰牌传动科技发展有限公司
</code></pre>
<p>I need to convert <code>company</code> column from traditional chinese to simplied chinese using this <a href="https://github.com/berniey/hanziconv" rel="nofollow noreferrer">package</a>.</p>
<p>I tested with string <code>繁簡轉換器</code>, it converts successfully:</p>
<pre><code>>>> from hanziconv import HanziConv
>>> print(HanziConv.toSimplified('繁簡轉換器'))
繁简转换器
</code></pre>
<p>But as I try to apply it to <code>company</code> column:</p>
<pre><code>from hanziconv import HanziConv
df['company'] = df['company'].apply(HanziConv.toSimplified())
</code></pre>
<p>It returns an error: <code>TypeError: toSimplified() missing 1 required positional argument: 'text'</code>.</p>
<p>Anyone could help to solve this issue? Thanks a lot.</p>
|
<p>I don't know about <code>HanziConv</code>, but this may work:</p>
<p><code>df['company'] = df['company'].astype(str).apply(HanziConv.toSimplified)</code></p>
|
python-3.x|pandas|dataframe|apply
| 1
|
7,503
| 40,297,848
|
SGD with momentum in TensorFlow
|
<p>In Caffe, the SGD solver has a momentum parameter (<a href="http://caffe.berkeleyvision.org/tutorial/solver.html" rel="nofollow noreferrer">link</a>). In TensorFlow, I see that <code>tf.train.GradientDescentOptimizer</code> does not have an explicit momentum parameter. However, I can see that there is <code>tf.train.MomentumOptimizer</code> optimizer. Is it the equivalent of Caffe SGD with momentum optimizer?</p>
|
<p>Yes, it is: <code>tf.train.MomentumOptimizer</code> is SGD with momentum.</p>
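<p>A minimal sketch of swapping the two in a TF 1.x graph (the learning rate, momentum value and <code>loss</code> tensor are placeholders, not taken from the question):</p>
<pre><code># plain SGD
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)

# SGD with momentum, the counterpart of Caffe's momentum parameter
optimizer = tf.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)
train_op = optimizer.minimize(loss)
</code></pre>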
|
tensorflow|optimization|sgd
| 20
|
7,504
| 39,970,099
|
ValueError for comparison of np.arrays
|
<p>I have a list of lists of np.arrays, representing islands > island > geodesic point on the island.</p>
<p>I'm trying to use:</p>
<pre><code>if not groups:
    createNewGroup(point)
else:
    for group in groups:
        if point in group:
            continue
        else:
            createNewGroup(point)
</code></pre>
<p>The first island is being created correctly, but for the second island I am getting this error:</p>
<pre><code>File "A2.py", line 371, in findIslands
if point in group:
ValueError: The truth value of an array with more than
one element is ambiguous. Use a.any() or a.all()
</code></pre>
<p>I've researched this error and am trying to understand how this applies to my situation, and have tried applying <code>.any()</code> and <code>.all()</code> to <code>point</code> but I am getting the same error regardless.</p>
<p>I'm trying to check if the current geodesic point is already in the list of lists for any of the islands. Point is multidimensional and I think that's where the issue is coming from.</p>
|
<p>This error arises when a boolean array is used in a scalar context, such as an <code>if</code> statement, or here in the <code>in</code> part of the expression.</p>
<p>Is <code>point</code> an array, and <code>group</code> a list of arrays?</p>
<p>In general <code>in</code> is not a good test when working with arrays.</p>
<p>To get more help, print <code>point</code>, <code>group</code>, and <code>point in group</code>, or at least their type, shape and dtype.</p>
<p>Make a small list of <code>point</code>s, and focus on how to perform an <code>in</code> test or equivalent. How do you tell whether one equals another? <code>point1 == point2</code>?</p>
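<p>A minimal sketch of a membership test that avoids the ambiguity (assuming <code>point</code> is an array and <code>group</code> is a list of arrays of the same shape):</p>
<pre><code>import numpy as np

# True if point matches any array already stored in group
already_in_group = any(np.array_equal(point, p) for p in group)

if not already_in_group:
    createNewGroup(point)
</code></pre>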
|
python|numpy
| 0
|
7,505
| 44,202,825
|
Tensorflow-GPU Error - Pycharm
|
<p>I am installing TensorFlow.</p>
<p>I was having trouble installing through Anaconda, so I uninstalled everything including Python, and downloaded Python 3.5 from here:</p>
<p><a href="https://www.python.org/downloads/release/python-352/" rel="nofollow noreferrer">https://www.python.org/downloads/release/python-352/</a></p>
<p>After installing Python 3.5 I installed PyCharm, and set my path variables so that it could find the Python folder.</p>
<p>Then I used command prompt to install tensorflow using:</p>
<pre><code>pip3 install --upgrade tensorflow-gpu
</code></pre>
<p>Anyway, it installed tensorflow and the other stuff like numpy, protobuf, etc.
I set up a project in PyCharm and set the interpreter to the default one located in the Python35 folder.</p>
<p>I opened up the Python console within Pycharm and typed:</p>
<pre><code>import tensorflow
</code></pre>
<p>to get this error:</p>
<pre><code>import tensorflow
Traceback (most recent call last):
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 18, in swig_import_helper
return importlib.import_module(mname)
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 986, in _gcd_import
File "<frozen importlib._bootstrap>", line 969, in _find_and_load
File "<frozen importlib._bootstrap>", line 958, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 666, in _load_unlocked
File "<frozen importlib._bootstrap>", line 577, in module_from_spec
File "<frozen importlib._bootstrap_external>", line 906, in create_module
File "<frozen importlib._bootstrap>", line 222, in _call_with_frames_removed
ImportError: DLL load failed: The specified module could not be found.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 41, in <module>
from tensorflow.python.pywrap_tensorflow_internal import *
File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.1.3\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 21, in <module>
_pywrap_tensorflow_internal = swig_import_helper()
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\pywrap_tensorflow_internal.py", line 20, in swig_import_helper
return importlib.import_module('_pywrap_tensorflow_internal')
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
ImportError: No module named '_pywrap_tensorflow_internal'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Program Files\JetBrains\PyCharm Community Edition 2017.1.3\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import
module = self._system_import(name, *args, **kwargs)
File "C:\Users\Justin\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\__init__.py", line 24, in <module>
Failed to load the native TensorFlow runtime.
</code></pre>
<p>I hope that is enough detail for someone to help me.</p>
|
<p>You are missing some dependencies. I would recommend rebuilding it from scratch. Use <a href="https://www.tensorflow.org/install/install_windows" rel="nofollow noreferrer">this guide</a> for Windows.</p>
<p>You need to fulfill the requirements for running TF GPU.</p>
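<p>Once the CUDA/cuDNN requirements from that guide are in place, a quick sanity check (a sketch, assuming a TF 1.x release that provides <code>tf.test.is_gpu_available</code>) is:</p>
<pre><code>import tensorflow as tf

# Should import without the DLL error and report whether a GPU is visible
print(tf.__version__)
print(tf.test.is_gpu_available())
</code></pre>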
|
python|python-3.x|tensorflow
| 0
|
7,506
| 69,516,366
|
RuntimeError: expected scalar type Float but found Double
|
<p>My code is as follows:</p>
<pre><code>net = nn.Linear(54, 7)
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0)
logloss = torch.nn.CrossEntropyLoss()
for i in range(niter):
    optimizer.zero_grad()
    y_2 = torch.from_numpy(np.array(y, dtype='float64'))
    X_2 = torch.from_numpy(np.array(X, dtype='float64'))
    outputs = net(X_2)
    loss = logloss(outputs, y_2)
    print(loss)
    loss.backward()
    optimizer.step()
</code></pre>
<p>And I got the following error message</p>
<pre><code>---> 57 outputs = net(X_2)
58 print(np.shape(outputs))
59 loss = logloss(outputs, y_2)
~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\linear.py in forward(self, input)
94
95 def forward(self, input: Tensor) -> Tensor:
---> 96 return F.linear(input, self.weight, self.bias)
97
98 def extra_repr(self) -> str:
~\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1845 if has_torch_function_variadic(input, weight):
1846 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847 return torch._C._nn.linear(input, weight, bias)
1848
1849
RuntimeError: expected scalar type Float but found Double
</code></pre>
<p>Can you specify what my problem is? Thank you. I expected that I had transformed the results into float through <code>torch.from_numpy(np.array(y, dtype='float64'))</code>, but it does not work.</p>
|
<p>You need to cast your tensors to <em>float32</em>, either with <code>dtype='float32'</code> or calling <code>float()</code> on your input tensors.</p>
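<p>A minimal sketch of that cast applied to the input tensor from the question (only <code>X_2</code> is shown here; <code>X</code> is the feature matrix):</p>
<pre><code>X_2 = torch.from_numpy(np.array(X, dtype='float32'))
# or equivalently, cast after conversion:
X_2 = torch.from_numpy(np.array(X)).float()

outputs = net(X_2)
</code></pre>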
|
pytorch|logistic-regression
| 1
|
7,507
| 69,371,559
|
Pandas - group sales by month
|
<p>I need help grouping sales by employee_ID and month.</p>
<p>with this:</p>
<pre><code>date |employee_ID |price
2000-01-01| 12 | 300
2000-01-02| 12 | 250
</code></pre>
<p>I want to make this:</p>
<pre><code>date    | employee_ID | total_sales
2000-01 | 12          | 2
</code></pre>
<p>I don't know how to do this with groupby.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Grouper.html" rel="nofollow noreferrer"><code>Grouper</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a>:</p>
<pre><code>df['date'] = pd.to_datetime(df['date'])
df = df.groupby([pd.Grouper(freq='M', key='date'), 'employee_ID']).size().reset_index(name='total_sales')
</code></pre>
<p>Or month periods by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.to_period.html" rel="nofollow noreferrer"><code>Series.dt.to_period</code></a>:</p>
<pre><code>df = df.groupby([df['date'].dt.to_period('m'), 'employee_ID']).size().reset_index(name='total_sales')
</code></pre>
|
python|pandas
| 1
|
7,508
| 69,434,657
|
How to iterate over pandas dataframe column which is a list and then map to new values
|
<p>I have a dataframe like the one below. I want to iterate through the "label" column values and replace them with new values according to label_dict = {1:'production', 2:'to_be_discussed'}.</p>
<pre><code> Name label score
0 prdn [2, 1] [0.886071, 0.78242475]
1 tbd [1] [0.9897076]
</code></pre>
<p>I tried the line of code below, but it looks like it doesn't work for list-valued columns.</p>
<pre><code>df['label'].replace(label_dict, inplace=True)
</code></pre>
<p>How to iterate over the column and change to new values?</p>
|
<p>Solution working with lists in column <code>label</code>:</p>
<pre><code>#if not lists but strings create them
#import ast
#df['label'] = df['label'].apply(ast.literal_eval)
</code></pre>
<p>Solution with removing values from original column if not exist in keys of dict:</p>
<pre><code>label_dict = {1:'production', 2:'to_be_discussed'}
df['label'] = df['label'].apply(lambda x: [label_dict[y] for y in x if y in label_dict])
</code></pre>
<p>Solution with NOT removing values from original column if not exist in keys of dict:</p>
<pre><code>label_dict = {1:'production', 2:'to_be_discussed'}
df['label'] = df['label'].apply(lambda x: [label_dict.get(y, y) for y in x])
</code></pre>
<p>Difference in changed sample data:</p>
<pre><code>df = pd.DataFrame({'label':[[2,1,5], [1]]})
label_dict = {1:'production', 2:'to_be_discussed'}
df['label1'] = df['label'].apply(lambda x: [label_dict[y] for y in x if y in label_dict])
df['label2'] = df['label'].apply(lambda x: [label_dict.get(y, y) for y in x])
print (df)
label label1 label2
0 [2, 1, 5] [to_be_discussed, production] [to_be_discussed, production, 5]
1 [1] [production] [production]
</code></pre>
|
python|pandas
| 3
|
7,509
| 41,157,005
|
python pandas - joining specific columns
|
<p>I have a main dataframe (MbrKPI4), and I want to left join it with another dataframe (mbrsdf). They have the same index. I am successful with the below.</p>
<pre><code>MbrKPI4.join(mbrsdf['Gender'])
</code></pre>
<p>However, I want to join more columns from mbrsdf, and the below does not work (MemoryError). Is there a way to join so that I can select the columns I want from mbrsdf?</p>
<pre><code>MbrKPI4.join(mbrsdf['Gender'], mbrsdf['Marital Status'])
</code></pre>
|
<p>Based on the documentation for <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer">join()</a>, I think you want to pass in a list of DataFrames/Series to left join with, or chain the join calls.</p>
<pre><code>d1.join([d2['Gender'], d2['Marital Status']])
d1.join(d2['Gender']).join(d2['Marital Status'])
</code></pre>
<p>Hope that works.</p>
|
python|pandas
| 0
|
7,510
| 41,092,836
|
Pandas get sorted index order for multiple columns
|
<p>I have something like the following multi-index Pandas series where the values are indexed by Team, Year, and Gender. </p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> multi_index=pd.MultiIndex.from_product([['Team A','Team B', 'Team C', 'Team D'],[2015,2016],['Male','Female']], names = ['Team','Year','Gender'])
>>> np.random.seed(0)
>>> df=pd.Series(index=multi_index, data=np.random.randint(1, 10, 16))
>>> df
>>>
Team Year Gender
Team A 2015 Male 6
Female 1
2016 Male 4
Female 4
Team B 2015 Male 8
Female 4
2016 Male 6
Female 3
Team C 2015 Male 5
Female 8
2016 Male 7
Female 9
Team D 2015 Male 9
Female 2
2016 Male 7
Female 8
</code></pre>
<p>My goal is to get a dataframe of the team ranked order for each of the 4 Year / Gender combinations (Male 2015, Male 2016, Female 2015, and Female 2016).</p>
<p>My approach has been to first unstack the dataframe so that it is indexed by team...</p>
<pre><code>>>> unstacked_df = df.unstack(['Year','Gender'])
>>> print unstacked_df
>>>
>>>
Year 2015 2016
Gender Male Female Male Female
Team
Team A 6 1 4 4
Team B 8 4 6 3
Team C 5 8 7 9
Team D 9 2 7 8
</code></pre>
<p>And then create a dataframe from the index orders by looping through and sorting each of those 4 columns...</p>
<pre><code>>>> team_orders = np.array([unstacked_df.sort_values(x).index.tolist() for x in unstacked_df.columns]).T
>>> result = pd.DataFrame(team_orders, columns=unstacked_df.columns)
>>> print result
Year 2015 2016
Gender Male Female Male Female
0 Team C Team A Team A Team B
1 Team A Team D Team B Team A
2 Team B Team B Team C Team D
3 Team D Team C Team D Team C
</code></pre>
<p>Is there an easier / better approach that I'm missing?</p>
|
<p>Starting from your unstacked version, you can use <code>.argsort()</code> with <code>.apply()</code> to rank order each column and then just use that as a lookup against the index:</p>
<pre><code>df.unstack([1,2]).apply(lambda x: x.index[x.argsort()]).reset_index(drop=True)
Year 2015 2016
Gender Male Female Male Female
0 Team C Team A Team A Team B
1 Team A Team D Team B Team A
2 Team B Team B Team C Team D
3 Team D Team C Team D Team C
</code></pre>
<p><strong>EDIT</strong>: Here's a little more info on why this works. With just the <code>.argsort()</code>, you get:</p>
<pre><code>print df.unstack([1,2]).apply(lambda x: x.argsort())
Year 2015 2016
Gender Male Female Male Female
Team
Team A 2 0 0 1
Team B 0 3 1 0
Team C 1 1 2 3
Team D 3 2 3 2
</code></pre>
<p>The lookup bit is essentially just doing the following for each column:</p>
<pre><code>df.unstack([1,2]).index[[2,0,1,3]]
Index([u'Team C', u'Team A', u'Team B', u'Team D'], dtype='object', name=u'Team')
</code></pre>
<p>and the <code>.reset_index()</code> gets rid of the now-meaningless index labels.</p>
|
python|pandas
| 2
|
7,511
| 41,131,728
|
Problems with KNN implementation in TensorFlow
|
<p>I am struggling to implement K-Nearest Neighbor in TensorFlow. I think that either I am overlooking a mistake or doing something terribly wrong.</p>
<p>The following code always predicts Mnist labels as 0.</p>
<pre><code>from __future__ import print_function
import numpy as np
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
K = 4
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(55000)  # whole training set
Xte, Yte = mnist.test.next_batch(10000)  # whole test set
# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
ytr = tf.placeholder("float", [None, 10])
xte = tf.placeholder("float", [784])
# Euclidean Distance
distance = tf.neg(tf.sqrt(tf.reduce_sum(tf.square(tf.sub(xtr, xte)), reduction_indices=1)))
# Prediction: Get min distance neighbors
values, indices = tf.nn.top_k(distance, k=K, sorted=False)
nearest_neighbors = []
for i in range(K):
    nearest_neighbors.append(np.argmax(ytr[indices[i]]))
sorted_neighbors, counts = np.unique(nearest_neighbors, return_counts=True)
pred = tf.Variable(nearest_neighbors[np.argmax(counts)])
# not works either
# neighbors_tensor = tf.pack(nearest_neighbors)
# y, idx, count = tf.unique_with_counts(neighbors_tensor)
# pred = tf.slice(y, begin=[tf.arg_max(count, 0)], size=tf.constant([1], dtype=tf.int64))[0]
accuracy = 0.
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # loop over test data
    for i in range(len(Xte)):
        # Get nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, xte: Xte[i, :]})
        # Get nearest neighbor class label and compare it to its true label
        print("Test", i, "Prediction:", nn_index,
              "True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if nn_index == np.argmax(Yte[i]):
            accuracy += 1. / len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)
</code></pre>
<p>Any help is greatly appreciated.</p>
|
<p>So in general it's not a good idea to go to <code>numpy</code> functions while defining your TensorFlow model. That's precisely why your code wasn't working. I have made just two changes to your code. I have replaced <code>np.argmax</code> with <code>tf.argmax</code>. I've also removed the comments from <code>#This doesn't work either</code>.</p>
<p>Here is the complete working code:</p>
<pre><code>from __future__ import print_function
import numpy as np
import tensorflow as tf
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
K = 4
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
# In this example, we limit mnist data
Xtr, Ytr = mnist.train.next_batch(55000)  # whole training set
Xte, Yte = mnist.test.next_batch(10000)  # whole test set
# tf Graph Input
xtr = tf.placeholder("float", [None, 784])
ytr = tf.placeholder("float", [None, 10])
xte = tf.placeholder("float", [784])
# Euclidean Distance
distance = tf.negative(tf.sqrt(tf.reduce_sum(tf.square(tf.subtract(xtr, xte)), reduction_indices=1)))
# Prediction: Get min distance neighbors
values, indices = tf.nn.top_k(distance, k=K, sorted=False)
nearest_neighbors = []
for i in range(K):
    nearest_neighbors.append(tf.argmax(ytr[indices[i]], 0))
neighbors_tensor = tf.stack(nearest_neighbors)
y, idx, count = tf.unique_with_counts(neighbors_tensor)
pred = tf.slice(y, begin=[tf.argmax(count, 0)], size=tf.constant([1], dtype=tf.int64))[0]
accuracy = 0.
# Initializing the variables
init = tf.initialize_all_variables()
# Launch the graph
with tf.Session() as sess:
    sess.run(init)
    # loop over test data
    for i in range(len(Xte)):
        # Get nearest neighbor
        nn_index = sess.run(pred, feed_dict={xtr: Xtr, ytr: Ytr, xte: Xte[i, :]})
        # Get nearest neighbor class label and compare it to its true label
        print("Test", i, "Prediction:", nn_index,
              "True Class:", np.argmax(Yte[i]))
        # Calculate accuracy
        if nn_index == np.argmax(Yte[i]):
            accuracy += 1. / len(Xte)
    print("Done!")
    print("Accuracy:", accuracy)
</code></pre>
|
python|machine-learning|tensorflow|knn
| 13
|
7,512
| 53,875,372
|
How do I modify this PyTorch convolutional neural network to accept a 64 x 64 image and properly output predictions?
|
<p>I took this convolutional neural network (CNN) from <a href="https://gist.github.com/johnolafenwa/96b3322aabb61d4d36fd870a77f02aa3" rel="nofollow noreferrer">here</a>. It accepts 32 x 32 images and defaults to 10 classes. However, I have 64 x 64 images with 500 classes. When I pass in 64 x 64 images (batch size held constant at 32), I get the following error.</p>
<pre>
ValueError: Expected input batch_size (128) to match target batch_size (32).
</pre>
<p>The stack trace starts at the line <code>loss = loss_fn(outputs, labels)</code>. The <code>outputs.shape</code> is <code>[128, 500]</code> and the <code>labels.shape</code> is <code>[32]</code>.</p>
<p>The code is listed here for completeness.</p>
<pre><code>class Unit(nn.Module):
    def __init__(self,in_channels,out_channels):
        super(Unit,self).__init__()
        self.conv = nn.Conv2d(in_channels=in_channels,kernel_size=3,out_channels=out_channels,stride=1,padding=1)
        self.bn = nn.BatchNorm2d(num_features=out_channels)
        self.relu = nn.ReLU()

    def forward(self,input):
        output = self.conv(input)
        output = self.bn(output)
        output = self.relu(output)
        return output


class SimpleNet(nn.Module):
    def __init__(self,num_classes=10):
        super(SimpleNet,self).__init__()
        self.unit1 = Unit(in_channels=3,out_channels=32)
        self.unit2 = Unit(in_channels=32, out_channels=32)
        self.unit3 = Unit(in_channels=32, out_channels=32)
        self.pool1 = nn.MaxPool2d(kernel_size=2)
        self.unit4 = Unit(in_channels=32, out_channels=64)
        self.unit5 = Unit(in_channels=64, out_channels=64)
        self.unit6 = Unit(in_channels=64, out_channels=64)
        self.unit7 = Unit(in_channels=64, out_channels=64)
        self.pool2 = nn.MaxPool2d(kernel_size=2)
        self.unit8 = Unit(in_channels=64, out_channels=128)
        self.unit9 = Unit(in_channels=128, out_channels=128)
        self.unit10 = Unit(in_channels=128, out_channels=128)
        self.unit11 = Unit(in_channels=128, out_channels=128)
        self.pool3 = nn.MaxPool2d(kernel_size=2)
        self.unit12 = Unit(in_channels=128, out_channels=128)
        self.unit13 = Unit(in_channels=128, out_channels=128)
        self.unit14 = Unit(in_channels=128, out_channels=128)
        self.avgpool = nn.AvgPool2d(kernel_size=4)
        self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4, self.unit5, self.unit6
                                 ,self.unit7, self.pool2, self.unit8, self.unit9, self.unit10, self.unit11, self.pool3,
                                 self.unit12, self.unit13, self.unit14, self.avgpool)
        self.fc = nn.Linear(in_features=128,out_features=num_classes)

    def forward(self, input):
        output = self.net(input)
        output = output.view(-1,128)
        output = self.fc(output)
        return output
</code></pre>
<p>Any ideas on how to modify this CNN to accept and properly return outputs? </p>
|
<p>The problem is an incompatible reshape (view) at the end.</p>
<p>You're using a sort of "flattening" at the end, which is different from a "global pooling". Both are valid for CNNs, but only the global poolings are compatible with any image size.</p>
<h2>The flattened net (your case)</h2>
<p>In your case, with a flatten, you need to keep track of all image dimensions in order to know how to reshape at the end.</p>
<p>So:</p>
<ul>
<li>Enter with 64x64</li>
<li>Pool1 to 32x32</li>
<li>Pool2 to 16x16</li>
<li>Pool3 to 8x8</li>
<li>AvgPool to 2x2</li>
</ul>
<p>Then, at the end you've got a shape of <code>(batch, 128, 2, 2)</code>, four times as many elements as you would get if the image were 32x32.</p>
<p>Then, your final reshape should be <code>output = output.view(-1,128*2*2)</code>.</p>
<p>This is a different net with a different classification layer, though, because <code>in_features=512</code>.</p>
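<p>A minimal sketch of those two changes for 64x64 inputs (the rest of <code>SimpleNet</code> is assumed unchanged):</p>
<pre><code># classification layer now sees 128 channels * 2 * 2 spatial positions
self.fc = nn.Linear(in_features=128*2*2, out_features=num_classes)

# ...and in forward(), flatten accordingly
output = output.view(-1, 128*2*2)
output = self.fc(output)
</code></pre>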
<h2>The global pooling net</h2>
<p>On the other hand, you could use the same model, same layers and same weights for any image size >= 32 if you replace the last pooling with a global pooling:</p>
<pre><code>def flatChannels(x):
    size = x.size()
    return x.view(size[0], size[1], size[2]*size[3])

def globalAvgPool2D(x):
    return flatChannels(x).mean(dim=-1)

def globalMaxPool2D(x):
    return flatChannels(x).max(dim=-1)[0]  # max(dim=...) returns (values, indices); keep the values
</code></pre>
<p>The ending of the model:</p>
<pre><code>        # removed the pool from here to put it in forward
        self.net = nn.Sequential(self.unit1, self.unit2, self.unit3, self.pool1, self.unit4,
                                 self.unit5, self.unit6, self.unit7, self.pool2, self.unit8,
                                 self.unit9, self.unit10, self.unit11, self.pool3,
                                 self.unit12, self.unit13, self.unit14)
        self.fc = nn.Linear(in_features=128, out_features=num_classes)

    def forward(self, input):
        output = self.net(input)
        output = globalAvgPool2D(output)  # or globalMaxPool2D
        output = self.fc(output)
        return output
</code></pre>
|
python|conv-neural-network|pytorch
| 1
|
7,513
| 53,873,845
|
Filter a pandas dataframe by a list of column values
|
<p>This is a subsample of my dataframe:</p>
<pre><code>idcontrn ctosaldo fecanota diamovto fecopera codsprod
491748 000 2017-08-25 3 2017-08-25 0
1014320 000 2018-05-28 99999 2018-05-28 33
1907630 000 2017-06-12 99999 2017-06-09 21
1573897 000 2018-01-25 613 2018-01-25 0
1713456 000 2017-08-08 17 2017-08-07 0
186315 000 2017-06-29 13 2017-06-28 0
150328 000 2017-10-23 1 2017-10-23 84
1531535 000 2017-04-25 1 2017-04-25 78
</code></pre>
<p>I wanted to extract the "codsprod" column's top 20 most frequent categories by occurrence, so I did this:</p>
<pre><code>pd.DataFrame(sample.groupby(['codsprod']).size()).sort_values(by = 0,ascending = False).reset_index()[0:21]
</code></pre>
<p>which yields:</p>
<pre><code>codsprod 0
0 0 319971
1 76 120026
2 33 62017
3 119 48138
4 14 42180
5 104 40756
6 48 26902
</code></pre>
<p>and so on... till the number 20.</p>
<p>Now what I want to do is filter the original df by these top 20 categories of the "codsprod" column. I know how to apply filters to a pandas df based on a condition, but writing something like:</p>
<p><code>sample[sample['codsprod'] == category_number]</code> just seems too tedious and long, since I would have to manually establish 20 conditions, one per category.</p>
<p>Is there a quicker and neater way of achieving this??</p>
<p>Thank you very much in advance.</p>
|
<p>Use <code>groupby</code> + <code>size</code> + <code>nlargest</code> to get the largest <code>'codsprod'</code> groups. Use <code>.isin</code> to filter the original <code>DataFrame</code>. To get the largest 2 groups:</p>
<pre><code>df[df.codsprod.isin(df.groupby('codsprod').size().nlargest(2).index)]
</code></pre>
<h3>Output:</h3>
<pre><code> idcontrn ctosaldo fecanota diamovto fecopera codsprod
0 491748 0 2017-08-25 3 2017-08-25 0
2 1907630 0 2017-06-12 99999 2017-06-09 21
3 1573897 0 2018-01-25 613 2018-01-25 0
4 1713456 0 2017-08-08 17 2017-08-07 0
5 186315 0 2017-06-29 13 2017-06-28 0
</code></pre>
<hr>
<h3>Explanation:</h3>
<p><code>df.groupby('codsprod').size()</code> returns a <code>Series</code>. The values of this <code>Series</code> are the group sizes, and the index of this <code>Series</code> is the corresponding <code>'codsprod'</code> value:</p>
<pre><code>df.groupby('codsprod').size()
#codsprod
#0 4
#21 1
#33 1
#78 1
#84 1
#dtype: int64
</code></pre>
<p>Taking <code>.nlargest(n)</code> will then return only the <code>n</code> largest groups. But note it doesn't deal with ties, it just takes whatever appears first (this wouldn't be too hard to extend to include anything that ties as well):</p>
<pre><code>df.groupby('codsprod').size().nlargest(2)
#codsprod
#0 4
#21 1
#dtype: int64
</code></pre>
<p>At this point, you don't care about how large the groups are, you want to know <em>which</em> groups are the largest. So you need the indices of this series. </p>
<pre><code>df.groupby('codsprod').size().nlargest(2).index
#Int64Index([0, 21], dtype='int64', name='codsprod')
</code></pre>
<p>This is basically a list of <code>'codsprod'</code> values, and to filter a <code>DataFrame</code> based on the value being equal to any value in that list you use <a href="https://stackoverflow.com/questions/12096252/use-a-list-of-values-to-select-rows-from-a-pandas-dataframe"><code>.isin</code></a>.</p>
|
python|python-3.x|pandas|dataframe|conditional-statements
| 2
|
7,514
| 53,979,515
|
Representing ratios in a pandas Dataframe Columns
|
<p>I am trying to represent ratios in a dataframe column. However, the formatting I am getting is totally horrendous, even though I am able to print what I want with a plain print function. The real problem is representing it in a correct format.</p>
<p>What I have done is create a greatest common divisor function and apply it to my dataframe; now I want to build the ratio column.</p>
<pre><code>def gcd(a,b):
    """ Greatest common divisor """
    while b!=0:
        r=a%b
        a,b=b,r
    return a
#trying the function
a= int(15/gcd(15,10))
b= int(10/gcd(15,10))
print( a,':',b)
# result
3 : 2
# Dataframe
d = {'col1': [3, 2], 'col2': [12, 4]}
df = pd.DataFrame(data=d)
df
col1 col2
0 3 12
1 2 4
#applying the function to the frame
df['gcd'] = df.apply(lambda x: gcd(x['col2'], x['col1']), axis=1)
col1 col2 gcd
0 3 12 3
1 2 4 2
df['ratio']= str(df['col1']/df['gcd']) + ':' + str(df['col2']/df['gcd'])
# this result gives me a very bad formatting
</code></pre>
<p>what I want is a ratio column that looks like this:</p>
<pre><code>ratio
3:2
4:5
</code></pre>
<p>The main problem for me is representing something with the colons. </p>
|
<p>It's not clear how you derive <code>3:2</code> and <code>4:5</code>. But note you can use NumPy (via <code>np.gcd</code>) for calculating the greatest common divisor, since these operations will be vectorised. Alternatively, you can use the <a href="https://docs.python.org/3.7/library/fractions.html" rel="nofollow noreferrer"><code>fractions</code></a> module with a list comprehension for conversion to strings.</p>
<p>Let's assume we start with this dataframe.</p>
<pre><code># input dataframe
df = pd.DataFrame({'col1': [3, 2], 'col2': [12, 4]})
</code></pre>
<h3><a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.gcd.html" rel="nofollow noreferrer"><code>np.gcd</code></a>: vectorised calculation</h3>
<p>This solution is <em>partially</em> vectorised. The calculation itself is performed column-wise. String construction, either via concatenation or via f-strings and a list comprehension, uses Python-level loops.</p>
<pre><code>factored = df.div(np.gcd(df['col1'], df['col2']), axis=0).astype(int)
df['ratio'] = factored['col1'].astype(str) + ':' + factored['col2'].astype(str)
# alternative list comprehension
# zipper = zip(factored['col1'], factored['col2'])
# df['ratio'] = [f'{x}:{y}' for x, y in zipper]
</code></pre>
<h3><a href="https://docs.python.org/3/library/fractions.html#fractions.Fraction" rel="nofollow noreferrer"><code>Fraction</code></a> + <a href="https://docs.python.org/3/library/stdtypes.html#str.replace" rel="nofollow noreferrer"><code>str.replace</code></a> + list comprehension</h3>
<p>Solely with row-wise operations, you can use a single list comprehension:</p>
<pre><code>from fractions import Fraction
zipper = zip(df['col1'], df['col2'])
df['ratio'] = [str(Fraction(x, y)).replace('/', ':') for x, y in zipper]
</code></pre>
<p>The result is the same in either case:</p>
<pre><code> col1 col2 ratio
0 3 12 1:4
1 2 4 1:2
</code></pre>
|
python|pandas
| 0
|
7,515
| 52,458,563
|
Tensorflow Parameter server: Is it necessary?
|
<p>I'm currently experimenting with tensorflow distribution and I was wondering if it is necessary to include the parameter server.</p>
<p>The method that I am using is tf.estimator.train_and_evaluate.
My setup is one master, one worker, and one parameter server running on three servers.</p>
<p>It seems that the parameter server is just listening to the other two servers but it's not doing anything else. Based on mrry's answer from <a href="https://stackoverflow.com/questions/40979688/tensorflow-using-parameter-servers-in-distributed-training">Tensorflow: Using Parameter Servers in Distributed Training</a>, I tried distributing with only one worker and one master and I was still able to get results.</p>
<p>Does tf.estimator.train_and_evaluate need a parameter server?</p>
|
<p>After doing multiple tests: yes, you do need a parameter server.</p>
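<p>For reference, a minimal sketch of the kind of <code>TF_CONFIG</code> such a setup uses with <code>tf.estimator.train_and_evaluate</code> (host names and ports here are placeholders, not taken from the question; each machine runs the same code with its own <code>task</code> entry):</p>
<pre><code>import json, os

# Hypothetical cluster: one chief, one worker, one parameter server.
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'chief': ['master-host:2222'],
        'worker': ['worker-host:2222'],
        'ps': ['ps-host:2222'],
    },
    # On the parameter-server machine; 'chief' or 'worker' on the others.
    'task': {'type': 'ps', 'index': 0},
})
</code></pre>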
|
python|tensorflow|distributed-computing
| 1
|
7,516
| 52,479,349
|
How to use named binds with bulk inserts (executemany) in cx_Oracle from pandas dataframe
|
<p>I am uncertain how to bulk insert into Oracle from Python 3 using named bind variables, when the source data is in a pandas DataFrame. The code below shows my attempt. With unnamed binds it is pretty easy, but error-prone, as the order of binds needs to be the same as the columns in the DataFrame.</p>
<pre><code> "Named pandas binds with cursor.executemany in cx_oracle, how ?"
import pandas as pd
import cx_Oracle
# create table t( a number, b varchar2 (20 char));
df = pd.DataFrame(data={'a': [1, 2], 'b': ["Dog", "Cat"]})
conn = cx_Oracle.connect('/@DB')
cur = conn.cursor()
# Bulk insert, numbered binds work
cur.execute("truncate table t")
cur.executemany("insert into t (a, b) values (:1, :2)", df.values.tolist())
print(pd.read_sql("select a, b from t", con=conn))
# Insert, named binds work
cur.execute("truncate table t")
cur.execute("insert into t (a, b) values (:cc, :dd)", dd="Donkey", cc=1)
print(pd.read_sql("select a, b from t", con=conn))
# Bulk insert, named binds do not work
cur.execute("truncate table t")
cur.executemany("insert into t (a, b) values (:cc, :dd)", dd=df['b'].values.tolist(), cc=df['a'].values.tolist())
# TypeError: Required argument 'parameters' (pos 2) not found
print(pd.read_sql("select a, b from t", con=conn))
#
conn.commit()
cur.close()
conn.close()
</code></pre>
<p>Niels</p>
|
<p>If you intend to use named bind variables you will need to do the following instead:</p>
<pre><code>[{"a" : 1, "b" : "Dog"}, {"a" : 2, "b" : "Cat"}]
</code></pre>
<p>In other words you need to create a list of dictionaries instead of a list of lists.</p>
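<p>A minimal sketch of building that list of dictionaries from the question's DataFrame (assuming the bind names are chosen to match the column names; depending on the cx_Oracle version you may need to convert NumPy scalars to plain Python types first):</p>
<pre><code>rows = df.to_dict('records')
# rows == [{'a': 1, 'b': 'Dog'}, {'a': 2, 'b': 'Cat'}]
cur.executemany("insert into t (a, b) values (:a, :b)", rows)
</code></pre>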
|
python-3.x|pandas|dataframe|cx-oracle
| 1
|
7,517
| 52,798,399
|
EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs
|
<p>Tensorflow version 1.10</p>
<p>Using: <code>DNNClassifier</code> and <code>tf.estimator.FinalExporter</code></p>
<p>I'm using the Iris example from TF <a href="https://developers.googleblog.com/2017/09/introducing-tensorflow-datasets.html" rel="nofollow noreferrer">blog</a>.
I defined the following code:</p>
<pre><code># The CSV features in our training & test data.
COLUMN_NAMES = ['SepalLength',
                'SepalWidth',
                'PetalLength',
                'PetalWidth',
                'Species']
FEATURE_COLUMNS = COLUMN_NAMES[:4]
INPUT_COLUMNS = [
    tf.feature_column.numeric_column(column) for column in COLUMN_NAMES
]

def serving_input_receiver_fn():
    """Build the serving inputs."""
    inputs = {}
    for feat in INPUT_COLUMNS:
        inputs[feat.name] = tf.placeholder(shape=[None], dtype=feat.dtype)
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)
</code></pre>
<p>This is how I call my functions:</p>
<pre><code>train_spec = tf.estimator.TrainSpec(
    train_input, max_steps=hparams.train_steps)

exporter = tf.estimator.FinalExporter(
    'iris', serving_input_receiver_fn)

eval_spec = tf.estimator.EvalSpec(
    eval_input,
    steps=hparams.eval_steps,
    exporters=[exporter],
    name='iris-eval')

run_config = tf.estimator.RunConfig(
    session_config=_get_session_config_from_env_var())
run_config = run_config.replace(model_dir=hparams.job_dir)
print('Model dir: %s', run_config.model_dir)

estimator = model.build_estimator(
    # Construct layers sizes.
    config=run_config,
    hidden_units=[10, 20, 10],
    n_classes=3)

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
</code></pre>
<p>I get the following messages:</p>
<pre><code>INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Signatures INCLUDED in export for Eval: None
INFO:tensorflow:Signatures INCLUDED in export for Classify: None
INFO:tensorflow:Signatures INCLUDED in export for Regress: None
INFO:tensorflow:Signatures INCLUDED in export for Predict: ['predict']
INFO:tensorflow:Signatures INCLUDED in export for Train: None
INFO:tensorflow:Signatures EXCLUDED from export because they cannot be be served via TensorFlow Serving APIs:
INFO:tensorflow:'serving_default' : Classification input must be a single string Tensor; got {'SepalLength': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'PetalLength': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>, 'PetalWidth': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'SepalWidth': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'Species': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>}
INFO:tensorflow:'classification' : Classification input must be a single string Tensor; got {'SepalLength': <tf.Tensor 'Placeholder:0' shape=(?,) dtype=float32>, 'PetalLength': <tf.Tensor 'Placeholder_2:0' shape=(?,) dtype=float32>, 'PetalWidth': <tf.Tensor 'Placeholder_3:0' shape=(?,) dtype=float32>, 'SepalWidth': <tf.Tensor 'Placeholder_1:0' shape=(?,) dtype=float32>, 'Species': <tf.Tensor 'Placeholder_4:0' shape=(?,) dtype=float32>}
WARNING:tensorflow:Export includes no default signature!
</code></pre>
<p>When I print <code>serving_input_receiver_fn</code> I get:</p>
<pre><code>ServingInputReceiver(features={'sepal_width': <tf.Tensor 'Placeholder_1:0' shape=(?, 1) dtype=float32>, 'petal_width': <tf.Tensor 'Placeholder_3:0' shape=(?, 1) dtype=float32>, 'sepal_length': <tf.Tensor 'Placeholder:0' shape=(?, 1) dtype=float32>, 'petal_length': <tf.Tensor 'Placeholder_2:0' shape=(?, 1) dtype=float32>}, receiver_tensors={'sepal_width': <tf.Tensor 'Placeholder_1:0' shape=(?, 1) dtype=float32>, 'petal_width': <tf.Tensor 'Placeholder_3:0' shape=(?, 1) dtype=float32>, 'sepal_length': <tf.Tensor 'Placeholder:0' shape=(?, 1) dtype=float32>, 'petal_length': <tf.Tensor 'Placeholder_2:0' shape=(?, 1) dtype=float32>}, receiver_tensors_alternatives=None)
</code></pre>
<p>In the export folder there is nothing (CSV, JSON, etc.):</p>
<pre><code>gs://<my-bucket>/iris/iris_20181014_214916/export/:
gs://<my-bucket>/iris/iris_20181014_214916/export/
</code></pre>
|
<p>I found a solution <a href="https://github.com/GoogleCloudPlatform/cloudml-samples/blob/master/iris/tensorflow/estimator/trainer/model.py" rel="nofollow noreferrer">here</a>.</p>
<pre><code>def _make_input_parser(with_target=True):
    """Returns a parser func according to file_type, task_type and target.

    Need to set record_default for last column to integer instead of float in
    case of classification tasks.
    Args:
        with_target (boolean): Pass label or not.
    Returns:
        It returns a parser.
    """
    def _decode_csv(line):
        """Takes the string input tensor and parses it to feature dict and target.

        All the columns except the first one are treated as feature column. The
        first column is expected to be the target.
        Only returns target for if with_target is True.
        Args:
            line: csv rows in tensor format.
        Returns:
            features: A dictionary of features with key as "column_names" from
                self._column_header.
            target: tensor of target values which is the first column of the file.
                This will only be returned if with_target==True.
        """
        column_header = column_names if with_target else column_names[:4]
        record_defaults = [[0.] for _ in xrange(len(column_names) - 1)]
        # Pass label as integer.
        if with_target:
            record_defaults.append([0])
        columns = tf.decode_csv(line, record_defaults=record_defaults)
        features = dict(zip(column_header, columns))
        target = features.pop(column_names[4]) if with_target else None
        return features, target
    return _decode_csv

def serving_input_receiver_fn():
    """This is used to define inputs to serve the model.
    Returns:
        A ServingInputReciever object.
    """
    csv_row = tf.placeholder(shape=[None], dtype=tf.string)
    features, _ = _make_input_parser(with_target=False)(csv_row)
    return tf.estimator.export.ServingInputReceiver(features,
                                                    {'csv_row': csv_row})
</code></pre>
|
tensorflow|google-cloud-ml|tensorflow-estimator
| 0
|
7,518
| 52,600,059
|
Pandas qcut based on expanding window of all columns
|
<p>Let's say I have a dataframe:</p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.normal(0,1,[100,50]))
</code></pre>
<p>that looks like:</p>
<pre><code> 0 1 2 3 4 5 6 \
0 -0.141305 2.158252 1.006520 -1.004185 -0.213160 0.648904 -0.089369
1 -1.373167 -1.100959 1.007023 0.699591 -1.667834 1.422182 0.940912
2 -0.212014 1.967436 0.401133 -0.996298 -1.696490 -0.857453 -0.686584
3 -0.351902 0.413816 -0.494869 0.448740 0.146897 -0.798095 -0.546489
4 0.416376 -0.689577 -0.967050 -1.667480 1.223966 -1.382113 -0.812368
7 8 9 ... 40 41 42 \
0 0.282299 0.627085 1.111637 ... 1.354044 0.335316 -1.817465
1 -0.540302 -1.276811 -0.077210 ... 0.556072 0.642445 0.313477
2 0.601571 -0.989826 0.942893 ... 0.803984 0.286897 -0.507413
3 -0.277153 -1.068749 1.720561 ... 0.317774 0.744266 -1.671273
4 0.391501 0.703358 0.972910 ... -0.251225 -0.918734 0.226089
43 44 45 46 47 48 49
0 -2.088606 -1.297459 -1.135577 -0.579162 -0.538286 1.223049 -0.577341
1 2.307270 0.381122 0.970177 0.011552 -0.704012 -1.759955 0.649379
2 0.139226 1.287651 0.335977 0.832819 -0.701925 1.656187 0.218177
3 0.621638 -2.893360 -1.349287 2.160106 0.977205 -0.550635 -0.473224
4 -0.646419 2.197215 -0.483294 -1.141479 0.706850 2.686787 0.054517
</code></pre>
<p>The following code does what I need but in an incredibly inefficient way:</p>
<pre><code>lbound_ = float(pd.DataFrame(np.ravel(df.iloc[0:10,:].values)).quantile(0.))
ubound_ = float(pd.DataFrame(np.ravel(df.iloc[0:10,:].values)).quantile(0.1))
df[(df>=lbound_) & (df<ubound_)]
</code></pre>
<p>I want to decile/quantile bucket my data, at each point in time based on any data observed until that point in any column in an expanding manner.</p>
<p>The above only executes for <code>0:10</code> for the first bucket <code>[0,.1)</code>.</p>
<p>A very slow implementation looks like:</p>
<pre><code>def get_quantile(df,q):
    return np.percentile(df.ravel(),q)

df.expanding().apply(get_quantile,args=(.1,))
</code></pre>
<p>How would I generalize this and do it efficiently?</p>
<p>A bit stumped here, and would appreciate guidance.</p>
<p>Thank you</p>
|
<p>For anyone who stumbles upon this Q, below is what I went with. There may be faster solutions, so please post if you have any better ideas.</p>
<p>Thanks</p>
<pre><code>def standardize_block(df_standardize_arg):
    df_standardize_arg = df_standardize_arg.copy()
    ix_ = df_standardize_arg.index
    prior_data = np.array([])
    output = []
    for i in range(0, len(ix_)):
        data = np.array(df_standardize_arg.loc[ix_[i]].values.ravel())
        data = data[~np.isnan(data)]
        prior_data = np.concatenate((prior_data, data), axis=0)
        # this date piece may be specific to my use case.
        output.append({'date': ix_[i],
                       'mean': prior_data.mean(),
                       'std': prior_data.std()})
    df_output = pd.DataFrame(output)
    del df_standardize_arg  # my data is quite large, so I delete it here.
    df_output.index = df_output['date']
    del df_output['date']
    return df_output.copy()
</code></pre>
|
python|python-3.x|pandas|dataframe
| 0
|
7,519
| 46,449,716
|
Most efficient way to test whether each element from one 1-d array exists in corresponding row of another 2-d array, using python
|
<p>I would like to know the most efficient way to test whether each element from one 1-d array exists in corresponding row of another 2-d array, using python</p>
<p>Specifically, I have two arrays. The first is an 1-d array of integers. The second is a 2-d array of integers.</p>
<p>Sample input:</p>
<pre><code>[1, 4, 12, 9] # array 1
[[1, 12, 299],
[2, 5, 11],
[1, 3, 11],
[0, 1, 9]] # array 2
</code></pre>
<p>Expected output:</p>
<pre><code>[True, False, False, True]
</code></pre>
|
<p>You can reshape <code>a</code> to a 2d array, compare with <code>b</code>, and then check if there's any <code>True</code> in each row:</p>
<pre><code>np.equal(np.reshape(a, (-1,1)), b).any(axis=1)
</code></pre>
<hr>
<pre><code>a = [1, 4, 12, 9] # array 1
b = [[1, 12, 299],
[2, 5, 11],
[1, 3, 11],
[0, 1, 9]]
np.equal(np.reshape(a, (-1,1)), b).any(1)
# array([ True, False, False, True], dtype=bool)
</code></pre>
|
python|numpy
| 1
|
7,520
| 58,346,764
|
Histogram Binning Two Data Sets With Reference To Each Other
|
<p>I have two lists of equal length. Each item in one list corresponds to the same index in the other list.
I have histogrammed one of the two lists:</p>
<pre><code>xnums, xbins = np.histogram(x)
</code></pre>
<p>What I need is a quick way to bin the corresponding data accordingly, i.e. bin the data in y in correspondence with how x has been binned.
I have tried:</p>
<pre><code>ybins = []
for i in range(len(xnums)):
    yi = []
    for j in range(len(y)):
        if x[j] >= xbins[i] and x[j] < xbins[i+1]:
            yi.append(y[j])
        elif i == len(xnums) and x[j] == max(x):
            yi.append(y[j])
    ybins.append(yi)
</code></pre>
<p>but this is slow, and for some reason it misses out some values.</p>
<p>Is there a more efficient way to do this?</p>
|
<p>If I understand correctly, then you want to use <code>xbins</code> as <code>ybins</code>, and find the corresponding <code>ynums</code> (i.e. the counts).</p>
<p>You want <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.digitize.html" rel="nofollow noreferrer"><code>np.digitize()</code></a>. Give it <code>y</code> and <code>xbins</code> and it will give you the indices of the bins that <code>y</code>'s elements fall into.</p>
<p>Then you can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.bincount.html" rel="nofollow noreferrer"><code>np.bincount()</code></a> to count everything and get <code>ynums</code>.</p>
<pre><code>y_digitized = np.digitize(y, xbins)
ynums = np.bincount(y_digitized)
</code></pre>
|
python|numpy|histogram|binning
| 0
|
7,521
| 58,498,669
|
video segmentation on mobile devices
|
<p>I have used tensorflow.js to do person segmentation of live video in a web browser. Is there a similar tensorflowlite/mobile version of the same? Is there any android SDK available to do person/video segmentation on mobile clients? Any pointers would be very helpful.</p>
<p>thanks
Sowmya</p>
|
<p>Yes, TensorFlow Lite is available for Android. <a href="https://www.tensorflow.org/lite/guide/android" rel="nofollow noreferrer">Click here for more.</a></p>
|
tensorflow|tensorflow-lite
| 0
|
7,522
| 69,026,756
|
How to select a row in a pandas DataFrame datetime index using a datetime variable?
|
<p>I am not a professional programmer at all and am slowly accumulating some experience in Python.
This is the issue I encountered.</p>
<p>On my dev machine I had Python 3.7 installed with pandas version 0.24.4.</p>
<p>The following sequence was working perfectly fine.</p>
<pre><code> >>> import pandas as pd
>>> df = pd.Series(range(3), index=pd.date_range("2000", freq="D", periods=3))
</code></pre>
<pre><code>>>> df
2000-01-01 0
2000-01-02 1
2000-01-03 2
Freq: D, dtype: int64
</code></pre>
<pre><code>>>> import datetime
>>> D = datetime.date(2000,1,1)
>>> df[D]
0
</code></pre>
<p>In the production environment the pandas version is 1.1.4 and the sequence described does not work anymore.</p>
<pre><code>>>> df[D]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/ec2-user/.local/lib/python3.7/site-packages/pandas/core/series.py", line 882, in __getitem__
return self._get_value(key)
File "/home/ec2-user/.local/lib/python3.7/site-packages/pandas/core/series.py", line 989, in _get_value
loc = self.index.get_loc(label)
File "/home/ec2-user/.local/lib/python3.7/site-packages/pandas/core/indexes/datetimes.py", line 622, in get_loc
raise KeyError(key)
KeyError: datetime.date(2000, 1, 1)
</code></pre>
<p>Then, unexpectedly, by transforming D into a string the following command did work:</p>
<pre><code>>>> df[str(D)]
0
</code></pre>
<p>Any idea why this behaviour has changed between the different versions?
Is this behaviour a bug, or will it be permanent over time?
Should I transform all the selections by datetime variables in the code into string variables, or is there a more robust way to do this?</p>
|
<p>It depends on the version. If you need a more robust solution, use <code>datetime.datetime</code> objects to match the <code>DatetimeIndex</code>:</p>
<pre><code>import datetime
D = datetime.datetime(2000,1,1)
print (df[D])
0
</code></pre>
|
python-3.x|pandas|dataframe|datetime
| 1
|
7,523
| 68,946,196
|
Reading in a Database file (DBF) using Python and then plotting the shapefile
|
<p>My goal is to read in a DBF from CropScape using Python and plot the DBF to create a figure as in the example. I need help reading in and plotting the DBF.</p>
<p>DATA: I download the CropScape Cropland DBF from <a href="https://nassgeodata.gmu.edu/CropScape/" rel="nofollow noreferrer">https://nassgeodata.gmu.edu/CropScape/</a>. I define my <strong>Area of Interest</strong> as Iowa.</p>
<p><a href="https://i.stack.imgur.com/R2rMP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R2rMP.png" alt="Define Area of Interest" /></a></p>
<p>Then I download <strong>Defined Area of Interest Data</strong> for the year <code>2020</code>, <code>corn</code> and <code>soybean</code>, and the projection <code>Degrees Lat/Lon, WGS84 Datum</code>.</p>
<p><a href="https://i.stack.imgur.com/PGcuY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PGcuY.png" alt="Download Defined Area of Interest Data" /></a></p>
<p><a href="https://i.stack.imgur.com/uy7Eo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uy7Eo.jpg" alt="Flight Path overlaid on Iowa CropScape DBF" /></a></p>
<p>What I have so far:</p>
<pre><code>filename = 'CDL_2020_clip_20210826173123_1609691609.tif.vat.dbf'
dbf = gpd.read_file(filename)
</code></pre>
<p>Additional information that might be useful is:</p>
<pre><code>dbf.head() =
VALUE RED GREEN BLUE CLASS_NAME OPACITY geometry
0 0 0.0 0.000 0.000 Background 0.0 None
1 1 1.0 0.827 0.000 Corn 1.0 None
2 2 1.0 0.149 0.149 Cotton 1.0 None
3 3 0.0 0.659 0.894 Rice 1.0 None
4 4 1.0 0.620 0.043 Sorghum 1.0 None
len(dbf) = 256
</code></pre>
|
<ul>
<li>As per <strong>ArcGIS_coloro_table_readme.docx</strong>, which downloads in the zip file, all of the downloaded files are specifically for <code>GeoTIFF</code> in <code>ArgGIS</code>. There is no shape, or geolocation data in the <code>.dbf</code> files.</li>
<li><strong><code>'CDL_2020_clip_20210826173123_1609691609.tif.vat.dbf'</code> is not a shapefile</strong>; there is no geolocation data in the file.</li>
<li><strong>It doesn't seem that this can be directly accomplished with <code>geopandas</code></strong>, which appears to be the goal of the question.</li>
<li><a href="https://stackoverflow.com/q/24956653/7758804">Read elevation using gdal python from geotiff</a> might allow for reading the <code>.tif</code> file with <a href="https://pypi.org/project/GDAL/" rel="nofollow noreferrer">PyPi: GDAL 3.3.1</a> (a minimal sketch follows this list)</li>
<li>Other options: <a href="https://www.google.com/search?q=python%20read%20geotiff%20site:stackoverflow.com&sxsrf=AOaemvL6UzbFVygTUYAGzc1bzyg62yq_HQ:1630348288559&sa=X&ved=2ahUKEwiluOXUsNnyAhVhCjQIHS4DAbwQrQIoBHoECDUQBQ&biw=1920&bih=975" rel="nofollow noreferrer">python read geotiff site:stackoverflow.com</a> and <a href="https://www.google.com/search?q=python%20read%20geotiff%20site:gis.stackexchange.com&sxsrf=AOaemvL6UzbFVygTUYAGzc1bzyg62yq_HQ:1630348288559&sa=X&ved=2ahUKEwiluOXUsNnyAhVhCjQIHS4DAbwQrQIoBHoECAcQBQ&biw=1920&bih=975" rel="nofollow noreferrer">python read geotiff site:gis.stackexchange.com</a></li>
</ul>
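<p>A minimal sketch of reading the raster itself with GDAL (the filename is assumed to be the <code>.tif</code> that accompanies the <code>.dbf</code> from the question; this reads the CDL class codes, not the colour table):</p>
<pre><code>from osgeo import gdal

ds = gdal.Open('CDL_2020_clip_20210826173123_1609691609.tif')
band = ds.GetRasterBand(1)
values = band.ReadAsArray()          # 2-d numpy array of CDL class codes
geotransform = ds.GetGeoTransform()  # origin and pixel size, for georeferencing
print(values.shape, geotransform)
</code></pre>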
<h2><code>ArcGIS_coloro_table_readme.docx</code> contents</h2>
<blockquote>
<p>To view the image attribute file of the downloaded CDL data in GeoTIFF format in ArgGIS software, the user has to load a .tif.vat.dbf created by ArcGIS. The .tif.vat.dbf file has to have the same file name as the CDL data file. There are two .tif.vat.dbf files included in the downloaded package. The default .tif.vat.dbf file, which has the same file name as the CDL file, is compatible with the ArcGIS. 10.3.1 as shown in the following example:
CDL_2019_clip_20200203101819_718535908.tif
CDL_2019_clip_20200203101819_718535908.tif.vat.dbf</p>
</blockquote>
<blockquote>
<p>Please notice that all three file extension tags .tif.vat.dbf are required.</p>
</blockquote>
<blockquote>
<p>The ArcGIS. 10.3.1 (including earlier version) and ArcGIS 10.7.0 use different scales for RGB values. The ArcGIS 10.3.1 outputs RGB values ranging from 0 – 1 (same as the Erdas Imagine values), while ArcGIS 10.7.0 has RGB values ranging from 0 – 255.</p>
</blockquote>
<blockquote>
<p>All CDLs from previous years (up to 2018) have .vat.dbf files with RGB values ranging from 0 – 1. Therefore, they are compatible with the .tif.vat.dbf file of the ArcGIS. 10.3.1 version. Starting from 2019 CDLs, the .tif.vat.dbf file of ArcGIS 10.7.0 are included in the downloading package to make it compatible with ArcGIS 10.7.0. Users have to rename the included file “ArcGIS10.7.0_2019_30m_cdls.tif.vat.dbf” to the same name as the CDL file’s like the following:
CDL_2019_clip_20200203101819_718535908.tif.vat.dbf
If you want to view the image attribute in the ArcGIS 10.7.0. The renamed .tif.vat.dbf file has to be loaded into ArcGIS 10.7.0. along with the CDL file.</p>
</blockquote>
<blockquote>
<p>NASS maintains an extensive Frequently Asked Questions (FAQs) webpage, including how to properly attribute the CDL in ESRI ArcGIS and Erdas Imagine, at the following webpage: <a href="https://www.nass.usda.gov/Research_and_Science/Cropland/sarsfaqs2.php#Section2_1.0" rel="nofollow noreferrer">https://www.nass.usda.gov/Research_and_Science/Cropland/sarsfaqs2.php#Section2_1.0</a>.</p>
</blockquote>
|
python|shapefile|geopandas|geotiff
| 1
|
7,524
| 69,167,900
|
Numpy slice not updating as expected
|
<p>I've written an algorithm for LU decomposition of a square matrix. Problem I'm facing is that the values in a NumPy 2-d array <strong>slice</strong> are <strong>not updating as expected</strong>. See the image at the bottom.</p>
<p>The matrix A is defined as follows:</p>
<pre><code>A = np.array([[1, -3, 5, 2], [1, 0, 1, -1], [6, 1, -9, 2], [1, 0, -6, 3]])
</code></pre>
<p>The Algorithm:</p>
<pre><code>def LUGAUSS(A):
if A.shape[0] != A.shape[1]:
return "Invalid Matrix. A must be a square marix."
multipliers = dict()
for i in range(A.shape[0]):
print('i',i)
if A[i,i] == 0:
return "Pivot is zero"
else:
multipliers[(i+1,i)] = A[i+1:,i] / A[i,i]
A[i+1:,i] = multipliers[(i+1,i)] # <- !!This line is the problem!!
A[i+1:,i+1:] = A[i+1:,i+1:] - A[i+1:,i].reshape(-1,1) * A[i,i+1:].reshape(1,-1)
L = np.eye(A.shape[0])
for x in range(L.shape[1]):
L[x+1:,x] = multipliers[(x+1,x)]
U = A.copy()
for x in range(U.shape[1]):
U[x+1:,x] = 0
return (L,U,multipliers)
</code></pre>
<p>The Following update should change [6,1] to [6.333,1].</p>
<p>See the image:</p>
<p><img src="https://i.stack.imgur.com/4NRCq.png" alt="Img" /></p>
|
<p>The array was created from integers, so it has an integer dtype and the incoming float values are being cast (truncated) to integers. If you need floats, you should use <code>astype</code> to convert it to float64 before running the decomposition.</p>
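<p>A minimal sketch of that fix, reusing the <code>A</code> and <code>LUGAUSS</code> from the question:</p>
<pre><code>import numpy as np

A = np.array([[1, -3, 5, 2], [1, 0, 1, -1], [6, 1, -9, 2], [1, 0, -6, 3]])
A = A.astype(np.float64)     # float dtype, so multipliers like 6.333 are no longer truncated
L, U, multipliers = LUGAUSS(A)
</code></pre>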
|
python|numpy|array-broadcasting
| 0
|
7,525
| 68,982,750
|
Python - Pandas - KeyError: "None of [Index(['questions'], dtype='object')] are in the [columns]"
|
<p>I am new to Python, and currently trying to write a code which will suggest answers to specific questions automatically. I have this issue, when running the following code:</p>
<pre><code>import pandas as pd
df=pd.read_csv("Book11.csv", encoding= 'cp1252');
df.columns=["question","answers"]
df
print(df)
import re
import gensim
from gensim.parsing.preprocessing import remove_stopwords
def clean_sentence(sentence,stopwords=False):
sentence = sentence.lower().strip()
sentence = re.sub(r'[^a-z0-9\s]','',sentence)
if stopwords:
sentence = remove_stopwords(sentence)
return sentence
def get_cleaned_sentences (df, stopwords=False):
sents=df[["questions"]];
cleaned_sentences=[]
for index,row in df.iterrows():
#print(index.row)
cleaned=clean_sentence(row["questions"], stopwords);
cleaned_sentences.append(cleaned);
return cleaned_sentences;
cleaned_sentences=get_cleaned_sentences(df, stopwords=True)
print(cleaned_sentences);
</code></pre>
<p>-when running on Colab - it works fine
-when running on local Python 3.9.1 under Windows - it works fine
-when running on Ubuntu VM, running the same code just gives me the following error: KeyError: "None of [Index(['questions'], dtype='object')] are in the [columns]"</p>
<p>I have tried all the workarounds found after searching the above error , without success.</p>
<p>I do not understand why this works seamlessly on two environments but fails on the third.</p>
<p>Many thanks in advance.</p>
|
<p>On the windows computer, try reading and changing the encoding to utf8:</p>
<pre><code>import pandas as pd
df=pd.read_csv("Book11.csv", encoding= 'cp1252')
df.to_csv("Book11-utf8.csv", encoding='utf-8', index_col=None)
</code></pre>
<p>Copy the utf8 csv file to the VM.</p>
<p>Then on the VM machine, try to read the <code>Book11-utf8.csv</code> file with utf8:</p>
<pre><code>df=pd.read_csv("Book11-utf8.csv", encoding= 'utf-8')
</code></pre>
|
python|pandas|dataframe
| 0
|
7,526
| 68,916,814
|
After scaling the train and test data, the model score goes to 1, something doesn't seem right?
|
<p>prescaled features
<a href="https://i.stack.imgur.com/kDmb6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kDmb6.png" alt="enter image description here" /></a></p>
<pre><code>scaler.fit(x_train)
scaler.fit(x_test)
xts= scaler.transform(x_train)
xts=pd.DataFrame(xts, columns=x_train.columns)
xtest= scaler.transform(x_test)
xtest=pd.DataFrame(xtest, columns=x_test.columns)
from sklearn.svm import SVC
model = SVC()
model.fit(x_train,y_train)
model.score(x_test,y_test)
score: 0.71-0.76
</code></pre>
<p>postscaled features</p>
<p><a href="https://i.stack.imgur.com/tUYSi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tUYSi.png" alt="enter image description here" /></a></p>
<pre><code>
scaler = MinMaxScaler()
scaler.fit(x_train)
scaler.fit(x_test)
xts= scaler.transform(x_train)
xts=pd.DataFrame(xts, columns=x_train.columns)
xtest= scaler.transform(x_test)
xtest=pd.DataFrame(xtest, columns=x_test.columns)
from sklearn.svm import SVC
model = SVC()
model.fit(xts,y_train)
model.score(xtest,y_test)
score: 1.0
</code></pre>
<p>What can I do to fix this, am I just overthinking it? Is it normal to get this kind of results from this data?</p>
|
<p>It's hard to say without more data to work on, but it is possible, especially given the dramatic impact of scaling on your dataset. Take a look at the impact of scaling in the example below. In a 2-d case it is easier to picture, and it's important to understand that a score of 1 indicates there is most likely no overlap between the different categories, so a perfectly clean maximum-margin hyperplane (MMH) can be drawn to divide them. Perhaps you could also consider taking the logarithm of reviews or similar, given the large range of possible values?</p>
<p><a href="https://i.stack.imgur.com/1Vco9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1Vco9.png" alt="SVM - before and after scaling" /></a></p>
|
python|pandas|machine-learning|svm
| 0
|
7,527
| 68,947,061
|
how low in dataframe to find all elements of list
|
<p>I have a list:</p>
<pre><code>elements = ['a', 'b', 'c', 'd']
</code></pre>
<p>And a dataframe that has some or all of the elements of my list:</p>
<pre><code> mycol
0 a
1 x
2 y
3 e
4 b
5 c
6 o
7 l
8 s
9 d
10 g
</code></pre>
<p>I want to know how low I have to search in my df to find all of the elements of my list. In this case the answer would be <code>10</code>, because that is how far down I have to go before I have found all the elements of my list.</p>
<p>Thanks</p>
|
<p>It's worth considering <a href="https://stackoverflow.com/questions/68947061/how-low-in-dataframe-to-find-all-elements-of-list/68947793#comment121849905_68947061">Barmar's comment</a>. I couldn't get the fancier indexing answers to work with some bigger testing data, but Barmar's loop should be reliable:</p>
<blockquote>
<p>Just loop over the dataframe indexes. If the current df element is in the list, remove it from the list. When the list becomes empty, the current index is the answer.</p>
</blockquote>
<pre><code>def idxall(series, elements):
for i, e in enumerate(series.to_numpy()): # faster than series.items()
if e in elements:
elements.remove(e)
if not elements:
return i + 1
return np.nan
</code></pre>
<hr />
<h3>Timings</h3>
<p>Given <code>df = pd.DataFrame({'mycol': np.random.choice(list(string.ascii_lowercase), size=1000)})</code>:</p>
<pre><code>%timeit tdy_idxall(df.mycol, list(string.ascii_lowercase))
# 21.4 µs ± 7.44 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
</code></pre>
<pre><code>%timeit henry_ecker_np_unique(df.mycol, list(string.ascii_lowercase))
# 379 µs ± 48.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<pre><code>%timeit u12_forward_idxmax(df.mycol, list(string.ascii_lowercase)
# 538 µs ± 61.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<pre><code>%timeit corralien_idxall(df.mycol, list(string.ascii_lowercase))
# 1.28 ms ± 243 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
</code></pre>
<hr />
<h3>Verification</h3>
<ul>
<li><p>Using OP's sample:</p>
<pre><code>df = pd.DataFrame({'mycol': list('axyebcolsdg')})
elements = list('abcd')
idxall(df.mycol, elements)
# 10
</code></pre>
</li>
<li><p>Using Henry's sample #1 (mixed order and duplicates):</p>
<pre><code>df = pd.DataFrame({'mycol': list('dxcabcodsdg')})
elements = list('abcd')
idxall(df.mycol, elements)
# 5
</code></pre>
</li>
<li><p>Using Henry's sample #2 (not all elements found):</p>
<pre><code>df = pd.DataFrame({'mycol': list('dxcabcodsdg')})
elements = list('abcz')
idxall(df.mycol, elements)
# nan
</code></pre>
</li>
</ul>
|
python|pandas|list|dataframe
| 2
|
7,528
| 44,396,618
|
Vectorize or optimize an loop where each iteration depends on the state of the previous iteration
|
<p>I have an algorithm which I am implementing in Python. The algorithm might be executed 1.000.000 times, so I want to optimize it as much as possible. The basis of the algorithm is three lists (<code>energy</code>, <code>point</code> and <code>valList</code>) and two counters <code>p</code> and <code>e</code>.</p>
<p>The two lists <code>energy</code> and <code>point</code> contain numbers between 0 and 1 on which I base decisions. <code>p</code> is a point counter and <code>e</code> is an energy counter. I can trade points for energy, and the cost of energy is defined in <code>valList</code> (it is time dependent). I can also trade the other way. But I have to trade everything at once.</p>
<p><strong>The outline of the algorithm:</strong></p>
<ol>
<li>Get a boolean list where the elements in <code>energy</code> are above a threshold and the elements in <code>point</code> are below another threshold. This is a decision to trade energy for points. Get a corresponding list for point, which gives the decision to trade points for energy.</li>
<li>In each of the boolean lists, remove all true values that come right after another true value (if I have traded all points for energy, I am not allowed to do that again immediately after).</li>
<li>For each item pair (<code>pB</code>, point bool, and <code>eB</code>, energy bool) from the two boolean lists: if <code>pB</code> is true and I have points, I want to trade all my points for energy. If <code>eB</code> is true and I have energy, I want to trade all my energy for points.</li>
</ol>
<p><strong>This is the implementation i have come up with:</strong></p>
<pre><code>start = time.time()
import numpy as np
np.random.seed(2) #Seed for deterministic result, just for debugging
topLimit = 0.55
bottomLimit = 0.45
#Generate three random arrays, will not be random in the real world
res = np.random.rand(500,3) #Will probably not be much longer than 500
energy = res[:,0]
point = res[:,1]
valList = res[:,2]
#Step 1:
#Generate two bool arrays where (for ex. energy) the value is true when energy is above a threshold
#and point is below another threshold. The opposite applies to point
energyListBool = ((energy > topLimit) & (point < bottomLimit))
pointListBool = ((point > topLimit) & (energy < bottomLimit))
#Step 2:
#Remove all 'true' that comes after another true since this is not valid
energyListBool[1:] &= energyListBool[1:] ^ energyListBool[:-1]
pointListBool[1:] &= pointListBool[1:] ^ pointListBool[:-1]
p = 100
e = 0
#Step 3:
#Loop through the lists, if point is true, I lose all p but gain p/valList[i] for e
#If energy is true I lose all e but gain valList[i]*e for p
for i in range(len(energyListBool)):
if pointListBool[i] and e == 0:
e = p/valList[i] #Trade all points to energy
p = 0
elif energyListBool[i] and p == 0:
p = valList[i]*e #Trade all enery to points
e = 0
print('p = {0} (correct for seed 2: 3.1108006690739174)'.format(p))
print('e = {0} (correct for seed 2: 0)'.format(e))
end = time.time()
print(end - start)
</code></pre>
<p>What I am struggling with is how (if it can be done) to vectorize the for-loop, so I can use that instead of the for-loop, which in my mind would probably be faster.</p>
|
<p>Within the current problem setting that's not possible, since vectorization essentially requires that your <code>n</code>-th computation step shouldn't depend on the previous <code>n-1</code> steps. Sometimes, however, it's possible to find a so-called "closed form" of a recurrence <code>f(n) = F(f(n-1), f(n-2), ... f(n-k))</code>, i.e. to find an explicit expression for <code>f(n)</code> that doesn't depend on the previous values, but that's a separate research problem.</p>
<p>Moreover, from an algorithmic point of view such a vectorization wouldn't gain much, since the complexity of your algorithm would still be <code>C*n = O(n)</code>. However, since the "complexity constant" <code>C</code> does matter in practice, there are different ways to reduce it. For example, it shouldn't be a big problem to rewrite your critical loop in C/C++.</p>
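<p>As one hedged illustration of reducing that constant (my own suggestion, not part of the original answer, which points to C/C++): the same sequential loop can be JIT-compiled with numba, for example:</p>
<pre><code>import numba

@numba.njit
def trade_loop(pointListBool, energyListBool, valList, p, e):
    # Same sequential logic as the original for-loop; each step depends on the
    # previous one, so it stays a loop, just compiled to machine code.
    for i in range(len(energyListBool)):
        if pointListBool[i] and e == 0:
            e = p / valList[i]    # trade all points to energy
            p = 0.0
        elif energyListBool[i] and p == 0:
            p = valList[i] * e    # trade all energy to points
            e = 0.0
    return p, e

# p, e = trade_loop(pointListBool, energyListBool, valList, 100.0, 0.0)
</code></pre>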
|
python|performance|numpy|optimization
| 1
|
7,529
| 61,095,091
|
How to strip and split in pandas
|
<p>Is there a way to perform a split by newline and also strip whitespace in a single line?
This is how my df looks originally:</p>
<pre><code> df["Source"]
0 test1 \n test2
1 test1 \n test2
2 test1 \ntest2
Name: Source, dtype: object
</code></pre>
<p>I used to do a split based on new line and create a list with the below code</p>
<pre><code>Data = (df["Source"].str.split("\n").to_list())
Data
[['test1 ', ' test2 '], [' test1 ', ' test2 '], [' test1 ', 'test2 ']]
</code></pre>
<p>I want to further improve this and remove any leading or trailing white space, and I am not sure how to use split and strip in a single line.</p>
<pre><code>df['Port']
0 443\n8080\n161
1 25
2 169
3 25
4 2014\n58
Name: Port, dtype: object
</code></pre>
<p>When I try to split it based on the newline, it fills in NaN values for the ones that do not have \n</p>
<pre><code>df['Port'].str.split("\n").to_list()
[['443', '8080', '161'], nan, nan, nan, ['2014', '58']]
</code></pre>
<p>the same works perfectly for other columns</p>
<pre><code>df['Source Hostname']
0 test1\ntest2\ntest3
1 test5
2 test7\ntest8\n
3 test1
4 test2\ntest4
Name: Source Hostname, dtype: object
df["Source Hostname"].str.split('\n').apply(lambda z: [e.strip() for e in z]).tolist()
[['test1', 'test2', 'test3'], ['test5'], ['test7', 'test8', ''], ['test1'], ['test2', 'test4']]
</code></pre>
|
<pre><code>df['Source'].str.split('\n').apply(lambda x: [e.strip() for e in x]).tolist()
</code></pre>
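<p>A hedged alternative sketch (assuming pandas >= 1.4 for the <code>regex=</code> keyword): strip the whole string first, then split on the newline together with any surrounding spaces:</p>
<pre><code>df['Source'].str.strip().str.split(r'\s*\n\s*', regex=True).tolist()
</code></pre>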
|
python|pandas
| 5
|
7,530
| 71,552,215
|
Merge two DataFrame on the index, but if one DFs is missing an index I want it to create Null (Nan) values if one of the DFs is missing that index
|
<p>I want to merge two DataFrames on the index. But if one of those DataFrames is missing an index value, I want the merged DataFrame to contain null (NaN) values in the columns coming from whichever DataFrame is missing that index.</p>
<pre><code>import pandas as pd
dict1 = {
'Short Name': ['SOO','BS', 'SOC'],
'File': ['r1','r2','r3'],
'acc1': ['321','321','321']
}
dict2 = {
'Short Name': ['S00','SOC'],
'File': ['r1','r2'],
'acc2': ['123','123']
}
df1 = pd.DataFrame(dict1)
df1.set_index('Short Name', inplace=True)
df1
df2 = pd.DataFrame(dict2)
df2.set_index('Short Name', inplace=True)
df2
new_df = pd.merge(df1,df2, on='Short Name')
</code></pre>
<p>The output that I'm trying to achieve is something that looks like this:</p>
<pre><code> File_x acc1 File_y acc2
Short Name
SOO r1 321 r1 123
BS r2 321 Nan Nan
SOC r3 321 r2 123
</code></pre>
<p><a href="https://i.stack.imgur.com/u5g0y.png" rel="nofollow noreferrer">DataFrame of dict1</a> and <a href="https://i.stack.imgur.com/AwenX.png" rel="nofollow noreferrer">DataFrame of dict2</a></p>
|
<p>Try <code>join</code></p>
<pre><code>out = df1.join(df2,lsuffix='_x',rsuffix='_y',how='left')
Out[934]:
File_x acc1 File_y acc2
Short Name
SOO r1 321 NaN NaN
BS r2 321 NaN NaN
SOC r3 321 r2 123
</code></pre>
|
python|pandas|database|dataframe|jupyter-notebook
| 1
|
7,531
| 71,567,214
|
efficient way to calculate between columns with conditions
|
<p>I have a dataframe looks like</p>
<pre><code> Cnt_A Cnt_B Cnt_C Cnt_D
ID_1 0 1 3 0
ID_2 1 0 0 0
ID_3 5 2 0 8
...
</code></pre>
<p>I'd like to count columns that are not zero and put the result into new column like this,</p>
<pre><code> Total_Not_Zero_Cols Cnt_A Cnt_B Cnt_C Cnt_D
ID_1 2 0 1 3 0
ID_2 1 1 0 0 0
ID_3 3 5 2 0 8
...
</code></pre>
<p>I did loop to get the result, but it took very long time (of course).</p>
<p>I can't figure out the most efficient way to calculate between columns with condition :(</p>
<p>Thank you in advance</p>
|
<p>Check if each value not equals to 0 then sum on columns axis:</p>
<pre class="lang-py prettyprint-override"><code>df['Total_Not_Zero_Cols'] = df.ne(0).sum(axis=1)
print(df)
# Output
Cnt_A Cnt_B Cnt_C Cnt_D Total_Not_Zero_Cols
ID_1 0 1 3 0 2
ID_2      1      0      0      0                    1
ID_3      5      2      0      8                    3
</code></pre>
|
python|pandas
| 5
|
7,532
| 71,658,063
|
In google-colab pandas.read_pickle() is not working on pickle5
|
<p>I made a pickle file of a dataframe from my computer using <code>pd.to_pickle()</code> which I could not read in colab. It gives error <code>ValueError: unsupported pickle protocol: 5</code>. Please give a solution.</p>
|
<p>You need to install <code>pickle5</code> first, using:</p>
<pre><code>!pip install pickle5
</code></pre>
<p>Then,</p>
<pre><code>#Import the library
import pickle5 as pickle
path = 'path_to_pickle5'
with open(path, "rb") as dt:
df = pickle.load(dt)
</code></pre>
|
pandas|dataframe|google-colaboratory|pickle
| 1
|
7,533
| 71,636,236
|
Add a new column for color code from red to green based on the increasing value of Opportunity in data frame
|
<p>I have a data frame and I want to generate a new column of colour codes which starts from red for the least value of <strong>Opportunity</strong> and moves toward green for the highest value of <strong>Opportunity</strong>.</p>
<p>My Data Frame -</p>
<pre><code>State Brand DYA Opportunity
Jharkhand Ariel 0.15 0.00853
Jharkhand Fusion 0.02 0.00002
Jharkhand Gillett 0.04 -0.0002
</code></pre>
|
<p>To obtain the color range from red to green you can use matplotlib <a href="https://matplotlib.org/stable/tutorials/colors/colormaps.html#diverging" rel="nofollow noreferrer"><code>color maps</code></a>, more specifically, <code>RdYlGn</code>. But before applying the color mapping, you first need to normalize the data in the <code>Opportunity</code> column between 0 and 1. From there you can encode the data to a color code in any way you see appropriate. As an example, here I'm using the Pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> function with <code>rgb2hex</code> to grab the CSS value that represents the color mapping used in the <code>Opportunity</code> column.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.cm as cm
from matplotlib.colors import Normalize
from matplotlib.colors import rgb2hex
# df = Your original dataframe
cmapR = cm.get_cmap('RdYlGn')
norm = Normalize(vmin=df['Opportunity'].min(), vmax=df['Opportunity'].max())
df['Color'] = df['Opportunity'].apply(lambda r: rgb2hex(cmapR(norm(r))))
dstyle = df.style.background_gradient(cmap=cmapR, subset=['Opportunity'])
dstyle.to_html('sample.html')
</code></pre>
<p><a href="https://i.stack.imgur.com/v7QDu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/v7QDu.png" alt="df_color_coded" /></a></p>
|
python|pandas|dataframe|data-science
| 2
|
7,534
| 71,735,256
|
How to read csv file as dataframe with missing headers and data shift problem in certain rows due to extra commas?
|
<p>I want to read this CSV file in data frame</p>
<pre><code>Username,Identifier,First name,Last name,Department,Location
booker12,9012,Rachel,Booker,,,Sales,Manchester
grey07,2070,Laura,Grey,,,Depot,London
johnson81,4081,Craig,Johnson,Depot,London
jenkins46,9346,Mary,Jenkins,Engineering,Manchester
smith79,5079,Jamie,Smith,Engineering,Manchester
</code></pre>
<p>There is data shift in first and second row due to extra comma.</p>
<p>I want to read the data as it is with missing headers.</p>
<p>Now when I read it with</p>
<pre><code>df = pd.read_csv('Commission.csv', index_col=False)
</code></pre>
<p>It throws warning ParserWarning: Length of header or names does not match length of data. This leads to a loss of data with index_col=False.</p>
<p>and gives output</p>
<pre><code> Username Identifier First name Last name Department Location
0 booker12 9012 Rachel Booker NaN NaN
1 grey07 2070 Laura Grey NaN NaN
2 johnson81 4081 Craig Johnson Depot London
3 jenkins46 9346 Mary Jenkins Engineering Manchester
4 smith79 5079 Jamie Smith Engineering Manchester
</code></pre>
<p>So how can I get data as:</p>
<pre><code> Username Identifier First name Last name Department Location Somename Somename
0 booker12 9012 Rachel Booker NaN NaN Sales Manchester
1 grey07 2070 Laura Grey NaN NaN Depot London
2 johnson81 4081 Craig Johnson Depot London NaN NaN
3 jenkins46 9346 Mary Jenkins Engineering Manchester NaN NaN
4 smith79 5079 Jamie Smith Engineering Manchester NaN NaN
</code></pre>
|
<p>You have to use <code>pd.merge</code>, filling the missing right-hand columns with NaN.</p>
|
python|pandas|dataframe|csv|header
| 0
|
7,535
| 71,668,357
|
Calling a specific Pandas Dataframe from user input to use in a function?
|
<p>This might have been answered, but I can't quite find the right group of words to search for to find the answer to the problem I'm having.</p>
<p><strong>Situation</strong>:
I have several data frames that could be plugged into a function. The function requires that I name the data frame so that it can take its shape.</p>
<pre><code>def heatmap(table, cmap=cm.inferno, vmin=None, vmax=None, inner_r=0.25, pie_args={}:
n, m = table.shape
</code></pre>
<p>I want the user to be able to specify the data frame to use as the table like this:</p>
<pre><code>table_name= input('specify Table to Graph: ')
heatmap(table_name)
</code></pre>
<p><strong>Expectation</strong>: If the user input was TableXYZ then the variable table_name would reference TableXYZ so the function would be able to find the shape of TableXYZ to use that information in other parts of the function.</p>
<p><strong>What actually happens</strong>: When I try to run the code I get an "AttributeError: 'str' object has no attribute 'shape'." I see that the table_name input is a string object, but I'm trying to reference the data frame itself, not the name.</p>
<p>I feel like I'm missing a step to turn the user's input to something the function can actually take the shape of.</p>
|
<p>I'd recommend assigning each of your dataframes to a dictionary, then retrieving the dataframe by name from the dictionary to pass it to the <code>heatmap</code> function.</p>
<p>For example:</p>
<pre><code>df_by_name = {"df_a": pd.DataFrame(), "df_b": pd.DataFrame()}
table_name= input('specify Table to Graph: ')
df = df_by_name[table_name]
heatmap(df)
</code></pre>
<p>Replace <code>pd.DataFrame()</code> in the first line with the actual data frames that you want to select from.</p>
|
python|pandas|dataframe|user-input|shapes
| 2
|
7,536
| 69,839,409
|
Reorder axis in TensorFlow Keras layer
|
<p>I am building a model that applies a random shuffle to data along the first non batch axis, applies a series of Conv1Ds, then applies the inverse of the shuffle. Unfortunately the <code>tf.gather</code> layer messes up the batch dimension <code>None</code>, and i'm not sure why.</p>
<p>Below is an example of what happens.</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
dim = 90
input_img = keras.Input(shape=(dim, 4))
# Get random shuffle order
order = layers.Lambda(lambda x: tf.random.shuffle(tf.range(x)))(dim)
# Apply shuffle
tensor = layers.Lambda(lambda x: tf.gather(x[0], tf.cast(x[1], tf.int32), axis=1,))(input_img, order)
model = keras.models.Model(
inputs=[input_img],
outputs=tensor,
)
</code></pre>
<p>Here the summary is as follows:</p>
<pre><code>Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 90, 4)] 0
_________________________________________________________________
lambda_51 (Lambda) (90, 90, 4) 0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
_________________________________________________________________
</code></pre>
<p>Whereas I want the output shape of <code>lambda_51</code> to be <code>(None, 90, 4)</code>.</p>
|
<p>Try to wrap <code>input_img</code> and <code>order</code> into a list when you pass them to <code>tensor</code> layer.</p>
<p>In this way <code>tensor</code> layer becomes:</p>
<pre><code>tensor = layers.Lambda(lambda x: tf.gather(x[0], tf.cast(x[1], tf.int32), axis=1,))([input_img, order])
</code></pre>
<p>and your summary:</p>
<pre><code>_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 90, 4)] 0
_________________________________________________________________
lambda_3 (Lambda) (None, 90, 4) 0
=================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
</code></pre>
|
python|tensorflow|keras|tensorflow2.0|tf.keras
| 1
|
7,537
| 43,381,986
|
Creating list of 2D matrices in Python
|
<p>I am trying to create a list of 2d matrices, as per the illustration below:</p>
<p><a href="https://i.stack.imgur.com/8kQJe.jpg" rel="nofollow noreferrer">list of 2d matrices</a></p>
<p>Basically, I want to start with a NxN matrix with all zeros and sequentially replace the 0's with 1's (as shown in the image). With each modification changing the 0's to 1's, I would like to output the matrix at that step and save it in a list or array.</p>
<p>For the first row of matrices in the illustration, I have this:</p>
<pre><code> dim = 4
x=[]
for i in range(0,dim):
matrix = np.zeros((dim,dim))
matrix[0,i] = 1
x.append(matrix)
m0 = x[0]
m1 = x[0]+x[1]
m2 = x[0]+x[1]+x[2]
m3 = x[0]+x[1]+x[2]+x[3]
</code></pre>
<p>I would like to generalize this so I not only get the first row but the rest of the rows shown in the image and change the matrix size through 'dim'. I can't seem to figure this out. I'd appreciate any help with this.</p>
|
<p>This will do the job:</p>
<pre><code>import numpy as np

dim = 4
x = []
for i in range(dim):                   # i = how many rows get filled with 1's (rows 0..i)
    lst = []
    matrix = np.zeros((dim, dim))
    vec = np.ones(i + 1)
    for j in range(dim):               # j = column currently being filled
        matrix[0:i+1, j] = vec         # set rows 0..i of column j to 1
        lst.append(np.copy(matrix))    # snapshot the matrix after this step
    x.append(lst)                      # x[i][j] holds the matrix after filling column j
print(x)
</code></pre>
|
python|python-3.x|numpy|for-loop|matrix
| 0
|
7,538
| 72,226,280
|
Perform a merge by date field without creating an auxiliary column in the DataFrame
|
<p>Be the following DataFrames in python pandas:</p>
<pre><code>| date | counter |
|-----------------------------|------------------|
| 2022-01-01 10:00:02+00:00 | 34 |
| 2022-01-03 11:03:02+00:00 | 23 |
| 2022-02-01 12:00:05+00:00 | 12 |
| 2022-03-01 21:04:02+00:00 | 7 |
</code></pre>
<pre><code>| date | holiday |
|-----------------------------|------------------|
| 2022-01-01 | True |
| 2022-01-02 | False |
| 2022-01-03 | True |
| 2022-02-01 | True |
| 2022-02-02 | True |
| 2022-02-03 | True |
| 2022-03-01 | False |
| 2022-03-02 | True |
| 2022-03-03 | False |
</code></pre>
<p>How could I merge both DataFrames taking into account that I don't want to create an auxiliary column with the date?</p>
<pre><code>| date | counter | holiday |
|-----------------------------|------------------|--------------|
| 2022-01-01 10:00:02+00:00 | 34 | True |
| 2022-01-03 11:03:02+00:00 | 23 | True |
| 2022-02-01 12:00:05+00:00 | 12 | True |
| 2022-03-01 21:04:02+00:00 | 7 | False |
</code></pre>
<p>Thank you for your help in advance.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a> with the datetimes normalized to midnight by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.normalize.html" rel="nofollow noreferrer"><code>Series.dt.normalize</code></a> - here <code>df1</code> is the counter table and <code>df2</code> the holiday table, and no helper date column is created in the output:</p>
<pre><code>df1['holiday'] = df1['date'].dt.normalize().map(df2.set_index('date')['holiday'])
</code></pre>
<p>Another idea with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a>, but for avoid error need remove timezones by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.dt.tz_convert.html" rel="nofollow noreferrer"><code>Series.dt.tz_convert</code></a>:</p>
<pre><code>df = pd.merge_asof(df1.assign(date = df1['date'].dt.tz_convert(None)).sort_values('date'),
df2, on='date')
print (df)
date counter holiday
0 2022-01-01 10:00:02 34 True
1 2022-01-03 11:03:02 23 True
2 2022-02-01 12:00:05 12 True
3 2022-03-01 21:04:02 7 False
</code></pre>
|
python|pandas|dataframe|datetime
| 1
|
7,539
| 72,316,941
|
Create a pandas.PeriodIndex from a list of quarter strings in format YYYYQ
|
<p>I know a bit about <code>pandas.Period</code> and <code>pandas.PeriodIndex</code>. But I am not able to make them fit to my use case.</p>
<p>I have a list of <em>quarter strings</em> in format <code>YYYYQ</code>:</p>
<pre><code>df = pandas.DataFrame({'quarter': ['20214', '20222']})
</code></pre>
<p>How can I create a <code>PeriodIndex</code> or <code>Period</code> (each element) of it?</p>
<p>I am not able to do this</p>
<pre><code>pandas.PeriodIndex(df.quarter, freq='Q')
</code></pre>
<p>because my strings doesn't contain a <code>q</code> or <code>Q</code> at the right position. I could do some string manipulation in a first step to insert a <code>q</code> at the right position. But I wonder if <code>PeriodIndex</code> gives me the ability to specify a format string like <code>YYYYQ</code>.</p>
|
<p>Add <code>q</code> before last digit by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer"><code>Series.str.replace</code></a> and then is possible converting to quarters by your solution:</p>
<pre><code>df['quarter'] = pd.PeriodIndex(df['quarter'].str.replace(r'(\d{1})$', r'q\1', regex=True),
freq='Q')
print (df)
quarter
0 2021Q4
1 2022Q2
</code></pre>
<p><strong>Details</strong>:</p>
<pre><code>print (df['quarter'].str.replace(r'(\d{1})$', r'q\1', regex=True))
0 2021q4
1 2022q2
Name: quarter, dtype: object
</code></pre>
|
python|pandas
| 1
|
7,540
| 72,288,839
|
`logits` and `labels` must have the same shape, received ((None, 512, 768) vs (None, 1)) when using transformers
|
<p>I get the following error when I'm trying to fine-tune a BERT model for sentiment analysis.</p>
<p>I'm using as input:
X - a list of strings that contains tweets
y - a numeric list (0 - negative, 1 - positive)</p>
<p>I am trying to fine-tune a BERT model to predict sentiment, but I always get the same error about logits and labels when I try to fit the model. I load a pretrained model and then build the dataset, but when I try to fit it, it fails.</p>
<p>The text used as input is a list of strings made of tweets and the labels used as input are a list of categories (negative and positive) but transformed to 0 and 1.</p>
<pre><code>from sklearn.preprocessing import MultiLabelBinarizer
#LOAD MODEL
hugging_face_model = 'distilbert-base-uncased-finetuned-sst-2-english'
batches = 32
epochs = 1
tokenizer = BertTokenizer.from_pretrained(hugging_face_model)
model = TFBertModel.from_pretrained(hugging_face_model, num_labels=2)
#PREPARE THE DATASET
#create a list of strings (tweets)
lst = list(X_train_lower['lower_text'].values)
encoded_input = tokenizer(lst, truncation=True, padding=True, return_tensors='tf')
y_train['sentimentNumber'] = y_train['sentiment'].replace({'negative': 0, 'positive': 1})
label_list = list(y_train['sentimentNumber'].values)
#CREATE DATASET
train_dataset = tf.data.Dataset.from_tensor_slices((dict(encoded_input), label_list))
#COMPILE AND FIT THE MODEL
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-5), loss=BinaryCrossentropy(from_logits=True),metrics=["accuracy"])
model.fit(train_dataset.shuffle(len(df)).batch(batches),epochs=epochs,batch_size=batches)
ValueError Traceback (most recent call last)
<ipython-input-158-e5b63f982311> in <module>()
----> 1 model.fit(train_dataset.shuffle(len(df)).batch(batches),epochs=epochs,batch_size=batches)
1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
1145 except Exception as e: # pylint:disable=broad-except
1146 if hasattr(e, "ag_error_metadata"):
-> 1147 raise e.ag_error_metadata.to_exception(e)
1148 else:
1149 raise
ValueError: in user code:
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 1000, in train_step
loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/compile_utils.py", line 201, in __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 141, in __call__
losses = call_fn(y_true, y_pred)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 245, in call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
File "/usr/local/lib/python3.7/dist-packages/keras/losses.py", line 1932, in binary_crossentropy
backend.binary_crossentropy(y_true, y_pred, from_logits=from_logits),
File "/usr/local/lib/python3.7/dist-packages/keras/backend.py", line 5247, in binary_crossentropy
return tf.nn.sigmoid_cross_entropy_with_logits(labels=target, logits=output)
ValueError: `logits` and `labels` must have the same shape, received ((None, 512, 768) vs (None, 1)).
</code></pre>
|
<p>As described in this <a href="https://www.kaggle.com/code/dhruv1234/huggingface-tfbertmodel/notebook" rel="nofollow noreferrer">kaggle notebook</a>, you must build a custom Keras model around the pre-trained BERT model to perform classification, because <code>TFBertModel</code> is documented as:</p>
<blockquote>
<p>The bare Bert Model transformer outputing raw hidden-states without
any specific head on top</p>
</blockquote>
<p>Here is a copy of a piece of code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.optimizers import Adam

def create_model(bert_model):
input_ids = tf.keras.Input(shape=(60,),dtype='int32')
attention_masks = tf.keras.Input(shape=(60,),dtype='int32')
output = bert_model([input_ids,attention_masks])
output = output[1]
output = tf.keras.layers.Dense(32,activation='relu')(output)
output = tf.keras.layers.Dropout(0.2)(output)
output = tf.keras.layers.Dense(1,activation='sigmoid')(output)
model = tf.keras.models.Model(inputs = [input_ids,attention_masks],outputs = output)
model.compile(Adam(lr=6e-6), loss='binary_crossentropy', metrics=['accuracy'])
return model
</code></pre>
<p>Note: you might have to adapt this code and in particular modify the Input shape (60 to <strong>512</strong> seemingly from the error message, your tokenizer maximum length)</p>
<p>Load BERT model and build the classifier :</p>
<pre><code>from transformers import TFBertModel
bert_model = TFBertModel.from_pretrained(hugging_face_model)
model = create_model(bert_model)
model.summary()
</code></pre>
<p>Summary:</p>
<pre><code>Model: "model"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 60)] 0 []
input_2 (InputLayer) [(None, 60)] 0 []
tf_bert_model_1 (TFBertModel) TFBaseModelOutputWi 109482240 ['input_1[0][0]',
thPoolingAndCrossAt 'input_2[0][0]']
tentions(last_hidde
n_state=(None, 60,
768),
pooler_output=(Non
e, 768),
past_key_values=No
ne, hidden_states=N
one, attentions=Non
e, cross_attentions
=None)
dense (Dense) (None, 32) 24608 ['tf_bert_model_1[0][1]']
dropout_74 (Dropout) (None, 32) 0 ['dense[0][0]']
dense_1 (Dense) (None, 1) 33 ['dropout_74[0][0]']
==================================================================================================
Total params: 109,506,881
Trainable params: 109,506,881
Non-trainable params: 0
</code></pre>
|
tensorflow|machine-learning|keras|sentiment-analysis|bert-language-model
| 0
|
7,541
| 50,584,887
|
Python: how to get sum of values based on different columns
|
<p>I have a dataframe <code>df</code> like the following:</p>
<pre><code>df name city
0 John New York
1 Carl New York
2 Carl Paris
3 Eva Paris
4 Eva Paris
5 Carl Paris
</code></pre>
<p>I want to know the total number of people in the different cities</p>
<pre><code>df2 city number
0 New York 2
1 Paris 3
</code></pre>
<p>or the number of people with the same name in the cities</p>
<pre><code>df2 name city number
0 John New York 1
1 Eva Paris 2
2 Carl Paris 2
3 Eva New York 0
</code></pre>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a>:</p>
<pre><code>df1 = df.groupby(['city']).size().reset_index(name='number')
print (df1)
city number
0 New York 2
1 Paris 4
</code></pre>
<hr>
<pre><code>df2 = df.groupby(['name','city']).size().reset_index(name='number')
print (df2)
name city number
0 Carl New York 1
1 Carl Paris 2
2 Eva Paris 2
3 John New York 1
</code></pre>
<p>If you need all combinations, one solution is to add <code>unstack</code> and <code>stack</code>:</p>
<pre><code>df3=df.groupby(['name','city']).size().unstack(fill_value=0).stack().reset_index(name='number')
print (df3)
name city number
0 Carl New York 1
1 Carl Paris 2
2 Eva New York 0
3 Eva Paris 2
4 John New York 1
5 John Paris 0
</code></pre>
<p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.from_product.html" rel="nofollow noreferrer"><code>MultiIndex.from_product</code></a>:</p>
<pre><code>df2 = df.groupby(['name','city']).size()
mux = pd.MultiIndex.from_product(df2.index.levels, names=df2.index.names)
df2 = df2.reindex(mux, fill_value=0).reset_index(name='number')
print (df2)
name city number
0 Carl New York 1
1 Carl Paris 2
2 Eva New York 0
3 Eva Paris 2
4 John New York 1
5 John Paris 0
</code></pre>
|
python|pandas|group-by
| 1
|
7,542
| 50,507,468
|
Fastest way to parse a column to datetime in pandas
|
<p>I have the following dataframe with more than 400 000 lines.</p>
<pre><code>df = pd.DataFrame({'date' : ['03/02/2015 23:00',
'03/02/2015 23:30',
'04/02/2015 00:00',
'04/02/2015 00:30',
'04/02/2015 01:00',
'04/02/2015 01:30',
'04/02/2015 02:00',
'04/02/2015 02:30',
'04/02/2015 03:00',
'04/02/2015 03:30',
'04/02/2015 04:00',
'04/02/2015 04:30',
'04/02/2015 05:00',
'04/02/2015 05:30',
'04/02/2015 06:00',
'04/02/2015 06:30',
'04/02/2015 07:00']})
</code></pre>
<p>I am trying to parse the date column of a csv file in pandas as fast as possible. I know how to do it with read_csv but that takes a lot of time! Also, I have tried the following which works but which is also very slow: <code>df['dateTimeFormat'] = pd.to_datetime(df['date'],dayfirst=True)</code> </p>
<p>How could I parse efficiently and in a really fast way the date column to datetime?</p>
<p>Thank you very much for your help,</p>
<p>Pierre</p>
|
<p>You can define format of <code>datetime</code>s by <a href="http://strftime.org/" rel="noreferrer">http://strftime.org/</a>:</p>
<pre><code>df = pd.concat([df] * 1000, ignore_index=True)
%timeit df['dateTimeFormat1'] = pd.to_datetime(df['date'],dayfirst=True)
2.94 s ± 285 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df['dateTimeFormat2'] = pd.to_datetime(df['date'],format='%d/%m/%Y %H:%M')
55 ms ± 1.47 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
</code></pre>
|
pandas|parsing|datetime
| 10
|
7,543
| 62,815,494
|
Python/Pandas: Find min value in dataframe or dictionary
|
<p>I created a dictionary in a for loop which gave me the following 192 results:</p>
<pre><code>dic_aic = {0: 16.83024400288158,
1: 10.580792750644934,
2: 10.460203246761916,
3: 10.44309674334913,
4: 10.425859422774248,
...
191: 10.273789550619007,
192: 10.272853618268071}
</code></pre>
<p>When I plot this out with pandas I get the following:</p>
<pre><code>aic_df = pd.DataFrame(aic_dic.items(), columns=['Order', 'AIC'])
aic_df = aic_df.set_index('Order')
aic_df[1:].plot()
</code></pre>
<p><a href="https://i.stack.imgur.com/LSTl8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LSTl8.png" alt="enter image description here" /></a></p>
<p>With</p>
<pre><code> min(aic_dic, key=aic_dic.get)
</code></pre>
<p>I get the minimum value in my dic with 192. But as you can see in the picture the difference between the AIC at 192 and e.g. 96 is very small with only 10.272853618268071 - 10.28435108717606 = 0.01149...
I am trying to find the optimized value around 96. Does anybody have an idea on how to solve this with python? Maybe with an integral?</p>
|
<p>You can first get the minimum value (your <code>min</code> call returns the key of the minimum, so look its value up in the dict).</p>
<p>Then you need to set a tolerance value like 0.1.</p>
<p>Then you can return the smallest key whose value is within the tolerance of that minimum:</p>
<pre><code>min_key = min(aic_dic, key=aic_dic.get)   # key with the smallest AIC (your code)
min_value = aic_dic[min_key]              # the smallest AIC itself
tolerance = 0.1                           # could be higher or lower, up to you
possible_answers = []
for key, value in aic_dic.items():
    if value < min_value + tolerance:
        possible_answers.append(key)
print(min(possible_answers))              # smallest order whose AIC is within the tolerance
</code></pre>
|
python|pandas|optimization
| 1
|
7,544
| 62,880,355
|
Creating new variable based on data in dataframe, ignore NaN
|
<p>I have a dataframe like that below and want to create a new variable that is a <code>1/0</code> or <code>True/False</code> if all of the available scores in certain columns are equal to or above 4.</p>
<p>The data is quite messy. Some cells are <code>NaN</code> (respondent didn't provide a response), some are white space (bad formatting or respondent pressed space bar, maybe?).</p>
<pre><code>ID Var1 Var2 Var3
id0001 2 NaN 2
id0002 10 3 10
id0003 8 0
id0004 NaN NaN NaN
id0005 7 3 7
id0006 NaN 9 9
</code></pre>
<p>I don't want to drop those rows with a missing value because most have a missing value. I can't just make NaN and white space cells 0 because 0 means something here. I can easily make all white space cells NaN, but I don't know how to ignore them as then I have instances of 'str' and 'int' when I do something like the following:</p>
<pre><code>scoreoffouroraboveforall = [(df.Var1 >= 4) & (df.Var2 >= 4) & (df.Var3 >= 4)]
</code></pre>
<p>This is probably very simple to do, but I'm at a loss.</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_numeric.html" rel="nofollow noreferrer"><code>pd.to_numeric</code></a> with optional parameter <code>errors=coerce</code> to convert each of the column in <code>Var1</code>, <code>Var2</code> and <code>Var3</code> to numeric type, then using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.ge.html" rel="nofollow noreferrer"><code>DataFrame.ge</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> along <code>axis=1</code> to create the boolean mask as required with <code>True/False</code> values:</p>
<pre><code>m = df[['Var1', 'Var2', 'Var3']].apply(
pd.to_numeric, errors='coerce').ge(4).all(axis=1)
</code></pre>
<p>Result:</p>
<pre><code>print(m)
0 False
1 False
2 False
3 False
4 False
5 False
dtype: bool
</code></pre>
|
python|pandas|dataframe
| 0
|
7,545
| 62,752,191
|
Seaborn columnwise violinplot
|
<p>I have a dict <code>d</code> which lists numbers' occurences:</p>
<pre><code>{'item1': [42, 1, 2, 3, 42, 2, 1, 1, 1, 1, 1],
'item2': [2, 5],
'item3': [5, 1, 7, 2, 7, 1, 42, 2, 9]}
</code></pre>
<p>Which I then convert to a DataFrame counting these occurences:</p>
<pre><code>df = pd.DataFrame.from_dict({k: dict(Counter(v)) for k, v in d.items()})
item1 item2 item3
42 2.0 NaN 1.0
1 6.0 NaN 2.0
2 2.0 1.0 2.0
3 1.0 NaN NaN
5 NaN 1.0 1.0
7 NaN NaN 2.0
9 NaN NaN 1.0
</code></pre>
<p>How can I plot this or some other DataFrame that was derived from <code>d</code> using <a href="https://seaborn.pydata.org/generated/seaborn.violinplot.html" rel="nofollow noreferrer"><code>seaborn.violinplot</code></a>, so that each column in the dataframe represents a violin in the plot based on the data provided by each columns values and their respective indices?</p>
<p>I have tried multiple combinations of which I believe this intuitively comes the closest, but unfortunately still fails:</p>
<pre><code>sns.violinplot(x=df.keys(), y=df.index, data=df)
</code></pre>
|
<p>Pass <code>DataFrame</code> to <a href="https://seaborn.pydata.org/generated/seaborn.violinplot.html" rel="nofollow noreferrer"><code>seaborn.violinplot</code></a>, so each Series (column) is plotted separately:</p>
<pre><code>sns.violinplot(data=df)
</code></pre>
<p><a href="https://i.stack.imgur.com/heD0Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/heD0Q.png" alt="pic" /></a></p>
|
python|pandas|dataframe|plot|seaborn
| 2
|
7,546
| 54,250,651
|
How Weight update in Dynamic Computation Graph of pytorch works?
|
<p>How does the weight update work in PyTorch's dynamic computation graph code when weights are shared (= reused multiple times)?</p>
<p><a href="https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#sphx-glr-beginner-examples-nn-dynamic-net-py" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#sphx-glr-beginner-examples-nn-dynamic-net-py</a></p>
<pre><code>import random
import torch
class DynamicNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we construct three nn.Linear instances that we will use
in the forward pass.
"""
super(DynamicNet, self).__init__()
self.input_linear = torch.nn.Linear(D_in, H)
self.middle_linear = torch.nn.Linear(H, H)
self.output_linear = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
and reuse the middle_linear Module that many times to compute hidden layer
representations.
Since each forward pass builds a dynamic computation graph, we can use normal
Python control-flow operators like loops or conditional statements when
defining the forward pass of the model.
Here we also see that it is perfectly safe to reuse the same Module many
times when defining a computational graph. This is a big improvement from Lua
Torch, where each Module could be used only once.
"""
h_relu = self.input_linear(x).clamp(min=0)
for _ in range(random.randint(0, 3)):
h_relu = self.middle_linear(h_relu).clamp(min=0)
y_pred = self.output_linear(h_relu)
return y_pred
</code></pre>
<p>I want to know what happens to the <code>middle_linear</code> weight at each backward pass when it is used multiple times in a step.</p>
|
<p>When you call <a href="https://pytorch.org/docs/master/autograd.html#torch.autograd.backward" rel="nofollow noreferrer"><code>backward</code></a> (either as the function or a method on a tensor) the gradients of operands with <code>requires_grad == True</code> are calculated with respect to the tensor you called <code>backward</code> on. These gradients are <em>accumulated</em> in the <code>.grad</code> property of these operands. If the same operand <code>A</code> appears multiple times in the expression, you can conceptually treat them as separate entities <code>A1</code>, <code>A2</code>... for the backpropagation algorithm and just at the end sum their gradients so that <code>A.grad = A1.grad + A2.grad + ...</code>.</p>
<p>Now, strictly speaking, the answer to your question</p>
<blockquote>
<p>I want to know what happens to middle_linear <em>weight</em> at each backward</p>
</blockquote>
<p>is: nothing. <code>backward</code> does not change weights, only calculates the gradient. To change the weights you have to do an optimization step, perhaps using one of the optimizers in <a href="https://pytorch.org/docs/master/optim.html" rel="nofollow noreferrer"><code>torch.optim</code></a>. The weights are then updated according to their <code>.grad</code> property, so if your operand was used multiple times, it will be updated according to the sum of the gradients from each of its uses.</p>
<p>In other words, if your matrix element <code>x</code> has a positive gradient when first applied and a negative one when used the second time, it may be that the net effects cancel out and it stays as it is (or changes just a bit). If both applications call for <code>x</code> to be higher, it will rise more than if it was used just once, etc.</p>
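<p>A minimal sketch (my own toy example, not from the answer above) showing the gradients of a reused tensor being accumulated before the optimizer step:</p>
<pre><code>import torch

w = torch.tensor(2.0, requires_grad=True)
y = 3 * w + w * w            # w is used twice, like middle_linear in the loop
y.backward()
print(w.grad)                # tensor(7.) == 3 (first use) + 2*w (second use)

opt = torch.optim.SGD([w], lr=0.1)
opt.step()                   # single update using the summed gradient: w -> 2.0 - 0.1*7 = 1.3
</code></pre>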
|
deep-learning|pytorch|computation-graph
| 4
|
7,547
| 54,640,571
|
Iterating through Pandas rows after an initial condition in the same row is met
|
<p>I am trying to write a program that uses a pandas <code>data.rsi</code> column and iterates through it. If rsi > 70, I would like to check whether the n next data points have an rsi below 60; if they do and the rsi then moves above 70 again, I would like to create a 1 in a column called <code>data.RSIFI</code>.</p>
<p>To sum it up the problem is to look for a new condition in the n next rows when a condition is already met ( but not in the same state any longer) </p>
<p>the crossover part is just another condition that is -1 or 1. </p>
<pre><code> for i in data.index:
val = data.get_value(i,'rsi')
if val >70 and data.get_value(i, 'cross_ov_un') == 1:
for n in range(40):
if val.shift(n) < 60:
for n in range(40):
if val.shift(n) > 70:
data.loc[i,'RSIFI']= 1
</code></pre>
<p>this does not work, and one of the problems is that a timestamp cant be shifted.</p>
<p>example of how the data looks: </p>
<pre><code> RSIFI cross_ov_un rsi
date
2019-01-14 09:00:00 0 1 40.716622
2019-01-14 10:00:00 0 1 40.304055
2019-01-14 11:00:00 0 1 46.000142
2019-01-14 12:00:00 0 1 44.732117
2019-01-14 13:00:00 0 1 40.476486
2019-01-14 14:00:00 0 1 44.553255
2019-01-14 15:00:00 1 1 70.540997
2019-01-14 16:00:00 0 1 65.734665
2019-01-14 17:00:00 0 1 70.383329
2019-01-14 18:00:00 1 1 71.235720
2019-01-14 19:00:00 0 1 64.735780
2019-01-14 20:00:00 0 1 62.017401
2019-01-14 21:00:00 0 1 59.410495
2019-01-14 22:00:00 0 1 66.339052
2019-01-14 23:00:00 1 1 71.217073
2019-01-15 00:00:00 1 1 74.982245
2019-01-15 01:00:00 0 1 57.951364
2019-01-15 02:00:00 0 1 56.833347
</code></pre>
<p>Example of how I would like it to look </p>
<pre><code> RSIFI cross_ov_un rsi
date
2019-01-14 09:00:00 0 1 40.716622
2019-01-14 10:00:00 0 1 40.304055
2019-01-14 11:00:00 0 1 46.000142
2019-01-14 12:00:00 0 1 44.732117
2019-01-14 13:00:00 0 1 40.476486
2019-01-14 14:00:00 0 1 44.553255
2019-01-14 15:00:00 0 1 70.540997
2019-01-14 16:00:00 0 1 65.734665
2019-01-14 17:00:00 0 1 70.383329
2019-01-14 18:00:00 0 1 71.235720
2019-01-14 19:00:00 0 1 64.735780
2019-01-14 20:00:00 0 1 62.017401
2019-01-14 21:00:00 0 1 59.410495
2019-01-14 22:00:00 0 1 66.339052
2019-01-14 23:00:00 1 1 71.217073
2019-01-15 00:00:00 0 1 74.982245
2019-01-15 01:00:00 0 1 57.951364
2019-01-15 02:00:00 0 1 56.833347
</code></pre>
|
<p>The problem is that <code>.loc</code> is used for accessing a group of rows, while the <code>.at</code> method accesses the value of a single index of the data frame. </p>
<pre><code>for i in data.index:
val = data.at[i,'rsi']
if val > 70 and data.at[i, 'cross_ov_un'] == 1:
data.at[i,'RSIFI']= 1
</code></pre>
|
python|python-3.x|pandas
| 0
|
7,548
| 54,647,372
|
Check if values of multiple columns are the same (python)
|
<p>I have a binary dataframe and I would like to check whether all values in a specific row have the value 1. For example, in the dataframe
below, since rows 0 and 2 contain the value 1 in col1 through col3, the outcome should be 1; if they do not, it should be 0.</p>
<pre><code>import pandas as pd
d = {'col1': [1, 0,1,0], 'col2': [1, 0,1, 1], 'col3': [1,0,1,1], 'outcome': [1,0,1,0]}
df = pd.DataFrame(data=d)
</code></pre>
<p>Since my own dataframe is much larger I am looking for a more elegant way than the following, any thoughts?</p>
<pre><code>def similar(x):
if x['col1'] == 1 and x['col2'] == 1 and x['col3'] == 1:
return 1
else:
''
df['outcome'] = df.apply(similar, axis=1)
</code></pre>
|
<p>A classic case of <code>all</code>. </p>
<p>(The <code>iloc</code> is just there to disregard your current outcome col, if you didn't have it you could just use <code>df == 1</code>.)</p>
<pre><code>df['outcome'] = (df.iloc[:,:-1] == 1).all(1).astype(int)
col1 col2 col3 outcome
0 1 1 1 1
1 0 0 0 0
2 1 1 1 1
3 0 1 1 0
</code></pre>
|
python|pandas|similarity
| 11
|
7,549
| 71,342,922
|
TensorFlow.js prediction time is difference between the first trial and followings
|
<p>I am testing to load the TensorFlow.js model and trying to measure how many milliseconds it takes to predict. For example, the first time, it takes about 300 milliseconds to predict value but the time is decreased to 13~20 milliseconds from the second trial. I am not calculating time from the model loading. I am calculating only the prediction value after the model is loaded.</p>
<p>Can anyone explain why it gets decreased time to predict value?</p>
<pre><code>// Calling TensorFlow.js model
const MODEL_URL = 'https://xxxx-xxxx-xxxx.xxx.xxx-xxxx-x.xxxxxx.com/model.json'
let model;
let prediction;
export async function getModel(input){
console.log("From helper function: Model is being retrieved from the server...")
model = await tf.loadLayersModel(MODEL_URL);
// measure prediction time
var str_time = new Date().getTime();
prediction = await model.predict(input)
var elapsed = new Date().getTime() - str_time;
console.log("Laoding Time for Tensorflow: " + elapsed)
console.log(prediction.arraySync())
...
}
</code></pre>
|
<p>Usually the first prediction takes longer because the model still has to be fetched and loaded into memory from the API request; once that's done it is cached, so you don't need to make the same request again.</p>
<p>If you want to see the steady-state prediction time, repeat the timing many times (perhaps 1000) and take the 99th percentile, which shows the prediction time for 99% of the cases (you can also use the 90th or 50th percentile instead).</p>
|
javascript|memory|time|tensorflow.js|performance-measuring
| 0
|
7,550
| 52,282,896
|
ML Engine: Prediction Error while executing local predict command
|
<p>I have uploaded a version of the model in the Google ML Engine with <code>saved_model.pb</code> and a variables folder. When I try to execute the command:</p>
<pre><code>gcloud ml-engine local predict --model-dir=saved_model --json-instances=request.json
</code></pre>
<p>It shows the following error:</p>
<pre><code>ERROR: (gcloud.ml-engine.local.predict) 2018-09-11 19:06:39.770396: I tensorflow/core/platform/cpu_feature_guard.cc:141]
Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
Traceback (most recent call last):
File "lib/googlecloudsdk/command_lib/ml_engine/local_predict.py", line 172, in <module>
main()
File "lib/googlecloudsdk/command_lib/ml_engine/local_predict.py", line 167, in main
signature_name=args.signature_name)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/prediction_lib.py", line 106, in local_predict
predictions = model.predict(instances, signature_name=signature_name)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/prediction_utils.py", line 230, in predict
preprocessed = self.preprocess(instances, stats=stats, **kwargs)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py", line 436, in preprocess
preprocessed = self._canonicalize_input(instances, signature)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py", line 453, in _canonicalize_input
return canonicalize_single_tensor_input(instances, tensor_name)
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py", line 166, in canonicalize_single_tensor_input
instances = [parse_single_tensor(x, tensor_name) for x in instances]
File "/usr/lib/google-cloud-sdk/lib/third_party/ml_sdk/cloud/ml/prediction/frameworks/tf_prediction_lib.py", line 162, in parse_single_tensor
(tensor_name, list(x.keys())))
cloud.ml.prediction.prediction_utils.PredictionError: Invalid inputs: Expected tensor name: inputs, got tensor name: [u'inputs', u'key']. (Error code: 1)
</code></pre>
<p>My <code>request.json</code> file is</p>
<pre><code>{"inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAHVArwDASIAAhEBAxEB/8QAHwAAAQUBAQEBA....."}, "key": "841bananas.jpg"}
</code></pre>
<p>Thanks in advance.</p>
|
<p>It appears your model was exported with only one input named "inputs". In that case, you shouldn't be sending "key" in the JSON, i.e. (scroll to the end to see I've removed "key"):</p>
<pre><code>{"inputs": {"b64": "/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCAHVArwDASIAAhEBAxEB/8QAHwAAAQUBAQEBA....."}}
</code></pre>
|
python|tensorflow|machine-learning|google-cloud-platform|google-cloud-ml
| 1
|
7,551
| 52,097,943
|
count and calculate percentage of each column by threshold in Python
|
<p>If I have a following dataframe:</p>
<pre><code>studentId sex history english math biology
01 male 75 90 85 60
02 female 85 80 95 70
03 male 55 60 78 86
04 male 90 89 76 80
</code></pre>
<p>I want to get a new table showing, for each subject, the percentage of scores at or above a threshold of 80 (80 included). For instance, two students scored 80 or higher on history, so the percentage for history is 2/4 = 50%.
Can someone help me do this with Python? Thanks.</p>
<pre><code>history 50%
english 75%
math 50%
biology 50%
</code></pre>
|
<p>Use:</p>
<pre><code>s = df.iloc[:, 2:].ge(80).mean().mul(100)
print (s)
history 50.0
english 75.0
math 50.0
biology 50.0
dtype: float64
</code></pre>
<p><strong>Explanation</strong>:</p>
<p>First select only necessary columns by positions by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="noreferrer"><code>DataFrame.iloc</code></a>:</p>
<pre><code>print (df.iloc[:, 2:])
history english math biology
0 75 90 85 60
1 85 80 95 70
2 55 60 78 86
3 90 89 76 80
</code></pre>
<p>Then compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.ge.html" rel="noreferrer"><code>DataFrame.ge</code></a> (<code>>=</code>):</p>
<pre><code>print (df.iloc[:, 2:].ge(80))
history english math biology
0 False True True False
1 True True True False
2 False False False True
3 True True False True
</code></pre>
<p>Finally, get the <code>mean</code> and multiply it by <code>100</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.mul.html" rel="noreferrer"><code>DataFrame.mul</code></a>:</p>
<pre><code>print (df.iloc[:, 2:].ge(80).mean().mul(100))
history 50.0
english 75.0
math 50.0
biology 50.0
dtype: float64
</code></pre>
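<p>If you also need the exact percent-string format from the question ("50%" and so on), a small follow-up on the same Series (just a formatting step, assuming the values are whole percentages) could be:</p>
<pre><code>out = s.round(0).astype(int).astype(str) + '%'
print (out)
history    50%
english    75%
math       50%
biology    50%
dtype: object
</code></pre>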
|
python|pandas
| 8
|
7,552
| 60,577,436
|
Append multiindex dataframe in HDF
|
<p><strong>Following end-of-day stock data as example:</strong></p>
<pre><code>In [36]: df
Out[36]:
Code Name High Low Close Volume Change Change.2
0 AAAU Perth Mint Physical Gold ETF 16.8500 16.3900 16.6900 311400 0.0000 0.02
1 AADR Advisorshares Dorsey Wright ADR 49.8400 49.2300 49.6100 18500 -1.3000 2.54
2 AAMC Altisource Asset 24.0000 20.0000 23.9400 2500 0.3600 1.53
3 AAU Almaden Minerals 0.3987 0.3650 0.3684 355100 -0.0147 3.84
4 ABEQ Absolute Core Strategy ETF 23.2100 22.8200 23.1100 114700 -0.1900 0.82
... ... ... ... ... ... ... ... ...
26643 ZVLO Esoft Inc 0.0600 0.0600 0.0600 1000 0.0100 20
26644 ZVTK Zevotek Inc 0.0313 0.0209 0.0302 44900 0.0102 51
26645 ZXAIY China Zenix Auto International 0.1534 0.1534 0.1534 200 -0.1566 50.52
26646 ZYRX Zyrox Mining Intl Inc 0.0200 0.0181 0.0200 3000 0.0000 0
26647 ZZZOF Zinc One Resources Inc 0.0111 0.0111 0.0111 300 0.0000 0
</code></pre>
<p><strong>Additional question:</strong></p>
<p>There are some different ways to store this kind of data to HDF5.</p>
<ol>
<li>Don't change the DataFrame and save it with df.to_hdf() to different
groups named by date. </li>
<li>Split the different stocks to series and build the table by Name or better by 'Code' with the attribute 'Name'</li>
<li>Append an multiindex DataFrame in only one group.</li>
</ol>
<p>I guess the third solution would be the fastest and most flexible in terms of data accessing and analyzing. But with the second solution it seems to be easier to add new information like fundamentals for each company. Is there a better compromise that I don't know yet?</p>
<p><strong>The main problem (third way):</strong></p>
<p>I use this code to append the hierarchical dataframe on each new day:</p>
<pre><code>df = pd.concat(lod, ignore_index=True)
# remove columns that are not useful
df = df.drop(['Change.1', 'Change.2', 'Unnamed: 9'], axis=1)
df = df.dropna()
# append a Date column
df['Date'] = dt.datetime.today().date() - dt.timedelta(days=1)
# create multiindex
df = df.set_index(['Date', 'Code', 'Name'])
# append the data to hdf5 container
df.to_hdf(wkd + 'Database.h5', key='stocks', mode='a', format='table')
</code></pre>
<p>The table is replaced instead of expanded. What is wrong?</p>
|
<p>The answer to my main problem was quite simple:</p>
<p>Found it here:
<a href="https://github.com/pandas-dev/pandas/issues/4584" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/4584</a></p>
<p>Just add 'append = True'</p>
<pre><code>df.to_hdf(wkd + 'Database.h5', key='stocks', mode='a', format='table', append = True)
</code></pre>
<p>Edit:
My current answer to the additional question would be:</p>
<p>I think it's ok to use the third way because it's easy to query the multiindex dataframe with the pandas HDFStore object on disk (note that string values in the where clause need to be quoted):</p>
<pre><code>store.select('stocks', "Code='BMWYY'")
</code></pre>
<p>To add new data like company fundamentals I just add a new table object to the HDF file. Then I query both tables and do further analysis with pandas.</p>
|
python|pandas|hdf5
| 0
|
7,553
| 72,748,479
|
numpy - conditional change with closest elements
|
<p>In a numpy array, I want to replace every occurrences of 4 where top and left of them is a 5.</p>
<p>so for instance :</p>
<pre><code>0000300
0005000
0054000
0000045
0002050
</code></pre>
<p>Should become :</p>
<pre><code>0000300
0005000
0058000
0000045
0002000
</code></pre>
<p>I'm sorry I can't share what I tried, that's a very specific question.</p>
<p>I've had a look at things like</p>
<pre><code>map[map == 4] = 8
</code></pre>
<p>And <code>np.where()</code>, but I really have no idea how to check the neighbouring elements of a specific value.</p>
|
<p>This might seem tricky, but a logical <code>and</code> between three shifted boolean versions of the matrix will work: you shift the <code>x == 5</code> mask down by one row, shift another copy of it one column to the right, and the third mask is simply <code>x == 4</code>.</p>
<pre class="lang-py prettyprint-override"><code>first_array = np.zeros(x.shape,dtype=bool)
second_array = np.zeros(x.shape,dtype=bool)
equals_5 = x == 5
equals_4 = x == 4
first_array[1:] = equals_5[:-1] # shift down
second_array[:,1:] = equals_5[:,:-1] # shift right
third_array = equals_4 # put it as it is.
# and operation on the 3 arrays above
results = np.logical_and(np.logical_and(first_array,second_array),third_array)
x[results] = 8
</code></pre>
<p>Now <code>results</code> is the boolean mask you need. It's an O(n) approach, although it scales badly if the requested pattern becomes very complex; not that it's not doable, it just needs more shifted masks.</p>
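<p>For reference, a minimal run of this on the grid from the question (assuming it is stored as a 2-D integer numpy array) could look like:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

x = np.array([[0,0,0,0,3,0,0],
              [0,0,0,5,0,0,0],
              [0,0,5,4,0,0,0],
              [0,0,0,0,0,4,5],
              [0,0,0,2,0,5,0]])

equals_5 = x == 5
equals_4 = x == 4
first_array = np.zeros(x.shape, dtype=bool)
second_array = np.zeros(x.shape, dtype=bool)
first_array[1:] = equals_5[:-1]         # True where the cell above is a 5
second_array[:, 1:] = equals_5[:, :-1]  # True where the cell to the left is a 5
x[first_array & second_array & equals_4] = 8
print(x)
# The 4 that had a 5 above and a 5 to its left is now 8; the other 4 is untouched.
</code></pre>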
|
python|numpy
| 2
|
7,554
| 72,793,767
|
Matching two columns with the same row values in a csv file
|
<p>I have a csv file with 4 columns:</p>
<pre><code>Name Dept Email Name Hair Color
John Smith candy Lincoln Tun brown
Diana Princ candy John Smith gold
Perry Plat wood Oliver Twist bald
Jerry Springer clothes Diana Princ gold
Calvin Klein clothes
Lincoln Tun warehouse
Oliver Twist kitchen
</code></pre>
<p>I want to match the columns <code>Name</code> and <code>Email Name</code> by names.</p>
<p>This what the final output should look like:</p>
<pre><code>Name Dept Email Name Hair Color
John Smith candy John Smith gold
Diana Princ candy Diana Princ gold
Perry Plat wood
Jerry Springer clothes
Calvin Klein clothes
Lincoln Tun warehouse Lincoln Tun brown
Oliver Twist kitchen Oliver Twist bald
</code></pre>
<p>I tried something like this in my code:</p>
<pre><code>dfs = np.split(df,len(df.columns), axis=1)
dfs = [df.set_index(df.columns[0], drop=False) for df in dfs]
f=dfs[0].join(dfs[1:]).reset_index(drop=True).fillna(0)
</code></pre>
<p>Which sorted my two columns great but made everything else 0's</p>
<pre><code>Name Dept Email Name Hair Color
John Smith 0 John Smith 0
Diana Princ 0 Diana Princ 0
Perry Plat 0 0 0
Jerry Springer 0 0 0
Calvin Klein 0 0 0
Lincoln Tun 0 Lincoln Tun 0
Oliver Twist 0 Oliver Twist 0
</code></pre>
<p>Here is my code so far:</p>
<pre><code>import pandas as pd
import numpy as np
import os, csv, sys
csvPath = 'User.csv'
df= pd.read_csv(csvPath)
dfs = np.split(df,len(df.columns), axis=1)
dfs = [df.set_index(df.columns[0], drop=False) for df in dfs]
f=dfs[0].join(dfs[1:]).reset_index(drop=True).fillna(0)
testCSV = 'test_user.csv' #to check my csv file
f.to_csv(testCSV, encoding='utf-8') #send it to csv
</code></pre>
|
<p>You could use merge for that:</p>
<pre><code>pd.merge(df[['Name','Dept']],df[['Email Name','Hair Color']], left_on='Name', right_on='Email Name', how='left')
</code></pre>
<p><strong>Result</strong></p>
<pre><code> Name Dept Email Name Hair Color
0 John Smith candy John Smith gold
1 Diana Princ candy Diana Princ gold
2 Perry Plat wood NaN NaN
3 Jerry Springer clothes NaN NaN
4 Calvin Klein clothes NaN NaN
5 Lincoln Tun warehouse Lincoln Tun brown
6 Oliver Twist kitchen Oliver Twist bald
</code></pre>
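<p>If you'd rather reproduce the blank cells from your desired output instead of NaN, a small optional follow-up on the merged frame is:</p>
<pre><code>out = pd.merge(df[['Name','Dept']], df[['Email Name','Hair Color']],
               left_on='Name', right_on='Email Name', how='left').fillna('')
</code></pre>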
|
python-3.x|pandas|csv|sorting
| 1
|
7,555
| 72,532,019
|
Finding points in radius of each point in same GeoDataFrame
|
<p>I have geoDataFrame:</p>
<pre class="lang-py prettyprint-override"><code>df = gpd.GeoDataFrame([[0, 'A', Point(10,12)],
[1, 'B', Point(14,8)],
[2, 'C', Point(100,2)],
[3, 'D' ,Point(20,10)]],
columns=['ID','Value','geometry'])
</code></pre>
<p>Is it possible to find, for each point, the points within a radius of for example 10, and add their "Value" and 'geometry' to the GeoDataFrame, so the output would look like:</p>
<pre class="lang-py prettyprint-override"><code>['ID','Value','geometry','value_of_point_in_range_1','geometry_of_point_in_range_1','value_of_point_in_range_2','geometry_of_point_in_range_2' etc.]
</code></pre>
<p>Before, I was finding the nearest neighbour for each point and then checking whether it is in range, but now I must find all of the points within the radius and I don't know which tool I should use.</p>
|
<p>Although in your example the output will have a predictable number of columns in the resulting dataframe, this is not true in general. Therefore I would instead create a column in the dataframe that consists of lists denoting the index/value/geometry of the nearby points.</p>
<p>In a small dataset like the one you provided, simple arithmetic in Python will suffice. But for large datasets you will want to use a spatial tree to query the nearby points. I suggest using scipy's KDTree like this:</p>
<pre class="lang-py prettyprint-override"><code>import geopandas as gpd
import numpy as np
from shapely.geometry import Point
from scipy.spatial import KDTree
df = gpd.GeoDataFrame([[0, 'A', Point(10,12)],
[1, 'B', Point(14,8)],
[2, 'C', Point(100,2)],
[3, 'D' ,Point(20,10)]],
columns=['ID','Value','geometry'])
tree = KDTree(list(zip(df.geometry.x, df.geometry.y)))
pairs = tree.query_pairs(10)
df['ValueOfNearbyPoints'] = np.empty((len(df), 0)).tolist()
n = df.columns.get_loc("ValueOfNearbyPoints")
m = df.columns.get_loc("Value")
for (i, j) in pairs:
df.iloc[i, n].append(df.iloc[j, m])
df.iloc[j, n].append(df.iloc[i, m])
</code></pre>
<p>This yields the following dataframe:</p>
<pre><code> ID Value geometry ValueOfNearbyPoints
0 0 A POINT (10.00000 12.00000) [B]
1 1 B POINT (14.00000 8.00000) [A, D]
2 2 C POINT (100.00000 2.00000) []
3 3 D POINT (20.00000 10.00000) [B]
</code></pre>
<p>To verify the results, you may find it useful to plot them:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
ax = plt.subplot()
df.plot(ax=ax)
for (i, j) in pairs:
plt.plot([df.iloc[i].geometry.x, df.iloc[j].geometry.x],
[df.iloc[i].geometry.y, df.iloc[j].geometry.y], "-r")
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/D0DHl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D0DHl.png" alt="enter image description here" /></a></p>
|
python-3.x|pandas|geopandas|shapely
| 0
|
7,556
| 59,819,683
|
concatenate more than two model of cnn in keras
|
<p>I have built CNN models in Keras such as <strong>MNIST.h5, cifar10.h5 and dogs_and_cats_classification.h5</strong>, but each of these models is bound to its own classes: the MNIST one only predicts handwritten digits, and dogs_and_cats_classification.h5 only predicts dogs or cats, each separately.
But I want to make a <strong>single model</strong> (all.h5) such that the model can predict digits, dogs, cats, or something extra.
So is there any way to do such a thing in Keras?</p>
<p>Please suggest something.
Thanks.</p>
|
<p>You can remove the last dense layer, add a new dense layer sized for the combined set of classes, and then retrain your model; or you can create a multi-output ensemble, which you can learn more about in the Keras FAQ.</p>
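<p>A minimal sketch of the first option (a hedged example only: it assumes one of your saved models is called <code>cnn.h5</code>, that you have combined training data, and that you want 13 output classes, e.g. 10 digits + dog + cat + one extra):</p>
<pre><code>import tensorflow as tf

num_combined_classes = 13                      # hypothetical combined label count

base = tf.keras.models.load_model("cnn.h5")   # hypothetical file name
x = base.layers[-2].output                     # drop the old softmax layer
out = tf.keras.layers.Dense(num_combined_classes, activation="softmax")(x)
combined = tf.keras.Model(inputs=base.input, outputs=out)

combined.compile(optimizer="adam", loss="categorical_crossentropy",
                 metrics=["accuracy"])
# combined.fit(combined_images, combined_labels, epochs=5)
combined.save("all.h5")
</code></pre>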
|
tensorflow|keras|conv-neural-network
| 0
|
7,557
| 59,515,290
|
Since it is not "checkpoint", what is the standard method for crash-recovery to resume TensorFlow 2.0 Training?
|
<p>To resume training after a crash, one must restore not only the model but all objects and parameters that go into the state of a <code>model.fit(...)</code> process. </p>
<p>Before I go and bother to fork the <code>keras</code> code to implement a <code>fitting</code> object that includes, for example, the training data, I'd like to know what the standard method, if any, is for crash recovery to resume TensorFlow 2.0 training where it left off.</p>
<p>Or has someone actually filled this obviously gaping hole in the TensorFlow object model?</p>
|
<p>The canonical way of checkpointing a <code>tf.keras.Model.fit()</code> process is the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/ModelCheckpoint" rel="nofollow noreferrer">ModelCheckpoint</a> callback. </p>
<p>The usage looks something like:</p>
<pre class="lang-py prettyprint-override"><code>mode.fit(..., callbacks=[tf.keras.callbacks.ModelCheckpoint(checkpoint_dir)]
</code></pre>
<p>The saved checkpoint, which is generated at the end of every training epoch by default, includes not only the model's architecture and weight values, but also the training state. If you're interested, you can study its source code <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/keras/callbacks.py#L813" rel="nofollow noreferrer">here</a>. The saved training state includes</p>
<ul>
<li>the optimizer configuration</li>
<li>the weight variable values of the optimizer (for stateful optimizers such as Adam)</li>
<li>the loss and metric configuration</li>
</ul>
<p>Do these cover all the training states you have in mind? </p>
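<p>For completeness, a minimal sketch of resuming after a crash (hedged: it assumes the checkpoint is saved as a full model, e.g. an HDF5 file, and that you know the last finished epoch) could be:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf

ckpt_path = "training_ckpt/model.h5"                    # hypothetical path
ckpt_cb = tf.keras.callbacks.ModelCheckpoint(ckpt_path)
# model.fit(x, y, epochs=10, callbacks=[ckpt_cb])

# After the crash: reload architecture, weights and optimizer state, then continue
model = tf.keras.models.load_model(ckpt_path)
# model.fit(x, y, initial_epoch=last_finished_epoch, epochs=10, callbacks=[ckpt_cb])
</code></pre>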
|
tensorflow2.0|checkpointing
| 0
|
7,558
| 59,788,539
|
How can I convert this tensor to a numpy array?
|
<p>I'd like to apply the pagerank algorithm to the x_att tensor, but the nx.pagerank module only accepts numpy arrays. When I try to convert it using <strong>x_att.eval()</strong>, it says:</p>
<p>"tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'main_input_5' with dtype float and shape [?,6600]".</p>
<p>Can somebody please help me out?</p>
<pre><code>def variable_attn_15jan():
input_dim=input_dim_func()
main_input = Input(shape=(input_dim,),name='main_input')
inputs_w1=Lambda(lambda x: x[:,0:3300])(main_input)
inputs_w2=Lambda(lambda x: x[:,3300:6600])(main_input)
x1_attn= Dense(11, activation='softmax')(inputs_w1)
x2_attn= Dense(11, activation='softmax')(inputs_w2)
list_x_att1=[]
list_x_att2=[]
for i in range(11) :
val_scalar=Lambda(lambda x: x[:,i:(i+1)])(x1_attn)
list_x_att1.append(Lambda(lambda x: x[:,(i*300):(i+1)*300]*val_scalar)(inputs_w1))
x_att1 = concatenate(list_x_att1)
for i in range(11) :
val_scalar=Lambda(lambda x: x[:,i:(i+1)])(x2_attn)
list_x_att2.append(Lambda(lambda x: x[:,(i*300):(i+1)*300]*val_scalar)(inputs_w2))
x_att2 = concatenate(list_x_att2)
x_att = concatenate([x_att1,x_att2])
</code></pre>
|
<p>On your tensor (TensorFlow 2.0, eager execution), call <code>.numpy()</code> on the tensor itself:</p>
<p><code>npa = x_att.numpy()</code></p>
<p>where <code>npa</code> will be your numpy array.</p>
<p>Alternatively, for a tensor in TensorFlow < 2.0, evaluate it inside a session:</p>
<pre class="lang-py prettyprint-override"><code>npa = x_att.eval()
print(type(npa))
</code></pre>
<p>Update 1:</p>
<p>Use the code below to check what type of array you've got. Also see the comment from <em>rkern</em> posted <a href="https://github.com/numpy/numpy/issues/14000" rel="nofollow noreferrer">here</a> on Jul 14, 2019.</p>
<pre class="lang-py prettyprint-override"><code>type(x_att)
type(np.array(x_att))
</code></pre>
|
python|tensorflow|keras
| 0
|
7,559
| 59,551,458
|
Numpy / PyTorch - how to assign value with indices in different dimensions?
|
<p>Suppose I have a matrix and some indices</p>
<pre><code>a = np.array([[1, 2, 3], [4, 5, 6]])
a_indices = np.array([[0,2], [1,2]])
</code></pre>
<p>Is there any efficient way to achieve following operation?</p>
<pre><code>for i in range(2):
a[i, a_indices[i]] = 100
# a: np.array([[100, 2, 100], [4, 100, 100]])
</code></pre>
|
<p>Use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.put_along_axis.html" rel="noreferrer"><code>np.put_along_axis</code></a> -</p>
<pre><code>In [111]: np.put_along_axis(a,a_indices,100,axis=1)
In [112]: a
Out[112]:
array([[100, 2, 100],
[ 4, 100, 100]])
</code></pre>
<p>Alternatively, if you want to do it the explicit way, i.e. integer-based indexing -</p>
<pre><code>In [115]: a[np.arange(len(a_indices))[:,None], a_indices] = 100
</code></pre>
|
python|numpy|tensorflow|pytorch|tensor
| 7
|
7,560
| 59,714,814
|
Filter list based on another query result with JMESPath
|
<p>Having an object such as the one below:</p>
<pre><code>{
"pick": "a",
"elements": [
{"id": "a", "label": "First"},
{"id": "b", "label": "Second"}
]
}
</code></pre>
<p>how can I retrieve the item in the <code>elements</code> list where <code>id</code> is equal to the value of <code>pick</code>?</p>
<p>I was trying something like:</p>
<pre><code>elements[?id == pick]
</code></pre>
<p>But, apparently, the expression at the right of the comparator is evaluated relative to the object being tested against my filter expression.</p>
<p>How can I achieve what I want? If this is not possible out of the box, do you have any suggestion of where I should start extending JMESPath? Thank you!</p>
|
<p>Unfortunately, <em>JMESPath</em> does not allow referencing the parent element.</p>
<p>To circumvent this limitation, in this simple case, you can:</p>
<ul>
<li>read the <em>pick</em> attribute in the first query,</li>
<li>create the second query using the value just read,</li>
<li>read the wanted content in the second query.</li>
</ul>
<p>Actually, thanks to <em>f-strings</em>, the last two steps can be performed in
a single instruction, so the code can be:</p>
<pre><code>pck = jmespath.search('pick', dct)
jmespath.search(f'elements[?id == `{pck}`]', dct)
</code></pre>
<p>where <em>dct</em> is the source JSON object.</p>
<h1>A more complex case</h1>
<p>If you have a more complex case (e.g. many such elements, with different <em>pick</em>
values in each case), you should use another tool.</p>
<p>One quite interesting option is to make use of <em>Pandas</em> package.</p>
<p>Assume that your source dictionary contains:</p>
<pre><code>dct = {
"x1": {
"pick": "a",
"elements": [
{"id": "a", "label": "First_a"},
{"id": "b", "label": "Second_a"},
{"id": "c", "label": "Third_a"}
]
},
"x2": {
"pick": "b",
"elements": [
{"id": "a", "label": "First_b"},
{"id": "b", "label": "Second_b"},
{"id": "c", "label": "Third_b"}
]
}
}
</code></pre>
<p>The first thing to do is to convert <em>dct</em> into a <em>Pandas</em> DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame.from_dict(dct, orient='index')
</code></pre>
<p>The result (printed in a "shortened" form) is:</p>
<pre><code> pick elements
x1 a [{'id': 'a', 'label': 'First_a'}, {'id': 'b', ...
x2 b [{'id': 'a', 'label': 'First_b'}, {'id': 'b', ...
</code></pre>
<p>Description (if you have no experience in <em>Pandas</em>):</p>
<ul>
<li><em>x1</em>, <em>x2</em>, ... - the index column - values taken from first level keys
in <em>dct</em>.</li>
<li><em>pick</em> - column with (no surprise) <em>pick</em> elements,</li>
<li><em>elements</em> - column with <em>elements</em> (for now each cell contains the
whole list).</li>
</ul>
<p>This shape is not very useful, so let's <strong>explode</strong> the <em>elements</em> column:</p>
<pre><code>df = df.explode('elements')
</code></pre>
<p>Now <em>df</em> contains:</p>
<pre><code> pick elements
x1 a {'id': 'a', 'label': 'First_a'}
x1 a {'id': 'b', 'label': 'Second_a'}
x1 a {'id': 'c', 'label': 'Third_a'}
x2 b {'id': 'a', 'label': 'First_b'}
x2 b {'id': 'b', 'label': 'Second_b'}
x2 b {'id': 'c', 'label': 'Third_b'}
</code></pre>
<p>This shape is closer to what we need: each source row has been broken into
several rows, each with a <strong>separate</strong> item from the initial list.</p>
<p>There is one more thing to do, i.e. create a column containing <em>id</em> values,
to be later compared with <em>pick</em> column. To do it run:</p>
<pre><code>df['id'] = df.elements.apply(lambda dct: dct['id'])
</code></pre>
<p>Now <em>df</em> contains:</p>
<pre><code> pick elements id
x1 a {'id': 'a', 'label': 'First_a'} a
x1 a {'id': 'b', 'label': 'Second_a'} b
x1 a {'id': 'c', 'label': 'Third_a'} c
x2 b {'id': 'a', 'label': 'First_b'} a
x2 b {'id': 'b', 'label': 'Second_b'} b
x2 b {'id': 'c', 'label': 'Third_b'} c
</code></pre>
<p>And to get the final result you should:</p>
<ul>
<li>select rows with <em>pick</em> column == <em>id</em>,</li>
<li>take only <em>elements</em> column (together with key column, but this detail
<em>Pandas</em> gives you just out of the box).</li>
</ul>
<p>The code to do it is:</p>
<pre><code>df.query('pick == id').elements
</code></pre>
<p>giving:</p>
<pre><code>x1 {'id': 'a', 'label': 'First_a'}
x2 {'id': 'b', 'label': 'Second_b'}
</code></pre>
<p>In the <em>Pandas</em> parlance it is a <em>Series</em> (let's say a list with each element
"labelled" with an index).</p>
<p>Now you can convert it to a dictionary or whatever you wish.</p>
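<p>For example, a one-liner (just a sketch) to get a plain dictionary keyed by the original first-level keys:</p>
<pre><code>result = df.query('pick == id').elements.to_dict()
# {'x1': {'id': 'a', 'label': 'First_a'}, 'x2': {'id': 'b', 'label': 'Second_b'}}
</code></pre>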
|
python|pandas|jmespath
| 5
|
7,561
| 59,864,408
|
tensorflow:Your input ran out of data
|
<p>I am working on a seq2seq keras/tensorflow 2.0 model. Every time the user inputs something, my model prints the response perfectly fine. However on the last line of each response I get this:</p>
<blockquote>
<p>You: WARNING:tensorflow:Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least <code>steps_per_epoch * epochs</code> batches (in this case, 2 batches). You may need to use the repeat() function when building your dataset.</p>
</blockquote>
<p>The "You:" is my last output, before the user is supposed to type something new in. The model works totally fine, but I guess no error is ever good, but I don't quite get this error. It says "interrupting training", however I am not training anything, this program loads an already trained model. I guess this is why the error is not stopping the program?</p>
<p>In case it helps, my model looks like this:</p>
<pre><code>intent_model = keras.Sequential([
keras.layers.Dense(8, input_shape=[len(train_x[0])]), # input layer
keras.layers.Dense(8), # hidden layer
keras.layers.Dense(len(train_y[0]), activation="softmax"), # output layer
])
intent_model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
intent_model.fit(train_x, train_y, epochs=epochs)
test_loss, test_acc = intent_model.evaluate(train_x, train_y)
print("Tested Acc:", test_acc)
intent_model.save("models/intent_model.h5")
</code></pre>
|
<p>To make sure that you have "<em>at least <code>steps_per_epoch * epochs</code> batches</em>", set the <code>steps_per_epoch</code> to</p>
<pre><code>steps_per_epoch = len(X_train)//batch_size
validation_steps = len(X_test)//batch_size # if you have validation data
</code></pre>
<p>You can see the maximum number of batches that <code>model.fit()</code> can take by the progress bar when the training interrupts:</p>
<pre><code>5230/10000 [==============>...............] - ETA: 2:05:22 - loss: 0.0570
</code></pre>
<p>Here, the maximum would be 5230 - 1</p>
<p>Importantly, keep in mind that by default, <code>batch_size</code> is 32 in <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model#fit" rel="noreferrer"><code>model.fit()</code></a>.</p>
<p>If you're using a <code>tf.data.Dataset</code>, you can also add the <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#repeat" rel="noreferrer"><code>repeat()</code></a> method, but be careful: it will loop indefinitely (unless you specify a number).</p>
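<p>As an illustrative sketch only (it assumes <code>X_train</code>/<code>y_train</code> arrays and a batch size of 32, which are not in the original question), a repeating <code>tf.data</code> pipeline could look like:</p>
<pre><code>batch_size = 32
train_ds = (tf.data.Dataset.from_tensor_slices((X_train, y_train))
            .shuffle(len(X_train))
            .repeat()                      # loop indefinitely
            .batch(batch_size))
model.fit(train_ds, epochs=epochs,
          steps_per_epoch=len(X_train) // batch_size)
</code></pre>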
|
python|tensorflow|machine-learning|keras|deep-learning
| 24
|
7,562
| 61,693,503
|
How do groupby elements in pandas based on consecutive row values
|
<p>I have a dataframe as below :</p>
<pre><code> distance_along_path
0 0
1 2.2
2 4.5
3 7.0
4 0
5 3.0
6 5.0
7 0
8 2.0
9 5.0
10 7.0
</code></pre>
<p>I want to be able to group these by the distance_along_path values: every time a 0 is seen a new group is created, and until the next 0 all those rows belong to one group, as indicated below</p>
<pre><code> distance_along_path group
0 0 A
1 2.2 A
2 4.5 A
3 7.0 A
4 0 B
5 3.0 B
6 5.0 B
7 0 C
8 2.0 C
9 5.0 C
10 7.0 C
</code></pre>
<p>Thank you</p>
|
<p>You can try <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>eq</code></a> followed by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a>: </p>
<pre><code>df["group"] = df.distance_along_path.eq(0).cumsum()
</code></pre>
<p><strong>Explanation</strong>:</p>
<ol>
<li><p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.eq.html" rel="nofollow noreferrer"><code>eq</code></a> to find values equals to <code>0</code></p></li>
<li><p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a> to apply a cumulative sum over the <code>True</code> values</p></li>
</ol>
<p><strong>Code + Illustration</strong></p>
<pre><code># Step 1
print(df.distance_along_path.eq(0))
# 0 True
# 1 False
# 2 False
# 3 False
# 4 True
# 5 False
# 6 False
# 7 True
# 8 False
# 9 False
# 10 False
# Name: distance_along_path, dtype: bool
# Step 2
print(df.assign(group=df.distance_along_path.eq(0).cumsum()))
# distance_along_path group
# 0 0.0 1
# 1 2.2 1
# 2 4.5 1
# 3 7.0 1
# 4 0.0 2
# 5 3.0 2
# 6 5.0 2
# 7 0.0 3
# 8 2.0 3
# 9 5.0 3
# 10 7.0 3
</code></pre>
<p>Note: as you can see, the group column is a number and not a letter, but that doesn't matter if it's only used in a <code>groupby</code>; if you do need the letters, see the small follow-up below.</p>
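<p>If you do need the letters A, B, C from your example, a small follow-up (a sketch, assuming fewer than 27 groups) could be:</p>
<pre><code>import string
df["group"] = df.distance_along_path.eq(0).cumsum().map(lambda i: string.ascii_uppercase[i - 1])
</code></pre>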
|
python|pandas|pandas-groupby
| 1
|
7,563
| 62,003,143
|
Delete specific numbers characters from Excel cell counting backwards
|
<p>I have an Excel sheet where my column B has following combination of words and letters</p>
<p>(Name Lastname 3to4Numbers PM/AM Month date Year)</p>
<p>Example:</p>
<ul>
<li>Kevin Hart 206PM May 16 2020</li>
<li>Michael B Jordan 0339AM May 06 2020</li>
</ul>
<p>I want to go into each cell in my B column and remove the 3- to 4-digit number and the PM or AM.</p>
<p>I thought about counting backwards and removing positions 13 to 20 from the end, since the names will vary.</p>
<p>Any other ideas and how to do it?</p>
|
<p><strong>Edit</strong>: I realize you might not have it in pandas yet. If you don't, you can do something like this:</p>
<pre><code>import pandas as pd
df = pd.read_csv('YOURFILE.CSV')
</code></pre>
<p>And then you run the line under the <code>#Solution</code> in the code below, change <code>col</code> to the name of your column, and <code>col2</code> to whatever you want your new column to be called. You can save your file again with <code>df.to_csv('outputfile.csv')</code>. Good luck!</p>
<p>Here is a solution using Regex. </p>
<pre><code># Sample data
import pandas as pd
df = pd.DataFrame({
'col': ['Kevin Hart 206PM May 16 2020',
'Michael B Jordan 0339AM May 06 2020',
]
})
# Solution
df['col2'] = df['col'].str.replace('\s\d{3,4}[AP]M', '')
print(df)
col col2
0 Kevin Hart 206PM May 16 2020 Kevin Hart May 16 2020
1 Michael B Jordan 0339AM May 06 2020 Michael B Jordan May 06 2020
</code></pre>
|
python|excel|pandas|numpy|dataframe
| 2
|
7,564
| 61,943,386
|
How to assign smaller array to larger in overlapping area
|
<p>I'm trying to put a small 8x7 2D array, inside an 8x8 2D array.</p>
<p>Here's what I'm working with:</p>
<pre><code>--> Array called 'a' with shape 8x7
a = [[ 16., 11., 10., 16., 24., 40., 51.],
[ 12., 12., 14., 19., 26., 58., 60.],
[ 14., 13., 16., 24., 40., 57., 69.],
[ 14., 17., 22., 29., 51., 87., 80.],
[ 18., 22., 37., 56., 68., 109., 103.],
[ 24., 35., 55., 64., 81., 104., 113.],
[ 49., 64., 78., 87., 103., 121., 120.],
[ 72., 92., 95., 98., 112., 100., 103.]]
--> Array called 'b' with shape 8x8
b = [[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0.]]
</code></pre>
<p>So basically, what I want is:</p>
<pre><code>--> Array called 'c' with shape 8x8
c = [[ 16., 11., 10., 16., 24., 40., 51., 0],
[ 12., 12., 14., 19., 26., 58., 60., 0],
[ 14., 13., 16., 24., 40., 57., 69., 0],
[ 14., 17., 22., 29., 51., 87., 80., 0],
[ 18., 22., 37., 56., 68., 109., 103., 0],
[ 24., 35., 55., 64., 81., 104., 113., 0],
[ 49., 64., 78., 87., 103., 121., 120., 0],
[ 72., 92., 95., 98., 112., 100., 103., 0]]
</code></pre>
<p>Is there an easy way to do this, preferably without using loops, like 'for' , 'while', 'map'
or list comprehension?</p>
<p>Thank you in advance!</p>
|
<p>You can just slice assign to <code>b</code> up to the dimensions of <code>a</code>:</p>
<pre><code>x, y = a.shape
b[:x, :y] = a
</code></pre>
<hr>
<pre><code>print(b)
array([[ 16., 11., 10., 16., 24., 40., 51., 0.],
[ 12., 12., 14., 19., 26., 58., 60., 0.],
[ 14., 13., 16., 24., 40., 57., 69., 0.],
[ 14., 17., 22., 29., 51., 87., 80., 0.],
[ 18., 22., 37., 56., 68., 109., 103., 0.],
[ 24., 35., 55., 64., 81., 104., 113., 0.],
[ 49., 64., 78., 87., 103., 121., 120., 0.],
[ 72., 92., 95., 98., 112., 100., 103., 0.]])
</code></pre>
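<p>As an alternative sketch (not required here, just another way to get the same 8x8 result when the padding values are zeros, and assuming <code>a</code> is already a numpy array), numpy's <code>pad</code> can append the extra zero column directly:</p>
<pre><code>c = np.pad(a, ((0, 0), (0, 1)), mode='constant')   # pad no rows, and one zero column on the right
</code></pre>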
|
python|arrays|numpy|indexing|numpy-ndarray
| 2
|
7,565
| 61,637,020
|
Problem when importing json to dataframe with pandas
|
<p>I'm trying to import a .json file with pandas.read_json(), but it imports the file as one single line and column.</p>
<p>The structure of the json file is like this:</p>
<pre><code>{ "DataList": [ [ { "parameter": 12345, "parmeter 2": 56789, "DataSet": [ {"Data": "data", "Time": "date"} , {...}, {...} ], [ { "parameter": 12345, "parmeter 2": 56789, "DataSet": [ {"Data": "data", "Time": "date"} , {...}, {...} ] }
</code></pre>
<p>Anyone know how to read it correctly?
Thanks</p>
|
<p>No problem, it can still work fine.</p>
|
python|json|pandas
| 0
|
7,566
| 58,030,812
|
Pandas - add a column with value computed based on another column value in current and previous row
|
<p>Given the dataframe below, </p>
<pre><code>colNames = ["Time","Col2","Col3","Col4","Col5","Col6","Col7","Col8","Col9","Col10","Col11","Col12","Col13"]
colVals = [['05:17:55.703', '', '', '', '', '', '21', '', '3', '89', '891', '11', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '12', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '13', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '15', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '16', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '17', ''], ['05:17:55.703', '', '', '', '', '', '21', '', '3', '217', '891', '18', ''], ['05:17:55.707', '', '', '', '', '', '18', '', '3', '185', '892', '0', ''], ['05:17:55.707', '', '', '', '', '', '21', '', '3', '185', '892', '1', ''], ['05:17:55.707', '', '', '', '', '', '17', '', '3', '73', '892', '5', ''], ['05:17:55.707', '', '', '', '', '', '17', '', '3', '185', '892', '6', ''], ['05:17:55.707', '', '', '', '', '', '21', '', '3', '73', '892', '7', ''], ['05:17:55.708', '268', '4', '28', '-67.60', '13', '', '2', '', '', '', '', '2'], ['05:17:55.711', '', '', '', '', '', '18', '', '3', '57', '892', '10', ''], ['05:17:55.711', '', '', '', '', '', '21', '', '3', '201', '892', '11', ''], ['05:17:55.711', '', '', '', '', '', '21', '', '3', '25', '892', '12', ''], ['05:17:55.723', '', '', '', '', '', '21', '', '3', '217', '893', '11', ''], ['05:17:55.723', '', '', '', '', '', '21', '', '3', '217', '893', '15', ''], ['05:17:55.723', '', '', '', '', '', '21', '', '3', '217', '893', '16', ''], ['05:17:55.726', '268', '4', '', '-67.80', '', '', '', '', '', '', '', ''], ['05:17:55.728', '', '', '28', '', '12', '31', '2', '3', '185', '894', '0', '1'], ['05:17:55.728', '', '', '', '', '', '31', '', '3', '185', '894', '1', ''], ['05:17:55.731', '', '', '', '', '', '31', '', '3', '217', '894', '10', ''], ['05:17:55.731', '', '', '', '', '', '20', '', '3', '217', '894', '11', ''], ['05:17:55.731', '', '', '', '', '', '20', '', '3', '217', '894', '12', ''], ['05:17:55.731', '', '', '', '', '', '20', '', '3', '217', '894', '13', ''], ['05:17:55.743', '', '', '', '', '', '20', '', '3', '217', '895', '11', ''], ['05:17:55.743', '', '', '', '', '', '20', '', '3', '217', '895', '15', ''], ['05:17:55.743', '', '', '', '', '', '20', '', '3', '217', '895', '16', ''], ['05:17:55.746', '268', '4', '', '-67.82', '', '', '', '', '', '', '', ''], ['05:17:55.747', '', '', '28', '', '13', '20', '2', '3', '185', '896', '1', '2'], ['05:17:55.747', '', '', '', '', '', '20', '', '3', '185', '896', '2', ''], ['05:17:55.747', '', '', '', '', '', '30', '', '3', '217', '896', '5', ''], ['05:17:55.751', '', '', '', '', '', '18', '', '3', '217', '896', '10', ''], ['05:17:55.751', '', '', '', '', '', '21', '', '3', '217', '896', '11', ''], ['05:17:55.751', '', '', '', '', '', '21', '', '3', '217', '896', '12', ''], ['05:17:55.751', '', '', '', '', '', '21', '', '3', '217', '896', '13', ''], ['05:17:55.763', '', '', '', '', '', '31', '', '3', '217', '897', '11', ''], ['05:17:55.763', '', '', '', '', '', '30', '', '3', '217', '897', '15', ''], ['05:17:55.763', '', '', '', '', '', '20', '', '3', '217', '897', '16', ''], ['05:17:55.763', '', '', '', '', '', '20', '', '3', '217', '897', '17', ''], ['05:17:55.766', '268', '4', '', '-67.13', '', '', '', '', '', '', '', ''], ['05:17:55.768', '', '', '28', '', '12', '20', '2', '3', '185', '898', '3', '2'], ['05:17:55.768', '', '', '', '', '', '16', '', '3', '217', '898', '6', ''], ['05:17:55.771', '', '', '', '', '', '18', '', '3', '217', '898', '10', ''], ['05:17:55.771', '', '', '', '', '', '20', '', '3', '217', '898', '11', 
''], ['05:17:55.771', '', '', '', '', '', '20', '', '3', '217', '898', '12', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '217', '899', '11', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '41', '899', '12', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '25', '899', '13', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '217', '899', '15', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '217', '899', '16', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '217', '899', '17', ''], ['05:17:55.784', '', '', '', '', '', '20', '', '3', '217', '899', '18', ''], ['05:17:55.786', '268', '4', '', '-67.66', '', '', '', '', '', '', '', ''], ['05:17:55.788', '', '', '28', '', '13', '18', '2', '3', '185', '900', '0', '2'], ['05:17:55.788', '', '', '', '', '', '20', '', '3', '185', '900', '1', ''], ['05:17:55.788', '', '', '', '', '', '20', '', '3', '185', '900', '2', ''], ['05:17:55.788', '', '', '', '', '', '16', '', '3', '41', '900', '5', ''], ['05:17:55.788', '', '', '', '', '', '17', '', '3', '185', '900', '6', ''], ['05:17:55.791', '', '', '', '', '', '20', '', '3', '105', '900', '7', ''], ['05:17:55.791', '', '', '', '', '', '20', '', '3', '89', '900', '8', ''], ['05:17:55.791', '', '', '', '', '', '18', '', '3', '217', '900', '10', ''], ['05:17:55.791', '', '', '', '', '', '20', '', '3', '217', '900', '11', ''], ['05:17:55.791', '', '', '', '', '', '20', '', '3', '25', '900', '12', ''], ['05:17:55.806', '268', '4', '', '-67.50', '', '', '', '', '', '', '', ''], ['05:17:55.808', '', '', '28', '', '12', '31', '2', '3', '185', '902', '0', '1'], ['05:17:55.808', '', '', '', '', '', '31', '', '3', '185', '902', '1', ''], ['05:17:55.808', '', '', '', '', '', '20', '', '3', '25', '902', '2', ''], ['05:17:55.808', '', '', '', '', '', '20', '', '3', '25', '902', '3', ''], ['05:17:55.808', '', '', '', '', '', '16', '', '3', '217', '902', '5', ''], ['05:17:55.808', '', '', '', '', '', '16', '', '3', '217', '902', '6', ''], ['05:17:55.811', '', '', '', '', '', '20', '', '3', '89', '902', '7', ''], ['05:17:55.811', '', '', '', '', '', '20', '', '3', '121', '902', '8', ''], ['05:17:55.811', '', '', '', '', '', '18', '', '3', '217', '902', '10', ''], ['05:17:55.811', '', '', '', '', '', '20', '', '3', '217', '902', '11', ''], ['05:17:55.811', '', '', '', '', '', '20', '', '3', '73', '902', '12', ''], ['05:17:55.811', '', '', '', '', '', '20', '', '3', '9', '902', '15', ''], ['05:17:55.815', '', '', '', '', '', '20', '', '3', '217', '902', '16', ''], ['05:17:55.815', '', '', '', '', '', '20', '', '3', '25', '902', '17', ''], ['05:17:55.815', '', '', '', '', '', '20', '', '3', '217', '902', '18', ''], ['05:17:55.815', '', '', '', '', '', '18', '', '3', '217', '903', '0', ''], ['05:17:55.815', '', '', '', '', '', '21', '', '3', '217', '903', '1', ''], ['05:17:55.815', '', '', '', '', '', '19', '', '3', '105', '903', '2', ''], ['05:17:55.815', '', '', '', '', '', '21', '', '3', '41', '903', '3', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '217', '903', '11', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '9', '903', '12', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '105', '903', '13', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '217', '903', '15', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '217', '903', '16', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '121', '903', '17', ''], ['05:17:55.823', '', '', '', '', '', '21', '', '3', '89', '903', '18', ''], ['05:17:55.826', '268', '4', '', 
'-67.51', '', '', '', '', '', '', '', ''], ['05:17:55.828', '', '', '28', '', '12', '18', '2', '3', '185', '904', '0', '1'], ['05:17:55.828', '', '', '', '', '', '21', '', '3', '185', '904', '1', ''], ['05:17:55.828', '', '', '', '', '', '21', '', '3', '185', '904', '2', ''], ['05:17:55.828', '', '', '', '', '', '21', '', '3', '185', '904', '3', ''], ['05:17:55.828', '', '', '', '', '', '17', '', '3', '217', '904', '5', ''], ['05:17:55.828', '', '', '', '', '', '17', '', '3', '217', '904', '6', ''], ['05:17:55.831', '', '', '', '', '', '21', '', '3', '217', '904', '7', ''], ['05:17:55.831', '', '', '', '', '', '20', '', '3', '169', '904', '11', ''], ['05:17:55.831', '', '', '', '', '', '20', '', '3', '217', '904', '12', ''], ['05:17:55.831', '', '', '', '', '', '20', '', '3', '217', '904', '13', ''], ['05:17:55.846', '268', '4', '', '-67.01', '', '', '', '', '', '', '', ''], ['05:17:55.848', '', '', '28', '', '13', '19', '2', '3', '57', '906', '1', '2'], ['05:17:55.848', '', '', '', '', '', '19', '', '3', '41', '906', '2', ''], ['05:17:55.848', '', '', '', '', '', '19', '', '3', '73', '906', '3', ''], ['05:17:55.848', '', '', '', '', '', '16', '', '3', '217', '906', '5', ''], ['05:17:55.848', '', '', '', '', '', '16', '', '3', '217', '906', '6', ''], ['05:17:55.848', '', '', '', '', '', '19', '', '3', '9', '906', '7', ''], ['05:17:55.851', '', '', '', '', '', '20', '', '3', '121', '906', '11', ''], ['05:17:55.851', '', '', '', '', '', '20', '', '3', '57', '906', '12', ''], ['05:17:55.851', '', '', '', '', '', '20', '', '3', '105', '906', '13', ''], ['05:17:55.855', '', '', '', '', '', '20', '', '3', '217', '906', '15', ''], ['05:17:55.855', '', '', '', '', '', '20', '', '3', '217', '906', '16', ''], ['05:17:55.855', '', '', '', '', '', '20', '', '3', '105', '906', '17', ''], ['05:17:55.855', '', '', '', '', '', '17', '', '3', '185', '907', '0', ''], ['05:17:55.855', '', '', '', '', '', '20', '', '3', '217', '907', '1', ''], ['05:17:55.855', '', '', '', '', '', '20', '', '3', '9', '907', '2', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '217', '907', '11', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '57', '907', '12', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '153', '907', '13', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '217', '907', '15', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '217', '907', '16', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '57', '907', '17', ''], ['05:17:55.864', '', '', '', '', '', '20', '', '3', '105', '907', '18', ''], ['05:17:55.867', '', '', '', '', '', '17', '', '3', '185', '908', '0', ''], ['05:17:55.867', '', '', '', '', '', '20', '', '3', '185', '908', '1', ''], ['05:17:55.867', '', '', '', '', '', '20', '', '3', '9', '908', '2', ''], ['05:17:55.867', '', '', '', '', '', '16', '', '3', '73', '908', '6', ''], ['05:17:55.868', '268', '4', '28', '-66.79', '13', '', '2', '', '', '', '', '2'], ['05:17:55.871', '', '', '', '', '', '20', '', '3', '105', '908', '7', ''], ['05:17:55.871', '', '', '', '', '', '20', '', '3', '25', '908', '8', ''], ['05:17:55.871', '', '', '', '', '', '17', '', '3', '217', '908', '10', ''], ['05:17:55.871', '', '', '', '', '', '21', '', '3', '217', '908', '11', ''], ['05:17:55.871', '', '', '', '', '', '21', '', '3', '9', '908', '12', ''], ['05:17:55.871', '', '', '', '', '', '21', '', '3', '121', '908', '13', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '217', '908', '15', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '217', '908', 
'16', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '57', '908', '17', ''], ['05:17:55.875', '', '', '', '', '', '18', '', '3', '73', '909', '0', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '217', '909', '1', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '89', '909', '2', ''], ['05:17:55.875', '', '', '', '', '', '21', '', '3', '89', '909', '3', ''], ['05:17:55.886', '268', '4', '', '-67.48', '', '', '', '', '', '', '', ''], ['05:17:55.888', '', '', '28', '', '12', '31', '2', '3', '185', '910', '0', '1'], ['05:17:55.888', '', '', '', '', '', '21', '', '3', '89', '910', '1', ''], ['05:17:55.888', '', '', '', '', '', '21', '', '3', '9', '910', '2', ''], ['05:17:55.888', '', '', '', '', '', '21', '', '3', '185', '910', '3', ''], ['05:17:55.888', '', '', '', '', '', '17', '', '3', '217', '910', '5', ''], ['05:17:55.888', '', '', '', '', '', '17', '', '3', '217', '910', '6', ''], ['05:17:55.891', '', '', '', '', '', '21', '', '3', '89', '910', '7', ''], ['05:17:55.891', '', '', '', '', '', '21', '', '3', '73', '910', '8', ''], ['05:17:55.891', '', '', '', '', '', '18', '', '3', '217', '910', '10', ''], ['05:17:55.891', '', '', '', '', '', '21', '', '3', '217', '910', '11', ''], ['05:17:55.891', '', '', '', '', '', '21', '', '3', '57', '910', '12', ''], ['05:17:55.900', '', '', '', '', '', '21', '', '3', '217', '911', '3', ''], ['05:17:55.900', '', '', '', '', '', '17', '', '3', '217', '911', '5', ''], ['05:17:55.900', '', '', '', '', '', '17', '', '3', '217', '911', '6', ''], ['05:17:55.900', '', '', '', '', '', '21', '', '3', '105', '911', '7', ''], ['05:17:55.900', '', '', '', '', '', '21', '', '3', '25', '911', '8', ''], ['05:17:55.900', '', '', '', '', '', '18', '', '3', '217', '911', '10', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '217', '911', '11', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '57', '911', '12', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '153', '911', '13', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '217', '911', '15', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '217', '911', '16', ''], ['05:17:55.903', '', '', '', '', '', '21', '', '3', '137', '911', '17', ''], ['05:17:55.906', '268', '4', '', '-67.79', '', '', '', '', '', '', '', ''], ['05:17:55.908', '', '', '28', '', '13', '18', '2', '3', '57', '912', '0', '2'], ['05:17:55.908', '', '', '', '', '', '21', '', '3', '185', '912', '1', ''], ['05:17:55.908', '', '', '', '', '', '21', '', '3', '57', '912', '2', ''], ['05:17:55.908', '', '', '', '', '', '17', '', '3', '137', '912', '5', ''], ['05:17:55.908', '', '', '', '', '', '17', '', '3', '169', '912', '6', ''], ['05:17:55.923', '', '', '', '', '', '21', '', '3', '217', '913', '11', ''], ['05:17:55.923', '', '', '', '', '', '21', '', '3', '89', '913', '12', ''], ['05:17:55.926', '268', '4', '', '-68.13', '', '', '', '', '', '', '', ''], ['05:17:55.928', '', '', '28', '', '13', '31', '2', '3', '185', '914', '0', '2'], ['05:17:55.928', '', '', '', '', '', '21', '', '3', '105', '914', '2', ''], ['05:17:55.928', '', '', '', '', '', '17', '', '3', '25', '914', '5', ''], ['05:17:55.928', '', '', '', '', '', '17', '', '3', '121', '914', '6', ''], ['05:17:55.928', '', '', '', '', '', '21', '', '3', '25', '914', '7', ''], ['05:17:55.931', '', '', '', '', '', '18', '', '3', '89', '914', '10', ''], ['05:17:55.931', '', '', '', '', '', '21', '', '3', '217', '914', '11', ''], ['05:17:55.931', '', '', '', '', '', '21', '', '3', '73', '914', '12', ''], ['05:17:55.939', '', '', '', '', 
'', '17', '', '3', '137', '915', '5', ''], ['05:17:55.939', '', '', '', '', '', '17', '', '3', '153', '915', '6', ''], ['05:17:55.939', '', '', '', '', '', '21', '', '3', '73', '915', '7', ''], ['05:17:55.939', '', '', '', '', '', '18', '', '3', '153', '915', '10', ''], ['05:17:55.943', '', '', '', '', '', '21', '', '3', '217', '915', '11', ''], ['05:17:55.943', '', '', '', '', '', '21', '', '3', '137', '915', '12', ''], ['05:17:55.943', '', '', '', '', '', '21', '', '3', '137', '915', '15', ''], ['05:17:55.943', '', '', '', '', '', '21', '', '3', '217', '915', '16', ''], ['05:17:55.943', '', '', '', '', '', '21', '', '3', '105', '915', '17', ''], ['05:17:55.946', '268', '4', '', '-67.45', '', '', '', '', '', '', '', ''], ['05:17:55.948', '', '', '28', '', '9', '30', '3', '3', '160', '916', '0', '3'], ['05:17:55.948', '', '', '', '', '', '21', '', '3', '121', '916', '1', ''], ['05:17:55.948', '', '', '', '', '', '21', '', '3', '105', '916', '2', ''], ['05:17:55.948', '', '', '', '', '', '17', '', '3', '73', '916', '5', ''], ['05:17:55.948', '', '', '', '', '', '17', '', '3', '137', '916', '6', ''], ['05:17:55.959', '', '', '', '', '', '13', '', '4', '217', '917', '3', ''], ['05:17:55.959', '', '', '', '', '', '11', '', '4', '217', '917', '5', ''], ['05:17:55.959', '', '', '', '', '', '11', '', '4', '217', '917', '6', ''], ['05:17:55.959', '', '', '', '', '', '13', '', '4', '217', '917', '7', ''], ['05:17:55.959', '', '', '', '', '', '13', '', '4', '217', '917', '8', ''], ['05:17:55.959', '', '', '', '', '', '11', '', '4', '153', '917', '10', ''], ['05:17:55.963', '', '', '', '', '', '13', '', '4', '217', '917', '11', ''], ['05:17:55.963', '', '', '', '', '', '13', '', '4', '121', '917', '12', ''], ['05:17:55.966', '268', '4', '', '-67.22', '', '', '', '', '', '', '', ''], ['05:17:55.968', '', '', '28', '', '12', '14', '2', '4', '57', '918', '1', '2'], ['05:17:55.968', '', '', '', '', '', '14', '', '4', '185', '918', '2', ''], ['05:17:55.968', '', '', '', '', '', '12', '', '4', '169', '918', '5', ''], ['05:17:55.983', '', '', '', '', '', '14', '', '4', '57', '919', '11', ''], ['05:17:55.983', '', '', '', '', '', '14', '', '4', '217', '919', '12', ''], ['05:17:55.983', '', '', '', '', '', '14', '', '4', '9', '919', '15', ''], ['05:17:55.983', '', '', '', '', '', '14', '', '4', '217', '919', '16', ''], ['05:17:55.983', '', '', '', '', '', '14', '', '4', '217', '919', '17', ''], ['05:17:55.986', '268', '4', '', '-67.80', '', '', '', '', '', '', '', ''], ['05:17:55.988', '', '', '28', '', '13', '14', '2', '4', '9', '920', '1', '1'], ['05:17:55.988', '', '', '', '', '', '14', '', '4', '41', '920', '2', ''], ['05:17:55.988', '', '', '', '', '', '14', '', '4', '57', '920', '3', ''], ['05:17:55.988', '', '', '', '', '', '12', '', '4', '217', '920', '5', ''], ['05:17:55.988', '', '', '', '', '', '12', '', '4', '217', '920', '6', ''], ['05:17:55.988', '', '', '', '', '', '14', '', '4', '41', '920', '7', ''], ['05:17:55.991', '', '', '', '', '', '14', '', '4', '89', '920', '8', ''], ['05:17:55.991', '', '', '', '', '', '22', '', '3', '217', '920', '11', ''], ['05:17:55.991', '', '', '', '', '', '22', '', '3', '89', '920', '12', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '25', '921', '11', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '121', '921', '12', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '25', '921', '13', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '217', '921', '15', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '217', '921', 
'16', ''], ['05:17:56.003', '', '', '', '', '', '22', '', '3', '217', '921', '17', ''], ['05:17:56.006', '268', '4', '', '-67.31', '', '', '', '', '', '', '', ''], ['05:17:56.008', '', '', '28', '', '10', '22', '3', '3', '9', '922', '1', '3'], ['05:17:56.008', '', '', '', '', '', '22', '', '3', '73', '922', '2', ''], ['05:17:56.008', '', '', '', '', '', '17', '', '3', '121', '922', '5', ''], ['05:17:56.008', '', '', '', '', '', '17', '', '3', '185', '922', '6', ''], ['05:17:56.011', '', '', '', '', '', '22', '', '3', '217', '922', '7', ''], ['05:17:56.011', '', '', '', '', '', '22', '', '3', '121', '922', '11', ''], ['05:17:56.011', '', '', '', '', '', '22', '', '3', '89', '922', '12', ''], ['05:17:56.020', '', '', '', '', '', '17', '', '3', '105', '923', '5', ''], ['05:17:56.020', '', '', '', '', '', '17', '', '3', '185', '923', '6', ''], ['05:17:56.020', '', '', '', '', '', '22', '', '3', '25', '923', '7', ''], ['05:17:56.020', '', '', '', '', '', '31', '', '3', '112', '923', '10', ''], ['05:17:56.023', '', '', '', '', '', '31', '', '3', '112', '923', '11', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '12', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '13', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '15', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '16', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '17', ''], ['05:17:56.023', '', '', '', '', '', '22', '', '3', '217', '923', '18', ''], ['05:17:56.027', '', '', '', '', '', '18', '', '3', '57', '924', '0', ''], ['05:17:56.027', '', '', '', '', '', '22', '', '3', '41', '924', '1', ''], ['05:17:56.027', '', '', '', '', '', '22', '', '3', '185', '924', '2', ''], ['05:17:56.027', '', '', '', '', '', '22', '', '3', '185', '924', '3', ''], ['05:17:56.027', '', '', '', '', '', '17', '', '3', '73', '924', '5', ''], ['05:17:56.027', '', '', '', '', '', '17', '', '3', '217', '924', '6', ''], ['05:17:56.028', '268', '4', '28', '-67.68', '13', '', '2', '', '', '', '', '2'], ['05:17:56.039', '', '', '', '', '', '22', '', '3', '217', '925', '3', ''], ['05:17:56.039', '', '', '', '', '', '17', '', '3', '217', '925', '5', ''], ['05:17:56.039', '', '', '', '', '', '17', '', '3', '217', '925', '6', ''], ['05:17:56.039', '', '', '', '', '', '22', '', '3', '217', '925', '7', ''], ['05:17:56.039', '', '', '', '', '', '18', '', '3', '57', '925', '10', ''], ['05:17:56.043', '', '', '', '', '', '22', '', '3', '73', '925', '16', '']]
df = pd.DataFrame(colVals, columns=colNames)
df = df.set_index('Time')
df = df.apply(pd.to_numeric, errors='coerce')
</code></pre>
<p>I need to add a new <code>Col14</code> that starts at 0 and increments by one whenever the value in <code>Col11</code> of the current row is less than the value in <code>Col11</code> of the previous row; if the previous row's <code>Col11</code> value is <code>NaN</code>, it shouldn't increment <code>Col14</code>.</p>
<p>For example</p>
<pre><code>+-------+-------+-------------------------------------------------------------------------------------------------+
| Col11 | Col14 | |
+-------+-------+-------------------------------------------------------------------------------------------------+
| 900 | 0 | start with 0 |
| NaN | 0 | |
| 900 | 0 | |
| 903 | 0 | |
| 904 | 0 | |
| 904 | 0 | |
| 8 | 1 | increment Col14 by 1 when current row's Col11 value is less than the previous row's Col11 value |
| 8 | 1 | |
| 200 | 1 | |
| 201 | 1 | |
| NaN   | 1     | if Col11 is NaN, Col14 takes same value as prev row                                             |
| 0 | 2 | increment Col14 by 1 when current row's Col11 value is less than the previous row's Col11 value |
| 1 | 2 | |
| NaN | 2 | |
| NaN | 2 | |
| 0 | 3 | increment Col14 by 1 when current row's Col11 value is less than the previous row's Col11 value |
+-------+-------+-------------------------------------------------------------------------------------------------+
</code></pre>
<p>Is there a way to get reference to the previous row ?</p>
|
<p>Can you try this?</p>
<pre><code>df['col14']=df.Col11.gt(df.Col11.shift(-1)).cumsum()
</code></pre>
<p>or </p>
<pre><code>df['col14a']=df.Col11.gt(df.Col11.shift(-1)).shift().fillna(0).cumsum().astype(int)
</code></pre>
<p>The difference between the two is that the count switches at the end of the lower value in the first one, and at the start of the new value in the second one.</p>
<p>If you want to treat NaN the same as the previous non-NaN value, use the code below.</p>
<pre><code>df['col14b']=df.Col11.fillna(method='ffill').gt(df.Col11.fillna(method='ffill').shift(-1)).shift().fillna(0).cumsum().astype(int)
</code></pre>
<p>With the smaller dummy dataset you provided, output is below</p>
<pre><code>Col11 col14 col14a col14b
0 900.0 0 0 0
1 NaN 0 0 0
2 900.0 0 0 0
3 900.0 0 0 0
4 903.0 0 0 0
5 904.0 0 0 0
6 904.0 1 0 0
7 8.0 1 1 1
8 8.0 1 1 1
9 200.0 1 1 1
10 201.0 1 1 1
11 NaN 1 1 1
12 0.0 1 1 2
13 1.0 1 1 2
14 NaN 1 1 2
15 NaN 1 1 2
16 0.0 1 1 3
</code></pre>
<p><strong>Details</strong>
I am explaining the last one here, the other two are similar to this.</p>
<p>With <code>df.Col11.fillna(method='ffill')</code> we are filling the NaN with the previous valid values, so that the output is as below (we are not changing Col11, we are just using this while creating the new column)</p>
<pre><code>0 900.0
1 900.0
2 900.0
3 900.0
4 903.0
5 904.0
6 904.0
7 8.0
8 8.0
9 200.0
10 201.0
11 201.0
12 0.0
13 1.0
14 1.0
15 1.0
16 0.0
</code></pre>
<p>With <code>df.Col11.fillna(method='ffill').shift(-1)</code> we are just shifting the series up by one row so that we can compare each row's value with the next row's value.</p>
<pre><code>0 900.0
1 900.0
2 900.0
3 903.0
4 904.0
5 904.0
6 8.0
7 8.0
8 200.0
9 201.0
10 201.0
11 0.0
12 1.0
13 1.0
14 1.0
15 0.0
16 NaN
</code></pre>
<p>With <code>df.Col11.fillna(method='ffill').gt(df.Col11.fillna(method='ffill').shift(-1))</code> we do the comparison and the result is true or false, as below.</p>
<pre><code>0 False
1 False
2 False
3 False
4 False
5 False
6 True
7 False
8 False
9 False
10 False
11 True
12 False
13 False
14 False
15 True
16 False
</code></pre>
<p>We use <code>shift</code> to move the values one row down, since you want the number to switch on the row where the smaller value appears. When we shift the rows down, the very first row becomes <code>NaN</code>; we replace it with 0 (since we have to start with zero) using <code>.fillna(0)</code>. Then we apply <code>.cumsum()</code>, which only adds up the <code>True</code> values, so we get the counts we need. Finally, we cast the column with <code>.astype(int)</code> to make all the values integers.</p>
|
python|pandas|dataframe
| 2
|
7,567
| 58,007,325
|
How to merge list of lists with a tsv file in pandas
|
<p>I have a list of lists namely <code>mylist</code> as follows.</p>
<pre><code>mylist = [[9, ["nuts", "fruits"]], [12, ["france", "italy", "rome", "paris"]], [18, ["cat", "dog", "parrot", "rabbit", "cow"]], [19, ["ebay", "wish"]]]
</code></pre>
<p>I also have a tsv file namely <code>myinput</code> as follows.</p>
<pre><code>ID Category
12 ["places", "locations"]
19 ["online", "customer"]
18 ["pets"]
9 ["food"]
</code></pre>
<p>I want to combine these two with <code>ID</code> as follows.</p>
<pre><code>my_output = [[9, ["nuts", "fruits"], ["food"]], [12, ["france", "italy", "rome", "paris"], ["places", "locations"]], [18, ["cat", "dog", "parrot", "rabbit", "cow"], ["pets"]], [19, ["ebay", "wish"], ["online", "customer"]]]
</code></pre>
<p>I initially thought to convert the tsv file to a list as follows and perform typical list processing in Python.</p>
<pre><code>input_data = pd.read_csv("myinput.tsv", header=0, delimiter="\t", quoting=3 )
my_data = input_data.values.tolist()
</code></pre>
<p>However, I am wondering if it is possible to do the same using pandas.</p>
<p>I am happy to provide more details if needed.</p>
|
<p>Use <code>map</code></p>
<p><strong>Ex:</strong></p>
<pre><code>mylist = [[9, ["nuts", "fruits"]], [12, ["france", "italy", "rome", "paris"]], [18, ["cat", "dog", "parrot", "rabbit", "cow"]], [19, ["ebay", "wish"]]]
d = dict(mylist)
df = pd.DataFrame({"ID": [12, 19, 18, 9],
"Category": [["places", "locations"], ["online", "customer"], ["pets"], ["food"]]})
df["D"] = df["ID"].map(d)
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> ID Category D
0 12 [places, locations] [france, italy, rome, paris]
1 19 [online, customer] [ebay, wish]
2 18 [pets] [cat, dog, parrot, rabbit, cow]
3 9 [food] [nuts, fruits]
</code></pre>
<hr>
<pre><code>print(list(zip(df.ID, df.Category, df.D)))
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[(12, ['places', 'locations'], ['france', 'italy', 'rome', 'paris']), (19, ['online', 'customer'], ['ebay', 'wish']), (18, ['pets'], ['cat', 'dog', 'parrot', 'rabbit', 'cow']), (9, ['food'], ['nuts', 'fruits'])]
</code></pre>
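<p>Note that when <code>Category</code> is read straight from the tsv it arrives as a plain string such as <code>'["places", "locations"]'</code>; a sketch of turning it back into real lists (assuming the tsv layout shown in the question) could be:</p>
<pre><code>import ast
import pandas as pd

mylist = [[9, ["nuts", "fruits"]], [12, ["france", "italy", "rome", "paris"]],
          [18, ["cat", "dog", "parrot", "rabbit", "cow"]], [19, ["ebay", "wish"]]]

input_data = pd.read_csv("myinput.tsv", sep="\t")
input_data["Category"] = input_data["Category"].apply(ast.literal_eval)  # string -> list
input_data["D"] = input_data["ID"].map(dict(mylist))
my_output = list(zip(input_data.ID, input_data.D, input_data.Category))
</code></pre>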
|
pandas
| 2
|
7,568
| 54,703,473
|
TensorFlow why we still use tf.name_scope when we already have the function tf.variable_scope
|
<p>I do not understand why we also need the function <code>tf.name_scope</code> when we already have <code>tf.variable_scope</code>. From the Tensorflow official API, I see that the <code>tf.variable_scope</code> is more powerful because it can have an effect on <code>tf.get_variable</code>. When we create layers and want to share variables, we always use <code>tf.variable_scope</code> and <code>tf.name_scope</code>. However, I try to learn something new from code released by Nvidia on GitHub. I found that it is frequent for coders to use <code>tf.name_scope</code>. Why do we still need this function? </p>
|
<p>You can use <code>tf.variable_scope</code> to add a prefix to <strong>both</strong> variables created with <code>tf.get_variable</code> and operations: as you said, this also enables variable sharing, and it makes the first call to <code>tf.get_variable</code> under the scope define a new variable.</p>
<p><code>tf.name_scope</code> adds a prefix <strong>only</strong> to operations: a variable created with <code>tf.get_variable</code> inside a <code>tf.name_scope</code> is not prefixed, so the name scope is ignored completely for that variable and you are not declaring a prefixed variable in any way.</p>
<p>This can be useful when you want to create an operation block (using <code>tf.name_scope</code>) that uses a variable declared outside of it. This variable can even be used by multiple operation blocks at the same time.</p>
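<p>A minimal TF 1.x sketch of the difference (the variable and op names here are only illustrative):</p>
<pre><code>import tensorflow as tf  # TF 1.x style graph code

with tf.variable_scope("vs"):
    v1 = tf.get_variable("v", [1])       # variable name: "vs/v"
    a1 = tf.add(v1, 1.0, name="add")     # op name: "vs/add"

with tf.name_scope("ns"):
    v2 = tf.get_variable("w", [1])       # variable name: "w" (name_scope ignored)
    a2 = tf.add(v2, 1.0, name="add")     # op name: "ns/add"

print(v1.name, a1.name)  # vs/v:0  vs/add:0
print(v2.name, a2.name)  # w:0     ns/add:0
</code></pre>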
|
python|tensorflow
| 0
|
7,569
| 67,334,932
|
ValueError trying to remove elements from a pandas dataframe that are in a list
|
<p>I am trying to remove items from a pandas dataframe that have a value for column a that is part of a list.</p>
<pre><code>import pandas as pd
a = ['abc', 'def', 'ghi', 'jkl', 'mno', 'pqr', 'stu', 'vwx', 'yz']
b = [1,2,3,2,1,1,3,2,1]
df = pd.DataFrame(zip(a, b), columns = ['a', 'b'])
print(df)
verwijder = ['jkl', 'mno', 'vwx']
df = df[df['a'] not in verwijder]
print(df)
</code></pre>
<p>The above throws an <code>ValueError</code>:</p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What does this error mean and how can I fix it?</p>
|
<p>The error means pandas cannot decide whether a whole Series is truthy: <code>in</code>/<code>not in</code> on a Series does not perform element-wise membership testing, so the comparison collapses into an ambiguous boolean. Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isin.html" rel="nofollow noreferrer"><code>.isin()</code></a> instead, with <code>~</code> for the negation.</p>
<pre class="lang-py prettyprint-override"><code>df = df[~df['a'].isin(verwijder)]
</code></pre>
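<p>With the sample data this keeps only the rows whose <code>a</code> is not in <code>verwijder</code>:</p>
<pre><code>     a  b
0  abc  1
1  def  2
2  ghi  3
5  pqr  1
6  stu  3
8   yz  1
</code></pre>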
|
python|python-3.x|pandas|dataframe
| 3
|
7,570
| 67,495,172
|
Unable to remove rows from dataframe based on condition
|
<p>So i have a dataframe, df:</p>
<pre><code> Rank Name Platform ... JP_Sales Other_Sales Global_Sales
0 1 Wii Sports Wii ... 3.77 8.46 82.74
1 2 Super Mario Bros. NES ... 6.81 0.77 40.24
2 3 Mario Kart Wii Wii ... 3.79 3.31 35.82
3 4 Wii Sports Resort Wii ... 3.28 2.96 33.00
4 5 Pokemon Red/Pokemon Blue GB ... 10.22 1.00 31.37
... ... ... ... ... ... ... ...
16593 16596 Woody Woodpecker in Crazy Castle 5 GBA ... 0.00 0.00 0.01
16594 16597 Men in Black II: Alien Escape GC ... 0.00 0.00 0.01
16595 16598 SCORE International Baja 1000: The Official Game PS2 ... 0.00 0.00 0.01
16596 16599 Know How 2 DS ... 0.00 0.00 0.01
16597 16600 Spirits & Spells GBA ... 0.00 0.00 0.01
</code></pre>
<p>I used <code>df.describe</code> and it shows that the year count is less than the others:<a href="https://i.stack.imgur.com/eDzRT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eDzRT.png" alt="enter image description here" /></a></p>
<p>So I thought that some values in Year are empty.
I tried doing <code>df.dropna()</code> but that didn't work.</p>
<p>I then tried printing the values of the column Year which were not numbers with this code (Probably not the best code but it works) along with the <code>type()</code>:</p>
<pre><code>with open("vgsales.csv", "r") as csv_file:
rows = csv_file.read().split("\n")
row_components = [row.split(",") for row in rows if len(row) > 0]
data_dict = {header:[] for header in row_components[0]}
for header_index, header in enumerate(row_components[0]):
print("header_index: ", header_index)
for row_index, row in enumerate(row_components[1:]):
data_dict[header].append(row[header_index])
for i in data_dict["Year"]:
if not i.isdigit():
print(i, type(i))
</code></pre>
<p>The output (same output repeated a lot):</p>
<pre><code>N/A <class 'str'>
</code></pre>
<p>So then I tried the answers I found in this <a href="https://stackoverflow.com/questions/18172851/deleting-dataframe-row-in-pandas-based-on-column-value">stackoverflow</a> question:
<code>df = df[df.Year != "N/A"]</code> and it didn't work either</p>
<p>I also tried <code>df = df.drop(df[(df.Year == "N/A")].index)</code> and it didn't work.</p>
<p>So then I thought: why don't I open it in Excel and see what values are there when it is not a year. Indeed it was <code>N/A</code>.</p>
<p><a href="https://i.stack.imgur.com/LPVfa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LPVfa.png" alt="enter image description here" /></a></p>
<p>Any ideas what I can do? I want to clean the data so that all the columns have the same count for a machine learning project.</p>
|
<p>First off, it's important to know why you're missing data, and to see if you can possibly impute rather than just drop.</p>
<p>If you still want to drop, you can use <code>df = df.dropna(how='any')</code>.</p>
<p>Excel shows "N/A" simply as its way of displaying missing data; it does not mean the cell literally contains the string <code>N/A</code> (an N, a slash and an A). In fact pandas already treats <code>N/A</code> as a missing value when reading the CSV, which is why comparing against the string <code>"N/A"</code> matches nothing. Instead, you can try <code>df = df[~df['Year'].isnull()]</code> as an alternative method for selecting non-null values.</p>
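<p>A minimal sketch (assuming the standard vgsales.csv layout): since pandas parses <code>N/A</code> cells as NaN by default, you only need to drop the rows whose <code>Year</code> is missing.</p>
<pre><code>import pandas as pd

df = pd.read_csv("vgsales.csv")          # "N/A" cells are read in as NaN by default
df = df.dropna(subset=["Year"])          # drop only the rows with a missing Year
df["Year"] = df["Year"].astype(int)      # optional: turn Year back into integers
</code></pre>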
|
python|pandas|dataframe
| 2
|
7,571
| 67,344,123
|
Get indexs and columns from a Dataframe that satisfy two conditions
|
<p>I'm new to Python and Pandas.</p>
<p>I'm trying to filter a string dataframe by two conditions, to get a list or dataframe with the indices and columns that satisfy both conditions. I get this dataframe from a spreadsheet where each cell is YES or NO.</p>
<pre><code>df = pd.DataFrame([['YES', 'YES', 'NO', 'NO'], ['NO', 'YES', 'NO', 'NO'], ['NO', 'NO', 'NO', 'NO'], ['YES', 'NO', 'NO', 'YES']],
index=['task1', 'task2', 'task3', 'task4'],
columns=['David', 'Carol', 'Tony', 'Anna'])
</code></pre>
<p>df</p>
<pre><code> David Carol Tony Anna
taks1 YES YES NO NO
task2 NO YES NO NO
task3 NO NO NO NO
task4 YES NO NO YES
</code></pre>
<p>I need to get something like this (two lists, dataframe, bidimensional array...):</p>
<pre><code>David task1
David task4
Carol task1
Carol task2
Anna task4
</code></pre>
<p>I have used loc, but I cannot extend the filter for all the columns:</p>
<pre><code>active = df.loc[lambda df1: df1['David'] == 'YES', :]
</code></pre>
<p>Rows and columns number are unknown when I read spreadsheet, therefore, I need to have a flexible solution for different tables size.</p>
|
<p>You can use pandas' <code>melt</code> to convert the data frame to the long format, and then apply the condition</p>
<pre><code>df_long = df.melt(value_vars=['David', 'Carol', 'Tony', 'Anna'], ignore_index=False).reset_index()
df_long.columns = ['Task', 'Name', 'Value']
print(df_long)
Task Name Value
0 task1 David YES
1 task2 David NO
2 task3 David NO
3 task4 David YES
4 task1 Carol YES
5 task2 Carol YES
6 task3 Carol NO
7 task4 Carol NO
8 task1 Tony NO
9 task2 Tony NO
10 task3 Tony NO
11 task4 Tony NO
12 task1 Anna NO
13 task2 Anna NO
14 task3 Anna NO
15 task4 Anna YES
df_long.loc[df_long['Value']=='YES', ['Name', 'Task']].reset_index(drop=True)
Name Task
0 David task1
1 David task4
2 Carol task1
3 Carol task2
4 Anna task4
</code></pre>
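<p>Since the number of columns is not known in advance, you can also avoid listing them: a sketch using <code>stack</code> (or <code>melt</code> without <code>value_vars</code>) works on whatever columns the spreadsheet happens to have:</p>
<pre><code>out = (df.stack()
         .rename_axis(['Task', 'Name'])
         .reset_index(name='Value'))
out = out.loc[out['Value'] == 'YES', ['Name', 'Task']].reset_index(drop=True)
print(out)
    Name   Task
0  David  task1
1  Carol  task1
2  Carol  task2
3  David  task4
4   Anna  task4
</code></pre>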
|
python|pandas|numpy
| 1
|
7,572
| 59,925,338
|
Is there a model.config file for tensorflow-serving to return all the models at the host
|
<p>I have followed the TensorFlow Serving instructions and, using Docker, could run 3 models by specifying their versions. But as the output at the host I get only the latest version of the model. Is there a model.config file that can help serve all three models at the host?</p>
|
<p>Figured it out!!
This is how you can do it</p>
<ol>
<li><p>Include a model_config file in the folder where the models are present:</p>
<pre><code>model_config_list: {
  config: {
    name: "half_plus_three",
    base_path: "/models/half_plus_three",
    model_platform: "tensorflow",
    model_version_policy: {all: {}}
  }
}
</code></pre>
</li>
<li><p>Run the command</p>
<pre><code>docker run -t --rm -p 8501:8501 \
-v "$(pwd)/models/:/models/" tensorflow/serving \
--model_config_file=/models/models.config \
--model_config_file_poll_wait_seconds=60
</code></pre>
</li>
</ol>
|
docker|tensorflow-serving
| 2
|
7,573
| 65,202,027
|
Convert large string with data frame like content into a data frame
|
<p>So I have a file which I can open through Python's read function, which returns one large string that essentially looks like a data frame, but is still a large string. For example it could look something like this:</p>
<pre><code>1609441 test.test1.test3 1/15.34 -1 100 622 669
160441 test.test1.test3 2/11.101 -1 100 140216 177363
16041 test2.test8.test6 2/15.34 -1 100 2791 2346
160441 test.test7.test5 2/15.34 1 100 Bin Any 5 1794 2346
1609441 test4.test4.test4 2/15.34 1 100 E Any 5 997 0
1642 test4.test3.test1 28.0.101 -1 100 5409155 10357332
</code></pre>
<p>If it were a real data frame, it would look like:</p>
<pre><code>1609441 test.test1.test3 1/15.34 -1 100 622 669
160441 test.test1.test3 2/11.101 -1 100 140216 177363
16041 test2.test8.test6 2/15.34 -1 100 2791 2346
160441 test.test7.test5 2/15.34 1 100 Bin A 5 1794 2346
1609441 test4.test4.test4 2/15.34 1 100 E A 5 997 0
1642 test4.test3.test1 28.0.101 -1 1 155 7332
</code></pre>
<p>So as can be seen the data varies a lot: some rows have 10 fields, some only have 7, and so on. Again, this is a large text string, and I have tried <code>read_csv</code> and <code>read_fwf</code>, but I haven't really succeeded.
Ideally it would just create a data frame with a fixed number of columns (I know the maximum number of columns), and where a row doesn't have a value, simply put <code>NaN</code> instead.</p>
<p>Can this be achieved in any way ?</p>
|
<p>I tried with <code>read_csv</code> and this looked like it worked:</p>
<pre><code>t = '''1609441 test.test1.test3 1/15.34 -1 100 622 669
160441 test.test1.test3 2/11.101 -1 100 140216 177363
16041 test2.test8.test6 2/15.34 -1 100 2791 2346
160441 test.test7.test5 2/15.34 1 100 Bin Any 5 1794 2346
1609441 test4.test4.test4 2/15.34 1 100 E Any 5 997 0
1642 test4.test3.test1 28.0.101 -1 100 5409155 10357332'''
with open('test.txt', 'w') as f:
f.write(t)
pd.read_csv('test.txt', delim_whitespace=True, names=['1', '2', '3' ,'4', '5', '6' ,'7' ,'8', '9', '10'])
</code></pre>
<p>Does that not work with the full dataset?</p>
<p><a href="https://i.stack.imgur.com/HV7yh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HV7yh.png" alt="Data frame image" /></a></p>
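<p>As a side note, since the content is already one large Python string, you don't necessarily need the temporary file; a sketch with <code>io.StringIO</code> (again padding to the known maximum of 10 columns) would be:</p>
<pre><code>import io
import pandas as pd

df = pd.read_csv(io.StringIO(t), delim_whitespace=True, header=None,
                 names=[str(i) for i in range(1, 11)])
</code></pre>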
|
python|pandas|dataframe
| 1
|
7,574
| 65,211,176
|
How to rewrite the node ids that stored in .mtx file
|
<p>I have a <code>.mtx</code> file that looks like below:</p>
<pre><code>0 435 1
0 544 1
1 344 1
2 410 1
2 471 1
</code></pre>
<p>This matrix has shape of <code>(1000, 1000)</code>.
As you can see, node ids starts at <code>0</code>. I want to change this to start at <code>1</code> instead of <code>0</code>.
In other words, I need to add <code>1</code> to all the numbers in the first and second columns that represent the node ids.</p>
<p>So I converted <code>.mtx</code> file to <code>.txt</code> file and tried to add <code>1</code> in each first and second columns.</p>
<p>and simply added <code>1</code> to each row like below</p>
<pre><code>import numpy as np
data_path = "my_data_path"
data = np.loadtxt(data_path, delimiter=' ', dtype='int')
for i in data:
print(data[i]+1)
</code></pre>
<p>and result was</p>
<pre><code>[ 1 436 2]
[ 1 545 2]
[ 2 345 2]
[ 3 411 2]
[ 3 472 2]
</code></pre>
<p>now I need to subtract <code>1</code> from third column, but I have no idea how to implement that.</p>
<p>Can someone help me to do that?</p>
<p>Or if there's any way to complete my goal way more easier, please tell me. Thank you in advance.</p>
|
<p>Why wouldn't you increment only the two id columns instead of the whole array?</p>
<pre><code>data[:, :2] += 1
</code></pre>
<p>You may want to have a look at <a href="https://numpy.org/doc/stable/reference/arrays.indexing.html" rel="nofollow noreferrer">indexing in NumPy</a>.</p>
<p>Additionally, I don't think the loop in your code ever worked:</p>
<pre><code>for i in data:
print(data[i]+1)
</code></pre>
<p>You are indexing with values from the array which generally is wrong and is surely wrong in this case:</p>
<blockquote>
<p>IndexError: index 435 is out of bounds for axis 0 with size 5</p>
</blockquote>
<p>You could correct it to print the whole matrix:</p>
<pre><code>print(data + 1)
</code></pre>
<p>Giving:</p>
<pre><code>[[ 1 436 2]
[ 1 545 2]
[ 2 345 2]
[ 3 411 2]
[ 3 472 2]]
</code></pre>
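<p>If you then need to write the adjusted matrix back out as whitespace-separated integers, something like this should work:</p>
<pre><code>np.savetxt("my_data_out.txt", data, fmt="%d", delimiter=" ")
</code></pre>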
|
python|python-3.x|numpy
| 0
|
7,575
| 65,390,947
|
How to count rows based on multiple column conditions using pandas?
|
<p>How can I count csv file rows with pandas using <code>&</code> and <code>or</code> condition?</p>
<p>In the below code I want to count all rows that have True/False=FALSE and status = OK, and have '+' value in any of those columns openingSoon, underConstruction, comingSoon.</p>
<p>I've tried:</p>
<pre><code>checkOne = df['id'].loc[(df['True/False'] == 'FALSE') & (df['status'] == 'OK') & (df['comingSoon'] == '+') or (df['openingSoon'] == '+') or (df['underConstruction'] == '+')].count()
</code></pre>
<p>error:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.9/site-packages/pandas/core/generic.py", line 1329, in __nonzero__
raise ValueError(
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>Use <code>|</code> for bitwise <code>or</code>, and to count the <code>True</code> values no filtering is necessary: just call <code>sum</code> on the boolean mask. Note that the three "+" conditions need their own set of parentheses, otherwise operator precedence ties them to the whole expression instead of forming a single "any of these columns is '+'" group.</p>
<p>Also, testing a boolean column with <code>df['True/False'] == False</code> can be simplified to <code>~df['True/False']</code>.</p>
<pre><code>checkOne = (~df['True/False'] &
            (df['status'] == 'OK') &
            ((df['comingSoon'] == '+') |
             (df['openingSoon'] == '+') |
             (df['underConstruction'] == '+'))).sum()
</code></pre>
<p>If <code>True/False</code> are strings <code>TRUE/FALSE</code> use:</p>
<pre><code>checkOne = ((df['True/False'] == 'FALSE') &
            (df['status'] == 'OK') &
            ((df['comingSoon'] == '+') |
             (df['openingSoon'] == '+') |
             (df['underConstruction'] == '+'))).sum()
</code></pre>
|
python|pandas
| 3
|
7,576
| 65,387,643
|
Obtaining an empty plot when plotting cost vs epoch for a multivariate linear regression model
|
<p>I am learning machine learning and trying to implement multivariate linear regression on a car price dataset to predict the price of cars in the future.</p>
<p><a href="https://github.com/ua9612/JUPYTER/blob/main/car_price.csv" rel="nofollow noreferrer">Here is my dataset</a></p>
<p><a href="https://github.com/ua9612/JUPYTER/blob/main/CAR_PRICE.ipynb" rel="nofollow noreferrer">Link to my jupyter notebook code</a></p>
<p>Here is my code</p>
<pre><code> In [2]:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
In [3]:
temp = pd.read_csv('car_price.csv')
In [4]:
temp.columns
Out[4]:
Index(['name', 'year', 'selling_price', 'km_driven', 'fuel', 'seller_type',
'transmission', 'owner', 'mileage', 'engine', 'max_power', 'torque',
'seats'],
dtype='object')
In [5]:
data = temp[['year', 'selling_price', 'km_driven', 'fuel', 'seller_type',
'transmission', 'owner', 'mileage', 'engine', 'max_power', 'torque',
'seats'
]]
In [6]:
data['Current_Year'] = 2020
In [7]:
data.head()
Out[7]:
year selling_price km_driven fuel seller_type transmission owner mileage engine max_power torque seats Current_Year
0 2014 450000 145500 Diesel Individual Manual First Owner 23.4 kmpl 1248 CC 74 bhp 190Nm@ 2000rpm 5.0 2020
1 2014 370000 120000 Diesel Individual Manual Second Owner 21.14 kmpl 1498 CC 103.52 bhp 250Nm@ 1500-2500rpm 5.0 2020
2 2006 158000 140000 Petrol Individual Manual Third Owner 17.7 kmpl 1497 CC 78 bhp 12.7@ 2,700(kgm@ rpm) 5.0 2020
3 2010 225000 127000 Diesel Individual Manual First Owner 23.0 kmpl 1396 CC 90 bhp 22.4 kgm at 1750-2750rpm 5.0 2020
4 2007 130000 120000 Petrol Individual Manual First Owner 16.1 kmpl 1298 CC 88.2 bhp 11.5@ 4,500(kgm@ rpm) 5.0 2020
In [8]:
data['# Years'] = data['Current_Year'] - data['year']
In [9]:
to_drop = ['Current_Year','year','torque','max_power','seller_type','owner']
data.drop(to_drop, inplace = True, axis = 1)
In [10]:
data.head()
Out[10]:
selling_price km_driven fuel transmission mileage engine seats # Years
0 450000 145500 Diesel Manual 23.4 kmpl 1248 CC 5.0 6
1 370000 120000 Diesel Manual 21.14 kmpl 1498 CC 5.0 6
2 158000 140000 Petrol Manual 17.7 kmpl 1497 CC 5.0 14
3 225000 127000 Diesel Manual 23.0 kmpl 1396 CC 5.0 10
4 130000 120000 Petrol Manual 16.1 kmpl 1298 CC 5.0 13
In [11]:
data['engine']= data['engine'].str.replace('[^\d.]', '',regex = True).astype(float)
In [12]:
data['mileage'] = data['mileage'].str.replace('[^\d.]', '',regex = True).astype(float)
In [13]:
data.head()
Out[13]:
selling_price km_driven fuel transmission mileage engine seats # Years
0 450000 145500 Diesel Manual 23.40 1248.0 5.0 6
1 370000 120000 Diesel Manual 21.14 1498.0 5.0 6
2 158000 140000 Petrol Manual 17.70 1497.0 5.0 14
3 225000 127000 Diesel Manual 23.00 1396.0 5.0 10
4 130000 120000 Petrol Manual 16.10 1298.0 5.0 13
In [14]:
data.replace(to_replace = ['Diesel','Petrol','LPG','CNG'],value=[1,2,3,4],inplace = True)
In [15]:
data.head()
Out[15]:
selling_price km_driven fuel transmission mileage engine seats # Years
0 450000 145500 1 Manual 23.40 1248.0 5.0 6
1 370000 120000 1 Manual 21.14 1498.0 5.0 6
2 158000 140000 2 Manual 17.70 1497.0 5.0 14
3 225000 127000 1 Manual 23.00 1396.0 5.0 10
4 130000 120000 2 Manual 16.10 1298.0 5.0 13
In [16]:
data.replace(to_replace = ['Manual','Automatic'],value=[1.0,2.0],inplace = True)
In [17]:
data.head()
Out[17]:
selling_price km_driven fuel transmission mileage engine seats # Years
0 450000 145500 1 1.0 23.40 1248.0 5.0 6
1 370000 120000 1 1.0 21.14 1498.0 5.0 6
2 158000 140000 2 1.0 17.70 1497.0 5.0 14
3 225000 127000 1 1.0 23.00 1396.0 5.0 10
4 130000 120000 2 1.0 16.10 1298.0 5.0 13
In [18]:
data.head()
Out[18]:
selling_price km_driven fuel transmission mileage engine seats # Years
0 450000 145500 1 1.0 23.40 1248.0 5.0 6
1 370000 120000 1 1.0 21.14 1498.0 5.0 6
2 158000 140000 2 1.0 17.70 1497.0 5.0 14
3 225000 127000 1 1.0 23.00 1396.0 5.0 10
4 130000 120000 2 1.0 16.10 1298.0 5.0 13
In [ ]:
In [21]:
data = (data - data.mean())/data.std()
X = data.iloc[:,1:8]
ones = np.ones([X.shape[0],1])
X = np.concatenate((ones,X),axis=1)
y = data.iloc[:,0:1].values
theta = np.zeros([1,8])
print(X)
def computeCost(X,y,theta):
tobesummed = np.power(((X @ theta.T)-y),2)
return np.sum(tobesummed)/(2 * len(X))
def gradientDescent(X,y,theta,iters,alpha):
cost = np.zeros(iters)
for i in range(iters):
theta = theta - (alpha/len(X)) * np.sum(X * (X @ theta.T - y), axis=0)
cost[i] = computeCost(X, y, theta)
return theta,cost
alpha = 0.01
iters = 1000
g,cost = gradientDescent(X,y,theta,iters,alpha)
print(g)
finalCost = computeCost(X,y,g)
print(finalCost)
fig, ax = plt.subplots()
ax.plot(np.arange(iters), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
[[ 1. 1.33828022 -0.86972865 ... -0.41797619 -0.43426926
-0.04846121]
[ 1. 0.88735626 -0.86972865 ... 0.07813794 -0.43426926
-0.04846121]
[ 1. 1.24102211 0.95315801 ... 0.07615349 -0.43426926
1.92965648]
...
[ 1. 0.88735626 -0.86972865 ... -0.41797619 -0.43426926
1.18786235]
[ 1. -0.79255652 -0.86972865 ... -0.12427662 -0.43426926
0.1988035 ]
[ 1. -0.79255652 -0.86972865 ... -0.12427662 -0.43426926
0.1988035 ]]
[[nan nan nan nan nan nan nan nan]]
nan
Out[21]:
Text(0.5, 1.0, 'Error vs. Training Epoch')
In [ ]:
</code></pre>
<p>When plotting cost vs epoch I am getting an empty graph, and when printing the cost values I am getting 'nan' (missing data).</p>
<p>I can't seem to understand where I am going wrong.</p>
|
<p>Your data had 663 null values in total and hence the error,</p>
<pre><code>data.isnull().values.sum()
663
</code></pre>
<p>Did a mean imputation and replaced all NaNs.</p>
<pre><code>data = data.fillna(data.mean())
data.isnull().values.sum()
0
</code></pre>
<p>Then executed the remaining code,</p>
<pre><code>alpha = 0.01
iters = 1000
g,cost = gradientDescent(X,y,theta,iters,alpha)
print(g)
[[ 3.37617073e-16 -1.03407822e-01 -7.06228959e-02 3.63774873e-01
2.51480688e-03 4.51868433e-01 -2.02133329e-01 -2.67955488e-01]]
finalCost = computeCost(X,y,g)
print(finalCost)
0.21914950571622366
</code></pre>
<p>Plot:</p>
<pre><code>fig, ax = plt.subplots()
ax.plot(np.arange(iters), cost, 'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
</code></pre>
<p><a href="https://i.stack.imgur.com/ELgHn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ELgHn.png" alt="Text(0.5, 1.0, 'Error vs. Training Epoch')" /></a></p>
<p><strong>Note:</strong> I don't think you should scale the entire dataframe as below:</p>
<pre><code>data = (data - data.mean())/data.std()
</code></pre>
<p>The target variable <code>selling_price</code> should be excluded from scaling.</p>
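<p>A sketch of scaling only the feature columns while leaving the target untouched (column names as in your frame):</p>
<pre><code>features = data.columns.drop('selling_price')
data[features] = (data[features] - data[features].mean()) / data[features].std()
</code></pre>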
|
python|pandas|numpy|machine-learning|linear-regression
| 0
|
7,577
| 64,132,973
|
multiple regex condition after certain character
|
<p>I would like to write a regex that returns a boolean value if it matches. I want to check the characters <em>after</em> <code>@</code> (there could be many of them). For example I want to check if the email uses a <code>banana</code> or <code>apple</code>
domain.
Sample:</p>
<p><code>df.head()</code></p>
<pre><code>EMAIL
data1@gmail.com
data2@yahoo.com
data3@banana.com
data4@apple.com
apple@gmail.com
</code></pre>
<p>I tried this
<code>df["sus"] = df["email"].str.match(r'([^@]*banana|apple)')</code>
but it also catches matches <em>before</em> the <code>@</code></p>
<p>result I got</p>
<pre><code>SUS
False
False
True
True
True
</code></pre>
<p>result I want</p>
<pre><code>SUS
False
False
True
True
False
</code></pre>
|
<p>You can use <code>.str.contains</code> because <code>.str.match</code> only searches for a match at the start of a string (it is based on <code>re.match</code>). Also, <code>[^@]*</code> matches zero or more chars other than <code>@</code>, so nothing in your pattern forces <code>banana</code> or <code>apple</code> to appear right after the <code>@</code>; in particular, the <code>apple</code> alternative has no prefix at all, which is why <code>apple@gmail.com</code> matches.</p>
<p>You can use</p>
<pre><code>df["sus"] = df["email"].str.contains(r'@(?:banana|apple)\b')
</code></pre>
<p>See the <a href="https://regex101.com/r/I0otBW/1" rel="nofollow noreferrer">regex demo</a></p>
<p><em>Details</em>:</p>
<ul>
<li><code>@</code> - the <code>@</code> char</li>
<li><code>(?:banana|apple)</code> - a non-capturing group matching either <code>banana</code> or <code>apple</code></li>
<li><code>\b</code> - word boundary</li>
</ul>
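<p>Applied to the sample column this gives the expected result (column name taken from your sample data):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"EMAIL": ["data1@gmail.com", "data2@yahoo.com",
                             "data3@banana.com", "data4@apple.com",
                             "apple@gmail.com"]})
df["sus"] = df["EMAIL"].str.contains(r"@(?:banana|apple)\b")
print(df["sus"].tolist())   # [False, False, True, True, False]
</code></pre>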
|
python|regex|pandas|dataframe
| 2
|
7,578
| 63,897,642
|
How can words with misspellings be corrected in a data frame?
|
<p>I have a data frame and I want the wrong words in it to be corrected. First I delete the characters that are repeated more than twice in a word, and then I apply spell correction on it. For the first part, I can only apply changes to strings; I want to be able to apply it to the data frame as well. How can I do this?</p>
<pre><code>text='Aye concreeete steel and plastic housesss will keep us alll safe and flourishing ?'
import re
def reduce_lengthening(text):
pattern = re.compile(r"(.)\1{2,}")
return pattern.sub(r"\1\1", text)
print('string is: ',reduce_lengthening(text))
</code></pre>
<p>out put string is:</p>
<p><code>Aye concreete steel and plastic housess will keep us all safe and flourishing ?</code></p>
<p>How can I apply this function to the following data frame?</p>
<pre><code>text=['dear pados wali anttty , can just keep your thoughts and nose out business raising_hands thaaaank .',
'but least did not call him losers suckers , juuust was did not want the cemetery and honor them , big deal.',
'some hunters are just entitled , you are lucky have them.',
'thin corrrect time that.. only one person could save from this crisis .. correct sarthak ? ?',
'thereee also the wuhan virus. that totally different ?',
'does nooot every woman hav adam apple amp; flat hairy chest ?']
import pandas as pd
df=pd.DataFrame()
df['Text']=text
</code></pre>
|
<p>You can do it with the <code>apply</code> function as below:</p>
<pre><code>df["Text"] = df["Text"].apply(reduce_lengthening)
</code></pre>
<p>Or, before adding this column (using <code>df['Text']=text</code>), you can pass each text element to <code>reduce_lengthening</code> in a list comprehension and store the resulting list:</p>
<pre><code>df["Text"] = [reduce_lengthening(x) for x in text]
</code></pre>
|
python|pandas
| 1
|
7,579
| 63,885,517
|
How to replace whole cell in pandas, with values from other dataframe and rest to be set as 1?
|
<p>I have two DataFrames.
df1:</p>
<pre><code>A | B | C
-----|---------|---------|
25zx | b(50gh) | |
50tr | a(70lc) | c(50gh) |
</code></pre>
<p>df2:</p>
<pre><code> A | B
-----|-----
b | 1.2
a | 3.5
c | 6
</code></pre>
<p>I want to replace values in df1. The row that I'm comparing is df2['A'], but the value that I want to put into df1 is the value from the row df2['B']. Note my goal is for the new value to replace the whole cell, and cells with nothing to replace should be set to 1.</p>
<p>So the final table would look like:</p>
<p>df3:</p>
<pre><code>A | B | C
-----|---------|---------|
1 | 1.2 | 1 |
1 | 3.5 | 6 |
</code></pre>
|
<p>Use <code>replace</code> with <code>regex=True</code>, then apply <code>to_numeric</code> followed by <code>fillna</code>:</p>
<pre><code>df3 = df1.replace(df2.set_index('A').B,regex=True)
df3 = df3.apply(pd.to_numeric,errors='coerce').fillna(1)
df3
Out[123]:
A B C
0 1.0 1.2 1.0
1 1.0 3.5 6.0
</code></pre>
|
python|pandas|dataframe
| 3
|
7,580
| 64,135,228
|
numpy 2d: How to get the index of the max element in the first column for only the allowed value in the second column
|
<p>Help me find a high-performance way to solve this problem:
I have a result from a neural network (answers_weight), a category for each answer (same length) and the allowed categories for the current request:</p>
<pre><code>answers_weight = np.asarray([0.9, 3.8, 3, 0.6, 0.7, 0.99]) # ~3kk items
answers_category = [1, 2, 1, 5, 3, 1] # same size as answers_weight: ~3kk items
categories_allowed1 = [1, 5, 8]
res = np.stack((answers_weight, answers_category), axis=1)
</code></pre>
<p>I need to know the index (in the <em>answers_weight</em> array) of the max element, but skipping the categories that are not allowed (2, 3).</p>
<p>In the end, the index must be = <strong>2</strong> ("3.0", because "3.8" must be skipped as its category is not allowed)</p>
|
<p>The easiest way would be to use numpy's masked_arrays to mask your weights according to allowed_categories and then find <code>argmax</code>:</p>
<pre><code>np.ma.masked_where(~np.isin(answers_category,categories_allowed1),answers_weight).argmax()
#2
</code></pre>
<p>Another way of doing it using masks (this one assumes unique max weight):</p>
<pre><code>mask = np.isin(answers_category, categories_allowed1)
np.argwhere(answers_weight==answers_weight[mask].max())[0,0]
#2
</code></pre>
|
python|arrays|numpy|max|masked-array
| 3
|
7,581
| 64,062,297
|
I just need tensorflow function equivalent to cv2.threshold nad np.std to use in tensorflow 1.8
|
<p>I need to convert the following line into its TensorFlow equivalent in TF version 1.8, but I cannot find appropriate functions equivalent to cv2.threshold and np.std in TF 1.8:</p>
<pre><code>ret, mask = cv2.threshold(mask, tf.reduce_mean(mask) + 1.2*np.std(mask), 255, cv2.THRESH_BINARY)
</code></pre>
<p>The output of the line should be a binary mask</p>
|
<p>You can do that like this:</p>
<pre class="lang-py prettyprint-override"><code>thr = tf.math.reduce_mean(mask) + 1.2 * tf.math.reduce_std(mask)
mask = tf.cast(mask > thr, tf.uint8) * 255
</code></pre>
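<p>Note: I am not certain <code>tf.math.reduce_std</code> is available as far back as TF 1.8; if it is missing in your version, the standard deviation can be built from <code>reduce_mean</code> directly (assuming <code>mask</code> is already a float tensor):</p>
<pre class="lang-py prettyprint-override"><code>mean = tf.reduce_mean(mask)
std = tf.sqrt(tf.reduce_mean(tf.square(mask - mean)))
thr = mean + 1.2 * std
mask = tf.cast(mask > thr, tf.uint8) * 255
</code></pre>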
|
tensorflow
| 0
|
7,582
| 46,673,882
|
Tensorflow tf.squared_difference outputs unexpected shape
|
<p>Newbie here, I am sorry if this question is silly but I couldn't find anything about it online. I am getting an unexpected shape for the output of <code>tf.squared_difference</code>. I would expect to obtain a Tensor with <code>shape=(100, ?)</code> as the loss from the following snippet</p>
<pre><code>[print("Logits",logits,"#Labels",labels,"LOSS",tf.squared_difference(labels,logits)) for logits, labels in zip(logits_series,labels_series)]
</code></pre>
<p>However it produces <code>(100,100)</code> Loss </p>
<blockquote>
<p>Logits Tensor("add_185:0", shape=(100, 1), dtype=float32) #Labels Tensor("unstack_29:0", shape=(100,), dtype=float32) LOSS Tensor("SquaredDifference_94:0", shape=(100, 100), dtype=float32)
Logits Tensor("add_186:0", shape=(100, 1), dtype=float32) #Labels Tensor("unstack_29:1", shape=(100,), dtype=float32) LOSS Tensor("SquaredDifference_95:0", shape=(100, 100), dtype=float32)</p>
</blockquote>
<p>I have tested another example with the following code and it gives the expected output shape.</p>
<pre><code>myTESTX = tf.placeholder(tf.float32, [100, None])
myTESTY = tf.placeholder(tf.float32, [100, 1])
print("Test diff X-Y",tf.squared_difference(myTESTX,myTESTY) )
print("Test diff Y-X",tf.squared_difference(myTESTY,myTESTX) )
</code></pre>
<blockquote>
<p>Test diff X-Y Tensor("SquaredDifference_92:0", shape=(100, ?), dtype=float32)
Test diff Y-X Tensor("SquaredDifference_93:0", shape=(100, ?), dtype=float32)</p>
</blockquote>
<p><strong>I cannot work out why these two snippets produce different output shapes.</strong></p>
|
<p>There's a slight difference between your first example (with <code>logits</code> and <code>labels</code>) and second example (with <code>myTESTX</code> and <code>myTESTY</code>). <code>logits</code> has the same shape as <code>myTESTY</code>: <code>(100, 1)</code>. However, <code>labels</code> has shape <code>(100,)</code> (which is NOT a dynamic shape), but <code>myTESTX</code> has shape <code>(100, ?)</code>.</p>
<p>In the first case (<code>logits</code> and <code>labels</code>), input shapes are <code>(100,)</code> and <code>(100,1)</code>, and tensorflow uses broadcasting. Neither input shape is dynamic, so your output shape is static: <code>(100, 100)</code> due to broadcasting.</p>
<p>In the second case (<code>myTESTX</code> and <code>myTESTY</code>), input shapes are <code>(100, ?)</code> and <code>(100, 1)</code>. The first input shape is dynamic, so your output shape is dynamic: <code>(100, ?)</code>.</p>
<p>As a simpler, illustrative example in numpy (which uses the same broadcasting), consider this:</p>
<pre><code>import numpy as np
x = np.arange(10) # Shape: (10,)
y = np.arange(10).reshape(10,1) # Shape: (10, 1)
difference = x-y # Shape: (10, 10)
</code></pre>
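<p>If the goal is an element-wise loss rather than the broadcasted <code>(100, 100)</code> tensor, one sketch is to make the two shapes match before calling <code>tf.squared_difference</code>:</p>
<pre><code># labels: shape (100,)   logits: shape (100, 1)
loss = tf.squared_difference(tf.expand_dims(labels, -1), logits)   # shape (100, 1)
# or equivalently
loss = tf.squared_difference(labels, tf.squeeze(logits, axis=-1))  # shape (100,)
</code></pre>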
|
python|machine-learning|tensorflow|deep-learning
| 2
|
7,583
| 46,835,607
|
Pandas apply works on individual columns as expected, but not entire dataframe
|
<p>I have a dataframe that looks something like this:</p>
<pre><code>pd.DataFrame({'state':['AL','AL'],'statefp':[1.0,1.0]})
state statefp
0 AL 1.0
1 AL 1.0
</code></pre>
<p>I want to turn the entire dataframe into type str and am using the <code>.apply</code> method. What I want to do is if the item is of type float, I want to store it as a string <em>integer</em>, and if it's already a string I want to lowercase it. I've tried this:</p>
<pre><code>df.apply(lambda x: '{:.0f}'.format(x) if isinstance(x,float) else x.astype(str).str.lower())
</code></pre>
<p>Which outputs this (not what I wanted since <code>statefp</code> is stored as a string <em>float</em>): </p>
<pre><code> state statefp
0 al 1.0
1 al 1.0
</code></pre>
<p>However, when I use the same apply on just that column, it works fine: </p>
<pre><code>>>>df.statefp.apply(lambda x: '{:.0f}'.format(x) if isinstance(x,float) else x.astype(str).str.lower())
0 1
1 1
</code></pre>
<p>Am I missing something about how <code>.apply</code> is working on the entire dataframe? I've also tried setting <code>axis='columns'</code> argument of <code>.apply</code>, but that didn't work either. </p>
<p>Also, I am open to trying other code, not just <code>.apply</code>.</p>
|
<p>When <code>.apply</code> is called on the whole DataFrame, the lambda receives each <em>column</em> as a Series rather than individual scalars, so <code>isinstance(x, float)</code> is never True and every column falls into the string branch. Try testing the column's dtype instead:</p>
<pre><code>df.apply(lambda x: x.astype(int).astype(str) if x.dtype.kind == 'f' else x.astype(str).str.lower())
</code></pre>
<p>Output:</p>
<pre><code> state statefp
0 al 1
1 al 1
df.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2 entries, 0 to 1
Data columns (total 2 columns):
state 2 non-null object
statefp 2 non-null object
dtypes: int32(1), object(1)
memory usage: 104.0+ bytes
</code></pre>
|
python|pandas|apply
| 0
|
7,584
| 46,784,822
|
Low-pass Chebyshev type-I filter with Scipy
|
<p>I am reading a paper, trying to reproduce the results of the paper. In this paper, they use a low-pass Chebyshev type-I filter on the raw data. And they give those parameters. </p>
<p>Sampling frequency = 32Hz, Fcut=0.25Hz, Apass = 0.001dB, Astop = -100dB, Fstop = 2Hz, Order of the filter = 5. I found some materials help me understand these parameters</p>
<p><a href="https://i.stack.imgur.com/k4bDv.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k4bDv.gif" alt="enter image description here"></a></p>
<p>But when I take a look at the scipy.signal.cheby1. The parameters required by this function are different.</p>
<pre><code>cheby1(N, rp, Wn, btype='low', analog=False, output='ba')
</code></pre>
<p>Here N:The order of the filter; btype: type of filter, in my case, it is 'lowpass'; analog=False, because the data is sampled, so it is digital; output: specifies the type of output. But I am not sure about rp, Wn. </p>
<p>In the documentation, it says:</p>
<p>rp : float
The maximum ripple allowed below unity gain in the passband. Specified in decibels, as a positive number.</p>
<p>Wn : array_like
A scalar or length-2 sequence giving the critical frequencies. For Type I filters, this is the point in the transition band at which the gain first drops below -rp. For digital filters, Wn is normalized from 0 to 1, where 1 is the Nyquist frequency, pi radians/sample. (Wn is thus in half-cycles / sample.) For analog filters, Wn is an angular frequency (e.g. rad/s).</p>
<p>According to this question:
<a href="https://stackoverflow.com/questions/13740348/how-to-apply-a-filter-to-a-signal-in-python?noredirect=1&lq=1">How To apply a filter to a signal in python</a></p>
<p>I know how I can use the filter. But I don't know how to create a filter which has the same parameters as mentioned above. I don't know how to convert these parameters and provide them to the function in Scipy.</p>
|
<p>Take a look at <a href="https://en.wikipedia.org/wiki/Chebyshev_filter#Type_I_Chebyshev_filters" rel="nofollow noreferrer">the wikipedia page on the Type I Chebyshev filter</a>. Note that your plot illustrates the characteristics of a general filter. A lowpass Type I Chebyshev filter, however, has no ripple in the stop band.</p>
<p>You have three available parameters for the design of a Type I Chebyshev filter: the filter order, the ripple factor, and the cutoff frequency. These are the first three parameters of <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.cheby1.html" rel="nofollow noreferrer"><code>scipy.signal.cheby1</code></a>:</p>
<ul>
<li>The first argument of <code>cheby1</code> is the order of the filter.</li>
<li>The second argument, <code>rp</code>, corresponds to δ in the wikipedia page, and is apparently what you called <code>Apass</code>.</li>
<li><p>The third argument is <code>wn</code>, the cutoff frequency expressed as a fraction of the Nyquist frequency. In your case, you could write something like</p>
<pre><code>fs = 32 # Sample rate (Hz)
fcut = 0.25 # Desired filter cutoff frequency (Hz)
# Cutoff frequency relative to the Nyquist
wn = fcut / (0.5*fs)
</code></pre></li>
</ul>
<p>Once those three parameters are chosen, all the other characteristics
(e.g. transition band width, Astop, Fstop, etc) are determined. So it appears that the specification that you give, "Sampling frequency = 32Hz, Fcut=0.25Hz, Apass = 0.001dB, Astop = -100dB, Fstop = 2Hz, Order of the filter = 5", are not compatible with a Type I Chebyshev filter. In particular, I get a gain of approximately -78 dB at 2 Hz.
(If you increase the order to 6, then the gain at 2 Hz is approximately -103.)</p>
<p>Here's a complete script, followed by the plot that it generates. The plot shows just the pass band, but you can change the arguments of the <code>xlim</code> and <code>ylim</code> functions to see more.</p>
<pre><code>import numpy as np
from scipy.signal import cheby1, freqz
import matplotlib.pyplot as plt
# Sampling parameters
fs = 32 # Hz
# Desired filter parameters
order = 5
Apass = 0.001 # dB
fcut = 0.25 # Hz
# Normalized frequency argument for cheby1
wn = fcut / (0.5*fs)
b, a = cheby1(order, Apass, wn)
w, h = freqz(b, a, worN=8000)
plt.figure(1)
plt.plot(0.5*fs*w/np.pi, 20*np.log10(np.abs(h)))
plt.axvline(fcut, color='r', alpha=0.2)
plt.plot([0, fcut], [-Apass, -Apass], color='r', alpha=0.2)
plt.xlim(0, 0.3)
plt.xlabel('Frequency (Hz)')
plt.ylim(-5*Apass, Apass)
plt.ylabel('Gain (dB)')
plt.grid()
plt.title("Chebyshev Type I Lowpass Filter")
plt.tight_layout()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/AlOBu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AlOBu.png" alt="plot"></a></p>
|
numpy|filter|scipy|signal-processing|data-processing
| 4
|
7,585
| 63,028,927
|
Creating Orange table from numpy array
|
<p>I am trying to create an Orange table manually and am having some issues.</p>
<p>My code:</p>
<pre><code>new_domain = Domain([
ContinuousVariable("NAME"),
ContinuousVariable("AGE"),
DiscreteVariable("BLOOD TYPE", list(["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]))
])
data = np.array([
["Joe", "25", "B-"],
["Marc", "30", "AB+"],
["Martin", "28", "O-"]
], dtype=object)
orangeTable = Table.from_numpy(new_domain, X=data)
</code></pre>
<p>However, I am getting this error:</p>
<pre><code>ValueError: could not convert string to float: 'Joe'
</code></pre>
<p>I don't understand why it tries this conversion; what is wrong?
I am just beginning, so not everything is clear at this point ...</p>
|
<p>The error message is telling you everything you need to know:</p>
<blockquote>
<p>ValueError: could not convert string to float: 'Joe'</p>
</blockquote>
<p>The problem is that the <code>X</code> array of an Orange <code>Table</code> must be numeric: <code>from_numpy</code> tries to convert every cell to a float (continuous values directly, discrete values as value indices). Here you are trying to pass raw strings such as 'Joe' and 'B-', which cannot be converted, hence the error. Declaring NAME as a <code>ContinuousVariable</code> is also not what you want, since a name is not a number.</p>
<p>If you would like to create a "matrix"-like object such as the one above, mixing string and numeric columns, the simplest route is to construct a <strong>Pandas DataFrame</strong> instead.</p>
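<p>If you do want to stay with an Orange <code>Table</code>, a rough sketch (hedged: please check the details against your Orange 3 version) is to keep <code>X</code> purely numeric, encode the blood type as value indices and move the name into a meta <code>StringVariable</code>:</p>
<pre><code>import numpy as np
from Orange.data import Domain, ContinuousVariable, DiscreteVariable, StringVariable, Table

blood = ["A+", "A-", "B+", "B-", "AB+", "AB-", "O+", "O-"]
domain = Domain(
    [ContinuousVariable("AGE"), DiscreteVariable("BLOOD TYPE", values=blood)],
    metas=[StringVariable("NAME")],
)
X = np.array([[25, blood.index("B-")],
              [30, blood.index("AB+")],
              [28, blood.index("O-")]], dtype=float)
metas = np.array([["Joe"], ["Marc"], ["Martin"]], dtype=object)
orangeTable = Table.from_numpy(domain, X=X, metas=metas)
</code></pre>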
|
numpy-ndarray|orange
| 1
|
7,586
| 62,938,291
|
How can I access the data generated by autocorrelation_plot?
|
<p>I plotted the autocorrelation line using the function <code>autocorrelation_plot</code> in Pandas and would like to access the data it generates.</p>
<p>I am basically trying to find out (locate) the lag points in my dataset.</p>
<p>How can this be achieved?</p>
<p><a href="https://i.stack.imgur.com/vgU9u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vgU9u.png" alt="enter image description here" /></a></p>
|
<p>The following should work:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
data = np.array(range(5)) * 10
ax = pd.plotting.autocorrelation_plot(data)
</code></pre>
<p><a href="https://i.stack.imgur.com/XnDs3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XnDs3.png" alt="enter image description here" /></a></p>
<p><code>ax</code> is an <code>AxesSubplot</code></p>
<pre><code><matplotlib.axes._subplots.AxesSubplot at 0x7f8b52758470>
</code></pre>
<p>which has an attribute called <code>lines</code></p>
<pre><code>[<matplotlib.lines.Line2D at 0x7f8b51c30f28>,
<matplotlib.lines.Line2D at 0x7f8b51c39390>,
<matplotlib.lines.Line2D at 0x7f8b51c397b8>,
<matplotlib.lines.Line2D at 0x7f8b51c39ba8>,
<matplotlib.lines.Line2D at 0x7f8b51c39f98>,
<matplotlib.lines.Line2D at 0x7f8b51c423c8>]
</code></pre>
<p>If you now loop through these:</p>
<pre><code>for li in ax.lines:
print(li.get_xydata())
</code></pre>
<p>you obtain</p>
<pre><code>[[0. 1.15194588]
[1. 1.15194588]]
[[0. 0.87652254]
[1. 0.87652254]]
[[0. 0.]
[1. 0.]]
[[ 0. -0.87652254]
[ 1. -0.87652254]]
[[ 0. -1.15194588]
[ 1. -1.15194588]]
[[ 1. 0.4]
[ 2. -0.1]
[ 3. -0.4]
[ 4. -0.4]
[ 5. 0. ]]
</code></pre>
<p>So, as we can see:</p>
<pre><code>xydata = ax.lines[-1].get_xydata()
array([[ 1. , 0.4],
[ 2. , -0.1],
[ 3. , -0.4],
[ 4. , -0.4],
[ 5. , 0. ]])
</code></pre>
<p>If you want the data separately, you can do:</p>
<pre><code>xdata = ax.lines[-1].get_xdata()
# array([1, 2, 3, 4, 5])
ydata = ax.lines[-1].get_ydata()
# array([ 0.4, -0.1, -0.4, -0.4, 0. ])
</code></pre>
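<p>The horizontal lines in the plot are the 95% and 99% confidence bands (roughly <code>z / sqrt(n)</code>), so one way to locate the "significant" lags from this data would be:</p>
<pre><code>n = len(data)
conf95 = 1.96 / np.sqrt(n)
significant_lags = xdata[np.abs(ydata) > conf95]
</code></pre>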
|
python|pandas|plot
| 0
|
7,587
| 63,238,061
|
Reading excel file with pandas and printing it for inserting it in http GET statement for Rest-API
|
<p>I want to read each line of an Excel file (.xlsx) in the column called 'ABC'. There are 4667 lines and in each line there is a string.
I want to print each string, but it does not work.</p>
<pre><code>import requests
import pandas as pd
get_all_ABC = pd.read_excel('C:\Users\XXX\XXX2\XXX3\table.xlsx', header = 0)
row_iterator = get_all_ABC.iterrows()
_, last = row_iterator.__next__()
for i, row in row_iterator:
    r = requests.get(row["ABC"])
    r = requests.get(last["ABC"])
    last = row
data = (r.text)
print ((r.text))
</code></pre>
|
<p>Why are you using the requests library? That is for making HTTP requests. Also, it's almost always bad practice to iterate over rows in pandas, and 99% of the time unnecessary.</p>
<p>Also, <code>r.text</code> at the end sits after the for loop, so it only reflects the last request made (and <code>r</code> would not be defined at all if the loop never ran).</p>
<p>Could you explain exactly what you're trying to accomplish? I don't think I'm understanding correctly.</p>
|
python|excel|pandas|python-requests|rest
| 1
|
7,588
| 63,078,749
|
How to fill a time series that's missing data but only when the gap is smaller than a certain number?
|
<p>Good afternoon,</p>
<p>I'm working on pre-processing data that's streaming from sensors and which normally comes in every second (1hz). However this is not always the case, there are instances when there is gaps of data of 2s, 3s and even more.</p>
<p>I'm trying to set up some code that fills these gaps but only when they're smaller than some number, say 10 seconds.</p>
<p>The data comes in as follows:</p>
<pre><code> Timestamp Sensor1 Sensor2 Sensor3
7/1/2020 00:00:00 5 135 77
7/1/2020 00:00:01 6 118 79
7/1/2020 00:00:02 4 131 75
7/1/2020 00:00:04 3 125 78
7/1/2020 00:00:05 9 145 67
7/1/2020 00:00:06 6 136 71
7/1/2020 00:00:10 7 141 77
7/1/2020 00:00:11 4 145 72
</code></pre>
<p>What I'd like to do is fill the dataframe whenever the missed window is smaller than 10 seconds, and fill it with the average of the two neighboring values.</p>
<pre><code> Timestamp Sensor1 Sensor2 Sensor3
7/1/2020 00:00:00 5 135 77
7/1/2020 00:00:01 6 118 79
7/1/2020 00:00:02 4 131 75
7/1/2020 00:00:03 3.5 128 76.5
7/1/2020 00:00:04 3 125 78
7/1/2020 00:00:05 9 145 67
7/1/2020 00:00:06 6 136 71
7/1/2020 00:00:07 6.5 138.5 74
7/1/2020 00:00:08 6.5 138.5 74
7/1/2020 00:00:09 6.5 138.5 74
7/1/2020 00:00:10 7 141 77
7/1/2020 00:00:11 4 145 72
</code></pre>
<p>I think once I can set up the proper time "grid" with no missing seconds filling it should be relatively simple using the fill method. But how do I tell it to only fill windows smaller than 10 seconds?</p>
<p>Thanks in advance</p>
|
<p>Without the "only fill gaps shorter than 10 s" condition, this is just a <code>resample</code> plus <code>interpolate</code>:</p>
<pre><code>df.set_index('Timestamp').resample('s').interpolate().reset_index()
</code></pre>
<p>To fill only when less than 10 s is missing, you can use <code>groupby</code> and start a new group whenever the <code>diff</code> between two consecutive timestamps is greater than 10 seconds, so the interpolation never crosses a large gap. Note: to demonstrate it, I changed 10 to 20 and 11 to 22 in the last two timestamps of your data.</p>
<pre><code>print (df.set_index('Timestamp')
.groupby(df['Timestamp'].diff().dt.total_seconds()
.gt(10).cumsum()
.to_numpy())
.apply(lambda x: x.resample('s').interpolate())
.reset_index()
.drop('level_0', axis=1)
)
Timestamp Sensor1 Sensor2 Sensor3
0 2020-07-01 00:00:00 5.0 135.0 77.0
1 2020-07-01 00:00:01 6.0 118.0 79.0
2 2020-07-01 00:00:02 4.0 131.0 75.0
3 2020-07-01 00:00:03 3.5 128.0 76.5
4 2020-07-01 00:00:04 3.0 125.0 78.0
5 2020-07-01 00:00:05 9.0 145.0 67.0
6 2020-07-01 00:00:06 6.0 136.0 71.0
7 2020-07-01 00:00:20 7.0 141.0 77.0
8 2020-07-01 00:00:21 5.5 143.0 74.5
9 2020-07-01 00:00:22 4.0 145.0 72.0
</code></pre>
|
python|pandas|time-series
| 1
|
7,589
| 67,698,119
|
How to create multiple triangles based on given number of simulations?
|
<p>Below is my code:</p>
<pre><code>triangle = cl.load_sample('genins')
# Use bootstrap sampler to get resampled triangles
bootstrapdataframe = cl.BootstrapODPSample(n_sims=4, random_state=42).fit(triangle).resampled_triangles_
#converting to dataframe
resampledtriangledf = bootstrapdataframe.to_frame()
print(resampledtriangledf)
</code></pre>
<p>In above code i mentioned n_sims(number of simulation)=4. So it generates below datafame:</p>
<pre><code>0 2001 12 254,926
0 2001 24 535,877
0 2001 36 1,355,613
0 2001 48 2,034,557
0 2001 60 2,311,789
0 2001 72 2,539,807
0 2001 84 2,724,773
0 2001 96 3,187,095
0 2001 108 3,498,646
0 2001 120 3,586,037
0 2002 12 542,369
0 2002 24 1,016,927
0 2002 36 2,201,329
0 2002 48 2,923,381
0 2002 60 3,711,305
0 2002 72 3,914,829
0 2002 84 4,385,757
0 2002 96 4,596,072
0 2002 108 5,047,861
0 2003 12 235,361
0 2003 24 960,355
0 2003 36 1,661,972
0 2003 48 2,643,370
0 2003 60 3,372,684
0 2003 72 3,642,605
0 2003 84 4,160,583
0 2003 96 4,480,332
0 2004 12 764,553
0 2004 24 1,703,557
0 2004 36 2,498,418
0 2004 48 3,198,358
0 2004 60 3,524,562
0 2004 72 3,884,971
0 2004 84 4,268,241
0 2005 12 381,670
0 2005 24 1,124,054
0 2005 36 2,026,434
0 2005 48 2,863,902
0 2005 60 3,039,322
0 2005 72 3,288,253
0 2006 12 320,332
0 2006 24 1,022,323
0 2006 36 1,830,842
0 2006 48 2,676,710
0 2006 60 3,375,172
0 2007 12 330,361
0 2007 24 1,463,348
0 2007 36 2,771,839
0 2007 48 4,003,745
0 2008 12 282,143
0 2008 24 1,782,267
0 2008 36 2,898,699
0 2009 12 362,726
0 2009 24 1,277,750
0 2010 12 321,247
1 2001 12 219,021
1 2001 24 755,975
1 2001 36 1,360,298
1 2001 48 2,062,947
1 2001 60 2,356,983
1 2001 72 2,781,187
1 2001 84 2,987,837
1 2001 96 3,118,952
1 2001 108 3,307,522
1 2001 120 3,455,107
1 2002 12 302,932
1 2002 24 1,022,459
1 2002 36 1,634,938
1 2002 48 2,538,708
1 2002 60 3,005,695
1 2002 72 3,274,719
1 2002 84 3,356,499
1 2002 96 3,595,361
1 2002 108 4,100,065
1 2003 12 489,934
1 2003 24 1,233,438
1 2003 36 2,471,849
1 2003 48 3,672,629
1 2003 60 4,157,489
1 2003 72 4,498,470
1 2003 84 4,587,579
1 2003 96 4,816,232
1 2004 12 518,680
1 2004 24 1,209,705
1 2004 36 2,019,757
1 2004 48 2,997,820
1 2004 60 3,630,442
1 2004 72 3,881,093
1 2004 84 4,080,322
1 2005 12 453,963
1 2005 24 1,458,504
1 2005 36 2,036,506
1 2005 48 2,846,464
1 2005 60 3,280,124
1 2005 72 3,544,597
1 2006 12 369,755
1 2006 24 1,209,117
1 2006 36 1,973,136
1 2006 48 3,034,294
1 2006 60 3,537,784
1 2007 12 477,788
1 2007 24 1,524,537
1 2007 36 2,170,391
1 2007 48 3,355,093
1 2008 12 250,690
1 2008 24 1,546,986
1 2008 36 2,996,737
1 2009 12 271,270
1 2009 24 1,446,353
1 2010 12 510,114
2 2001 12 170,866
2 2001 24 797,338
2 2001 36 1,663,610
2 2001 48 2,293,697
2 2001 60 2,607,067
2 2001 72 2,979,479
2 2001 84 3,127,308
2 2001 96 3,285,338
2 2001 108 3,574,272
2 2001 120 3,630,610
2 2002 12 259,060
2 2002 24 1,011,092
2 2002 36 1,851,504
2 2002 48 2,705,313
2 2002 60 3,195,774
2 2002 72 3,766,008
2 2002 84 3,944,417
2 2002 96 4,234,043
2 2002 108 4,763,664
2 2003 12 239,981
2 2003 24 983,484
2 2003 36 1,929,785
2 2003 48 2,497,929
2 2003 60 2,972,887
2 2003 72 3,313,868
2 2003 84 3,727,432
2 2003 96 4,024,122
2 2004 12 77,522
2 2004 24 729,401
2 2004 36 1,473,914
2 2004 48 2,376,313
2 2004 60 2,999,197
2 2004 72 3,372,020
2 2004 84 3,887,883
2 2005 12 321,598
2 2005 24 1,132,502
2 2005 36 1,710,504
2 2005 48 2,438,620
2 2005 60 2,801,957
2 2005 72 3,182,466
2 2006 12 255,407
2 2006 24 1,275,141
2 2006 36 2,083,421
2 2006 48 3,144,579
2 2006 60 3,891,772
2 2007 12 338,120
2 2007 24 1,275,697
2 2007 36 2,238,715
2 2007 48 3,615,323
2 2008 12 310,214
2 2008 24 1,237,156
2 2008 36 2,563,326
2 2009 12 271,093
2 2009 24 1,523,131
2 2010 12 430,591
3 2001 12 330,887
3 2001 24 831,193
3 2001 36 1,601,374
3 2001 48 2,188,879
3 2001 60 2,662,773
3 2001 72 3,086,976
3 2001 84 3,332,247
3 2001 96 3,317,279
3 2001 108 3,576,659
3 2001 120 3,613,563
3 2002 12 358,263
3 2002 24 1,139,259
3 2002 36 2,236,375
3 2002 48 3,163,464
3 2002 60 3,715,130
3 2002 72 4,295,638
3 2002 84 4,502,105
3 2002 96 4,769,139
3 2002 108 5,323,304
3 2003 12 489,934
3 2003 24 1,570,352
3 2003 36 3,123,215
3 2003 48 4,189,299
3 2003 60 4,819,070
3 2003 72 5,306,689
3 2003 84 5,560,371
3 2003 96 5,827,003
3 2004 12 419,727
3 2004 24 1,308,884
3 2004 36 2,118,936
3 2004 48 2,906,732
3 2004 60 3,561,577
3 2004 72 3,934,400
3 2004 84 4,010,511
3 2005 12 389,217
3 2005 24 1,173,226
3 2005 36 1,794,216
3 2005 48 2,528,910
3 2005 60 3,474,035
3 2005 72 3,908,999
3 2006 12 291,940
3 2006 24 1,136,674
3 2006 36 1,915,614
3 2006 48 2,693,930
3 2006 60 3,375,601
3 2007 12 506,055
3 2007 24 1,684,660
3 2007 36 2,678,739
3 2007 48 3,545,156
3 2008 12 282,143
3 2008 24 1,536,490
3 2008 36 2,458,789
3 2009 12 271,093
3 2009 24 1,199,897
3 2010 12 266,359
</code></pre>
<p>Using the above dataframe I have to create 4 triangles based on the Total column:
For example:</p>
<pre><code> Row Labels 12 24 36 48 60 72 84 96 108 120 Grand Total
2001 254,926 535,877 1,355,613 2,034,557 2,311,789 2,539,807 2,724,773 3,187,095 3,498,646 3,586,037 22,029,119
2002 542,369 1,016,927 2,201,329 2,923,381 3,711,305 3,914,829 4,385,757 4,596,072 5,047,861 28,339,832
2003 235,361 960,355 1,661,972 2,643,370 3,372,684 3,642,605 4,160,583 4,480,332 21,157,261
2004 764,553 1,703,557 2,498,418 3,198,358 3,524,562 3,884,971 4,268,241 19,842,659
2005 381,670 1,124,054 2,026,434 2,863,902 3,039,322 3,288,253 12,723,635
2006 320,332 1,022,323 1,830,842 2,676,710 3,375,172 9,225,377
2007 330,361 1,463,348 2,771,839 4,003,745 8,569,294
2008 282,143 1,782,267 2,898,699 4,963,110
2009 362,726 1,277,750 1,640,475
2010 321,247 321,247
Grand Total 3,795,687 10,886,456 17,245,147 20,344,022 19,334,833 17,270,466 15,539,355 12,263,499 8,546,507 3,586,037 128,812,009
.
.
.
</code></pre>
<p>Like this I need 4 triangles (4 is the number of simulations) built from the first dataframe.
If the user gives n_sims=900 then it creates 900 sets of values, and from these we have to create 900 triangles.
In the output above I only displayed the triangle for simulation 0, but I need the triangles for simulations 1, 2 and 3 as well.</p>
|
<p>Try:</p>
<pre><code>df['sample_size'] = pd.to_numeric(df['sample_size'].str.replace(',',''))
df.pivot_table('sample_size','year', 'no', aggfunc='first')\
.pipe(lambda x: pd.concat([x,x.sum().to_frame('Grand Total').T]))
</code></pre>
<p>Output:</p>
<pre><code>no 12 24 36 48 60 72 84 96 108 120
2001 254926.0 535877.0 1355613.0 2034557.0 2311789.0 2539807.0 2724773.0 3187095.0 3498646.0 3586037.0
2002 542369.0 1016927.0 2201329.0 2923381.0 3711305.0 3914829.0 4385757.0 4596072.0 5047861.0 NaN
2003 235361.0 960355.0 1661972.0 2643370.0 3372684.0 3642605.0 4160583.0 4480332.0 NaN NaN
2004 764553.0 1703557.0 2498418.0 3198358.0 3524562.0 3884971.0 4268241.0 NaN NaN NaN
2005 381670.0 1124054.0 2026434.0 2863902.0 3039322.0 3288253.0 NaN NaN NaN NaN
2006 320332.0 1022323.0 1830842.0 2676710.0 3375172.0 NaN NaN NaN NaN NaN
2007 330361.0 1463348.0 2771839.0 4003745.0 NaN NaN NaN NaN NaN NaN
2008 282143.0 1782267.0 2898699.0 NaN NaN NaN NaN NaN NaN NaN
2009 362726.0 1277750.0 NaN NaN NaN NaN NaN NaN NaN NaN
2010 321247.0 NaN NaN NaN NaN NaN NaN NaN NaN NaN
Grand Total 3795688.0 10886458.0 17245146.0 20344023.0 19334834.0 17270465.0 15539354.0 12263499.0 8546507.0 3586037.0
</code></pre>
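<p>To get one triangle per simulation rather than a single aggregated one, group on the simulation index first (I am assuming here that column is called <code>sim</code>; adjust it to the actual name in your frame):</p>
<pre><code>triangles = {
    sim: g.pivot_table('sample_size', 'year', 'no', aggfunc='first')
    for sim, g in df.groupby('sim')
}
# triangles[0], triangles[1], ... one pivoted triangle per simulation
</code></pre>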
|
python|pandas|dataframe|numpy
| 2
|
7,590
| 67,678,256
|
Match between two dataframes and add result to the column of one of them (Pandas)
|
<p>I have the following dataframes:</p>
<pre><code>df = pd.DataFrame({'user': ['A', 'B', 'C'],
'results': ['hi how why', 'which how raw', 'final what is']})
df_v2 = pd.DataFrame({'user': ['A', 'B', 'C'],
'results': ['John', 'Peter', 'Anne']})
</code></pre>
<p>What I have to do is to look for if the user is the same between the two dataframes, if it is the same I have to add the results column in the second dataframe in the first one, to look like this:</p>
<pre><code>df_new = pd.DataFrame({'user': ['A', 'B', 'C'],
'results': ['hi how why John', 'which how raw Peter', 'final what is Anne']})
</code></pre>
<p>Any ideas of how could I do this?</p>
|
<p>You can use the <code>set_index</code> method to tell <code>pandas</code> how to align dataframes when performing operations like addition, subtraction, etc.</p>
<pre><code>new_series = df.set_index("user")["results"] + " " + df_v2.set_index("user")["results"]
print(new_series)
user
A hi how why John
B which how raw Peter
C final what is Anne
Name: results, dtype: object
</code></pre>
<hr />
<p>Alternatively, you can use the <code>merge</code> function to combine your dataframes (which will also perform the aligning operation for you), and then add the columns on the <code>merge</code>'d dataframe:</p>
<pre><code>new_df = df.merge(df_v2, on="user").assign(newcol=lambda d: d["results_x"] + " " + d["results_y"])
print(new_df)
user results_x results_y newcol
0 A hi how why John hi how why John
1 B which how raw Peter which how raw Peter
2 C final what is Anne final what is Anne
</code></pre>
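<p>If you want the final frame to look exactly like <code>df_new</code> (a single <code>results</code> column), a small hedged follow-up on the merged frame above (the <code>results_x</code>/<code>results_y</code> names come from the merge suffixes):</p>
<pre><code>new_df = (new_df.drop(columns=["results_x", "results_y"])
                .rename(columns={"newcol": "results"}))
print(new_df)
  user              results
0    A      hi how why John
1    B  which how raw Peter
2    C   final what is Anne
</code></pre>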
|
python|pandas|dataframe|nlp|match
| 0
|
7,591
| 67,952,600
|
Python categorize data in excel based on key words from another excel sheet
|
<p>I have two excel sheets, one has four different types of categories with keywords listed. I am using Python to find the keywords in the review data and match them to a category. I have tried using pandas and data frames to compare but I get errors like "DataFrame objects are mutable, thus they cannot be hashed". I'm not sure if there is a better way but I am new to Pandas.</p>
<p>Here is an example:</p>
<p>Category sheet</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Service</th>
<th>Experience</th>
</tr>
</thead>
<tbody>
<tr>
<td>fast</td>
<td>bad</td>
</tr>
<tr>
<td>slow</td>
<td>easy</td>
</tr>
</tbody>
</table>
</div>
<p>Data Sheet</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Review #</th>
<th>Location</th>
<th>Review</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>New York</td>
<td>"The service was fast!</td>
</tr>
<tr>
<td>2</td>
<td>Texas</td>
<td>"Overall it was a bad experience for me"</td>
</tr>
</tbody>
</table>
</div>
<p>For the examples above I would expect the following as a result.
I would expect review 1 to match the category Service because of the word "fast" and I would expect review 2 to match category Experience because of the word "bad". I do not expect the review to match every word in the category sheet, and it is fine if one review belongs to more than one category.</p>
<p>Here is my code, note I am using a simple example. In the example below I am trying to find the review data that would match the Customer Service list of keywords.</p>
<pre><code>import pandas as pd
# List of Categories
cat = pd.read_excel("Categories_List.xlsx")
# Data being used
data = pd.read_excel("Data.xlsx")
# Data Frame for review column
reviews = pd.DataFrame(data["reviews"])
# Data Frame for Categories
cs = pd.DataFrame(cat["Customer Service"])
be = pd.DataFrame(cat["Billing Experience"])
net = pd.DataFrame(cat["Network"])
out = pd.DataFrame(cat["Outcome"])
for i in reviews:
if cs in reviews:
print("True")
</code></pre>
|
<p>One approach would be to build a regular expression from the <code>cat</code> frame:</p>
<pre><code>exp = '|'.join([rf'(?P<{col}>{"|".join(cat[col].dropna())})' for col in cat])
</code></pre>
<pre><code>(?P<Service>fast|slow)|(?P<Experience>bad|easy)
</code></pre>
<p>Alternatively replace <code>cat</code> with a list of columns to test:</p>
<pre><code>cols = ['Service']
exp = '|'.join([rf'(?P<{col}>{"|".join(cat[col].dropna())})' for col in cols])
</code></pre>
<pre><code>(?P<Service>fast|slow|quick)
</code></pre>
<hr/>
<p>Then to get matches use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extractall.html#pandas-series-str-extractall" rel="nofollow noreferrer"><code>str.extractall</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.aggregate.html#pandas-core-groupby-dataframegroupby-aggregate" rel="nofollow noreferrer"><code>aggregate</code></a> into summary + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html#pandas-dataframe-join" rel="nofollow noreferrer"><code>join</code></a> to add back to the <code>reviews</code> frame:</p>
<p>Aggregated into List:</p>
<pre><code>reviews = reviews.join(
reviews['Review'].str.extractall(exp).groupby(level=0).agg(
lambda g: list(g.dropna()))
)
</code></pre>
<pre class="lang-none prettyprint-override"><code> Review # Location Review Service Experience
0 1 New York The service was fast and easy! [fast] [easy]
1 2 Texas Overall it was a bad experience for me [] [bad]
</code></pre>
<p>Aggregated into String:</p>
<pre><code>reviews = reviews.join(
reviews['Review'].str.extractall(exp).groupby(level=0).agg(
lambda g: ', '.join(g.dropna()))
)
</code></pre>
<pre class="lang-none prettyprint-override"><code> Review # Location Review Service Experience
0 1 New York The service was fast and easy! fast easy
1 2 Texas Overall it was a bad experience for me bad
</code></pre>
<hr/>
<p>Alternatively for an existence test use <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.any.html#pandas-dataframe-any" rel="nofollow noreferrer"><code>any</code></a> on level=0:</p>
<pre><code>reviews = reviews.join(
reviews['Review'].str.extractall(exp).any(level=0)
)
</code></pre>
<pre class="lang-none prettyprint-override"><code> Review # Location Review Service Experience
0 1 New York The service was fast and easy! True True
1 2 Texas Overall it was a bad experience for me False True
</code></pre>
<p>Or iteratively over the columns and with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html#pandas-series-str-contains" rel="nofollow noreferrer"><code>str.contains</code></a>:</p>
<pre><code>cols = cat.columns
for col in cols:
reviews[col] = reviews['Review'].str.contains('|'.join(cat[col].dropna()))
</code></pre>
<pre class="lang-none prettyprint-override"><code> Review # Location Review Service Experience
0 1 New York The service was fast and easy! True True
1 2 Texas Overall it was a bad experience for me False True
</code></pre>
|
python|excel|pandas|dataframe
| 0
|
7,592
| 67,971,213
|
Read several csv from another folder in python
|
<p>The Python file I work in lives at the path '/Users/pycar/Documents/Srett/Python/'.
In that same location I have a folder named 'month' that contains 8 other folders, each holding a csv that I want to import with pandas because it is a database. The problem is that most of the code I have found does not work. The 8 folders are named after the first 8 months of the year, and the names of the csv files inside do not matter.</p>
<p>I would like to make a loop that digs into 'month', goes into each folder (january, february, etc.) and imports the csv contained inside (with read_csv).</p>
<p>For a little more context: the file my_python is my notebook and it sits in the same folder as 'month', which is laid out as below.</p>
<p>my_python</p>
<p>month-> january -> jan.csv</p>
<p>month-> February -> feb.csv</p>
<p>month-> March -> mar.csv</p>
<p>month-> April -> apr.csv</p>
<p>month-> May -> may.csv</p>
<p>month-> June -> jun.csv</p>
<p>month-> july -> jul.csv</p>
<p>month-> August -> Aug.csv</p>
<p>How can i proceed ?</p>
|
<p>If the <code>month</code> folder and its subfolders hold only the csv files of interest, you can use <a href="https://docs.python.org/3/library/glob.html#glob.glob" rel="nofollow noreferrer">glob.glob</a>. Put the following script in the same folder in which the <code>month</code> folder is present, run it, and check that it prints all the csv files you want to get:</p>
<pre><code>import glob
for i in glob.glob('month/*/*.csv'):
print(i)
</code></pre>
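<p>If the goal is to then load every csv with pandas, a minimal sketch (assuming all the csv files share the same columns) could look like this:</p>
<pre><code>import glob
import pandas as pd

frames = []
for path in glob.glob('month/*/*.csv'):
    # read each month's csv into a dataframe
    frames.append(pd.read_csv(path))

# stack the eight monthly frames into a single dataframe
all_months = pd.concat(frames, ignore_index=True)
</code></pre>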
|
python|pandas|csv
| 2
|
7,593
| 67,972,661
|
Hugging Face: NameError: name 'sentences' is not defined
|
<p>I am following this tutorial: <a href="https://huggingface.co/transformers/training.html" rel="nofollow noreferrer">https://huggingface.co/transformers/training.html</a>. However, I am coming across an error, and I think the tutorial is missing an import, but I do not know which.</p>
<p>These are my current imports:</p>
<pre><code># Transformers installation
! pip install transformers
# To install from source instead of the last release, comment the command above and uncomment the following one.
# ! pip install git+https://github.com/huggingface/transformers.git
! pip install datasets transformers
from transformers import pipeline
</code></pre>
<p>Current code:</p>
<pre><code>from datasets import load_dataset
raw_datasets = load_dataset("imdb")
</code></pre>
<pre><code>from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
</code></pre>
<pre><code>inputs = tokenizer(sentences, padding="max_length", truncation=True)
</code></pre>
<p>The error:</p>
<pre><code>NameError Traceback (most recent call last)
<ipython-input-9-5a234f114e2e> in <module>()
----> 1 inputs = tokenizer(sentences, padding="max_length", truncation=True)
NameError: name 'sentences' is not defined
</code></pre>
|
<p>This error occurs because <code>sentences</code> has not been defined anywhere. You first need to pull the raw text out of the dataset, for example:</p>
<pre><code>k = raw_datasets['train']
sentences = k['text']
</code></pre>
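<p>The tutorial's later steps tokenize the whole dataset with <code>Dataset.map</code> instead of a plain Python list; a minimal sketch of that approach (the name <code>tokenize_function</code> is just an illustrative choice):</p>
<pre><code>def tokenize_function(examples):
    # examples["text"] is a batch of raw reviews from the imdb dataset
    return tokenizer(examples["text"], padding="max_length", truncation=True)

tokenized_datasets = raw_datasets.map(tokenize_function, batched=True)
</code></pre>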
|
python|bert-language-model|huggingface-transformers|huggingface-tokenizers|huggingface-datasets
| 1
|
7,594
| 61,442,931
|
Matplotlib Animation plotting the same thing for every frame
|
<p>I am trying to recreate the animation in <a href="https://youtu.be/kbKtFN71Lfs" rel="nofollow noreferrer">this</a> video.</p>
<p>Currently, my code plots, but each frame is the same frame as the last. I am trying to first plot the vertices, then plot each dot one at a time. The points are precalculated, so all I want is to plot the current point and the points before it. The next step will be for me to make an animation (edit: i.e. gif or mp4), but that is after I can get this part working.</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import numpy as np
import random
vertices = np.array([[0,0],
[2,0],
[1,2.5]])
dots = 100
def newPos(index, old_x, old_y):
vertex_x = vertices[index][0]
vertex_y = vertices[index][1]
new_x = 0.5*(vertex_x + old_x)
new_y = 0.5*(vertex_y + old_y)
return([new_x,new_y])
global points
points = np.array([[0.25, 0.1]])
for i in range(dots-1):
points = np.concatenate((points, [newPos(random.randint(0,2), points[i][0], points[i][1])]), axis = 0)
plt.figure()
global index
index = 0
def animate(i):
plt.cla()
global index
index += 1
plt.plot(vertices[0][0], vertices[0][1], 'o')
global points
plt.plot(points[0:index][0], points[0:index][1], 'o', color = '#1f77b4')
plt.legend(['index = {0}'.format(index)], loc='upper left')
plt.tight_layout()
while index < dots:
ani = FuncAnimation(plt.gcf(), animate, interval=15000/dots)
plt.title('Chaos Game with {0} Vertices and {1} Steps'.format(len(vertices), dots))
plt.show()
</code></pre>
|
<p>It seems like you misunderstood how <a href="https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.animation.FuncAnimation.html" rel="nofollow noreferrer">matplotlib.animation.FuncAnimation</a> works; I'd strongly advise you to look at some of the many examples to be found online. Let's try with the following version:</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
import random
vertices = np.array([[0,0],
[2,0],
[1,2.5],
[0,0]])
dots = 1000
# lower and higher bounds for x to be generated
int_lb = np.min(vertices[:,0])
int_hb = np.max(vertices[:,0])
def newPos(index, old_x, old_y):
vertex_x = vertices[index][0]
vertex_y = vertices[index][1]
new_x = 0.5*(vertex_x + old_x)
new_y = 0.5*(vertex_y + old_y)
return([new_x,new_y])
# evaluating all your points
points = np.array([[0.25, 0.1]])
for j in range(dots-1):
points = np.concatenate((points, [newPos(random.randint(int_lb,int_hb), points[j][0], points[j][1])]), axis=0)
fig = plt.figure()
ax = plt.gca()
ax.set_xlim([np.min(vertices[:,0])-0.05*np.max(vertices[:,0]),1.05*np.max(vertices[:,0])])
ax.set_ylim([np.min(vertices[:,1])-0.05*np.max(vertices[:,1]),1.05*np.max(vertices[:,1])])
ax.set_title('Chaos Game with {a} Vertices and {b} Steps'.format(a=len(vertices)-1, b=dots))
# draw boundaries
ax.plot(vertices[:,0],vertices[:,1],'k-', linewidth=1)
# initialize scatter object for current step and all evaluated dots
scat_curr = ax.scatter([], [], marker='X', s=15, c='black')
scat_dots = ax.scatter([], [], marker='o', s=5, c='#1f77b4',zorder=-1)
def init():
scat_curr.set_offsets(np.c_[vertices[0,0], vertices[0,1]])
scat_dots.set_offsets(np.c_[vertices[0,0], vertices[0,1]])
return scat_curr, scat_dots
def animate(i):
scat_curr.set_offsets(np.c_[points[i,0], points[i,1]])
scat_dots.set_offsets(np.c_[points[:i,0], points[:i,1]])
ax.legend([scat_curr],['iter i = {a}'.format(a=i)], loc='upper left')
return scat_curr, scat_dots
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=dots, interval=10)
Writer = animation.writers['ffmpeg']
writer = Writer(fps=15, metadata=dict(artist='Me'), bitrate=1800)
anim.save('some_nice_triforces.mp4', writer=writer)
</code></pre>
<p>which gives:</p>
<p><a href="https://i.stack.imgur.com/6X68l.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6X68l.gif" alt="triforce"></a></p>
<p>If you have any questions, I will add some more comments but since this is mainly your own work here I am sure that you will figure out that what you have tried was far more complex than what it should have been :). Hope this helps.</p>
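<p>Since the question also mentions producing a gif, a hedged note: the same animation object can be saved with matplotlib's <code>PillowWriter</code> instead of ffmpeg, for example:</p>
<pre><code>anim.save('chaos_game.gif', writer=animation.PillowWriter(fps=30))
</code></pre>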
|
python|numpy|matplotlib
| 1
|
7,595
| 68,851,524
|
Filter out the last available day of a quarter in the dataframe in which Quarter column is already present
|
<p>I want to get the row for the last available date in a Quarter in a pandas df. There's already a column denoting the Quarter of that particular year.</p>
<pre><code> player amount date Quarter
dan 10 2021-06-29 2Q21
dmitri 45 2021-06-30 2Q21
darren 15 2021-12-31 4Q21
xae12 40 2021-12-30 4Q21
except 89 2022-01-31 1Q22
</code></pre>
<p>For the above df. I should get the following rows as output (the ones with latest date in a particular Quarter)</p>
<pre><code> player amount date Quarter
dmitri 45 2021-06-30 2Q21
darren 15 2021-12-31 4Q21
</code></pre>
<p><strong>Note</strong>: The last row shouldn't appear in the result as it is not the actual end date of a Quarter (e.g. June 30, Dec 31 etc.)</p>
<p>Any help is appreciated. Right now I am trying to use the pandasql library, but I don't want to inject SQL-style queries for pandas manipulations into my code. I would prefer doing it in a more pandas-native way.</p>
|
<p>You can use <code>pd.offsets.QuarterEnd</code>:</p>
<pre><code># df["date"] = pd.to_datetime(df["date"])
print (df.loc[df["date"] == (df["date"]+pd.offsets.QuarterEnd(0))])
player amount date Quarter
1 dmitri 45 2021-06-30 2Q21
2 darren 15 2021-12-31 4Q21
</code></pre>
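<p>An equivalent check (assuming <code>date</code> has already been converted with <code>pd.to_datetime</code>) uses the <code>dt.is_quarter_end</code> accessor:</p>
<pre><code>print(df[df["date"].dt.is_quarter_end])

   player  amount       date Quarter
1  dmitri      45 2021-06-30    2Q21
2  darren      15 2021-12-31    4Q21
</code></pre>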
|
python|pandas|dataframe|date|datetime
| 3
|
7,596
| 68,697,977
|
While working on building an image segmentation model I am facing a problem of getting dimensions not equal
|
<p>I am working on an image segmentation problem where training images=50 and testing images=51. I am facing an error where dimensions are not equal.
Input_shape=(256,256,3)
Model Code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Conv2D, BatchNormalization, Activation, MaxPool2D, Conv2DTranspose, Concatenate, Input
from tensorflow.keras.layers import GlobalAveragePooling2D, Reshape, Dense, Multiply, AveragePooling2D, UpSampling2D
from tensorflow.keras.models import Model
from tensorflow.keras.applications import VGG19
def squeeze_excite_block(inputs, ratio=8):
init = inputs
channel_axis = -1
filters = init.shape[channel_axis]
se_shape = (1, 1, filters)
se = GlobalAveragePooling2D()(init)
se = Reshape(se_shape)(se)
se = Dense(filters // ratio, activation='relu', kernel_initializer='he_normal', use_bias=False)(se)
se = Dense(filters, activation='sigmoid', kernel_initializer='he_normal', use_bias=False)(se)
x = Multiply()([init, se])
return x
def ASPP(x, filter):
shape = x.shape
y1 = AveragePooling2D(pool_size=(shape[1], shape[2]))(x)
y1 = Conv2D(filter, 1, padding="same")(y1)
y1 = BatchNormalization()(y1)
y1 = Activation("relu")(y1)
y1 = UpSampling2D((shape[1], shape[2]), interpolation="bilinear")(y1)
y2 = Conv2D(filter, 1, dilation_rate=1, padding="same", use_bias=False)(x)
y2 = BatchNormalization()(y2)
y2 = Activation("relu")(y2)
y3 = Conv2D(filter, 3, dilation_rate=6, padding="same", use_bias=False)(x)
y3 = BatchNormalization()(y3)
y3 = Activation("relu")(y3)
y4 = Conv2D(filter, 3, dilation_rate=12, padding="same", use_bias=False)(x)
y4 = BatchNormalization()(y4)
y4 = Activation("relu")(y4)
y5 = Conv2D(filter, 3, dilation_rate=18, padding="same", use_bias=False)(x)
y5 = BatchNormalization()(y5)
y5 = Activation("relu")(y5)
y = Concatenate()([y1, y2, y3, y4, y5])
y = Conv2D(filter, 1, dilation_rate=1, padding="same", use_bias=False)(y)
y = BatchNormalization()(y)
y = Activation("relu")(y)
return y
def conv_block(x, filters):
x = Conv2D(filters, 3, padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = Conv2D(filters, 3, padding="same")(x)
x = BatchNormalization()(x)
x = Activation("relu")(x)
x = squeeze_excite_block(x)
return x
def encoder1(inputs):
skip_connections = []
model = VGG19(include_top=False, weights="imagenet", input_tensor=inputs)
names = ["block1_conv2", "block2_conv2", "block3_conv4", "block4_conv4"]
for name in names:
skip_connections.append(model.get_layer(name).output)
output = model.get_layer("block5_conv4").output
return output, skip_connections
def decoder1(inputs, skip_connections):
num_filters = [256, 128, 64, 32]
skip_connections.reverse()
x = inputs
for i, f in enumerate(num_filters):
x = UpSampling2D((2, 2), interpolation="bilinear")(x)
x = Concatenate()([x, skip_connections[i]])
x = conv_block(x, f)
return x
def output_block(inputs):
x = Conv2D(1, 1, padding="same")(inputs)
x = Activation("sigmoid")(x)
return x
def encoder2(inputs):
num_filters = [32, 64, 128, 256]
skip_connections = []
x = inputs
for i, f in enumerate(num_filters):
x = conv_block(x, f)
skip_connections.append(x)
x = MaxPool2D((2, 2))(x)
return x, skip_connections
def decoder2(inputs, skip_1, skip_2):
num_filters = [256, 128, 64, 32]
skip_2.reverse()
x = inputs
for i, f in enumerate(num_filters):
x = UpSampling2D((2, 2), interpolation="bilinear")(x)
x = Concatenate()([x, skip_1[i], skip_2[i]])
x = conv_block(x, f)
return x
def build_model(input_shape):
inputs = Input(input_shape)
x, skip_1 = encoder1(inputs)
x = ASPP(x, 64)
x = decoder1(x, skip_1)
output1 = output_block(x)
x = inputs * output1
x, skip_2 = encoder2(x)
x = ASPP(x, 64)
x = decoder2(x, skip_1, skip_2)
output2 = output_block(x)
outputs = Concatenate()([output1, output2])
model = Model(inputs, outputs)
return model
if __name__ == "__main__":
input_shape = (256, 256, 3)
model = build_model(input_shape)
model.summary()
</code></pre>
<p>I get the following error while training my model:
ValueError: in user code:</p>
<pre><code>/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:855 train_function *
return step_function(self, iterator)
<ipython-input-10-88ab9377d655>:15 dice_coef *
intersection = tf.reduce_sum(y_true * y_pred)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1250 binary_op_wrapper
raise e
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1234 binary_op_wrapper
return func(x, y, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:1575 _mul_dispatch
return multiply(x, y, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py:206 wrapper
return target(*args, **kwargs)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/math_ops.py:530 multiply
return gen_math_ops.mul(x, y, name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_math_ops.py:6250 mul
"Mul", x=x, y=y, name=name)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/op_def_library.py:750 _apply_op_helper
attrs=attr_protos, op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py:601 _create_op_internal
compute_device)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:3565 _create_op_internal
op_def=op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:2042 __init__
control_input_ops, op_def)
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py:1883 _create_c_op
raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 65536 and 131072 for '{{node mul_1}} = Mul[T=DT_FLOAT](flatten/Reshape, flatten_1/Reshape)' with input shapes: [?,65536], [?,131072]
</code></pre>
<p>Any help regarding this would be much appreciated.</p>
<p>Thank you for your help in advance.</p>
|
<p>The error occurs at this line</p>
<pre><code>x = Multiply()([init, se])
</code></pre>
<p>The reason for the error is that <code>Multiply()</code> does an element-wise multiplication, and hence the dimensions of <code>init</code> and <code>se</code> should be compatible. In your case <code>init</code> is the output of a <code>BatchNorm</code> on a <code>Conv2D</code> and hence will have a shape like <code>(1, x, y, filters)</code>, whereas <code>se</code> is the output of a dense layer and hence will have a shape like <code>(1, filters)</code>; here <code>filters</code> is the number of neurons in the last Dense layer. You now have to reshape the output of the <code>Dense</code> layer to match <code>init</code>. Also, if your x and y are not equal to 1 then there will be a dimension mismatch and you will get the error.</p>
|
python|tensorflow|deep-learning|conv-neural-network|image-segmentation
| 0
|
7,597
| 68,830,578
|
How to properly get all the data in excel from dataframe in google colab
|
<p>I have a dataframe in Google Colab; when I print the dataframe, this is the output I get:</p>
<pre><code> 0
0 Aaron Burciaga
1 \nECS
2 \nVP Artificial Intelligence\n
3 Chanchal Chatterjee
4 \nGoogle, Inc.
.. ...
247 \nI2chain
248 \nFounder\n
249 Chandrashekar Bhat M
250 \nTrashin
251 \nCEO\n
</code></pre>
<p>However, when I export the dataframe to Excel I get the output below
<a href="https://i.stack.imgur.com/Yv0ZO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Yv0ZO.png" alt="enter image description here" /></a></p>
<p>Can someone help me figure out how to export the data properly?</p>
<p>My code:</p>
<pre><code> tmpList = []
speaker_name = soup.find_all('h4', class_ = 'clearfix Roboto-Medium font15 sbl-t t-b- m0 dks-t l-h20' )
for name in speaker_name:
print(name.text)
tmpList.append(name.text)
df1 = pd.DataFrame(tmpList)
df1
print(df1)
df1.to_csv('sample.csv')
files.download("sample.csv")
</code></pre>
|
<p>The problem comes from the <a href="https://stackoverflow.com/questions/31489377/working-of-n-in-python"><code>\n</code></a> at the beginning (of some) of your values. Just <a href="https://www.w3resource.com/pandas/series/series-str-strip.php" rel="nofollow noreferrer">strip</a> them.<sup>In case you haven't figured it out, the values are present in each cell, but not at the first (visible) line.</sup></p>
<p>So what about replacing <code>df1 = pd.DataFrame(tmpList)</code> by</p>
<pre><code>df1 = pd.DataFrame(tmpList).applymap(str.strip)
</code></pre>
<p>?</p>
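<p>Alternatively, you can strip either while collecting the strings or on the finished column; both are minimal sketches based on the code in the question:</p>
<pre><code># strip while scraping ...
tmpList.append(name.text.strip())

# ... or strip the single text column afterwards
df1[0] = df1[0].str.strip()
</code></pre>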
|
python|pandas|google-colaboratory
| 0
|
7,598
| 65,494,915
|
TypeError: f() takes 1 positional argument but 2 were given
|
<p>I can't figure out why I get the error message</p>
<pre><code>TypeError
TypeError: f() takes 1 positional argument but 2 were given
</code></pre>
<p>I have this code:</p>
<pre><code>df_dur=user.groupby(['Date'], as_index=False).sum(['Duration'])
df_dur=df_dur.duration
print('DF Duration:\n', df_dur)
</code></pre>
<p>How can I fix this?</p>
|
<p>I had the same error message, though I'm not sure this is at all relevant to your question. But maybe someone else will find it useful: I updated my pandas from 0.22 to 1.1 and it solved my problem (TypeError: f() takes 1 positional argument but 2 were given).</p>
<blockquote>
<p>pip install --upgrade pandas</p>
</blockquote>
<p>NOTE: This was on a Jetson AGX Xavier running tegra linux.</p>
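<p>If upgrading is not an option, a hedged guess at a rewrite that avoids passing a list positionally to <code>sum()</code> (which older pandas versions interpret differently) and matches the column's capitalisation:</p>
<pre><code>df_dur = user.groupby('Date', as_index=False)['Duration'].sum()
print('DF Duration:\n', df_dur)
</code></pre>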
|
python|pandas
| 3
|
7,599
| 65,762,304
|
Getting very poor accuracy on stanford_dogs dataset
|
<p>I'm trying to train a model on the stanford_dogs dataset to classify 120 dog breeds but my code is acting strange.</p>
<p>I downloaded the image data from <a href="http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar" rel="nofollow noreferrer">http://vision.stanford.edu/aditya86/ImageNetDogs/images.tar</a></p>
<p>Then ran the following code to split each folder of the breeds into training and testing folders:</p>
<pre><code>dataset_dict = {}
source_path = 'C:/Users/visha/Downloads/stanford_dogs/dataset'
dir_root = os.getcwd()
dataset_folders = [x for x in os.listdir(os.path.join(dir_root, source_path)) if os.path.isdir(os.path.join(dir_root, source_path, x))]
for category in dataset_folders:
dataset_dict[category] = {'source_path': os.path.join(dir_root, source_path, category),
'train_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/train',
folder_type='train',
data_class=category),
'validation_path': create_folder(new_path='C:/Users/visha/Downloads/stanford_dogs/validation',
folder_type='validation',
data_class=category)}
dataset_folders = [x for x in os.listdir(os.path.join(dir_root, source_path)) if os.path.isdir(os.path.join(dir_root, source_path, x))]
for key in dataset_dict:
print("Splitting Category {} ...".format(key))
split_data(source_path=dataset_dict[key]['source_path'],
train_path=dataset_dict[key]['train_path'],
validation_path=dataset_dict[key]['validation_path'],
split_size=0.7)
</code></pre>
<p>I fed the images through the network after some image augmentation and used sigmoid activation in the final layer and categorical_crossentropy loss.</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import RMSprop
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16, (3, 3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(32, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3, 3), activation='relu'),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(120, activation='softmax')
])
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
TRAINING_DIR = 'C:/Users/visha/Downloads/stanford_dogs/train'
train_datagen = ImageDataGenerator(rescale=1./255,rotation_range=40,width_shift_range=0.2,height_shift_range=0.2,
shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=10,
class_mode='categorical',
target_size=(150, 150))
VALIDATION_DIR = 'C:/Users/visha/Downloads/stanford_dogs/validation'
validation_datagen = ImageDataGenerator(rescale=1./255, rotation_range=40,width_shift_range=0.2, height_shift_range=0.2,
shear_range=0.2,zoom_range=0.2,horizontal_flip=True,fill_mode='nearest')
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=10,
class_mode='categorical',
target_size=(150, 150))
history = model.fit(train_generator,
epochs=10,
verbose=1,
validation_data=validation_generator)
</code></pre>
<p>But the code is not working as intended. The val_accuracy after 10 epochs is something like 4.756.</p>
|
<p>For validation data you should not do any image augmentation, just rescaling. In the validation flow_from_directory set shuffle=False. Be advised that the Stanford Dogs dataset is very difficult: to achieve a reasonable degree of accuracy you will need a much more complex model. I recommend you consider transfer learning using the MobileNet model. The code below shows how to do that.</p>
<pre><code>from tensorflow.keras.layers import Dense, Dropout, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adamax

# MobileNet backbone with imagenet weights and global max pooling instead of the top layers
base_model = tf.keras.applications.mobilenet.MobileNet(include_top=False,
                                                       input_shape=(150, 150, 3),
                                                       pooling='max',
                                                       weights='imagenet',
                                                       dropout=.4)
x = base_model.output
x = BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001)(x)
x = Dense(1024, activation='relu')(x)
x = Dropout(rate=.3, seed=123)(x)
output = Dense(120, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=output)
model.compile(Adamax(lr=.001), loss='categorical_crossentropy', metrics=['accuracy'])
</code></pre>
<p>I forgot to mention that MobileNet was trained on images with pixel values in the range -1 to +1. So in ImageDataGenerator include the code</p>
<pre><code>preprocessing_function=tf.keras.applications.mobilenet.preprocess_input
</code></pre>
<p>this scales the pixels so you do not need the code</p>
<pre><code>rescale=1./255
</code></pre>
<p>(The <code>rescale</code> argument only multiplies pixel values by a constant, so on its own it cannot shift them into the -1 to +1 range; the <code>preprocessing_function</code> above is the reliable way to get that scaling.)</p>
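<p>Putting the generator advice together, a hedged sketch of the train and validation generators (directory paths, batch size and target size taken from the question and the model above):</p>
<pre><code>preprocess = tf.keras.applications.mobilenet.preprocess_input

# training generator: augmentation plus the MobileNet preprocessing
train_datagen = ImageDataGenerator(preprocessing_function=preprocess,
                                   rotation_range=40, width_shift_range=0.2,
                                   height_shift_range=0.2, shear_range=0.2,
                                   zoom_range=0.2, horizontal_flip=True,
                                   fill_mode='nearest')
# validation generator: no augmentation, just the preprocessing
validation_datagen = ImageDataGenerator(preprocessing_function=preprocess)

train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
                                                    batch_size=10,
                                                    class_mode='categorical',
                                                    target_size=(150, 150))
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
                                                              batch_size=10,
                                                              class_mode='categorical',
                                                              target_size=(150, 150),
                                                              shuffle=False)
</code></pre>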
|
python|python-3.x|tensorflow|neural-network|tensorflow2.0
| 1
|