| Unnamed: 0 (int64, 0-378k) | id (int64, 49.9k-73.8M) | title (string, 15-150 chars) | question (string, 37-64.2k chars) | answer (string, 37-44.1k chars) | tags (string, 5-106 chars) | score (int64, -10-5.87k) |
|---|---|---|---|---|---|---|
374,800
| 49,202,426
|
Adding tuple elements, parsed into pandas DataFrame
|
<p>I have several Python lists of tuples:</p>
<pre><code>[(0, 61), (1, 30), (5, 198), (4, 61), (0, 30), (5, 200)]
[(1, 72), (2, 19), (3, 31), (4, 192), (6, 72), (5, 75)]
[(3, 12), (0, 51)]
...
</code></pre>
<p>Each of these tuples is created in the format <code>(key, value)</code>:</p>
<p>There are seven keys: 0, 1, 2, 3, 4, 5, 6</p>
<p>The intended output is a pandas DataFrame, whereby each column is named by the key:</p>
<pre><code>import pandas as pd
print(df)
0 1 2 3 4 5 6
91 30 0 0 61 198 0
0 72 19 31 192 75 72
51 0 0 12 0 0 0
</code></pre>
<p>Now, the problem I have conceptually is how to add several tuple "values" when their keys are the same. </p>
<p>I can access these values for a given list, e.g. </p>
<pre><code>mylist = [(0, 61), (1, 30), (5, 198), (4, 61), (0, 30), (5, 200)]
keys = [x[0] for x in mylist]
</code></pre>
<p>and</p>
<pre><code>print(keys)
[0, 1, 5, 4, 0, 5]
</code></pre>
<p>I'm not sure how to create, e.g. a dictionary of the key:value pairs, which I could load into a pandas DataFrame</p>
|
<p>Consider your data assigned to the name <code>tups</code></p>
<pre><code>tups = [
    [(0, 61), (1, 30), (5, 198), (4, 61), (0, 30), (5, 200)],
    [(1, 72), (2, 19), (3, 31), (4, 192), (6, 72), (5, 75)],
    [(3, 12), (0, 51)]
]
</code></pre>
<hr>
<p><strong>Option 0</strong><br>
Using <code>np.bincount</code> and crazy maps and zips and splats<br>
This works because <code>np.bincount</code>'s first two arguments are the array of positions and the optional array of weights to use while adding. </p>
<pre><code>import numpy as np

pd.DataFrame(
    list(map(lambda t: np.bincount(*zip(*t)), tups))
).fillna(0, downcast='infer')
0 1 2 3 4 5 6
0 91 30 0 0 61 398 0
1 0 72 19 31 192 75 72
2 51 0 0 12 0 0 0
</code></pre>
<hr>
<p><strong>Option 1</strong><br>
Using comprehensions and summation over axis levels. </p>
<pre><code>pd.Series({
    (i, j, k): v
    for i, row in enumerate(tups)
    for k, (j, v) in enumerate(row)
}).sum(level=[0, 1]).unstack(fill_value=0)
0 1 2 3 4 5 6
0 91 30 0 0 61 398 0
1 0 72 19 31 192 75 72
2 51 0 0 12 0 0 0
</code></pre>
<hr>
<p><strong>Option 2</strong><br>
You can use the <code>DataFrame</code> constructor on the result of using a defaultdict:</p>
<pre><code>from collections import defaultdict

d = defaultdict(lambda: defaultdict(int))

for i, row in enumerate(tups):
    for j, v in row:
        d[j][i] += v

pd.DataFrame(d).fillna(0, downcast='infer')
0 1 2 3 4 5 6
0 91 30 0 0 61 398 0
1 0 72 19 31 192 75 72
2 51 0 0 12 0 0 0
</code></pre>
<hr>
<p><strong>Option 3</strong><br>
Create a zero dataframe and update it via iteration</p>
<pre><code>n, m = len(tups), max(j for row in tups for j, _ in row) + 1
df = pd.DataFrame(0, range(n), range(m))

for i, row in enumerate(tups):
    for j, v in row:
        df.at[i, j] += v

df
0 1 2 3 4 5 6
0 91 30 0 0 61 398 0
1 0 72 19 31 192 75 72
2 51 0 0 12 0 0 0
</code></pre>
|
python|pandas
| 4
|
374,801
| 49,296,114
|
Convert list of strings to columns with custom rules pandas
|
<p>I have list of string within my dataframe columns:</p>
<pre><code>data = [{'column A': '3 item X; 4 item Y; item E of size 7', 'column B': 'item I of size 10; item X has 5 specificities; characteristic W'},
{'column A': '13 item X; item F of size 0; 9 item Y', 'column B': 'item J of size 11; item Y has 8 specificities'}]
df = pd.DataFrame(data)
</code></pre>
<p><img src="https://i.stack.imgur.com/ez5F3.png" alt="df"></p>
<p>I want to extract numerical information from strings that contain integers, for each row.<br>
For instance, I need to create a new column named <code>Size item E</code> that takes the value <code>7</code> for the first row of <code>df</code> in column A, since the list contains <code>item E of size 7</code>.<br>
If a value in the list of strings does not contain a number, I just want to encode it as 1 or 0 depending on whether it is present in the original list. </p>
<p>Here is a summary of my desired output:</p>
<p><img src="https://i.stack.imgur.com/xrB9x.png" alt="df2"></p>
<p>This is what I have coded so far, applying only 1 rule:</p>
<pre><code>import pandas
import re

def hasNumbers(inputString):
    return any(char.isdigit() for char in inputString)

def transform(df):
    columns = ['column A', 'column B']
    for col in columns:
        temp = df[col].apply(lambda x : str(x).split(';'))
        tokens = set([l for j in temp for l in j])
        for token in tokens:
            try:
                integer = int(re.search(r'\d+', token).group())
            except:
                pass
            if token[0].isdigit():
                df['Nb ' + token.replace('{} '.format(integer), '')] = integer
            # if ...:
            #     ...other rules
            elif hasNumbers(token) == False:
                df[token] = df[col].apply(lambda x : 1 if token in str(x) else 0)
        df = df.drop(col, axis=1)
    return df

df3 = transform(df)
</code></pre>
<p>Which is returning me the following dataframe:</p>
<p><img src="https://i.stack.imgur.com/hvHVc.png" alt="df3"></p>
<p>As you can see, I cannot apply my feature extraction by row; it updates the whole pandas series. Is there any way to update new column values for each row step by step?</p>
|
<p>Don't go for complex functions; pandas has great string manipulation functions.
Check this code to get the desired output.</p>
<pre><code>import pandas as pd

data = [{'column A': '3 item X; 4 item Y; item E of size 7', 'column B': 'item I of size 10; item X has 5 specificities; characteristic W'},
{'column A': '13 item X; item F of size 0; 9 item Y', 'column B': 'item J of size 11; item Y has 8 specificities'}]
df = pd.DataFrame(data)
#joining 2 columns with ';'
df['All Columns joined'] = df[['column A','column B']].apply(lambda x: ';'.join(x), axis=1)
#creating empty dataframe
df_new = pd.DataFrame([])
#Desired output logic using string extract function
df_new['Nb item X'] = df['All Columns joined'].str.extract(r'([0-9]+) item X',expand = False)
df_new['Nb item Y'] = df['All Columns joined'].str.extract(r'([0-9]+) item Y',expand = False)
df_new['Nb specificities item X'] = df['All Columns joined'].str.extract(r'item X has ([0-9]+) specificities',expand = False)
df_new['Nb specificities item Y'] = df['All Columns joined'].str.extract(r'item Y has ([0-9]+) specificities',expand = False)
df_new['Size item E'] = df['All Columns joined'].str.extract(r'item E of size ([0-9]+)',expand = False)
df_new['Size item F'] = df['All Columns joined'].str.extract(r'item F of size ([0-9]+)',expand = False)
df_new['Size item I'] = df['All Columns joined'].str.extract(r'item I of size ([0-9]+)',expand = False)
df_new['Size item J'] = df['All Columns joined'].str.extract(r'item J of size ([0-9]+)',expand = False)
df_new['characteristic W'] = df['All Columns joined'].str.extract(r'(characteristic W)',expand = False).notnull().astype(int)
df_new
Nb item X Nb item Y Nb specificities item X Nb specificities item Y Size item E Size item F Size item I Size item J characteristic W
0 3 4 5 NaN 7 NaN 10 NaN 1
1 13 9 NaN 8 NaN 0 NaN 11 0
</code></pre>
<p>Output of the df_new dataframe.
<a href="https://i.stack.imgur.com/7EcQA.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7EcQA.jpg" alt="enter image description here" /></a></p>
|
python|pandas
| 0
|
374,802
| 49,098,712
|
States in the tensorflow static rnn
|
<p>I'm trying to work with RNNs using Tensorflow. I use the following function from this <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/recurrent_network.py" rel="nofollow noreferrer">repo</a>:</p>
<pre><code>def RNN(x, weights, biases):
    # Prepare data shape to match `rnn` function requirements
    # Current data input shape: (batch_size, timesteps, n_input)
    # Required shape: 'timesteps' tensors list of shape (batch_size, n_input)

    # Unstack to get a list of 'timesteps' tensors of shape (batch_size, n_input)
    x = tf.unstack(x, timesteps, 1)

    # Define a lstm cell with tensorflow
    lstm_cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)

    # Get lstm cell output
    outputs, states = rnn.static_rnn(lstm_cell, x, dtype=tf.float32)

    # Linear activation, using rnn inner loop last output
    return tf.matmul(outputs[-1], weights['out']) + biases['out']
</code></pre>
<p>I understand that <code>outputs</code> is a list containing intermediate outputs in the unrolled neural network. I can verify that <code>len(outputs)</code> equals <code>timesteps</code>. However, I wonder why <code>len(states)</code> equals <code>2</code>. I think it should contain only the final state of the network.
Could you please help explain?
Thanks.</p>
|
<p>To confirm the discussion in the comments: when constructing a static RNN using <code>BasicLSTMCell</code>, <code>state</code> is a two-tuple of <code>(c, h)</code>, where <code>c</code> is the final cell state and <code>h</code> is the final hidden state. The final hidden state <code>h</code> is in fact equal to the final output in <code>outputs</code>. You can corroborate this by reading the <a href="https://github.com/tensorflow/tensorflow/blob/r1.6/tensorflow/python/ops/rnn_cell_impl.py" rel="nofollow noreferrer">source code</a> (see <code>BasicLSTMCell</code>'s <code>call</code> method).</p>
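<p>A small sketch (assuming the same TF 1.x imports as the linked example; the shapes here are made up) that checks <code>h</code> against <code>outputs[-1]</code>:</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.contrib import rnn

timesteps, batch_size, n_input, num_hidden = 5, 3, 8, 16
x = tf.placeholder(tf.float32, [None, timesteps, n_input])
inputs = tf.unstack(x, timesteps, 1)
cell = rnn.BasicLSTMCell(num_hidden, forget_bias=1.0)
outputs, states = rnn.static_rnn(cell, inputs, dtype=tf.float32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feed = {x: np.random.rand(batch_size, timesteps, n_input)}
    last_output, (c, h) = sess.run([outputs[-1], states], feed)
    print(np.allclose(last_output, h))  # expected: True
</code></pre>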
|
tensorflow|recurrent-neural-network
| 1
|
374,803
| 48,999,887
|
tensorflow next_batch vs custom next_batch?
|
<p>I'm trying to write a function that can get batches of data, similar to tensorflow's next_batch.</p>
<p>next_batch can be seen here:
<a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/master/tensorflow/contrib/learn/python/learn/datasets/mnist.py</a></p>
<p>This is the code that I wrote.</p>
<pre><code>import numpy as np

class Sampler:
    def __init__(self, data):
        self.x, self.y = data
        self.N, = self.y.shape
        self.start = 0
        self.shuffle = np.arange(self.N)
        np.random.shuffle(self.shuffle)
        self.x = self.x[self.shuffle]
        self.y = self.y[self.shuffle]

    def sample(self, s):
        start = self.start
        end = np.minimum(start+s, self.N)
        data = (self.x[start:end], self.y[start:end])
        self.start += s
        if self.start >= self.N - 1:
            self.start = 0
            np.random.shuffle(self.shuffle)
            self.x = self.x[self.shuffle]
            self.y = self.y[self.shuffle]
        return data
</code></pre>
<p>I feel that this is a natural approach, but while I can get 99%+ accuracy with classification using next_batch, I can only get around 50% using my "sample" function.</p>
<p>Could anyone help me understand what's going on?</p>
|
<p>Direct cp from my comment but...</p>
<p>As far as I can tell, your code does almost exactly the same thing as the next_batch function from the mnist example. The only differences are that the DataSet class in the example flattens the input data from (x,y,z,1) into (x,y*z) and also normalizes all the data from [0,255] to [0,1]. Neither of these should affect accuracy immediately, but depending on how you are training they could have an effect.</p>
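<p>For reference, a rough sketch of those two preprocessing steps (the array shape here is an assumption, not taken from your code):</p>
<pre><code>import numpy as np

x = x.reshape(x.shape[0], -1)      # flatten (N, 28, 28, 1) -> (N, 784)
x = x.astype(np.float32) / 255.0   # rescale pixel values from [0, 255] to [0, 1]
</code></pre>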
|
python|tensorflow|tensorflow-datasets
| 0
|
374,804
| 49,068,574
|
Sklearn, Gaussian Process: XA and XB must have the same number of columns
|
<p>I am quite new to python and interested in doing Gaussian regression.
I am under py3.6 and SKlearn 0.19.</p>
<p>I have simple code and I get an error about the dimension of the vectors in cdist called by predict. I understand there's something bad in my input. But I do not see why...</p>
<p>I looked for examples of the Gaussian process regressor, but it does not seem to be the most commonly used tool.</p>
<p>Thanks in advance for your help.</p>
<p>Cheers.</p>
<p>Here is a sample of my code:</p>
<pre><code>import pandas as pd
import numpy as np
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor as gpr
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
....
#X_train are the training samples
X_train= np.column_stack((xc,yc,zc))
print('X_train')
print(X_train.shape)
print(X_train)
</code></pre>
<p>Here is the print of X_train:</p>
<pre><code> X_train (4576, 3)
[[ 0.71958336 -1.12719598 0.47889958]
[ 0.71958336 -1.12719598 0.47889958]
[ 0.71958336 -1.12719598 0.34285071]
...
[ 0.55255508 -1.18817547 -1.63666023]
[ 0.55255508 -1.18817547 -1.70468466]
[ 0.55255508 -1.18817547 -1.77270909]]
</code></pre>
<p>here is the target feature on the training:</p>
<pre><code>print('v1')
print(v1.shape)
print(v1)
</code></pre>
<p>its print</p>
<pre><code>v1
(4576,)
0 10.0
1 14.0
2 13.0
3 19.0
....
4573 39.0
4574 16.0
4575 12.0
</code></pre>
<p>Here is the samples to predict:</p>
<pre><code>x = np.column_stack((xp,
yp,
zp))
print('x')
print(x.shape)
print(x)
</code></pre>
<p>here is the print:</p>
<blockquote>
<pre><code>x
(75, 3)
[[-1.41421356 -1.41421356 -1.22474487]
[-0.70710678 -1.41421356 -1.22474487]
[ 0. -1.41421356 -1.22474487]
[ 0.70710678 -1.41421356 -1.22474487]
.....
[ 0.70710678 -0.70710678 -1.22474487]
[ 1.41421356 -0.70710678 -1.22474487]
[-1.41421356 0. -1.22474487]
[-0.70710678 0. -1.22474487]
[ 0. 0. -1.22474487]
</code></pre>
</blockquote>
<p>Here is the fitting and prediction</p>
<pre><code>v1 = v1.ravel()
#default kernel
kernel = C(1.0, (1e-3, 1e3)) * RBF(10, (1e-2, 1e2))
X_train, v1 = make_regression()
model = gpr(kernel=kernel, n_restarts_optimizer=9)
model.fit(X_train,v1)
#Predict v1
v1_pred = model.predict(x)
</code></pre>
<p>When running I get the following error: </p>
<blockquote>
<pre><code>File "test.py", line 189, in test
    v1_pred = model.predict(x)
  File "/usr/local/lib/python3.6/site-packages/sklearn/gaussian_process/gpr.py", line 315, in predict
    K_trans = self.kernel_(X, self.X_train_)
  File "/usr/local/lib/python3.6/site-packages/sklearn/gaussian_process/kernels.py", line 758, in __call__
    return self.k1(X, Y) * self.k2(X, Y)
  File "/usr/local/lib/python3.6/site-packages/sklearn/gaussian_process/kernels.py", line 1215, in __call__
    metric='sqeuclidean')
  File "/usr/local/lib/python3.6/site-packages/scipy/spatial/distance.py", line 2373, in cdist
    raise ValueError('XA and XB must have the same number of columns ')
ValueError: XA and XB must have the same number of columns (i.e. feature dimension.)
</code></pre>
</blockquote>
|
<p>I had simply copy-pasted some code and did something stupid:</p>
<pre><code>X_train, v1 = make_regression()
</code></pre>
<p>Just had to remove it. </p>
|
python|pandas|numpy|machine-learning|scikit-learn
| 1
|
374,805
| 48,898,406
|
compress list of numbers into unique non overlapping time ranges using python
|
<p>I'm from biology and very new to python and ML. The lab has a blackbox ML model which outputs a sequence like this: </p>
<pre><code>Predictions =
[1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,1,0,1,0,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,1,0]
</code></pre>
<p>each value represents a predicted time frame of duration 0.25seconds.<br>
1 means High.<br>
0 means Not High. </p>
<p>How do I convert these predictions into a [start,stop,label] ?<br>
so that longer runs are grouped. For example, the first 10 ones represent 0 to 10*.25 s, so the first range and label would be </p>
<p>[[0.0,2.5, High]<br>
next there are 13 zeroes ===> start = (2.5), stop = 13*.25 +2.5, label = Not high<br>
thus<br>
[2.5, 5.75, Not-High] </p>
<p>so final list would be something like a list of lists/ranges with unique non overlapping intervals along with a label like : </p>
<pre><code>[[0.0,2.5, High],
[2.5, 5.75, Not-High],
[5.75,6.50, High] ..
</code></pre>
<p>What I tried:<br>
1. Count number of values in Predictions<br>
2. Generate two ranges , one starting at zero and another starting at 0.25<br>
3. merge these two lists into tuples </p>
<pre><code>import numpy as np
len_pred = len(Predictions)
range_1 = np.arange(0,len_pred,0.25)
range_2 = np.arange(0.25,len_pred,0.25)
new_range = zip(range_1,range_2)
</code></pre>
<p>Here I'm able to get the ranges, but missing out on the labels.<br>
This seems like a simple problem but I'm running in circles. </p>
<p>Please advise.
Thanks. </p>
|
<p>You can iterate through the list and create a range when you detect a change. You'll also need to account for the final range when using this method. Might not be super clean but should be effective.</p>
<pre><code>current_time = 0
range_start = 0
current_value = predictions[0]
ranges = []

for p in predictions:
    if p != current_value:
        ranges.append([range_start, current_time, 'high' if current_value == 1 else 'not high'])
        range_start = current_time
        current_value = p
    current_time += .25

ranges.append([range_start, current_time, 'high' if current_value == 1 else 'not high'])
</code></pre>
<p>Updated to fix a few off by one type errors.</p>
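<p>Usage sketch with the list from the question (note the lower-case <code>predictions</code> name assumed by the loop above):</p>
<pre><code>predictions = [1,1,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,1,0,1,0,1,0,1,0,1,1,1,1,1,0,0,0,1,1,1,1,1,1,0]
# ... run the loop above ...
print(ranges[:3])
# [[0, 2.5, 'high'], [2.5, 5.75, 'not high'], [5.75, 6.5, 'high']]
</code></pre>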
|
python|algorithm|python-2.7|numpy
| 4
|
374,806
| 49,120,232
|
Add a datetime column to multiple dataframes in Pandas
|
<p>I'm trying to loop through a list of dataframes and append a datetime column to each. I've tried the following to no avail:</p>
<pre><code>dfs = ['nov22_2017', 'nov29_2017', 'dec06_2017','dec13_2017',
'dec20_2017', 'dec27_2017', 'jan03_2018', 'jan10_2018']
sheets = ['11.22.17', '11.29.17', '12.6.17', '12.13',
'12.20', '12.27', '1.3.18', '1.10.18']
dates = ['2017-11-22', '2017-11-29', '2017-1-06', '2017-12-13',
'2017-12-20', '2017-12-27', '2018-01-03', '2018-01-10']
# create a list of datetimes
datetimes = [pd.to_datetime(date) for date in dates]
# assign each df to a variable
dfs = [nrc_xl.parse(sheet, usecols = 10) for sheet in sheets]
# assign datetime columns
for index, df in enumerate(dfs):
    df['date'] = datetimes[index]
</code></pre>
<p>The for loop doesn't modify the dataframes in the list. How do I accomplish this programmatically without having to create and assign a column for each dataframe?</p>
<p>EDIT: I fixed it.</p>
<pre><code>sheets = ['11.22.17', '11.29.17', '12.6.17', '12.13',
'12.20', '12.27', '1.3.18', '1.10.18']
# assign dfs to variables
[nov22_2017, nov29_2017, dec06_2017, dec13_2017,
dec20_2017, dec27_2017, jan03_2018, jan10_2018] = [nrc_xl.parse(sheet, usecols = 10) for sheet in sheets]
dfs = [nov22_2017, nov29_2017, dec06_2017, dec13_2017,
dec20_2017, dec27_2017, jan03_2018, jan10_2018]
# create a list of datetimes
dates = ['2017-11-22', '2017-11-29', '2017-1-06', '2017-12-13',
'2017-12-20', '2017-12-27', '2018-01-03', '2018-01-10']
datetimes = [pd.to_datetime(date) for date in dates]
# assign datetime columns
for index, df in enumerate(dfs):
    df['date'] = datetimes[index]
</code></pre>
|
<p>You could make this part of your list comprehension:</p>
<pre><code># assign each df to a variable
dfs = [nrc_xl.parse(sheet, usecols = 10).assign(date=datetimes[i]) \
for i, sheet in enumerate(sheets)]
</code></pre>
|
python|pandas|dataframe
| 0
|
374,807
| 49,337,960
|
SKLearn NMF Vs Custom NMF
|
<p>I am trying to build a recommendation system using Non-negative matrix factorization. Using <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html" rel="noreferrer">scikit-learn NMF</a> as the model, I fit my data, resulting in a certain loss(i.e., reconstruction error). Then I generate recommendation for new data using the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html#sklearn.decomposition.NMF.inverse_transform" rel="noreferrer">inverse_transform</a> method. </p>
<p>Now I do the same using another model I built in TensorFlow. The reconstruction error after training is close to that obtained using sklearn's approach earlier.
However, neither the latent factors nor the final recommendations are similar to one another.</p>
<p>One difference between the 2 approaches that I am aware of is:
In sklearn, I am using the Coordinate Descent solver whereas in TensorFlow, I am using the AdamOptimizer which is based on Gradient Descent.
Everything else seems to be the same:</p>
<ol>
<li>Loss function used is the Frobenius Norm</li>
<li>No regularization in both cases</li>
<li>Tested on the same data using same number of latent dimensions</li>
</ol>
<p>Relevant code that I am using:</p>
<p><strong>1. scikit-learn approach:</strong></p>
<pre><code>model = NMF(alpha=0.0, init='random', l1_ratio=0.0, max_iter=200,
n_components=2, random_state=0, shuffle=False, solver='cd', tol=0.0001,
verbose=0)
model.fit(data)
result = model.inverse_transform(model.transform(data))
</code></pre>
<p><strong>2. TensorFlow approach:</strong></p>
<pre><code>w = tf.get_variable(initializer=tf.abs(tf.random_normal((data.shape[0], 2))),
                    constraint=lambda p: tf.maximum(0., p))
h = tf.get_variable(initializer=tf.abs(tf.random_normal((2, data.shape[1]))),
                    constraint=lambda p: tf.maximum(0., p))
loss = tf.sqrt(tf.reduce_sum(tf.squared_difference(x, tf.matmul(w, h))))
</code></pre>
<p>My question is that if the recommendations generated by these 2 approaches do not match, then how can I determine which are the right ones?
Based on my use case, sklearn's NMF is giving me good results, but not the TensorFlow implementation. How can I achieve the same using my custom implementation?</p>
|
<p>The choice of the optimizer has a big impact on the quality of the training. Some very simple models (I'm thinking of GloVe for example) do work with some optimizer and not at all with some others. Then, to answer your questions:</p>
<ol>
<li><blockquote>
<p>how can I determine which are the right ones ?</p>
</blockquote></li>
</ol>
<p>The evaluation is as important as the design of your model, and it is just as hard: you can try these 2 models on several available datasets and use some metrics to score them. You could also use A/B testing on a real application to estimate the relevance of your recommendations.</p>
<ol start="2">
<li><blockquote>
<p>How can I achieve the same using my custom implementation ?</p>
</blockquote></li>
</ol>
<p>First, try to find a coordinate descent optimizer for <em>Tensorflow</em> and make sure every step you implemented is exactly the same as the one in <em>scikit-learn</em>. Then, if you can't reproduce the same results, try different solutions (why don't you try a simple gradient descent optimizer first?) and take advantage of the great modularity that <em>Tensorflow</em> offers! </p>
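<p>As a rough, non-authoritative sketch of that last suggestion, reusing the variable names from the question (it assumes <code>x</code> is a placeholder fed with <code>data</code> and uses the TF 1.x API):</p>
<pre><code>optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
train_op = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):             # number of iterations is arbitrary here
        sess.run(train_op, feed_dict={x: data})
    w_val, h_val = sess.run([w, h])      # learned non-negative factors
</code></pre>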
<p>Finally, if the recommendations provided by your implementation are that bad, I suspect there is an error in it. Try to compare with some <a href="https://nipunbatra.github.io/blog/2017/nnmf-tensorflow.html" rel="nofollow noreferrer">existing code</a>.</p>
|
python|tensorflow|scikit-learn|recommendation-engine|nmf
| 2
|
374,808
| 48,929,508
|
numpy pad a sequence instead of constant values
|
<p>I am trying to pad a numpy array with a sequence <code>[0, 1]</code> along each row. So for example if I have an array as:</p>
<pre><code>x = np.random.rand(2, 4)
array([[0.51352468, 0.4274193 , 0.11244252, 0.56787658],
[0.37855923, 0.80976327, 0.0290558 , 0.87585656]])
</code></pre>
<p>After the padding operation, it becomes:</p>
<pre><code>array([[0.51352468, 0.4274193 , 0.11244252, 0.56787658],
[0.37855923, 0.80976327, 0.0290558 , 0.87585656],
[0. , 0. , 0. , 0. ],
[1. , 1. , 1. , 1. ]])
</code></pre>
<p>Currently, the way I am doing this is:</p>
<pre><code>padded = np.asarray(x)
padded = np.pad(padded, [(0, 4 - padded.shape[0]), (0, 0)], 'constant')
padded[-1, :] = 1.0
</code></pre>
<p>This seems a bit cumbersome as I pad with zeros and then set the last row to 1. I was wondering if there is a way to do this with just one <code>numpy.pad</code> call?</p>
<p><strong>EDIT</strong></p>
<p>As you can see from the code above, the function pads the input to 4 rows (the input array has either 2 or 3 rows, i.e. len(x) == 2 or 3). So, if the input has 2 rows, it will add two rows of zeros and then set the last one to 1; if the input has 3 rows, it will add one row of zeros and then overwrite it with a row of ones.</p>
|
<p>Perhaps it would be simpler (to read) if you simply allocate a 2D array of zeros,
assign ones to the last row, and copy <code>x</code> into the padded array:</p>
<pre><code>import numpy as np

def pad(x):
    nrows, ncols = x.shape
    padded = np.zeros((4, ncols))
    padded[-1, :] = 1
    padded[:nrows, :] = x
    return padded

nrows, ncols = np.random.randint(2, 4), 4
x = np.random.rand(nrows, ncols)
padded = pad(x)
</code></pre>
<p>yields a padded array such as </p>
<pre><code>array([[ 0.38746512, 0.23166218, 0.97459752, 0.37565333],
[ 0.05774882, 0.44061104, 0.06661526, 0.26714634],
[ 0.00805322, 0.30201519, 0.71373347, 0.08288743],
[ 1. , 1. , 1. , 1. ]])
</code></pre>
<p>or </p>
<pre><code>array([[ 0.68343436, 0.6108459 , 0.84325679, 0.10912022],
[ 0.547983 , 0.7543816 , 0.02411474, 0.02711809],
[ 0. , 0. , 0. , 0. ],
[ 1. , 1. , 1. , 1. ]])
</code></pre>
<p>depending on the number of rows <code>x</code> has (which this assumes is <= 4).</p>
|
python|numpy
| 2
|
374,809
| 49,090,643
|
How can I use numpy create list out of another list with modified elements
|
<p>I am not new at all to Python programming, but I am completely new to the Numpy module. I need to use this module because it is very fast and efficient.</p>
<p>Say I have an array called <code>noise</code> which is defined as follows:</p>
<pre><code>noise = [[uniform(0, 1) for i in range(size)] for j in range(size)]
</code></pre>
<p>In numpy terms, it is defined, I believe, like so:</p>
<pre><code>noise = np.random.uniform(0, 1, (size, size))
</code></pre>
<p>Now say I want to generate a new array which takes the noise array and replaces every element <code>noise[i][j]</code> with the value <code>function(i, j)</code>.
Using python's built-in list comprehension, I would simply say:</p>
<pre><code>modified_noise = [[function(i, j) for i in range(size)] for j in range(size)]
</code></pre>
<p>My question is: how can I do that using the numpy module?</p>
|
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfunction.html" rel="nofollow noreferrer"><code>np.fromfunction</code></a> for this:</p>
<pre><code>modified_noise = np.fromfunction(lambda i, j: function(i, j), (size, size), dtype=float)
</code></pre>
<p>This constructs an array by executing a function over each coordinate.</p>
<p>Related: <a href="https://stackoverflow.com/questions/49059667/how-can-i-use-a-range-inside-numpy-fromfunction">How can I use a range inside numpy.fromfunction?</a></p>
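<p>A small self-contained sketch with a hypothetical <code>function</code> (not the one from the question). Note that <code>np.fromfunction</code> calls the function once with whole index arrays, so it must accept arrays (wrap it with <code>np.vectorize</code> otherwise):</p>
<pre><code>import numpy as np

size = 4

def function(i, j):
    return i * 10 + j   # works elementwise on index arrays

modified_noise = np.fromfunction(function, (size, size), dtype=float)
print(modified_noise.shape)  # (4, 4)
</code></pre>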
|
python|arrays|performance|numpy|list-comprehension
| 2
|
374,810
| 49,054,126
|
Creating a 3D numpy array of from a prexisting iterable that has the appropriate shape
|
<p>I have an array of M samples
and each sample has a shape of: (11, 64)
So theoretically my main array should have a shape of (M, 11, 64)
but all I get is (M,) as the shape.</p>
<p>I tried np.array(main_array) but that doesn't do anything.
I was wondering if there was any way to make numpy realize the dimensionality of the data that it is using. </p>
<p>The way I get the data is by using pandas in the following fashion:</p>
<pre><code>main_array = data['source_info'].apply(func_to_create_2d_array_for_each_row).values
</code></pre>
|
<p><code>np.array</code> won't 'flatten' an object dtype array. You have to use some sort of concatenate.</p>
<p>Make an array of arrays. Notice that I have to play some games to get around <code>np.array's</code> preference to create a 3d array:</p>
<pre><code>In [5]: arr = np.empty((3,), dtype=object)
In [6]: arr
Out[6]: array([None, None, None], dtype=object)
In [7]: arr[:] = [np.zeros((2,3)) for _ in range(3)]
In [8]: arr
Out[8]:
array([array([[0., 0., 0.],
[0., 0., 0.]]),
array([[0., 0., 0.],
[0., 0., 0.]]),
array([[0., 0., 0.],
[0., 0., 0.]])], dtype=object)
</code></pre>
<p>Another <code>np.array</code> call doesn't do anything</p>
<pre><code>In [9]: np.array(arr)
Out[9]:
array([array([[0., 0., 0.],
[0., 0., 0.]]),
array([[0., 0., 0.],
[0., 0., 0.]]),
array([[0., 0., 0.],
[0., 0., 0.]])], dtype=object)
</code></pre>
<p><code>stack</code> treats the <code>arr</code> as a list, and joins the elements on a new axis. <code>concatenate</code> joins them on an existing axis.</p>
<pre><code>In [10]: np.stack(arr)
Out[10]:
array([[[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[0., 0., 0.]]])
In [11]: np.concatenate(arr, axis=0)
Out[11]:
array([[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]])
</code></pre>
<p>If one or more of elements of <code>arr</code> differed in shape, then this would not work.</p>
<p><code>np.array((np.zeros((2,3)), np.zeros((3,2))))</code> creates an object array effortlessly - and possibly is a mistake. It cannot be <code>stacked</code>.</p>
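<p>Applied to the code in the question, a sketch (it assumes every per-row array really has the same shape, e.g. (11, 64)):</p>
<pre><code>arrays = data['source_info'].apply(func_to_create_2d_array_for_each_row).values
main_array = np.stack(arrays)   # shape (M, 11, 64)
</code></pre>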
|
python|arrays|pandas|numpy|multidimensional-array
| 1
|
374,811
| 58,692,359
|
Training a model with single output on multiple losses keras
|
<p>I am building an image segmentation model using keras and I want to train my model on multiple loss functions. I have seen <a href="https://gaborvecsei.wordpress.com/2018/10/15/implement-loss-functions-inside-keras-models/" rel="nofollow noreferrer">this</a> link, but I am looking for a simpler and more straightforward solution for this situation as my loss functions are quite complex. Can someone tell me how to build a model with a single output and multiple losses in keras? </p>
|
<p>This is just an example from <a href="https://keras.io/api/losses/#creating-custom-losses" rel="nofollow noreferrer">here</a>. You could play around with it.</p>
<pre><code>import tensorflow as tf

def custom_losses(y_true, y_pred):
    alpha = 0.6
    squared_difference = tf.square(y_true - y_pred)
    Huber = tf.keras.losses.huber(y_true, y_pred)
    return tf.reduce_mean(squared_difference, axis=-1) + (alpha * Huber)

model.compile(optimizer='adam', loss=custom_losses, metrics=['MeanSquaredError'])
</code></pre>
|
python|tensorflow|keras|deep-learning|loss-function
| 0
|
374,812
| 58,789,924
|
Cant assign value to cell in multiindex dataframe (assigning to copy / slice of df?)
|
<p>I am trying to assign a value (mean of values in another column) to a cell in a multi-index Pandas dataframe over which I iterate to calculate means over a moving window in a different column. But, when I try to assign the value it doesn't change. </p>
<p>I am not used to working with multi-indexes and have solved several other problems but this one has me stumped for now...</p>
<p>Toy code that reproduces the problem:</p>
<pre class="lang-py prettyprint-override"><code>tuples = [
('AFG', 1963), ('AFG', 1964), ('AFG', 1965), ('AFG', 1966), ('AFG', 1967), ('AFG', 1968),
('BRA', 1963), ('BRA', 1964), ('BRA', 1965), ('BRA', 1966), ('BRA', 1967), ('BRA', 1968)
]
index = pd.MultiIndex.from_tuples(tuples)
values = [[12, None], [0, None],
[12, None], [0, 4],
[12, 5], [0, 4],
[12, 2], [0, 4],
[12, 2], [0, 4],
[1, 4], [7, 1]]
df = pd.DataFrame(values, columns=['Oil', 'Pop'], index=index)
lag =-2
lead=0
indicator = 'Pop'
new_indicator = 'Mean_pop'
df[new_indicator] = np.nan
df
</code></pre>
<p>Gives:</p>
<pre class="lang-py prettyprint-override"><code> Oil Pop Mean_pop
AFG 1963 12 NaN NaN
1964 0 NaN NaN
1965 12 NaN NaN
1966 0 4.0 NaN
1967 12 5.0 NaN
1968 0 4.0 NaN
BRA 1963 12 2.0 NaN
1964 0 4.0 NaN
1965 12 2.0 NaN
1966 0 4.0 NaN
1967 1 4.0 NaN
1968 7 1.0 NaN
</code></pre>
<p>Then to iterate over the df:</p>
<pre class="lang-py prettyprint-override"><code>for country, country_df in df.groupby(level=0):
oldestyear = country_df[indicator].first_valid_index()[1]
latestyear = country_df[indicator].last_valid_index()[1]
for t in range(oldestyear, latestyear+1):
print (country, oldestyear, latestyear, t)
print (" For", country, ", calculate mean over ", t+lag, "to", t+lead,
"and add to row for year", t)
dftt = country_df.loc[(country, t+lag):(country, t+lead)]
print(dftt[indicator])
mean = dftt[indicator].mean(axis=0)
print("mean for ", indicator, "in", country, "during", t+lag, "to", t+lead, "is", mean)
df.loc[country, t][new_indicator] = mean
</code></pre>
<p>Diagnostic output not pasted, but df looks the same after iterating over it and I get the following warning on some iterations:</p>
<pre class="lang-py prettyprint-override"><code>A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
if sys.path[0] == '':
</code></pre>
<p>Any pointers will be greatly appreciated.</p>
|
<p>I think it is as easy as setting the last line to:</p>
<pre><code>df.loc[(country, t), new_indicator] = mean
</code></pre>
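<p>Why this works (a short note, not part of the original answer): <code>df.loc[country, t][new_indicator] = mean</code> is chained indexing, so the assignment can land on a temporary copy (hence the warning), whereas a single <code>.loc</code> call with the full key writes into <code>df</code> itself, e.g.:</p>
<pre><code>df.loc[('AFG', 1964), 'Mean_pop'] = 4.5
</code></pre>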
|
python-3.x|pandas|multi-index
| 2
|
374,813
| 58,702,476
|
Tensorflow saved_model.load issue
|
<p>I'm trying to just be able to load a tensorflow model from a checkpoint, but for some reason I'm getting the error:
"The passed save_path is not a valid checkpoint: /path/variables/variables"</p>
<p>I noticed that it adds an extra "variables" string to the path, for some reason. Is that correct? My directory file structure contains the variables.data-- and variables.index files in the /path/variables folder.</p>
<p>The code I'm using to load the model is:</p>
<pre><code>tf.saved_model.loader.load(current_session, [tf.saved_model.tag_constants.SERVING], path)
</code></pre>
<p>For saving it, I'm doing:</p>
<pre><code>self.builder = tf.saved_model.builder.SavedModelBuilder(path)
self.builder.add_meta_graph_and_variables(self.sess, [tf.saved_model.tag_constants.SERVING],
                                           signature_def_map={'prediction': self.prediction.signature,})
self.builder.save()
</code></pre>
|
<blockquote>
<p>I noticed that it adds an extra "variables" string to the path, for
some reason. Is that correct?</p>
</blockquote>
<p>Yes, it is correct. Please refer to the code below: the model is saved at <code>./savedmodel/</code>, whereas it is loaded from <code>./savedmodel/variables/variables</code>.</p>
<p>I am able to execute code to save and restore using <code>tf.saved_model</code>, and it works as expected in both Google Colab and Anaconda (Jupyter Notebook).</p>
<p>For the benefit of the community, here are the save and load steps using <code>tf.saved_model</code> in Google Colab.</p>
<p>Save Model:</p>
<pre><code>%tensorflow_version 1.x
import tensorflow as tf

# define the tensorflow network and do some trains
x = tf.placeholder("float", name="x")
w = tf.Variable(2.0, name="w")
b = tf.Variable(0.0, name="bias")
h = tf.multiply(x, w)
y = tf.add(h, b, name="y")
sess = tf.Session()
sess.run(tf.global_variables_initializer())

# save the model
export_path = './savedmodel'
builder = tf.saved_model.builder.SavedModelBuilder(export_path)

tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
tensor_info_y = tf.saved_model.utils.build_tensor_info(y)

prediction_signature = (
    tf.saved_model.signature_def_utils.build_signature_def(
        inputs={'x_input': tensor_info_x},
        outputs={'y_output': tensor_info_y},
        method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))

builder.add_meta_graph_and_variables(
    sess, [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
            prediction_signature
    },
)

builder.save()
</code></pre>
<p>Output:</p>
<pre><code>TensorFlow 1.x selected.
WARNING:tensorflow:From <ipython-input-1-1c4a4b6eef10>:19: build_tensor_info (from tensorflow.python.saved_model.utils_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.utils.build_tensor_info or tf.compat.v1.saved_model.build_tensor_info.
INFO:tensorflow:No assets to save.
INFO:tensorflow:No assets to write.
INFO:tensorflow:SavedModel written to: ./savedmodel/saved_model.pb
b'./savedmodel/saved_model.pb'
</code></pre>
<p>Load Model: </p>
<pre><code>import tensorflow as tf

sess = tf.Session()
signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
input_key = 'x_input'
output_key = 'y_output'

export_path = './savedmodel'
meta_graph_def = tf.saved_model.loader.load(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    export_path)
signature = meta_graph_def.signature_def

x_tensor_name = signature[signature_key].inputs[input_key].name
y_tensor_name = signature[signature_key].outputs[output_key].name

x = sess.graph.get_tensor_by_name(x_tensor_name)
y = sess.graph.get_tensor_by_name(y_tensor_name)

y_out = sess.run(y, {x: 3.0})
print(y_out)
</code></pre>
<p>Output:</p>
<pre><code>TensorFlow 1.x selected.
WARNING:tensorflow:From <ipython-input-1-097ac1a9f3ad>:14: load (from tensorflow.python.saved_model.loader_impl) is deprecated and will be removed in a future version.
Instructions for updating:
This function will only be available through the v1 compatibility library as tf.compat.v1.saved_model.loader.load or tf.compat.v1.saved_model.load. There will be a new function for importing SavedModels in Tensorflow 2.0.
INFO:tensorflow:Restoring parameters from ./savedmodel/variables/variables
6.0
</code></pre>
<p>Please refer to the <code>tf.compat.v1</code> versions of <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/saved_model/Builder" rel="nofollow noreferrer">Save</a> and
<a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/saved_model/loader" rel="nofollow noreferrer">load</a> for more details.</p>
|
python|tensorflow
| 0
|
374,814
| 58,864,468
|
Python pandas modify dataframe according to date and column adding hours
|
<p>I have the following dataframe:</p>
<pre><code>;h0;h1;h2;h3;h4;h5;h6;h7;h8;h9;h10;h11;h12;h13;h14;h15;h16;h17;h18;h19;h20;h21;h22;h23
2017-01-01;52.72248155184351;49.2949899678983;46.57492391198069;44.087373768731766;44.14801243124734;42.17606224526609;43.18529986793594;39.58391124876044;41.63499969987035;41.40594457169249;47.58107920806581;46.56963630932529;47.377935483897694;37.99479190229543;38.53347417483357;40.62674178535282;45.81503347748674;49.0590694393733;52.73183568074295;54.37213882189341;54.737087166843295;50.224872755157314;47.874441844531056;47.8848916244788
2017-01-02;49.08874087825248;44.998912615866075;45.92457207636786;42.38001388673675;41.66922093408655;43.02027406525752;49.82151473221541;53.23401784350719;58.33805556091773;56.197239473200206;55.7686948361035;57.03099874898539;55.445563603040405;54.929102019056195;55.85170734639889;57.98929007227575;56.65821961018764;61.01309728212006;63.63384537162659;61.730431501017684;54.40180394585544;50.27375006416599;51.229656340500156;51.22066846069472
2017-01-03;50.07885876956572;47.00180020415448;44.47243045246001;42.62192562660052;40.15465704760352;43.48422695796396;50.01631022884173;54.8674584250141;60.434849010428685;61.47694796693493;60.766557330286844;59.12019178422993;53.97447369962696;51.85242030255539;53.604945764469065;56.48188852869667;59.12301823257856;72.05688032286155;74.61342126987793;70.76845988290785;64.13311592022278;58.7237387203283;55.2422389373486;52.63648285910918
</code></pre>
<p>As you can see, the rows are the days and the columns are the hours.
I would like to create a new dataframe with only two columns:
the first with the days (including the hour) and the second with the value. Something like the following:</p>
<pre><code>2017-01-01 00:00:00 ; 52.72248
2017-01-01 01:00:00 ; 49.2949899678983
...
</code></pre>
<p>I could create a new dataframe and use a loop to fill it. This is what I do now:</p>
<pre><code>icount = 0
for idd in range(0,365):
    for ih in range(0,24):
        df.loc[df.index.values[icount]] = ecodf.iloc[idd,ih]
        icount = icount + 1
<p>What do you think?</p>
<p>Thanks</p>
|
<p>Turn the column names into a new column, convert them to hours and use pd.to_datetime:</p>
<pre><code>s = df.stack()

pd.concat([
    pd.to_datetime(s.reset_index()
                    .replace({'level_1': r'h(\d+)'}, {'level_1': '\\1:00'}, regex=True)
                    [['level_0','level_1']].apply(' '.join, axis=1)),
    s.reset_index(drop=True)],
    axis=1, sort=False)
0 1
0 2017-01-01 00:00:00 52.722482
1 2017-01-01 01:00:00 49.294990
2 2017-01-01 02:00:00 46.574924
3 2017-01-01 03:00:00 44.087374
4 2017-01-01 04:00:00 44.148012
.. ... ...
67 2017-01-03 19:00:00 70.768460
68 2017-01-03 20:00:00 64.133116
69 2017-01-03 21:00:00 58.723739
70 2017-01-03 22:00:00 55.242239
71 2017-01-03 23:00:00 52.636483
[72 rows x 2 columns]
>>>
</code></pre>
|
python|pandas|sorting|date|dataframe
| 0
|
374,815
| 58,923,848
|
TensorFlow Root being imported and many methods not registering
|
<p>I've spent the last few hours trying to install TensorFlow (non-GPU) and it still not working. I'm using Visual Studio 2019. I have used an admin CMD to <code>pip install tensorflow</code>, and it was successful (or seems so). I can see in <code>%appdata%\..\Local\Programs\Python\Python37\Lib\site-packages\</code> that the folders <code>tensorboard</code>, <code>tensorboard-2.0.1.dist-info</code>, <code>tensorflow</code>, <code>tensorflow_core</code>, <code>tensorflow_estimator</code>, <code>tensorflow_estimator-2.0.1.dist-info</code>, and <code>tensorflow-2.0.0.dist-info</code> all exist. I ran <code>pip</code> multiple times to make sure that everything for TensorFlow was installed and up-to-date. I also followed the instructions in the installation guide for TensorFlow to verify the installation with <code>python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"</code>, which didn't give an error (also did not output anything, but I'm assuming it's not supposed to).</p>
<p>So I think my actual installation is fine, but perhaps this is a Visual Studio issue? Here's my Python code:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
a = tf.Variable(1, name="a")
b = tf.Variable(2, name="b")
f = a + b
init = tf.global_variables_initializer()
with tf.Session() as s:
    init.run()
    print(f.eval())
</code></pre>
<p>As soon as I press "Start" to start the program, Visual Studio says "Exception Unhandled" and explains "module 'tensorflow' has no attribute 'global_variables_initializer'". The Python Console window then gives the following error:</p>
<pre><code> Traceback (most recent call last):
File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\ptvsd_launcher.py", line 119, in <module>
vspd.debug(filename, port_num, debug_id, debug_options, run_as)
File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\debugger.py", line 39, in debug
run()
File "c:\program files (x86)\microsoft visual studio\2019\community\common7\ide\extensions\microsoft\python\core\Packages\ptvsd\__main__.py", line 316, in run_file
runpy.run_path(target, run_name='__main__')
File "C:\Users\[ME]\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "C:\Users\[ME]\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "C:\Users\[ME]\AppData\Local\Programs\Python\Python37\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\[ME]\Documents\Programming\Python\Tutorials\Simplest_Tensorflow_Application.py", line 8, in <module>
init = tf.global_variables_initializer()
AttributeError: module 'tensorflow' has no attribute 'global_variables_initializer'
</code></pre>
<p>When I hover my mouse over "<code>tensorflow</code>" in <code>import tensorflow as tf</code>, VS says "TensorFlow root package". I'm not sure if that's what's supposed to be there or if the root package is something different.</p>
<p>If anyone has any suggestions on this, please let me know. I've tried for a while to get this to work and it's become rather frustrating. If anyone needs to know, I'm running a laptop with an SSD, 8GB DDR3 RAM, an Intel Core i7 3540M (Ivy Bridge) CPU, an integrated graphics card Intel HD Graphics 4000, and a dedicated graphics card NVIDIA NVS 5200M.</p>
<p>Even more information:
I had tried to install TensorFlow some time ago and had accidentally downloaded <code>tensorflow</code> and <code>tensorflow-gpu</code>. I have manually uninstalled them both through Visual Studio (which seemed to have removed them from the actual Python directory, not just in a virtual environment) and reinstalled just <code>tensorflow</code>.</p>
|
<p>I deleted and remade the VS solution. It seems Visual Studio was not updating its packages or something. </p>
|
python|python-3.x|visual-studio|tensorflow|visual-studio-2019
| 0
|
374,816
| 59,016,748
|
Append a dataframe with a column of another dataframe and a constant with Python
|
<p>Let's take these two dataframes :</p>
<pre><code>df1 = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'))
df1
A B
0 1 2
1 3 4
df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('CD'))
df2
C D
0 5 6
1 7 8
</code></pre>
<p>I would like to add column C of df2 to column A of df1, and to put 9 in column B. To sum up, I would like to have :</p>
<pre><code>df1
A B
0 1 2
1 3 4
2 5 9
3 7 9
</code></pre>
<p>I tried numerous things with the append function but didn't succeed in finding the right code. Could you please help me?</p>
|
<pre><code>df1.append(df2.rename(columns={'C':'A'}).drop(columns='D'), ignore_index=True) \
.fillna(9).astype(int)
A B
0 1 2
1 3 4
2 5 9
3 7 9
</code></pre>
|
python|pandas|dataframe|append
| 1
|
374,817
| 58,615,611
|
How do I use my own Hand-Drawn Image in TensorFlow Number Recognition
|
<p>I have some basic Python code to create a very basic neural network that classifies hand-drawn numbers from the MNIST dataset.</p>
<p>The network is working and I would like to make a prediction against a hand drawn image that is not part of the MNIST dataset.</p>
<p><strong>Here is my code:</strong></p>
<pre><code>import tensorflow as tf
mnist = tf.keras.datasets.mnist # 28x28 images of handwritten digits (0-9)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = tf.keras.utils.normalize(x_train, axis=1)
x_test = tf.keras.utils.normalize(x_test, axis=1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(10, activation=tf.nn.softmax))
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3)
val_loss, val_acc = model.evaluate(x_test, y_test)
print(val_loss, val_acc)
import matplotlib.pyplot as plt
</code></pre>
<p><strong>Below is where I can make predictions. I would like to change the code so that I can predict against my own hand-drawn image (labeled "test_image.jpg"):</strong></p>
<pre><code>predictions = model.predict([x_test])
import numpy as np
print(np.argmax(predictions[0]))
</code></pre>
<p>Any ideas would be very helpful!</p>
|
<p>Since your model is trained on black and white images, you only have one channel and you need to convert your image to greyscale:</p>
<pre><code>import numpy as np
import cv2
img = cv2.imread('test_image.jpg')
img = cv2.resize(img, (28,28))
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = np.reshape(img, [1,28,28])
predictions = model.predict(img)
print(np.argmax(predictions[0]))
</code></pre>
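<p>Depending on how the drawing was made, two more preprocessing steps may be needed before the <code>model.predict</code> call (these are assumptions on my part, not part of the original answer): MNIST digits are light strokes on a dark background, and the training data above was normalized.</p>
<pre><code>img = 255 - img    # invert if your drawing is dark-on-light
img = img / 255.0  # bring pixel values into a range comparable to the training data
</code></pre>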
|
python|tensorflow|machine-learning|keras|deep-learning
| 1
|
374,818
| 58,847,405
|
Drop nan rows unless string value in separate column - Pandas
|
<p>I want to drop rows containing <code>NaN</code> values except if a separate column contains a specific string. Using the <code>df</code> below, I want to drop rows if <code>NaN</code> in <code>Code2, Code3</code> unless the string A is in <code>Code1</code>.</p>
<pre><code>df = pd.DataFrame({
    'Code1' : ['A','A','B','B','C','C'],
    'Code2' : ['B',np.nan,'A','B',np.nan,'B'],
    'Code3' : ['C',np.nan,'C','C',np.nan,'A'],
})

def dropna(df, col):
    if col == np.nan:
        df = df.dropna()
    return df

df = dropna(df, df['Code2'])
</code></pre>
<p>Intended Output:</p>
<pre><code> Code1 Code2 Code3
0 A B C
1 A NaN NaN
2 B A C
3 B B C
4 C B A
</code></pre>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.notna.html" rel="nofollow noreferrer"><code>DataFrame.notna</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> to perform <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>new_df=df[df.Code1.eq('A')|df.notna().all(axis=1)]
print(new_df)
Code1 Code2 Code3
0 A B C
1 A NaN NaN
2 B A C
3 B B C
5 C B A
</code></pre>
|
python|pandas
| 3
|
374,819
| 58,644,850
|
How to sum multiple values in a dataframe column, if they are corresponding to 1 value in an other column
|
<p>I have a data frame like this:</p>
<pre><code>Code Group Name Number
ABC Group_1_ABC Mike 40
Amber 60
Group_2_ABC Rachel 90
XYZ Group_1_XYZ Bob 30
Peter 75
Nikki 55
Group_2_XYZ Julia 23
Ross 80
LMN Group_1_LMN Paul 95
. . . .
. . . .
</code></pre>
<p>I have created this data frame by grouping by code, group, name and summing the number.</p>
<p>Now I want to calculate the percentage of each name for a particular code. For that I want to sum all the numbers that are part of one code. I was doing this to calculate the percentage:</p>
<pre><code>df['Percentage']= (df['Number']/df['??'])*100
</code></pre>
<p>Now for the total sum part for each group, I can't figure out how to calculate it. I want the total sum for each code category, in order to calculate the percentage. </p>
<p>So for example, for Code ABC the total should be 40+60+90=190. Each number for a user in ABC would then be divided by this 190 to calculate their percentage for their respective code category. So technically the columns Group and Name don't have any role in calculating the total sum for each code category.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> by first level or by level name <code>Code</code>:</p>
<pre><code>df['Percentage']= (df['Number']/df.groupby(level=0)['Number'].transform('sum'))*100
</code></pre>
<hr>
<pre><code>df['Percentage']= (df['Number']/df.groupby(level=['Code'])['Number'].transform('sum'))*100
</code></pre>
<p>Or in last pandas versions is not necessary specified level parameter:</p>
<pre><code>df['Percentage']= (df['Number']/df.groupby('Code')['Number'].transform('sum'))*100
</code></pre>
<hr>
<pre><code>print (df)
Number Percentage
Code Group Name
ABC Group_1_ABC Mike 40 21.052632
Amber 60 31.578947
Group_2_ABC Rachel 90 47.368421
XYZ Group_1_XYZ Bob 30 11.406844
Peter 75 28.517110
Nikki 55 20.912548
Group_2_XYZ Julia 23 8.745247
Ross 80 30.418251
LMN Group_1_LMN Paul 95 100.000000
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (df.groupby(level=0)['Number'].transform('sum'))
Code Group Name
ABC Group_1_ABC Mike 190
Amber 190
Group_2_ABC Rachel 190
XYZ Group_1_XYZ Bob 263
Peter 263
Nikki 263
Group_2_XYZ Julia 263
Ross 263
LMN Group_1_LMN Paul 95
Name: Number, dtype: int64
</code></pre>
|
python|pandas|sum|multiple-columns
| 1
|
374,820
| 58,905,628
|
heatmap hashtag and location in python pandas dataframe
|
<p>I have Pandas Dataframe as below</p>
<p><a href="https://i.stack.imgur.com/VE9yx.png" rel="nofollow noreferrer">newdf[['name_left','text']]</a></p>
<p>From the text column I would like to extract every hashtag and create a heatmap
with name_left on the X axis and the extracted hashtags on the Y axis.</p>
<p>I can count each hashtag using the code below:</p>
<pre><code>newdf.text.str.extractall(r'(#\w+)').reset_index(level=0).drop_duplicates()[0].value_counts()
</code></pre>
<p>Unfortunately I'm struggling to add name_left and later create a heatmap to see correlations.</p>
|
<p>I think what you want is this</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'name_left': ['Canada', 'Peru'],
                   'text': ['asdf #broccoli sadfsd #milk', 'sdfsd #king bbas #toast']})

df = df.groupby(['name_left']).apply(
    lambda x: x.text.str.extractall(r'(#\w+)').reset_index(level=0).drop_duplicates()[0].value_counts())
print(df)
</code></pre>
<pre><code>Canada #milk 1
#broccoli 1
Peru #king 1
#toast 1
Name: 0, dtype: int64
</code></pre>
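<p>From there, one possible way to get the heatmap described in the question (a sketch, assuming seaborn and matplotlib are available) is to pivot the counts into a hashtag-by-name matrix and plot it:</p>
<pre><code>import seaborn as sns
import matplotlib.pyplot as plt

matrix = df.unstack(level=0).fillna(0)  # hashtags as rows, name_left as columns
sns.heatmap(matrix, annot=True)
plt.show()
</code></pre>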
|
python|pandas|heatmap
| 0
|
374,821
| 58,763,795
|
Setting subset of a pandas DataFrame by a DataFrame
|
<p>I feel like this question has been asked a million times before, but I just can't seem to get it to work or find an SO post answering my question.</p>
<p>So I am selecting a subset of a pandas DataFrame and want to change these values individually.</p>
<p>I am subselecting my DataFrame like this:</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df[key].isnull(), [keys]]
</code></pre>
<p>which works perfectly. If I try and set all values to the same value such as</p>
<pre class="lang-py prettyprint-override"><code>df.loc[df[key].isnull(), [keys]] = 5
</code></pre>
<p>it works as well. But if I try and set it to a DataFrame it does not, however no error is produced either.</p>
<p>So for example I have a DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>data = [['Alex',10,0,0,2],['Bob',12,0,0,1],['Clarke',13,0,0,4],['Dennis',64,2],['Jennifer',56,1],['Tom',95,5],['Ellen',42,2],['Heather',31,3]]
df1 = pd.DataFrame(data,columns=['Name','Age','Amount_of_cars','cars_per_year','some_other_value'])
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.0 2.0
1 Bob 12 0 0.0 1.0
2 Clarke 13 0 0.0 4.0
3 Dennis 64 2 NaN NaN
4 Jennifer 56 1 NaN NaN
5 Tom 95 5 NaN NaN
6 Ellen 42 2 NaN NaN
7 Heather 31 3 NaN NaN
</code></pre>
<p>and a second DataFrame:</p>
<pre class="lang-py prettyprint-override"><code>data = [[2/64,5],[1/56,1],[5/95,7],[2/42,5],[3/31,7]]
df2 = pd.DataFrame(data,columns=['cars_per_year','some_other_value'])
cars_per_year some_other_value
0 0.031250 5
1 0.017857 1
2 0.052632 7
3 0.047619 5
4 0.096774 7
</code></pre>
<p>and I would like to replace those <code>nans</code> with the second DataFrame</p>
<pre class="lang-py prettyprint-override"><code>df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2
</code></pre>
<p>Unfortunately this does not work as the index does not match. So how do I ignore the index, when setting values?</p>
<p>Any help would be appreciated. Sorry if this has been posted before.</p>
|
<p>Just add <code>.values</code> (or <code>.to_numpy()</code> if using pandas v0.24+):</p>
<pre><code>df1.loc[df1['cars_per_year'].isnull(),['cars_per_year','some_other_value']] = df2.values
Name Age Amount_of_cars cars_per_year some_other_value
0 Alex 10 0 0.000000 2.0
1 Bob 12 0 0.000000 1.0
2 Clarke 13 0 0.000000 4.0
3 Dennis 64 2 0.031250 5.0
4 Jennifer 56 1 0.017857 1.0
5 Tom 95 5 0.052632 7.0
6 Ellen 42 2 0.047619 5.0
7 Heather 31 3 0.096774 7.0
</code></pre>
|
python|pandas|dataframe
| 2
|
374,822
| 58,837,565
|
Pandas: Remove Column Based on Threshold Criteria
|
<p>I have to solve this problem:<br>
Objective: Drop columns most of whose rows are missing.<br>
<strong>Inputs</strong>:<br>
1. Dataframe df: Pandas dataframe<br>
2. threshold: Determines which columns will be dropped. If threshold is .9, the columns with 90% missing values will be dropped.<br>
<strong>Outputs</strong>:<br>
1. Dataframe df with dropped columns (if no columns are dropped, you will return the same dataframe)</p>
<p><a href="https://i.stack.imgur.com/YZIEK.png" rel="nofollow noreferrer">Excel Doc Screenshot</a></p>
<p>I've coded this: </p>
<pre><code>class variableTreatment():
    def drop_nan_col(self, df, threshold):
        self.threshold = threshold
        self.df = df
        for i in df.columns:
            if (float(df[i].isnull().sum())/df[i].shape[0]) > threshold:
                df = df.drop(i)
</code></pre>
<p>I have to have "self, dr, and threshold" and cannot add more. The code must pass the test cases below:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_excel('CKD.xlsx')
VT = variableTreatment()
VT
VT.drop_nan_col(df, 0.9).head()
</code></pre>
<p>When I run VT.drop_nan_col(df, 0.9).head() (I cannot change this line of code), I get:</p>
<pre><code>KeyError: "['yls'] not found in axis"
</code></pre>
<p>If I change the shape to have 0 instead of 1, I don't think this is correct for what I'm doing, I get:</p>
<pre><code>IndexError: tuple index out of range
</code></pre>
<p>Can anyone help me understand how I can fix this? </p>
|
<p>I think you need to change from </p>
<p><code>df = df.drop(i)</code></p>
<p>to </p>
<p><code>df = df.drop(i, axis=1)</code></p>
<p>So you account for columns instead of rows, which is the default option. See here for the same error <a href="https://stackoverflow.com/a/44931865/5184851">https://stackoverflow.com/a/44931865/5184851</a></p>
<p>Also, to use <code>.head()</code> the function <code>drop_nan_col(...)</code> needs to return the dataframe, i.e. <code>df</code>.</p>
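<p>Putting both points together, a sketch of the corrected method (same signature as in the question; adjust as needed for your class):</p>
<pre><code>class variableTreatment():
    def drop_nan_col(self, df, threshold):
        for col in df.columns:
            if float(df[col].isnull().sum()) / df.shape[0] > threshold:
                df = df.drop(col, axis=1)
        return df
</code></pre>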
|
python|excel|pandas|numpy|dataframe
| 0
|
374,823
| 58,740,076
|
Plot a table from selected rows in DataFrame
|
<p>I have a dataframe that looks like this:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Institution':['Uni1', 'Uni2', 'Uni3', 'Uni1', 'Uni2', 'Uni3'],
'Year': [2018, 2018, 2018, 2019, 2019, 2019],
'Value': [1000000, 2000000, 250000, 2300000, 3000000, 90000],
'Rank': [10, 9, 1, 8, 7, 3]})
</code></pre>
<p>I want to plot the data grouped as a table:</p>
<pre><code> Uni1 Uni2 Uni3
2018 1000000 2000000 250000
2019 2300000 3000000 90000
</code></pre>
<p>So far, I am just trying to plot a simple table that does not separate by year, and that it looks like:</p>
<pre><code> Uni1 Uni2 Uni3 Uni1 Uni2 Uni3
1000000 2000000 250000 2300000 3000000 90000
</code></pre>
<p>This is what I'm using:</p>
<pre><code>import matplotlib.pyplot as plt
plt.table(cellText = df.values.T)
</code></pre>
<p>That works and prints the whole dataframe, but when I try to get just one row, I get the following:</p>
<pre><code>plt.table(cellText = df['Value'].values.T)
TypeError: object of type 'numpy.int64' has no len()
</code></pre>
<p>I know a solution would be to define a new DataFrame that contains only the rows I want plotted, but I doubt that's the cleanest solution.</p>
|
<p>To plot Values column:</p>
<pre><code>plt.table(cellText = df[['Value']].values.T)
</code></pre>
<p>Keep in mind that <code>df[['Value']]</code> returns a DataFrame, while <code>df['Value']</code> returns a Series.</p>
<p>creating rows for each year using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.pivot_table.html#pandas.DataFrame.pivot_table" rel="nofollow noreferrer"><code>DataFrame.pivot_table</code></a>:</p>
<pre><code>df_table=df.pivot_table(index='Year',columns='Institution',values='Value')
print(df_table)
</code></pre>
<hr>
<pre><code>Institution Uni1 Uni2 Uni3
Year
2018 1000000 2000000 250000
2019 2300000 3000000 90000
</code></pre>
<p>then use <code>plt.table</code>:</p>
<pre><code>plt.table(cellText = df_table.values)
</code></pre>
|
python-3.x|pandas|matplotlib
| 1
|
374,824
| 58,876,412
|
Tree structured input in keras/tensorflow
|
<p>For some school project I am trying to implement a tree convolution as described in <em>"Convolutional Neural Networks over Tree Structures for Programming Language Processing" Lili Mou, et al.</em></p>
<p><strong>Goal</strong></p>
<p>Basically, the outcome should be a neural network. The samples to this network are binary trees whose nodes have a fixed length features such as <code>1xN</code>. The challenging part for me has been the freedom in the tree shape. This means a sample tree may have any number of nodes in any shape. A left-deep tree, right-deep tree, complete tree are all possible. The only constraint is they all should be binary trees.</p>
<p>The tree convolution on a sample tree is defined with 3 weight matrices <code>W_p, W_l, W_r</code>. These weights are used for each node in the tree to generate another tree of the same shape but with different features such as <code>1xM</code> if the weights are of shape <code>NxM</code>. For each node its feature gets multiplied by <code>W_p</code> and its children by <code>W_l, W_r</code> so the node in the new tree will contain information about itself and both its children.</p>
<p>Then there comes finally a dynamic pooling layer over all the tree nodes to have a <code>1xM</code> flattened vector in the end so that it could be fed into a Dense Layer for example. The way it works is they call each entry of <code>1xM</code> vectors a channel. Then for each channel the maximum value over all nodes is returned to have a <code>1xM</code> vector.</p>
<p><strong>Problem</strong></p>
<p>This was a quick explanation of the paper. Now the problem, as I said in the first paragraph, is the varying number of children of these binary trees. First I tried to use Keras, but obviously it needs fixed-size input for Layers. Then it occurred to me that I can use the array implementation of binary trees to encode each tree in a fixed-size fashion. This means, for example, a parent at node <code>i</code> would have its children at <code>2*i</code> and <code>2*i+1</code>. Whenever there are no children in some places, put <code>N</code> zeros for padding if the features are of length <code>N</code>.</p>
<p>This required me to have information about the maximum index over all trees such that I can create some <code>AxN</code> array where <code>A</code> is the maximum indexing used in this fixed-size schema. Sadly, the input trees may be really deep with fewer nodes so to encode 16 nodes I have to create a <code>60000xN</code> or <code>6000xN</code> array most of which gets zero padded just because the tree is not well-balanced.</p>
<p>Then I switched to a custom SGD implementation where I quickly defined Dense, Tree Convolution, and Dynamic Pooling layers. The forward pass was really easy. In the backprop, however, I only got to the point where I can propagate derivatives from the Dense layer to the Pooling layer to the tree before the pooling and do a weight update in that tree, but not for the earlier trees. Since Keras/TF handles differentiation in the background, it was easier indeed.</p>
<p>Now I feel really stuck between these approaches for this problem. Obviously Keras/TF has lots of functionality available for designing such a network. Is there an efficient way of passing this tree-structured data to these libraries, so that for 30 nodes I do not end up creating 60000 nodes with 59970 zero vectors? The idea of generating 6000 or 60000 nodes for some 15 nodes is just crazy at this point even if you have the best GPU out there.</p>
<p>Or should I work on deriving the derivative equations on the paper to continue the custom SGD implementation?</p>
<p>For reference, this was how it looked like with Keras, with the inefficient encoding of the trees I mentioned above. </p>
<pre><code>class MyLayer(Layer):
    def __init__(self, output_dim, **kwargs):
        self.output_dim = output_dim
        super(MyLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        # Create a trainable weight variable for this layer
        self.kernel = self.add_weight(name='kernel',
                                      shape=(3, input_shape[2], self.output_dim[1]),
                                      initializer='ones',
                                      trainable=True)
        super(MyLayer, self).build(input_shape)  # Be sure to call this at the end

    def call(self, x):
        _, tree_size, feature_size = K.int_shape(x)
        new_tree = []
        for i in range(tree_size // 2):
            parent = tf.gather_nd(x, (0, i))
            left = tf.gather_nd(x, (0, 2*i + 1))
            right = tf.gather_nd(x, (0, 2*i + 2))
            p_l_r = K.expand_dims(K.stack([parent, left, right]), axis=1)
            product = K.sum(K.batch_dot(p_l_r, self.kernel), axis=0)
            new_tree.append(product)
        for j in range(tree_size // 2, tree_size):
            parent = tf.gather_nd(x, (0, j))
            parent = K.expand_dims(parent, axis=0)
            product = K.dot(parent, self.kernel[0])
            new_tree.append(product)
        new_tree = K.stack(new_tree, axis=1)
        return new_tree

    def compute_output_shape(self, input_shape):
        return (input_shape[0], self.output_dim[0], self.output_dim[1])
</code></pre>
|
<p>Tensorflow used to have a decision tree implementation. You can see the data structures (variables) it used here: <a href="https://github.com/tensorflow/tensorflow/blob/v0.10.0rc0/tensorflow/contrib/tensor_forest/python/tensor_forest.py#L155" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/v0.10.0rc0/tensorflow/contrib/tensor_forest/python/tensor_forest.py#L155</a></p>
<p>It shows that you can implement a tree by creating a 2D tensor of shape <code>(max_nodes, max_children)</code>. The <code>(i, j)</code> entry has an integer telling the index of the jth child of the ith node in the same tensor. So an upside-down-v shaped binary tree with three nodes would be <code>[[1, 2], [-1, -1], [-1, -1]]</code>.</p>
<p>You could easily create a second tensor to hold the features, where the ith row holds the features for the ith node. Then it would be possible to perform the convolution operation you mentioned, although it would require looping. I don't see a way to vectorize it, but that's the cost of using a (somewhat) sparse representation.</p>
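<p>A tiny sketch of that layout in plain NumPy (with made-up feature values) for the upside-down-v tree, gathering every node's child features through the index tensor:</p>
<pre><code>import numpy as np

children = np.array([[1, 2], [-1, -1], [-1, -1]])           # (max_nodes, max_children), -1 = no child
features = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])   # (max_nodes, feature_size)

# append a row of zeros so that index -1 (no child) resolves to zero features
padded = np.vstack([features, np.zeros((1, features.shape[1]))])
child_feats = padded[children]   # shape (max_nodes, max_children, feature_size)
print(child_feats[0])            # features of the root's two children (rows 1 and 2)
</code></pre>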
|
python|tensorflow|keras|deep-learning|tree
| 1
|
374,825
| 58,891,834
|
Passing random seed to numpy random.choice function
|
<p>Let's say I have an function that takes a <code>random_state</code> argument to ensure replicability</p>
<pre><code>def replicable_function(random_seed):
    choice = np.random.choice(X, 10)
    # do more stuff here with choice
    return f(choice)
</code></pre>
<p>These are my two requirements:</p>
<ol>
<li>Passing the same <code>random_seed</code>(could be an integer or a <code>np.random.RandomState</code> object) means the <code>replicable_function</code> always outputs the same thing</li>
<li>I don't change the global numpy random seed state (trying to be nice to the user and not change things she doesn't expect)</li>
</ol>
<p>Ideally, I'd like to pass this <code>random_state</code> to the <code>np.random.choice</code> function, but it doesn't seem to take such an argument (<a href="https://github.com/numpy/numpy/blob/master/numpy/random/mtrand.pyx#L761" rel="nofollow noreferrer">see the source code here</a>!)</p>
|
<p>I figured out this answer right after asking the question here. I'm not sure it's the best solution though, so I'm happy to receive other suggestions.</p>
<p>I ended up using a utility function from <code>sklearn</code> that, if needed, turns an integer input into a <code>RandomState</code> instance.</p>
<pre><code>def check_random_state(seed):
    """Turn seed into a np.random.RandomState instance

    Parameters
    ----------
    seed : None | int | instance of RandomState
        If seed is None, return the RandomState singleton used by np.random.
        If seed is an int, return a new RandomState instance seeded with seed.
        If seed is already a RandomState instance, return it.
        Otherwise raise ValueError.
    """
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, (numbers.Integral, np.integer)):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
                     ' instance' % seed)
</code></pre>
<p>Thus, I could write:</p>
<pre><code>from sklearn.utils import check_random_state

def replicable_function(random_seed):
    random_seed = check_random_state(random_seed)
    choice = random_seed.choice(X, 10)  # instead of np.random.choice(X, 10)
    # do more stuff here with choice
    return f(choice)
</code></pre>
|
numpy|random-seed
| 0
|
374,826
| 59,003,985
|
Why is the Pytorch Dropout layer affecting all values, not only the ones set to zero?
|
<p>The dropout layer from Pytorch changes the values that are not set to zero. Using Pytorch's documentation example: (<a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout" rel="nofollow noreferrer">source</a>):</p>
<pre class="lang-py prettyprint-override"><code>import torch
import torch.nn as nn
m = nn.Dropout(p=0.5)
input = torch.ones(5, 5)
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(input)
tensor([[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.],
[1., 1., 1., 1., 1.]])
</code></pre>
<p>Then I pass it through a <code>dropout</code> layer:</p>
<pre class="lang-py prettyprint-override"><code>output = m(input)
print(output)
tensor([[0., 0., 2., 2., 0.],
[2., 0., 2., 0., 0.],
[0., 0., 0., 0., 2.],
[2., 2., 2., 2., 2.],
[2., 0., 0., 0., 2.]])
</code></pre>
<p>The values that aren't set to zero are now 2. Why?</p>
|
<p>This is how dropout regularization works: after dropout, the surviving values are divided by the keep probability (in this case 0.5).</p>
<p>Since PyTorch's Dropout function receives the probability of zeroing a neuron as input, <code>nn.Dropout(p=0.2)</code> means each neuron has a 0.8 chance of being kept, so the surviving values become 1/(1-0.2) = 1.25.</p>
<p>This is called "inverted dropout technique" and it is done in order to ensure that the expected value of the activation remains the same.</p>
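<p>A quick way to see this numerically (a small illustrative check, not from the question) is to compare the surviving entries with <code>1/(1-p)</code>:</p>
<pre><code>import torch
import torch.nn as nn

p = 0.2
m = nn.Dropout(p=p)
x = torch.ones(3, 3)
print(m(x))          # surviving entries are 1 / (1 - 0.2) = 1.25, the rest are 0
print(1 / (1 - p))   # 1.25
</code></pre>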
|
python|pytorch|tensor|dropout
| 6
|
374,827
| 58,618,885
|
Unable to use resample.ohlc() method - DataError: No numeric types to aggregate
|
<p>I am receiving stock ticks second-wise and I am storing them in a dataframe. I need to resample them to get the ohlc value for a minute. Here is my code:</p>
<pre><code>def on_ticks(ws, ticks):
    global time_second, df_cols, tick_cols, data_frame
    for company_data in ticks:
        ltp = company_data['last_price']
        timestamp = company_data['timestamp']
        lowest_sell = company_data['depth']['sell'][0]['price']
        highest_buy = company_data['depth']['buy'][0]['price']
        data = [timestamp, ltp, lowest_sell, highest_buy]
        tick_df = pd.DataFrame([data], columns=tick_cols)
        #print(tick_df)
        data_frame = pd.concat([data_frame, tick_df], axis=0, sort=True, ignore_index='true')
        #print("time_second is ", time_second)
        if time_second > timestamp.second:
            #print("now we will print data_frame")
            #print(data_frame)
            print("Resampling dataframe & Calculating the EMAs............")
            resamp_df = data_frame.resample('1T', on='Timestamp').ohlc()
</code></pre>
<p>When I run this code, it triggers following error <strong><em>DataError: No numeric types to aggregate</em></strong>:</p>
<pre><code> ---------------------------------------------------------------------------
DataError Traceback (most recent call last)
<ipython-input-8-166d9105fb91> in <module>
----> 1 resamp = df.resample('1T', on='Timestamp').ohlc()
~\Anaconda3\lib\site-packages\pandas\core\resample.py in g(self, _method, *args, **kwargs)
904 def g(self, _method=method, *args, **kwargs):
905 nv.validate_resampler_func(_method, args, kwargs)
--> 906 return self._downsample(_method)
907
908 g.__doc__ = getattr(GroupBy, method).__doc__
~\Anaconda3\lib\site-packages\pandas\core\resample.py in _downsample(self, how, **kwargs)
1068 # we are downsampling
1069 # we want to call the actual grouper method here
-> 1070 result = obj.groupby(self.grouper, axis=self.axis).aggregate(how, **kwargs)
1071
1072 result = self._apply_loffset(result)
~\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py in aggregate(self, arg, *args, **kwargs)
1453 @Appender(_shared_docs["aggregate"])
1454 def aggregate(self, arg=None, *args, **kwargs):
-> 1455 return super().aggregate(arg, *args, **kwargs)
1456
1457 agg = aggregate
~\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py in aggregate(self, func, *args, **kwargs)
227 func = _maybe_mangle_lambdas(func)
228
--> 229 result, how = self._aggregate(func, _level=_level, *args, **kwargs)
230 if how is None:
231 return result
~\Anaconda3\lib\site-packages\pandas\core\base.py in _aggregate(self, arg, *args, **kwargs)
354
355 if isinstance(arg, str):
--> 356 return self._try_aggregate_string_function(arg, *args, **kwargs), None
357
358 if isinstance(arg, dict):
~\Anaconda3\lib\site-packages\pandas\core\base.py in _try_aggregate_string_function(self, arg, *args, **kwargs)
303 if f is not None:
304 if callable(f):
--> 305 return f(*args, **kwargs)
306
307 # people may try to aggregate on a non-callable attribute
~\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py in ohlc(self)
1438 """
1439
-> 1440 return self._apply_to_column_groupbys(lambda x: x._cython_agg_general("ohlc"))
1441
1442 @Appender(DataFrame.describe.__doc__)
~\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py in _apply_to_column_groupbys(self, func)
1579 (func(col_groupby) for _, col_groupby in self._iterate_column_groupbys()),
1580 keys=self._selected_obj.columns,
-> 1581 axis=1,
1582 )
1583
~\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py in concat(objs, axis, join, join_axes, ignore_index, keys, levels, names, verify_integrity, sort, copy)
253 verify_integrity=verify_integrity,
254 copy=copy,
--> 255 sort=sort,
256 )
257
~\Anaconda3\lib\site-packages\pandas\core\reshape\concat.py in __init__(self, objs, axis, join, join_axes, keys, levels, names, ignore_index, verify_integrity, copy, sort)
299 objs = [objs[k] for k in keys]
300 else:
--> 301 objs = list(objs)
302
303 if len(objs) == 0:
~\Anaconda3\lib\site-packages\pandas\core\groupby\generic.py in <genexpr>(.0)
1577
1578 return concat(
-> 1579 (func(col_groupby) for _, col_groupby in self._iterate_column_groupbys()),
1580 keys=self._selected_obj.columns,
1581 axis=1,
~\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py in <lambda>(x)
1438 """
1439
-> 1440 return self._apply_to_column_groupbys(lambda x: x._cython_agg_general("ohlc"))
1441
1442 @Appender(DataFrame.describe.__doc__)
~\Anaconda3\lib\site-packages\pandas\core\groupby\groupby.py in _cython_agg_general(self, how, alt, numeric_only, min_count)
886
887 if len(output) == 0:
--> 888 raise DataError("No numeric types to aggregate")
889
890 return self._wrap_aggregated_output(output, names)
DataError: No numeric types to aggregate
</code></pre>
<p>Can anyone please help me find where I am going wrong?</p>
|
<p>The problem is that there is no numeric column, only datetimes in <code>Timestamp</code>.</p>
<hr>
<p>I think you can create a <code>DatetimeIndex</code> and then convert all columns to <code>float</code>s; it is also necessary to remove the parameter <code>on</code> in <code>resample</code>:</p>
<pre><code>resamp_df = data_frame.set_index('Timestamp').astype(float).resample('1T').ohlc()
</code></pre>
<hr>
<p>Another idea (if working with scalars) is convert them to floats:</p>
<pre><code>for company_data in ticks:
    ltp = float(company_data['last_price'])
    timestamp = company_data['timestamp']
    lowest_sell = float(company_data['depth']['sell'][0]['price'])
    highest_buy = float(company_data['depth']['buy'][0]['price'])
</code></pre>
|
python|pandas|dataframe|resampling|ohlc
| 1
|
374,828
| 59,022,664
|
Combining two dataframes based on specific column
|
<p>I'm attempting to combine different dataframes for NBA data. My first dataframe is from a <a href="https://www.basketball-reference.com/leagues/NBA_2019_per_poss.html" rel="nofollow noreferrer">basketball-reference</a> page and my second dataframe is from a <a href="https://projects.fivethirtyeight.com/2020-nba-player-ratings/" rel="nofollow noreferrer">538 stats page</a>. I've already webscraped them.</p>
<p>I want to combine them so that it is by the player name. One of the dataframes is still bigger than the other. How can I combine the dataframes together? Both have the column id of "Player"</p>
|
<p>I think you probably want to use pandas .merge().</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({'player': ['foo', 'bar', 'baz', 'foo', 'bar', 'foo'],
'value': [1, 2, 3, 5, 7, 9]})
df2 = pd.DataFrame({'player': ['foo', 'bar', 'baz', 'foo'],
'value': [5, 6, 7, 8]})
merged_df = df1.merge(df2, how='outer', on='player')
</code></pre>
|
python|pandas|dataframe
| 0
|
374,829
| 59,006,554
|
Webscraping and decoding string into pandas DF
|
<p>I'm webscraping and would like to have a Pandas dataframe as a result of my content scraping. I'm able to get an <code>UTF-8</code> string that I'd like to read as a Pandas dataframe, but I'm not sure how to do it and I'd like to avoid outputting to CSV and reading it back. How would I do it?</p>
<p>E.g.</p>
<pre><code>string='term_ID,description,frequency,plot_X,plot_Y,plot_size,uniqueness,dispensability,representative,eliminated\r\nGO:0006468,"protein phosphorylation",4.137%, 4.696, 0.927,5.725,0.430,0.000,6468,0\r\nGO:0050821,"protein stabilization, positive",0.045%,-4.700, 0.494,3.763,0.413,0.000,50821,0\r\n'
</code></pre>
<p>I was splitting the string with</p>
<pre><code>fcsv_content=[x.split(',') for x in string.split("\r\n")]
</code></pre>
<p>But this will not work as some fields have commas inside. What can I do? Could I change the decoding so that this is fixed?
For some background, I'm using <a href="https://gist.github.com/SamDM/b7e8a13a5529c24291e293ee6ebe2366" rel="nofollow noreferrer">robobrowser</a> to decode the webpage.</p>
|
<p>You can use Python's csv module to read and split your csv. It will take care of things like commas inside quoted strings and know not to split those. Below is a small example using your input string. As you will see, the field <code>protein stabilization, positive</code> doesn't get split into separate columns because it's a quoted string.</p>
<pre class="lang-py prettyprint-override"><code>import csv
string = 'term_ID,description,frequency,plot_X,plot_Y,plot_size,uniqueness,dispensability,representative,eliminated\r\nGO:0006468,"protein phosphorylation",4.137%, 4.696, 0.927,5.725,0.430,0.000,6468,0\r\nGO:0050821,"protein stabilization, positive",0.045%,-4.700, 0.494,3.763,0.413,0.000,50821,0\r\n'
csv_reader = csv.reader(string.splitlines())
for record in csv_reader:
    print(f'number of fields: {len(record)}, Record: {record}')
</code></pre>
<p><strong>OUTPUT</strong></p>
<pre><code>number of fields: 10, Record: ['term_ID', 'description', 'frequency', 'plot_X', 'plot_Y', 'plot_size', 'uniqueness', 'dispensability', 'representative', 'eliminated']
number of fields: 10, Record: ['GO:0006468', 'protein phosphorylation', '4.137%', ' 4.696', ' 0.927', '5.725', '0.430', '0.000', '6468', '0']
number of fields: 10, Record: ['GO:0050821', 'protein stabilization, positive', '0.045%', '-4.700', ' 0.494', '3.763', '0.413', '0.000', '50821', '0']
</code></pre>
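<p>Since the end goal in the question is a pandas DataFrame, the same quoting-aware parsing can also be done in one step with <code>io.StringIO</code>, without writing a CSV file to disk (a sketch using the same string):</p>
<pre class="lang-py prettyprint-override"><code>import io
import pandas as pd

df = pd.read_csv(io.StringIO(string))
print(df.shape)               # (2, 10)
print(df.columns.tolist())
</code></pre>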
|
python|python-3.x|pandas|web-scraping|robobrowser
| 1
|
374,830
| 58,909,918
|
Can't Install Python Pandas in 3.6.6
|
<p>I am a Mac user trying to install pandas for Python 3.6.6, to use in IDLE / VS Code for my work.</p>
<pre><code>Hasans-MacBook-Pro:~ hasan-macbookpro$ python3 --version
Python 3.6.6
Hasans-MacBook-Pro:~ hasan-macbookpro$
</code></pre>
<p>But when I run <code>pip install pandas</code> it installs it for Python 2.7, as you can see below. When I go into IDLE, since pandas is not installed for 3.6.6, it gives me an error.</p>
<p>Can anyone guide me on how to go about this?</p>
<pre><code>Hasans-MacBook-Pro:~ hasan-macbookpro$ pip install pandas
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Requirement already satisfied: pandas in ./Library/Python/2.7/lib/python/site-packages (0.24.2)
Requirement already satisfied: python-dateutil>=2.5.0 in ./Library/Python/2.7/lib/python/site-packages (from pandas) (2.8.1)
Requirement already satisfied: pytz>=2011k in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from pandas) (2013.7)
Requirement already satisfied: numpy>=1.12.0 in ./Library/Python/2.7/lib/python/site-packages (from pandas) (1.16.5)
Requirement already satisfied: six>=1.5 in /System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python (from python-dateutil>=2.5.0->pandas) (1.12.0)
Hasans-MacBook-Pro:~ hasan-macbookpro$
</code></pre>
|
<p>Try <code>pip3 install pandas</code> to install it for Python 3.</p>
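<p>If <code>pip3</code> does not point at the interpreter you expect, invoking pip through that interpreter also works:</p>
<pre><code>python3 -m pip install pandas
</code></pre>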
|
python-3.x|pandas
| 0
|
374,831
| 58,981,134
|
Creating a variable conditional to the value of another variable in Python
|
<p>I'm trying to generate a variable whose value depends on the value of another variable. My dataset is <code>urban_classification</code> and I am trying to create the variable <code>URBRUR</code> based on the value of the variable <code>prc_urbain</code>. This is my code:</p>
<pre><code>if urban_classification.prc_urbain > 0.5:
    urban_classification['URBRUR'] = "urban"
else:
    urban_classification['URBRUR'] = "rural"
</code></pre>
<p>and I get this error message:</p>
<pre><code> Traceback (most recent call last):
File "C:\Users\Utilisateur\AppData\Roaming\Python\Python37\site-packages\IPython\core\interactiveshell.py", line 3326, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-3-a94aadb86c32>", line 31, in <module>
if urban_classification.prc_urbain>0.5 :
File "C:\Users\Utilisateur\AppData\Local\Programs\Python\Python37\lib\site-packages\pandas\core\generic.py", line 1555, in __nonzero__
self.__class__.__name__
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>Can you indicate me what I am doing wrong?</p>
<p>Thanks!</p>
|
<p>The error message:</p>
<blockquote>
<p>The truth value of a Series is ambiguous.</p>
</blockquote>
<p>comes from </p>
<pre><code>if urban_classification.prc_urbain>0.5 :
</code></pre>
<p>because <code>urban_classification.prc_urbain</code> is a pd.Series, hence <code>urban_classification.prc_urbain>0.5</code> is also a pd.Series made of True/False values, and python is not able to determine if this list of booleans should evaluate to True or not.</p>
<p>To achieve what you want, you can use <a href="https://pandas.pydata.org/pandas-docs/version/0.23.4/generated/pandas.cut.html" rel="nofollow noreferrer">pd.cut</a>:</p>
<pre><code>urban_classification["URBRUR"] = pd.cut(urban_classification.prc_urbain, [0, 0.5, 1], labels=["rural", "urban"], include_lowest=True)
</code></pre>
<p>Example:</p>
<pre><code>import pandas as pd
s = pd.Series([0, 0.1, 0.45, 0.6, 0.8, 1])
pd.cut(s, [0, 0.5, 1], labels=("rural", "urban"), include_lowest=True)
0 rural
1 rural
2 rural
3 urban
4 urban
5 urban
</code></pre>
|
python|pandas
| 0
|
374,832
| 58,629,183
|
How can I take an input 'n' to define a matrix of order n in python?
|
<pre><code> num_array = list()
num = input("Enter how many elements you want:")
print('Enter numbers in array: ')
for i in range(int(num)):
n=input("num :")
num_array.append(int(n))
print('ARRAY: ',num_array)
</code></pre>
<p>This code was already there, but it's not going to give me a matrix of order n.</p>
|
<p>I think if you want a matrix representation, you should go with a list of lists. You only input n numbers, but for a matrix you need n*n numbers. Do that with a second for loop like so:</p>
<pre><code># matrix is gonna be a list of lists
num_array = list()
num = input("Enter how many elements you want:")
print('Enter numbers in array: ')
# first for iterates for rows
for i in range(int(num)):
    row = list()
    # second for iterates numbers in every row
    for j in range(int(num)):
        n = input("num :")
        row.append(int(n))
    num_array.append(row)
# output as matrix
for row in num_array:
    for number in row:
        print(number, end=" ")
    print()
</code></pre>
|
python|arrays|numpy|matrix|input
| 0
|
374,833
| 58,914,513
|
Error while converting a frozen.pb file to tflite format
|
<pre><code>tflite_convert --output_file=./graph.tflite --graph_def_file=output_graph_frozen.pb --input_arrays=IteratorV2 --output_arrays=linear/head/predictions/probabilities
</code></pre>
<p>Traceback (most recent call last):
File "/usr/local/bin/tflite_convert", line 11, in
sys.exit(main())
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/tflite_convert.py", line 503, in main
app.run(main=run_main, argv=sys.argv[:1])
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 299, in run
_run_main(main, args)
File "/usr/local/lib/python3.6/dist-packages/absl/app.py", line 250, in _run_main
sys.exit(main(argv))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/tflite_convert.py", line 499, in run_main
_convert_tf1_model(tflite_flags)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/tflite_convert.py", line 193, in _convert_tf1_model
output_data = converter.convert()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py", line 898, in convert
**converter_kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py", line 401, in toco_convert_impl
input_tensors, output_tensors, *args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/convert.py", line 304, in build_toco_convert_protos
input_tensor.dtype)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/util.py", line 58, in convert_dtype_to_tflite_type
raise ValueError("Unsupported tf.dtype {0}".format(tf_dtype))
ValueError: Unsupported tf.dtype </p>
|
<p>What is the purpose of setting the "input_arrays" and "output_arrays"?</p>
<p>In the simplest case, you can just do</p>
<pre><code>converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()
</code></pre>
<p>as documented at <a href="https://www.tensorflow.org/lite/convert/python_api" rel="nofollow noreferrer">https://www.tensorflow.org/lite/convert/python_api</a>.</p>
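<p>If all you have is the frozen <code>.pb</code> rather than a SavedModel directory, TF 1.x also exposes a frozen-graph entry point, where <code>input_arrays</code>/<code>output_arrays</code> simply name the graph tensors that should become the TFLite model's inputs and outputs. A sketch (the names below are the ones from your command line; <code>IteratorV2</code> is likely the wrong choice here, since its dtype is what triggers the error, so a real input tensor of the graph should be used instead):</p>
<pre><code>import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "output_graph_frozen.pb",
    input_arrays=["IteratorV2"],
    output_arrays=["linear/head/predictions/probabilities"])
tflite_model = converter.convert()
</code></pre>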
|
tensorflow|tensorflow-lite
| 0
|
374,834
| 58,705,410
|
how to set session property when executing pd.to_sql? {error:"Required field 'numDVs' is unset! }
|
<p>there is issue when insert dataframe data to presto db.</p>
<p>error message is</p>
<pre><code>{'message': "Required field 'numDVs' is unset! Struct:LongColumnStatsData(lowValue:0, highValue:2, numNulls:0, numDVs:0),
'errorCode': 16777216,
'errorName': 'HIVE_METASTORE_ERROR',
'errorType': 'EXTERNAL' ..."
</code></pre>
<p>i think</p>
<pre><code>SET SESSION COLLECT_COLUMN_STATISTICS_ON_WRITE = FALSE
</code></pre>
<p>should be executed before inserting data.
but i can't find way to do.
is there any way to set session property, before execute pd.to_sql?</p>
<pre><code>from pyhive import presto
result.to_sql(table_name, engine, if_exists='append', index=False)
</code></pre>
<p>table format</p>
<pre><code>CREATE TABLE tablename (
similarity DOUBLE,
cluster INTEGER,
member_cnt INTEGER,
member_list VARCHAR,
mean_similarity DOUBLE,
ym VARCHAR(6)
)
WITH (format = 'ORC', PARTITIONED_BY = ARRAY['ym'])
</code></pre>
|
<p>We can set session properties by <code>session_props</code> as below.</p>
<pre class="lang-py prettyprint-override"><code>from pyhive import presto
cursor = presto.connect('localhost', session_props={'hive.collect_column_statistics_on_write': 'false'}).cursor()
</code></pre>
|
python|pandas|sqlalchemy|presto
| 2
|
374,835
| 58,659,212
|
Specifying a merge function in pandas?
|
<p>Can a merge criteria function be specified in pandas?</p>
<p>So instead of just matching two fields, specifying a function that will return true or false to determine if _merge is ‘both’ or the alternatives?</p>
|
<p>If I understand your question correctly, you can do this:</p>
<pre><code>pd.merge(df1, df2, on='your_var', how='outer', indicator=True)
</code></pre>
<p>You can then just look at the <code>_merge</code> column to see which rows matched on both sides.</p>
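<p>If the matching rule is more complex than simple column equality, one pattern (a sketch, not a built-in pandas feature) is to merge on whatever key is shared and then evaluate your own function row-wise to decide which rows count as a real match:</p>
<pre><code>merged = pd.merge(df1, df2, on='your_var', how='outer', indicator=True)

# hypothetical predicate; replace with your own criteria
def is_match(row):
    return row['_merge'] == 'both'

merged['is_match'] = merged.apply(is_match, axis=1)
</code></pre>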
|
python-3.x|pandas|merge
| 0
|
374,836
| 58,637,390
|
Without onnx, how to convert a pytorch model into a tensorflow model manually?
|
<p>Since ONNX supports limited models, I tried to do this conversion by assigning parameters directly, but the gained tensorflow model failed to show the desired accuracy. Details are described as follows:</p>
<ol>
<li>The source model is Lenet trained on MNIST dataset.</li>
<li>I firstly extracted each module and its parameters by model.named_parameters() and save them into a dictionary where the key is the module's name and the value is the parameters</li>
<li>Then, I built and initiated a tensorflow model with the same architecture</li>
<li>Finally, I assign each layer's parameters of pytroch model to the tensorflow model</li>
</ol>
<p>However, the accuracy of the resulting tensorflow model is only about 20%. Thus, my question is: is it possible to convert the pytorch model by this method? If yes, what's the possible issue causing the bad result? If no, then please kindly explain the reasons.</p>
<p>PS: assume the assignment procedure is right.</p>
|
<p>As the comment by jodag mentioned, there are many differences between operator representations in Tensorflow and PyTorch that might cause discrepancies in your workflow.</p>
<p>We would recommend using the following method:</p>
<ol>
<li>Use the <a href="https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html" rel="nofollow noreferrer">ONNX exporter in PyTorch</a> to export the model to the ONNX format.</li>
</ol>
<pre><code>import torch.onnx
# Argument: model is the PyTorch model
# Argument: dummy_input is a torch tensor
torch.onnx.export(model, dummy_input, "LeNet_model.onnx")
</code></pre>
<ol start="2">
<li>Use the <a href="https://github.com/onnx/onnx-tensorflow" rel="nofollow noreferrer">onnx-tensorflow backend</a> to convert the ONNX model to Tensorflow.</li>
</ol>
<pre><code>import onnx
from onnx_tf.backend import prepare
onnx_model = onnx.load("LeNet_model.onnx") # load onnx model
tf_rep = prepare(onnx_model) # prepare tf representation
tf_rep.export_graph("LeNet_model.pb") # export the model
</code></pre>
|
tensorflow|pytorch|exchange-server|onnx
| 2
|
374,837
| 58,620,552
|
tf.reshape is not giving ?(None) for first element
|
<p>I am new to tensorflow, I have tensor like below,</p>
<pre><code>a = tf.constant([[1, 2, 3], [4, 5, 6]])
</code></pre>
<p>Output of <code>a.shape</code> is </p>
<blockquote>
<p>TensorShape([Dimension(2), Dimension(3)])</p>
</blockquote>
<p>For my computational process I want to reshape the tensor to <code>(?, 2, 3)</code></p>
<p>I am unable to reshape it to desire format.</p>
<p>I tried,</p>
<pre><code>tf.reshape(a, [-1, 2, 3])
</code></pre>
<p>But it returns,</p>
<pre><code><tf.Tensor 'Reshape_18:0' shape=(1, 2, 3) dtype=int32> # 1 has to be replaced by ?
</code></pre>
<p>further I tried,</p>
<pre><code>tf.reshape(a, [-1, -1, 2, 3])
</code></pre>
<p>it returns,</p>
<pre><code><tf.Tensor 'Reshape_19:0' shape=(?, ?, 2, 3) dtype=int32> # two ? are there
</code></pre>
<p>How do I get the desired result?</p>
<p>Sorry if this sounds like a simple problem.</p>
|
<p>The "problem" is TensorFlow does as much shape inference as it can, which is generally something good, but it makes it more complicated if you explicitly want to have a <code>None</code> dimension. Not an ideal solution, but one possible workaround is to use a <a href="https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/placeholder_with_default" rel="nofollow noreferrer"><code>tf.placeholder_with_default</code></a>, for example like this:</p>
<pre><code>import tensorflow as tf
a = tf.constant([[1, 2, 3], [4, 5, 6]])
# This placeholder is never actually fed
z = tf.placeholder_with_default(tf.zeros([1, 1, 1], a.dtype), [None, 1, 1])
b = a + z
print(b)
# Tensor("add:0", shape=(?, 2, 3), dtype=int32)
</code></pre>
<p>Or another similar option, just with reshaping:</p>
<pre><code>import tensorflow as tf
a = tf.constant([[1, 2, 3], [4, 5, 6]])
s = tf.placeholder_with_default([1, int(a.shape[0]), int(a.shape[1])], [3])
b = tf.reshape(a, s)
b.set_shape(tf.TensorShape([None]).concatenate(a.shape))
print(b)
# Tensor("Reshape:0", shape=(?, 2, 3), dtype=int32)
</code></pre>
|
python|python-3.x|tensorflow
| 1
|
374,838
| 58,811,263
|
How to update the Index of df with a new Index?
|
<p>I currently have one df with an incomplete Index, like this:</p>
<pre><code>Idx bar baz zoo
001 A 1 x
003 B 2 y
005 C 3 z
007 A 4 q
008 B 5 w
009 C 6 t
</code></pre>
<p>I have the complete <code>Index([001, 002, ...... 010])</code>. I would like to know how to expand the incomplete df to this complete Index.</p>
<pre><code>Idx bar baz zoo
001 A 1 x
002 nan nan nan
003 B 2 y
004 nan nan nan
005 C 3 z
006 nan nan nan
007 A 4 q
008 B 5 w
009 C 6 t
010 nan nan nan
</code></pre>
<p>The <code>nan</code> values can be "" instead; the purpose is for me to identify which cases I am currently missing. It's the first time I've asked a question on Stack Overflow, apologies for the poor formatting.</p>
|
<p>You can try with <code>reindex</code></p>
<pre><code>df=df.reindex(completeIndex)
</code></pre>
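<p>A small sketch with an index like the one in the question; labels that were missing show up as <code>NaN</code> rows (which you could turn into <code>""</code> with <code>fillna</code> if preferred):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'bar': ['A', 'B'], 'baz': [1, 2]}, index=['001', '003'])
complete_index = ['001', '002', '003']

df = df.reindex(complete_index)
print(df)
#      bar  baz
# 001    A  1.0
# 002  NaN  NaN
# 003    B  2.0
</code></pre>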
|
python|pandas|dataframe
| 2
|
374,839
| 58,761,791
|
Python ingestion of csv files
|
<p>I am trying to ingest daily csv data into Python. I have different files, such as the following, for each day. I need help appending two columns whose values come from the file name: e.g. the first column should take the value before the '_' and the second column takes the date part of the file name.</p>
<pre><code> board_2019-08-08.csv
sign_2019-08-08.csv
Summary_2019-08-08.csv
</code></pre>
<p>Code :</p>
<pre><code>path = "C:\xyz\Files\ETL\Dashboard"
all_files = glob.glob(os.path.join(path, "*.csv"))
for file in all_files:
file_name = os.path.splitext(os.path.basename(file))[0]
dfn = pd.read_csv(file, skiprows = 17)
dfn['Page'] = 'Dashboard'
del dfn['Dimension']
dfn = dfn.iloc[1:]
dfn.columns = ['LoanId', 'Impressions', 'Page']
</code></pre>
|
<p>Try this</p>
<pre><code>path = "C:\xyz\Files\ETL\Dashboard"
files = list(filter(lambda x: '.csv' in x, os.listdir(path)))
for file in files:
    pre, post = file.split("_")
    post = post.split(".")[0]
    dfn = pd.read_csv(f"{path}/{file}", skiprows=17)
    # insert the two new columns at positions 0 and 1
    dfn.insert(0, "column1", value=pre)
    dfn.insert(1, "column2", value=post)
    # rest of your code
</code></pre>
|
python-3.x|pandas|automation
| 0
|
374,840
| 58,655,860
|
Creating a loop of 12 hours in dataframe with timestamp index
|
<pre><code>df['index_day'] = df.index.floor('d')
</code></pre>
<p>my dataframe is <code>df.head</code></p>
<pre><code> index_day P2_Qa ... P2_Qcon P2_m
2019-01-10 17:00:00 2019-01-10 93.599342 ... 107.673342 14.962424
2019-01-10 17:01:00 2019-01-10 90.833884 ... 104.658384 14.343642
2019-01-10 17:02:00 2019-01-10 90.907001 ... 104.601001 14.568892
2019-01-10 17:03:00 2019-01-10 93.579973 ... 107.115473 14.884902
2019-01-10 17:04:00 2019-01-10 93.688072 ... 107.168072 14.831412
</code></pre>
<p>I'm looping for every day</p>
<pre><code>for day, i in df.groupby('index_day'):
    sns.jointplot(x='P2_Tam', y='P2_Qa', data=i, kind='reg')
    j = j + 1
    plt.savefig(str(j) + '.png')
</code></pre>
<p>This gives me regression plots for one day, i.e. 24 hours. However, I want such plots for nights only: a loop over 12-hour windows where <code>one night = one loop = 1 plot</code>, from 18:00 till 6:00 in the morning.</p>
<p>In other words, I want to loop with <code>one loop = 18:00 till 6:00 of the next day</code> rather than <code>one loop = 24 hours of one day</code>. How do I do that?</p>
|
<p>I think you can filter first by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.between_time.html" rel="nofollow noreferrer"><code>DataFrame.between_time</code></a> only for nights and then loop by <code>12H</code> with <code>base=6</code>:</p>
<pre><code>rng = pd.date_range('2017-04-03', periods=35, freq='H')
df = pd.DataFrame({'a': range(35)}, index=rng)
df = df.between_time('18:00:01', '6:00')
print (df)
a
2017-04-03 00:00:00 0
2017-04-03 01:00:00 1
2017-04-03 02:00:00 2
2017-04-03 03:00:00 3
2017-04-03 04:00:00 4
2017-04-03 05:00:00 5
2017-04-03 06:00:00 6
2017-04-03 19:00:00 19
2017-04-03 20:00:00 20
2017-04-03 21:00:00 21
2017-04-03 22:00:00 22
2017-04-03 23:00:00 23
2017-04-04 00:00:00 24
2017-04-04 01:00:00 25
2017-04-04 02:00:00 26
2017-04-04 03:00:00 27
2017-04-04 04:00:00 28
2017-04-04 05:00:00 29
2017-04-04 06:00:00 30
</code></pre>
<hr>
<pre><code>for i, g in df.groupby(pd.Grouper(freq='12H', base=6, closed='right')):
    if not g.empty:
        print (g)
a
2017-04-03 00:00:00 0
2017-04-03 01:00:00 1
2017-04-03 02:00:00 2
2017-04-03 03:00:00 3
2017-04-03 04:00:00 4
2017-04-03 05:00:00 5
2017-04-03 06:00:00 6
a
2017-04-03 19:00:00 19
2017-04-03 20:00:00 20
2017-04-03 21:00:00 21
2017-04-03 22:00:00 22
2017-04-03 23:00:00 23
2017-04-04 00:00:00 24
2017-04-04 01:00:00 25
2017-04-04 02:00:00 26
2017-04-04 03:00:00 27
2017-04-04 04:00:00 28
2017-04-04 05:00:00 29
2017-04-04 06:00:00 30
</code></pre>
<p>EDIT:</p>
<p>If want select by 12 hours after start time one possible solution with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.truncate.html" rel="nofollow noreferrer"><code>DataFrame.truncate</code></a>:</p>
<pre><code>rng = pd.date_range('2017-04-03', periods=35, freq='2H')
df = pd.DataFrame({'a': range(35)}, index=rng)
dates = df.index.floor('d').unique()
for s, e in zip(dates + pd.Timedelta(18, unit='H'),
                dates + pd.Timedelta(30, unit='H')):
    df1 = df.truncate(s, e)
    if not df1.empty:
        print (df1)
a
2017-04-03 18:00:00 9
2017-04-03 20:00:00 10
2017-04-03 22:00:00 11
2017-04-04 00:00:00 12
2017-04-04 02:00:00 13
2017-04-04 04:00:00 14
2017-04-04 06:00:00 15
a
2017-04-04 18:00:00 21
2017-04-04 20:00:00 22
2017-04-04 22:00:00 23
2017-04-05 00:00:00 24
2017-04-05 02:00:00 25
2017-04-05 04:00:00 26
2017-04-05 06:00:00 27
a
2017-04-05 18:00:00 33
2017-04-05 20:00:00 34
</code></pre>
|
python-3.x|pandas|loops|datetime|timestamp
| 0
|
374,841
| 58,976,313
|
How to fix: AttributeError: module 'tensorflow' has no attribute 'contrib'
|
<p>I'm training a LSTM and I'm defining parameters and regression layer. I get the error in the title with this code:</p>
<pre><code> lstm_cells = [
tf.contrib.rnn.LSTMCell(num_units=num_nodes[li],
state_is_tuple=True,
initializer= tf.contrib.layers.xavier_initializer()
)
for li in range(n_layers)]
drop_lstm_cells = [tf.contrib.rnn.DropoutWrapper(
lstm, input_keep_prob=1.0,output_keep_prob=1.0-dropout, state_keep_prob=1.0-dropout
) for lstm in lstm_cells]
drop_multi_cell = tf.contrib.rnn.MultiRNNCell(drop_lstm_cells)
multi_cell = tf.contrib.rnn.MultiRNNCell(lstm_cells)
w = tf.get_variable('w',shape=[num_nodes[-1], 1], initializer=tf.contrib.layers.xavier_initializer())
b = tf.get_variable('b',initializer=tf.random_uniform([1],-0.1,0.1))
</code></pre>
<p>I'm using tensorflow2 and I have already read the <a href="https://www.tensorflow.org/guide/migrate" rel="nofollow noreferrer">https://www.tensorflow.org/guide/migrate</a> guide and I think almost everything on the net.
But I'm not able to solve it.
How can I do it?</p>
|
<p>This error occurs because the <code>contrib</code> module has been removed from version 2 of tensorflow. There are two solutions to this problem:</p>
<ol>
<li><p>You can delete the current package and install one of the Series 1 versions.</p>
</li>
<li><p>You can keep the version 2 package and use the compatibility API: use <code>tf.compat.v1.nn.rnn_cell.LSTMCell</code> instead of <code>tf.contrib.rnn.LSTMCell</code> and <code>tf.initializers.GlorotUniform()</code> instead of <code>tf.contrib.layers.xavier_initializer()</code>; for the other RNN-related calls you can likewise use <code>tf.compat.v1.nn.rnn_cell</code>, as sketched after this list.</p>
</li>
</ol>
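<p>Applied to the snippet from the question, the cell construction could look roughly like this (a sketch for option 2; it reuses <code>num_nodes</code>, <code>n_layers</code> and <code>dropout</code> from the question and assumes the rest of the graph is also run through <code>tf.compat.v1</code>, e.g. with v1-style sessions):</p>
<pre><code>import tensorflow as tf

lstm_cells = [
    tf.compat.v1.nn.rnn_cell.LSTMCell(num_units=num_nodes[li],
                                      state_is_tuple=True,
                                      initializer=tf.initializers.GlorotUniform())
    for li in range(n_layers)]

drop_lstm_cells = [
    tf.compat.v1.nn.rnn_cell.DropoutWrapper(
        lstm, input_keep_prob=1.0,
        output_keep_prob=1.0 - dropout,
        state_keep_prob=1.0 - dropout)
    for lstm in lstm_cells]

drop_multi_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(drop_lstm_cells)
multi_cell = tf.compat.v1.nn.rnn_cell.MultiRNNCell(lstm_cells)
</code></pre>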
|
python-3.x|tensorflow|tensorflow2.0
| 3
|
374,842
| 58,718,365
|
Fast way to convert upper triangular matrix into symmetric matrix
|
<p>I have an upper-triangular matrix of <code>np.float64</code> values, like this:</p>
<pre class="lang-py prettyprint-override"><code>array([[ 1., 2., 3., 4.],
[ 0., 5., 6., 7.],
[ 0., 0., 8., 9.],
[ 0., 0., 0., 10.]])
</code></pre>
<p>I would like to convert this into the corresponding symmetric matrix, like this:</p>
<pre class="lang-py prettyprint-override"><code>array([[ 1., 2., 3., 4.],
[ 2., 5., 6., 7.],
[ 3., 6., 8., 9.],
[ 4., 7., 9., 10.]])
</code></pre>
<p>The conversion can be done in place, or as a new matrix. I would like it to be as fast as possible. How can I do this quickly?</p>
|
<p><code>np.where</code> seems quite fast in the out-of-place, no-cache scenario:</p>
<pre><code>np.where(ut,ut,ut.T)
</code></pre>
<p>On my laptop:</p>
<pre><code>timeit(lambda:np.where(ut,ut,ut.T))
# 1.909718865994364
</code></pre>
<p>If you have pythran installed you can speed this up 3 times with near zero effort. But note that as far as I know pythran (currently) only understands contiguous arrays.</p>
<p>file <code><upp2sym.py></code>, compile with <code>pythran -O3 upp2sym.py</code></p>
<pre><code>import numpy as np
#pythran export upp2sym(float[:,:])
def upp2sym(a):
    return np.where(a,a,a.T)
</code></pre>
<p>Timing:</p>
<pre><code>from upp2sym import *
timeit(lambda:upp2sym(ut))
# 0.5760842661838979
</code></pre>
<p>This is almost as fast as looping:</p>
<pre><code>#pythran export upp2sym_loop(float[:,:])
def upp2sym_loop(a):
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i,i] = a[i,i]
        for j in range(i):
            out[i,j] = out[j,i] = a[j,i]
    return out
</code></pre>
<p>Timing:</p>
<pre><code>timeit(lambda:upp2sym_loop(ut))
# 0.4794591029640287
</code></pre>
<p>We can also do it inplace:</p>
<pre><code>#pythran export upp2sym_inplace(float[:,:])
def upp2sym_inplace(a):
    for i in range(len(a)):
        for j in range(i):
            a[i,j] = a[j,i]
</code></pre>
<p>Timing</p>
<pre><code>timeit(lambda:upp2sym_inplace(ut))
# 0.28711927914991975
</code></pre>
|
python|numpy|optimization
| 5
|
374,843
| 58,937,277
|
Pandas.ExcelWriter KeyError when using writer.sheets method
|
<p>Please help, I don't know why this error is happening. I have used this code previously with no issues. I hope it's not something stupid. Always appreciate the help. </p>
<p>Versions:</p>
<p>python 3.6</p>
<p>pd 0.23.0</p>
<p>xlsxwriter 1.0.4</p>
<pre><code>writer = pd.ExcelWriter('Output.xlsx', engine='xlsxwriter')
workbook = writer.book
worksheet = writer.sheets['Sheet1']
</code></pre>
<p>Output:</p>
<pre><code>Traceback (most recent call last):
File "/opt/eclipse/dropins/plugins/org.python.pydev.core_7.2.0.201903251948/pysrc/_pydevd_bundle/pydevd_exec2.py", line 3, in Exec
exec(exp, global_vars, local_vars)
File "<console>", line 1, in <module>
KeyError: 'Sheet1'
</code></pre>
|
<p>You didn't create 'Sheet1' yet; <code>writer.sheets</code> is only populated once a sheet has actually been written, e.g. via <code>to_excel</code>.</p>
<p>from <a href="https://xlsxwriter.readthedocs.io/working_with_pandas.html" rel="noreferrer">here</a> there's an example:</p>
<pre><code>import pandas as pd
# Create a Pandas dataframe from the data.
df = pd.DataFrame({'Data': [10, 20, 30, 20, 15, 30, 45]})
# Create a Pandas Excel writer using XlsxWriter as the engine.
writer = pd.ExcelWriter('pandas_simple.xlsx', engine='xlsxwriter')
# Convert the dataframe to an XlsxWriter Excel object.
df.to_excel(writer, sheet_name='Sheet1') ***#this is where you create Sheet 1***
# Get the xlsxwriter objects from the dataframe writer object.
workbook = writer.book
worksheet = writer.sheets['Sheet1'] ***#here is where you select it***
</code></pre>
|
python|pandas|xlsxwriter
| 7
|
374,844
| 58,708,732
|
Loop through the columns and avoid IndexError
|
<p>Given sample data set (real data has <code>(931, 674)</code>):</p>
<pre><code>12_longitude_1 12_latitude_1 14_longitude_2 14_latitude_2 15_longitude_3 15_latitude_3
16 11 12 13 14 15
16 11 12 13 14 15
16 11 12 13 14 15
</code></pre>
<p>I am running:</p>
<pre><code>pd_out = pd.DataFrame({'zone': [], 'number': []})
for col_num in range(0, len(border.columns), 2):
    curr_lon_name = border.columns[col_num]
    curr_lat_name = border.columns[col_num + 1]  # PROBLEM IS HERE
    num = curr_lon_name.split("_")[-1]
    border = border[[curr_lon_name, curr_lat_name]].dropna()
    border[curr_lon_name] = border[curr_lon_name].replace(r'[()]', '', regex=True)
    border[curr_lat_name] = border[curr_lat_name].replace(r'[()]', '', regex=True)
    border[curr_lon_name] = pd.to_numeric(border[curr_lon_name], errors='coerce')
    border[curr_lat_name] = pd.to_numeric(border[curr_lat_name], errors='coerce')
    geometry2 = [Point(xy) for xy in zip(border[curr_lon_name], border[curr_lat_name])]
    border_point = gpd.GeoDataFrame(border, crs=crs, geometry=geometry2)
    turin_final = Polygon([[p.x, p.y] for p in border_point.geometry])
    within_turin = turin_point[turin_point.geometry.within(turin_final)]
    curr_len = len(within_turin)
    pd_out = pd_out.append({'zone': "long_lat_{}".format(num), 'number': curr_len}, ignore_index=True)
</code></pre>
<hr>
<p>But in the line <code>----> 7 curr_lat_name = border.columns[col_num + 1]</code> I get:</p>
<blockquote>
<p>IndexError: index 3 is out of bounds for axis 0 with size 3</p>
</blockquote>
|
<p>Python starts indexing at 0. So if your axis is of size 3, then you can only access it with indices 0, 1, and 2.</p>
<p><code>len(border.columns)</code> is (I presume) 3. And so <code>col_num</code> will take values 0 and 2 in your for loop.</p>
<p>When it takes value 2, and then you do <code>border.columns[col_num + 1]</code> in your problematic line, you are trying to access the axis outside of its bounds, as <code>col_num+1</code> is 3.</p>
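<p>A tiny illustration of that indexing rule (with a hypothetical 3-column frame):</p>
<pre><code>import pandas as pd

df = pd.DataFrame(columns=['a', 'b', 'c'])
print(df.columns[2])   # 'c' -- the last valid index is len - 1 = 2
print(df.columns[3])   # IndexError: index 3 is out of bounds for axis 0 with size 3
</code></pre>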
|
python|pandas|geopandas
| 0
|
374,845
| 58,792,897
|
Creating a random matrix with 7 rows by 21 column vectors with A,C,T G
|
<p>I am new to coding and need to create a random matrix with 7 rows by 21 column vectors with A,C,T G as values.</p>
|
<pre><code>In [412]: np.random.choice(list('ACTG'),(3,4),replace=True)
Out[412]:
array([['C', 'C', 'C', 'G'],
['A', 'A', 'G', 'T'],
['G', 'A', 'T', 'T']], dtype='<U1')
</code></pre>
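<p>For the shape asked about in the question, the same call with <code>(7, 21)</code> gives the 7-row by 21-column matrix; note that <code>replace=True</code> is already the default, so it can be omitted.</p>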
|
python|numpy
| 1
|
374,846
| 58,649,009
|
Write pandas dataframe to_csv in columns with trailing zeros
|
<p>I have a pandas dataframe of floats and wish to write out to_csv, setting whitespace as the delimeter, and with trailing zeros to pad so it is still readable (i.e with equally spaced columns).</p>
<p>The complicating factor is I also want each column to be rounded to different number of decimals (some need much higher accuracy).</p>
<h3>To reproduce:</h3>
<pre><code>import pandas as pd
df = pd.DataFrame( [[1.00000, 3.00000, 5.00000],
[1.45454, 3.45454, 5.45454]] )
df_rounded = df.round( {0:1, 1:3, 2:5} )
df_rounded.to_csv('out.txt', sep=' ', header=False)
</code></pre>
<h3>Current result for out.txt:</h3>
<pre><code>0 1.0 3.0 5.0
1 1.5 3.455 5.45454
</code></pre>
<h3>Desired:</h3>
<pre><code>0 1.0 3.000 5.00000
1 1.5 3.455 5.45454
</code></pre>
|
<p>You can get the string representation of the dataframe using <code>df.to_string()</code> (<a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_string.html" rel="nofollow noreferrer">docs</a>). Then simply write this string to a text file.</p>
<p>This method also has <code>col_space</code> parameter to further adjust the spacing between columns. </p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>with open('out.txt', 'w') as file:
    file.writelines(df_rounded.to_string(header=False))
</code></pre>
<p>Outputs:</p>
<pre><code>0 1.0 3.000 5.00000
1 1.5 3.455 5.45454
</code></pre>
<hr>
<p>There is pandas <code>df.to_csv(float_format='%.5f')</code> (<a href="https://stackoverflow.com/questions/12877189/float64-with-pandas-to-csv">more info</a>) but it applies the formatting to values in all columns. Whereas, in your case, you need different formatting for each column.</p>
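<p>If a plain <code>to_csv</code> output is still preferred over <code>to_string</code>, one possible workaround (a sketch) is to pre-format each column to its own precision as strings, so the trailing zeros survive the write:</p>
<pre><code>decimals = {0: 1, 1: 3, 2: 5}
formatted = df.copy()
for col, ndec in decimals.items():
    formatted[col] = df[col].map(lambda v: f"{v:.{ndec}f}")
formatted.to_csv('out.txt', sep=' ', header=False)
</code></pre>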
|
python|pandas|csv|dataframe
| 1
|
374,847
| 58,758,390
|
I want to get a specific value from a data frame and see what another value is a few rows down , but in a different column
|
<p>I have the following data frame now3: </p>
<pre><code> size date unix price
0 4.0 2019-11-03 02:42:00 1.570000e+12 9288.5
1 4.0 2019-11-03 02:42:00 1.570000e+12 9288.5
2 4.0 2019-11-03 02:42:00 1.570000e+12 9288.5
3 4.0 2019-11-03 02:42:00 1.570000e+12 9288.5
4 4.0 2019-11-03 02:42:00 1.570000e+12 9288.5
... ... ... ... ...
1048570 15.0 2019-11-05 05:48:00 1.570000e+12 9331.0
1048571 3851.0 2019-11-05 05:48:00 1.570000e+12 9331.0
1048572 3793.0 2019-11-05 05:48:00 1.570000e+12 9331.0
1048573 1000.0 2019-11-05 05:48:00 1.570000e+12 9331.0
1048574 200.0 2019-11-05 05:48:00 1.570000e+12 9331.0
</code></pre>
<p>I want to see what the price is at a certain size but 5 minutes later. For example at size 4 I want to see what the value of the price is but 5 minutes later. </p>
<p>I have the following code right now, and am having trouble get that certain data:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import csv
%matplotlib inline
headers = ['ticker', 'size', 'price', 'unix','type','date']
dtypes = {'ticker': 'str', 'size': 'float', 'price': 'float', 'unix': 'float','type': 'str','date': 'str'}
parse_dates = ['date']
btcnow = pd.read_csv('new 113-115.csv', header=None, names=headers, dtype=dtypes, parse_dates=parse_dates)
now3 = pd.DataFrame(btcnow, columns=['size','date','unix','price'])
x1 = now3.loc[now3["size"] == 24022, "date"]
y1 = now3.loc[now3["size"] == 24022, "size"]
</code></pre>
<p>This can either be done using unix time or the date-time. (5 minutes is 300 unix).</p>
<p>The expected output should be the price for that size 5 minutes later. For example, for size 4 at 2:42, I want to know the price at 2:47, so the output will be the price at 2:47. However, there is more than one size 4 in the data, so it should output the current time and price, plus the price 5 minutes later, next to each size 4.
Example of wanted output:</p>
<pre><code>size date Date +5 Price(in 5)
4 4.0 2019-11-03 02:42:00 2019-11-03 02:42:00 9278.5
4 4.0 2019-11-03 02:49:00 2019-11-03 02:54:00 9288
</code></pre>
<p>I tried the following:</p>
<pre><code>d1= now3.loc[(now3["size"] == 24022) & (now3["date"]+pd.Timedelta('5 minutes')), "price"]
</code></pre>
<p>But it gives me an error</p>
|
<p>The code below uses timedelta to shift the original times to the desired ones and stores them in a separate dataframe. Inner joining the desired (time, size) pairs with all the data then gives you the rows you want.</p>
<pre><code>from datetime import datetime, timedelta
time_interval = timedelta(minutes = 5)
df = df[[ 'time', 'size', 'price']]
# extract time size for merge
df_time_size= df[['time', 'size']]
df_time_size.loc[:, 'time'] = df_time_size.loc[:, 'time'] + time_interval
# inner join dataframe by size&time
df = df_time_size.merge(df[['time', 'size', 'price']], how = 'inner')
df['orig_time'] = df['time'] - time_interval
</code></pre>
<p>Out put will be like:</p>
<pre><code> time size price orig_time
0 2019-01-01 12:26:00 1 3 2019-01-01 12:21:00
1 2019-01-01 12:27:00 1 1 2019-01-01 12:22:00
</code></pre>
<p>Edit:</p>
<p>In order to get the latest price, we can do groupby, then sort(descending) by time, then get the first row.</p>
<pre><code>df = df.groupby('size').apply(lambda x: x.sort_values('time', ascending=False).head(1)).reset_index(drop=True)
</code></pre>
|
python|pandas|dataframe
| 1
|
374,848
| 58,791,635
|
Python: How to compute a sliding sum over data in a dataframe?
|
<p>If I have data frame like this.</p>
<pre><code>df = [3, 2, 4, 1, 0, 3]
</code></pre>
<p>I want to slice to sum 3 value like this.</p>
<pre><code>3+2+4 = 9
2+4+1 = 7
4+1+0 = 5
1+0+3 = 4
</code></pre>
<p>So, the result will be.</p>
<pre><code>9, 7, 5, 4
</code></pre>
<p>How can I compute this sliding sum in Python?</p>
|
<p>You can do this with a sliding slice in a list comprehension</p>
<pre><code>df = [3, 2, 4, 1, 0, 3]
print([sum(df[i:i+3]) for i in range(len(df)-2)])
</code></pre>
<pre><code>[9, 7, 5, 4]
</code></pre>
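<p>If the data actually lives in a pandas Series rather than a plain list, the same result can come from a rolling window (<code>dropna</code> removes the first two incomplete windows):</p>
<pre><code>import pandas as pd

s = pd.Series([3, 2, 4, 1, 0, 3])
print(s.rolling(3).sum().dropna().tolist())   # [9.0, 7.0, 5.0, 4.0]
</code></pre>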
|
python|python-3.x|pandas
| 2
|
374,849
| 58,823,497
|
Any way to speedup itertool.product
|
<p>I am using itertools.product to find the possible weights an asset can take given that the sum of all weights adds up to 100. </p>
<pre><code>min_wt = 10
max_wt = 50
step = 10
nb_Assets = 5
weight_mat = []
for i in itertools.product(range(min_wt, (max_wt+1), step), repeat = nb_Assets):
if sum(i) == 100:
weight = [i]
if np.shape(weight_mat)[0] == 0:
weight_mat = weight
else:
weight_mat = np.concatenate((weight_mat, weight), axis = 0)
</code></pre>
<p>The above code works, but it is too slow as it goes through the combinations that are not acceptable, example [50,50,50,50,50] eventually testing 3125 combinations instead of 121 possible combinations. Is there any way we can add the 'sum' condition within the loop to speed things up?</p>
|
<p>Many improvements are possible.</p>
<p>For starters, the search space can be reduced using <em>itertools.combinations_with_replacement()</em> because summation is commutative.</p>
<p>Also, the last addend should be computed rather than tested. For example, if <code>t[:4]</code> was <code>(10, 20, 30, 35)</code>, you could compute the last weight as <code>100 - sum(t[:4])</code>, giving a value of <em>5</em>. This avoids looping over every candidate value of <em>x</em> in <code>(10, 20, 30, 35, x)</code>.</p>
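<p>A short sketch combining the second idea with the variables from the question (it fixes the last weight to whatever remains and only keeps it when it falls on the allowed grid; applying the commutativity reduction as well would shrink the search space further):</p>
<pre><code>import itertools

weight_mat = []
for head in itertools.product(range(min_wt, max_wt + 1, step), repeat=nb_Assets - 1):
    last = 100 - sum(head)
    if min_wt <= last <= max_wt and (last - min_wt) % step == 0:
        weight_mat.append(head + (last,))
</code></pre>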
|
python|numpy|nested-loops|itertools
| 4
|
374,850
| 58,745,819
|
Splitting list of nested json to multiple columns
|
<p>This is sort of an extension on a previous question I asked, but different scope and approach.</p>
<p>I have a dataframe with a column populated by lists of dictionaries in each row</p>
<pre><code>0 [{"date":"0 1 0" firstBoxerRating:[null null] ...
1 [{"date":"2 2 1" firstBoxerRating:[null null] ...
2 [{"date":"2013-10-05" firstBoxerRating:[null n...
</code></pre>
<p>This is short sample of some of the info In a given row:</p>
<pre><code>[{"date":"2 2 1" firstBoxerRating:[null null] firstBoxerWeight:201.75 judges:[{"id":404749 name:"David Hudson" scorecard:[]} {"id":477070 name:"Mark Philips" scorecard:[]} {"id":404277 name:"Oren Shellenberger" scorecard:[]}] links:{"bio":1346666 bout:"558867/1346666" event:558867 other:[]} location:"Vanderbilt University Memorial Gymnasium Nashville" metadata:" time: 2:54\n | <span>referee:</span> <a href=\"/en/referee/403887\">Anthony Bryant</a><span> | </span><a href=\"/en/judge/404749\">David Hudson</a> | <a href=\"/en/judge/477070\">Mark Philips</a>
</code></pre>
<p>I would like to create a clean dataframe where the key in the dictionary becomes the column and the value, the row related to the particular column.</p>
<p>So here is an example of my desired output using the short sample as the input data:</p>
<pre><code>date firstBoxerRating firstBoxerWeight judges id.......
2 2 1 [null null] 201.75 404749.....
</code></pre>
<p>I do not believe the question is a duplicate of <a href="https://stackoverflow.com/questions/13575090/construct-pandas-dataframe-from-items-in-nested-dictionary">this</a></p>
<p>I have tried every solution in that question; my data also contains lists of nested dictionaries, resembling JSON if anything.</p>
<p>For example, this solution:</p>
<pre><code>pd.DataFrame.from_dict({(i,j): df[i][j]
for i in df.keys()
for j in df[i].keys()},
orient='index')
</code></pre>
<p>produces the exact same output I have</p>
<p>I have also tried unpacking the dicts in the column:</p>
<pre><code>df[0].apply(pd.Series)
</code></pre>
<p>However, again this produces the same output</p>
|
<p>Managed to resolve this issue by using regex and str.extract.</p>
<p>I extract the text between two strings and append said text to its relevant column</p>
<p>Example:</p>
<pre><code>df[0].str.extract('date(?P<date>.*?)firstBoxerRating(?P<firstBoxerRating>.*?)firstBoxerWeight(?P<firstBoxerWeight>.*?)judges(?P<JudgeID>.*?)links(?P<Links>.*?)location(?P<location>.*?)metadata(?P<metadata>.*?)')
</code></pre>
|
python|json|pandas
| 0
|
374,851
| 58,871,889
|
How to plot a numpy array with matplotlib?
|
<p>I generate a circle numpy array like this:</p>
<pre><code># -*- coding: utf-8 -*-
import numpy as np
a, b = 3, 3
n = 7
r = 3
arr = np.ones((n, n))
y, x = np.ogrid[-a:n-a, -b:n-b]
mask = x ** 2 + y ** 2 <= r**2
arr = 255 * mask.astype(int)
print(arr)
</code></pre>
<p>it prints this result:</p>
<pre><code>[[ 0 0 0 255 0 0 0]
[ 0 255 255 255 255 255 0]
[ 0 255 255 255 255 255 0]
[255 255 255 255 255 255 255]
[ 0 255 255 255 255 255 0]
[ 0 255 255 255 255 255 0]
[ 0 0 0 255 0 0 0]]
</code></pre>
<p>I want to plot this numpy array. How can I do this?</p>
<hr>
<p>EDIT</p>
<p>What I want to show is a circle, not the default picture:</p>
<p><a href="https://i.stack.imgur.com/7dUcf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7dUcf.png" alt="plt.imgshow show this"></a></p>
|
<p>This can be done using <a href="https://matplotlib.org/3.1.1/api/_as_gen/matplotlib.colors.ListedColormap.html" rel="nofollow noreferrer">matplotlib's color map feature.</a></p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
a, b = 3, 3
n = 7
r = 3
arr = np.ones((n, n))
y, x = np.ogrid[-a:n-a, -b:n-b]
mask = x ** 2 + y ** 2 <= r**2
arr = 255 * mask.astype(int)
cmap = colors.ListedColormap(['purple', 'yellow'])
fig, ax = plt.subplots()
ax.imshow(arr, cmap=cmap)
plt.show()
</code></pre>
<p>Which results in:</p>
<p><a href="https://i.stack.imgur.com/hXGcH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hXGcH.png" alt="enter image description here"></a></p>
|
python|numpy|matplotlib
| 0
|
374,852
| 58,698,611
|
Removing rows until a metric point is reached and extracting the minimum value
|
<p>I am classifying some text using a machine learning model. Essentially, I am fitting 80% of the data to the model and predicting the remaining 20%. On top of this, for each classification, I am outputting a confidence level as given by the ML model and a <code>check</code> variable, which is set to <code>TRUE</code> if the prediction is correct and <code>FALSE</code> otherwise.</p>
<p>The data frame output I get from the process above looks like this:</p>
<pre><code>+----------------------+
| confidence Check |
+----------------------+
| 1 TRUE |
| 0.72 TRUE |
| 0.68 TRUE |
| 1 TRUE |
| 0.150287157 FALSE |
| 1 TRUE |
| 0.7 TRUE |
| 0.68 TRUE |
| 1 TRUE |
| 0.903333333 FALSE |
+----------------------+
</code></pre>
<p>I would like to know the minimum confidence level that most closely approximates 95% accuracy in the data set. E.g., if I remove enough rows with <code>FALSE</code> values, starting with the rows that have the lowest confidence levels and are <code>FALSE</code>, what is the minimum confidence level reached in the data set?</p>
<p>I compute accuracy as the number of rows which are <code>TRUE</code> divided by the the total number of rows in the data frame.</p>
<p>How can I do this?</p>
|
<p>Here is the solution:</p>
<ol>
<li>read your dataframe in (code below), treating <code>Check</code> column as int (rather than boolean), and sort in order of increasing <code>confidence</code>.</li>
<li>now look at the values as you sweep your confidence threshold over rows:
<code>[ round(df.iloc[n:].Check.mean(), 3) for n in range(len(df.index))]</code> which gives <code>[0.8, 0.889, 0.875, 0.857, 0.833, 0.8, 1.0, 1.0, 1.0, 1.0]</code></li>
<li>after you find the number of the cutoff row <code>n</code>, then <code>df.iloc[n].confidence</code> gives you a cutoff confidence value which gives >= 0.95 accuracy. Hence <strong>you can pick your cutoff confidence threshold as any number between <code>df.iloc[n-1].confidence</code> ... <code>df.iloc[n].confidence</code></strong></li>
</ol>
<p>Code:</p>
<pre><code>import pandas as pd
from io import StringIO
dat = """confidence Check
1 TRUE
0.72 TRUE
0.68 TRUE
1 TRUE
0.150287157 FALSE
1 TRUE
0.7 TRUE
0.68 TRUE
1 TRUE
0.903333333 FALSE"""
df = pd.read_csv(StringIO(dat), header=0, delim_whitespace=True, dtype={'confidence':'float', 'Check':'int'})
df.sort_values(by='confidence', inplace=True)
df
confidence Check
4 0.150287 0
2 0.680000 1
7 0.680000 1
6 0.700000 1
1 0.720000 1
9 0.903333 0
0 1.000000 1
3 1.000000 1
5 1.000000 1
8 1.000000 1
# Sweep over the df, finding the cutoff row which gives us 0.95 confidence...
for n in range(len(df.index)):
if df.iloc[n:].Check.mean() >= 0.95:
break
# ...then find the range for the cutoff confidence level
print("Cutoff confidence level is between:", df.iloc[n-1].confidence, df.iloc[n].confidence)
# Cutoff confidence level is between: 0.903333333 1.0
</code></pre>
|
python|pandas
| 1
|
374,853
| 58,991,545
|
Sum the values of specific rows if the rows have the same value in a specific column
|
<p>I have a data frame like this:</p>
<pre><code> a b c
12456 11 123.1
12678 19 345.67
13278 19 1235.345
</code></pre>
<p>or in another format </p>
<pre><code><table>
<tr>
<td>12456</td>
<td>11</td><td>123.1</td>
</tr>
<tr>
<td>12678</td>
<td>19</td><td>345.67</td>
</tr>
<tr>
<td>13278</td>
<td>19</td>
<td>1235.345</td>
</tr>
</table>
</code></pre>
<p>The first column is the index. I need to add the values of the third column and combine them into one row if the second column has the same value. Could you suggest something to do this? The following is what I have tried, but it doesn't work:</p>
<pre><code> a,b,c=df_addweight.iloc[:,0].values,df_addweight.iloc[:, 1].values,df_addweight.iloc[:, 3].values
for u,v,w, in zip(range(1,len(a)),range(1,len(b)),range(1,len(c))):
if a[u]==a[u-1] and b[v]==b[v-1]:
df_addweight['W']= c[w]+c[w-1]
elif a[u]==a[u-1] and b[v]!=b[v-1]:
df_addweight['W']=c[w]
</code></pre>
|
<p>Use <strong>pandas</strong>:</p>
<pre><code>import pandas as pd
df = pd.read_csv("data.csv", delim_whitespace=True)
df
a b c
0 12456 11 123.100
1 12678 19 345.670
2 13278 19 1235.345
df.groupby('b')['c'].sum()
</code></pre>
<p>Output:</p>
<pre><code>b
11 123.100
19 1581.015
Name: c, dtype: float64
</code></pre>
|
python|pandas|dataframe
| 0
|
374,854
| 58,831,422
|
How to delete row data from a CSV file using pandas?
|
<p>I am new to Pandas and was wondering how to delete a specific row using the row id. Currently, I have a CSV file that contains data about different students. I do not have any headers in my CSV file. </p>
<p><strong>data.csv:</strong></p>
<pre><code>John 21 34 87 ........ #more than 100 columns of data
Abigail 18 45 53 ........ #more than 100 columns of data
Norton 19 45 12 ........ #more than 100 columns of data
</code></pre>
<p><strong>data.py:</strong></p>
<p>I have a list that has a record of some names.</p>
<pre><code>names = ['Jonathan', 'Abigail', 'Cassandra', 'Ezekiel']
</code></pre>
<p>I opened my CSV file in Python and used list comprehension in order to read all the names in the first column and store them in a list with a variable 'student_list' assigned.</p>
<p>Now, for all elements in the student_list, if the element is not seen in the <strong>'names'</strong> list, I want to delete that element in my CSV file. In this example, I want to delete John and Norton since they do not appear in the names list. How can I achieve this using pandas? Or is there a better alternative than pandas for this problem?</p>
<p>I have tried the following code below:</p>
<pre><code>csv_filename = data.csv
with open(csv_filename, 'r') as readfile:
reader = csv.reader(readfile, delimiter=',')
student_list = [row[0] for row in reader] #returns John, Abigail and Norton.
for student in student_list:
if student not in names:
id = student_list.index(student) #grab the index of the student in student list who's not found in the names list.
#using pandas
df = pd.read_csv(csv_filename) #read data.csv file
df.drop(df.index[id], in_place = True) #delete the row id for the student who does not exist in names list.
df.to_csv(csv_filename, index = False, sep=',') #close the csv file with no index
else:
print("Student name found in names list")
</code></pre>
<p>I am not able to delete the data properly. Can anybody explain?</p>
|
<p>You can just use a filter to filter out the ids you don't want. </p>
<p>Example:</p>
<pre><code>import pandas as pd
from io import StringIO
data = """
1,John
2,Beckey
3,Timothy
"""
df = pd.read_csv(StringIO(data), sep=',', header=None, names=['id', 'name'])
unwanted_ids = [3]
new_df = df[~df.id.isin(unwanted_ids)]
</code></pre>
<p>You could also use a filter and get the indices to drop the columns in the original dataframe. Example:</p>
<pre><code>df.drop(df[df.id.isin([3])].index, inplace=True)
</code></pre>
<p><strong>Update</strong> for updated question:</p>
<pre><code>df = pd.read_csv(csv_filename, sep='\t', header=None, names=['name', 'age'])
# keep only names wanted and reset index starting from 0
# drop=True makes sure to drop old index and not add it as column
df = df[df.name.isin(names)].reset_index(drop=True)
# if you really want index starting from 1 you can use this
df.index = df.index + 1
df.to_csv(csv_filename, index = False, sep=',')
</code></pre>
|
python-3.x|pandas
| 0
|
374,855
| 58,842,298
|
What is the fastest and best way to get a specific number of groups after applying groupby?
|
<p>I have more than <strong>1000</strong> groups with different <code>id</code>s and I only need to select a <strong>specific number</strong> of groups and read the <code>nth</code> row of every group. <a href="http://tpcg.io/XyNS8UNP" rel="nofollow noreferrer">Here</a> is an example of what I need:</p>
<pre><code> #These are the codes from different answers
import pandas as pd
import numpy as np
import time
import sys
df = pd.DataFrame({
'index':[0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884,0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884],
'id' : ['veh0', 'veh0', 'veh0', 'veh1', 'veh0', 'veh1', 'veh0', 'veh1', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192','veh0', 'veh0', 'veh0', 'veh1', 'veh0', 'veh1', 'veh0', 'veh1', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1', 'veh2', 'veh3', 'veh0', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192', 'veh1192'],
'veh_x' :[0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884,0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884],
'veh_y':[0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884,0, 1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 6, 6, 6, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 9, 10, 10, 10, 10, 11, 3855, 3856, 3857, 3858, 3859, 3860, 3861, 3862, 3863, 3864, 3865, 3866, 3867, 3868, 3869, 3870, 3871, 3872, 3873, 3874, 3875, 3876, 3877, 3878, 3879, 3880, 3881, 3882, 3883, 3884]
}
)
data=['veh0', 'veh1', 'veh2', 'veh3']
# print(df.groupby(['id']).head(1))
#first part
start = time.clock()
for i in range(0,20):
g=df.groupby(['id']).nth([i]).reset_index()
for x in data:
for idx, row in g.iterrows():
if x==row['id']:
print("code1 group",i,"=",row['id'])
end = time.clock()
print ("%.2gs" % (end-start) )
#second part
#This is what I need but it is running slowly when I add it to my whole dataset
start = time.clock()
for i in range(0,20):
for x in data: #these are the selected groups
g = df[df['id'].isin([x])].groupby(['id']).nth([i]).reset_index()
for x, row in g.iterrows():
print("code2 group",i,"=",row['id'])
end = time.clock()
print ("%.2gs" % (end-start) )
#Third part
start = time.clock()
for i in range(0,20):
g=df[df['id'].isin(data)].groupby('id').nth([i]).reset_index()
for x, row in g.iterrows():
print("code3 group",i,"=",row['id'])
end = time.clock()
print ("%.2gs" % (end-start))
#fourth part
start = time.clock()
df2 = df[df['id'].isin(data)]
for i in range(0,20):
for x in data:
row = df2.groupby('id').nth(i)
if(x in row.index):
print("code4 group",i, " = ", x)
end = time.clock()
print ("%.2gs" % (end-start))
#fifth part
def printf(text):
   print(text)
start = time.clock()
tmp = df.loc[df.id.isin(data)].groupby(['id']).apply(lambda x: x.reset_index(drop=True)).reset_index(level=1)
# cleanup and rename index
tmp = tmp.rename(columns={'level_1': 'group'})
# print 20 first groups
for i in range(20):
lst= tmp.loc[tmp.group == i].apply(lambda x:x, axis=1)
for x, row in lst.iterrows():
print("code5 group",i,"=",row['id'])
end = time.clock()
print ("%.2gs" % (end-start))
</code></pre>
<p>The first part of the code reads all the groups and returns the <code>nth</code> row of every group, but I need only five or six or more. The problem is that I don't know any information about the groups. I can use a <code>counter</code> and then <code>break</code>, but the code runs very slowly because I need to load more than 30000 records on every iteration. Here I added <code>data=['veh0', 'veh1', 'veh2', 'veh3']</code> as an example, but it can be chosen randomly. </p>
<p>The second part is what I want, but the code still runs slowly. The second part takes 0.43s, the first part takes 0.14s, and the third part takes 0.077s. What is the best way of making it better?</p>
<p>I appreciate any help.</p>
|
<p>To the best of my understanding of your problem:</p>
<pre><code>>>> import pandas as pd
>>> df = \
pd.DataFrame(
{
'id': [i for i in range (1000)]*10,
'col1': ['col1 occurence {} for id {}'.format(j, i) for j in range(10) for i in range (1000)],
'col2': ['col2 occurence {} for id {}'.format(j, i) for j in range(10) for i in range (1000)]
}
)
>>> df.head()
id col1 col2
0 0 col1 occurence 0 for id 0 col2 occurence 0 for id 0
1 1 col1 occurence 0 for id 1 col2 occurence 0 for id 1
2 2 col1 occurence 0 for id 2 col2 occurence 0 for id 2
3 3 col1 occurence 0 for id 3 col2 occurence 0 for id 3
4 4 col1 occurence 0 for id 4 col2 occurence 0 for id 4
</code></pre>
<p>This will give you precisely the 0th, 5th and 9th data rows for each id (modify the list [0,5,9] in accordance with your case):</p>
<pre><code>>>> df.groupby(['id']).nth([0,5,9]).reset_index()
id col1 col2
0 0 col1 occurence 0 for id 0 col2 occurence 0 for id 0
1 0 col1 occurence 5 for id 0 col2 occurence 5 for id 0
2 0 col1 occurence 9 for id 0 col2 occurence 9 for id 0
3 1 col1 occurence 0 for id 1 col2 occurence 0 for id 1
4 1 col1 occurence 5 for id 1 col2 occurence 5 for id 1
... ... ... ...
2995 998 col1 occurence 0 for id 998 col2 occurence 0 for id 998
2996 998 col1 occurence 5 for id 998 col2 occurence 5 for id 998
2997 999 col1 occurence 5 for id 999 col2 occurence 5 for id 999
2998 999 col1 occurence 0 for id 999 col2 occurence 0 for id 999
2999 999 col1 occurence 9 for id 999 col2 occurence 9 for id 999
[3000 rows x 3 columns]
</code></pre>
<p><strong>EDIT:</strong>
Maybe this might help you (modify the list [1,300] in accordance with your case):</p>
<pre><code>>>> df[df['id'].isin([1,300])].groupby(['id']).nth([0]).reset_index()
id col1 col2
0 1 col1 occurence 0 for id 1 col2 occurence 0 for id 1
1 300 col1 occurence 0 for id 300 col2 occurence 0 for id 300
</code></pre>
|
python|python-3.x|pandas
| 1
|
374,856
| 58,807,858
|
Update multiple rows of SQL table from Python script
|
<p>I have a massive table (over 100B records), that I added an empty column to. I parse strings from another field (string) if the required string is available, extract an integer from that field, and want to update it in the new column for all rows that have that string.</p>
<p>At the moment, after data has been parsed and saved locally in a dataframe, I iterate on it to update the Redshift table with clean data. This takes approx 1sec/iteration, which is way too long.</p>
<p>My current code example:</p>
<pre><code>conn = psycopg2.connect(connection_details)
cur = conn.cursor()
clean_df = raw_data.apply(clean_field_to_parse)
for ind, row in clean_df.iterrows():
update_query = build_update_query(row.id, row.clean_integer1, row.clean_integer2)
cur.execute(update_query)
</code></pre>
<p>where <code>update_query</code> is a function to generate the update query:</p>
<pre><code>def update_query(id, int1, int2):
query = """
update tab_tab
set
clean_int_1 = {}::int,
clean_int_2 = {}::int,
updated_date = GETDATE()
where id = {}
;
"""
return query.format(int1, int2, id)
</code></pre>
<p>and where clean_df is structured like:</p>
<pre><code>id . field_to_parse . clean_int_1 . clean_int_2
1 . {'int_1':'2+1'}. 3 . np.nan
2 . {'int_2':'7-0'}. np.nan . 7
</code></pre>
<p>Is there a way to update specific table fields in bulk, so that there is no need to execute one query at a time?</p>
<p>I'm parsing the strings and running the update statement from Python. The database is stored on Redshift.</p>
|
<p>As mentioned, consider pure SQL and avoid iterating through billions of rows by pushing the Pandas data frame to Postgres as a staging table and then run one single <code>UPDATE</code> across both tables. With SQLAlchemy you can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html" rel="nofollow noreferrer"><code>DataFrame.to_sql</code></a> to create a table replica of data frame. Even add an index of the join field, <em>id</em>, and drop the very large staging table at end.</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine("postgresql+psycopg2://myuser:mypwd!@myhost/mydatabase")
# PUSH TO POSTGRES (SAME NAME AS DF)
clean_df.to_sql(name="clean_df", con=engine, if_exists="replace", index=False)
# SQL UPDATE (USING TRANSACTION)
with engine.begin() as conn:
sql = "CREATE INDEX idx_clean_df_id ON clean_df(id)"
conn.execute(sql)
sql = """UPDATE tab_tab t
SET t.clean_int_1 = c.int1,
t.clean_int_2 = c.int2,
t.updated_date = GETDATE()
FROM clean_df c
WHERE c.id = t.id
"""
conn.execute(sql)
sql = "DROP TABLE IF EXISTS clean_df"
conn.execute(sql)
engine.dispose()
</code></pre>
|
python|sql|database|pandas|amazon-redshift
| 4
|
374,857
| 70,201,921
|
BERT Domain Adaptation
|
<p>I am using <code>transformers.BertForMaskedLM</code> to further pre-train the BERT model on my custom dataset. I first serialize all the text to a <code>.txt</code> file by separating the words by a whitespace. Then, I am using <code>transformers.TextDataset</code> to load the serialized data with a BERT tokenizer given as <code>tokenizer</code> argument. Then, I am using <code>BertForMaskedLM.from_pretrained()</code> to load the pre-trained model (which is what <code>transformers</code> library presents). Then, I am using <code>transformers.Trainer</code> to further pre-train the model on my custom dataset, i.e., domain adaptation, for 3 epochs. I save the model with <code>trainer.save_model()</code>. Then, I want to load the further pre-trained model to get the embeddings of the words in my custom dataset. To load the model, I am using <code>AutoModel.from_pretrained()</code> but this pops up a warning.</p>
<pre><code>Some weights of the model checkpoint at {path to my further pre-trained model} were not used when initializing BertModel
</code></pre>
<p>So, I know why this pops up. Because I further pre-trained using <code>transformers.BertForMaskedLM</code> but when I load with <code>transformers.AutoModel</code>, it loads it as <code>transformers.BertModel</code>. What I do not understand is if this is a problem or not. I just want to get the embeddings, e.g., embedding vector with a size of 768.</p>
|
<p>You saved a <code>BERT</code> model with an LM head attached. Now you are loading the serialized file into a standalone <code>BERT</code> structure without any extra elements, so the warning is issued. This is pretty normal and not a fatal error! You can check the list of unloaded params like below:</p>
<pre><code>from transformers import BertTokenizer, BertModel, BertLMHeadModel, BertConfig
import torch

config = BertConfig.from_pretrained('bert-base-cased')
lmbert = BertLMHeadModel.from_pretrained('bert-base-cased', config=config)
lmbert.save_pretrained('you_desired_path/BertLMHeadModel')
lmbert_params = []
for name, param in lmbert.named_parameters():
lmbert_params.append(name)
bert = BertModel.from_pretrained('you_desired_path/BertLMHeadModel')
bert_params = []
for name, param in bert.named_parameters():
bert_params.append(name)
params_related_to_lm_head = [param_name for param_name in lmbert_params if param_name.replace('bert.', '') not in bert_params]
params_related_to_lm_head
</code></pre>
<p>output:</p>
<pre><code>['cls.predictions.bias',
'cls.predictions.transform.dense.weight',
'cls.predictions.transform.dense.bias',
'cls.predictions.transform.LayerNorm.weight',
'cls.predictions.transform.LayerNorm.bias']
</code></pre>
|
python|nlp|pytorch|huggingface-transformers|bert-language-model
| 1
|
374,858
| 70,250,296
|
How to interpret a 4-dimensional contingency table in pandas
|
<p>I am trying to understand this contingency table and I have no luck looking in the documentation of pandas or any other related questions. This is for a personal machine learning project.</p>
<p>I have the following example data:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({"la":[0,1,1,0,1], "lp1": [1,0,1,0,0], "lp2":[1,1,0,0,1], "lp3":[0,0,1,1,1], "lp4":[0,1,1,0,0]})
</code></pre>
<p>Hence we will have a Boolean contingency table. If I run:</p>
<pre class="lang-py prettyprint-override"><code>df.crosstab(index=df['la'], columns=[df['lp1'],df['lp2']])
</code></pre>
<p>I get the output:</p>
<pre class="lang-py prettyprint-override"><code>lp1 0 1
lp2 0 1 0 1
la
0 1 0 0 1
1 0 2 1 0
</code></pre>
<p>This can be better visualised in a table like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
<th></th>
</tr>
</thead>
<tbody>
<tr>
<td></td>
<td><strong>lp1</strong>:</td>
<td><strong>0</strong></td>
<td><strong>0</strong></td>
<td><strong>1</strong></td>
<td><strong>1</strong></td>
</tr>
<tr>
<td></td>
<td><strong>lp2</strong>:</td>
<td><strong>0</strong></td>
<td><strong>1</strong></td>
<td><strong>0</strong></td>
<td><strong>1</strong></td>
</tr>
<tr>
<td><strong>la</strong></td>
<td><strong>0</strong></td>
<td><em>1</em></td>
<td><em>0</em></td>
<td><em>0</em></td>
<td><em>1</em></td>
</tr>
<tr>
<td></td>
<td><strong>1</strong></td>
<td><em>0</em></td>
<td><em>2</em></td>
<td><em>1</em></td>
<td><em>0</em></td>
</tr>
</tbody>
</table>
</div>
<p>Which can be better understood as (e.g.) there were <code>2</code> occurrences of <code>lp1=0, lp2=1 and la=1</code> in the dataset. However, if I run:</p>
<pre class="lang-py prettyprint-override"><code>pd.crosstab(index=df['la'], columns=[df['lp1'],df['lp2'],df['lp3']])
</code></pre>
<p>I expect a table like this:</p>
<pre class="lang-py prettyprint-override"><code>lp1 0 1
lp2 0 1 0 1
lp3 0 1 0 1 0 1 0 1
la
0 1 #results here
1 0
</code></pre>
<p>Instead, I get:</p>
<pre class="lang-py prettyprint-override"><code>lp1 0 1
lp2 0 1 0 1
lp3 1 0 1 1 0
la
0 1 0 0 0 1
1 0 1 1 1 0
</code></pre>
<p>I have no idea how to interpret this table. I am unsure why there is a repeating label (the repeating 1 in lp3), and why the labels suddenly change order from <code>0,1</code> to <code>1,0</code>. This continues for higher dimensions (e.g. 5 dimensions):</p>
<pre class="lang-py prettyprint-override"><code>pd.crosstab(index=df['la'], columns=[df['lp1'],df['lp2'],df['lp3'],df['lp4']])
</code></pre>
<p>produces:</p>
<pre class="lang-py prettyprint-override"><code>lp1 0 1
lp2 0 1 0 1
lp3 1 0 1 1 0
lp4 0 1 0 1 0
la
0 1 0 0 0 1
1 0 1 1 1 0
</code></pre>
<p>It all makes sense to me until you have a table > 3 dimensions. I am running Python 3.10 and pandas 1.3.4.</p>
<p>I have tried to go through the source code, docs and related StackOverflow questions and I have not found an answer.</p>
<p>How do I properly interpret this 4 dimensional table, please?</p>
|
<p>I figured it out: columns whose entries are all NaN are dropped, which truncates the output, because <code>dropna</code> is set to <code>True</code> by default:</p>
<blockquote>
<p><strong>dropna</strong>: <em><strong>bool</strong></em>, <em><strong>default</strong></em> <em><strong>True</strong></em></p>
<p>Do not include columns whose entries are all NaN.</p>
</blockquote>
<p>This means that the table I showed above simply does not contain these columns.</p>
<p>Hence, setting <code>dropna</code> to <code>False</code> made the table more readable (to me). So if I run:</p>
<p><code>pd.crosstab(index=df['la'], columns=[df['lp1'],df['lp2'],df['lp3']], dropna=False)</code></p>
<p>I get output:</p>
<pre><code>lp1 0 1
lp2 0 1 0 1
lp3 0 1 0 1 0 1 0 1
la
0 0 1 0 0 0 0 1 0
1 0 0 1 1 0 1 0 0
</code></pre>
<p>which is much clearer.</p>
<p>In conclusion: rtfm</p>
|
python|pandas|dataframe
| 0
|
374,859
| 70,118,623
|
ValueError after attempting to use OneHotEncoder and then normalize values with make_column_transformer
|
<p>So I was trying to convert my data's timestamps from Unix timestamps to a more readable date format. I created a simple Java program to do so and write to a .csv file, and that went smoothly. I tried using it for my model by one-hot encoding it into numbers and then turning everything into normalized data. However, after my attempt to one-hot encode (which I am not sure if it even worked), my normalization process using make_column_transformer failed.</p>
<pre><code># model 4
# next model
import tensorflow as tf
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from tensorflow.keras import layers
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from sklearn.model_selection import train_test_split
np.set_printoptions(precision=3, suppress=True)
btc_data = pd.read_csv(
"/content/drive/MyDrive/Science Fair/output2.csv",
names=["Time", "Open"])
X_btc = btc_data[["Time"]]
y_btc = btc_data["Open"]
enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(X_btc)
X_btc = enc.transform(X_btc)
print(X_btc)
X_train, X_test, y_train, y_test = train_test_split(X_btc, y_btc, test_size=0.2, random_state=62)
ct = make_column_transformer(
(MinMaxScaler(), ["Time"])
)
ct.fit(X_train)
X_train_normal = ct.transform(X_train)
X_test_normal = ct.transform(X_test)
callback = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)
btc_model_4 = tf.keras.Sequential([
layers.Dense(100, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(100, activation="relu"),
layers.Dense(1, activation="linear")
])
btc_model_4.compile(loss = tf.losses.MeanSquaredError(),
optimizer = tf.optimizers.Adam())
history = btc_model_4.fit(X_train_normal, y_train, batch_size=8192, epochs=100, callbacks=[callback])
btc_model_4.evaluate(X_test_normal, y_test, batch_size=8192)
y_pred = btc_model_4.predict(X_test_normal)
btc_model_4.save("btc_model_4")
btc_model_4.save("btc_model_4.h5")
# plot model
def plot_evaluations(train_data=X_train_normal,
train_labels=y_train,
test_data=X_test_normal,
test_labels=y_test,
predictions=y_pred):
print(test_data.shape)
print(predictions.shape)
plt.figure(figsize=(100, 15))
plt.scatter(train_data, train_labels, c='b', label="Training")
plt.scatter(test_data, test_labels, c='g', label="Testing")
plt.scatter(test_data, predictions, c='r', label="Results")
plt.legend()
plot_evaluations()
# plot loss curve
pd.DataFrame(history.history).plot()
plt.ylabel("loss")
plt.xlabel("epochs")
</code></pre>
<p>My normal data format is like so:</p>
<pre><code>2015-12-05 12:52:00,377.48
2015-12-05 12:53:00,377.5
2015-12-05 12:54:00,377.5
2015-12-05 12:56:00,377.5
2015-12-05 12:57:00,377.5
2015-12-05 12:58:00,377.5
2015-12-05 12:59:00,377.5
2015-12-05 13:00:00,377.5
2015-12-05 13:01:00,377.79
2015-12-05 13:02:00,377.5
2015-12-05 13:03:00,377.79
2015-12-05 13:05:00,377.74
2015-12-05 13:06:00,377.79
2015-12-05 13:07:00,377.64
2015-12-05 13:08:00,377.79
2015-12-05 13:10:00,377.77
2015-12-05 13:11:00,377.7
2015-12-05 13:12:00,377.77
2015-12-05 13:13:00,377.77
2015-12-05 13:14:00,377.79
2015-12-05 13:15:00,377.72
2015-12-05 13:16:00,377.5
2015-12-05 13:17:00,377.49
2015-12-05 13:18:00,377.5
2015-12-05 13:19:00,377.5
2015-12-05 13:20:00,377.8
2015-12-05 13:21:00,377.84
2015-12-05 13:22:00,378.29
2015-12-05 13:23:00,378.3
2015-12-05 13:24:00,378.3
2015-12-05 13:25:00,378.33
2015-12-05 13:26:00,378.33
2015-12-05 13:28:00,378.31
2015-12-05 13:29:00,378.68
</code></pre>
<p>The first is the date and the second value after the comma is the price of BTC at that time. Now after "one-hot encoding", I added a print statement to print the value of those X values, and that gave the following value:</p>
<pre><code> (0, 0) 1.0
(1, 1) 1.0
(2, 2) 1.0
(3, 3) 1.0
(4, 4) 1.0
(5, 5) 1.0
(6, 6) 1.0
(7, 7) 1.0
(8, 8) 1.0
(9, 9) 1.0
(10, 10) 1.0
(11, 11) 1.0
(12, 12) 1.0
(13, 13) 1.0
(14, 14) 1.0
(15, 15) 1.0
(16, 16) 1.0
(17, 17) 1.0
(18, 18) 1.0
(19, 19) 1.0
(20, 20) 1.0
(21, 21) 1.0
(22, 22) 1.0
(23, 23) 1.0
(24, 24) 1.0
: :
(2526096, 2526096) 1.0
(2526097, 2526097) 1.0
(2526098, 2526098) 1.0
(2526099, 2526099) 1.0
(2526100, 2526100) 1.0
(2526101, 2526101) 1.0
(2526102, 2526102) 1.0
(2526103, 2526103) 1.0
(2526104, 2526104) 1.0
(2526105, 2526105) 1.0
(2526106, 2526106) 1.0
(2526107, 2526107) 1.0
(2526108, 2526108) 1.0
(2526109, 2526109) 1.0
(2526110, 2526110) 1.0
(2526111, 2526111) 1.0
(2526112, 2526112) 1.0
(2526113, 2526113) 1.0
(2526114, 2526114) 1.0
(2526115, 2526115) 1.0
(2526116, 2526116) 1.0
(2526117, 2526117) 1.0
(2526118, 2526118) 1.0
(2526119, 2526119) 1.0
(2526120, 2526120) 1.0
</code></pre>
<p>Following fitting for normalization, I receive the following error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/sklearn/utils/__init__.py in _get_column_indices(X, key)
408 try:
--> 409 all_columns = X.columns
410 except AttributeError:
5 frames
AttributeError: columns not found
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/sklearn/utils/__init__.py in _get_column_indices(X, key)
410 except AttributeError:
411 raise ValueError(
--> 412 "Specifying the columns using strings is only "
413 "supported for pandas DataFrames"
414 )
ValueError: Specifying the columns using strings is only supported for pandas DataFrames
</code></pre>
<p>Am I one-hot encoding correctly? What is the appropriate way to do this? Should I directly implement the one-hot encoder in my normalization process?</p>
|
<p>Using <strong>OneHotEncoder</strong> is not the way to go here; it's better to extract features from the <strong>time</strong> column as separate features like year, month, day, hour, minutes, etc., and give these columns as input to your model.</p>
<pre><code>btc_data['Year'] = btc_data['Time'].astype('datetime64[ns]').dt.year
btc_data['Month'] = btc_data['Time'].astype('datetime64[ns]').dt.month
btc_data['Day'] = btc_data['Time'].astype('datetime64[ns]').dt.day
</code></pre>
<p>The issue here comes from the <strong>OneHotEncoder</strong>, which returns a <strong>scipy sparse matrix</strong> and drops the column "Time"; to correct this, transform the output back into a <strong>pandas dataframe</strong> and add the "Time" column.</p>
<pre><code>enc = OneHotEncoder(handle_unknown="ignore")
enc.fit(X_btc)
X_btc = enc.transform(X_btc)
X_btc = pd.DataFrame(X_btc.todense())
X_btc["Time"] = btc_data["Time"]
</code></pre>
<p>One way to work around the <strong>memory issue</strong> is:</p>
<ol>
<li>Generate two train/test splits with the same random_state, one for the <strong>pandas data frame</strong> and one for the <strong>scipy sparse matrix</strong>.</li>
</ol>
<pre><code>X_train, X_test, y_train, y_test = train_test_split(X_btc, y_btc, test_size=0.2, random_state=62)
X_train_pd, X_test_pd, y_train_pd, y_test_pd = train_test_split(btc_data, y_btc, test_size=0.2, random_state=62)
</code></pre>
<ol start="2">
<li>Use the pandas data frame for the <strong>MinMaxScaler()</strong>.</li>
</ol>
<pre class="lang-py prettyprint-override"><code> ct = make_column_transformer((MinMaxScaler(), ["Time"]))
ct.fit(X_train_pd)
result_train = ct.transform(X_train_pd)
result_test = ct.transform(X_test_pd)
</code></pre>
<ol start="3">
<li>Use generators to load data in the train and test phases (this avoids the memory issue) and include the scaled time in the generators.</li>
</ol>
<pre class="lang-py prettyprint-override"><code>def nn_batch_generator(X_data, y_data, scaled, batch_size):
samples_per_epoch = X_data.shape[0]
number_of_batches = samples_per_epoch / batch_size
counter = 0
index = np.arange(np.shape(y_data)[0])
while True:
index_batch = index[batch_size * counter:batch_size * (counter + 1)]
scaled_array = scaled[index_batch]
X_batch = X_data[index_batch, :].todense()
y_batch = y_data.iloc[index_batch]
counter += 1
yield np.array(np.hstack((np.array(X_batch), scaled_array))), np.array(y_batch)
if (counter > number_of_batches):
counter = 0
def nn_batch_generator_test(X_data, scaled, batch_size):
samples_per_epoch = X_data.shape[0]
number_of_batches = samples_per_epoch / batch_size
counter = 0
index = np.arange(np.shape(X_data)[0])
while True:
index_batch = index[batch_size * counter:batch_size * (counter + 1)]
scaled_array = scaled[index_batch]
X_batch = X_data[index_batch, :].todense()
counter += 1
yield np.hstack((X_batch, scaled_array))
if (counter > number_of_batches):
counter = 0
</code></pre>
<p>Finally fit the model</p>
<pre class="lang-py prettyprint-override"><code>
history = btc_model_4.fit(nn_batch_generator(X_train, y_train, scaled=result_train, batch_size=2), steps_per_epoch=#Todetermine,
batch_size=2, epochs=10,
callbacks=[callback])
btc_model_4.evaluate(nn_batch_generator(X_test, y_test, scaled=result_test, batch_size=2), batch_size=2, steps=#Todetermine)
y_pred = btc_model_4.predict(nn_batch_generator_test(X_test, scaled=result_test, batch_size=2), steps=#Todetermine)
</code></pre>
|
python|pandas|tensorflow|deep-learning|one-hot-encoding
| 3
|
374,860
| 70,212,748
|
How to make an order column when grouping by another column
|
<p>I have a dataframe in a format:</p>
<pre><code>d = {'hour': [1, 1,2,2], 'value': [10, 50,200,100]}
df = pd.DataFrame(data=d)
</code></pre>
<p>How can I create a column <code>order</code>, where <code>order</code> is the rank of the values within each group of the <code>hour</code> column?</p>
<p>The result should be:</p>
<pre><code>index hour value order
0 1 10 1
1 1 50 2
2 2 200 2
3 2 100 1
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.rank.html" rel="nofollow noreferrer"><code>GroupBy.rank</code></a>:</p>
<pre><code>df['order'] = df.groupby('hour')['value'].rank(method='dense').astype(int)
print (df)
hour value order
0 1 10 1
1 1 50 2
2 2 200 2
3 2 100 1
</code></pre>
|
python|pandas
| 4
|
374,861
| 70,214,493
|
How does the inplace parameter in pandas work?
|
<p>In pandas, the <strong>inplace</strong> parameter makes the modification on the referenced object, but I thought that in Python data is passed by value, not by reference. I want to know how this is implemented or how it works.</p>
|
<blockquote>
<p>Python’s argument passing model is neither “Pass by Value” nor “Pass by Reference” but it is “Pass by Object Reference”</p>
</blockquote>
<p>When you pass a dictionary to a function and modify that dictionary inside the function, the changes will reflect on the dictionary <em>everywhere</em>.</p>
<p>However, here we are dealing with something even less ambiguous. When passing <code>inplace=True</code> to a method call on a <code>pandas</code> object (be it a <code>Series</code> or a <code>DataFrame</code>), we are simply saying: change the current object instead of getting me a new one. Method calls can modify variables of the instances on which they were called - this is independent of whether a language is "call by value" or "call by reference". The only case in which this would get tricky is if a language only had constants (think <code>val</code>) and no variables (think <code>var</code>) - think purely functional languages. Then, it's true - you can only return new objects and can't modify any old ones. In practice, though, even in purest of languages you can find ways to update records in-place.</p>
|
python|python-3.x|pandas|mutability|call-by-value
| 1
|
374,862
| 70,374,119
|
What is making my model predicting the wrong value when running on my laptop and colab?
|
<p>I've exported a TF model to the <code>.h5</code> format to use it for my project. When running and testing on Colab, it predicts perfectly, but when I tried to run predictions with the <code>.h5</code> model on my machine (laptop), it did not predict the correct class and therefore did not work like it used to in Colab. I've tried searching the net but did not find an answer or a clue. Does anyone know where the problem might be?</p>
<p>Example
Input image: dog type of <code>golden_retriever</code></p>
<p><strong>(COLAB)</strong> -> predicts <code>golden_retriever</code> (correct)</p>
<pre><code>model = tf.keras.models.load_model("model_mac.h5", custom_objects={"KerasLayer": hub.KerasLayer})
custom_images_paths = ["golde.jpeg"]
custom_data = create_data_batches(custom_images_paths, test_data=True)
custom_preds = model.predict(custom_data)
custom_pred_labels = [get_pred_label(custom_preds[i]) for i in range(len(custom_preds))]
</code></pre>
<p><strong>(MY MACHINE/LAPTOP)</strong> -> predicts <code>norwegian_elkhound</code> (something else that does not look like a <code>golden_retriever</code>. (wrong)</p>
<pre><code>model = tf.keras.models.load_model("model_mac.h5", custom_objects {"KerasLayer":hub.KerasLayer})
img = "golde.jpeg"
custom_data = create_data_batches([img], test_data=True)
custom_preds = model.predict(custom_data)
custom_pred_labels = [get_pred_label(custom_preds[i]) for i in range(len(custom_preds))]
</code></pre>
<p>Thanks in advance.</p>
|
<p>I'd check the outputs of each step of your model prediction code.</p>
<p>Are you able to verify your model gets the same results when you call <code>model.evaluate()</code> on a test split of the dataset?</p>
<p>That's one of the first things I'd try to do.</p>
<p>Otherwise, you might want to check out the part of the docs dealing with saving to <code>.h5</code>: <a href="https://www.tensorflow.org/tutorials/keras/save_and_load#hdf5_format" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/keras/save_and_load#hdf5_format</a></p>
<p>See the "Saving custom objects" section, have you defined the <code>hub.KerasLayer</code> on your local machine? Perhaps that has something to do with it.</p>
|
python|tensorflow|machine-learning|keras|deep-learning
| 0
|
374,863
| 70,277,501
|
Exception Report from pandas DataFrame
|
<p><strong>Background:<br></strong>
The following function takes a pandas DataFrame and renames it <code>exceptions_df</code> whilst applying 2x conditions to it.</p>
<p><strong>Function:</strong></p>
<pre><code>def ownership_exception_report():
df = ownership_qc()
exceptions_df = df[df['Entity ID %'] != 100.00]
exceptions_df = df[df['Account # %'] != 100.00]
return exceptions_df
</code></pre>
<p><strong>My problem:</strong><br>Whilst my code works fine, I wonder if there is a simpler and more elegant way to apply 2x conditions to a DataFrame and resave it? At the moment I am simply resaving the <code>exceptions_df</code> twice and it seems rather messy. Or perhaps I am wrong, and this is the correct way to apply conditions to a DataFrame?</p>
|
<pre><code>def ownership_exception_report():
df = ownership_qc()
return df[(df['Entity ID %'] != 100.00) & (df['Account # %'] != 100.00)]
</code></pre>
<p>Or:</p>
<pre><code>def ownership_exception_report():
df = ownership_qc()
return df[df['Entity ID %'].ne(100.00) & df['Account # %'].ne(100.00)]
</code></pre>
<p>Both will return a copy of <code>df</code> with only the rows where <code>Entity ID %</code> is not <code>100</code> AND <code>Account # %</code> is not <code>100</code>.</p>
|
python|pandas|exception
| 3
|
374,864
| 70,138,600
|
Is there any way to check the repetition of the value in a B field, taking into account a sorted A field, for each ID group? (See example below)
|
<p>Suppose we have a table of thousands of users with an <em>ID</em>, a <em>year-month</em> and a <em>balance($)</em>.
Let's simplify it in the following table with 3 users:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user ID (numeric)</th>
<th style="text-align: center;">year-month (string)</th>
<th style="text-align: right;">balance(float)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-01</td>
<td style="text-align: right;">500.0</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-02</td>
<td style="text-align: right;">500.0</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-03</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-04</td>
<td style="text-align: right;">500.0</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-05</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2019-06</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2018-09</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2018-10</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2018-11</td>
<td style="text-align: right;">750.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2018-12</td>
<td style="text-align: right;">500.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-01</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-02</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-03</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-04</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-05</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-06</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2019-07</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">2018-01</td>
<td style="text-align: right;">200.0</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">2018-02</td>
<td style="text-align: right;">0.0</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">2018-03</td>
<td style="text-align: right;">200.0</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">2018-04</td>
<td style="text-align: right;">0.0</td>
</tr>
</tbody>
</table>
</div>
<p>The main rule is that: <strong>If the balance reaches 0 in a given month, there cannot be a month afterwards where the balance value is other than 0</strong>. This means that the only user who would have his records correctly reported would be ID=2.</p>
<p>As a final output, I want a table that shows me how many user IDs satisfy the rule and how many do not:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">well_informed</th>
<th style="text-align: center;">num_cases</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">YES</td>
<td style="text-align: center;">1</td>
</tr>
<tr>
<td style="text-align: left;">NO</td>
<td style="text-align: center;">2</td>
</tr>
</tbody>
</table>
</div>
<p>I have tried several things without even getting close to a result because of the difficulty of iterating through the consecutive records of a user ID and checking the condition.</p>
<p>A solution in both Python-Pandas and SQL is valid for the environment I am working in. Thank you very much!</p>
<p><strong>EDIT v1</strong>: @d.b's and @Henry Ecker's solution works fine for the example I have provided, but not for my problem, because I had not specified some cases that should be valid, such as the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">user ID (numeric)</th>
<th style="text-align: center;">year-month (string)</th>
<th style="text-align: right;">balance(float)</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-02</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-03</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-04</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-05</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-06</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-07</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-08</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-09</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-10</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-11</td>
<td style="text-align: right;">1000.0</td>
</tr>
<tr>
<td style="text-align: left;">4</td>
<td style="text-align: center;">2019-12</td>
<td style="text-align: right;">1000.0</td>
</tr>
</tbody>
</table>
</div>
<p>which should be considered TRUE, but the proposed solution classifies it as FALSE.</p>
|
<p>For each <code>ID</code>, perform run length encoding on <code>balance</code> and check if only the last value for that encoding is <code>0</code>.</p>
<pre class="lang-py prettyprint-override"><code>import pdrle
def foo(x):
rle = pdrle.encode(x.eq(0))
if rle.vals.sum() == 0:
return True
if rle.vals.sum() == 1:
return rle.vals.tail(1).item()
return False
ans = dat.groupby(dat["user ID"], as_index=False).balance.apply(foo)
ans
# user ID balance
# 0 1 False
# 1 2 True
# 2 3 False
</code></pre>
<p>In the next step, you can summarize <code>ans</code></p>
<pre class="lang-py prettyprint-override"><code>ans.groupby("balance").size()
# balance
# False 2
# True 1
# dtype: int64
</code></pre>
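<p>For reference, a sketch of the same check in plain pandas without the <code>pdrle</code> dependency (assuming, as above, that <code>dat</code> holds the table and the rows are already ordered by year-month within each user):</p>
<pre><code>def well_informed(balance):
    seen_zero = balance.eq(0).cummax()            # True from the first zero onwards
    return not (seen_zero &amp; balance.ne(0)).any()  # no non-zero balance after a zero

ans = dat.groupby("user ID")["balance"].apply(well_informed)
print(ans.value_counts())   # counts of well informed (True) vs not (False) users
</code></pre>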
|
python|sql|pandas|group-by|sas
| 1
|
374,865
| 70,087,537
|
How to prevent overflow in MLE method for large data
|
<p>I am trying to do a manual MLE estimation using scipy. My dataset is not that large so it surprises me that my values get very large very fast and scipy.optimize.minimize seems to get into NaNs for my density extremely quickly. I've tried to use the sum of logarithms instead of the product of the densities but that did not make things better at all.</p>
<p>This is my data:</p>
<pre><code>[
1.000, 1.000, 1.000, 1.004, 1.005, 1.008, 1.014, 1.015, 1.023, 1.035, 1.038,
1.046, 1.048, 1.050, 1.050, 1.052, 1.052, 1.057, 1.063, 1.070, 1.070, 1.076,
1.087, 1.090, 1.091, 1.096, 1.101, 1.102, 1.113, 1.114, 1.120, 1.130, 1.131,
1.150, 1.152, 1.154, 1.155, 1.162, 1.170, 1.177, 1.189, 1.191, 1.193, 1.200,
1.200, 1.200, 1.200, 1.205, 1.210, 1.218, 1.238, 1.238, 1.241, 1.250, 1.250,
1.256, 1.257, 1.272, 1.278, 1.289, 1.299, 1.300, 1.316, 1.331, 1.349, 1.374,
1.378, 1.382, 1.396, 1.426, 1.429, 1.439, 1.443, 1.446, 1.473, 1.475, 1.478,
1.499, 1.506, 1.559, 1.568, 1.594, 1.609, 1.626, 1.649, 1.650, 1.669, 1.675,
1.687, 1.715, 1.720, 1.735, 1.750, 1.755, 1.787, 1.797, 1.805, 1.898, 1.908,
1.940, 1.989, 2.010, 2.012, 2.024, 2.047, 2.081, 2.085, 2.097, 2.136, 2.178,
2.181, 2.193, 2.200, 2.220, 2.301, 2.354, 2.359, 2.382, 2.409, 2.418, 2.430,
2.477, 2.500, 2.534, 2.572, 2.588, 2.591, 2.599, 2.660, 2.700, 2.700, 2.744,
2.845, 2.911, 2.952, 3.006, 3.021, 3.048, 3.059, 3.092, 3.152, 3.276, 3.289,
3.440, 3.447, 3.498, 3.705, 3.870, 3.896, 3.969, 4.000, 4.009, 4.196, 4.202,
4.311, 4.467, 4.490, 4.601, 4.697, 5.100, 5.120, 5.136, 5.141, 5.165, 5.260,
5.329, 5.778, 5.794, 6.285, 6.460, 6.917, 7.295, 7.701, 8.032, 8.142, 8.864,
9.263, 9.359, 10.801, 11.037, 11.504, 11.933, 11.998, 12.000, 14.153, 15.000,
15.398, 19.793, 23.150, 27.769, 28.288, 34.325, 42.691, 62.037, 77.839
]
</code></pre>
<p>How can I perform the MLE algorithm without running into overflow issues?</p>
<p>In case you are wondering, I am trying to fit the function <code>f(x) = alpha * 500000^(alpha) / x^(alpha+1)</code>. So what I have is</p>
<pre><code>import numpy as np
from scipy.optimize import minimize
# data = the given dataset
log_pareto_pdf = lambda alpha, x: np.log(alpha * (5e5 ** alpha) / (x ** (alpha + 1)))
ret = minimize(lambda alpha: -np.sum([log_pareto_pdf(alpha, x) for x in data]), x0=np.array([1]))
</code></pre>
|
<p>You need to avoid the giant exponentiations. One way to do this is to actually simplify your function:</p>
<pre><code>log_pareto_pdf = lambda alpha, x: np.log(alpha) + alpha*np.log(5e5) - (alpha + 1)*np.log(x)
</code></pre>
<p>Without simplifying, your program still needs to try to calculate the <code>5e5**alpha</code> term, which will get really big really fast (overflow around 55).</p>
<p>You'll also need to supply the <code>bounds</code> argument to <code>minimize</code> to prevent any negatives inside the <code>log</code>'s.</p>
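<p>A minimal sketch of wiring the simplified log-density into <code>minimize</code> with bounds (this only addresses the overflow and the negative-argument <code>log</code> problem; whether the optimizer converges to something sensible still depends on the chosen density and its fixed <code>5e5</code> scale):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

arr = np.asarray(data)   # 'data' is the list from the question
log_pareto_pdf = lambda alpha, x: np.log(alpha) + alpha * np.log(5e5) - (alpha + 1) * np.log(x)
neg_loglik = lambda alpha: -np.sum(log_pareto_pdf(alpha[0], arr))

ret = minimize(neg_loglik, x0=np.array([1.0]), bounds=[(1e-9, None)])  # keeps alpha > 0
</code></pre>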
|
python|numpy|scipy|statistics|scipy-optimize
| 3
|
374,866
| 70,258,123
|
I want to count the number of rows for each different group
|
<p>Let's say that we have this dataframe:</p>
<pre><code>d = {'col1': [1, 2, 0, 55, 12], 'col2': [3, 4, 44, 34, 46], 'col3': ['A', 'A', 'B', 'B', 'A']}
df = pd.DataFrame(data=d)
df
col1 col2 col3
0 1 3 A
1 2 4 A
2 0 44 B
3 55 34 B
4 12 46 A
</code></pre>
<p>I want another column that counts the number of consecutive A and B values separately, as follows:</p>
<pre><code> col1 col2 col3 count
0 1 3 A 2
1 2 4 A 2
2 0 44 B 2
3 55 34 B 2
4 12 46 A 1
</code></pre>
<p>I have tried groupby, but it does not do what I want. Could you please help?</p>
|
<p>You can create consecutive groups by comparing shifted values for inequality with a cumulative sum, and then pass them to <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>GroupBy.size</code></a>:</p>
<pre><code>g = df['col3'].ne(df['col3'].shift()).cumsum()
df['count'] = df.groupby(g)['col3'].transform('size')
print (df)
col1 col2 col3 count
0 1 3 A 2
1 2 4 A 2
2 0 44 B 2
3 55 34 B 2
4 12 46 A 1
</code></pre>
<p>Or alternative with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer"><code>Series.map</code></a>:</p>
<pre><code>s = df['col3'].ne(df['col3'].shift()).cumsum()
df['count'] = s.map(s.value_counts())
</code></pre>
|
python|pandas|dataframe
| 2
|
374,867
| 70,183,494
|
Is my configuration for DenseNet in TensorFlow wrong?
|
<p>When I run the code pasted below, the model only trains for "multiplier" = 1 or = 4.
Running the same code in Google Colab, it only trains for multiplier = 1.</p>
<p>Is there any mistake in how I am using DenseNet here?</p>
<p>Thanks in advance, appreciate your help!</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.densenet import DenseNet201
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.losses import BinaryCrossentropy
random_array = np.random.rand(128,128,3)
image = tf.convert_to_tensor(
random_array
)
label = tf.constant(0)
model = DenseNet201(
include_top=False, weights='imagenet', input_tensor=None,
input_shape=(128, 128, 3), pooling=None, classes=2
)
model.compile(
optimizer=Adam(),
loss=BinaryCrossentropy(),
metrics=['accuracy'],
)
for multiplier in range(1,20):
print(f"Using multiplier {multiplier}")
x_train = np.array([image]*multiplier)
y_train = np.array([label]*multiplier)
try:
model.fit(x=x_train,y=y_train, epochs=2)
except:
print("Not training...")
pass
</code></pre>
<p>if the training does not start, the output is:</p>
<pre><code>2021-12-01 11:48:40.372387: W tensorflow/core/framework/op_kernel.cc:1680] Invalid argument: required broadcastable shapes
2021-12-01 11:48:40.372660: W tensorflow/core/framework/op_kernel.cc:1680] Invalid argument: required broadcastable shapes
2021-12-01 11:48:40.372734: W tensorflow/core/framework/op_kernel.cc:1680] Invalid argument: required broadcastable shapes
</code></pre>
|
<p>Apparently, it is necessary to add a custom GlobalAveragePooling and Dense layer if a custom <code>input_shape</code> (not the standard 224x224x3 of ImageNet) and <code>include_top = False</code> are used:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.applications.densenet import DenseNet201
from tensorflow.keras.layers import Dense, GlobalAveragePooling2D
from tensorflow.keras.models import Model

base_model = DenseNet201(
include_top=False, weights='imagenet', input_tensor=None,
input_shape=(128, 128, 3),
pooling=None, classes=2
)
x= base_model.output
x = GlobalAveragePooling2D(name = "avg_pool")(x)
outputs = Dense(2, activation=tf.nn.softmax, name="predictions")(x)
model = Model(base_model.input, outputs)
</code></pre>
|
python|tensorflow|keras|densenet
| 0
|
374,868
| 70,293,437
|
Writing pandas dataframe to CSV with decimal places
|
<p><strong>Background</strong> - I am trying to round the values of 2x columns (<code>Entity ID %</code> and <code>Account # %</code>) to 7 decimal places in a pandas Dataframe, before writing to a <code>.csv</code></p>
<p><strong>Function</strong> - this function takes a dataframe (<code>df</code>) strips out any rows that don't meet a criteria and write the <code>df</code> to a <code>.csv</code>. Importantly, I have a line of code that reads <code>df.round({'Entity ID %': 7, 'Account # %': 7})</code> which I thought 'should' be rounding all the values in those columns to 7 decimal places.</p>
<pre><code>def ownership_exceptions():
df = ownership_qc()
df.round({'Entity ID %': 7, 'Account # %': 7})
df = df[(df['Entity ID %'] != 1.000000) & (df['Account # %'] != 1.000000)]
# Counting rows in df
index = df.index
number_of_rows = len(index)
timestr = datetime.datetime.now().strftime("%Y-%m-%d-%H-%M")
filename = 'ownership_exceptions_'+timestr
with open(filename, 'w') as output_data:
df.to_csv(filename+'.csv')
print("---------------------------\n","EXCEPTION REPORT:", number_of_rows, "rows", "\n---------------------------")
return df
</code></pre>
<p><strong>Expected output (per a preview of <code>df</code></strong> -</p>
<p><a href="https://i.stack.imgur.com/0zas0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0zas0.png" alt="enter image description here" /></a></p>
<p><strong>Actual <code>.csv</code> output</strong> -</p>
<p><a href="https://i.stack.imgur.com/EhlwJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EhlwJ.png" alt="enter image description here" /></a></p>
<p><strong>Question</strong> - Should I be defining the decimal places as part of the <code>df.to_csv</code>? What am I missing which is causing this erroneous output of 17 decimal places?</p>
|
<p>Your problem is probably because you don't set the output of <code>round</code> to <code>df</code>:</p>
<pre><code># Replace
df.round({'Entity ID %': 7, 'Account # %': 7})
# By
df = df.round({'Entity ID %': 7, 'Account # %': 7})
</code></pre>
|
python|pandas|csv
| 1
|
374,869
| 70,194,131
|
Using np.where with multiple conditions
|
<p>Why does the first line work but not the second?</p>
<p>ok:</p>
<pre><code>data_frame['C'] = np.where(np.logical_and(np.greater_equal(data_frame['A'],1), np.not_equal(data_frame['B'],0)), 'OK', '-' )
</code></pre>
<p>not ok:</p>
<pre><code>data_frame['C'] = np.where(data_frame['A']== 1 & data_frame['B']!=0, 'OK', '-')
</code></pre>
<blockquote>
<p>TypeError: Cannot perform <code>rand_</code> with a dtyped <code>[float64]</code> array and scalar of type <code>[bool]]</code></p>
</blockquote>
|
<p>It's just the order of operations not being correct if you don't have parens/brackets in the appropriate places. This should work in place of your 2nd variant:</p>
<pre><code>np.where((data_frame['A'] == 1) &
(data_frame['B'] != 0), 'OK', '-')
</code></pre>
<p>With the parentheses in place, the comparison operations -- <code>==</code>, <code>!=</code> -- execute before the bitwise one: <code>&</code>. Without them, <code>&</code> binds more tightly than the comparisons, which is what triggers the error.</p>
<p>Edit: this other SO answer explains the order-of-operations thing in more detail -- under the "UPDATE" heading -- in case you need/want that level of explanation: <a href="https://stackoverflow.com/a/57922782/42346">https://stackoverflow.com/a/57922782/42346</a></p>
|
python|pandas|numpy
| 2
|
374,870
| 70,080,360
|
Remove top-N layers from a pretrained model and Save as new model
|
<p>How can I remove certain layers AND be able to save it as a new model in tensorflow?</p>
<p>I have the following code for removing top-N layers in tensorflow and it works:</p>
<pre><code>reconstructed_model = tf.keras.models.load_model(model_path)
embedding = Model(reconstructed_model.input,
reconstructed_model.layers[-4].output)
</code></pre>
<p>However, when I am trying to save it with either of these two methods:</p>
<pre><code>tf.keras.models.save_model(embedding, model_path)
embedding.save(model_path)
</code></pre>
<p>I am encountering the following error:</p>
<pre><code>KeyError: "Failed to add concrete function 'b'__inference_model_3_layer_call_fn_286241'' to object-based SavedModel as it captures tensor <tf.Tensor: shape=(), dtype=resource, value=<Resource Tensor>> which is unsupported or not reachable from root. One reason could be that a stateful object or a variable that the function depends on is not assigned to an attribute of the serialized trackable object (see SaveTest.test_captures_unreachable_variable)."
</code></pre>
<p>The pretrained model that I am using is a fine-tuned efficientnetv2 from the tensorflow applications api</p>
<pre><code>from tensorflow.keras.applications import EfficientNetB0
</code></pre>
<p>and I was able to save and reuse it here, just don't know how to save a modified one after reloading.</p>
|
<p>I tried using <code>EfficientNetB0</code> and constructed a model truncating the last four layers, just as you did.</p>
<pre><code>from tensorflow.keras.applications import EfficientNetB0
import tensorflow as tf
efficientnet = EfficientNetB0( include_top=False )
embedding = tf.keras.models.Model(
efficientnet.input ,
efficientnet.layers[ -4 ].output
)
embedding.summary()
embedding.save( 'embedding_model.h5' )
</code></pre>
<p>And then loading the model, using <code>tf.keras.models.load_model</code>,</p>
<pre><code>model = tf.keras.models.load_model( 'embedding_model.h5' )
model.summary()
</code></pre>
<p>I could load the model without any problems. Maybe there's some problem with the <code>reconstructed_model</code>.</p>
|
tensorflow|keras
| 0
|
374,871
| 70,177,432
|
how to find numbers of row above mean in pandas.dataframe?
|
<p>Here I am stuck on a question about finding how many rows are above the average/mean score.</p>
<p>my df like this:</p>
<pre><code> Subject Name Score
0 s1 Amy 100
1 s1 Bob 90
2 s1 Cathy 92
3 s1 David 88
4 s2 Emma 95
5 s2 Frank 80
6 s2 Gina 86
7 s2 Helen 89
...
</code></pre>
<p>I can get the mean of each subject by using <code>df.groupby('Subject').Score.mean()</code> <br>
But I don't know how to find how many students have a score above the average in each subject. <br>
(I guess I could use a for loop to calculate the count, but I want to know if there is a way to do it in pandas.)</p>
<p>It would be great if anyone can help.
Thank you.</p>
|
<p>You can try using <code>groupby</code> and <code>apply</code>:</p>
<pre class="lang-py prettyprint-override"><code>def count_above_avg(g):
avg = g.Score.mean()
return (g.Score > avg).sum()
df.groupby('Subject').apply(count_above_avg)
</code></pre>
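<p>For reference, an equivalent vectorized sketch (assuming the same column names as above) that skips the helper function: <code>transform</code> broadcasts each subject's mean back onto the rows, and summing the boolean mask counts the rows above it.</p>
<pre class="lang-py prettyprint-override"><code>above = df['Score'] > df.groupby('Subject')['Score'].transform('mean')  # boolean mask per row
above.groupby(df['Subject']).sum()  # number of True values per subject
</code></pre>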
|
python|pandas|dataframe
| 0
|
374,872
| 70,281,357
|
Monthly climatology across several years, repeated for each day in that month over all years
|
<p>I need to find the monthly climatology of some data that has daily values across several years. The code below sufficiently summarizes what I am trying to do. <code>monthly_mean</code> holds the averages over all years for specific months. I then need to assign that average in a new column for each day in a specific month over all of the years. For whatever reason, my assignment, <code>df['A Climatology'] = group['A Climatology']</code>, is only assigning values to the month of December. How can I make the assignment happen for all months?</p>
<pre><code>data = np.random.randint(5,30,size=(365*3,3))
df = pd.DataFrame(data, columns=['A', 'B', 'C'], index=pd.date_range('2021-01-01', periods=365*3))
df['A Climatology'] = np.nan
monthly_mean = df['A'].groupby(df.index.month).mean()
for month, group in df.groupby(df.index.month):
group['A Climatology'] = monthly_mean.loc[month]
df['A Climatology'] = group['A Climatology']
df
</code></pre>
|
<p>Your code sets the column equal to the group, so on every iteration of the loop you overwrite the column with only that group's values. That is why your df ends up reflecting only December, the last month in the loop.</p>
<pre><code>monthly_mean = df['A'].groupby(df.index.month).mean()
for month, group in df.groupby(df.index.month):
df.loc[lambda df: df.index.month == month, 'A Climatology'] = monthly_mean.loc[month]
</code></pre>
<p>Instead, as in the code above, you can directly set the df's values on the rows whose index month equals the month being iterated.</p>
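<p>As a side note, if the goal is simply the per-month mean broadcast to every day, a one-line sketch (same column names assumed) avoids the loop entirely:</p>
<pre><code>df['A Climatology'] = df.groupby(df.index.month)['A'].transform('mean')
</code></pre>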
|
python|pandas|dataframe
| 1
|
374,873
| 70,155,614
|
How to create a new dataframe from input dataframe based on certain condition
|
<p>I have pandas dataframe like this.</p>
<pre><code>api region base_path
https://apis.us/image/ us /image
https://apis.emea/video/ emea /video
https://apis.asia/docs/ asia /docs
https://apis.emea/image/ emea /image
https://apis.us/video/ us /video
https://apis.us/docs/ us /docs
https://apis.asia/location/ asia /location
</code></pre>
<p>From the api list few apis are common in more than 1 region .Ex: <code>/image</code> is common for both <code>us</code> and <code>emea</code>. The output dataframe I want like this:</p>
<pre><code>api_us_emea api_asia_us api_asia_emea api_us_emea_asia api_usa api_emea api_asia
https://apis.us/image/ https://apis.us/docs/ No Common api No Common api N/A N/A https://apis.asia/location/
https://apis.us/video/
</code></pre>
<p>Here, for common apis I always want the <code>us</code> api to be present in the column value. E.g. the <code>api_us_emea</code> column holds only the US api, <code>api_asia_emea</code> holds the <code>asia</code> api, and <code>api_us_emea_asia</code> holds the <code>us</code> api as well.
How can I achieve this?</p>
|
<p>Try this:</p>
<pre><code>import itertools
import functools, operator
def find_coomon_elements(p):
return list(set.intersection(*[set(li) for li in p]))
def find_unique_elements(p, l):
merged_p = functools.reduce(operator.iconcat, p, [])
return [x for x in l if merged_p.count(x)==1]
strings_array = df["api"].str[:-1].str.split("/").str[-2:].apply(lambda x: (x[0][5:], x[1])).values
d = dict()
[d[t[0]].append(t[1]) if t[0] in list(d.keys()) else d.update({t[0]: [t[1]]}) for t in strings_array]
se = set([x[0] for x in strings_array])
combs = [list(itertools.combinations(se, i)) for i in range(1, len(se)+1)]
col1, col2 = [], []
for item in combs[0]:
col1.append("_".join(["api"] + list(item)))
col2.append(["https://apis."+item[0]+"/"+s for s in find_unique_elements([d[c] for c in d.keys()], d[item[0]])])
for i in range(1, len(combs)):
for item in combs[i]:
common = find_coomon_elements([d[c] for c in item])
if len(common)>0:
col1.append("_".join(["api"] + list(item)))
col2.append(["https://apis."+item[0]+"/"+s for s in common])
else:
col1.append("_".join(["api"] + list(item)))
col2.append("No Common api")
output_df = pd.DataFrame({"col1":col1, "col2":col2})
output_df
</code></pre>
<p>Output:</p>
<pre><code> col1 col2
0 api_us []
1 api_asia [https://apis.asia/location]
2 api_emea []
3 api_us_asia [https://apis.us/docs]
4 api_us_emea [https://apis.us/image, https://apis.us/video]
5 api_asia_emea No Common api
6 api_us_asia_emea No Common api
</code></pre>
|
python-3.x|pandas
| 2
|
374,874
| 70,123,409
|
pandas dataframe create new columns and fill with an external API response as calculated using concertinaed values from same df
|
<pre><code>df=
User id
0 u1 id1
1 u2 id2
2 u3 id3
user_limit1=api('u1:id1')
new_df=
User id user_limit
0 u1 id1 user_limit1
1 u2 id2 user_limit2
2 u3 id3 user_limit3
</code></pre>
<p>How can I update df as above for about 9800 rows of the DF?</p>
|
<p>Create a column and use <code>apply</code> on rows</p>
<pre><code>df['user_limit'] = df.apply(lambda x: api_call(f'{x.User}:{x.id}'), axis=1)
</code></pre>
<p>OR</p>
<pre><code>df['user_limit'] = df.User + ':' + df.id
df['user_limit'] = df.user_limit.map(api_call)
</code></pre>
<p>Note: Ensure that the <code>api_call</code> function returns only one value</p>
|
python|pandas|dataframe|for-loop|rows
| 0
|
374,875
| 70,202,374
|
How to change histogram color based on x-axis in matplotlib
|
<p>I have this histogram computed from a pandas dataframe.</p>
<p><a href="https://i.stack.imgur.com/BnznK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BnznK.png" alt="enter image description here" /></a></p>
<p>I want to change the colors based on the x-axis values.<br />
For example:</p>
<pre><code>If the value is = 0 the color should be green
If the value is > 0 the color should be red
If the value is < 0 the color should be yellow
</code></pre>
<p>I'm only concerned with the x-axis. The height of the bar doesn't matter much to me. All other solutions are for the y-axis.</p>
|
<p>Just plot them one by one:</p>
<pre><code>import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
x = np.linspace(-1,1,10)
y = np.random.uniform(0,1,10)
width = 0.2
plt.figure(figsize = (12, 6))
cmap = mpl.cm.RdYlGn.reversed()
norm = mpl.colors.Normalize(vmin=0, vmax=10)
for x0, y0 in zip(x,y):
plt.bar(x0, y0, width = width, color = cmap(norm(np.abs(x0*10))))
</code></pre>
<p><a href="https://i.stack.imgur.com/Zm6VU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zm6VU.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|histogram
| 0
|
374,876
| 70,065,974
|
InvalidArgumentError: ConcatOp : Dimensions of inputs should match when predicting on X_test with Conv2D - why?
|
<p>I'm learning Tensorflow and am trying to build a classifier on the Fashion MNIST dataset. I can fit the model, but when I try to predict on my test set I get the following error:</p>
<pre><code>y_pred = model.predict(X_test).argmax(axis=1)
InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,32,10] vs. shape[312] = [1,16,10] [Op:ConcatV2] name: concat
</code></pre>
<p>I don't get an error if I predict on X_test in batches, for example:</p>
<pre><code>y_pred = []
step_size = 10
for i in trange(0, len(X_test), step_size):
y_pred += model.predict(X_test[i:i+step_size]).argmax(axis=1).tolist()[0]
</code></pre>
<p>I've spent some time googling and looking at other examples of the same error but still can't figure out what I'm doing wrong. I've tried a few different things, such as applying the scale and expand dimensions steps manually to X_train and X_test before building the model, but get the same result.</p>
<p>This is my full code (using Python 3.7.12 and Tensorflow 2.7.0):</p>
<pre><code>import tensorflow as tf # 2.7.0
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# load data
mnist = tf.keras.datasets.fashion_mnist
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Build model
# Input
inputs = tf.keras.Input(shape=X_train[0].shape)
# # Scale
x = tf.keras.layers.Rescaling(scale=1.0/255)(inputs)
# Add extra dimension for use in conv2d
x = tf.expand_dims(x, -1)
# Conv2D
x = tf.keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation="relu", strides=2)(x)
x = tf.keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation="relu", strides=2)(x)
x = tf.keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation="relu", strides=2)(x)
# Flatten
x = tf.keras.layers.Flatten()(x),
x = tf.keras.layers.Dropout(rate=.2)(x) # 20% chance of dropout
x = tf.keras.layers.Dense(512, activation='relu')(x)
x = tf.keras.layers.Dropout(rate=.2)(x)
x = tf.keras.layers.Dense(K, activation='softmax')(x)
model = tf.keras.Model(inputs=inputs, outputs=x)
# Compile
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Fit
r = model.fit(X_train, y_train, validation_data=[X_test, y_test], epochs=10)
# Throws an error
y_pred = model.predict(X_test).argmax(axis=1)
</code></pre>
<p>Which gives</p>
<pre><code>InvalidArgumentError: ConcatOp : Dimensions of inputs should match: shape[0] = [1,32,10] vs. shape[312] = [1,16,10] [Op:ConcatV2] name: concat
</code></pre>
|
<p>With <code>model.predict</code> you are making predictions on batches as stated <a href="https://www.tensorflow.org/api_docs/python/tf/keras/Model?version=nightly#predict" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Computation is done in batches. This method is designed for batch processing of large numbers of inputs. It is not intended for use inside of loops that iterate over your data and process small numbers of inputs at a time.</p>
</blockquote>
<p>But the size of <code>X_test</code> is not evenly divisible by the default <code>batch_size=32</code>. I think this might be the cause of your problem. You could change your <code>batch_size</code> to 16 for example and it will work:</p>
<pre class="lang-py prettyprint-override"><code>y_pred = model.predict(X_test, batch_size=16).argmax(axis=1)
print(y_pred)
</code></pre>
<pre><code>[[ 8 0 2 ... 14 8 2]
[15 15 8 ... 10 8 14]
[ 5 13 4 ... 4 5 6]
...
[11 11 12 ... 7 2 3]
[ 3 8 0 ... 15 3 14]
[ 3 13 1 ... 1 15 0]]
</code></pre>
<p>You could also use <code>model.predict_on_batch(X_test)</code> to make predictions for a single batch of samples. However, you are most flexible if you use the call function of your model directly:</p>
<pre><code>y_pred = model(X_test[:10])
tf.print(tf.argmax(y_pred, axis=1), summarize=-1)
</code></pre>
<pre><code>[[2 8 0 1 1 1 8 2 2 6]]
</code></pre>
|
python|tensorflow|keras|deep-learning|conv-neural-network
| 1
|
374,877
| 70,350,573
|
merge two pyspark dataframe based on one column containing list and other as values
|
<p>I have two tables</p>
<pre><code>+-----+-----+
|store|sales|
+-----+-----+
| F| 4000|
| M| 3000|
| A| 4000|
+-----+-----+`
+-----+------+
| upc| store|
+-----+------+
|40288|[F, M]|
|42114| [M]|
|39192|[F, A]|
+-----+------+`
</code></pre>
<p>I wish to have the final table as</p>
<pre><code>+-----+------+-----+
| upc| store|sales|
+-----+------+-----+
|40288|[F, M]| 7000|
|42114| [M]| 3000|
|39192|[F, A]| 8000|
+-----+------+-----+
</code></pre>
<p>Please use this code for data frame generation</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql.types import StructType, StructField, StringType, IntegerType
from pyspark.sql import *
spark = SparkSession.builder.appName("SparkByExamples.com").getOrCreate()
data2 = [
("F", 4000),
("M", 3000),
("A", 4000),
]
schema = StructType(
[
StructField("store", StringType(), True),
StructField("sales", IntegerType(), True),
]
)
df11 = spark.createDataFrame(data=data2, schema=schema)
data3 = [
("40288", ["F", "M"]),
("42114", ["M"]),
("39192", ["F", "A"]),
]
schema = StructType(
[
StructField("upc", StringType(), True),
StructField("store", StringType(), True),
]
)
df22 = spark.createDataFrame(data=data3, schema=schema)
</code></pre>
<p>I can make this work using loops but it will be very inefficient for big_data. I have this piece of code with loops for pandas data frame but now migrating to Pyspark, so need an equivalent in Pyspark. is there a better way to do without loops to get final_table as shown above?</p>
<pre class="lang-py prettyprint-override"><code>for i, row in df22.iterrows():
new_sales = df11[df11.store.isin(df22[df22.upc == row.upc]["store"].values[0])][
"sales"
].sum()
df22.at[i, "sales"] = new_sales
</code></pre>
|
<p>You can <code>join</code> based on <a href="https://spark.apache.org/docs/3.1.1/api/python/reference/api/pyspark.sql.functions.array_contains.html?highlight=array_contains" rel="nofollow noreferrer"><code>array_contains</code></a>. After join, group by <code>upc</code> and <code>store</code> in df22 and <code>sum</code> sales.</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F
df11_with_df22 = df11.join(df22, F.array_contains(df22["store"], df11["store"]))
df11_with_df22.groupBy(df22["upc"], df22["store"]).agg(F.sum("sales").alias("sales")).show()
</code></pre>
<h2>Output</h2>
<pre><code>+-----+------+-----+
| upc| store|sales|
+-----+------+-----+
|40288|[F, M]| 7000|
|39192|[F, A]| 8000|
|42114| [M]| 3000|
+-----+------+-----+
</code></pre>
|
python|pandas|dataframe|pyspark
| 3
|
374,878
| 70,185,922
|
Input random dates into the column pandas
|
<p>I have a data frame called df_planned and would like to insert into column "order_date" some random dates from Oct-2021. Is there a way to that in the loop? There are 300 rows of the data so obviously the dates can repeat.</p>
<p>I wrote this function to generate the dates:</p>
<pre><code>import datetime
def new(test_date,K):
res = [test_date + datetime.timedelta(days=idx) for idx in range(K)]
# printing result
return(res)
test_date = datetime.datetime(2021, 10, 1)
</code></pre>
<p><a href="https://i.stack.imgur.com/fHJpX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fHJpX.png" alt="enter image description here" /></a></p>
|
<p>You can use <code>np.random.choice</code> and <code>pd.date_range</code>:</p>
<pre><code>import pandas as pd
import numpy as np
df['order_date'] = np.random.choice(pd.date_range('2021-10-01', '2021-10-31'), 300)
print(df)
# Output:
order_date
0 2021-10-08
1 2021-10-27
2 2021-10-21
3 2021-10-11
4 2021-10-05
.. ...
295 2021-10-13
296 2021-10-05
297 2021-10-23
298 2021-10-22
299 2021-10-31
[300 rows x 1 columns]
</code></pre>
<p><em>Note</em>: replace 300 by the length of dataframe <code>len(df)</code>.</p>
|
python|pandas|dataframe|datetime
| 3
|
374,879
| 70,239,115
|
Sensibly merging two dataframes
|
<p>If one of my dataframes gives me some info about items:</p>
<pre><code> itemId property_1 property_2 property_n Decision
0 i1 88.90 NaN 0 1
1 i2 87.09 7.653800e+06 0 0
2 i3 78.90 7.623800e+06 1 1
3 i4 93.02 NaN 1 0
...
</code></pre>
<p>And the other one gives me some info about how users interacted with the items:</p>
<pre><code> userId itemId Decision
0 u1 i1 0
1 u1 i2 1
2 u2 i1 1
3 u2 i3 0
4 u2 i4 1
5 u3 i5 0
...
</code></pre>
<p>I am interested in predicting the <code>Decision</code>, which is easy to do if I work with each dataframe, separately. <strong>But can I somehow incorporate the second one into the first one</strong>, given that in the second one, each <code>item</code> appears multiple times with different <code>Decisions</code>?</p>
<p>I would like to have something like:</p>
<pre><code> itemId property_1 property_2 property_n u1_decision ... Decision
0 i1 88.90 NaN 0 0 1
1 i2 87.09 7.653800e+06 0 1 0
2 i3 78.90 7.623800e+06 1 NaN 1
4 i4 93.02 NaN 1 NaN 0
...
</code></pre>
<p>So each user becomes a column, resulting in something very sparse. The first question would be whether this makes sense, and the second question would be how I merge the rows from the second dataframe as columns into the first one (I know how to <code>df.merge</code> on <code>Decision</code>, but this doesn't give me the desired result).</p>
|
<p>You can <code>pivot</code> the second table like:</p>
<pre><code>df.pivot(index='itemId', columns='userId', values='Decision').reset_index()
</code></pre>
<p>Then you can do the <code>merge</code> on <code>itemId</code>.</p>
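<p>A minimal sketch of both steps together, assuming the item table is called <code>df_items</code> and the interaction table <code>df_users</code> (rename to match your actual frames):</p>
<pre><code>wide = df_users.pivot(index='itemId', columns='userId', values='Decision').reset_index()
wide.columns = ['itemId'] + [f'{c}_decision' for c in wide.columns[1:]]   # e.g. u1_decision, u2_decision, ...
result = df_items.merge(wide, on='itemId', how='left')
</code></pre>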
|
python|pandas|dataframe|merge
| 1
|
374,880
| 70,191,573
|
Is there any way to convert a 3D numpy array to 2D
|
<p>I got a 3d NumPy array:</p>
<pre><code>array([[[ 12., 0., 0.],
[ 15., 0., 0.],
[ 13., 0., 0.]],
[[ 12., 0., 0.],
[ 11., 0., 0.],
[ 13., 0., 0.]]])
</code></pre>
<p>Is there any way to convert to a 2d and only get</p>
<pre><code>[12., 15., 13.]
[12., 11., 13.]
</code></pre>
|
<pre><code>x = np.array(
[[[ 12., 0., 0.],
[ 15., 0., 0.],
[ 13., 0., 0.]],
[[ 12., 0., 0.],
[ 11., 0., 0.],
[ 13., 0., 0.]]]
)
x_2d = x[:, :, 0]
>> x_2d
>> array([[12., 15., 13.],
[12., 11., 13.]])
</code></pre>
|
python|pandas|numpy
| 0
|
374,881
| 70,089,884
|
Extract data and sort them by date
|
<p>I am trying to figure out an exercise on string manipulation and sorting.
The exercise asks to extract the words that carry a time reference (e.g., hours, days) from the text, and to sort the rows based on the extracted time in ascending order.
An example of the data is:</p>
<pre><code>Customer Text
1 12 hours ago — the customer applied for a discount
2 6 hours ago — the customer contacted the customer service
3 1 day ago — the customer reported an issue
4 1 day ago — no answer
4 2 days ago — Open issue
5
</code></pre>
<p>In this task I can identify several difficulties:</p>
<pre><code>- time reference can be expressed as hours/days/weeks
- there are null values or no reference to time
- get a time format suitable and more general, e.g., based on the current datetime
</code></pre>
<p>On the first point, I noted that the time references generally appear before the —, when present, so it should be easy to extract them.
On the second point, an if statement could avoid error messages due to incomplete/missing fields.
I do not know how to address the third point, though.</p>
<p>My expected result would be:</p>
<pre><code>Customer Text Sort by
1 12 hours ago — the customer applied for a discount 1
2 6 hours ago — the customer contacted the customer service 2
3 1 day ago — the customer reported an issue 2
4 1 day ago — no answer 2
4 2 days ago — Open issue 3
5
</code></pre>
|
<p>Given the DataFrame sample, I will assume that for this exercise the first two words of the text are what you are after. I am unclear on how the sorting works, but for the third point, a more suitable time would be the current time minus the timedelta parsed from the Text column.</p>
<p>You can apply an if-else lambda function to the first two words of each row of <code>Text</code> and convert this to a pandas <a href="https://pandas.pydata.org/docs/reference/api/pandas.Timedelta.html" rel="nofollow noreferrer">Timedelta</a> object - for example <code>pd.Timedelta("1 day")</code> will return a Timedelta object.</p>
<p>Then you can subtract the Timedelta column from the current time which you can obtain with <code>pd.Timestamp.now()</code>:</p>
<pre><code>df["Timedelta"] = df.Text.apply(lambda x: pd.Timedelta(' '.join(x.split(" ")[:2])) if pd.notnull(x) else x)
df["Time"] = pd.Timestamp.now() - df["Timedelta"]
</code></pre>
<p>Output:</p>
<pre><code>>>> df
Customer Text Timedelta Time
0 1 12 hours ago — the customer applied for a disc... 0 days 12:00:00 2021-11-23 09:22:40.691768
1 2 6 hours ago — the customer contacted the custo... 0 days 06:00:00 2021-11-23 15:22:40.691768
2 3 1 day ago — the customer reported an issue 1 days 00:00:00 2021-11-22 21:22:40.691768
3 4 1 day ago — no answer 1 days 00:00:00 2021-11-22 21:22:40.691768
4 4 2 days ago — Open issue 2 days 00:00:00 2021-11-21 21:22:40.691768
5 5 NaN NaT NaT
</code></pre>
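<p>To then order the rows by the computed time (the exercise's exact sort key is a bit ambiguous, as noted above), one option is to sort on the <code>Timedelta</code> column built here:</p>
<pre><code>df_sorted = df.sort_values('Timedelta', na_position='last').reset_index(drop=True)
</code></pre>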
|
python|pandas|data-manipulation
| 1
|
374,882
| 70,025,995
|
Using isna() as a condition in a if else statement
|
<p>I have a df that looks like the following with many more rows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>LastTravelDate</th>
<th>TripStartDate</th>
<th>TripEndDate</th>
</tr>
</thead>
<tbody>
<tr>
<td>2021-07-10</td>
<td>2021-08-16</td>
<td>NaT</td>
</tr>
<tr>
<td>2021-08-28</td>
<td>2021-09-30</td>
<td>NaT</td>
</tr>
<tr>
<td>2021-07-29</td>
<td>2021-09-27</td>
<td>2021-09-28</td>
</tr>
</tbody>
</table>
</div>
<p>I am trying to write a loop that goes through every row in the df and sets the ith value of LastTravelDate equal to the ith value of TripEndDate. Wherever TripEndDate is equal to NaT I would like the script to set the ith value of LastTravelDate to the corresponding value in TripStartDate.</p>
<p>My issue is that the code seems to ignore all the details in the if/else statement and just sets all df.LastTravelDate equal to df.TripStartDate. What should happen is that df.LastTravelDate and df.TripStartDate are only equal wherever df.TripEndDate is null. However, these become equal in every instance. Below is the full code I am using.</p>
<pre><code>for i in range(df.shape[0]):
if np.any(df.loc[df.TripEndDate.isna()]):
df["LastTravelDate"][i] = df.TripStartDate[i]
else:
df["LastTravelDate"][i] = df.TripEndDate[i]
</code></pre>
<p>Thank you</p>
|
<p>For a vectorized approach you can use <code>np.where()</code>:</p>
<pre><code>df['LastTravelDate'] = np.where(df['TripEndDate'].isna(),df['TripStartDate'],df['TripEndDate'])
</code></pre>
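<p>An equivalent alternative, since the rule is simply "use <code>TripEndDate</code> unless it is missing", is <code>fillna</code>:</p>
<pre><code>df['LastTravelDate'] = df['TripEndDate'].fillna(df['TripStartDate'])
</code></pre>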
|
python|pandas|numpy|if-statement
| 3
|
374,883
| 70,086,114
|
Snowflake- How to ignore the row number (first column) in the result set
|
<p>Whenever I run any select query in Snowflake, the result set has an auto-generated row number column (as the first column). How can I ignore this column from the code?</p>
<p>Like : select * from emp ignore row;</p>
|
<p>If you're referring to the unnamed column just before TABLE_CATALOG in the below picture.</p>
<p>I'm pretty sure that's not something we can change -> maybe if you wrote some custom JS to fiddle with the page you might be able to hide it, perhaps by changing the text color to white or something. But that seems like a lot of work.</p>
<p>If you extract the data to a CSV (or any file format) this number does not appear in the payload.</p>
<p><a href="https://i.stack.imgur.com/R6Qj1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R6Qj1.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|snowflake-cloud-data-platform|series
| 0
|
374,884
| 70,348,437
|
What is the difference between x.view(x.size(0), -1) and torch.nn.Flatten() layer and torch.flatten(x)? pytorch question
|
<p>I'm quite curious about what the difference is between using view(x.size(0), -1) and flatten, as in the simple code here:</p>
<p>I found that the size and the data all flatten to one dimension either way.</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda, Compose
import matplotlib.pyplot as plt
x = torch.rand(3,256,256)
x.size()
a = x.view(x.size(0), -1)
print('after view:',a.size())
m = nn.Sequential(nn.Flatten())
y = m(x)
print('after nn flatten:',y.size())
z = torch.flatten(x)
print('after torch flatten:',y.size())
</code></pre>
<p>Also, it seems there is no difference between plain assignment (=) and .contiguous(), whose docs say: "Returns a contiguous in memory tensor containing the same data as self tensor." To me it seems to just return the self tensor rather than a copy or a new tensor with the same data.</p>
<pre class="lang-py prettyprint-override"><code>c = y
print(c)
b = y.contiguous()
print(b)
# change original data
y[0][0]=1
print(b)
print(c)
print(y)
</code></pre>
|
<p>A view is a way to modify the way you look at your data without modifying the data itself:</p>
<ul>
<li><a href="https://pytorch.org/docs/stable/generated/torch.Tensor.view.html?highlight=view#torch.Tensor.view" rel="nofollow noreferrer"><code>torch.view</code></a> returns a view on the data: the data is not copied, only the "window" which you look through on the data changes</li>
<li><code>torch.flatten</code> returns a one-dimensional output from a multi-dimensional input. It may not copy the data if</li>
</ul>
<blockquote>
<p>[the] input can be viewed as the flattened shape (<a href="https://pytorch.org/docs/stable/generated/torch.flatten.html?highlight=flatten#torch.flatten" rel="nofollow noreferrer">source</a>)</p>
</blockquote>
<ul>
<li><code>torch.nn.Flatten</code> is just a <a href="https://pytorch.org/docs/stable/_modules/torch/nn/modules/flatten.html#Flatten" rel="nofollow noreferrer">wrapper</a> for convenience around <code>torch.flatten</code></li>
</ul>
<p>Contiguous data just means that the data is linearly addressable in memory, e.g. for two dimensional data this would mean that element <code>[i][j]</code> is at position <code>i * num_columns + j</code>. If this is already the case then <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html?highlight=contiguous#torch.Tensor.contiguous" rel="nofollow noreferrer"><code>.contiguous</code></a> will not change your data or copy anything.</p>
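<p>A small sketch illustrating these points: a view shares storage with the original tensor (writes through one are visible through the other), and for an already contiguous tensor <code>.contiguous()</code> returns the tensor itself:</p>
<pre><code>import torch

x = torch.rand(3, 4)
v = x.view(-1)                    # view on the same storage, no copy
v[0] = 42.0
print(x[0, 0])                    # tensor(42.) - the original sees the write
print(torch.flatten(x).shape)     # torch.Size([12])
print(x.contiguous() is x)        # True - already contiguous, so nothing is copied
</code></pre>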
|
python|pytorch
| 0
|
374,885
| 70,058,128
|
How to discretize a datetime column?
|
<p>I have a dataset that contains a column of datetime of a month, and I need to divide it into two blocks (day and night or am\pm) and then discretize the time in each block into 10mins bins. I could add another column of 0 and 1 to show it is am or pm, but I cannot discretize it! Can you please help me with it?</p>
<pre><code>df['started_at'] = pd.to_datetime(df['started_at'])
df['start hour'] = df['started_at'].dt.hour.astype('int')
df['mor/aft'] = np.where(df['start hour'] < 12, 1, 0)
df['started_at']
0 16:05:36
2 06:22:40
3 16:08:10
4 12:28:57
6 15:47:30
...
3084526 15:24:24
3084527 16:33:07
3084532 14:08:12
3084535 09:43:46
3084536 17:02:26
</code></pre>
|
<p>If I understood correctly you are trying to add a column for every interval of ten minutes to indicate if an observation is from that interval of time.</p>
<p>You can use <code>lambda expressions</code> to loop through each observation from the series.</p>
<p>Dividing by 10 and making this an integer gives the first digit of the minutes, based on which you can add indicator columns.</p>
<p>I also included how to extract the day indicator column with a <code>lambda expression</code> for you to compare. It achieves the same as your <code>np.where()</code>.</p>
<pre><code>import pandas as pd
from datetime import datetime
# make dataframe
df = pd.DataFrame({
'started_at': ['14:20:56',
'00:13:24',
'16:01:33']
})
# convert column to datetime
df['started_at'] = pd.to_datetime(df['started_at'])
# make day indicator column
df['day'] = df['started_at'].apply(lambda ts: 1 if ts.hour > 12 else 0)
# make indicator column for every ten minutes
for i in range(24):
for j in range(6):
col = 'hour_' + str(i) + '_min_' + str(j) + '0'
df[col] = df['started_at'].apply(lambda ts: 1 if int(ts.minute/10) == j and ts.hour == i else 0)
print(df)
</code></pre>
<p>Output first columns:</p>
<pre><code> started_at day hour_0_min_00 hour_0_min_10 hour_0_min_20
0 2021-11-21 14:20:56 1 0 0 0
1 2021-11-21 00:13:24 0 0 1 0
2 2021-11-21 16:01:33 1 0 0 0
...
...
...
</code></pre>
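<p>If you only need one discretized column rather than an indicator per bin, a more compact sketch (just an alternative to the above, using the same <code>started_at</code> column) is to floor the timestamps to 10-minute bins:</p>
<pre><code>df['pm'] = (df['started_at'].dt.hour >= 12).astype(int)   # 1 = afternoon/evening, 0 = morning
df['bin_10min'] = df['started_at'].dt.floor('10min')       # start of the 10-minute bin
</code></pre>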
|
python|pandas|dataframe|datetime|discretization
| 0
|
374,886
| 70,233,279
|
tensorflow lite program crashing with kivy on buildozer
|
<p>I tried running this github program <a href="https://github.com/tito/experiment-tensorflow-lite" rel="nofollow noreferrer">https://github.com/tito/experiment-tensorflow-lite</a>
It is basically about running tensorflow lite using kivy on android.</p>
<p>I tried running the program on my pc but I got this error``</p>
<pre><code>STDOUT:
patching file jnius/jnius_jvm_android.pxi
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file jnius/jnius_jvm_android.pxi.rej
patching file jnius/env.py
Hunk #1 FAILED at 185.
1 out of 1 hunk FAILED -- saving rejects to file jnius/env.py.rej
STDERR:
[INFO]: STDOUT:
patching file jnius/jnius_jvm_android.pxi
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file jnius/jnius_jvm_android.pxi.rej
patching file jnius/env.py
Hunk #1 FAILED at 185.
1 out of 1 hunk FAILED -- saving rejects to file jnius/env.py.rej
[INFO]: STDERR:
Traceback (most recent call last):
File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 1276, in <module>
main()
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/entrypoints.py", line 18, in main
ToolchainCL()
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 725, in init
getattr(self, command)(args)
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 153, in wrapper_func
build_dist_from_args(ctx, dist, args)
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/toolchain.py", line 212, in build_dist_from_args
build_recipes(build_order, python_modules, ctx,
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/build.py", line 573, in build_recipes
recipe.apply_patches(arch)
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 560, in apply_patches
self.apply_patch(
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/recipe.py", line 263, in apply_patch
shprint(sh.patch, "-t", "-d", build_dir, "-p1",
File "/home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/logger.py", line 167, in shprint
for line in output:
File "/usr/local/lib/python3.8/dist-packages/sh-1.14.1-py3.8.egg/sh.py", line 911, in next
self.wait()
File "/usr/local/lib/python3.8/dist-packages/sh-1.14.1-py3.8.egg/sh.py", line 841, in wait
self.handle_command_exit_code(exit_code)
File "/usr/local/lib/python3.8/dist-packages/sh-1.14.1-py3.8.egg/sh.py", line 865, in handle_command_exit_code
raise exc
sh.ErrorReturnCode_1:
RAN: /usr/bin/patch -t -d /home/pls/experiment-tensorflow-lite/.buildozer/android/platform/build-armeabi-v7a/build/other_builds/pyjnius-sdl2/armeabi-v7a__ndk_target_21/pyjnius -p1 -i /home/pls/experiment-tensorflow-lite/.buildozer/android/platform/python-for-android/pythonforandroid/recipes/pyjnius/sdl2_jnienv_getter.patch
STDOUT:
patching file jnius/jnius_jvm_android.pxi
Hunk #1 FAILED at 1.
1 out of 1 hunk FAILED -- saving rejects to file jnius/jnius_jvm_android.pxi.rej
patching file jnius/env.py
Hunk #1 FAILED at 185.
1 out of 1 hunk FAILED -- saving rejects to file jnius/env.py.rej
</code></pre>
<p>I used the same buildozer.init file on the github
Python version: 3.8.10
I ran the program on ubuntu virtual machine using virtual box</p>
|
<p>Just use command:</p>
<pre><code>buildozer android clean
</code></pre>
<p>before:</p>
<pre><code>buildozer android debug
</code></pre>
|
ubuntu|kivy|tensorflow-lite|buildozer
| 0
|
374,887
| 70,043,838
|
Pandas Data Frame, reading from a file or setting a new Data Frame inside a function
|
<p>I am trying to read 3 CSV files into 3 pandas DataFrames. But after executing the function the variable seems not to be available. I tried to create a blank data frame outside the function and read and set the frame inside the function, but the frame stays blank.</p>
<pre><code># Load data from the csv file
def LoadFiles():
x = pd.read_csv('columns_description.csv', index_col=None)
print("Columns Description")
print(f"Number of rows/records: {x.shape[0]}")
print(f"Number of columns/variables: {x.shape[1]}")
LoadFiles()
x.head()
</code></pre>
<p><a href="https://i.stack.imgur.com/6JNfZ.png" rel="nofollow noreferrer">Python Notebook for above code with Error</a></p>
<p>In the second approach, I am trying to create a new data frame with some consolidated information from the dataset. The issue reappears as the variable seems to be no longer available.</p>
<pre><code># Understand the variables
y = pd.read_csv('columns_description.csv', index_col=None)
def refresh_y():
var_y = pd.DataFrame(columns=['Variable','Number of unique values'])
for i, var in enumerate(y.columns):
var_y.loc[i] = [y, y[var].nunique()]
refresh_y()
</code></pre>
<p><a href="https://i.stack.imgur.com/5GeWU.png" rel="nofollow noreferrer">Screenshot with error code and solution restructuring in the function</a></p>
<p>I am a bit new to Python. The code is a sample and does not represent actual data, and in the function the example uses a single column. I have multiple columns to refresh in this derived data set based on further changes, hence the function approach.</p>
|
<p>Try the following code:</p>
<pre><code># Load data from the csv file
def LoadFiles():
x = pd.read_csv('columns_description.csv', index_col=None)
print("Columns Description")
print(f"Number of rows/records: {x.shape[0]}")
print(f"Number of columns/variables: {x.shape[1]}")
return x
x2 = LoadFiles()
x2.head()
</code></pre>
<p>Variables defined in a function are only available inside that function. You may need to study scope. I recommend the following simple site about scope in Python.</p>
<p><a href="https://www.w3schools.com/python/python_scope.asp" rel="nofollow noreferrer">https://www.w3schools.com/python/python_scope.asp</a></p>
|
python|pandas|jupyter-notebook|vscode-python|exploratory-data-analysis
| 0
|
374,888
| 70,274,443
|
Parse CSV to Extract Filenames and Rename Files (Python)
|
<p>I'm looking to extract filenames from a comma-separated CSV, rename the files they refer to with sequential numbering, and then update the CSV accordingly.</p>
<p>I am able to extract all the first column:</p>
<pre><code>import pandas as pd
my_data = pd.read_csv('test.csv', sep=',', header=0, usecols=[0])
</code></pre>
<p>And then the list of entries that I need:</p>
<p><code>values = list(x for x in my_data["full path"])</code></p>
<p>From there I want to use that path to rename each file sequentially as per its path(1.msg, 2.msg, 3.msg), then go back and update the CSV with the "new" path.</p>
<p>My CSV looks like:</p>
<pre><code>full path, name, data1, data2
\path\to\a\file.msg,data,moredata,evenmoredata
</code></pre>
<p>Existing file path:</p>
<p><code>\path\to\a\file.msg</code></p>
<p>New file path:</p>
<p><code>\path\to\a\1.msg</code></p>
<p>Any help is appreciated.</p>
|
<p>You can directly modify the <code>dataframe</code> and the files by iterating through the dataframe itself. Once you have edited the desired rows, you persist the dataframe by rewriting it to a csv file (the same file if you want to overwrite it). I assume here that <code>file_path</code> is the name of the column containing the filepath: change it accordingly.</p>
<p>Explanations come with the code comments</p>
<pre><code>import os
import pandas as pd
# I'm assuming everything is correct up to the data reading
df = pd.read_csv('test.csv', sep=',', header=0, usecols=[0])
# You can iterate trough the index of the dataframe itself. Were it inconsistent, you can use a custom one (here `k`)
k = 0
for index, row in df.iterrows():
# Extract the current file path, `/path/to/file.msg
fp = row['file_path']
# Extract the filename, e.g. `file.msg`
fn = os.path.basename(fp)
# Extract the dir path, e.g. `/path/to
dir_path = os.path.dirname(fp)
# split and separate from the extention
name, ext = os.path.splitext(fn)
# Reconstruct the new filepath
new_path = os.path.join(dir_path, str(k) + ext)
# Important to try in order to avoid any prohibited access to the file or its absence
try:
os.rename(fp,new_path)
except:
# Here you can enrich the code to handle different specific exceptions
print(f'Error: file {fp} cannot be renamed')
else:
# If the file was correctly replaced, you can modify the dataframe. Put this code outside the try-except-else block to modify the dataframe in any case. NOTE: the index here MUST be that of the dataframe! Not your custom one.
df.at[index, 'file_path'] = new_path
k = k + 1
# Overwrite the dataframe. Adjust accordingly!
df.to_csv('test.csv')
</code></pre>
<p>Disclaimer: I couldn't try the code above. In case of any slip, I'll correct it as soon as possible</p>
|
pandas|csv
| 0
|
374,889
| 70,089,168
|
Python Selenium Table Body Data Extraction
|
<p>I am trying to get the data elements of class <code>td</code> from my table, but my code consistently is only capable of pulling the rows from the <code>thead</code>. If I add <code>find_element_by_tag_name("tbody")</code>, then I get the classic <em>Message: no such element: Unable to locate element...</em>. Any tips?</p>
<p><strong>Source Code</strong>: from <a href="https://shinyapps.asee.org/apps/Profiles/" rel="nofollow noreferrer">https://shinyapps.asee.org/apps/Profiles/</a></p>
<pre><code><table class="cell-border stripe compact dataTable no-footer" id="DataTables_Table_4" role="grid" aria-describedby="DataTables_Table_4_info">
<thead>
<tr>
<th>...</th>
.
.
.
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
.
.
.
</tr>
.
.
.
</tbody>
</table>
</code></pre>
<p><strong>Selenium Python</strong>:</p>
<pre><code>for opt in element.find_elements_by_css_selector("div.option"):
#Record College Names
colleges.append(opt.get_attribute("data-value"))
time.sleep(2)
#Select College
opt.click() #does pull data into graph
#Scrape Data
table = driver.find_element_by_tag_name("table")
alldata = table.find_element_by_tag_name("tbody")
rows = table.find_elements_by_tag_name("tr")
#print(table.tag_name)
for row in rows:
print(row.tag_name)
data = []
data.append(year)
data.append("Degrees Awarded")
data_elements = row.find_elements_by_tag_name("td")
#add to pandas table
for fact in data_elements:
try:
data.append(fact.text)
except:
print("nothing")
print(data)
#DF.loc[len(DF.index)]=data
#reclick on dropdown box to get next school's data
element.click()
</code></pre>
|
<p>There are two table elements - one for the <strong>Header</strong> (without <code>id</code> attribute) and the other for the <strong>Data</strong> (with <code>id</code> attribute).</p>
<p>Try like below and confirm.</p>
<pre><code>driver.get("https://shinyapps.asee.org/apps/Profiles/")
# Code to select "Degrees Awarded" and other option in the drop down.
table_header = driver.find_elements(By.XPATH,"//table[not(@id)]//th")
header_row = []
for header in table_header:
header_row.append(header.text)
print(header_row)
table_data = driver.find_elements(By.XPATH,"//table[@id]/tbody/tr")
for row in table_data:
columns = row.find_elements(By.XPATH,"./td") # Use dot in the xpath to find elements with in element.
table_row = []
for column in columns:
table_row.append(column.text)
print(table_row)
</code></pre>
<pre><code>['INSTITUTIONS', 'DEGREE NAME', 'DISCIPLINE NAME', 'NON RES ALIEN M', 'NON RES ALIEN F', 'UNKNOWN M', 'UNKNOWN F', 'HISPANIC M', 'HISPANIC F', 'AMERICAN INDIAN M', 'AMERICAN INDIAN F', 'ASIAN AMERICAN M', 'ASIAN AMERICAN F', 'AFRICAN AMERICAN M', 'AFRICAN AMERICAN F', '', '', '', '', '', '', '']
['Air Force Institute of Technology', 'Aeronautical Engineering (M.S)', 'Aerospace Engineering', '0', '0', '0', '0', '1', '0', '0', '0', '1', '0', '0', '0', '0', '0', '17', '4', '0', '0', '23']
['Air Force Institute of Technology', 'Applied Mathematics (M.S)', 'Other Engineering Disciplines', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '1', '0', '0', '0', '1']
...
</code></pre>
<p>To get the <code>id</code> attribute of the table element, you can use below lines.</p>
<pre><code>table_id = driver.find_element(By.XPATH,"//table[@id]").get_attribute("id")
print(table_id)
</code></pre>
<pre><code>DataTables_Table_3
</code></pre>
|
python|pandas|selenium|datatables
| 0
|
374,890
| 70,081,419
|
How to take transpose of one particular DataFrame column in Python? Also how to get certain values from second iteration onwards from 'for' loop?
|
<p>I am running a 'for' loop whose output is a data frame with two columns, column 1 with column names and column 2 with data. It can be seen below:</p>
<p><a href="https://i.stack.imgur.com/kaUK9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kaUK9.png" alt="enter image description here" /></a></p>
<p>Next, I would take the transpose of this data like this:</p>
<p><a href="https://i.stack.imgur.com/Vn4Gh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Vn4Gh.png" alt="enter image description here" /></a></p>
<p>In the next iteration I would again get similar data (with 2 columns, as in the first table), for which I don't need the column names; the data should just append to the data from the first iteration, as seen here:</p>
<p><a href="https://i.stack.imgur.com/TTfrS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TTfrS.png" alt="enter image description here" /></a></p>
|
<p>For your first question, first set your index to your desired column-names column, then transpose using <code>T</code>:</p>
<p><code>df2=df.set_index('1').T</code></p>
<pre><code>1 Column1 Column2 Column3 Column4 Column5
2 data_1_1 data_1_2 data_1_3 data_1_4 data_1_5
</code></pre>
|
python|pandas|dataframe|for-loop
| -1
|
374,891
| 70,281,636
|
Two Pandas dataframes, how to interpolate row-wise using scipy
|
<p>How can I use scipy interpolate on two dataframes, interpolating row-rise?</p>
<p>For example, if I have:</p>
<pre><code>dfx = pd.DataFrame({"a": [0.1, 0.2, 0.5, 0.6], "b": [3.2, 4.1, 1.1, 2.8]})
dfy = pd.DataFrame({"a": [0.8, 0.2, 1.1, 0.1], "b": [0.5, 1.3, 1.3, 2.8]})
display(dfx)
display(dfy)
</code></pre>
<p><a href="https://i.stack.imgur.com/wpVb5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wpVb5.png" alt="enter image description here" /></a></p>
<p>And say I want to interpolate for y(x=0.5), how can I get the results into an array that I can put in a new dataframe?</p>
<p>Expected result is: <code>[0.761290323 0.284615385 1.1 -0.022727273]</code></p>
<p>For example, for first row, you can see the expected value is 0.761290323:</p>
<pre><code>x = [0.1, 3.2] # from dfx, row 0
y = [0.8,0.5] # from dfy, row 0
fig, ax = plt.subplots(1,1)
ax.plot(x,y)
f = scipy.interpolate.interp1d(x,y)
out = f(0.5)
print(out)
</code></pre>
<p><a href="https://i.stack.imgur.com/A81pJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A81pJ.png" alt="enter image description here" /></a></p>
<p>I tried the following but received <code>ValueError: x and y arrays must be equal in length along interpolation axis.</code></p>
<pre><code>f = scipy.interpolate.interp1d(dfx, dfy)
out = np.exp(f(0.5))
print(out)
</code></pre>
|
<p>Since you are looking for linear interpolation, you can do:</p>
<pre><code>def interpolate(val, dfx, dfy):
t = (dfx['b'] - val) / (dfx['b'] - dfx['a'])
return dfy['a'] * t + dfy['b'] * (1-t)
interpolate(0.5, dfx, dfy)
</code></pre>
<p>Output:</p>
<pre><code>0 0.885714
1 0.284615
2 1.100000
3 -0.022727
dtype: float64
</code></pre>
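<p>If you would rather stay with <code>scipy.interpolate</code> as in your row-by-row example, a sketch that simply applies it per row (with extrapolation enabled, since e.g. the last row's x-range does not contain 0.5) could look like this:</p>
<pre><code>import numpy as np
import scipy.interpolate

out = np.array([
    scipy.interpolate.interp1d(dfx.loc[i], dfy.loc[i], fill_value="extrapolate")(0.5)
    for i in dfx.index
])
print(out)  # approximately [ 0.76129032  0.28461538  1.1        -0.02272727]
</code></pre>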
|
pandas|scipy|interpolation
| 0
|
374,892
| 70,175,266
|
How to take a subset of the columns of a pandas data frame?
|
<p>I have got a pandas data frame with multiple columns and a list with column indices (0, 1, ..., n) that index a subset of the columns of the data frame. How can I create a new data frame with exactly this subset of columns?</p>
|
<p>The answer to your question can be found in the pandas documentation:</p>
<p><a href="https://pandas.pydata.org/docs/getting_started/intro_tutorials/03_subset_data.html" rel="nofollow noreferrer">How do I select a subset of DataFrame</a></p>
<p>The article displays many different ways to do it.</p>
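<p>For the specific case in the question (a Python list of integer column indices), a minimal sketch is positional selection with <code>iloc</code>:</p>
<pre><code>cols = [0, 2, 5]                    # example list of column indices
subset = df.iloc[:, cols].copy()    # new DataFrame with just those columns
</code></pre>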
|
python|pandas|dataframe
| 1
|
374,893
| 70,202,728
|
More Efficient Way To Insert Dataframe into SQL Server
|
<p>I am trying to update a SQL table with updated information which is in a dataframe in pandas.</p>
<p>I have about 100,000 rows to iterate through and it's taking a long time. Is there any way I can make this code more efficient? Do I even need to truncate the data? Most rows will probably be the same.</p>
<pre><code> conn = pyodbc.connect ("Driver={xxx};"
"Server=xxx;"
"Database=xxx;"
"Trusted_Connection=yes;")
cursor = conn.cursor()
cursor.execute('TRUNCATE dbo.Sheet1$')
for index, row in df_union.iterrows():
print(row)
cursor.execute("INSERT INTO dbo.Sheet1$ (Vendor, Plant) values(?,?)", row.Vendor, row.Plant)
</code></pre>
<p>Update: This is what I ended up doing.</p>
<pre><code> params = urllib.parse.quote_plus(r'DRIVER={xxx};SERVER=xxx;DATABASE=xxx;Trusted_Connection=yes')
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
engine = create_engine(conn_str)
df = pd.read_excel('xxx.xlsx')
print("loaded")
df.to_sql(name='tablename',schema= 'dbo', con=engine, if_exists='replace',index=False, chunksize = 1000, method = 'multi')
</code></pre>
|
<p>Don't use <code>for</code> loops or <code>cursors</code>, just <code>SQL</code>:</p>
<pre><code>insert into TABLENAMEA (A,B,C,D)
select A,B,C,D from TABLENAMEB
</code></pre>
<p>Take a look at this link to see another demo:
<a href="https://www.sqlservertutorial.net/sql-server-basics/sql-server-insert-into-select/" rel="nofollow noreferrer">https://www.sqlservertutorial.net/sql-server-basics/sql-server-insert-into-select/</a></p>
<p>You just need to update this part to run a normal insert</p>
<pre><code>conn = pyodbc.connect ("Driver={xxx};"
"Server=xxx;"
"Database=xxx;"
"Trusted_Connection=yes;")
cursor = conn.cursor()
cursor.execute('insert into TABLENAMEA (A,B,C,D) select A,B,C,D from TABLENAMEB')
</code></pre>
<p>You don't need to store the dataset in a variable, just run the query directly as normal SQL; performance will be better than an iteration.</p>
|
python|sql|sql-server|pandas
| 0
|
374,894
| 70,086,045
|
Grouping by ID choosing highest values in columns from same ID
|
<p>I have a problem trying to calculate some final test marks. I need to group by Students, getting only the highest value in each column for each student.</p>
<p>Being DF the dataframe:</p>
<pre><code>data = {'Students': ['Student1', 'Student1', 'Student1', 'Student2','Student2','Studen3'],
'Result1': [2, 4, 5, 8, 2, 5],
'Result2': [5, 3, 2, 8, 5, 5],
'Result3': [7, 5, 7, 3, 8, 9]}
df = pd.DataFrame(data)
Students Result1 Result2 Result3
0 Student1 2 5 7
1 Student1 4 3 5
2 Student1 5 2 7
3 Student2 8 8 3
4 Student2 2 5 8
5 Studen3 5 5 9
</code></pre>
<p>I need to generate a DF choosing the higher mark, for each student, in each Result.</p>
<p>So, the final DF should look like:</p>
<pre><code> Students Result1 Result2 Result3
0 Student1 5 5 7
1 Student2 8 8 8
2 Student3 5 5 9
</code></pre>
<p>Any help?</p>
|
<p>The dataframe can be generated by simply iterating over the groups:</p>
<pre><code>df2 = pd.DataFrame(columns=('Student', 'res1', 'res2', 'res3'))
for s in df.Students.unique():
stdf = df[df["Students"]==s]
df2 = df2.append({'Student':s,'res1':max(stdf.Result1),'res2':max(stdf.Result2),
'res3':max(stdf.Result3)}, ignore_index=True)
</code></pre>
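<p>If you prefer the built-in aggregation over the loop (and want to avoid <code>append</code>, which recent pandas versions have removed), an equivalent one-liner is:</p>
<pre><code>df2 = df.groupby('Students', as_index=False).max()   # highest value of each result column per student
</code></pre>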
|
python|pandas|pandas-groupby
| 2
|
374,895
| 70,143,435
|
Creating labels and updating a column based on multiple conditions
|
<p>I have a data frame which looks like this:</p>
<pre><code>data = {
'user_id': [
'9EPWZVMNP6D6KWX', '9EPWZVMNP6D6KWX', '9EPWZVMNP6D6KWX',
'9EPWZVMNP6D6KWX', '9EPWZVMNP6D6KWX', '9EPWZVMNP6D6KWX'
],
'timestamp': [
1612139269, 1612139665, 1612139579,
1612141096, 1612143046, 1612143729
],
'type': [
'productDetails', 'productDetails', 'checkout:confirmation',
'checkout:confirmation', 'productList', 'checkout:confirmation'
],
'session': [0,1,2,3,4,5],
'count_session_products': [4, 1, 0, 4, 2, 2],
'loyalty' : [0,0,0,0,0,0]
}
test_df = pd.DataFrame(data)
test_df
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>user_id</th>
<th>timestamp</th>
<th>type</th>
<th>session</th>
<th>prods</th>
<th>loyalty</th>
</tr>
</thead>
<tbody>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139269</td>
<td>productDetails</td>
<td>0</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139665</td>
<td>productDetails</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139579</td>
<td>checkout:confirmation</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612141096</td>
<td>checkout:confirmation</td>
<td>3</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143046</td>
<td>productList</td>
<td>4</td>
<td>2</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143729</td>
<td>checkout:confirmation</td>
<td>5</td>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to create loyalty labels and their conditions such as:</p>
<ul>
<li><p><code>first_time_visitor</code> - any user with session = 0</p>
</li>
<li><p><code>frequent_visitor</code> - any user with session > 0 and count_session_products > 0</p>
</li>
<li><p><code>first_time_customer</code> - first time checkout:confirmation appears in the type column</p>
</li>
<li><p><code>repeat_customer</code> - second time of checkout:confirmation appears in the type column</p>
</li>
<li><p><code>loyal_customer</code> - third time of checkout:confirmation appears in the type column</p>
</li>
</ul>
<p>I have already have the conditions for <code>first_time_visitor</code> and <code>frequent_visitor</code> but I am having trouble creating <code>first_time_customer</code>, <code>repeat_customer</code> and <code>loyal_customer</code> labels.</p>
<p>Conditions for <code>first_time_visitor</code> and <code>frequent_visitor</code> are as follows:</p>
<pre><code>test_df['loyalty'] = np.where((test_df['session'] > 0) & ((test_df['type'] != 'checkout:confirmation')), 'frequent_visitor', None)
test_df.loc[test_df['session'] == 0, 'loyalty'] = 'first_time_visitor'
</code></pre>
<p>which gives me a dataframe looking like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>user_id</th>
<th>timestamp</th>
<th>type</th>
<th>session</th>
<th>prods</th>
<th>loyalty</th>
</tr>
</thead>
<tbody>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139269</td>
<td>productDetails</td>
<td>0</td>
<td>4</td>
<td>first_time_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139665</td>
<td>productDetails</td>
<td>1</td>
<td>1</td>
<td>frequent_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139579</td>
<td>checkout:confirmation</td>
<td>2</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612141096</td>
<td>checkout:confirmation</td>
<td>3</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143046</td>
<td>productList</td>
<td>4</td>
<td>2</td>
<td>frequent_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143729</td>
<td>checkout:confirmation</td>
<td>5</td>
<td>2</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>I have only had a couple of ideas so far, one being to use <code>first_valid_index()</code> or <code>argmax()</code> to find the index and somehow use that in a condition to create the <code>first_time_customer</code> label, but I am not sure how to implement these conditions.</p>
<pre><code>(test_df.type.values == 'checkout:confirmation').argmax()
test_df[test_df.type == 'checkout:confirmation'].first_valid_index()
</code></pre>
<p>In the end, I would expect my loyalty column to look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>user_id</th>
<th>timestamp</th>
<th>type</th>
<th>session</th>
<th>prods</th>
<th>loyalty</th>
</tr>
</thead>
<tbody>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139269</td>
<td>productDetails</td>
<td>0</td>
<td>4</td>
<td>first_time_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139665</td>
<td>productDetails</td>
<td>1</td>
<td>1</td>
<td>frequent_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612139579</td>
<td>checkout:confirmation</td>
<td>2</td>
<td>0</td>
<td>first_time_customer</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612141096</td>
<td>checkout:confirmation</td>
<td>3</td>
<td>4</td>
<td>repeat_customer</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143046</td>
<td>productList</td>
<td>4</td>
<td>2</td>
<td>frequent_visitor</td>
</tr>
<tr>
<td>9EPWZVMNP6D6KWX</td>
<td>1612143729</td>
<td>checkout:confirmation</td>
<td>5</td>
<td>2</td>
<td>loyal_customer</td>
</tr>
</tbody>
</table>
</div>
<p>Any suggestions and help would be appreciated. Thank you!</p>
|
<p>Edit: my original post ran the first loop over <code>n</code> in reverse; I don't think that is needed or that it helps. I also updated it so that we don't count every previous occurrence of checkout:confirmation but instead just add 1 to the last checkout:confirmation count, so we can skip running through as many lines.</p>
<p>I think I was able to get a solution that uses a nested loop over the dataframe (with the inner loop reversed). Probably not the most efficient thing in the world, but it should work.</p>
<p>It uses the idea that each checkout moves the user up a level, so we track loyalty as an integer and just add 1 before mapping those integers to words. Since loyalty level 3 is the highest, the level is capped at 3, and the inner loop stops as soon as it finds the user's most recent previous checkout.</p>
<p>This doesn't do anything with the visitor status, so you'll want to make sure that your two methods work together for that.</p>
<pre class="lang-py prettyprint-override"><code>df['l'] = 0  # new temp column in the df to store the loyalty level as an integer
for n in range(len(df)):  # loop through the dataframe row by row
    l = 0  # loyalty level for this row
    if df['type'][n] == 'checkout:confirmation':  # only checkouts raise the level
        l = 1  # a checkout makes the user at least a first-time customer
        for i in reversed(range(n)):  # look upwards for this user's most recent previous checkout
            if df['user_id'][n] == df['user_id'][i] and df['type'][i] == 'checkout:confirmation':
                l = min(df['l'][i] + 1, 3)  # one level above the previous checkout, capped at 3 (the max)
                break
    df.at[n, 'l'] = l  # write the loyalty level back to row n (avoids chained assignment)
df['loyalty'] = df['l'].map({1: 'first_time_customer', 2: 'repeat_customer', 3: 'loyal_customer'})  # map the integers to labels; non-purchase rows get NaN
</code></pre>
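<p>If the loop gets slow on bigger data, here is a vectorized sketch of the same idea (a sketch, assuming the <code>test_df</code> from the question with the visitor labels already set; it counts each user's checkouts in row order):</p>
<pre><code>is_checkout = test_df['type'].eq('checkout:confirmation')
# running count of checkouts per user, capped at 3 (the highest loyalty level)
checkout_rank = is_checkout.astype(int).groupby(test_df['user_id']).cumsum().clip(upper=3)
labels = {1: 'first_time_customer', 2: 'repeat_customer', 3: 'loyal_customer'}
test_df.loc[is_checkout, 'loyalty'] = checkout_rank[is_checkout].map(labels)
</code></pre>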
|
python|pandas|dataframe|conditional-statements|data-analysis
| 1
|
374,896
| 70,083,221
|
Rename pandas column of type datetime pandas
|
<p>I have a dataframe like this:</p>
<pre><code>df=pd.DataFrame(data={'2021-11-21':['10','20'],'2021-11-14':['39','21']})
df
2021-11-21 2021-11-14
10 39
20 21
</code></pre>
<p>I want to rename the columns like this:</p>
<pre><code>curr_week_2021-11-21 prev_week_2021-11-14
10 39
20 21
</code></pre>
<p>I tried this:</p>
<pre><code>df_cols=df.columns.to_list()
df=df.rename(columns={'2021-11-21':'curr_week_'+df_cols[0],'2021-11-14':'prev_week_'+df_cols[1]})
</code></pre>
<p>But it did not work.</p>
|
<p>One way to do it programmatically with an arbitrary list of prefixes would be to use <code>map</code>/<code>zip</code>/<code>join</code>:</p>
<pre><code>prefixes = ['curr_week', 'prev_week']
df.columns = map('_'.join, zip(prefixes, df.columns))
</code></pre>
<p>output:</p>
<pre><code> curr_week_2021-11-21 prev_week_2021-11-14
0 10 39
1 20 21
</code></pre>
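<p>If you prefer to stick with <code>rename</code>, an equivalent sketch uses a dict comprehension over the same <code>zip</code>:</p>
<pre><code>df = df.rename(columns={c: f'{p}_{c}' for p, c in zip(prefixes, df.columns)})
</code></pre>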
|
python|pandas
| 1
|
374,897
| 70,103,620
|
How to create different dataframes from dictionaries
|
<p>I have a dataframe with dictionaries saved under two columns:</p>
<pre><code>Name Trust_Value Affordability_Value
0 J. {'J.': 0.25, 'M.': 0.23} {'Z.': 0.024, 'M.': 0.34}
1 M. {'M.': 0.12, 'S.': 0.14} {'S.': 0.017, 'B.': 0.21}
1 C. {'S.': 0.21, 'N.': 0.13} {'D.': 0.015, 'B.': 0.22}
</code></pre>
<p>For each name I would like to have a separate dataframe including the <code>Name</code> of interest, <code>Trust_Value</code> (key and value in separate columns) and <code>Affordability_Value</code> (key and value in separate columns):</p>
<pre><code>df1 (J.):
Name Trust_Key Trust_Value Affordability_Key Affordability_Value
0 J. J. 0.25 Z. 0.024
M. 0.23 M. 0.34
df2 (M.):
Name Trust_Key Trust_Value Affordability_Key Affordability_Value
0 M. M. 0.12 S. 0.017
           S.            0.14          B.                  0.21
df3 (C.):
Name Trust_Key Trust_Value Affordability_Key Affordability_Value
0    C.    S.            0.21          D.                  0.015
N. 0.13 B. 0.22
</code></pre>
<p>I have no difficulty splitting the key-value pairs; my difficulty is in generating different dataframes that include these values in separate columns.</p>
<p>The output from <code>df.head().to_dict()</code> is the following (I took only the first three elements):</p>
<pre><code>{'Name': {0: 'J.', 1: 'M.', 2: 'C.'},
 'Trust_Value': {0: {'J.': 0.25, 'M.': 0.23, 'D.': 0.22, 'S.': 0.12, 'N.': 0.12},
                 1: {'M.': 0.12, 'S.': 0.14, 'C.': 0.12, 'D.': 0.12},
                 2: {'S.': 0.21, 'N.': 0.13, 'C.': 0.34, 'D.': 0.12, 'T.': 0.42}},
 'Affordability_Value': {0: {'Z.': 0.024, 'M.': 0.34, 'D.': 0.21, 'X.': 0.23, 'N.': 0.15},
                         1: {'S.': 0.51, 'B.': 0.21, 'C.': 0.29, 'D.': 0.12},
                         2: {'D.': 0.26, 'B.': 0.26, 'C.': 0.38, 'D2.': 0.25, 'T.': 0.42}}}
</code></pre>
|
<p>You first need to <code>explode</code> your dictionaries:</p>
<pre><code>df2 = (df.assign(Trust_Key=df['Trust_Value'].apply(lambda d: d.values()),
Affordability_Key=df['Affordability_Value'].apply(lambda d: d.values())
)
.set_index('Name')
.apply(pd.Series.explode)
.reset_index()
)
</code></pre>
<p>Output:</p>
<pre><code> Name Trust_Value Affordability_Value Trust_Key Affordability_Key
0 J. J. Z. 0.25 0.024
1 J. M. M. 0.23 0.34
2 J. D. D. 0.22 0.21
3 J. S. X. 0.12 0.23
4 J. N. N. 0.12 0.15
5 M. M. S. 0.12 0.51
6 M. S. B. 0.14 0.21
7 M. C. C. 0.12 0.29
8 M. D. D. 0.12 0.12
...
</code></pre>
<p>Then you can split the new dataframe using <code>groupby</code>:</p>
<pre><code>for name, d in df2.groupby('Name'):
print(name)
print(d)
# you can save to CSV instead
# d.to_csv(f'{name}.csv')
</code></pre>
<p>Output:</p>
<pre><code>C.
Name Trust_Value Affordability_Value Trust_Key Affordability_Key
9 C. S. D. 0.21 0.26
10 C. N. B. 0.13 0.26
11 C. C. C. 0.34 0.38
12 C. D. D2. 0.12 0.25
13 C. T. T. 0.42 0.42
...
</code></pre>
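<p>If you want to keep the per-name dataframes around instead of printing or saving them, a small sketch (assuming <code>df2</code> from above) collects them in a dict keyed by name:</p>
<pre><code>frames = {name: d.reset_index(drop=True) for name, d in df2.groupby('Name')}
frames['J.']  # the dataframe for J.
</code></pre>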
|
python|pandas|dataframe
| 3
|
374,898
| 70,264,645
|
How to read parquet file partitioned by date folder to dataframe from s3 using python?
|
<p>Using Python, I need to navigate to the <code>cwp</code> folder, go into the most recent date folder, and read the parquet file there. I have the following folder structure inside S3.</p>
<p><strong>Sample s3 path:</strong></p>
<p><strong>bucket name = lla.analytics.dev</strong></p>
<p><strong>path = bigdata/dna/fixed/cwp/dt=YYYY-MM-DD/file.parquet</strong></p>
<pre><code>s3://lla.analytics.dev/bigdata/dna/fixed/cwp/dt=2021-11-24/file.parquet
dt=2021-11-25/file.parquet
dt=2021-11-26/file.parquet
........................
........................
dt=YYYY-MM-DD/file.parquet
</code></pre>
<p>I need to access the most recent date folder and read its files into a dataframe from S3.</p>
|
<p>I see you have pyarrow tagged. If you would like to use pyarrow (disclaimer, I work with pyarrow), you should be able to do:</p>
<pre><code>import pyarrow.fs as fs
import pyarrow.dataset as ds
s3, path = fs.FileSystem.from_uri("s3://lla.analytics.dev/bigdata/dna/fixed/cwp")
dataset = ds.dataset(path, partitioning='hive', filesystem=s3, format='parquet')
table = dataset.to_table()
</code></pre>
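<p>To then keep only the most recent <code>dt</code> partition and get a pandas DataFrame, something along these lines should work (a sketch, assuming the <code>dataset</code> object from above; with hive partitioning, the <code>dt=YYYY-MM-DD</code> folders show up as a <code>dt</code> column):</p>
<pre><code>import pyarrow.dataset as ds

dt_values = dataset.to_table(columns=['dt'])['dt'].to_pylist()  # partition values only
latest = max(dt_values)  # YYYY-MM-DD values sort chronologically
df = dataset.to_table(filter=ds.field('dt') == latest).to_pandas()
</code></pre>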
<p>There are a lot more details in pyarrow's <a href="https://arrow.apache.org/docs/python/filesystems.html" rel="nofollow noreferrer">filesystem docs</a> and <a href="https://arrow.apache.org/docs/python/dataset.html" rel="nofollow noreferrer">tabular dataset</a> docs. There are also recipes for this on the <a href="https://arrow.apache.org/cookbook/py/io.html#reading-partitioned-data-from-s3" rel="nofollow noreferrer">pyarrow cookbook</a>.</p>
|
python|pandas|dataframe|pyarrow|fastparquet
| 2
|
374,899
| 70,343,845
|
pandas/JupyterLab hiding first half of string between $$
|
<pre><code> one two three
0 $97500_$9500 $9000_$7500 nan
1 $97500_$9500 $9000_$7500 7000
2 $97500_$9500 $9000_$7500 7000
3 $97500_$9500 $9000_$7500 7000
4 $97500_$9500 $9000_$9900 $7500_$7000
5 97500 77500 7000
6 7700 7000 7000
7 9000 7500 nan
8 9000 7500 7000
9 9500 7500 7000
</code></pre>
<p>When I display this pandas dataframe in JupyterLab, it appears like this, with hidden values in boxes between the columns:</p>
<p><a href="https://i.stack.imgur.com/k2FQF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k2FQF.png" alt="enter image description here" /></a></p>
<p>It's obviously the bracketing with the two <code>$</code> that's making this happen, but I can't find this anywhere in the documentation. Has anyone run into this before? What's the purpose of the functionality?</p>
|
<p>Pandas has a <a href="https://pandas.pydata.org/pandas-docs/dev/user_guide/options.html" rel="nofollow noreferrer">display option</a> <code>display.html.use_mathjax</code> which is <code>True</code> by default:</p>
<blockquote>
<p>When True, Jupyter notebook will process table contents using MathJax, rendering mathematical expressions enclosed by the dollar symbol.</p>
</blockquote>
<p>You can change this with <a href="https://pandas.pydata.org/docs/reference/api/pandas.set_option.html" rel="nofollow noreferrer"><code>pd.set_option('display.html.use_mathjax', False)</code></a>. This disables the automatic MathJax styling in pandas output.</p>
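<p>For example, in a notebook cell:</p>
<pre><code>import pandas as pd

pd.set_option('display.html.use_mathjax', False)  # show $...$ literally in DataFrame output
df  # the values between the dollar signs now display as plain text
</code></pre>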
<p>Alternatively, you could try to change the styling. See this issue referencing a similar situation: <a href="https://github.com/pandas-dev/pandas/issues/40318" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/40318</a></p>
|
python|pandas|numpy|jupyter-notebook|jupyter-lab
| 1
|