| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
4,500
| 69,164,245
|
Filter out rows based on condition
|
<p>I have a dataframe:</p>
<pre><code> | from id | from group | to id | to group |
| 1 | A | 3 | B |
| 4 | B | 4 | X |
| 5 | F | 5 | J |
| 2 | B | 3 | A |
</code></pre>
<p>Looking at the 'from group' and 'to group' columns: I want to remove rows where the pair 'A and B' or the pair 'F and J' is present across the two columns.</p>
<p>Expected output:</p>
<pre><code> | from id | from group | to id | to group |
| 4 | B | 4 | X |
</code></pre>
<p>I am looking for a solution that is flexible, meaning that if a 3rd condition were added, the code would not need to change.</p>
|
<p>Supposing that you're using <code>pandas</code>, try something like:</p>
<pre><code>df.loc[~((df['from group'].isin(['A','B'])) & (df['to group'].isin(['A','B'])))]
</code></pre>
<p>The <code>~</code> in front of the first parenthesis negates the filter that follows.</p>
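Since the question asks for a solution that stays flexible as conditions are added, the same idea can be driven by a list of group sets; a sketch, where `pairs` is a hypothetical variable holding the forbidden group combinations:

```python
import pandas as pd

# sample data from the question
df = pd.DataFrame({'from id': [1, 4, 5, 2],
                   'from group': ['A', 'B', 'F', 'B'],
                   'to id': [3, 4, 5, 3],
                   'to group': ['B', 'X', 'J', 'A']})

# each set lists groups that may not appear in both columns at once;
# a third condition is just one more set appended to this list
pairs = [{'A', 'B'}, {'F', 'J'}]

mask = pd.Series(False, index=df.index)
for p in pairs:
    mask |= df['from group'].isin(p) & df['to group'].isin(p)

out = df[~mask]
print(out)  # keeps only the row 4 | B | 4 | X
```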
|
python|pandas|numpy
| -1
|
4,501
| 69,260,384
|
Get column name where value match with multiple condition python
|
<p>I have been looking for a solution to my problem for an entire day and cannot find the answer. I'm trying to follow the example of this topic: <a href="https://stackoverflow.com/questions/14734695/get-column-name-where-value-is-something-in-pandas-dataframe">Get column name where value is something in pandas dataframe</a>
to make a version with multiple conditions.</p>
<p>I want to extract the column names (as a list) <strong>where</strong>:</p>
<p>value == 4 <strong>or</strong>/<strong>and</strong> value == 3<br />
+<br />
<strong>Only if</strong> there is no 4 or/and 3, then extract the column name where value == 2</p>
<p>Example:</p>
<pre><code>data = {'Name': ['Tom', 'Joseph', 'Krish', 'John'], 'acne': [1, 4, 1, 2], 'wrinkles': [1, 3, 4, 4],'darkspot': [2, 2, 3, 4] }
df1 = pd.DataFrame(data)
df1
</code></pre>
<p><strong>df1</strong><br />
'''</p>
<pre><code> Name acne wrinkles darkspot
0 Tom 1 1 2
1 Joseph 4 3 2
2 Krish 1 4 3
3 John 2 4 4
</code></pre>
<p>'''</p>
<p><strong>The result i'm looking for</strong> :</p>
<p><strong>df2</strong></p>
<pre><code> Name acne wrinkles darkspot problem
0 Tom 1 1 2 [darkspot]
1 Joseph 4 3 2 [acne, wrinkles]
2 Krish 1 4 3 [wrinkles, darkspot]
3 John 2 4 4 [wrinkles, darkspot]
</code></pre>
<p>'''</p>
<p>I tried the apply function with a lambda as detailed in the topic I mentioned above, but it can only take one argument.
Many thanks for your answers if somebody can help me :)</p>
|
<p>You can use a boolean mask:</p>
<pre><code>problems = ['acne', 'wrinkles', 'darkspot']
m1 = df1[problems].isin([3, 4]) # main condition
m2 = df1[problems].eq(2) # fallback condition
mask = m1.copy()
mask.loc[~m1.any(axis=1)] = m2  # use the fallback only for rows with no 3/4
df1['problem'] = mask.mul(problems).apply(lambda x: [i for i in x if i], axis=1)
</code></pre>
<p>Output:</p>
<pre><code>>>> df1
Name acne wrinkles darkspot problem
0 Tom 1 1 2 [darkspot]
1 Joseph 4 3 2 [acne, wrinkles]
2 Krish 1 4 3 [wrinkles, darkspot]
3 John 2 4 4 [wrinkles, darkspot]
</code></pre>
|
python|pandas|dataframe|extract|columnname
| 1
|
4,502
| 69,283,272
|
Excel file gets corrupted after bulking data with panda
|
<p>Ok so apparently this is a very simple task, but for some reason it's giving me trouble.</p>
<p>Here's the code:</p>
<pre><code> marcacoes = pd.read_excel(file_loc, sheet_name="MONITORAMENTO", index_col=None, na_values=['NA'], usecols ="AN")
x=0
while x < len(statusclientes):
if (statusclientes.iloc[x][0] == "Offline"):
p=marcacoes.iloc[x][0]
p=p+1
marcacoes.iat[x,0]= p
tel_off.append(telclientes.iloc[x][0])
if (statusclientes.iloc[x][0] == "Indefinido"):
tel_off.append(telclientes.iloc[x][0])
x=x+1
y=0
with pd.ExcelWriter(file_loc,mode='a',if_sheet_exists='replace') as writer:
marcacoes.to_excel(writer, sheet_name='MONITORAMENTO',startcol=37,startrow=5)
writer.save()
</code></pre>
<p>But the problematic part is:</p>
<pre><code>with pd.ExcelWriter(file_loc,mode='a',if_sheet_exists='replace') as writer:
marcacoes.to_excel(writer, sheet_name='MONITORAMENTO',startcol=37,startrow=5)
writer.save()
</code></pre>
<p>The code runs fine without it.
Those specific lines are supposed to dump the dataframe "marcacoes" into an existing excel file, replacing another existing column in the file, but whenever I run this code, that existing excel file becomes corrupted.</p>
<p>I'm pretty sure I'm missing some fundamentals on pandas here, but I can't find where in the documentation this issue is addressed.</p>
<p>EDIT:</p>
<p>I've tried the following code:</p>
<pre><code>wb = openpyxl.load_workbook(file_loc)
ws = wb['MONITORAMENTO']
startcol = 37
startrow = 5
k = 0
while k < len(marcacoes):
    ws.cell(startrow, startcol).value = marcacoes.iloc[k][0]
    startrow += 1
    k += 1
wb.save(file_loc)
</code></pre>
<p>but the same thing happens, now it's caused by the "wb.save(file_loc)" line.</p>
|
<p>You could try using openpyxl, as such:</p>
<pre><code>from openpyxl import load_workbook
book = load_workbook(file_loc)
writer = pd.ExcelWriter(file_loc, engine='openpyxl')
writer.book = book
writer.sheets = dict((ws.title, ws) for ws in book.worksheets)
marcacoes.to_excel(writer, sheet_name='MONITORAMENTO', startcol=37, startrow=5)
writer.save()
</code></pre>
<p>I have used this approach a couple of times and generally it yields good results.</p>
<p>Also make sure your starting file isn't corrupted. The corruption error can also be caused by pictures, pivot tables, data validation or external connections.</p>
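A caveat: assigning to `writer.book` was deprecated around pandas 1.5 and removed later, so on recent versions the supported route is append mode with `if_sheet_exists='overlay'`. Separately, calling `writer.save()` inside a `with pd.ExcelWriter(...)` block, as in the question, can write the file twice (once explicitly, once on context exit), which by itself can corrupt the workbook. A sketch on a throwaway workbook standing in for the question's `file_loc` (assumes pandas 1.4+ and openpyxl installed):

```python
import pandas as pd

# build a small demo workbook standing in for the question's existing file
file_loc = 'demo_monitoramento.xlsx'
pd.DataFrame({'a': [1, 2, 3]}).to_excel(file_loc, sheet_name='MONITORAMENTO', index=False)

marcacoes = pd.DataFrame({'flag': [10, 20, 30]})

# 'overlay' writes into the existing sheet instead of replacing it wholesale;
# note there is no writer.save() call -- the context manager saves on exit
with pd.ExcelWriter(file_loc, mode='a', engine='openpyxl',
                    if_sheet_exists='overlay') as writer:
    marcacoes.to_excel(writer, sheet_name='MONITORAMENTO',
                       startcol=3, startrow=2, index=False)
```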
|
python|excel|pandas|openpyxl|xlsm
| 1
|
4,503
| 60,882,887
|
How to fill null values in a column conditionally in pandas
|
<p>I have the following dataframe :</p>
<pre><code>time label
2020-03-03 08:35:03.585 ok
2020-03-03 08:05:01.288 ok
2020-03-03 11:50:01.944 faulty
2020-03-03 08:45:04.540 ok
2020-03-12 10:30:02.227 None
2020-03-12 11:10:02.385 None
2020-03-05 11:15:03.526 None
2020-03-10 10:55:01.084 faulty
2020-03-05 11:35:04.563 None
</code></pre>
<p>I would like to only fill null values in <code>label</code> column where <code>time</code> is less than <code>2020-03-10</code>. </p>
<p>I tried</p>
<pre><code> df[df["label"].isna()] =np.where(df['triggerTs'] < '2020-03-10', 'ok' ,'no label')
</code></pre>
<p>But apparently it is not the correct way to do it, because it returns this error:</p>
<p><code>ValueError: Must have equal len keys and value when setting with an iterable</code></p>
|
<p>In your solution it is necessary to filter the missing rows on both sides, so that the assigned array has the same length as the selected part of the <code>label</code> column:</p>
<pre><code>m = df["label"].isna()
df.loc[m, 'label'] = np.where(df.loc[m, 'time'] < '2020-03-10', 'ok' ,'no label')
print (df)
time label
0 2020-03-03 08:35:03.585 ok
1 2020-03-03 08:05:01.288 ok
2 2020-03-03 11:50:01.944 faulty
3 2020-03-03 08:45:04.540 ok
4 2020-03-12 10:30:02.227 no label
5 2020-03-12 11:10:02.385 no label
6 2020-03-05 11:15:03.526 ok
7 2020-03-10 10:55:01.084 faulty
8 2020-03-05 11:35:04.563 ok
</code></pre>
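The same logic can also be written without touching `np.where`'s output shape at all, by letting `fillna` do the positional filtering; a sketch on a shortened version of the question's data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'time': pd.to_datetime(['2020-03-03 08:35:03.585',
                            '2020-03-12 10:30:02.227',
                            '2020-03-05 11:15:03.526',
                            '2020-03-10 10:55:01.084']),
    'label': ['ok', None, None, 'faulty'],
})

# fillna only touches the missing labels, so the date test never
# overwrites an existing 'ok'/'faulty' value
fill = pd.Series(np.where(df['time'] < '2020-03-10', 'ok', 'no label'),
                 index=df.index)
df['label'] = df['label'].fillna(fill)
print(df['label'].tolist())  # ['ok', 'no label', 'ok', 'faulty']
```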
|
python|python-3.x|pandas|dataframe
| 0
|
4,504
| 71,621,643
|
how use some function to avoid writing for loops?
|
<p>I have a data frame like this:</p>
<pre><code>2pair counts
'A','B','C','D' 5
'A','B','K','D' 3
'A','B','P','R' 2
'O','Y','C','D' 1
'O','Y','CL','lD' 4
</code></pre>
<p>I want to make a nested list grouped by the first 2 letters: the first element of each group is those 2 letters, and the remaining elements pair the other 2 letters with the counts column. For example, for the above data the result should be:</p>
<pre><code>[
[
['A','B'],
[['C','D'],5],
[['K','D'],3],
['P','R'],2]
],
[
['O','Y'],
[['C','D'],1],
[['CL','lD'],4]
]
]
</code></pre>
<p>The following code does exactly what I want, but it is too slow. How can I make it faster?</p>
<pre><code>pairs=[]
trans=[]
for i in range(df3.shape[0]):
if df3['2pair'].values[i].split(',')[:2] not in trans:
trans.append(df3['2pair'].values[i].split(',')[:2])
sub=[]
sub.append(df3['2pair'].values[i].split(',')[:2])
for j in range(df3.shape[0]):
if df3['2pair'].values[i].split(',')[:2]==df3['2pair'].values[j].split(',')[:2]:
sub.append([df3['2pair'].values[j].split(',')[2:],df3['counts'].values[j]])
pairs.append(sub)
</code></pre>
|
<p>Here's one way using <code>str.split</code> to split the strings in <code>2pair</code> column; then use <code>groupby.apply</code> + <code>to_dict</code> to create the lists:</p>
<pre><code>df[['head', 'tail']] = [[(*x[:2],), x[2:]] for x in df['2pair'].str.split(',')]
out = [[[*k]] + v for k,v in (df.groupby('head')[['tail','counts']]
.apply(lambda x: x.to_numpy().tolist()).to_dict()
.items())]
</code></pre>
<p>Output:</p>
<pre><code>[[['A', 'B'], [['C', 'D'], 5], [['K', 'D'], 3], [['P', 'R'], 2]],
[['O', 'Y'], [['C', 'D'], 1], [['CL', 'lD'], 4]]]
</code></pre>
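Since the slowness of the original comes from the quadratic double loop, another option is a single plain-Python pass that buckets rows under their first two letters with a dict; a sketch assuming the `2pair` column holds plain comma-separated strings (the quote characters from the question's display are omitted):

```python
from collections import defaultdict
import pandas as pd

df3 = pd.DataFrame({'2pair': ['A,B,C,D', 'A,B,K,D', 'A,B,P,R', 'O,Y,C,D', 'O,Y,CL,lD'],
                    'counts': [5, 3, 2, 1, 4]})

# one O(n) pass: bucket each row under its first two letters
groups = defaultdict(list)
for s, cnt in zip(df3['2pair'], df3['counts']):
    parts = s.split(',')
    groups[tuple(parts[:2])].append([parts[2:], cnt])

# dicts preserve insertion order, so groups come out in first-seen order
pairs = [[list(k)] + rest for k, rest in groups.items()]
print(pairs)
```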
|
python|pandas|list|dataframe
| 2
|
4,505
| 71,600,264
|
Pandas parallel URL downloads with pd.read_html
|
<p>I know I can download a csv file from a web page by doing:</p>
<pre><code>import pandas as pd
import numpy as np
from io import StringIO
URL = "http://www.something.com"
data = pd.read_html(URL)[0].to_csv(index=False, header=True)
file = pd.read_csv(StringIO(data), sep=',')
</code></pre>
<p>Now I would like to <strong>do the above for</strong> <strong>more URLs at the same time</strong>, like when you open different tabs in your browser. In other words, a way to parallelize this when you have different URLs, instead of looping through or doing it one at a time. So, I thought of having a series of URLs inside a dataframe, and then create a new column which contains the strings 'data', one for each URL.</p>
<pre><code>list_URL = ["http://www.something.com", "http://www.something2.com",
"http://www.something3.com"]
df = pd.DataFrame(list_URL, columns =['URL'])
df['data'] = pd.read_html(df['URL'])[0].to_csv(index=False, header=True)
</code></pre>
<p>But it gives me error: <code>cannot parse from 'Series'</code></p>
<p>Is there a better syntax, or does this mean I cannot do this in parallel for more than one URL?</p>
|
<p>You could try it like this:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
URLS = [
"https://en.wikipedia.org/wiki/Periodic_table#Presentation_forms",
"https://en.wikipedia.org/wiki/Planet#Planetary_attributes",
]
df = pd.DataFrame(URLS, columns=["URL"])
df["data"] = df["URL"].map(
lambda x: pd.read_html(x)[0].to_csv(index=False, header=True)
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>print(df)
# Output
URL data
0 https://en.wikipedia.org/wiki/Periodic_t... 0\r\nPart of a series on the\r\nPeriodic...
1 https://en.wikipedia.org/wiki/Planet#Pla... 0\r\n"The eight known planets of the Sol...
</code></pre>
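Note that the `map` call above still downloads one page at a time. For actual tab-like parallelism you could push the downloads through a thread pool, which suits `pd.read_html` well since it is mostly I/O bound; a sketch with a generic helper (the `pd.read_html` usage is shown as a comment so the snippet stays offline):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_map(func, items, max_workers=8):
    """Run func over items concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(func, items))

# applied to the question's dataframe it would look like:
# df["data"] = parallel_map(
#     lambda url: pd.read_html(url)[0].to_csv(index=False, header=True),
#     df["URL"],
# )

print(parallel_map(lambda x: x * x, [1, 2, 3]))  # [1, 4, 9]
```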
|
html|pandas|web-scraping
| 1
|
4,506
| 71,495,713
|
Sampling a dataframe according to some rules: balancing a multilabel dataset
|
<p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame({'id':[10,20,30,40],'text':['some text','another text','random stuff', 'my cat is a god'],
'A':[0,0,1,1],
'B':[1,1,0,0],
'C':[0,0,0,1],
'D':[1,0,1,0]})
</code></pre>
<p>Here I have columns from <code>A</code> to <code>D</code>, but my real dataframe has 100 columns with values of <code>0</code> and <code>1</code>. This real dataframe has 100k records.</p>
<p>For example, the column <code>A</code> is related to the 3rd and 4th rows of <code>text</code>, because it is labeled as <code>1</code>. The same way, <code>A</code> is not related to the 1st and 2nd rows of <code>text</code> because it is labeled as <code>0</code>.</p>
<p>What I need to do is to sample this dataframe in a way that I have the same or about the same number of features.</p>
<p>In this case, the feature <code>C</code> has only one occurrence, so I need to filter all other columns in a way that I have one text with <code>A</code>, one <code>text</code> with <code>B</code>, one <code>text</code> with <code>C</code>, etc.</p>
<p>The best would be if I could set, for example, <code>n=100</code>, meaning I want to sample in a way that I have 100 records with all the features.</p>
<p>This dataset is a multilabel training dataset and is highly unbalanced; I am looking for the best way to balance it for a machine learning task.</p>
<p><strong>Important</strong>: I don't want to exclude the <code>0</code> features. I just want to have ABOUT the same number of columns with <code>1</code> and <code>0</code></p>
<p>For example, with a final dataset of 1k records, I would like to have all columns from <code>A</code> to the <code>final_column</code>, all with the same numbers of <code>1</code> and <code>0</code>. To accomplish this I will need to randomly discard <code>text</code> and <code>id</code> rows only.</p>
<p>The approach I was trying was to look to the feature with the lowest <code>1</code> and <code>0</code> counts and then use this value as threshold.</p>
<p>Edit 1: One possible way I thought is to use:</p>
<pre><code>df.sum(axis=0, skipna=True)
</code></pre>
<p>Then I can use the column with the lowest sum value as a threshold to filter the text column. I don't know how to do this filtering step.</p>
<p>Thanks</p>
|
<p>The exact output you expect is unclear, but assuming you want to get 1 random row per letter with 1 you could reshape (while dropping the 0s) and use <code>GroupBy.sample</code>:</p>
<pre><code>(df
.set_index(['id', 'text'])
.replace(0, float('nan'))
.stack()
.groupby(level=-1).sample(n=1)
.reset_index()
)
</code></pre>
<p><em>NB. you can rename the columns if needed.</em></p>
<p>Output:</p>
<pre><code> id text level_2 0
0 30 random stuff A 1.0
1 20 another text B 1.0
2 40 my cat is a god C 1.0
3 30 random stuff D 1.0
</code></pre>
|
python-3.x|pandas|multilabel-classification
| 1
|
4,507
| 42,530,216
|
How to access weight variables in Keras layers in tensor form for clip_by_weight?
|
<p>I'm implementing WGAN and need to clip weight variables.</p>
<p>I'm currently using <em>Tensorflow</em> with <em>Keras</em> as a high-level API, building layers with Keras to avoid manual creation and initialization of variables.</p>
<p>The problem is that WGAN needs to clip weight variables. This can be done using <code>tf.clip_by_value(x, v0, v1)</code> once I get those weight variable tensors, but I don't know how to get them safely.</p>
<p>One possible solution may be using <code>tf.get_collection()</code> to get all trainable variables. But I don't know how to get only the <strong>weight</strong> variables without the <strong>bias</strong> variables.</p>
<p>Another solution is <code>layer.get_weights()</code>, but it returns <code>numpy</code> arrays. Although I can clip them with <code>numpy</code> APIs and set them using <code>layer.set_weights()</code>, this may require CPU-GPU transfers, and may not be a good choice since the clip operation needs to be performed on each train step.</p>
<p>The only way I know is to access them directly using the <strong>exact</strong> variable names, which I can get from TF lower level APIs or TensorBoard, but this may not be safe since the naming rules of Keras are not guaranteed to be stable.</p>
<p>Is there any clean way to perform <code>clip_by_value</code> only on those <code>W</code>s with Tensorflow and Keras?</p>
|
<p>You can use the constraints class (<a href="https://keras.io/constraints/" rel="nofollow noreferrer">here</a>) to implement new constraints on parameters.</p>
<p>Here is how you can easily implement clip on weights and use it in your model.</p>
<pre><code>from keras.constraints import Constraint
from keras import backend as K
class WeightClip(Constraint):
    '''Clips the weights incident to each hidden unit to be inside a range
    '''
    def __init__(self, c=2):
        self.c = c

    def __call__(self, p):
        return K.clip(p, -self.c, self.c)

    def get_config(self):
        return {'name': self.__class__.__name__,
                'c': self.c}
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(30, input_dim=100, W_constraint = WeightClip(2)))
model.add(Dense(1))
model.compile(loss='mse', optimizer='rmsprop')
X = np.random.random((1000,100))
Y = np.random.random((1000,1))
model.fit(X,Y)
</code></pre>
<p>I have tested that the above code runs, but not the validity of the constraints. You can do so by getting the model weights after training using <code>model.get_weights()</code> or <code>model.layers[idx].get_weights()</code> and checking whether they abide by the constraints.</p>
<p>Note: the constraint is not added to all the model weights, but just to the weights of the specific layer where it is used. Also, <code>W_constraint</code> adds the constraint to the <code>W</code> param and <code>b_constraint</code> to the <code>b</code> (bias) param; in Keras 2 these arguments are named <code>kernel_constraint</code> and <code>bias_constraint</code>.</p>
|
python|tensorflow|deep-learning|keras
| 4
|
4,508
| 69,667,513
|
How to plot timeline in a single bar?
|
<p>I am trying to plot a timeline chart but they are stacking over each other.</p>
<pre><code>import pandas as pd
import plotly.express as pex
d1 = dict(Start= '2021-10-10 02:00:00', Finish = '2021-10-10 09:00:00', Task = 'Sleep')
d2 = dict(Start= '2021-10-10 09:00:00', Finish = '2021-10-10 09:30:00', Task = 'EAT')
d3 = dict(Start= '2021-10-10 09:30:00', Finish = '2021-10-10 12:00:00', Task = 'Study')
d4 = dict(Start= '2021-10-10 12:00:00', Finish = '2021-10-10 16:00:00', Task = 'Work')
d5 = dict(Start= '2021-10-10 16:00:00', Finish = '2021-10-10 16:50:00', Task = 'EAT')
d6 = dict(Start= '2021-10-10 17:00:00', Finish = '2021-10-10 20:00:00', Task = 'Study')
d8 = dict(Start= '2021-10-10 20:00:00', Finish = '2021-10-10 20:40:00', Task = 'EAT')
d7 = dict(Start= '2021-10-10 21:00:00', Finish = '2021-10-11 05:00:00', Task = 'Sleep')
df = pd.DataFrame([d1,d2,d3,d4,d5,d6,d7,d8])
</code></pre>
<p>My DataFrame(df) is:</p>
<p><a href="https://i.stack.imgur.com/m7xp2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m7xp2.png" alt="enter image description here" /></a></p>
<pre><code>gantt = pex.timeline(df, x_start='Start', x_end = 'Finish', color = 'Task', height=300)
gantt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Bx6Be.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bx6Be.png" alt="enter image description here" /></a></p>
<p>This is the graph I am getting. But I don't want them to stack. I want them to be in a single line (There will not be any overlapping intervals). How do I achieve this?</p>
|
<p>According to the <a href="https://plotly.com/python-api-reference/generated/plotly.express.timeline.html" rel="nofollow noreferrer">documentation</a>, you can use the <code>y</code> parameter and provide an array_like object of size equal to the number of rows in <code>df</code> and all elements equal.</p>
<p>So, one way is to use the empty string <code>''</code> for every row, which results in the plot having no visible <code>y-axis</code> label:</p>
<pre class="lang-py prettyprint-override"><code>gantt = pex.timeline(df, x_start='Start', x_end = 'Finish', color = 'Task', height=300, y=['']*df.shape[0])
</code></pre>
|
python|pandas|graph|plotly|gantt-chart
| 2
|
4,509
| 69,736,701
|
Comparing embeddings of a siamese network
|
<p>I have created a Siamese network using tensorflow 2.4.</p>
<pre><code>def create_encoder_siamese(pairs, cfg):
    # Based on https://keras.io/examples/vision/siamese_network/
    # and https://keras.io/examples/vision/siamese_contrastive/
    def euclidean_distance(vects):
        x, y = vects
        sum_square = tf.math.reduce_sum(tf.math.square(x - y), axis=1, keepdims=True)
        return tf.math.sqrt(tf.math.maximum(sum_square, tf.keras.backend.epsilon()))

    # Create the base model from the pre-trained model MobileNet V2
    IMG_SHAPE = pairs.shape[2:]
    print('Image shape:', IMG_SHAPE)
    # img_batch = pairs[0:cfg['train']['batch_size'],0,:,:,:]
    # print('batch shape:', img_batch.shape)

    # pre-trained feature extraction model
    base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                                   include_top=False,
                                                   weights='imagenet')
    # freeze the feature extraction part
    base_model.trainable = False
    feature_batch = base_model(pairs[0:cfg['train']['batch_size'], 0, :, :, :])
    print('batch feature shape:', feature_batch.shape)

    # add a layer to convert to 1-d vectors
    global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
    feature_batch_average = global_average_layer(feature_batch)
    print('batch average pooling shape:', feature_batch_average.shape)

    # add a layer to convert to encoded vectors that we will use to measure distances
    encoded_layer = tf.keras.layers.Dense(EMBEDDING_SIZE, activation='relu')
    encoded_batch = encoded_layer(feature_batch_average)
    print('batch encoded shape:', encoded_batch.shape)

    # put all together
    inputs = tf.keras.Input(shape=IMG_SHAPE)
    x = base_model(inputs, training=False)  # The False is necessary as it contains a BatchNormalisation layer
    x = global_average_layer(x)
    x = tf.keras.layers.Dropout(0.2)(x)
    encoded = encoded_layer(x)
    encoder = tf.keras.Model(inputs, encoded, name='encoder')
    print(encoder.summary())
    print()

    input_1 = tf.keras.layers.Input(IMG_SHAPE)
    input_2 = tf.keras.layers.Input(IMG_SHAPE)

    # Link two towers
    tower_1 = encoder(input_1)
    tower_2 = encoder(input_2)

    # Compute distance between embeddings
    distance_layer = tf.keras.layers.Lambda(euclidean_distance)([tower_1, tower_2])
    output_layer = tf.keras.layers.Dense(1, activation="sigmoid")(distance_layer)
    model = tf.keras.Model(inputs=[input_1, input_2], outputs=output_layer)
    print(model.summary())
    print()

    model.compile(optimizer='Adam',
                  loss=tfa.losses.ContrastiveLoss(margin=cfg['train']['margin']),
                  # loss=loss(margin=cfg['train']['margin']),
                  metrics='accuracy')
    print()
    print('Number of layers to train:', len(model.trainable_variables))
    return model, encoder
</code></pre>
<p>Which I can successfully train and use to compute distances between desired image pairs. However for efficiency purposes at deployment I want to first transform all images into their 1D embeddings (which I can do with the <em>encoder</em> model) and then compute distances directly between embeddings with a submodel.</p>
<p>But when I try to create the <em>comparer</em> model with something like:</p>
<pre><code> comparer = tf.keras.Model(inputs=[(EMBEDDING_SIZE), (EMBEDDING_SIZE)], outputs=output_layer, name='comparer')
</code></pre>
<p>I get the error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-22-dfe3ad813151> in <module>
88 return model, encoder, output_layer
89
---> 90 model, encoder, comparer = create_encoder_siamese(train_pairs, cfg)
<ipython-input-22-dfe3ad813151> in create_encoder_siamese(pairs, cfg)
64
65 # comparer model
---> 66 comparer = tf.keras.Model(inputs=[(EMBEDDING_SIZE), (EMBEDDING_SIZE)], outputs=output_layer, name='comparer')
67
68 # full model
/usr/local/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
515 self._self_setattr_tracking = False # pylint: disable=protected-access
516 try:
--> 517 result = method(self, *args, **kwargs)
518 finally:
519 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in __init__(self, inputs, outputs, name, trainable, **kwargs)
118 generic_utils.validate_kwargs(kwargs, {})
119 super(Functional, self).__init__(name=name, trainable=trainable)
--> 120 self._init_graph_network(inputs, outputs)
121
122 @trackable.no_automatic_dependency_tracking
/usr/local/lib/python3.7/site-packages/tensorflow/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs)
515 self._self_setattr_tracking = False # pylint: disable=protected-access
516 try:
--> 517 result = method(self, *args, **kwargs)
518 finally:
519 self._self_setattr_tracking = previous_value # pylint: disable=protected-access
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _init_graph_network(self, inputs, outputs)
155 base_layer_utils.create_keras_history(self._nested_outputs)
156
--> 157 self._validate_graph_inputs_and_outputs()
158
159 # A Network does not create weights of its own, thus it is already
/usr/local/lib/python3.7/site-packages/tensorflow/python/keras/engine/functional.py in _validate_graph_inputs_and_outputs(self)
680 'is redundant. '
681 'All inputs should only appear once.'
--> 682 ' Found: ' + str(self.inputs))
683
684 for x in self.inputs:
ValueError: The list of inputs passed to the model is redundant. All inputs should only appear once. Found: [64, 64]
</code></pre>
<p>So how can I adapt this network to have a full model for training, an encoder to convert a single image to the embedding, and a final model to compare the 2 embeddings directly?</p>
<p>Note: I know everybody uses the encoder to transform the images into the embeddings and then computes the Euclidean distance to measure the similarity between samples. However the Euclidean distance and the distance computed from the entire model are not the same; they follow this relationship:</p>
<p><a href="https://i.stack.imgur.com/9BpIo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9BpIo.png" alt="enter image description here" /></a></p>
<p>And who doesn't want a nice distance metric bounded in the range 0 to 1?</p>
|
<p>Regarding the error: the Model <code>inputs</code> argument takes a tensor or a list of tensors, not a list of sizes:</p>
<pre><code>tower_1 = tf.keras.layers.Input(EMBEDDING_SIZE)
tower_2 = tf.keras.layers.Input(EMBEDDING_SIZE)
# Compute distance between embeddings
distance_layer = tf.keras.layers.Lambda(euclidean_distance)([tower_1, tower_2])
output_layer = tf.keras.layers.Dense(1, activation="sigmoid")(distance_layer)
comparer = tf.keras.Model(inputs=[tower_1, tower_2], outputs=output_layer)
</code></pre>
<p>By the way, in your code <code>distance_layer</code> and <code>output_layer</code> are tensors, not layers.</p>
|
python|tensorflow|siamese-network
| 1
|
4,510
| 69,881,567
|
pandas subtract rows in dataframe according to a few columns
|
<p>I have the following dataframe</p>
<pre><code>data = [
{'col1': 11, 'col2': 111, 'col3': 1111},
{'col1': 22, 'col2': 222, 'col3': 2222},
{'col1': 33, 'col2': 333, 'col3': 3333},
{'col1': 44, 'col2': 444, 'col3': 4444}
]
</code></pre>
<p>and the following list:</p>
<pre><code>lst = [(11, 111), (22, 222), (99, 999)]
</code></pre>
<p>I would like to get out of my data only the rows whose <code>col1</code> and <code>col2</code> values do not appear as a pair in <code>lst</code>.</p>
<p>result for above example would be:</p>
<pre><code>[
{'col1': 33, 'col2': 333, 'col3': 3333},
{'col1': 44, 'col2': 444, 'col3': 4444}
]
</code></pre>
<p>how can I achieve that?</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data)
list_df = pd.DataFrame(lst)
# command like ??
# df.subtract(list_df)
</code></pre>
|
<p>If you need to test by pairs, you can compare a <code>MultiIndex</code> created from both columns using <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.isin.html" rel="nofollow noreferrer"><code>Index.isin</code></a>, inverting the mask with <code>~</code> for <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a>:</p>
<pre><code>df = df[~df.set_index(['col1','col2']).index.isin(lst)]
print (df)
col1 col2 col3
2 33 333 3333
3 44 444 4444
</code></pre>
<p>Or with left join by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with indicator parameter:</p>
<pre><code>m = df.merge(list_df,
left_on=['col1','col2'],
right_on=[0,1],
indicator=True,
how='left')['_merge'].eq('left_only')
df = df[m]
print (df)
col1 col2 col3
2 33 333 3333
3 44 444 4444
</code></pre>
|
python|pandas
| 2
|
4,511
| 43,288,542
|
Max in a sliding window in NumPy array
|
<p>I want to create an array which holds all the <code>max()</code>es of a window moving through a given numpy array. I'm sorry if this sounds confusing. I'll give an example. Input:</p>
<pre><code>[ 6,4,8,7,1,4,3,5,7,2,4,6,2,1,3,5,6,3,4,7,1,9,4,3,2 ]
</code></pre>
<p>My output with a window width of 5 shall be this:</p>
<pre><code>[ 8,8,8,7,7,7,7,7,7,6,6,6,6,6,6,7,7,9,9,9,9 ]
</code></pre>
<p>Each number shall be the max of a subarray of width 5 of the input array:</p>
<pre><code>[ 6,4,8,7,1,4,3,5,7,2,4,6,2,1,3,5,6,3,4,7,1,9,4,3,2 ]
\ / \ /
\ / \ /
\ / \ /
\ / \ /
[ 8,8,8,7,7,7,7,7,7,6,6,6,6,6,6,7,7,9,9,9,9 ]
</code></pre>
<p>I did not find an out-of-the-box function within numpy which would do this (but I would not be surprised if there was one; I'm not always thinking in the terms the numpy developers thought). I considered creating a shifted 2D-version of my input:</p>
<pre><code>[ [ 6,4,8,7,1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1 ]
[ 4,8,7,1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1,9 ]
[ 8,7,1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1,9,4 ]
[ 7,1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1,9,4,3 ]
[ 1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1,9,4,3,2 ] ]
</code></pre>
<p>Then I could apply <code>np.max(input, 0)</code> on this and would get my results. But this does not seem efficient in my case because both my array and my window width can be large (>1000000 entries and >100000 window width). The data would be blown up more or less by a factor of the window width.</p>
<p>I also considered using <code>np.convolve()</code> in some fashion but couldn't figure out a way to achieve my goal with it.</p>
<p>Any ideas how to do this efficiently?</p>
|
<p>Pandas has a rolling method for both Series and DataFrames, and that could be of use here:</p>
<pre><code>import pandas as pd
lst = [6,4,8,7,1,4,3,5,7,8,4,6,2,1,3,5,6,3,4,7,1,9,4,3,2]
lst1 = pd.Series(lst).rolling(5).max().dropna().tolist()
# [8.0, 8.0, 8.0, 7.0, 7.0, 8.0, 8.0, 8.0, 8.0, 8.0, 6.0, 6.0, 6.0, 6.0, 6.0, 7.0, 7.0, 9.0, 9.0, 9.0, 9.0]
</code></pre>
<p>For consistency, you can coerce each element of <code>lst1</code> to <code>int</code>:</p>
<pre><code>[int(x) for x in lst1]
# [8, 8, 8, 7, 7, 8, 8, 8, 8, 8, 6, 6, 6, 6, 6, 7, 7, 9, 9, 9, 9]
</code></pre>
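Staying inside NumPy (as the question prefers), `sliding_window_view` (NumPy 1.20+) gives a zero-copy view of all windows, so the max can be taken without materialising the shifted copies described in the question:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([6, 4, 8, 7, 1, 4, 3, 5, 7, 2, 4, 6, 2, 1, 3, 5, 6, 3, 4, 7, 1, 9, 4, 3, 2])

# shape (len(a) - 4, 5): each row is one window, but no data is copied
windows = sliding_window_view(a, 5)
result = windows.max(axis=1)
print(result.tolist())
# [8, 8, 8, 7, 7, 7, 7, 7, 7, 6, 6, 6, 6, 6, 6, 7, 7, 9, 9, 9, 9]
```

Note that the reduction itself is still O(n·w) work; for very large windows, `scipy.ndimage.maximum_filter1d` computes a running maximum in roughly linear time, if SciPy is available.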
|
python|performance|numpy|scipy|max
| 14
|
4,512
| 50,537,015
|
Setting Dataframe Value based on idx Series
|
<p>Given the Dataframe <code>df</code>:</p>
<pre><code>A B C
0.10 0.83 0.07
0.40 0.30 0.30
0.70 0.17 0.13
0.72 0.04 0.24
0.15 0.07 0.78
</code></pre>
<p>And the Series <code>s</code>:</p>
<pre><code>A 3
B 0
C 4
dtype: int64
</code></pre>
<p>Is there a way to easily set <br>
the <code>3</code> element of column <code>A</code>, <br>
the <code>0</code> element of column <code>B</code> & <br>
the <code>4</code> element of column <code>C</code> <br> without looping over the series?</p>
<p>Something in the vain of:</p>
<pre><code>df.loc[s] = 'spam'
</code></pre>
<p>(but this sets the entire rows)</p>
<p>The desired output would be:</p>
<pre><code>A B C
0.10 spam 0.07
0.40 0.30 0.30
0.70 0.17 0.13
spam 0.04 0.24
0.15 0.07 spam
</code></pre>
|
<p>There are a couple of ways you can do this. Both require converting your data to <code>object</code> type in order to assign strings to previously <code>float</code> series.</p>
<h2>Option 1: numpy</h2>
<p>This requires you to input coordinates via an integer array or, as here, a list of tuples.</p>
<pre><code>import numpy as np
# convert to numpy object array
vals = df.values.astype(object)
# transform coordinates
coords = [(3, 0), (0, 1), (4, 2)]
idx = np.r_[coords].T
# apply indices
vals[idx[0], idx[1]] = 'spam'
# create new dataframe
res = pd.DataFrame(vals, index=df.index, columns=df.columns)
print(res)
A B C
0 0.1 spam 0.07
1 0.4 0.3 0.3
2 0.7 0.17 0.13
3 spam 0.04 0.24
4 0.15 0.07 spam
</code></pre>
<h2>Option 2: pd.DataFrame.at</h2>
<p>A non-vectorised, but more straightforward, solution is to use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.at.html" rel="nofollow noreferrer"><code>pd.DataFrame.at</code></a> in a <code>for</code> loop:</p>
<pre><code>coords = [(3, 'A'), (0, 'B'), (4, 'C')]
df = df.astype(object)
for row, col in coords:
df.at[row, col] = 'spam'
print(df)
A B C
0 0.1 spam 0.07
1 0.4 0.3 0.3
2 0.7 0.17 0.13
3 spam 0.04 0.24
4 0.15 0.07 spam
</code></pre>
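If you want to drive the assignment directly from the Series `s` instead of hand-written coordinate tuples, its values are already the row positions and its index the column labels; a sketch combining this with the numpy option above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [0.10, 0.40, 0.70, 0.72, 0.15],
                   'B': [0.83, 0.30, 0.17, 0.04, 0.07],
                   'C': [0.07, 0.30, 0.13, 0.24, 0.78]})
s = pd.Series([3, 0, 4], index=['A', 'B', 'C'])

vals = df.values.astype(object)
# s.values are the row positions; get_indexer maps the column labels to positions
vals[s.values, df.columns.get_indexer(s.index)] = 'spam'
res = pd.DataFrame(vals, index=df.index, columns=df.columns)
print(res)
```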
|
python|pandas|indexing
| 1
|
4,513
| 50,300,983
|
Involuntary conversion of int64 to float64 in pandas
|
<p>Never mind the below--I see the cause of the problem. The shift of course produces an N/A.</p>
<p>I want to prevent a type conversion that occurs when concatenating a dataframe to itself horizontally. I have a dataframe where all columns are int64 (and the index is a datetime64[ns]):</p>
<pre><code>df.dtypes
Out[118]:
op int64
</code></pre>
<p>I concatenate to have the next row's columns (suffixed with "_next") appear on the same line as the current row: </p>
<pre><code>df = pd.concat([df, df.shift(-1).add_suffix('_next')], axis=1)
</code></pre>
<p>But then the types on the concatenated columns change to float64:</p>
<pre><code>df.dtypes
Out[122]:
op int64
op_next float64
</code></pre>
<p>Is there a way to prevent that type conversion? Thanks.</p>
|
<p>This occurs because <code>df.shift(-1)</code> has one element which is <code>NaN</code>, which is a <code>float</code>. Such a series will automatically be upcast to <code>float</code>. Here is a minimal example:</p>
<pre><code>df = pd.DataFrame({'op': [1, 2, 3]})
df = pd.concat([df, df.shift(-1).add_suffix('_next')], axis=1)
print(df)
op op_next
0 1 2.0
1 2 3.0
2 3 NaN
</code></pre>
<p>A straightforward workaround is to use <code>fillna</code> to fill with an integer and recast. You can do this either before or after <code>pd.concat</code>:</p>
<p><strong>Before</strong></p>
<pre><code>df = pd.concat([df, df.shift(-1).fillna(0).astype(int).add_suffix('_next')], axis=1)
</code></pre>
<p><strong>After</strong></p>
<pre><code>df = pd.concat([df, df.shift(-1).add_suffix('_next')], axis=1)
df = df.fillna(0).astype(int)
</code></pre>
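<p>Depending on your pandas version (0.24 or newer), the nullable <code>Int64</code> extension dtype is another option: it keeps integer values alongside missing data, so no fill value has to be invented:</p>

```python
import pandas as pd

df = pd.DataFrame({'op': [1, 2, 3]})
# Int64 (capital I) is pandas' nullable integer dtype: the shifted-in
# missing value becomes pd.NA instead of forcing a float upcast
shifted = df.shift(-1).astype('Int64').add_suffix('_next')
out = pd.concat([df, shifted], axis=1)
```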
|
python|pandas|dataframe
| 2
|
4,514
| 45,358,766
|
Using a function to calculate the frequency of columns in a dataframe (pandas)
|
<p>For the following data set:</p>
<pre><code>Index ADR EF INF SS
1 1 1 0 0
2 1 0 1 1
3 0 1 0 0
4 0 0 1 1
5 1 0 1 1
</code></pre>
<p>I am going to calculate the frequency for each column. This is my code: </p>
<pre><code>df.ADR.value_counts()
df.EF.value_counts()
df.INF.value_counts()
df.SS.value_counts()
</code></pre>
<p>How I can do it by writing a function, rather than repeating the code for each column? I tried this: </p>
<pre><code>def frequency (df, *arg):
count =df.arg.value_counts()
return (count)
</code></pre>
<p>But it does not work.</p>
|
<p>Assuming you want to calculate the frequency of all columns, rather than selectively, I don't recommend a custom function.</p>
<p>Try using <code>df.apply</code>, passing <code>pd.value_counts</code>:</p>
<pre><code>In [1048]: df.apply(pd.value_counts, axis=0)
Out[1048]:
ADR EF INF SS
0 2 3 2 2
1 3 2 3 3
</code></pre>
<p>If you want to calculate selectively, you may pass a list of columns to a function:</p>
<pre><code>def foo(df, columns):
return df[columns].apply(pd.value_counts, axis=0)
print(foo(df, ['ADR', 'EF']))
</code></pre>
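<p>One caveat: in recent pandas releases the top-level <code>pd.value_counts</code> is deprecated, so a version-proof variant calls the Series method instead (a sketch on the question's data):</p>

```python
import pandas as pd

df = pd.DataFrame({'ADR': [1, 1, 0, 0, 1],
                   'EF':  [1, 0, 1, 0, 0],
                   'INF': [0, 1, 0, 1, 1],
                   'SS':  [0, 1, 0, 1, 1]})
# call value_counts on each column Series rather than via pd.value_counts
freq = df.apply(lambda s: s.value_counts())
```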
|
python|pandas|dataframe
| 3
|
4,515
| 62,751,221
|
Error Loading Tensorflow Frozen Inference Graph to OpenCV DNN
|
<p>I have trained an object detection model using Tensorflow API, following an example based on this Google Colaboratory notebook by Roboflow.
<a href="https://colab.research.google.com/drive/1wTMIrJhYsQdq_u7ROOkf0Lu_fsX5Mu8a" rel="noreferrer">https://colab.research.google.com/drive/1wTMIrJhYsQdq_u7ROOkf0Lu_fsX5Mu8a</a></p>
<p>So far so good and i have successfully extracted my trained model as an Inference graph, again following the same notebook:</p>
<pre><code>import re
import numpy as np
output_directory = './fine_tuned_model'
lst = os.listdir(model_dir)
lst = [l for l in lst if 'model.ckpt-' in l and '.meta' in l]
steps=np.array([int(re.findall('\d+', l)[0]) for l in lst])
last_model = lst[steps.argmax()].replace('.meta', '')
last_model_path = os.path.join(model_dir, last_model)
print(last_model_path)
!python /content/models/research/object_detection/export_inference_graph.py \
--input_type=image_tensor \
--pipeline_config_path={pipeline_fname} \
--output_directory={output_directory} \
--trained_checkpoint_prefix={last_model_path}
</code></pre>
<p>That gives me a <code>frozen_inference_graph.pb</code>file that i can use to make my object detection program in OpenCV DNN. Also following this example <a href="https://stackoverflow.com/a/57055266/9914815">https://stackoverflow.com/a/57055266/9914815</a> i prepared a .pbtxt file of the model and pipeline config as the second argument for the <code>cv2.dnn.readNetFromTensorflow</code> function. Here is the code just enough to reproduce the error i'm having:</p>
<pre><code>model = cv2.dnn.readNetFromTensorflow('models/trained/frozen_inference_graph.pb',
'models/trained/output.pbtxt')
</code></pre>
<p>This code works successfully when i used the pretrained SSD MobileNet V2 COCO model, <code>ssd_mobilenet_v2_coco_2018_03_29.pbtxt</code></p>
<p>however using my trained .pbtxt file, it will throw this error:</p>
<pre><code>C:\Users\Satria\Desktop\ExploreOpencvDnn-master>python trainedmodel_video.py -i test1.mp4 -o test1result.mp4
Traceback (most recent call last):
File "trainedmodel_video.py", line 48, in <module> 'models/trained/output.pbtxt') cv2.error:
OpenCV(4.1.1) C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:544:error:
(-2:Unspecified error) Input layer not found: FeatureExtractor/MobilenetV2/Conv/weights in function
'cv::dnn::dnn4_v20190621::`anonymous-namespace'::TFImporter::connect'
</code></pre>
<p>It says that Input Layer is not found. Why does this happen?
Also Notice the error message points out to a directory:</p>
<pre><code>C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp
</code></pre>
<p>which is incredibly strange, because i <strong>do not</strong> have that directory at all in my computer.
I tried diffchecking the pbtxt and config files of my and the sample SSD mobilenet model and i cannot find any instance of that particular directory used in anywhere, nor even they have a directory path inside.</p>
<p>Is this caused by training using Google Colab?
Is there any correct way i can use Colab-trained Tensorflow models in OpenCV DNN?</p>
<p>Thanks in advance!</p>
|
<p><strong>Solved after adding an additional input node in my own generated pbtxt file</strong></p>
<p>Someone suggested that OpenCV Version 4.11 which i was using is outdated.
I updated to 4.30, still not working, however it now lets me to use FusedBatchNormV3 which is very important in the future.</p>
<p>Now, after taking a close look at the diffcheck in the sample and the generated pbtxt,</p>
<p>In the sample .pbtxt file <code>ssd_mobilenet_v2_coco_2018_03_29.pbtxt</code>, line 30 onward</p>
<pre><code>node {
name: "Preprocessor/mul"
op: "Mul"
input: "image_tensor"
input: "Preprocessor/mul/x"
}
node {
name: "Preprocessor/sub"
op: "Sub"
input: "Preprocessor/mul"
input: "Preprocessor/sub/y"
}
node {
name: "FeatureExtractor/MobilenetV2/Conv/Conv2D"
op: "Conv2D"
input: "Preprocessor/sub"
input: "FeatureExtractor/MobilenetV2/Conv/weights"
</code></pre>
<p>It has an additional Input nodes which uses <code>Preprocessor</code>, not only <code>FeatureExtractor/MobilenetV2/Conv/Conv2D</code></p>
<p>meanwhile on the generated pbtxt it only has this</p>
<pre><code>node {
name: "FeatureExtractor/MobilenetV2/Conv/Conv2D"
op: "Conv2D"
input: "FeatureExtractor/MobilenetV2/Conv/weights"
</code></pre>
<p>I copied the Input nodes of the sample .pbtxt and into my own generated .pbtxt and it worked!!!</p>
|
python|tensorflow|opencv|google-colaboratory|roboflow
| 2
|
4,516
| 62,605,041
|
Change from .apply() to a function that uses list comprehension to compare one dataframe with a column of lists to values in another dataframe
|
<p>Simply put, I want to change the following code into a function that doesn't use <code>apply</code> or <code>progress_apply</code>, so that it doesn't take 4+ hours to execute on 20 million+ rows.</p>
<pre><code>d2['B'] = d2['C'].progress_apply(lambda x: [z for y in d1['B'] for z in y if x.startswith(z)])
d2['B'] = d2['B'].progress_apply(max)
</code></pre>
<p><strong>Full question below:</strong></p>
<p>I have two dataframes. The first dataframe has a column with 4 categories (A,B,C,D) with four different lists of numbers that I want to compare against a column in the second dataframe, which is not a list like in the first dataframe but instead just a single value that will start with one or more values from the first dataframe. As such, after executing some list comprehension to return a list of matching values in a new column in the second dataframe, the final step is to get the max of those values per list per row:</p>
<pre><code>d1 = pd.DataFrame({'A' : ['A', 'B', 'C', 'D'],
'B' : [['84'], ['8420', '8421', '8422', '8423', '8424', '8425', '8426'], ['847', '8475'], ['8470', '8471']]})
A B
0 A [84]
1 B [8420, 8421, 8422, 8423, 8424, 8425, 8426]
2 C [847, 8475]
3 D [8470, 8471]
d2 = pd.DataFrame({'C' : [8420513, 8421513, 8426513, 8427513, 8470513, 8470000, 8475000]})
C
0 8420513
1 8421513
2 8426513
3 8427513
4 8470513
5 8470000
6 8475000
</code></pre>
<p>My current code is this:</p>
<pre><code>from tqdm import tqdm, tqdm_notebook
tqdm_notebook().pandas()
d1 = pd.DataFrame({'A' : ['A', 'B', 'C', 'D'], 'B' : [['84'], ['8420', '8421', '8422', '8423', '8424', '8425', '8426'], ['847', '8475'], ['8470', '8471']]})
d2 = pd.DataFrame({'C' : [8420513, 8421513, 8426513, 8427513, 8470513, 8470000, 8475000]})
d2['C'] = d2['C'].astype(str)
d2['B'] = d2['C'].progress_apply(lambda x: [z for y in d1['B'] for z in y if x.startswith(z)])
d2['B'] = d2['B'].progress_apply(max)
d2
</code></pre>
<p>and successfully returns this output:</p>
<pre><code> C B
0 8420513 8420
1 8421513 8421
2 8426513 8426
3 8427513 84
4 8470513 8470
5 8470000 8470
6 8475000 8475
</code></pre>
<p>The problem lies with the fact that the <code>tqdm</code> progress bar is estimating the code will take <strong>4-5 hours</strong> to run on my actual DataFrame with 20 million plus rows. I know that <code>.apply</code> should be avoided and that a custom function can be much faster, so that I don't have to go row-by-row. I can usually change <code>apply</code> to a function, but I am struggling with this particular one. I think I am far away, but I will share what I have tried:</p>
<pre><code>def func1(df, d2C, d1B):
return df[[z for y in d1B for z in y if z in d2C]]
d2['B'] = func1(d2, d2['C'], d1['B'])
d2
</code></pre>
<p>With this code, I am receiving <code>ValueError: Wrong number of items passed 0, placement implies 1</code> and also still need to include code to get the max of each list per row.</p>
|
<p>Let's try, using <code>explode</code> and regex with <code>extract</code>:</p>
<pre><code>d1e = d1['B'].explode()
regstr = '('+'|'.join(sorted(d1e)[::-1])+')'
d2['B'] = d2['C'].astype('str').str.extract(regstr)
</code></pre>
<p>Output:</p>
<pre><code> C B
0 8420513 8420
1 8421513 8421
2 8426513 8426
3 8427513 84
4 8470513 8470
5 8470000 8470
6 8475000 8475
</code></pre>
<p>Since <code>.str</code> access is slower than a list comprehension, a faster variant is:</p>
<pre><code>import re
regstr = '|'.join(sorted(d1e)[::-1])
d2['B'] = [re.match(regstr, i).group() for i in d2['C'].astype('str')]
</code></pre>
<h3>Timings:</h3>
<pre><code>from timeit import timeit
import re
d1 = pd.DataFrame({'A' : ['A', 'B', 'C', 'D'], 'B' : [['84'], ['8420', '8421', '8422', '8423', '8424', '8425', '8426'], ['847', '8475'], ['8470', '8471']]})
d2 = pd.DataFrame({'C' : [8420513, 8421513, 8426513, 8427513, 8470513, 8470000, 8475000]})
d2['C'] = d2['C'].astype(str)
def orig(d):
    d['B'] = d['C'].apply(lambda x: [z for y in d1['B'] for z in y if x.startswith(z)])
    d['B'] = d['B'].apply(max)
    return d

def comtorecords(d):
    d['B'] = [max([z for y in d1.B for z in y if str(row[1]).startswith(z)]) for row in d.to_records()]
    return d

def regxstracc(d):
    d1e = d1['B'].explode()
    regstr = '(' + '|'.join(sorted(d1e)[::-1]) + ')'
    d['B'] = d['C'].astype('str').str.extract(regstr)
    return d

def regxcompre(d):
    d1e = d1['B'].explode()
    regstr = '|'.join(sorted(d1e)[::-1])
    d['B'] = [re.match(regstr, i).group() for i in d['C'].astype('str')]
    return d

res = pd.DataFrame(
    index=[10, 30, 100, 300, 1000, 3000, 10000, 30000],
    columns='orig comtorecords regxstracc regxcompre'.split(),
    dtype=float,
)

for i in res.index:
    d = pd.concat([d2] * i)
    for j in res.columns:
        stmt = '{}(d)'.format(j)
        setp = 'from __main__ import d, {}'.format(j)
        print(stmt, d.shape)
        res.at[i, j] = timeit(stmt, setp, number=100)

# res.groupby(res.columns.str[4:-1], axis=1).plot(loglog=True);
res.plot(loglog=True);
</code></pre>
<p>Output:</p>
<p><a href="https://i.stack.imgur.com/HHuCr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HHuCr.png" alt="enter image description here" /></a></p>
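<p>The <code>sorted(...)[::-1]</code> step above is load-bearing: regex alternation takes the first branch that matches, so longer prefixes must come before their shorter stems. A small standalone check (with made-up prefixes) illustrates why:</p>

```python
import re

prefixes = ['84', '8420', '8470', '8475']
# unsorted: '84' sits first in the alternation and always wins
bad = '|'.join(prefixes)
# sorted descending: longer prefixes are tried before their shorter stems
good = '|'.join(sorted(prefixes, reverse=True))
```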
|
python|pandas|function|lambda|list-comprehension
| 3
|
4,517
| 62,848,600
|
Python skimage image with one value for color definition
|
<p>I just started working with <code>skimage</code> and I am using it in python.3.6 with the <code>skimage-version: 0.17.2</code><br />
And I started with the example from: <a href="https://scikit-image.org/docs/stable/user_guide/numpy_images.html" rel="nofollow noreferrer">https://scikit-image.org/docs/stable/user_guide/numpy_images.html</a><br />
There I found something that I did not understand. The color of this image is only defined by one singe value in the numpy-array. How can that be? Is it nit using RBG or something like this?
My code looks like this:</p>
<pre><code>from skimage import data
camera = data.camera()
print(camera.shape)
print(camera[0,0])
</code></pre>
<p>And the output is:</p>
<pre><code>(10, 100)
0.0
</code></pre>
<p>What is driving me crazy is the <code>0.0</code> shouldn't it be something like <code>[0,0,0]</code> for white in this example ?</p>
<p>When I show the image I get following result :<a href="https://i.stack.imgur.com/ZaIFt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZaIFt.png" alt="image i get" /></a><br />
Can anybody please help me ?</p>
|
<p>The color is defined by a single value because it's not RGB, it's greyscale. So the image shape is <code>(512, 512)</code>, and not <code>(512, 512, 3)</code>. As a result, if you pick a single <em>white</em> point it will be <code>[255]</code> and not <code>[255, 255, 255]</code>.</p>
<p>If you're confused because the picture isn't black and white, it's just because the default colormap of <code>matplotlib</code> is viridis, so green and yellow. This doesn't change the pixel values, it's essentially just a "theme" or camera filter. If you changed the colormap to grays, you will get:</p>
<pre><code>import matplotlib.pyplot as plt
plt.imshow(255 - camera, cmap='Greys')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/KJLez.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KJLez.png" alt="enter image description here" /></a></p>
<p>If you don't specify the colormap, even a random array of pixels will get the yellowish tint:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
fig, ax = plt.subplots(figsize=(4, 4))
plt.imshow(np.random.randint(0, 256, (512, 512)))
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/zrFut.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zrFut.png" alt="enter image description here" /></a></p>
<p>There is one thing I don't understand, though. The pic seems inverted. I had to subtract the pixel values from 255 to get the normal camera pic.</p>
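<p>A tiny sketch of the shape difference (synthetic arrays, not the actual <code>camera</code> image):</p>

```python
import numpy as np

# greyscale: one intensity value per pixel, shape (h, w)
grey = np.zeros((4, 4), dtype=np.uint8)
grey[0, 0] = 255            # a single white pixel is the scalar 255
# RGB: three values per pixel, shape (h, w, 3)
rgb = np.zeros((4, 4, 3), dtype=np.uint8)
rgb[0, 0] = [255, 255, 255]
```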
|
python|python-3.x|image|numpy|scikit-image
| 1
|
4,518
| 73,791,775
|
ValueError: Output of generator should be a tuple `(x, y, sample_weight)` or `(x, y)`. Found: [[[[0.08627451 0.10980393 0.10980393]
|
<p>I am a new learner in tensorflow, when I try to do the transfer learning. I meet an error of
Value error. Does anyone know where the bug is? This code is related to the transfer learning of VGG16. Basically I just created my own MLP layers and do the fine-tuning</p>
<pre><code>import os
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense, Dropout, ZeroPadding2D
from tensorflow.keras.layers import Convolution2D, MaxPooling2D
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.optimizers import Adam
from tensorflow.keras import applications
import numpy as np
def get_length(Path, Pattern):
    # Pattern: name of the subdirectory
    Length = len(os.listdir(os.path.join(Path, Pattern)))
    return Length
train_data_dir = ''
validation_data_dir = ''
img_width, img_height = 224, 224
epochs = 150
batch_size = 8
LR = 0.00001
Len_C1_Train = get_length(train_data_dir,'AFF')
Len_C2_Train = get_length(train_data_dir,'NFF')
Len_C1_Val = get_length(validation_data_dir,'AFF')
Len_C2_Val = get_length(validation_data_dir,'NFF')
model = applications.VGG16(include_top=False, weights='imagenet')
datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = datagen.flow_from_directory(train_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False)
# Extracting the features from the loaded images
features_train = model.predict_generator(train_generator,
(Len_C1_Train+Len_C2_Train) // batch_size,
max_queue_size=1)
val_generator = datagen.flow_from_directory(validation_data_dir,
target_size=(img_width, img_height),
batch_size=batch_size,
class_mode=None,
shuffle=False)
# Extracting the features from the loaded images
features_val = model.predict_generator(val_generator,
(Len_C1_Val+Len_C2_Val) // batch_size, max_queue_size=1)
train_data = features_train
train_labels = np.array([0] * int(Len_C1_Train) + [1] * int(Len_C2_Train))
validation_data = features_val
validation_labels = np.array([0] * int(Len_C1_Val) + [1] * int(Len_C2_Val))
# Building the MLP model
model2=Sequential()
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(1, activation='softmax'))
model_total = Sequential([model, model2])
model.trainable = False
model_total.compile(loss='binary_crossentropy', optimizer = Adam(lr = LR), metrics=['binary_accuracy'])
</code></pre>
|
<p>The output of VGG16 is of shape <code>(batch_size, height, width, channels)</code>. So before you use a <code>Dense</code> layer, you should apply a <code>Flatten</code> layer. Try this:</p>
<pre><code>model2=Sequential()
model2.add(Flatten())
model2.add(Dense(128, activation='relu'))
model2.add(Dropout(0.5))
model2.add(Dense(1, activation='sigmoid'))  # note: a 1-unit softmax always outputs 1; use sigmoid for binary output
model_total = Sequential([model, model2])
</code></pre>
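<p>To see why the <code>Flatten</code> is needed: with 224x224 inputs, <code>include_top=False</code> VGG16 emits a <code>(batch, 7, 7, 512)</code> feature map, while <code>Dense</code> expects one vector per sample. A numpy sketch of what <code>Flatten</code> does to that shape:</p>

```python
import numpy as np

# stand-in for the VGG16 feature maps: (batch, height, width, channels)
features = np.zeros((8, 7, 7, 512))
# Flatten collapses everything after the batch axis into one long vector
flat = features.reshape(features.shape[0], -1)
```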
|
python|tensorflow|keras|transfer-learning
| 0
|
4,519
| 71,399,160
|
AttributeError: module 'numpy.random' has no attribute 'BitGenerator' in python 3.8.10
|
<p>I'm trying to import the xarray module into python 3.8.10 but I get this error:</p>
<p><code> AttributeError: module 'numpy.random' has no attribute 'BitGenerator'</code></p>
<p>In order to allow you to reproduce the error: First of all, I created a new environment with conda and by importing at the same time the modules I need (to avoid the problems of incompatible dependencies) :</p>
<p><code> conda create -n Myenv Python=3.8 matplotlib numpy time xarray netCDF4 termcolor</code></p>
<p>Then, I try to import in ipython3 all modules I need to run my code:</p>
<pre><code>import matplotlib as mpl
mpl.use('agg')
import numpy as np
import os
import time
import glob
import sys
from datetime import datetime,date,timedelta
import matplotlib.pyplot as plt
import matplotlib.ticker as mtick
import matplotlib.colors as colors
# from operator import itemgetter
from netCDF4 import Dataset
from mpl_toolkits.basemap import Basemap, shiftgrid
from termcolor import colored
import xarray as xr
</code></pre>
<p>and, at this moment, I get the error...</p>
<p>I searched the documentation to see if the BitGenerator Attribute exists in my version of numpy (1.22.3), and it does. So I don't understand why this error occurs.</p>
<p>Someone can help me to understand please ?</p>
<p>Thank u !</p>
<p>If you want more informations about my environnement, I can provide.</p>
|
<p>I solved mine with <code>pip install --upgrade numpy</code></p>
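<p>If upgrading doesn't immediately help, it's worth checking which numpy the interpreter actually imports, since a stale or shadowed install is a common cause (a sketch; recent numpy releases expose the attribute):</p>

```python
import numpy as np

# print where numpy was imported from and confirm the attribute exists
print(np.__version__, np.__file__)
has_bitgen = hasattr(np.random, 'BitGenerator')
```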
|
python-3.x|numpy|attributeerror|python-module|python-xarray
| 2
|
4,520
| 71,438,100
|
Pandas, Python - Assembling a Data Frame with multiple lists from loop
|
<p>Using loop to collect target data into lists from JSON file. These lists are organized as columns and their values are organized; thus, no manipulation/reorganization is required. Only attaching them horizontally.</p>
<pre><code>#Selecting Data into List
i = 1
target = f'{pathway}\calls_{i}.json'
with open(target, 'r') as f:  # Reading JSON file
    data = json.load(f)
specsA = ('PreviousDraws', ['DrawNumber'])
draw = glom(data, specsA)  # list type; glom is a package to access nested data in JSON files
print(draw)
for j in range(0, 5):
    specsB = ('PreviousDraws', ['WinningNumbers'], [f'{j}'], ['Number'])
    number = glom(data, specsB)  # list type
    print(number)

#Now assembling lists into a table using pandas
</code></pre>
<p>The resulting lists from the code above are as followed below:</p>
<pre><code>#This is from variable draw
[10346, 10345, 10344, 10343, 10342, 10341, 10340, 10339, 10338, 10337, 10336, 10335, 10334, 10333, 10332, 10331, 10330, 10329, 10328, 10327]
#This is from variable number
['22', '9', '4', '1', '1', '14', '5', '3', '2', '8', '2', '1', '4', '9', '4', '4', '3', '13', '7', '14']
['28', '18', '16', '2', '3', '17', '16', '13', '11', '9', '8', '2', '9', '19', '7', '13', '7', '23', '21', '17']
['33', '24', '21', '4', '9', '20', '27', '19', '23', '19', '19', '7', '19', '30', '19', '27', '19', '32', '26', '21']
['35', '30', '28', '11', '21', '23', '33', '26', '35', '37', '27', '12', '20', '31', '22', '34', '22', '36', '27', '25']
['36', '32', '33', '19', '29', '38', '35', '27', '37', '38', '32', '30', '22', '36', '33', '39', '36', '38', '30', '27']
</code></pre>
<p>Expected Data Frame table after assembly:</p>
<pre><code>Draw | Number[0] | Number[1] | Number[2] ...
10346 | 22 | 28 |
10345 | 9 | 18 |
10344 | 4 | 16 |
10343 | 1 | 2 |
10342 | 1 | 3 |
</code></pre>
<p>My attempt at assembling the table: Organize as dictionary with Series, below:</p>
<pre><code>dct = {'DrawNumbers':pd.Series(draw),
'Index1':pd.Series(number),
'Index2':pd.Series(number),
'Index3':pd.Series(number),
'Index4':pd.Series(number),
'Index5':pd.Series(number)
}
df = pd.DataFrame(dct)
print(df)
</code></pre>
<p>Actual result - incorrect due to last list's value being repeated in table's row. So far, only Index5 column is correct, while all index columns are incorrectly represented with index 5's values.</p>
<pre><code> DrawNumbers Index1 Index2 Index3 Index4 Index5
0 10346 36 36 36 36 36
1 10345 32 32 32 32 32
2 10344 33 33 33 33 33
3 10343 19 19 19 19 19
4 10342 29 29 29 29 29
5 10341 38 38 38 38 38
6 10340 35 35 35 35 35
7 10339 27 27 27 27 27
8 10338 37 37 37 37 37
9 10337 38 38 38 38 38
... ... ... ... ... ... ...
</code></pre>
<p>I also tried to change the numbers from strings to ints, but kept hitting errors when I attempted that. Either way, I am stuck and would like to request assistance.</p>
|
<p>The problem is that you overwrite the <code>number</code> variable on every iteration, so only the last list survives the loop and all five <code>Index</code> columns end up built from it. The fix is to write each column into the DataFrame inside the loop, while <code>number</code> still holds that iteration's list:</p>
<pre><code># create an empty dataframe
df = pd.DataFrame()

#Selecting Data into List
i = 1
target = f'{pathway}\calls_{i}.json'
with open(target, 'r') as f:  # Reading JSON file
    data = json.load(f)
specsA = ('PreviousDraws', ['DrawNumber'])
draw = glom(data, specsA)  # list type
print(draw)
# insert the draws into the dataframe
df['DrawNumbers'] = draw
for j in range(0, 5):
    specsB = ('PreviousDraws', ['WinningNumbers'], [f'{j}'], ['Number'])
    number = glom(data, specsB)  # list type
    print(number)
    # insert each number list into the dataframe while it is still available
    df[f'Index{j}'] = number
</code></pre>
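<p>A self-contained miniature of the fix (shortened made-up lists instead of the JSON data), which also converts the string numbers to integers via <code>pd.to_numeric</code>, something the original post struggled with:</p>

```python
import pandas as pd

draw = [10346, 10345, 10344]
numbers = [['22', '9', '4'], ['28', '18', '16']]   # one list per Index column
df = pd.DataFrame({'DrawNumbers': draw})
for j, number in enumerate(numbers):
    # assign while `number` still holds this iteration's list
    df[f'Index{j}'] = pd.to_numeric(number)
```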
|
python|python-3.x|pandas|dataframe
| 1
|
4,521
| 52,142,344
|
Creating a placeholder having a shape that is a function of another shape
|
<p>Suppose that we have a tensorflow placeholder as follows:</p>
<pre><code>x = tf.placeholder(tf.float32, (2, 2, 3 ..., 1))
</code></pre>
<p>I would like to create another tensor <code>y</code> whose shape is the as <code>x</code> except the first and second dimensions, which are three times those of <code>x</code>.</p>
<pre><code>y = tf.placeholder(tf.float32, (6, 6, 3, ..., 1))
</code></pre>
<p>The shape of <code>x</code> is not predefined, so what I want to do is something like the following:</p>
<pre><code>y = tf.placeholder(tf.float32, (x.shape[0]* 3, x.shape[1] * 3, remaining_are_the_same_as_x_shape))
</code></pre>
<p>Could you advice me how to do this in tensorflow?</p>
|
<p>What about this ?</p>
<pre><code>x = tf.placeholder(tf.float32, (2, 2, 3 , 1))
shape = x.get_shape().as_list()
shape[0] = shape[0] * 3
shape[1] = shape[1] * 3
y = tf.placeholder(tf.float32, shape=shape)
shape = y.get_shape().as_list()
print(shape)
</code></pre>
<blockquote>
<p>[6, 6, 3, 1]</p>
</blockquote>
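<p>One caveat (an assumption about your setup): if the placeholder has unknown dimensions, <code>as_list()</code> returns <code>None</code> entries, which cannot be multiplied. A plain-Python sketch of guarding for that:</p>

```python
# scale the first two dimensions by 3, leaving unknown (None) dims alone
shape = [None, 2, 3, 1]           # e.g. an unknown batch dimension
scaled = [d * 3 if d is not None else None for d in shape[:2]] + shape[2:]
```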
|
python|tensorflow|placeholder
| 2
|
4,522
| 52,340,101
|
Tensor.name is meaningless in eager execution
|
<p>I was doing some exercise in tensorflow in google colab and trying something under eager execution. When I was practicing on the <code>tf.case</code> by running the code below:</p>
<pre><code>x = tf.random_normal([])
y = tf.random_normal([])
op = tf.case({tf.less(x,y):tf.add(x,y), tf.greater(x,y):tf.subtract(x,y)}, default = tf.multiply(x,y), exclusive = True)
</code></pre>
<p>I have followed the example in the tf.case carefully but it just keeps reporting an attribute error: </p>
<pre><code>AttributeError: Tensor.name is meaningless when eager execution is enabled.
</code></pre>
<p>I am new to python and TF as well as deep learning. Can anyone try to run the code above and help me figure out?</p>
<p>Thank you</p>
|
<p>This seems like a bug in eager execution, which you should feel encouraged to <a href="https://github.com/tensorflow/tensorflow/issues" rel="nofollow noreferrer">report</a>.</p>
<p>That said, using <code>tf.case</code> to express what it does only makes sense when constructing graphs. Enabling eager execution allows one to write easier to read, more idiomatic Python code. For the example you had, it would be something like this:</p>
<pre><code>def case(x, y):
    if tf.less(x, y):
        return tf.add(x, y)
    if tf.greater(x, y):
        return tf.subtract(x, y)
    return tf.multiply(x, y)
</code></pre>
<p>Hope that helps.
You may want to report this as a bug so that using <code>tf.case</code> when eager execution is enabled has the same effect as the code above.</p>
|
python|tensorflow
| 2
|
4,523
| 60,754,633
|
Why getting values of model parameters and reassigning of new values takes longer and longer in TensorFlow?
|
<p>I have a Python function that takes TensorFlow session, symbolic variables (tensors representing parameters of the model, gradients of the model parameters). I call this function in a loop and each subsequent call takes longer and longer. So, I wonder what might be the reason for that.</p>
<p>Here is the code of the function:</p>
<pre><code>def minimize_step(s, params, grads, min_lr, factor, feed_dict, score):
    '''
    Inputs:
        s         - TensorFlow session
        params    - list of nodes representing model parameters
        grads     - list of nodes representing gradients of parameters
        min_lr    - starting learning rate
        factor    - growth factor for the learning rate
        feed_dict - feed dictionary used to evaluate gradients and score
                    Normally it contains X and Y
        score     - score that is minimized
    Result:
        One call of this function makes an update of model parameters.
    '''
    ini_vals = [s.run(param) for param in params]
    grad_vals = [s.run(grad, feed_dict=feed_dict) for grad in grads]
    lr = min_lr
    best_score = None
    while True:
        new_vals = [ini_val - lr * grad for ini_val, grad in zip(ini_vals, grad_vals)]
        for i in range(len(new_vals)):
            s.run(tf.assign(params[i], new_vals[i]))
        score_val = s.run(score, feed_dict=feed_dict)
        if best_score is None or score_val < best_score:
            best_score = score_val
            best_lr = lr
            best_params = new_vals[:]
        else:
            for i in range(len(new_vals)):
                s.run(tf.assign(params[i], best_params[i]))
            break
        lr *= factor
    return best_score, best_lr
</code></pre>
<p>Could it be that the symbolic variables, representing model parameters, somehow accumulate old values? </p>
|
<p>It seems that you are missing the point on how tensorflow 1.* is used. I'm not going into details here since you could find plenty of resources on the internet.
I think <a href="http://download.tensorflow.org/paper/whitepaper2015.pdf" rel="nofollow noreferrer">this paper</a> would be enough to understand the concept of how to use tensorflow 1.*.</p>
<p>In your example at every iteration you are continuously adding new nodes to the graph. </p>
<p>Let's say this is your execution graph</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
import numpy as np
x = tf.placeholder(tf.float32, (None, 2))
y = tf.placeholder(tf.int32, (None))
res = tf.keras.layers.Dense(2)(x)
xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
logits=res, labels=y)
loss_tensor = tf.reduce_mean(xentropy)
lr = tf.placeholder(tf.float32, ())
grads = tf.gradients(loss_tensor, tf.trainable_variables())
weight_updates = [tf.assign(w, w - lr * g) for g, w in zip(grads, tf.trainable_variables())]
</code></pre>
<p>Each time the <code>weight_updates</code> are executed the weights of the model will be updated. </p>
<pre class="lang-py prettyprint-override"><code>with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # before
    print(sess.run(tf.trainable_variables()))
    # [array([[ 0.7586721 , -0.7465675 ],
    #         [-0.34097505, -0.83986187]], dtype=float32), array([0., 0.], dtype=float32)]

    # after
    evaluated = sess.run(weight_updates,
                         {x: np.random.normal(0, 1, (2, 2)),
                          y: np.random.randint(0, 2, 2),
                          lr: 0.001})
    print(evaluated)
    # [array([[-1.0437444 , -0.7132262 ],
    #         [-0.8282471 , -0.01127395]], dtype=float32), array([ 0.00072743, -0.00072743], dtype=float32)]
</code></pre>
<p>In your example at each step you are adding additional execution flow to the graph instead of using existing one.</p>
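<p>As an aside, the learning-rate search itself is plain Python/numpy and needn't touch the graph at all. A sketch of the same grow-until-worse loop on concrete arrays (the <code>score</code> function here is a made-up toy quadratic):</p>

```python
import numpy as np

def line_search_step(params, grads, score, min_lr=1e-3, factor=2.0):
    """Grow lr until the score stops improving; no graph nodes created."""
    lr, best = min_lr, None
    while True:
        trial = [p - lr * g for p, g in zip(params, grads)]
        s = score(trial)
        if best is None or s < best[0]:
            best = (s, lr, trial)
        else:
            return best
        lr *= factor

# toy quadratic score: sum of squares of all parameters
score = lambda ps: float(sum((p ** 2).sum() for p in ps))
best_s, best_lr, _ = line_search_step([np.array([4.0])], [np.array([2.0])],
                                      score, min_lr=0.5)
```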
|
python|tensorflow
| 1
|
4,524
| 60,503,194
|
Dropping index column of mutiple excel files in python
|
<p>I have multiple excel sheets that have identical column names. When I was saving the files from previous computations I forgot to set ‘Date’ as index and now all of them (40) have index columns with numbers from 1-200. If I load these into python they get an additional index column again resulting in 2 unnamed columns. I know I can use the glob function to access all my files. But is there a way I can access all the files, drop/delete the unnamed index column and set the new index to the date column</p>
<p>Here is an example of 1 excel sheet right now</p>
<pre><code>df = pd.DataFrame({
'': [0, 1,2,3,4],
'Date': [1930, 1931, 1932, 1933,1934],
'value': [11558522, 12323552, 13770958, 18412280, 13770958],
})
</code></pre>
|
<pre><code>dfs = [pd.read_csv(file).set_index('Date')[['value']]
       for file in glob.glob("/your/path/to/folder/*.csv")]
</code></pre>
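<p>Applied to the example frame from the question, the per-file cleanup step looks like this (a sketch; swap in <code>pd.read_excel</code> if the files really are .xlsx):</p>

```python
import pandas as pd

# one file's contents, as in the question: a stray unnamed index column
df = pd.DataFrame({'': [0, 1, 2, 3, 4],
                   'Date': [1930, 1931, 1932, 1933, 1934],
                   'value': [11558522, 12323552, 13770958, 18412280, 13770958]})
# keeping only 'value' drops the unnamed column; Date becomes the index
cleaned = df.set_index('Date')[['value']]
```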
|
python|pandas|dataframe|datetimeindex
| 0
|
4,525
| 60,399,734
|
Indexing in two dimensional PyTorch Tensor using another Tensor
|
<p>Suppose that tensor A is defined as:</p>
<pre><code> 1 2 3 4
5 6 7 8
9 10 11 12
13 14 15 16
</code></pre>
<p>I'm trying to extract a flat array out of this matrix by using another tensor as indices. For example, if the second tensor is defined as:</p>
<pre><code>0
1
2
3
</code></pre>
<p>I want the result of the indexing to be 1-D tensor with the contents:</p>
<pre><code>1
6
11
16
</code></pre>
<p>It doesn't seem to behave like NumPy; I've tried <code>A[:, B]</code> but it just throws an error for not being able to allocate an insane amount of memory and I've no idea why!</p>
|
<p><strong>1st Approach: using <code>torch.gather</code></strong></p>
<pre><code>torch.gather(A, 1, B.unsqueeze_(dim=1))
</code></pre>
<p>if you want one-dimensional vector, you can add squeeze to the end:</p>
<pre><code>torch.gather(A, 1, B.unsqueeze_(dim=1)).squeeze_()
</code></pre>
<p><strong>2nd Approach: using list comprehensions</strong></p>
<p>You can use list comprehensions to select the items at specific indexes, then concatenate them using <code>torch.stack</code>. An important point here is that you should not use <code>torch.tensor</code> to create a new tensor from a list; if you do, you will break the chain (you cannot calculate gradients through that node):</p>
<pre><code>torch.stack([A[i, B[i]] for i in range(A.size()[0])])
</code></pre>
|
python|pytorch
| 1
|
4,526
| 72,698,869
|
pandas - get difference to previous n-th rows
|
<p>Assume I have the following data frame in pandas, with accumulated values over time for all ids:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01.01.1999</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>01.01.1999</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>01.01.1999</td>
<td>5</td>
</tr>
<tr>
<td>1</td>
<td>03.01.1999</td>
<td>5</td>
</tr>
<tr>
<td>2</td>
<td>03.01.1999</td>
<td>8</td>
</tr>
<tr>
<td>3</td>
<td>03.01.1999</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to have the following, the difference for each id to the previous date:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>01.01.1999</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>01.01.1999</td>
<td>3</td>
</tr>
<tr>
<td>3</td>
<td>01.01.1999</td>
<td>5</td>
</tr>
<tr>
<td>1</td>
<td>03.01.1999</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>03.01.1999</td>
<td>5</td>
</tr>
<tr>
<td>3</td>
<td>03.01.1999</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>This is basically the difference. I can only apply something like this:</p>
<p><code>df["values"].diff().fillna(0)</code></p>
<p>But this would not include the date column. Any help?</p>
|
<p>IIUC, you want to groupby and <code>diff</code></p>
<pre class="lang-py prettyprint-override"><code>df['value'] = df.groupby('id')['value'].diff().fillna(df['value'])
</code></pre>
<pre><code>print(df)
id date value
0 1 01.01.1999 2.0
1 2 01.01.1999 3.0
2 3 01.01.1999 5.0
3 1 03.01.1999 3.0
4 2 03.01.1999 5.0
5 3 03.01.1999 2.0
</code></pre>
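<p>Equivalently — and this generalizes to the n-th previous row, as the title asks — the same difference can be written with <code>shift</code> (a sketch, assuming rows are already sorted by date within each id):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'id': [1, 2, 3, 1, 2, 3],
    'date': ['01.01.1999'] * 3 + ['03.01.1999'] * 3,
    'value': [2, 3, 5, 5, 8, 7],
})

n = 1  # difference against the n-th previous observation per id
df['value'] = (df['value'] - df.groupby('id')['value'].shift(n)).fillna(df['value'])
```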
|
python-3.x|pandas
| 0
|
4,527
| 72,536,408
|
How to drop columns from a pandas DataFrame that have elements containing a string?
|
<p>This is not about dropping columns whose name contains a string.</p>
<p>I have a dataframe with 1600 columns. Several hundred are garbage. Most of the garbage columns contain a phrase such as <code>invalid value encountered in double_scalars (XYZ)</code> where `XYZ' is a filler name for the column name.</p>
<p>I would like to delete all columns that contain, in any of their elements, the string <code>invalid</code></p>
<p>Purging columns with strings in general would work too. What I want is to clean it up so I can fit a machine learning model to it, so removing any/all columns that are not boolean or real would work.</p>
<p>This must be a duplicate question, but I can only find answers to how to remove a column with a specific column name.</p>
|
<p>You can use <code>df.select_dtypes(include=[float, bool])</code> or <code>df.select_dtypes(exclude=['object'])</code>.</p>
<p>Link to docs <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.select_dtypes.html</a></p>
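<p>If you want to target the <code>invalid</code> marker directly rather than going by dtype, a sketch like the following works (the example frame and its column names are made up):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'ok_num': [1.0, 2.0],
    'ok_bool': [True, False],
    'bad': ['invalid value encountered in double_scalars (XYZ)', 'x'],
})

# Drop every column in which any element contains the string 'invalid'.
bad_cols = [c for c in df.columns
            if df[c].astype(str).str.contains('invalid').any()]
clean = df.drop(columns=bad_cols)
```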
|
python-3.x|pandas|dataframe
| 2
|
4,528
| 72,712,449
|
SageMaker Pipeline - Processing step for ImageClassification model
|
<p>I'm trying to solve an image-classification task. I have prepared code to train, evaluate and deploy a TensorFlow model in a SageMaker notebook. I'm new to SageMaker and SageMaker Pipelines. Currently, I'm trying to split my code and create a SageMaker pipeline for this image-classification task.
The AWS documentation describes <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.html#step-type-processing" rel="nofollow noreferrer">Processing steps</a>. I have code which reads data from S3 and uses <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image/ImageDataGenerator" rel="nofollow noreferrer">ImageGenerator</a> to generate augmented images on the fly while the TensorFlow model is still training.</p>
<p>I can't find anything about how to use <code>ImageGenerator</code> inside a Processing step of a SageMaker Pipeline.</p>
<p>My <code>ImageGenerator</code> code:</p>
<pre><code>def load_data(mode):
if mode == 'TRAIN':
datagen = ImageDataGenerator(
rescale=1. / 255,
rotation_range = 0.5,
shear_range=0.2,
zoom_range=0.2,
width_shift_range = 0.2,
height_shift_range = 0.2,
fill_mode = 'nearest',
horizontal_flip=True)
else:
datagen = ImageDataGenerator(rescale=1. / 255)
return datagen
def get_flow_from_directory(datagen,
data_dir,
batch_size,
shuffle=True):
assert os.path.exists(data_dir), ("Unable to find images resources for input")
generator = datagen.flow_from_directory(data_dir,
class_mode = "categorical",
target_size=(HEIGHT, WIDTH),
batch_size=batch_size,
shuffle=shuffle
)
print('Labels are: ', generator.class_indices)
return generator
</code></pre>
<p>The question is: is it possible to use <code>ImageGenerator</code> inside a <a href="https://docs.aws.amazon.com/sagemaker/latest/dg/build-and-manage-steps.html#step-type-processing" rel="nofollow noreferrer">Processing step</a> of a SageMaker Pipeline?
I'd appreciate any ideas. Thanks.</p>
|
<p>So, I continue to use <code>ImageGenerator</code> and <code>flow_from_directory</code> inside the Training step. I skip the Processing step entirely and just use the Training, Evaluating and Register Model steps.</p>
|
tensorflow|keras|amazon-sagemaker|image-preprocessing
| 0
|
4,529
| 72,580,594
|
ModuleNotFoundError: No module named 'yaml' when trying to fit sagemaker tensorflow estimator locally
|
<p>I am trying to follow <a href="https://gitlab.com/juliensimon/aim410/-/tree/master" rel="nofollow noreferrer">this</a> tutorial for connecting aws to a jupyter notebook for local development (I am running jupyter inside of vscode which I don't think matters but just noting it in case).</p>
<p>I have <a href="https://gitlab.com/juliensimon/aim410/-/blob/master/local_training.ipynb" rel="nofollow noreferrer">this</a> ipynb file from the tutorial up and have gotten it to run successfully all the way until the cell where you try to fit the tensorflow estimator on the data:</p>
<p>code looks like this:</p>
<pre><code> # Train! This will pull (once) the SageMaker CPU/GPU container for TensorFlow to your local machine.
# Make sure that Docker is running and that docker-compose is installed
tf_estimator.fit({'training': training_input_path, 'validation': validation_input_path})
</code></pre>
<p>I have docker installed but I don't really know what I'm doing with it since I've never used it before. I can't tell if this is a docker issue or a sagemaker issue but this is the error that is thrown when I try to run that cell:</p>
<pre><code>Failed to import yaml. Local mode features will be impaired or broken. Please run "pip install 'sagemaker[local]'" to install all required dependencies.
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
/var/folders/cf/ft88j_856fv5rk3whgs12d9w0000gq/T/ipykernel_97461/84733352.py in <module>
2 # Make sure that Docker is running and that docker-compose is installed
3
----> 4 tf_estimator.fit({'training': training_input_path, 'validation': validation_input_path})
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/workflow/pipeline_context.py in wrapper(*args, **kwargs)
207 return self_instance.sagemaker_session.context
208
--> 209 return run_func(*args, **kwargs)
210
211 return wrapper
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/estimator.py in fit(self, inputs, wait, logs, job_name, experiment_config)
976 self._prepare_for_training(job_name=job_name)
977
--> 978 self.latest_training_job = _TrainingJob.start_new(self, inputs, experiment_config)
979 self.jobs.append(self.latest_training_job)
980 if wait:
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/estimator.py in start_new(cls, estimator, inputs, experiment_config)
1806 train_args = cls._get_train_args(estimator, inputs, experiment_config)
1807
-> 1808 estimator.sagemaker_session.train(**train_args)
1809
1810 return cls(estimator.sagemaker_session, estimator._current_job_name)
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/session.py in train(self, input_mode, input_config, role, job_name, output_config, resource_config, vpc_config, hyperparameters, stop_condition, tags, metric_definitions, enable_network_isolation, image_uri, algorithm_arn, encrypt_inter_container_traffic, use_spot_instances, checkpoint_s3_uri, checkpoint_local_path, experiment_config, debugger_rule_configs, debugger_hook_config, tensorboard_output_config, enable_sagemaker_metrics, profiler_rule_configs, profiler_config, environment, retry_strategy)
592 self.sagemaker_client.create_training_job(**request)
593
--> 594 self._intercept_create_request(train_request, submit, self.train.__name__)
595
596 def _get_train_request( # noqa: C901
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/session.py in _intercept_create_request(self, request, create, func_name)
4201 func_name (str): the name of the function needed intercepting
4202 """
-> 4203 return create(request)
4204
4205
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/session.py in submit(request)
590 LOGGER.info("Creating training-job with name: %s", job_name)
591 LOGGER.debug("train request: %s", json.dumps(request, indent=4))
--> 592 self.sagemaker_client.create_training_job(**request)
593
594 self._intercept_create_request(train_request, submit, self.train.__name__)
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/local/local_session.py in create_training_job(self, TrainingJobName, AlgorithmSpecification, OutputDataConfig, ResourceConfig, InputDataConfig, Environment, **kwargs)
190 logger.info("Starting training job")
191 training_job.start(
--> 192 InputDataConfig, OutputDataConfig, hyperparameters, Environment, TrainingJobName
193 )
194
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/local/entities.py in start(self, input_data_config, output_data_config, hyperparameters, environment, job_name)
235
236 self.model_artifacts = self.container.train(
--> 237 input_data_config, output_data_config, hyperparameters, environment, job_name
238 )
239 self.end_time = datetime.datetime.now()
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/local/image.py in train(self, input_data_config, output_data_config, hyperparameters, environment, job_name)
234
235 compose_data = self._generate_compose_file(
--> 236 "train", additional_volumes=volumes, additional_env_vars=training_env_vars
237 )
238 compose_command = self._compose()
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/local/image.py in _generate_compose_file(self, command, additional_volumes, additional_env_vars)
687 except ImportError as e:
688 logger.error(sagemaker.utils._module_import_error("yaml", "Local mode", "local"))
--> 689 raise e
690
691 yaml_content = yaml.dump(content, default_flow_style=False)
~/opt/anaconda3/envs/localsm/lib/python3.7/site-packages/sagemaker/local/image.py in _generate_compose_file(self, command, additional_volumes, additional_env_vars)
684
685 try:
--> 686 import yaml
687 except ImportError as e:
688 logger.error(sagemaker.utils._module_import_error("yaml", "Local mode", "local"))
ModuleNotFoundError: No module named 'yaml'
</code></pre>
<p>Wondering if anyone else has had this happen or if someone can point me in the right direction to try and figure out what the issue is and how to solve it. Thanks!</p>
|
<p>You are just missing the <code>pyyaml</code> module.</p>
<p>Install it with:</p>
<pre><code>pip install pyyaml
</code></pre>
<p>Alternatively, as the error message itself suggests, <code>pip install 'sagemaker[local]'</code> installs PyYAML along with the other local-mode dependencies.</p>
|
python|docker|tensorflow|amazon-sagemaker
| 1
|
4,530
| 59,863,058
|
How to avoid the bottleneck to GPU in case of CNN ( Conv Neural Net )?
|
<p>I am using Keras (with TensorFlow) to implement a CNN for image classification.
My GPU usage isn't crossing 1%. From what I have found, this is due to delays in loading data into memory, hence the low GPU utilisation.
But I don't understand how to implement a fix for this in Keras.</p>
<p>Can anyone help me with a code snippet that will avoid this bottleneck?</p>
<p>(Suggestions I have read are to preload the data, use parallelism etc., but I don't have any idea how to do that)</p>
|
<p>There is a lot to do here.</p>
<p>Firstly, you'll need a generator. It will serve the data batch by batch. You can either write your own generator (use <a href="https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly" rel="nofollow noreferrer">this tutorial</a> in that case) or let Keras do it for you. So basically a generator looks like this:</p>
<pre class="lang-py prettyprint-override"><code>class DataGenerator(tf.keras.utils.Sequence):
def __init__(self, ...):
self.images = ...
self.targets = ...
self.batch_size = ...
def __len__(self):
return int(np.floor(len(self.images) / self.batch_size))
def __getitem__(self, index):
return self.images[index*self.batch_size:(index+1)*self.batch_size], self.targets[index*self.batch_size:(index+1)*self.batch_size]
</code></pre>
<p>You can also use <a href="https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/image" rel="nofollow noreferrer">keras preprocessing</a> that will do that for you.</p>
<p><strong>This is very important if you want to use 100% of your GPUs.</strong></p>
<p>When your data generator is working, it's time to move on to parallelisation.</p>
<p>Parallelisation is made using a <a href="https://www.tensorflow.org/guide/distributed_training" rel="nofollow noreferrer">distributed strategy</a></p>
<pre class="lang-py prettyprint-override"><code>strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
model = # your keras model
train_generator = DataGenerator(...)
validation_generator = DataGenerator(...)
test_generator = DataGenerator(...)
optimizer = tf.keras.optimizers...
model.compile(loss=loss,
optimizer=optimizer,
metrics=['...'])
model.fit(train_generator, validation_data=validation_generator, validation_freq=1, epochs=FLAGS.epochs, callbacks=callbacks)
</code></pre>
<p>You'll have to put everything (data generator + model) under the scope of your distributed strategy. And that's it.</p>
|
tensorflow|machine-learning|keras
| 1
|
4,531
| 59,476,863
|
Reindexing Multiindex dataframe
|
<p>I have a MultiIndex dataframe and I want to reindex it. However, I get a 'duplicate axis' error.</p>
<pre><code>Product Date col1
A September 2019 5
October 2019 7
B September 2019 2
October 2019 4
</code></pre>
<p>How can I achieve output like this?</p>
<pre><code>Product Date col1
A January 2019 0
February 2019 0
March 2019 0
April 2019 0
May 2019 0
June 2019 0
July 2019 0
August 2019 0
September 2019 5
October 2019 7
B January 2019 0
February 2019 0
March 2019 0
April 2019 0
May 2019 0
June 2019 0
July 2019 0
August 2019 0
September 2019 2
October 2019 4
</code></pre>
<p>First I tried this:</p>
<pre><code>nested_df = nested_df.reindex(annual_date_range, level = 1, fill_value = 0)
</code></pre>
<p>Secondly,</p>
<pre><code>nested_df = nested_df.reset_index().set_index('Date')
nested_df = nested_df.reindex(annual_date_range, fill_value = 0)
</code></pre>
|
<p>Let <code>df1</code> be your first data frame with non-zero values. The approach is to create another data frame <code>df</code> with zero values and merge both data frames to obtain the result.</p>
<pre><code>dates = ['{month}-2019'.format(month=month) for month in range(1,9)]*2
length = int(len(dates)/2)
products = ['A']*length + ['B']*length
Col1 = [0]*len(dates)
df = pd.DataFrame({'Dates': dates, 'Products': products, 'Col1':Col1}).set_index(['Products','Dates'])
</code></pre>
<p>Now the MultiIndex is converted to datetime:</p>
<pre><code>df.index.set_levels(pd.to_datetime(df.index.get_level_values(1)[:8]).strftime('%m-%Y'), level=1,inplace=True)
</code></pre>
<p>In <code>df1</code> you have to do the same, i.e. change the datetime multiindex level to the same format:</p>
<pre><code>df1.index.set_levels(pd.to_datetime(df1.index.get_level_values(1)[:2]).strftime('%m-%Y'), level=1,inplace=True)
</code></pre>
<p>I did it because otherwise (for example if datetimes are formatted like <code>%B %y</code>) the sorting of the MultiIndex by months goes wrong. Now it is sufficient to merge both data frames:</p>
<pre><code>result = pd.concat([df1,df]).sort_values(['Products','Dates'])
</code></pre>
<p>The final move is to change the datetime format:</p>
<pre><code>result.index.set_levels(levels = pd.to_datetime(result.index.get_level_values(1)[:10]).strftime('%B %Y'), level=1, inplace=True)
</code></pre>
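<p>A more compact sketch of the same idea (assuming the target months for 2019 are known up front) builds the full index with <code>MultiIndex.from_product</code> and reindexes against it:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Product': ['A', 'A', 'B', 'B'],
    'Date': ['September 2019', 'October 2019', 'September 2019', 'October 2019'],
    'col1': [5, 7, 2, 4],
}).set_index(['Product', 'Date'])

# January..October 2019 in the same '%B %Y' format as the data.
months = [pd.Timestamp(2019, m, 1).strftime('%B %Y') for m in range(1, 11)]
full_index = pd.MultiIndex.from_product(
    [df.index.get_level_values('Product').unique(), months],
    names=['Product', 'Date'])
result = df.reindex(full_index, fill_value=0)
```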
|
pandas|multi-index|reindex
| 0
|
4,532
| 59,627,166
|
Update value in a cell based on summation of other cells matching other grouping variables
|
<p>I have a dataframe like this:</p>
<pre><code> Alpha Title Jan Feb Mar Apr
0 a T1 63 66 65 53
1 b T2 35 88 81 42
2 b T3 0 23 51 95
3 c T2 83 70 77 57
4 c T3 0 81 15 59
</code></pre>
<p>I want to update the value in column <code>Jan</code> where <code>Title = T3</code> using summation of values from <code>Jan, Feb and Mar</code> where <code>Title = T2</code>, with matching <code>Alpha</code></p>
<p>The output should look like this -</p>
<pre><code> Alpha Title Jan Feb Mar Apr
0 a T1 63 66 65 53
1 b T2 35 88 81 42
2 b T3 204 23 51 95
3 c T2 83 70 77 57
4 c T3 230 81 15 59
</code></pre>
|
<p>Use:</p>
<pre><code>#create Series for match by conditions and columns names
df1 = df.set_index('Alpha')
s = df1.loc[df1['Title'].eq('T2'), ['Jan','Feb','Mar']].sum(1)
#another condition
m = df['Title'].eq('T3')
#replace by mask
df.loc[m, 'Jan'] = df.loc[m, 'Alpha'].map(s)
print (df)
Alpha Title Jan Feb Mar Apr
0 a T1 63 66 65 53
1 b T2 35 88 81 42
2 b T3 204 23 51 95
3 c T2 83 70 77 57
4 c T3 230 81 15 59
</code></pre>
|
python|pandas
| 4
|
4,533
| 61,630,255
|
Resetting index to flat after pivot_table in pandas
|
<p>I have the following dataframe <code>df</code></p>
<pre><code> Datum HH DayPrecipitation
9377 2016-01-26 18 3
9378 2016-01-26 19 4
9379 2016-01-26 20 11
9380 2016-01-26 21 23
9381 2016-01-26 22 12
</code></pre>
<p>Which I converted to wide format using </p>
<pre><code>df.pivot_table(index = 'Datum', columns='HH' ,values = 'DayPrecipitation')
</code></pre>
<p>This leaves me with a double column</p>
<pre><code> HH 18 19 20 21 22
Datum
2016-01-26 3 4 11 23 12
</code></pre>
<p>I want to make the column look like this and rename the columns:</p>
<pre><code> Datum col1 col2 col3 col4 col5 col6
2016-01-26 1 3 4 11 23 12
</code></pre>
<p>However when I use <code>reset_index</code> it just adds another index column and does not remove the multi_index.
Does anyone know how to achieve said table? Help would be much appreciated!</p>
|
<p>You can remove the <code>[]</code> around <code>['DayPrecipitation']</code> to avoid a <code>MultiIndex</code> in the columns, then set new column names with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_axis.html" rel="nofollow noreferrer"><code>DataFrame.set_axis</code></a> and finally convert the index to a column with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a>:</p>
<pre><code>L = [f'col{x+1}' for x in range(df['HH'].nunique())]
df1 = (df.pivot_table(index = 'Datum', columns='HH' ,values = 'DayPrecipitation')
.rename_axis(None,axis=1)
.set_axis(L, inplace=False, axis=1)
.reset_index())
print (df1)
Datum col1 col2 col3 col4 col5
0 2016-01-26 3 4 11 23 12
</code></pre>
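<p>An equivalent sketch (recreating the frame from the question) that assigns the flat names directly — handy because newer pandas versions dropped the <code>inplace</code> argument of <code>set_axis</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Datum': ['2016-01-26'] * 5,
    'HH': [18, 19, 20, 21, 22],
    'DayPrecipitation': [3, 4, 11, 23, 12],
})

wide = df.pivot_table(index='Datum', columns='HH', values='DayPrecipitation')
wide.columns = [f'col{i+1}' for i in range(wide.shape[1])]  # flat names, drops 'HH'
wide = wide.reset_index()
```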
|
python|pandas|pivot-table
| 1
|
4,534
| 58,156,797
|
Replacing bad date values in python pandas
|
<p>I have a dataframe in pandas which mostly contains correct date values, but it also contains bad date values. How can I check for those bad date fields and replace them with today's date?</p>
<p>My dataframe will look like</p>
<pre><code>Date
12/12/2018
12/11/2018
#REF
12/1/205
12/1/205
N/A
Unknown
6/12/2018
6/3/2018
</code></pre>
|
<p>We can use <code>to_datetime</code>:</p>
<pre><code>pd.to_datetime(df.Date,errors='coerce').fillna(pd.to_datetime('today')).dt.date
Out[484]:
0 2018-12-12
1 2018-12-11
2 2019-09-29
3 2019-09-29
4 2019-09-29
5 2019-09-29
6 2019-09-29
7 2018-06-12
8 2018-06-03
Name: Date, dtype: object
#df.Date=pd.to_datetime(df.Date,errors='coerce').fillna(pd.to_datetime('today')).dt.date
</code></pre>
|
python-3.x|pandas
| 1
|
4,535
| 54,955,859
|
Pandas/Dataframe: How to assign default value when condition fails while taking single cell value from data frame using python?
|
<p>Let's consider the code below:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"])
x=0
print(df)
x=df.loc[df['A'] == 3, 'B', ''].iloc[0]
print(x)
</code></pre>
<p>When printing x I get 4 as the output. That's fine. But if the condition fails, as in the code below:</p>
<pre><code>x=df.loc[df['A'] == 33, 'B', ''].iloc[0]
</code></pre>
<p>I want to print x's initial value 0 and avoid the error below:</p>
<blockquote>
<p>IndexError: single positional indexer is out-of-bounds</p>
</blockquote>
<p>Guide me to avoid the error and display the initial value of x. Thanks in advance. </p>
|
<p>You can have a look at try/except for exception handling. Use:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"])
x=0
print(df)
try:
x=df.loc[df['A'] == 3, 'B', ''].iloc[0]
print(x)
except Exception as e:
print(e)
print(x)
</code></pre>
<p>Output:</p>
<pre><code> A B
0 1 2
1 3 4
2 5 6
3 7 8
Too many indexers #the exception
0 #the initial value
</code></pre>
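<p>An alternative sketch that avoids raising in the first place (the helper name <code>first_or_default</code> is made up): select the matching rows as a Series and fall back to the default when the selection is empty. Note that the extra <code>''</code> indexer from the question is dropped — <code>df.loc</code> takes at most a row and a column indexer:</p>

```python
import pandas as pd

df = pd.DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], columns=["A", "B"])

def first_or_default(frame, value, default=0):
    # Select matching rows first; use the default when nothing matches.
    s = frame.loc[frame['A'] == value, 'B']
    return s.iloc[0] if not s.empty else default

x = first_or_default(df, 3)    # matching row exists
y = first_or_default(df, 33)   # no match, falls back to the default 0
```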
|
python-3.x|pandas|dataframe
| 2
|
4,536
| 54,695,828
|
Transforming a Nested Dict in a dataframe?
|
<p>I have been trying to parse a nested dict into a data frame.
I made this df from a dict, but couldn't figure out the nested one.</p>
<p>df</p>
<pre><code> First second third
0 1 2 {nested dict}
</code></pre>
<p>nested dict:</p>
<pre><code> {'fourth': '4', 'fifth': '5', 'sixth': '6'}, {'fourth': '7', 'fifth': '8', 'sixth': '9'}
</code></pre>
<p>My Desired output would be:</p>
<pre><code> First second fourth fifth sixth fourth fifth sixth
0 1 2 4 5 6 7 8 9
</code></pre>
<p>Edit:
original Dict</p>
<pre><code> 'archi': [{'fourth': '115',
'fifth': '-162',
'sixth': '112'},
{'fourth': '52',
'fifth': '42',
'sixth': ' 32'}]
</code></pre>
|
<p>I can't quite tell the format of the nested dict in the "third" column, but here is what I recommend, using <a href="https://stackoverflow.com/questions/29681906/python-pandas-dataframe-from-series-of-dict">Python: Pandas dataframe from Series of dict</a> as a starting point. Here is a dict and dataframe which are reproducible:</p>
<pre><code>nst_dict = {'archi': [{'fourth': '115', 'fifth': '-162', 'sixth': '112'},
{'fourth': '52', 'fifth': '42','sixth': ' 32'}]}
df = pd.DataFrame.from_dict({'First':[1,2], 'Second':[2,3],
'third': [nst_dict,nst_dict]})
</code></pre>
<p>Then you need to first access the list within the dict, then the items of the list:</p>
<pre><code>df.thrd_1 = df.third.apply(lambda x: x['archi']) # convert to list
df.thrd_1a = df.thrd_1.apply(lambda x: x[0]) # access first item
df.thrd_1b = df.thrd_1.apply(lambda x: x[1]) # access second item
out = df.drop('third', axis=1).merge(
df.thrd_1a.apply(pd.Series).merge(df.thrd_1a.apply(pd.Series),
left_index=True, right_index=True),
left_index=True, right_index=True)
print(out)
First Second fourth_x fifth_x sixth_x fourth_y fifth_y sixth_y
0 1 2 115 -162 112 115 -162 112
1 2 3 115 -162 112 115 -162 112
</code></pre>
<p>I will try to clean this up with <code>collections.abc</code> and turn it into a function, but this should do the trick for your specific case.</p>
|
python|pandas|dataframe
| 1
|
4,537
| 54,998,898
|
Passing Genrator function to TF-Hub Universal sentence Encoder from pandas dataframe
|
<p>I have a pandas dataframe in which one column contains the text body of an email. I am trying to encode it using this <a href="https://colab.research.google.com/github/tensorflow/hub/blob/master/examples/colab/semantic_similarity_with_tf_hub_universal_encoder.ipynb" rel="nofollow noreferrer">tutorial</a>. I have managed to encode the sentences with:</p>
<pre><code>module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)
tf.logging.set_verbosity(tf.logging.ERROR)
messages = df['EmailBody'].tolist()[:50] #Why 50 explained below
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
message_embeddings = session.run(embed(messages))
</code></pre>
<p>Now if I increase the size beyond this, it starts leaking memory. I also tried running it in batches with:</p>
<pre><code>module_url = "https://tfhub.dev/google/universal-sentence-encoder-large/3"
embed = hub.Module(module_url)
tf.logging.set_verbosity(tf.logging.ERROR)
messages = df_RF_final['Preprocessed_EmailBody'].tolist()
message_embeddings = []
with tf.Session() as session:
session.run([tf.global_variables_initializer(), tf.tables_initializer()])
for i in range(int(len(messages)/100)):
message_embeddings.append(session.run(embed(messages[i*100:(1+1)*200])))
</code></pre>
<p>This gave the error shown at the bottom. I am looking for an implementation where, rather than passing the full list, I can pass a generator function; if it is not possible to use a generator function, then help me fix the second approach.</p>
<p>Error stack</p>
<pre><code>---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1333 try:
-> 1334 return fn(*args)
1335 except errors.OpError as e:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run_fn(feed_dict, fetch_list, target_list, options, run_metadata)
1318 return self._call_tf_sessionrun(
-> 1319 options, feed_dict, fetch_list, target_list, run_metadata)
1320
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _call_tf_sessionrun(self, options, feed_dict, fetch_list, target_list, run_metadata)
1406 self._session, options, feed_dict, fetch_list, target_list,
-> 1407 run_metadata)
1408
InvalidArgumentError: Requires start <= limit when delta > 0: 0/-2147483648
[[{{node module_3_apply_default_4/Encoder_en/Transformer/SequenceMask/Range}}]]
During handling of the above exception, another exception occurred:
InvalidArgumentError Traceback (most recent call last)
<ipython-input-13-d0c75bde4d87> in <module>()
7 session.run([tf.global_variables_initializer(), tf.tables_initializer()])
8 for i in range(int(len(messages)/100)):
----> 9 message_embeddings.append(session.run(embed(messages[i*100:(1+1)*200])))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
927 try:
928 result = self._run(None, fetches, feed_dict, options_ptr,
--> 929 run_metadata_ptr)
930 if run_metadata:
931 proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
1150 if final_fetches or final_targets or (handle and feed_dict_tensor):
1151 results = self._do_run(handle, final_targets, final_fetches,
-> 1152 feed_dict_tensor, options, run_metadata)
1153 else:
1154 results = []
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
1326 if handle is None:
1327 return self._do_call(_run_fn, feeds, fetches, targets, options,
-> 1328 run_metadata)
1329 else:
1330 return self._do_call(_prun_fn, handle, feeds, fetches)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
1346 pass
1347 message = error_interpolation.interpolate(message, self._graph)
-> 1348 raise type(e)(node_def, op, message)
1349
1350 def _extend_graph(self):
InvalidArgumentError: Requires start <= limit when delta > 0: 0/-2147483648
[[node module_3_apply_default_4/Encoder_en/Transformer/SequenceMask/Range (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py:514) ]]
Caused by op 'module_3_apply_default_4/Encoder_en/Transformer/SequenceMask/Range', defined at:
File "/usr/lib/python3.6/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "/usr/local/lib/python3.6/dist-packages/traitlets/config/application.py", line 658, in launch_instance
app.start()
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelapp.py", line 477, in start
ioloop.IOLoop.instance().start()
File "/usr/local/lib/python3.6/dist-packages/tornado/ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 450, in _handle_events
self._handle_recv()
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 480, in _handle_recv
self._run_callback(callback, msg)
File "/usr/local/lib/python3.6/dist-packages/zmq/eventloop/zmqstream.py", line 432, in _run_callback
callback(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tornado/stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 235, in dispatch_shell
handler(stream, idents, msg)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/ipkernel.py", line 196, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "/usr/local/lib/python3.6/dist-packages/ipykernel/zmqshell.py", line 533, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2718, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2822, in run_ast_nodes
if self.run_code(code, result):
File "/usr/local/lib/python3.6/dist-packages/IPython/core/interactiveshell.py", line 2882, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-13-d0c75bde4d87>", line 9, in <module>
message_embeddings.append(session.run(embed(messages[i*100:(1+1)*200])))
File "/usr/local/lib/python3.6/dist-packages/tensorflow_hub/module.py", line 247, in __call__
name=name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py", line 514, in create_apply_graph
import_scope=relative_scope_name)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 1435, in import_meta_graph
meta_graph_or_file, clear_devices, import_scope, **kwargs)[0]
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/training/saver.py", line 1457, in _import_meta_graph_with_return_elements
**kwargs))
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/meta_graph.py", line 806, in import_scoped_meta_graph_with_return_elements
return_elements=return_elements)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/util/deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 442, in import_graph_def
_ProcessNewOps(graph)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/importer.py", line 235, in _ProcessNewOps
for new_op in graph._add_new_tf_operations(compute_devices=False): # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3433, in _add_new_tf_operations
for c_op in c_api_util.new_tf_operations(self)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3433, in <listcomp>
for c_op in c_api_util.new_tf_operations(self)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 3325, in _create_op_from_tf_operation
ret = Operation(c_op, self)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1801, in __init__
self._traceback = tf_stack.extract_stack()
InvalidArgumentError (see above for traceback): Requires start <= limit when delta > 0: 0/-2147483648
[[node module_3_apply_default_4/Encoder_en/Transformer/SequenceMask/Range (defined at /usr/local/lib/python3.6/dist-packages/tensorflow_hub/native_module.py:514) ]]
</code></pre>
|
<p>I had a similar issue; it is very similar to "<a href="https://stackoverflow.com/questions/56488857/strongly-increasing-memory-consumption-when-using-elmo-from-tensorflow-hub/56493291#56493291">Strongly increasing memory consumption when using ELMo from Tensorflow-Hub</a>". I got a great answer from <a href="https://stackoverflow.com/users/7061249/arnoegw">arnoegw</a> <a href="https://stackoverflow.com/a/57952186/2605956">here</a>.</p>
<p>In short, since you are using tf.Session you are using the TF v1 programming model: you first build the dataflow graph and then run it repeatedly, feeding it inputs and fetching outputs. The problem is that the loop keeps adding new applications of hub.Module to the graph instead of building the graph once and reusing it.</p>
<p>The correct way to implement this according to the answer is:</p>
<pre class="lang-py prettyprint-override"><code>tf.logging.set_verbosity(tf.logging.ERROR)
messages = df_RF_final['Preprocessed_EmailBody'].tolist()
message_embeddings = []
with hub.eval_function_for_module("https://tfhub.dev/google/universal-sentence-encoder-large/3") as embed:
for i in range(int(len(messages)/100)):
    message_embeddings.append(embed(messages[i*100:(i+1)*100]))
</code></pre>
|
python|pandas|tensorflow|generator
| 0
|
4,538
| 54,817,230
|
Can re.search() skip past integer objects?
|
<p>Question is pretty self-explanatory. I have a column in a pandas dataframe that contains both int and str objects. When I attempt to search it with re.search() it fails because (I believe) some of the rows contain integers and it doesn't know what to do with them.</p>
<p>Is there some way to fix this? I do not see an ignore-errors argument.</p>
|
<p>Best thing to do would be to use pandas' inbuilt <code>pandas.Series.str.match</code> <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.match.html#pandas-series-str-match" rel="nofollow noreferrer">Docs</a>. It automatically "skips" int values, returning <code>NaN</code> for them instead of raising.</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data={
'Col1': [...],
'Col2': [...]}
)
df['Col1'].str.match("*pattern*")
</code></pre>
<p>You can adjust your pattern to make sure that none of your int strings are matched.</p>
<pre><code>>>> import pandas as pd
>>> df = pd.DataFrame(data={
'Col1': ["a string", "a second string", 123, 456, "another string"],
'Col2': [1, 2, 3, 4, 5]}
)
>>> df['Col1'].str.match("[^0-9]+")
0 True
1 True
2 NaN
3 NaN
4 True
Name: Col1, dtype: object
</code></pre>
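<p>If you would rather get a clean boolean mask than the <code>NaN</code> placeholders shown above, <code>str.match</code> also accepts an <code>na</code> argument. A minimal sketch on the same sample data:</p>

```python
import pandas as pd

df = pd.DataFrame(data={
    'Col1': ["a string", "a second string", 123, 456, "another string"],
    'Col2': [1, 2, 3, 4, 5]}
)

# na=False replaces the NaN produced for non-string cells with False,
# so the result is a plain boolean mask you can filter with directly
mask = df['Col1'].str.match("[^0-9]+", na=False)
print(df[mask])
```

<p>This keeps only the three string rows and drops the integer cells without any error handling.</p>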
|
python|regex|pandas
| 0
|
4,539
| 49,393,659
|
tf.Data: what are stragglers in parallel interleaving?
|
<p><a href="https://www.tensorflow.org/versions/master/api_docs/python/tf/data/Dataset#interleave" rel="noreferrer"><code>interleave</code></a> is a <code>tf.Data.Dataset</code> method that can be used to interleave together elements from multiple datasets. <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data/parallel_interleave" rel="noreferrer"><code>tf.contrib.data.parallel_interleave</code></a> provides a parallel version of the same functionality with the help of <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#apply" rel="noreferrer"><code>apply</code></a>.</p>
<p>I can see that reading from many datasets in parallel and having buffers for them as allowed by the parallel version will improve throughput. But the <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/data/parallel_interleave" rel="noreferrer">documentation</a> also has this to say about how <code>parallel_interleave</code> can increase data throughput:</p>
<blockquote>
<p>Unlike tf.data.Dataset.interleave, it gets elements from cycle_length
nested datasets in parallel, which increases the throughput,
especially in the presence of stragglers.</p>
</blockquote>
<p>What exactly are stragglers, and why does <code>parallel_interleave</code> work especially well in terms of throughput in their presence?</p>
|
<p>A straggler is a function which takes longer than normal to produce its output. This can be due to congestion on the network, or a weird combination of randomness.</p>
<p><code>interleave</code> does all the processing in a sequential manner, on a single thread. In the following schema, let <code>___</code> denote <em>waiting for IO/Computation</em>, <code><waiting></code> denote <em>waiting for its turn to spit an element</em> and <code>111</code> denote <em>producing the first element (<code>1</code>)</em>.</p>
<p>Suppose we have a dataset of directories <code>ds = [A, B, C, D]</code> and we produce files <code>1,2,3...</code> from each of them. Then using <code>r = ds.interleave(cycle_length=3, block_length=2)</code> will work kind of like this:</p>
<pre class="lang-none prettyprint-override"><code>A: ___111___222
B: <waiting> ___111___________222
C: <waiting> <waiting> <waiting> ___111___222
R: ____A1____A2____B1____________B2____C1____C2
</code></pre>
<p>You see that if producing elements from B straggles, all following elements will have to wait to be processed.</p>
<p><code>parallel_interleave</code> helps in two ways with stragglers. First, it starts each element in the cycle <em>in parallel</em> (hence the name). Therefore, the production schema becomes:</p>
<pre class="lang-none prettyprint-override"><code>A: ___111___222
B: ___<waiting>111___________222
C: ___<waiting><waiting><waitin>111___222
R: ____A1____A2_B1____________B2_C1____C2|....|
</code></pre>
<p>Doing this helps with reducing useless waiting by waiting in parallel. The part <code>|....|</code> shows how much we saved compared to the sequential version.</p>
<p>The second way it helps is by allowing a <code>sloppy</code> argument. If we set it to <code>True</code>, it allows skipping over an unavailable element until it is available, at the cost of producing a non-deterministic order. Here's how:</p>
<pre class="lang-none prettyprint-override"><code>A: ___111___<w>222
B: ___<w>111___________222
C: ___<w><w>111___222
R: ____A1_B1_C1_A2_C2___B2|...................|
</code></pre>
<p>Look at that saving!! But also look at the order of the elements !</p>
<hr>
<p>I reproduce these in code. It is an ugly way, but it illustrates the differences a bit.</p>
<pre><code>from time import sleep
DS = tf.data.Dataset
def repeater(val):
def _slow_gen():
for i in range(5):
if i % 2:
sleep(1)
yield i
return DS.from_generator(_slow_gen, tf.int8)
ds = DS.range(5)
slow_ds = ds.interleave(repeater, cycle_length=2, block_length=3)
para_ds = ds.apply(tf.contrib.data.parallel_interleave(
repeater, cycle_length=2, block_length=3)
)
sloppy_ds = ds.apply(tf.contrib.data.parallel_interleave(
repeater, cycle_length=2, block_length=3, sloppy=True)
)
%time apply_python_func(slow_ds, print, sess)
# 10 sec, you see it waiting each time
%time apply_python_func(para_ds, print, sess)
# 3 sec always! you see it burping a lot after the first wait
%time apply_python_func(sloppy_ds, print, sess)
# sometimes 3, sometimes 4 seconds
</code></pre>
<p>And here's the function to show a dataset</p>
<pre><code>def apply_python_func(ds, func, sess):
"""Exact values from ds using sess and apply func on them"""
it = ds.make_one_shot_iterator()
next_value = it.get_next()
num_examples = 0
while True:
try:
value = sess.run(next_value)
num_examples += 1
func(value)
except tf.errors.OutOfRangeError:
break
print('Evaluated {} examples'.format(num_examples))
</code></pre>
|
python|tensorflow|tensorflow-datasets
| 10
|
4,540
| 49,569,708
|
How to determine highest occurrence of categorical labels across multiple columns per row
|
<p>I am trying to determine the label name with the highest occurrence across multiple columns and set another pandas column to that label.</p>
<p>For example, given this dataframe:</p>
<pre><code> Class_1 Class_2 Class_3
0 versicolor setosa setosa
1 virginica versicolor virginica
2 virginica setosa setosa
3 versicolor setosa setosa
4 versicolor versicolor virginica
</code></pre>
<p>I want to add a column called Predictions per the reasoning above:</p>
<pre><code> Class_1 Class_2 Class_3 Predictions
0 versicolor setosa setosa setosa
1 virginica versicolor virginica virginica
2 virginica setosa setosa setosa
3 versicolor setosa setosa setosa
4 versicolor versicolor virginica versicolor
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>value_counts</code></a> for return first index by most common value per rows with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> and <code>axis=1</code>:</p>
<pre><code>df['Predictions'] = df.apply(lambda x: x.value_counts().index[0], axis=1)
print (df)
Class_1 Class_2 Class_3 Predictions
0 versicolor setosa setosa setosa
1 virginica versicolor virginica virginica
2 virginica setosa setosa setosa
3 versicolor setosa setosa setosa
4 versicolor versicolor virginica versicolor
</code></pre>
<p>Alternative with <a href="https://docs.python.org/2/library/collections.html#collections.Counter.most_common" rel="nofollow noreferrer"><code>Counter.most_common</code></a>:</p>
<pre><code>from collections import Counter
df['Predictions'] = [Counter(x).most_common(1)[0][0] for x in df.itertuples()]
print (df)
Class_1 Class_2 Class_3 Predictions
0 versicolor setosa setosa setosa
1 virginica versicolor virginica virginica
2 virginica setosa setosa setosa
3 versicolor setosa setosa setosa
4 versicolor versicolor virginica versicolor
</code></pre>
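<p>An alternative not shown above (a sketch of another pandas idiom, not part of the original answer) is <code>DataFrame.mode</code>, which computes the row-wise most common value directly; column <code>0</code> of its result is the first mode per row. Note that ties are broken by sort order rather than by column position:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Class_1': ['versicolor', 'virginica', 'virginica', 'versicolor', 'versicolor'],
    'Class_2': ['setosa', 'versicolor', 'setosa', 'setosa', 'versicolor'],
    'Class_3': ['setosa', 'virginica', 'setosa', 'setosa', 'virginica'],
})

# mode(axis=1) returns one column per tied mode; [0] keeps the first
df['Predictions'] = df.mode(axis=1)[0]
print(df)
```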
|
python|pandas
| 2
|
4,541
| 49,626,554
|
How to binary encode two mixed features?
|
<p>I have a dataset looking like this one:</p>
<pre><code>import pandas as pd
pd.DataFrame({"A": [2, 2, 1, 0, 5, 3, 0, 4, 5], "B": [1, 0, 0, 0, 1, 1, 1, 0, 0]})
A B
0 2 1
1 2 0
2 1 0
3 0 0
4 5 1
5 3 1
6 0 1
7 4 0
</code></pre>
<p>(I know that A is between 0 and 5; B is only 0 or 1)</p>
<p>I would like to transform it and get:</p>
<pre><code> A0_B0 A1_B0 A2_B0 A3_B0 ... A5_B1
0 0 0 0 0 ...
1 0 0 1 0 ...
2 0 1 0 0 ...
...
</code></pre>
<p>(knowing which column corresponds to which combination is important)</p>
<p>with a method that can be integrated with a sklearn Pipeline and/or sklearn_pandas DataFrameMapper (it needs to be reproducible on a test sample).</p>
<p>For now, I have tried using OneHotEncoder or LabelBinarizer but they apply to A or B columns without mixing them.</p>
<p>I have also tried to do it manually with a custom transformer, but DataFrameMapper loses column names:</p>
<pre><code>from sklearn.base import BaseEstimator, TransformerMixin
class ABTransformer(BaseEstimator, TransformerMixin):
def fit(self, x, y=None):
return self
def transform(self, x):
A = x.A
B = x.B
A0_B0 = np.logical_and((A==0), (B == 0))
A1_B0 = np.logical_and((A==1), (B == 0))
...
data = pd.DataFrame(np.stack((A0_B0, A1_B0,.... ), axis=1),
columns=["A0_B0", "A1_B0", ...]
)
return data
mapper = DataFrameMapper([
(["A", "B"], [ABTransformer()] , {'input_df':True, "alias": None}),
],
df_out=True, sparse=False)
</code></pre>
<p>At the end, the data I get are labelled: "A_B_0", "A_B_1", etc...</p>
<p>Is there a way to achieve the desired output?</p>
|
<p>Given that the number of distinct values for columns A and B is <code>n_A</code> and <code>n_B</code> respectively, and all values are represented as zero-based integers, you can use the following transform function. </p>
<pre><code>def transform(self, x):
indices = x.B * n_A + x.A
columns = ["A%d_B%d" % (j, i) for i in range(n_B) for j in range(n_A)]
onehot = np.eye(n_A * n_B)[indices]
data = pd.DataFrame(data=onehot, columns=columns)
return data
</code></pre>
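<p>The function above assumes it lives inside the custom transformer, with <code>n_A</code> and <code>n_B</code> already known. As a standalone sketch of the same indexing trick on the question's sample data (the counts <code>n_A=6</code>, <code>n_B=2</code> come from the stated value ranges):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [2, 2, 1, 0, 5, 3, 0, 4, 5],
                   "B": [1, 0, 0, 0, 1, 1, 1, 0, 0]})
n_A, n_B = 6, 2  # A is in 0..5, B is 0 or 1

# Collapse (A, B) into a single index in 0..(n_A*n_B - 1), then pick rows
# of the identity matrix to one-hot encode the combined category
indices = df.B * n_A + df.A
columns = ["A%d_B%d" % (j, i) for i in range(n_B) for j in range(n_A)]
onehot = pd.DataFrame(np.eye(n_A * n_B, dtype=int)[indices], columns=columns)
print(onehot.head())
```

<p>Each row has exactly one <code>1</code>, in the column named after its (A, B) combination.</p>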
|
python|scikit-learn|sklearn-pandas
| 1
|
4,542
| 49,363,850
|
Approximation of vector-valued multivariate function with arbitrary in- and output dimensions in Numpy/Scipy
|
<p>The starting point is an m-dimensional vector-valued function </p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=f(x)%3D(f_1(x)%2C...%2Cf_m(x))%20" alt="eq">,</p>
<p>where the input is also an n-dimensional vector:</p>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=x%3D(x_1%2C...%2Cx_n)%20" alt="eq">.</p>
<p>The input and output of this function are numpy vectors. This function is expensive to calculate, so I need an approximation/interpolation.</p>
<p>Is there a numpy/scipy function which returns an approximation, e.g. Taylor expansion, of this function in proximity of a given value for <em>x</em> for arbitrary dimensions <em>m, n</em>?</p>
<p>So essentially, I am asking for a generalization of <em>scipy.interpolate.approximate_taylor_polynomial</em>, since I am also interested in the quadratic terms of the approximation.</p>
<p>In <em>scipy.interpolate</em>, there seem to be some options for vector-valued <em>x</em>, but only for scalar functions, but just looping over the m components of the function is not an option, since the components cannot be calculated separately and the function would be called more often than necessary.</p>
<p>If such a function does not exist, a quick way with using existing methods and avoiding unnecessary function calls would also be great.</p>
|
<p>I think you have to roll your own approximation for that. The idea is simple: sample the function at some reasonable points (at least as many as there are monomials in the Taylor approximation, but preferably more), and fit the coefficients with <code>np.linalg.lstsq</code>. The actual fit is one line, the rest is preparation for it. </p>
<p>I'll use an example with n=3 and m=2, so three variables and 2-dimensional values. Initial setup:</p>
<pre><code>import numpy as np
def f(point):
x, y, z = point[0], point[1], point[2]
return np.array([np.exp(x + 2*y + 3*z), np.exp(3*x + 2*y + z)])
n = 3
m = 2
scale = 0.1
</code></pre>
<p>The choice of the <code>scale</code> parameter is subject to the same considerations as in the docstring of <code>approximate_taylor_polynomial</code> (see the <a href="https://github.com/scipy/scipy/blob/v1.0.0/scipy/interpolate/polyint.py#L436" rel="nofollow noreferrer">source</a>). </p>
<p>Next step is generation of points. With n variables, quadratic fit involves <code>1 + n + n*(n+1)/2</code> monomials (one constant, n linear, n(n+1)/2 quadratic). I use <code>1 + n + n**2</code> points which are placed around <code>(0, 0, 0)</code> and have either one or two nonzero coordinates. The particular choices are somewhat arbitrary; I could not find a "canonical" choice of sample points for multivariate quadratic fit. </p>
<pre><code>points = [np.zeros((n, ))]
points.extend(scale*np.eye(n))
for i in range(n):
for j in range(n):
point = np.zeros((n,))
point[i], point[j] = scale, -scale
points.append(point)
points = np.array(points)
values = f(points.T).T
</code></pre>
<p>The array <code>values</code> holds the values of function at each of those points. The preceding line is the only place where <code>f</code> is called. Next step, generate monomials for the model, and evaluate them at these same points. </p>
<pre><code>monomials = [np.zeros((1, n)), np.eye(n)]
for i in range(n):
for j in range(i, n):
monom = np.zeros((1, n))
monom[0, i] += 1
monom[0, j] += 1
monomials.append(monom)
monomials = np.concatenate(monomials, axis=0)
monom_values = np.prod(points**monomials[:, None, :], axis=-1).T
</code></pre>
<p>Let's review the situation: we have <code>values</code> of the function, of shape (13, 2) here, and of monomials, of shape (13, 10). Here 13 is the number of points and 10 is the number of monomials. For each column of <code>values</code>, the <code>lstsq</code> method will find the linear combination of the columns of <code>monomials</code> that best approximates it. These are the coefficients we want.</p>
<pre><code>coeffs = np.linalg.lstsq(monom_values, values, rcond=None)[0]
</code></pre>
<p>Let's see if these are any good. The coefficients are</p>
<pre><code>[[1. 1. ]
[1.01171761 3.03011523]
[2.01839762 2.01839762]
[3.03011523 1.01171761]
[0.50041681 4.53385141]
[2.00667556 6.04011017]
[3.02759266 3.02759266]
[2.00667556 2.00667556]
[6.04011017 2.00667556]
[4.53385141 0.50041681]]
</code></pre>
<p>and the array <code>monomials</code>, for reference, is </p>
<pre><code>[[0. 0. 0.]
[1. 0. 0.]
[0. 1. 0.]
[0. 0. 1.]
[2. 0. 0.]
[1. 1. 0.]
[1. 0. 1.]
[0. 2. 0.]
[0. 1. 1.]
[0. 0. 2.]]
</code></pre>
<p>So, for example, the monomial <code>x**2</code>, encoded as <code>[2, 0, 0]</code>, gets the coefficients <code>[0.50041681 4.53385141]</code> for the two components of the function <code>f</code>. This makes perfect sense because its coefficient in the Taylor expansion of <code>exp(x + 2*y + 3*z)</code> is 0.5, and in the Taylor expansion of <code>exp(3*x + 2*y + z)</code> it is 4.5. </p>
<p>An approximation of the function f can be obtained by</p>
<pre><code>def fFit(point,coeffs,monomials):
return np.prod(point**monomials[:, None, :], axis=-1).T.dot(coeffs)[0]
testpoint = np.array([0.05,-0.05,0.0])
# true value:
print(f(testpoint)) # output: [ 0.95122942 1.0512711 ]
# approximation:
print(fFit(testpoint,coeffs,monomials)) # output: [ 0.95091704 1.05183692]
</code></pre>
|
numpy|scipy|interpolation|taylor-series|function-approximation
| 1
|
4,543
| 49,673,005
|
Any way for a faster Python For Loop
|
<p>Can anyone please let me know if the below for loop can be adjusted to be faster? The below for loop runs on a spreadsheet of almost 200k rows and takes around 22 hours to compute. Any help would be appreciated.</p>
<p>My initial spreadsheet has the two columns highlighted in green. </p>
<p>My code job is to fill in all the other columns in yellow according to the criteria in the code below.</p>
<p>My initial spreadsheet :</p>
<p><a href="https://i.stack.imgur.com/8owO2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8owO2.png" alt="My initial spreadsheet :"></a> </p>
<p>My code (Sample to fill in just one column):</p>
<pre><code>for i in range(0,len(rolling)):
# Fill in the 3 Month OT
rolling.iloc[i,9]=sum(rolling.fSM_OT[(rolling['PERIOD_DATE'].isin(pd.date_range(rolling.BO3M[i], rolling.PERIOD_DATE[i]))) &
(rolling['CUSTOMER_ID']==rolling.CUSTOMER_ID[i]) &
(rolling['SUPPLIER_ID']==rolling.SUPPLIER_ID[i])
& (rolling['SUPPLIER_LOCATION_ID']==rolling.SUPPLIER_LOCATION_ID[i])])
</code></pre>
|
<p>Yeah, reduce the loop down to minimal complexity, then optimize, as @jpp commented. </p>
<p>Take a look at this also, great way to get things like this done at speed with Python. <a href="http://chriskiehl.com/article/parallelism-in-one-line/" rel="nofollow noreferrer">http://chriskiehl.com/article/parallelism-in-one-line/</a></p>
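<p>For this particular computation, the per-row loop can usually be eliminated entirely with a grouped, time-based rolling sum. A hedged sketch (the column names follow the question; the 92-day window as "3 months" and the tiny sample frame are assumptions):</p>

```python
import pandas as pd

# Hypothetical frame mirroring the question's columns
rolling = pd.DataFrame({
    'CUSTOMER_ID': [1, 1, 1, 2],
    'SUPPLIER_ID': [9, 9, 9, 9],
    'SUPPLIER_LOCATION_ID': [5, 5, 5, 5],
    'PERIOD_DATE': pd.to_datetime(['2020-01-31', '2020-02-29',
                                   '2020-03-31', '2020-01-31']),
    'fSM_OT': [10.0, 20.0, 30.0, 5.0],
})

keys = ['CUSTOMER_ID', 'SUPPLIER_ID', 'SUPPLIER_LOCATION_ID']
# Sort by group keys then date so the grouped rolling result aligns
# positionally with the frame when assigned back via .values
rolling = rolling.sort_values(keys + ['PERIOD_DATE']).reset_index(drop=True)
rolling['OT_3M'] = (rolling.set_index('PERIOD_DATE')
                           .groupby(keys)['fSM_OT']
                           .rolling('92D').sum()   # ~3 calendar months
                           .values)
print(rolling)
```

<p>This does one pass over the data instead of one <code>isin</code>-filtered scan per row, which is where the 22 hours go.</p>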
|
python|pandas|for-loop|dataframe
| 1
|
4,544
| 73,341,288
|
Replace a column with binned values and return a new DataFrame
|
<p>I have a DataFrame <code>df</code> that has an <code>Age</code> column with continuous variables. I would like to create a new DataFrame <code>new_df</code>, replacing the original continuous variables with categorical variables that I created from binning.</p>
<p>Is there a way to do this?</p>
<p><strong>DataFrame (<code>df</code>):</strong></p>
<pre><code> Customer_ID Gender Age
0 0002-ORFBO Female 37
1 0003-MKNFE Male 46
2 0004-TLHLJ Male 50
3 0011-IGKFF Male 78
4 0013-EXCHZ Female 75
5 0013-MHZWF Female 23
6 0013-SMEOE Female 67
7 0014-BMAQU Male 52
8 0015-UOCOJ Female 68
9 0016-QLJIS Female 43
10 0017-DINOC Male 47
11 0017-IUDMW Female 25
12 0018-NYROU Female 58
13 0019-EFAEP Female 32
14 0019-GFNTW Female 39
15 0020-INWCK Female 58
16 0020-JDNXP Female 52
17 0021-IKXGC Female 72
18 0022-TCJCI Male 79
</code></pre>
<p><strong>My code:</strong></p>
<pre><code># Ages 0 to 3: Toddler
# Ages 4 to 17: Child
# Ages 18 to 25: Young Adult
# Ages 26 to 64: Adult
# Ages 65 to 99: Elder
pd.cut(df.Age,bins=[0,3,17,25,64,99], labels=['Toddler', 'Child', 'Young Adult', 'Adult', 'Elder'])
</code></pre>
|
<p>You can try adding the <code>include_lowest</code> argument to make <code>0</code> included in the <code>Toddler</code> label</p>
<pre class="lang-py prettyprint-override"><code>out = df.join(pd.cut(df.pop('Age'),
bins=[0,3,17,25,64,99],
labels=['Toddler', 'Child', 'Young Adult', 'Adult', 'Elder'],
include_lowest=True,).to_frame('label'))
</code></pre>
<pre><code>print(out)
label
0 NaN
1 Toddler
2 Toddler
3 Toddler
4 Toddler
5 Child
6 Child
7 Child
8 Child
9 Child
10 Child
11 Child
12 Child
13 Child
14 Child
15 Child
16 Child
17 Child
18 Child
19 Young Adult
20 Young Adult
21 Young Adult
22 Young Adult
23 Young Adult
24 Young Adult
25 Young Adult
26 Young Adult
27 Adult
28 Adult
29 Adult
30 Adult
31 Adult
32 Adult
33 Adult
34 Adult
35 Adult
36 Adult
37 Adult
38 Adult
39 Adult
40 Adult
41 Adult
42 Adult
43 Adult
44 Adult
45 Adult
46 Adult
47 Adult
48 Adult
49 Adult
50 Adult
51 Adult
52 Adult
53 Adult
54 Adult
55 Adult
56 Adult
57 Adult
58 Adult
59 Adult
60 Adult
61 Adult
62 Adult
63 Adult
64 Adult
65 Adult
66 Elder
67 Elder
68 Elder
69 Elder
70 Elder
71 Elder
72 Elder
73 Elder
74 Elder
75 Elder
76 Elder
77 Elder
78 Elder
79 Elder
80 Elder
81 Elder
82 Elder
83 Elder
84 Elder
85 Elder
86 Elder
87 Elder
88 Elder
89 Elder
90 Elder
91 Elder
92 Elder
93 Elder
94 Elder
95 Elder
96 Elder
97 Elder
98 Elder
99 Elder
100 Elder
</code></pre>
<p>New column to original df</p>
<pre><code>df['label'] = pd.cut(df['Age'],
bins=[0,3,17,25,64,99],
labels=['Toddler', 'Child', 'Young Adult', 'Adult', 'Elder'],
include_lowest=True)
</code></pre>
<pre><code>print(df)
Age label
0 -1 NaN
1 0 Toddler
2 1 Toddler
3 2 Toddler
4 3 Toddler
5 4 Child
6 5 Child
7 6 Child
8 7 Child
9 8 Child
10 9 Child
11 10 Child
12 11 Child
13 12 Child
14 13 Child
15 14 Child
16 15 Child
17 16 Child
18 17 Child
19 18 Young Adult
20 19 Young Adult
21 20 Young Adult
22 21 Young Adult
23 22 Young Adult
24 23 Young Adult
25 24 Young Adult
26 25 Young Adult
27 26 Adult
28 27 Adult
29 28 Adult
30 29 Adult
31 30 Adult
32 31 Adult
33 32 Adult
34 33 Adult
35 34 Adult
36 35 Adult
37 36 Adult
38 37 Adult
39 38 Adult
40 39 Adult
41 40 Adult
42 41 Adult
43 42 Adult
44 43 Adult
45 44 Adult
46 45 Adult
47 46 Adult
48 47 Adult
49 48 Adult
50 49 Adult
51 50 Adult
52 51 Adult
53 52 Adult
54 53 Adult
55 54 Adult
56 55 Adult
57 56 Adult
58 57 Adult
59 58 Adult
60 59 Adult
61 60 Adult
62 61 Adult
63 62 Adult
64 63 Adult
65 64 Adult
66 65 Elder
67 66 Elder
68 67 Elder
69 68 Elder
70 69 Elder
71 70 Elder
72 71 Elder
73 72 Elder
74 73 Elder
75 74 Elder
76 75 Elder
77 76 Elder
78 77 Elder
79 78 Elder
80 79 Elder
81 80 Elder
82 81 Elder
83 82 Elder
84 83 Elder
85 84 Elder
86 85 Elder
87 86 Elder
88 87 Elder
89 88 Elder
90 89 Elder
91 90 Elder
92 91 Elder
93 92 Elder
94 93 Elder
95 94 Elder
96 95 Elder
97 96 Elder
98 97 Elder
99 98 Elder
100 99 Elder
</code></pre>
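<p>For the frame in the question itself, a quick sketch (using a subset of rows for brevity) that produces the requested <code>new_df</code> while leaving <code>df</code> untouched:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Customer_ID': ['0002-ORFBO', '0003-MKNFE', '0011-IGKFF', '0013-MHZWF'],
    'Gender': ['Female', 'Male', 'Male', 'Female'],
    'Age': [37, 46, 78, 23],
})

# assign returns a copy, so df keeps its continuous ages
new_df = df.assign(Age=pd.cut(df['Age'],
                              bins=[0, 3, 17, 25, 64, 99],
                              labels=['Toddler', 'Child', 'Young Adult',
                                      'Adult', 'Elder'],
                              include_lowest=True))
print(new_df)
```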
|
python|pandas|dataframe|binning
| 1
|
4,545
| 67,205,712
|
Sorting two separate xarray DataArrays based on one of those arrays only in a dask-friendly way
|
<p>Say I have two DataArrays, <code>A</code> and <code>B</code>, both with dimensions time, x, z. I want to sort all values of <code>A</code> only in x and z, so that at each individual time I will have a DataArray with sorted values. Simultaneously, I also want to sort <code>B</code>, but based on the values of <code>A</code>.</p>
<p>If I only had 1-D numpy arrays I could what I want following <a href="https://stackoverflow.com/a/1903579">this answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>>>> a = numpy.array([2, 3, 1])
>>> b = numpy.array([4, 6, 7])
>>> p = a.argsort()
>>> p
[2, 0, 1]
>>> a[p]
array([1, 2, 3])
>>> b[p]
array([7, 4, 6])
</code></pre>
<p>However, with DataArrays the problem is a bit more complicated. I can get something that works with the following code:</p>
<pre class="lang-py prettyprint-override"><code>def zipsort_xarray(da_a, da_b, unsorted_dim="time"):
assert da_a.dims == da_b.dims, "Dimensions aren't the same"
for dim in da_a.dims:
assert np.allclose(da_a[dim], da_b[dim]), f"Coordinates of {dim} aren't the same"
sorted_dims = [ dim for dim in da_a.dims if dim != unsorted_dim ]
daa_aux = da_a.stack(aux_dim=sorted_dims) # stack all dims to be sorted into one
indices = np.argsort(daa_aux, axis=-1) # get indices that sort the last (stacked) dim
indices[unsorted_dim] = range(len(indices.time)) # turn unsorted_dim into a counter
flat_indices = np.concatenate(indices + indices.time*len(indices.aux_dim)) # Make indices appropriate for indexing a fully flattened version of the data array
daa_aux2 = da_a.stack(aux_dim2=da_a.dims) # get a fully flatten version of the data array
daa_aux2.values = daa_aux2.values[flat_indices] # apply the flattened indices to sort it
dab_aux2 = da_b.stack(aux_dim2=da_b.dims) # get a fully flatten version of the data array
dab_aux2.values = dab_aux2.values[flat_indices] # apply the same flattened indices to sort it
return daa_aux2.unstack(), dab_aux2.unstack() # return unflattened (unstacked) DataArrays
tsize=2
xsize=2
zsize=2
data1 = xr.DataArray(np.random.randn(tsize, xsize, zsize), dims=("time", "x", "z"),
coords=dict(time=range(tsize),
x=range(xsize),
z=range(zsize)))
data2 = xr.DataArray(np.random.randn(tsize, xsize, zsize), dims=("time", "x", "z"),
coords=dict(time=range(tsize),
x=range(xsize),
z=range(zsize)))
sort1, sort2 = zipsort_xarray(data1.transpose("time", "z", "x"), data2.transpose("time", "z", "x"))
</code></pre>
<p>However, not only does this feel a bit "hacky", it also doesn't work well with dask.</p>
<p>I'm planning on using this on large DataArrays that will be chunked in time, so it's important that I get something going that can work in those cases. However if I chunk the DataArrays in time I get:</p>
<pre class="lang-py prettyprint-override"><code>data1 = data1.chunk(dict(time=1))
data2 = data2.chunk(dict(time=1))
sort1, sort2 = zipsort_xarray(data1.transpose("time", "z", "x"), data2.transpose("time", "z", "x"))
</code></pre>
<p>and the output</p>
<pre><code>NotImplementedError: 'argsort' is not yet a valid method on dask arrays
</code></pre>
<p>Is there any way to make this work with chunked DataArrays?</p>
|
<p>I think I have something working that seems to be fully parallel. Only works when the time dimension is chunked with size one:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
def zipsort3(da_a, da_b, unsorted_dim="time"):
"""
Only works if both `da_a` and `da_b` are chunked in `unsorted_dim`
with size 1 chunks
"""
from dask.array import map_blocks
assert da_a.dims == da_b.dims, "Dimensions aren't the same"
for dim in da_a.dims:
assert np.allclose(da_a[dim], da_b[dim]), f"Coordinates of {dim} aren't the same"
sorted_dims = [ dim for dim in da_a.dims if dim != unsorted_dim ]
daa_aux = da_a.stack(aux_dim=sorted_dims).transpose(unsorted_dim, "aux_dim") # stack all dims to be sorted into one
dab_aux = da_b.stack(aux_dim=sorted_dims).transpose(unsorted_dim, "aux_dim") # stack all dims to be sorted into one
indices = map_blocks(np.argsort, daa_aux.data, axis=-1, dtype=np.int64)
def reorder(A, ind): return A[0,ind]
daa_aux.data = map_blocks(reorder, daa_aux.data, indices, dtype=np.float64)
dab_aux.data = map_blocks(reorder, dab_aux.data, indices, dtype=np.float64)
return daa_aux.unstack(), dab_aux.unstack()
tsize=2
xsize=2
zsize=2
data1 = xr.DataArray(np.random.randn(tsize, xsize, zsize), dims=("time", "x", "z"),
coords=dict(time=range(tsize),
x=range(xsize),
z=range(zsize)))
data2 = xr.DataArray(np.random.randn(tsize, xsize, zsize), dims=("time", "x", "z"),
coords=dict(time=range(tsize),
x=range(xsize),
z=range(zsize)))
data1 = data1.chunk(dict(time=1))
data2 = data2.chunk(dict(time=1))
sorted1, sorted2 = zipsort3(data1.transpose("time", "z", "x"), data2.transpose("time", "z", "x"))
</code></pre>
|
python|arrays|numpy|dask|python-xarray
| 0
|
4,546
| 60,000,899
|
Is a Python runtime needed to use TensorFlow.js Node?
|
<p>As mentioned in the node version installation instructions, is a Python runtime needed to use TensorFlow.js Node? I can install whatever is required, but I'm not sure if our production servers have it.</p>
|
<p>It depends on what you want to do. </p>
<p>If you already have a tensorflow model written in python that you would like to deploy for inference in nodejs, </p>
<ul>
<li><p>you can use the TensorFlow.js converter. In this case you will need a Python runtime</p></li>
<li><p>since version 1.3 of tfjs-node, it is possible to load the SavedModel directly in JS (only possible in Node.js) using <a href="https://js.tensorflow.org/api_node/1.5.1/#node.loadSavedModel" rel="nofollow noreferrer">loadSavedModel</a>.</p></li>
</ul>
<p>But If you want to write your complete pipeline in js, you don't need to have python installed.</p>
|
tensorflow.js
| 1
|
4,547
| 60,226,735
|
How to count overlapping datetime intervals in Pandas?
|
<p>I have a following DataFrame with two datetime columns:</p>
<pre><code> start end
0 01.01.2018 00:47 01.01.2018 00:54
1 01.01.2018 00:52 01.01.2018 01:03
2 01.01.2018 00:55 01.01.2018 00:59
3 01.01.2018 00:57 01.01.2018 01:16
4 01.01.2018 01:00 01.01.2018 01:12
5 01.01.2018 01:07 01.01.2018 01:24
6 01.01.2018 01:33 01.01.2018 01:38
7 01.01.2018 01:34 01.01.2018 01:47
8 01.01.2018 01:37 01.01.2018 01:41
9 01.01.2018 01:38 01.01.2018 01:41
10 01.01.2018 01:39 01.01.2018 01:55
</code></pre>
<p>I would like to count how many <em>starts</em> (intervals) are active at the same time before they end at given time (in other words: <strong>how many times each row overlaps with the rest of the rows</strong>). </p>
<p>E.g. from 00:47 to 00:52 only one is active, from 00:52 to 00:54 two, from 00:54 to 00:55 only one again, and so on.</p>
<p>I tried to stack the columns onto each other, sort by date, and, by iterating through the whole dataframe, give each "start" +1 to a counter and each "end" -1. It works, but on my original data frame, where I have a few million rows, <strong>iteration takes forever</strong> - I need to find a quicker way.</p>
<p>My original <em>basic-and-not-very-good</em> code:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.read_csv('something.csv', sep=';')
df = df.stack().to_frame()
df = df.reset_index(level=1)
df.columns = ['status', 'time']
df = df.sort_values('time')
df['counter'] = np.nan
df = df.reset_index().drop('index', axis=1)
print(df.head(10))
</code></pre>
<p>gives:</p>
<pre><code> status time counter
0 start 01.01.2018 00:47 NaN
1 start 01.01.2018 00:52 NaN
2 stop 01.01.2018 00:54 NaN
3 start 01.01.2018 00:55 NaN
4 start 01.01.2018 00:57 NaN
5 stop 01.01.2018 00:59 NaN
6 start 01.01.2018 01:00 NaN
7 stop 01.01.2018 01:03 NaN
8 start 01.01.2018 01:07 NaN
9 stop 01.01.2018 01:12 NaN
</code></pre>
<p>and:</p>
<pre><code>counter = 0
for index, row in df.iterrows():
if row['status'] == 'start':
counter += 1
else:
counter -= 1
df.loc[index, 'counter'] = counter
</code></pre>
<p>final output:</p>
<pre><code> status time counter
0 start 01.01.2018 00:47 1.0
1 start 01.01.2018 00:52 2.0
2 stop 01.01.2018 00:54 1.0
3 start 01.01.2018 00:55 2.0
4 start 01.01.2018 00:57 3.0
5 stop 01.01.2018 00:59 2.0
6 start 01.01.2018 01:00 3.0
7 stop 01.01.2018 01:03 2.0
8 start 01.01.2018 01:07 3.0
9 stop 01.01.2018 01:12 2.0
</code></pre>
<p>Is there any way i can do this by <strong>NOT</strong> using iterrows()?</p>
<p>Thanks in advance!</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.cumsum.html" rel="noreferrer"><code>Series.cumsum</code></a> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="noreferrer"><code>Series.map</code></a> (or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.replace.html" rel="noreferrer"><code>Series.replace</code></a>):</p>
<pre><code>new_df = df.melt(var_name = 'status',value_name = 'time').sort_values('time')
new_df['counter'] = new_df['status'].map({'start':1,'end':-1}).cumsum()
print(new_df)
status time counter
0 start 2018-01-01 00:47:00 1
1 start 2018-01-01 00:52:00 2
11 end 2018-01-01 00:54:00 1
2 start 2018-01-01 00:55:00 2
3 start 2018-01-01 00:57:00 3
13 end 2018-01-01 00:59:00 2
4 start 2018-01-01 01:00:00 3
12 end 2018-01-01 01:03:00 2
5 start 2018-01-01 01:07:00 3
15 end 2018-01-01 01:12:00 2
14 end 2018-01-01 01:16:00 1
16 end 2018-01-01 01:24:00 0
6 start 2018-01-01 01:33:00 1
7 start 2018-01-01 01:34:00 2
8 start 2018-01-01 01:37:00 3
9 start 2018-01-01 01:38:00 4
17 end 2018-01-01 01:38:00 3
10 start 2018-01-01 01:39:00 4
19 end 2018-01-01 01:41:00 3
20 end 2018-01-01 01:41:00 2
18 end 2018-01-01 01:47:00 1
21 end 2018-01-01 01:55:00 0
</code></pre>
<hr>
<p>We could also use <a href="https://numpy.org/doc/1.18/reference/generated/numpy.cumsum.html" rel="noreferrer"><code>numpy.cumsum</code></a>:</p>
<pre><code>new_df['counter'] = np.where(new_df['status'].eq('start'),1,-1).cumsum()
</code></pre>
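<p>A minimal self-contained sketch of the melt + map + cumsum idea above, on a two-interval toy frame (the dates are a subset of the question's sample data):</p>

```python
import pandas as pd

# Two overlapping intervals
df = pd.DataFrame({
    'start': pd.to_datetime(['2018-01-01 00:47', '2018-01-01 00:52']),
    'end':   pd.to_datetime(['2018-01-01 00:54', '2018-01-01 01:03']),
})

# Flatten to one event per row, sort chronologically (stable keeps
# start-before-end ordering for ties), then run the +1/-1 cumulative sum
events = df.melt(var_name='status', value_name='time')
events = events.sort_values('time', kind='stable')
events['counter'] = events['status'].map({'start': 1, 'end': -1}).cumsum()
```

The counter column comes out as <code>[1, 2, 1, 0]</code>: one interval active at 00:47, two at 00:52, back to one at 00:54, none after 01:03.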
|
python|pandas|datetime|count
| 6
|
4,548
| 60,158,162
|
Convolution 3D image with 2D filter
|
<p>I have a single image of shape <code>img.shape = (500, 439, 3)</code></p>
<p>The convolution function is</p>
<pre><code>def convolution(image, kernel, stride=1, pad=0):
n_h, n_w, _ = image.shape
f = kernel.shape[0]
kernel = np.repeat(kernel[None,:], 3, axis=0)
n_H = int(((n_h + (2*pad) - f) / stride) + 1)
n_W = int(((n_w + (2*pad) - f) / stride) + 1)
n_C = 1
out = np.zeros((n_H, n_W, n_C))
for h in range(n_H):
vert_start = h*stride
vert_end = h*stride + f
for w in range(n_W):
horiz_start = w*stride
horiz_end = w*stride + f
for c in range(n_C):
a_slice_prev = image[vert_start:vert_end,
horiz_start:horiz_end, :]
s = np.multiply(a_slice_prev, kernel)
out[h, w, c] = np.sum(s, dtype=float)
return out
</code></pre>
<p>I want to see the image after any kernel/filter applied to the image, so I got the following</p>
<pre><code>img = plt.imread('cat.png')
kernel = np.arange(25).reshape((5, 5))
out2 = convolution(img, kernel)
plt.imshow(out2)
plt.show()
</code></pre>
<p>I get</p>
<blockquote>
<p>s = np.multiply(a_slice_prev, kernel)</p>
<p>ValueError: operands could not be broadcast together with shapes
(5,5,3) (3,5,5)</p>
</blockquote>
|
<p><code>np.multiply</code> performs an elementwise multiplication, but your arguments do not have matching dimensions. You could transpose your kernel (or image) so the shapes align:</p>
<pre><code>kernel = kernel.transpose()
</code></pre>
<p>You could do this prior to your <code>np.multiply</code> call. </p>
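<p>A quick numpy sketch of why this works, using an all-ones stand-in for the image patch (a hypothetical slice, not the asker's actual image):</p>

```python
import numpy as np

# Hypothetical 5x5x3 patch of an all-ones image, plus the question's kernel
patch = np.ones((5, 5, 3))
kernel = np.repeat(np.arange(25).reshape(5, 5)[None, :], 3, axis=0)  # (3, 5, 5)

# transpose() reverses the axis order: (3, 5, 5) -> (5, 5, 3), matching the patch
s = np.multiply(patch, kernel.transpose())
```

With the all-ones patch, <code>s.sum()</code> is simply three copies of <code>sum(range(25))</code>, i.e. 900.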
|
python|numpy|convolution
| 2
|
4,549
| 65,453,509
|
Make an attendance student report using pandas
|
<p>I have some <code>csv</code> table from google form for attendance report. The data looks like this</p>
<pre><code>df1= pd.read_csv("12-9-2020.csv")
df1
Name StudentID
Robert C 102
Jessica Myla 103
Nana D 105
df2= pd.read_csv("12-10-2020.csv")
df2
Name StudentID
J Myla 103
Harris Kurt 104
Nana Duncan 105
</code></pre>
<p>I have many tables from which I want to make a compilation attendance report. The basic compilation attendance report looks like this:</p>
<pre><code>df_Basic
Name StudentID 12/9/2020 12/10/2020
Robert Case 102 0 0
Jessica Myla 103 0 0
Harris Kurt 104 0 0
Nana Duncan 105 0 0
</code></pre>
<p>I want to input the data from <code>df1, df2</code> into the compilation attendance report. If a student attended class, the value should be 1, and the spelling of the student's name should match the compilation attendance report format.</p>
<p>The desired result looks like this:</p>
<pre><code>df_Result
Name StudentID 12/9/2020 12/10/2020
Robert Case 102 1 0
Jessica Myla 103 1 1
Harris Kurt 104 0 1
Nana Duncan 105 1 1
</code></pre>
<p>Thank you for helping me</p>
|
<p>Here is a solution for two dataframes:</p>
<pre><code>df1.set_index('StudentID', inplace=True)
df1.loc[:, '12-9-2020.csv'] = 1
df2.set_index('StudentID', inplace=True)
df2.loc[:, '12-10-2020.csv'] = 1
df1 = df1.join(df2, how='outer', rsuffix='_')
df1['Name'] = df1['Name'].combine_first(df1['Name_'])
df1.drop('Name_', axis=1, inplace=True)
df1.fillna(0).reset_index()
</code></pre>
<p>For more dataframes, repeat the lines 3-7, as needed.</p>
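<p>The repeat-for-more-dataframes step can be folded into a loop. Here is a sketch of the same join pattern generalized to any number of daily frames; the in-memory <code>frames</code> dict stands in for reading each CSV:</p>

```python
import pandas as pd

# Each entry plays the role of one day's CSV (Name + StudentID columns)
frames = {
    '12/9/2020': pd.DataFrame({'Name': ['Robert C', 'J Myla'],
                               'StudentID': [102, 103]}),
    '12/10/2020': pd.DataFrame({'Name': ['J Myla', 'Harris Kurt'],
                                'StudentID': [103, 104]}),
}

result = None
for date, day in frames.items():
    day = day.set_index('StudentID')
    day[date] = 1                      # mark attendance for this date
    if result is None:
        result = day
    else:
        result = result.join(day, how='outer', rsuffix='_')
        # keep whichever Name is present, then drop the duplicate column
        result['Name'] = result['Name'].combine_first(result['Name_'])
        result = result.drop('Name_', axis=1)

result = result.fillna(0).reset_index()
```

Students missing on a given day get 0 for that date's column, and every student seen in any file appears once in the final report.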
|
python|pandas|dataframe
| 2
|
4,550
| 65,157,759
|
Filling missing values with increasing values in pandas
|
<p>I have a dataframe, which contains tool <em>id</em> and <em>time</em>.</p>
<p>For the last date I have tool <em>counter</em> values, and I need to fill up missing <em>counter</em> values in the dataframe by subtracting <em>1</em> from the <em>counter</em> for each time the <em>id</em> has been used at a particular <em>date</em>.</p>
<pre><code>data = {"id":["01","02","03","04","05",
"02","02","03","05","04",
"03","05","01","05","04",],
"counter": [100,200,300,400,500,
np.nan,np.nan,np.nan,np.nan,np.nan,
np.nan,np.nan,np.nan,np.nan,np.nan],
"date": ["2020-02-04","2020-02-04","2020-02-04","2020-02-04","2020-02-04",
"2020-02-02","2020-02-02","2020-02-02","2020-02-02","2020-02-02",
"2020-02-03","2020-02-03","2020-02-03","2020-02-03","2020-02-03"]}
df = pd.DataFrame(data)
df_sort = df.sort_values(by=["id","date"], ascending = False)
</code></pre>
<p>Dataframe looks like this:</p>
<pre><code> id counter date
4 05 500.0 2020-02-04
11 05 NaN 2020-02-03
13 05 NaN 2020-02-03
8 05 NaN 2020-02-02
3 04 400.0 2020-02-04
14 04 NaN 2020-02-03
9 04 NaN 2020-02-02
2 03 300.0 2020-02-04
10 03 NaN 2020-02-03
7 03 NaN 2020-02-02
1 02 200.0 2020-02-04
5 02 NaN 2020-02-02
6 02 NaN 2020-02-02
0 01 100.0 2020-02-04
12 01 NaN 2020-02-03
</code></pre>
<p>Desired outcome would be like this:</p>
<pre><code> id counter date
4 05 500.0 2020-02-04
11 05 501 2020-02-03
13 05 502 2020-02-03
8 05 503 2020-02-02
3 04 400.0 2020-02-04
14 04 401 2020-02-03
9 04 402 2020-02-02
2 03 300.0 2020-02-04
10 03 301 2020-02-03
7 03 302 2020-02-02
1 02 200.0 2020-02-04
5 02 201 2020-02-02
6 02 202 2020-02-02
0 01 100.0 2020-02-04
12 01 101 2020-02-03
</code></pre>
<p>Please help me figure out how I can do this. Thanks!</p>
|
<p>You can <code>groupby</code> the dataframe <code>df_sort</code> on <code>id</code> then forward fill the <code>counter</code> values per group using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.ffill.html" rel="nofollow noreferrer"><code>ffill</code></a> and add them with a sequential counter created using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>groupby.cumcount</code></a>:</p>
<pre><code>g = df_sort.groupby('id')
df_sort['counter'] = g['counter'].ffill() + g.cumcount()
</code></pre>
<hr />
<pre><code>print(df_sort)
id counter date
4 05 500.0 2020-02-04
11 05 501.0 2020-02-03
13 05 502.0 2020-02-03
8 05 503.0 2020-02-02
3 04 400.0 2020-02-04
14 04 401.0 2020-02-03
9 04 402.0 2020-02-02
2 03 300.0 2020-02-04
10 03 301.0 2020-02-03
7 03 302.0 2020-02-02
1 02 200.0 2020-02-04
5 02 201.0 2020-02-02
6 02 202.0 2020-02-02
0 01 100.0 2020-02-04
12 01 101.0 2020-02-03
</code></pre>
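<p>To see what <code>cumcount</code> contributes on its own: it produces the 0-based position within each group, which supplies the +0, +1, +2 increments added to the forward-filled base value. A tiny sketch:</p>

```python
import pandas as pd

# Three uses of tool '05' and two of tool '04', already sorted as above
df = pd.DataFrame({'id': ['05', '05', '05', '04', '04']})
seq = df.groupby('id').cumcount()
```

Here <code>seq</code> is <code>[0, 1, 2, 0, 1]</code>, so adding it to the forward-filled counter yields 500, 501, 502 for '05' and 400, 401 for '04'.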
|
python|pandas|dataframe
| 2
|
4,551
| 49,950,186
|
Can I change drop out rate when use the call method in tf.nn.rnn_cell.MultiRNNCell
|
<p>Here is how I define the MultiRNNCell:</p>
<pre><code> n_lstm_cells = [tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.LSTMCell(hs),
output_keep_prob=1-dropout_ph,
variational_recurrent=True,
dtype=tf.float32) for hs in n_layer_sizes]
n_multi_rnn_cell = tf.nn.rnn_cell.MultiRNNCell(n_lstm_cells)
self.n_multi_rnn_cell = n_multi_rnn_cell
</code></pre>
<p>The <code>dropout_ph</code> is a placeholder.
When I use the <code>call</code> method of MultiRNNCell, can I change the dropout rate?
Here is an example how I use the call method when I predicting:</p>
<pre><code>note_output, new_state = self.n_multi_rnn_cell.call(inputs=indata,state=hidden)
</code></pre>
<p>The reason I did this instead of just using <code>tf.nn.dynamic_rnn()</code> is that, when predicting, I need the output of each step and then feed this output to another RNN structure to get the final output, which serves as the input to the next step of <code>self.n_multi_rnn_cell</code>. The process also includes some other operations I defined myself. I use <code>tf.scan()</code> to do the looping.<br>
Is there a way to treat the dropout rate as one of the inputs of the <code>call</code> method?</p>
|
<p>I solved it by creating another MultiRNNCell with shared (reused) weights in a certain <code>tf.variable_scope</code>. When using the call method, we need to set the <code>tf.variable_scope</code> to <code>'[YOUR SCOPE]/rnn/multi_rnn_cell'</code>.</p>
|
python|tensorflow|rnn
| 1
|
4,552
| 50,050,617
|
Assign Unique Numeric Group IDs to Groups in Pandas
|
<p>I've consistently run into this issue of having to assign a unique ID to each group in a data set. I've used this when zero padding for RNN's, generating graphs, and many other occasions. </p>
<p>This can usually be done by concatenating the values in each <code>pd.groupby</code> column. However, it is often the case the number of columns that define a group, their dtype, or the value sizes make concatenation an impractical solution that needlessly uses up memory. </p>
<p>I was wondering if there was an easy way to assign a unique numeric ID to groups in pandas. </p>
|
<p>You just need <code>ngroup</code>, as seeiespi noted (or <code>pd.factorize</code>):</p>
<pre><code>df.groupby('C').ngroup()
Out[322]:
0 0
1 0
2 2
3 1
4 1
5 1
6 1
7 2
8 2
dtype: int64
</code></pre>
<p>More Option</p>
<pre><code>pd.factorize(df.C)[0]
Out[323]: array([0, 0, 1, 2, 2, 2, 2, 1, 1], dtype=int64)
df.C.astype('category').cat.codes
Out[324]:
0 0
1 0
2 2
3 1
4 1
5 1
6 1
7 2
8 2
dtype: int8
</code></pre>
|
python|pandas|pandas-groupby
| 29
|
4,553
| 50,071,518
|
What is the proper way of selecting date ranges in Pandas multi-indexes?
|
<p><strong>What is the proper way of selecting date ranges in Pandas multi-indexes?</strong></p>
<p>I've got a multi-index dataframe, that looks like the following:</p>
<p><a href="https://i.stack.imgur.com/tECvB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tECvB.png" alt="enter image description here"></a></p>
<p>If I wish to select a particular day, it's trivial using <code>xs</code>:</p>
<pre><code>data.xs('2011-11-11', level='Date').head()
</code></pre>
<p>However if I wish to select a date range, I cannot. All the the following give me an <code>Invalid Syntax</code> error: </p>
<pre><code>data.xs('2011-10-10':'2011-11-11', level='Date').head()
data.xs(['2011-10-10':'2011-11-11'], level='Date').head()
</code></pre>
<p><em><strong>Note #1</strong>: I am looking for a way to use elegant Pandas functionality. Naturally it's easy enough to hack around the problem using 4 or 5 lines of code, the question is about what the "right way" is.</em></p>
<p><em><strong>Note #2</strong>: I've seen <a href="https://stackoverflow.com/questions/44876664/how-to-filter-dates-on-multiindex-dataframe?utm_medium=organic&utm_source=google_rich_qa&utm_campaign=google_rich_qa">this answer</a>, but that didn't cover this case.</em></p>
|
<p>Using data from previous question:</p>
<pre><code>import pandas as pd
from pandas import Timestamp

d = {'Col1': {(Timestamp('2015-05-14 00:00:00'), '10'): 81.370003,
(Timestamp('2015-05-14 00:00:00'), '11'): 80.41999799999999,
(Timestamp('2015-05-14 00:00:00'), 'C3'): 80.879997,
(Timestamp('2015-05-19 00:00:00'), '3'): 80.629997,
(Timestamp('2015-05-19 00:00:00'), 'S9'): 80.550003,
(Timestamp('2015-05-21 00:00:00'), '19'): 80.480003,
(Timestamp('2015-05-22 00:00:00'), 'C3'): 80.540001},
'Col2': {(Timestamp('2015-05-14 00:00:00'), '10'): 6.11282,
(Timestamp('2015-05-14 00:00:00'), '11'): 6.0338,
(Timestamp('2015-05-14 00:00:00'), 'C3'): 6.00746,
(Timestamp('2015-05-19 00:00:00'), '3'): 6.10465,
(Timestamp('2015-05-19 00:00:00'), 'S9'): 6.1437,
(Timestamp('2015-05-21 00:00:00'), '19'): 6.16096,
(Timestamp('2015-05-22 00:00:00'), 'C3'): 6.1391599999999995},
'Col3': {(Timestamp('2015-05-14 00:00:00'), '10'): 39.753,
(Timestamp('2015-05-14 00:00:00'), '11'): 39.289,
(Timestamp('2015-05-14 00:00:00'), 'C3'): 41.248999999999995,
(Timestamp('2015-05-19 00:00:00'), '3'): 41.047,
(Timestamp('2015-05-19 00:00:00'), 'S9'): 41.636,
(Timestamp('2015-05-21 00:00:00'), '19'): 42.137,
(Timestamp('2015-05-22 00:00:00'), 'C3'): 42.178999999999995},
'Col4': {(Timestamp('2015-05-14 00:00:00'), '10'): 44.950001,
(Timestamp('2015-05-14 00:00:00'), '11'): 44.75,
(Timestamp('2015-05-14 00:00:00'), 'C3'): 44.360001000000004,
(Timestamp('2015-05-19 00:00:00'), '3'): 40.98,
(Timestamp('2015-05-19 00:00:00'), 'S9'): 42.790001000000004,
(Timestamp('2015-05-21 00:00:00'), '19'): 43.68,
(Timestamp('2015-05-22 00:00:00'), 'C3'): 43.490002000000004}}
df = pd.DataFrame(d)
</code></pre>
<p>Then you can use <a href="https://pandas.pydata.org/pandas-docs/stable/timeseries.html#partial-string-indexing" rel="nofollow noreferrer">partial string indexing</a> to select a range of dates:</p>
<pre><code>df.loc['2015-05-14':'2015-05-19']
</code></pre>
<p>Output:</p>
<pre><code> Col1 Col2 Col3 Col4
2015-05-14 10 81.370003 6.11282 39.753 44.950001
11 80.419998 6.03380 39.289 44.750000
C3 80.879997 6.00746 41.249 44.360001
2015-05-19 3 80.629997 6.10465 41.047 40.980000
S9 80.550003 6.14370 41.636 42.790001
</code></pre>
|
python|pandas|datetime|time-series|multi-index
| 2
|
4,554
| 50,106,021
|
Adding a column of repeating values to dataframe
|
<p>I have some quarter level data for finance deals, so a pretty big dataset. I now want to add the following values to a new column repeated over and over:</p>
<pre><code>[-12,-11,-10,-9,-8,-7,-6,-5,-4,-3,-2,-1,0,1,2,3,4,5,6,7,8,9,10,11,12]
</code></pre>
<p>The column should then look something like this:</p>
<pre><code>A
-12
-11
-10
...
11
12
-12
-11
...
11
12
</code></pre>
<p>So basically just that list repeating over and over until the last row of my Dataframe. I hope this question is clear enough. </p>
|
<p>Try this:</p>
<pre><code>N = len(df)
df['A'] = pd.Series(np.tile(lst, N//len(lst))).iloc[:N]
</code></pre>
|
python|pandas|dataframe
| 3
|
4,555
| 46,761,348
|
Change the value of `row`
|
<pre><code>def multiple_dfs(item, sheets, *args):
"""
Put multiple dataframes into one xlsx sheet
"""
writer, row = args[:2]
response = send_request(item).content
df = pd.read_csv(io.StringIO(response.decode('utf-8')))
df.to_excel(writer, sheets, startrow=row, index=False)
row += len(df.index) + 2
def create_and_update_worksheets():
"""
Add 'Player statistics' if the worksheet is not in file_name.
Otherwise, it will update the worksheet itself.
"""
os.chdir(os.path.dirname(os.path.abspath(__file__)))
writer = pd.ExcelWriter(file_name, engine='openpyxl')
for key, value in worksheets:
--> row = 0
if isinstance(value, dict):
values = value.values()
for item in values:
--> multiple_dfs(item, key, writer, row)
else:
multiple_dfs(value, key, writer, row)
for sheet in writer.sheets.values():
resize_columns(sheet)
writer.save()
writer.close()
</code></pre>
<p>I have two arrows in the <code>create_and_update_worksheets</code> function. Why is <code>row</code> always equal to zero in the for loop at the second arrow? It should enter the <code>multiple_dfs</code> function and change the value of <code>row</code>. I tried putting it in different places in the code, but nothing changed. I am out of ideas.</p>
|
<p>You can't change the caller's variable by rebinding an argument like that; return the new value instead:</p>
<pre><code> ...
...
return len(df.index) + 2
...
...
row = multiple_dfs(item, key, writer, row)
</code></pre>
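<p>A minimal illustration of why <code>row</code> stays 0: integers are immutable, and rebinding a parameter inside a function never affects the caller's variable. (The <code>bump</code> helper is hypothetical, standing in for <code>multiple_dfs</code>.)</p>

```python
def bump(row):
    row += 2      # rebinds the local name only
    return row

row = 0
bump(row)         # the caller's row is still 0 after this call
row = bump(row)   # capturing the return value is what updates it
```

The same applies to <code>multiple_dfs</code>: make it return <code>row + len(df.index) + 2</code> and assign the result back at the call site.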
|
python|pandas
| 0
|
4,556
| 67,885,893
|
Mixture of Multi-Level columns & Regular Columns
|
<p>It is straightforward to create Pandas dataframes with Multi-level columns like so:</p>
<pre><code>import numpy as np
import pandas as pd
dat = np.random.randn(5, 4)
header = pd.MultiIndex.from_product([['Truck','Car'],
['Speed','Position']],
names=['',''])
df2 = pd.DataFrame(dat, columns=header)
</code></pre>
<p>to get</p>
<p><a href="https://i.stack.imgur.com/qy0mh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qy0mh.jpg" alt="" /></a></p>
<p>However, I require a table such as the one below (addition of the Age column). Is this possible using Pandas?</p>
<p><a href="https://i.stack.imgur.com/bTS4B.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bTS4B.jpg" alt="" /></a></p>
|
<p>A <code>MultiIndex</code> is always needed, but it is possible to use an empty string for one level, like:</p>
<pre><code>df2[('', 'Age')] = np.random.randn(5)
print (df2)
Truck Car
Speed Position Speed Position Age
0 1.224236 -0.545658 0.906748 -0.982617 -0.654448
1 -0.633162 0.825520 -1.284497 -0.347309 -0.672104
2 -1.077761 0.972575 0.412191 -0.132086 0.870368
3 -0.673351 1.222222 -0.926413 1.424994 1.003245
4 -0.124790 0.705492 0.719548 0.141464 -0.366450
print (df2.columns)
MultiIndex([('Truck', 'Speed'),
('Truck', 'Position'),
( 'Car', 'Speed'),
( 'Car', 'Position'),
( '', 'Age')],
)
</code></pre>
<p>If you assign a single column name, recent pandas versions convert the second level to an empty string:</p>
<pre><code>df2['Age'] = np.random.randn(5)
print (df2)
Truck Car Age
Speed Position Speed Position
0 1.128052 0.792584 -1.750842 -0.808869 -1.330033
1 -1.412602 -0.803010 0.798280 1.755996 1.261033
2 -0.075504 0.420177 0.156556 -0.056861 -0.648126
3 -0.538234 0.901387 0.224944 1.277788 2.245300
4 0.629269 0.361891 3.638726 -1.201221 -1.012394
print (df2.columns)
MultiIndex([('Truck', 'Speed'),
('Truck', 'Position'),
( 'Car', 'Speed'),
( 'Car', 'Position'),
( 'Age', '')],
)
</code></pre>
|
python|pandas
| 1
|
4,557
| 67,883,437
|
Get weekdays as column based on index
|
<p>I tried this code to get the following output:</p>
<pre><code>idx = pd.date_range("2021-06-08", periods=3, freq="D")
ts = pd.Series(['Tuesday', 'Wednesday', 'Thursday'], index=idx)
ts
2021-06-08 Tuesday
2021-06-09 Wednesday
2021-06-10 Thursday
Freq: D, dtype: object
</code></pre>
<p>But I don't want to pass a list of days. I want it to extract the weekday from the index itself. I tried the following code, but it gives me an error:</p>
<pre><code>idx = pd.date_range("2021-06-08", periods=3, freq="D")
ts = pd.Series((x for x in idx.weekday()), index=idx)
</code></pre>
<p>Does anyone have any idea how to get this? Please suggest what needs to be changed.</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DatetimeIndex.day_name.html" rel="nofollow noreferrer"><code>DatetimeIndex.day_name</code></a>:</p>
<pre><code>idx = pd.date_range("2021-06-08", periods=3, freq="D")
ts = pd.Series(idx.day_name(), index=idx)
print (ts)
2021-06-08 Tuesday
2021-06-09 Wednesday
2021-06-10 Thursday
Freq: D, dtype: object
</code></pre>
|
python|pandas
| 1
|
4,558
| 61,537,558
|
Pytorch:1.2.0 - AttributeError: 'Conv2d' object has no attribute 'weight'
|
<p>I am trying to initialise the following weights the following way:</p>
<pre><code>def _initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv2d):
nn.init.kaiming_normal_(m.weight)
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.BatchNorm2d):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
</code></pre>
<p>This gives me the Attribute error. nn.Conv2d definitely has an attribute named 'weight', Not sure why I am getting this error.</p>
|
<p>I think the attribute you're looking for is <code>weights</code>, not <code>weight</code>. (<em>Plural form instead of singular)</em>
You can visit this link for more information.
<a href="https://faroit.com/keras-docs/1.2.2/layers/convolutional/#convolution2d" rel="nofollow noreferrer">https://faroit.com/keras-docs/1.2.2/layers/convolutional/#convolution2d</a></p>
|
initialization|pytorch
| 0
|
4,559
| 61,408,653
|
How can I show data related to a max value from a dataset with pandas?
|
<p>I have this dataframe where I have more than one column and I want to know additional data to the maximum value of one column</p>
<p>For example, given the following code, I want to show the country where the number is the highest per year per cause. What I did was:</p>
<pre><code>var=data.groupby(["Year","Causes"])["number"].max()
</code></pre>
<p>But this only shows the max value for each of the years and each of the causes. <strong>I would like to know which country is the one associated with the max value from the number.</strong></p>
<hr>
<ol>
<li>This code shows the highest number per cause per year, but I need to
show the country associated with the highest number per cause per
year</li>
</ol>
<hr>
<p>I tried using idxmax() instead of max() but it did not work</p>
|
<p>There's probably a more efficient way, but if I've understood what you'd like to achieve and your data structure, then this works:</p>
<pre><code>var=data.groupby(["Year","Causes"])["number"].max()
var = pd.DataFrame(var)
new = var.merge(data, how='inner').drop_duplicates()
new
</code></pre>
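<p>The <code>idxmax()</code> route the question mentions can also work when applied to the grouped Series: it returns the row label of each group's maximum, which <code>loc</code> can then use to pull back the full rows, country included. A sketch on made-up data (assuming <code>number</code> has no NaNs):</p>

```python
import pandas as pd

data = pd.DataFrame({
    'Year':    [2000, 2000, 2001],
    'Causes':  ['x', 'x', 'y'],
    'Country': ['A', 'B', 'C'],
    'number':  [5, 9, 3],
})

# idxmax gives the original index label of each group's max;
# loc recovers the whole row, Country included
rows = data.loc[data.groupby(['Year', 'Causes'])['number'].idxmax()]
```

This avoids the merge/drop_duplicates round trip, at the cost of failing on groups where <code>number</code> is all NaN.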
|
python|pandas|pandas-groupby
| 0
|
4,560
| 61,209,145
|
Transform schedule to comprehensive report with Python
|
<p>I'm trying to turn a schedule with the following format into a report format. </p>
<p>Currently the data is stored as follows:</p>
<pre><code>Person Name Jun 1 Jun 2 Jun 3 Jun 4 Jun 5 Jun 6 Jun 7 Jun 8 Jun 9 Jun 10 ...
John Smith X X X X O O O X X X ...
Aaron Roberts O O X X X X O O O O ...
Jess Lewis O O O O X X X X X X ...
Edgar Blue X X X X X O O O O X ...
Lara Irvin X X O O O O X X X X ...
</code></pre>
<p>The X represent days that they're "ON" and the O represent days that they're off.</p>
<p>What I want is to run a python script that summarizes the schedule in a report of this format:</p>
<pre><code>Person Name From: To:
John Smith Jun 1 Jun 4
John Smith Jun 8 Jun 10
Aaron Roberts Jun 3 Jun 6
Jess Lewis Jun 5 Jun 10
Edgar Blue Jun 1 Jun 5
Edgar Blue Jun 10 Jun 13
Lara Irvin Jun 1 Jun 2
Lara Irvin Jun 7 Jun 10
</code></pre>
<p>What I've tried is to create a unique names list</p>
<pre><code>names = ["John Smith", "Aaron Roberts", "Jess Lewis", "Edgar Blue", "Lara Irvin"]
</code></pre>
<p>And then do </p>
<pre><code>for name in names:
df.iloc["Person Name"] == name
</code></pre>
<p>Then I'm not sure how to proceed. I'm trying to check whether the i position is an X or an O, then whether the i-1 position is an X or an O</p>
<p>Then...</p>
<pre><code>if i == "X" && i-1 == "O"
</code></pre>
<p>Populate that header date in the other table's "From:" column</p>
<p>and...</p>
<pre><code>if == "O" && i-1 == "X"
</code></pre>
<p>Populate that header date in the other table's "To:" column</p>
<p>Then repeat the same process using a nested for loop across all "Person Name" and all dates.</p>
<p>Thank you in advance for any help.</p>
|
<p>This uses <code>cumsum</code> to create the subgroups, then we stack and use <code>groupby</code> with <code>agg</code>:</p>
<pre><code>df=df.set_index('PersonName')
s1=df.eq('O').cumsum(1).stack().reset_index()
s=s1[df.stack().ne('O').values].groupby(['PersonName',0])['level_1'].agg(['first','last']).reset_index(level=1,drop=True)
s
first last
PersonName
AaronRoberts Jun3 Jun6
EdgarBlue Jun1 Jun5
EdgarBlue Jun10 Jun10
JessLewis Jun5 Jun10
JohnSmith Jun1 Jun4
JohnSmith Jun8 Jun10
LaraIrvin Jun1 Jun2
LaraIrvin Jun7 Jun10
</code></pre>
|
python|pandas|dataframe|for-loop
| 2
|
4,561
| 61,529,235
|
How to run tflite model not on image classification swift
|
<p>I have seen several tutorials on how to run a tflite model for image classification, but don’t know how to do it for any other application... For example, I have a model that takes in audio data in the form of a (16000, 1) array. How can I pass this array into the tflite model?</p>
|
<p><code>TensorFlow Lite</code> provides all the tools we need to convert and run <code>TensorFlow models</code> on mobile, embedded, and IoT devices. </p>
<p>To use a <code>model</code> with <code>TensorFlow Lite</code>, we must convert a full <code>TensorFlow model</code> into the <code>TensorFlow Lite</code> format. We cannot create or train a model using <code>TensorFlow Lite</code>. So we must start with a regular <code>TensorFlow model</code>, and then <a href="https://www.tensorflow.org/lite/guide/get_started#2_convert_the_model_format" rel="nofollow noreferrer">convert the model</a>.</p>
<p>See full list of pre-trained models which are ready to use in applications : <a href="https://www.tensorflow.org/lite/models" rel="nofollow noreferrer">in Models</a>.</p>
<p>If we have designed and trained our own TensorFlow model, or we have trained a model obtained from another source, we must convert it to the TensorFlow Lite format.</p>
<p><strong>Reference :</strong> <a href="https://www.tensorflow.org/lite/guide/get_started" rel="nofollow noreferrer">TensorFlow Lite</a></p>
|
swift|xcode|tensorflow|audio|tensorflow-lite
| 0
|
4,562
| 68,477,081
|
With Pandas, how do I use to_sql to insert a cell with lists into a Postgresql database?
|
<p>I'm stuck on how to insert a column that contains lists into a PostgreSQL database. I know it is theoretically possible, because datatypes like BIGINT[] exist in Postgres, whereas they don't in other SQL variants.</p>
<p>Here is my code:</p>
<pre><code>import datetime
import json
import pandas as pd
import pymysql.cursors
from sqlalchemy import create_engine
mock_data = {}
mock_data['a'] = 'HELLO'
mock_data['b'] = {
'c' : {
'd' : True,
# NOTE: If you replace this with 'e' : '', instead of the list, it works fine.
'e' : []
},
'f' : 'TESTING'
}
df = pd.json_normalize(mock_data)
engine = create_engine("postgresql://postgres:ABCDEFG@localhost:5432/testing")
con = engine.connect()
table_name = 'testing-db'
try:
frame = df.to_sql(con=con, name=table_name, index=False, if_exists='replace')
display(frame)
except ValueError as vx:
print(vx)
except Exception as ex:
print(ex)
else:
print("Table %s created successfully."%table_name);
finally:
    con.close()
</code></pre>
<p>The code above fails due to <code>'e' : []</code>. Python/Pandas doesn't report a failure, but I can't see the table being updated in Postgres. However, if you change the list to an empty string, like this: <code>'e' : ''</code>,
the Postgres database is updated. I can't figure out how to insert a list into a Postgres database with Pandas. Any help would be much appreciated.</p>
|
<p>You cannot insert a Python list into a standard SQL database cell, as it breaks normalization.</p>
<p>What you can do instead is:</p>
<ol>
<li>Convert your list to a json string object (or XML)</li>
<li>Create a new table that has the elements in individual cells and refers to original table.</li>
</ol>
<p>For example: If you have a list field for which the length is fixed</p>
<pre><code>list_field = [Element_A, Element_B, Element_C]
</code></pre>
<p>You can create another table list_data that has 4 columns (1 primary key column and 3 data columns). Store your list data here and use it to refer back to your original table. This gives you a far more efficient handle on the data values and is the traditional way of doing so.</p>
<p>However, if you have variable length list_data, it is far more efficient to just dump it in a json and store it as a json or string object. But remember doing so will mean that you will have to pre-process the response on each fetch to get the data you want.</p>
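<p>A stdlib-only sketch of the JSON-string option: serialize any list-valued fields before handing the row to the database, and parse them back on fetch. (The <code>record</code> dict is a made-up stand-in for one row of the normalized frame.)</p>

```python
import json

record = {'a': 'HELLO', 'd': True, 'e': [1, 2, 3]}

# Serialize only the list-valued fields; scalars pass through untouched
serialized = {k: json.dumps(v) if isinstance(v, list) else v
              for k, v in record.items()}

# On fetch, the string round-trips back to a Python list
restored = json.loads(serialized['e'])
```

With pandas this would be applied column-wise (e.g. via <code>df[col].apply(...)</code>) before calling <code>to_sql</code>, so every cell is a plain scalar the driver can store.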
|
python|pandas|postgresql
| 0
|
4,563
| 53,035,762
|
Creating new column based on value in another column
|
<p>I'm trying to create new feature columns based on values in a different column. So I have a column with comments, and if they contain a url address, I want to output 1 to the new column, or else output 0, so it would be a binary feature creation.</p>
<pre><code>Text Contains_Url
Buy round lot on the open MT @WSJD #AAPL 1
stock briefly dove 6.4% today. Analysts
not sure why https://blogs.wsj.com/moneybeat/
2014/12/01/apple-crash-catches-wall-street-off-guard/
@apple Contact sync between Yosemite and iOS8 is 0
seriously screwed up. It used to be much more stable
in the past. #icloud #isync
</code></pre>
<p>So there would be rows like this and I would like to create a new column in the dataframe with 1 or 0 based on the text column if it has a url or not. Just to check the number of tweets with urls compared to the rest of the dataset, I did</p>
<pre><code>data.shape
(3804, 12)
data[data.text.str.contains("http")].shape
(2130, 12)
</code></pre>
<p>So it shows accurately the number of rows that have a url. My idea was to create a function where I can do this, and apply it using lambda</p>
<pre><code>def contains_url(row):
if data[data.text.str.contains("http")]:
return 1
else:
return 0
data['contains_url'] = data.apply (lambda row: contains_url(row),axis=1)
ValueError: ('The truth value of a DataFrame is ambiguous. Use a.empty,
a.bool(), a.item(), a.any() or a.all().', 'occurred at index 0')
</code></pre>
<p>But doing that is giving me this error above. Any help would be appreciated. Thanks!</p>
|
<p>I think you can do this much more efficiently without <code>apply</code>, simply by using the boolean value resulting from <code>str.contains('http')</code>, and casting it to <code>int</code>:</p>
<pre><code>data['contains_url'] = data['Text'].str.contains('http').astype(int)
</code></pre>
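<p>A quick check of the one-liner on two toy comments modeled on the question's samples:</p>

```python
import pandas as pd

s = pd.Series(['stock dove 6.4% today https://blogs.wsj.com/moneybeat',
               'Contact sync between Yosemite and iOS8 is screwed up'])

# Boolean mask cast straight to 0/1 -- no apply, no per-row function
flags = s.str.contains('http').astype(int)
```

This is vectorized over the whole column, so it scales to the full 3804-row frame without the per-row overhead of <code>apply</code>.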
|
python|python-3.x|pandas|numpy
| 1
|
4,564
| 65,850,595
|
Remove rows in a csv file based on the format of column value
|
<p>I have a csv file which contains three columns - computer_name, software_code, software_update_date. The file contains computers that I don't need in my final report. I only need the data for computers whose name starts with 40- , 46- or 98-. Here is the sample file:</p>
<pre><code>computer_name software_code software_update_date
07-0708 436 2019-02-07 0:00
30-0207 35170 2021-01-18 0:00
40-0049 41 2017-06-21 23:00
46-0001 11 2013-11-23 0:00
</code></pre>
<p>So I would like to delete rows 07-0708 and 30-0207. I tried with pandas but the generated file is exactly the same with no error message. I am quite new to python and still grasping the concepts. I wrote the below code:</p>
<pre><code>import csv
import pandas as pd
fname = 'RAWfile.csv'
df=pd.read_csv(fname,encoding='ISO-8859-1')
#Renaming columns from the report
df.rename(columns = {'computer_name':'PC_NO', 'software_code':'SOFT_CODE', 'software_update_date':'UPDATE_DATE'}, inplace=True)
computers = ['40-','46-','98-']
searchstr = '|'.join(computers)
df[df['PC_NO'].str.contains(searchstr)]
df.to_csv('updatedfile.csv',index=False,quoting=csv.QUOTE_ALL,line_terminator='\n')
</code></pre>
<p>UPDATE: There are almost 70,000 rows in the csv file. Corrected the values in computers list to match the question.</p>
|
<p>You can try this,</p>
<pre><code># strings to be matched at the start of each computer name
search = ("40-", "46-", "98-")
# boolean Series; na=False treats missing names as non-matches
series = df["computer_name"].str.startswith(search, na=False)
# keep only the matching rows -- note the assignment back to df;
# df[series] on its own only displays the result and leaves df unchanged
df = df[series]
</code></pre>
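<p>A self-contained run on toy data (the rows below are taken from the question's sample) shows the effect — note the filtered frame must be assigned back, which is the step the question's own script was missing:</p>

```python
import pandas as pd

# mini version of the report from the question
df = pd.DataFrame({
    "computer_name": ["07-0708", "30-0207", "40-0049", "46-0001"],
    "software_code": [436, 35170, 41, 11],
})

search = ("40-", "46-", "98-")
series = df["computer_name"].str.startswith(search, na=False)
df = df[series]  # without this assignment the original frame stays unchanged
print(df["computer_name"].tolist())  # ['40-0049', '46-0001']
```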
|
python|pandas
| 0
|
4,565
| 63,341,708
|
How to filter columns containing dates in format yyyyMmm?
|
<p>I am working on a data that looks like that:</p>
<pre><code> unit coicop geotime 2020M07 ... 1996M04 1996M03 1996M02 1996M01
122 IA5 CP5261 AAT NaN ... 84.43 84.60 84.52 84.85
7630 IA5 CP5261 AAT NaN ... 62.60 62.72 62.66 62.91
23690 IA6 CP5261 AAT NaN ... 99.70 99.90 99.80 100.20
</code></pre>
<p>What would be the best way to filter specific years? Let's say I'd like to filter columns containing data from 2005. Or in two specific years 2010 and 2015?</p>
|
<p>You can convert all columns to datetimes after moving the first 3 into the index:</p>
<pre><code>df = df.set_index(['unit','coicop','geotime'])
df.columns = pd.to_datetime(df.columns, format='%YM%m')
print (df)
2020-07-01 1996-04-01 1996-03-01 1996-02-01 \
unit coicop geotime
IA5 CP5261 AAT NaN 84.43 84.60 84.52
AAT NaN 62.60 62.72 62.66
IA6 CP5261 AAT NaN 99.70 99.90 99.80
1996-01-01
unit coicop geotime
IA5 CP5261 AAT 84.85
AAT 62.91
IA6 CP5261 AAT 100.20
</code></pre>
<p>Then you can filter like:</p>
<pre><code>df1 = df.loc[:, df.columns.year.isin([2010, 2015])]
</code></pre>
<p>Another approach is to use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.filter.html" rel="nofollow noreferrer"><code>DataFrame.filter</code></a> with the values to match in the column names, joined by <code>|</code>:</p>
<pre><code>df1 = df.filter(regex='2010|2015|unit|coicop|geotime')
</code></pre>
|
python|pandas
| 1
|
4,566
| 53,750,143
|
Flip tensorflow values
|
<p>I have a tensor which contains only 1s and 0s, like the following:</p>
<pre><code>[0.0, 1.0, 1.0, 0.0, 1.0, 1.0]
</code></pre>
<p>What is the fastest method to "flip the bits" and output</p>
<pre><code>[1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
</code></pre>
|
<p><code>1.0-x</code> should do the trick in your case.</p>
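<p>Since the values are only 0.0 and 1.0, the subtraction flips them element-wise. The same arithmetic can be sanity-checked with NumPy (TensorFlow broadcasts <code>1.0 - x</code> over a tensor in the same way):</p>

```python
import numpy as np

x = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 1.0])
flipped = 1.0 - x  # broadcasts the scalar over every element
print(flipped)  # [1. 0. 0. 1. 0. 0.]
```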
|
tensorflow|bit|logical-operators|flip
| 2
|
4,567
| 53,409,647
|
Combining different columns with overlapping index in pandas
|
<p>I have a pandas Dataframe which looks like this:</p>
<pre><code> ABC_1 ABC_2 ABC_3 ABC_4
x y z k
NaN y NaN k
x NaN z NaN
x NaN z k
... ... ... ...
</code></pre>
<p>This is just one column <code>ABC</code> which has been split into many columns. Similarly there are other columns like <code>PQR</code> which has been split into different parts. </p>
<ul>
<li>Each column contains 100 values(including NaNs), i.e. the shape of the <code>df</code> can be considered as <code>(100,4)</code> in this case.</li>
<li>I want to combine all the four columns into a single column named <code>ABC</code> but it should contain all the values from all the four columns. <code>NaN</code> values can be removed beforehand or after concatenating so that's not a concern, although I feel that removing all <code>NaNs</code> at once after concatenating will be more efficient.</li>
</ul>
<p>In short the new column should look like this:</p>
<pre><code> ABC
x
x
x
y
y
z
z
z
k
k
k
...
</code></pre>
<p>What I tried:</p>
<p>I tried to use <code>pd.concat</code> but it didn't work as it throws a <code>duplicate index error</code>, which is obvious from the case. Now, there are ways to deal with this but I don't think it will be computationally efficient if the dataframe is quite big.</p>
<p>I tried putting all values into a single list and then assigning it to the column of a new dataframe but as I said, the dataframe can be huge and list would occupy a lot of space.</p>
<p>Can anyone please tell me how to do this efficiently? </p>
<p>Edit: There can be one more situation. It is not necessary for all the column names to follow the same pattern. For example, the above dataframe also contains columns like this</p>
<pre><code>ABC_1 ABC_2 ABC_3 ABC_4 ABC_5_patt
x y z k p
NaN y NaN k p
x NaN z NaN p
x NaN z k NaN
... ... ... ... ...
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dropna.html" rel="nofollow noreferrer"><code>dropna</code></a>, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> to remove the MultiIndex, and finally <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_frame.html" rel="nofollow noreferrer"><code>to_frame</code></a> to convert the Series to a one-column <code>DataFrame</code>:</p>
<pre><code>df = df.unstack().dropna().reset_index(drop=True).to_frame('ABC')
print (df)
ABC
0 x
1 x
2 x
3 y
4 y
5 z
6 z
7 z
8 k
9 k
10 k
</code></pre>
<p>If multiple categories are possible:</p>
<pre><code>print (df)
ABC_1 PQR_2 ABC_3 PQR_4
0 x y z k
1 NaN y NaN k
2 x NaN z NaN
3 x NaN z k
df.columns = df.columns.str.split('_', expand=True)
df = df.unstack().dropna().reset_index(level=[1,2],drop=True)
df.index = [df.groupby(level=0).cumcount(), df.index]
df = df.unstack()
print (df)
ABC PQR
0 x y
1 x y
2 x k
3 z k
4 z k
5 z NaN
</code></pre>
|
python|python-3.x|pandas|dataframe|data-analysis
| 3
|
4,568
| 53,571,236
|
'DNN' object has no attribute 'fit_generator' in ImageDataGenerator() - keras - python
|
<p>The following code is developed to identify 5 image classes using keras and python with the tensorflow backend. I have used ImageDataGenerator, but when I run it, it starts to train and after a while the following error occurs.</p>
<p>How can I solve this?</p>
<blockquote>
<p>Training Step: 127 | total loss: 0.01171 |
time: 32.772s | Adam | epoch: 005 | loss: 0.01171 - acc: 0.9971 --
iter: 1536/1550 Training Step: 128 | total loss: 0.01055 | time:
36.283s | Adam | epoch: 005 | loss: 0.01055 - acc: 0.9974 | val_loss: 3.05709 - val_acc: 0.6500 -- iter: 1550/1550
-- Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Traceback (most recent call last):</p>
<p>File "", line 1, in
runfile('D:/My Projects/FinalProject_Vr_01.2/CNN_IMGDG_stackoverflow.py', wdir='D:/My
Projects/FinalProject_Vr_01.2')</p>
<p>File
"C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py",
line 704, in runfile
execfile(filename, namespace)</p>
<p>File
"C:\Users\Asus\Anaconda3\lib\site-packages\spyder_kernels\customize\spydercustomize.py",
line 108, in execfile
exec(compile(f.read(), filename, 'exec'), namespace)</p>
<p>File "D:/My
Projects/FinalProject_Vr_01.2/CNN_IMGDG_stackoverflow.py", line 191,
in
model.fit_generator(train_generator,</p>
<p>AttributeError: 'DNN' object has no attribute 'fit_generator'</p>
</blockquote>
<pre><code>import cv2
import numpy as np
import os
from random import shuffle
from tqdm import tqdm
from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential

TRAIN_DIR = 'train'
VALID_DIR = 'validate'
TEST_DIR = 'test'
IMG_SIZE = 128
LR = 1e-3
train_samples = 1500
valdate_samples = 250
epochs = 5
batch_size = 10

MODEL_NAME = 'snakes-{}-{}.model'.format(LR, '2conv-basic')

def label_img(img):
    print("\nImage = ",img)
    print("\n",img.split('.')[-2])
    temp_name= img.split('.')[-2]
    print("\n",temp_name[:1])
    temp_name=temp_name[:1]
    word_label = temp_name
    if word_label == 'A': return [0,0,0,0,1]
    elif word_label == 'B': return [0,0,0,1,0]
    elif word_label == 'C': return [0,0,1,0,0]
    elif word_label == 'D': return [0,1,0,0,0]
    elif word_label == 'E' : return [1,0,0,0,0]

def create_train_data():
    training_data = []
    for img in tqdm(os.listdir(TRAIN_DIR)):
        label = label_img(img)
        path = os.path.join(TRAIN_DIR,img)
        img = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img,(IMG_SIZE,IMG_SIZE))
        training_data.append([np.array(img),np.array(label)])
    shuffle(training_data)
    np.save('train_data.npy', training_data)
    return training_data

def create_validate_data():
    validating_data = []
    for img in tqdm(os.listdir(VALID_DIR)):
        label = label_img(img)
        path = os.path.join(VALID_DIR,img)
        img = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img,(IMG_SIZE,IMG_SIZE))
        validating_data.append([np.array(img),np.array(label)])
    shuffle(validating_data)
    np.save('validate_data.npy', validating_data)
    return validating_data

def process_test_data():
    testing_data = []
    for img in tqdm(os.listdir(TEST_DIR)):
        path = os.path.join(TEST_DIR,img)
        img_num = img.split('.')[0]
        img = cv2.imread(path,cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img,(IMG_SIZE,IMG_SIZE))
        testing_data.append([np.array(img), img_num])
    shuffle(testing_data)
    np.save('test_data.npy', testing_data)
    return testing_data

train_data = create_train_data()
validate_data = create_validate_data()

import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
import tensorflow as tf
tf.reset_default_graph()

convnet = input_data(shape=[None, IMG_SIZE, IMG_SIZE, 1], name='input')
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 128, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 64, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = conv_2d(convnet, 32, 5, activation='relu')
convnet = max_pool_2d(convnet, 5)
convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 5, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=LR, loss='categorical_crossentropy', name='targets')

model = tflearn.DNN(convnet, tensorboard_dir='log')

if os.path.exists('{}.meta'.format(MODEL_NAME)):
    model.load(MODEL_NAME)
    print('model loaded!')

train = train_data[:]
validate = validate_data[:]

X = np.array([i[0] for i in train]).reshape(-1,IMG_SIZE,IMG_SIZE,1)
Y = [i[1] for i in train]
validate_x = np.array([i[0] for i in validate]).reshape(-1,IMG_SIZE,IMG_SIZE,1)
validate_y = [i[1] for i in validate]

model.fit({'input': X}, {'targets': Y}, n_epoch=epochs, validation_set=({'input': validate_x}, {'targets': validate_y}),
          snapshot_step=500, show_metric=True, run_id=MODEL_NAME)

train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
validation_datagen = ImageDataGenerator(rescale=1. / 255)

train_generator = train_datagen.flow_from_directory('train',
    target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=batch_size,
    class_mode='categorical')
validation_generator = validation_datagen.flow_from_directory('validate',
    target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=batch_size,
    class_mode='categorical')

model.fit_generator(train_generator,
    steps_per_epoch=25,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=25)

model.save(MODEL_NAME)
</code></pre>
|
<p>Your <code>model</code> object is an instance of the <code>tflearn.DNN</code> class, which simply does not have a <code>fit_generator</code> method — that method exists only on keras model objects, and your script mixes tflearn layers with a keras <code>ImageDataGenerator</code>. Maybe you could define your architecture in keras instead; then you would be able to use your data generators.</p>
|
python|tensorflow|machine-learning|keras
| 2
|
4,569
| 71,928,602
|
Difference between using a ML docker image and running pip install in Dockerfile
|
<p>I see there are many available Docker images for popular ML frameworks such as <a href="https://hub.docker.com/r/pytorch/pytorch" rel="nofollow noreferrer">PyTorch</a> and <a href="https://hub.docker.com/r/tensorflow/tensorflow/" rel="nofollow noreferrer">Tensorflow</a>.</p>
<p>What is the difference between using these pre-built images vs installing these libraries using <code>pip install</code> or <code>conda install</code> in the Dockerfile?</p>
<p>I usually build my custom Docker images from an <code>nvidia/cuda</code> base image which supports GPU and later run a bash command to install my <code>requirements.txt</code> file which contain the afore-mentioned libraries. Example:</p>
<pre><code>FROM nvidia/cuda:10.2-cudnn7-runtime-ubuntu18.04
...
# Activate virtual environment and install requirements
RUN /bin/bash -c "cd src \
&& source activate my_venv \
&& pip install -r requirements.txt"
</code></pre>
<p>I feel that using <code>pip install</code> gives me more liberties and allows me to choose a base image that enables GPU-usage with my favorite OS. I guess that it might have to do with performance issues.</p>
|
<p>Functionally there is no difference; a pre-built image simply saves you from misconfiguring the Docker environment or missing dependencies, ensuring a safer execution (the whole point of Docker). Your approach is fine in case you want to have full control over the image being built.</p>
|
python|docker|machine-learning|deep-learning|pytorch
| 0
|
4,570
| 55,508,210
|
Grouping and Summing in Pandas
|
<p>I have a dataframe with two columns. The first column contains <code>years</code> and the second column contains <code>value</code>. I want to group the years (by decade), replace them with one name for that group, and add up all the corresponding values.</p>
<p>For example, below is the small dataset</p>
<pre><code>years value
1950 3
1951 1
1952 2
1961 4
1964 10
1970 34
</code></pre>
<p>The output should look like</p>
<pre><code>years value
1950's 6
1960's 14
1970's 34
</code></pre>
<p>I am trying this in Python using <code>pandas</code> and tried many ways (converting to a dict, using a for loop), but every time I was not able to achieve the desired result. Can someone please help?</p>
|
<p>Use integer division, multiple <code>10</code>, cast to string and add <code>s</code> and use this Series for aggregating <code>sum</code>:</p>
<pre><code>y = ((df['years'] // 10) * 10).astype(str) + 's'
df = df.groupby(y)['value'].sum().reset_index()
print (df)
years value
0 1950s 6
1 1960s 14
2 1970s 34
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (y)
0 1950s
1 1950s
2 1950s
3 1960s
4 1960s
5 1970s
Name: years, dtype: object
</code></pre>
|
python|pandas|dataframe
| 3
|
4,571
| 67,020,577
|
AttributeError: module 'xlwings' has no attribute 'load'
|
<p>I am on Windows 10, Python 3.8.5, xlwings-0.23.0</p>
<p>I am trying to load a selected range of cells in Excel into Pandas DataFrame.
I am following documentation on:
<a href="https://docs.xlwings.org/en/stable/api.html#xlwings.load" rel="nofollow noreferrer">https://docs.xlwings.org/en/stable/api.html#xlwings.load</a></p>
<pre><code>xlwings.load(index=1, header=1)
</code></pre>
<p>Loads the selected cell(s) of the active workbook into a pandas DataFrame. If you select a single cell that has adjacent cells, the range is auto-expanded and turned into a pandas DataFrame. If you don’t have pandas installed, it returns the values as nested lists.</p>
<p>Parameters:
<code>index (bool or int, default 1)</code> – Defines the number of columns on the left that will be turned into the DataFrame’s index</p>
<p><code>header (bool or int, default 1)</code> – Defines the number of rows at the top that will be turned into the DataFrame’s columns</p>
<p>Examples</p>
<pre><code>import xlwings as xw
xw.load()
</code></pre>
<p>My script has selected relevant cells through this command:</p>
<pre><code>wb.sheets['myTab'].range('C5').expand().select()
</code></pre>
<p>I am now trying to load those cells into Pandas DataFrame:</p>
<pre><code>df = xw.load()
</code></pre>
<p>I am getting this error message:
<strong>AttributeError: module 'xlwings' has no attribute 'load'</strong></p>
<p>Also tried:</p>
<pre><code>wb = xw.Book(strMyExcelFile)
wb.sheets[strTab].activate()
df = wb.load()
</code></pre>
<p>In that case the error message is: <strong>AttributeError: 'Book' object has no attribute 'load'</strong></p>
<p>Does anyone have any suggestions on how to pass selected cells into Pandas DataFrame, please?</p>
|
<p>Case resolved with the following solution:</p>
<p>Open Excel file and select a target tab.</p>
<pre><code>wb = xw.Book(strMyExcelFile)
wb.sheets[strMyTab].activate()
</code></pre>
<p>Critical issue #1: Need to define the range of selected cells to use downstream.</p>
<p>Select a starting cell, C5 in this case, and all adjacent cells.</p>
<pre><code>cellRange = wb.sheets[strMyTab].range('C5').expand()
</code></pre>
<p>Critical issue #2: Assign data from the selected range into Pandas DataFrame.</p>
<p>This is what I was trying to solve for earlier but issue #1 is a pre-requisite.
The first row in Excel file is headers.</p>
<pre><code>df = wb.sheets[strMyTab].range(cellRange).options(pd.DataFrame, index=False, header=True).value
</code></pre>
<p>Close Excel file without saving it.</p>
<pre><code>wb.close()
</code></pre>
<p>Note: Credits due to suggestions received here <a href="https://github.com/xlwings/xlwings/issues/1555" rel="nofollow noreferrer">https://github.com/xlwings/xlwings/issues/1555</a></p>
|
excel|pandas|xlwings
| 0
|
4,572
| 66,904,053
|
How do I select the first item independent of the number of dimensions?
|
<p>I have a multidimensional <code>numpy.array</code> called <code>my_imgs</code>. I want to store the array as an image. The dimensions of the array are not constant. At the moment I have four or five dimensions; later I will add more dimensions. For all dimensions from the 4th dimension on, I always want to select the first element. Is there a function to always select the first element regardless of the number of dimensions?</p>
<p>At the moment I have a separate if statement for each dimension.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt

# shape (number, row, col, channel, stack, ...)
my_imgs = '' # Multidimensional image array

fig, axes = plt.subplots(nrows=3, ncols=3)
for i, ax in enumerate(axes.flat):
    ax.axis('off')
    if len(my_imgs.shape) == 4:
        ax.imshow(my_imgs[i, :, :, 0], cmap='gray_r')
    elif len(my_imgs.shape) == 5:
        ax.imshow(my_imgs[i, :, :, 0, 0], cmap='gray_r')
    elif len(my_imgs.shape) == 6:
        ax.imshow(my_imgs[i, :, :, 0, 0, 0], cmap='gray_r')
</code></pre>
|
<p>The slice that numpy uses to specify 'the entire dimension', <code>:</code>, is equal to <code>slice(None)</code>.</p>
<p>This means you can do</p>
<pre><code>import numpy as np

arr_5d = np.zeros((10, ) * 5)
arr_6d = np.zeros((10, ) * 6)

def my_slicer(arr, i=0):
    indexer = (i, slice(None), slice(None)) + (0,) * (len(arr.shape)-3)
    return arr[indexer]

# will print (10, 10) (10, 10)
print(
    my_slicer(arr_5d, i=2).shape,
    my_slicer(arr_6d, i=-3).shape,
)
</code></pre>
|
python|numpy
| 1
|
4,573
| 66,799,192
|
Numpy matrix with values equal to offset from central row/column
|
<p>For given odd value <code>a</code>, I want to generate two matrices, where values represent the offset from central row/column in x or y direction. Example for <code>a=5</code>:</p>
<pre><code> | -2 -1 0 1 2 | | -2 -2 -2 -2 -2 |
| -2 -1 0 1 2 | | -1 -1 -1 -1 -1 |
X = | -2 -1 0 1 2 | Y = | 0 0 0 0 0 |
| -2 -1 0 1 2 | | 1 1 1 1 1 |
| -2 -1 0 1 2 | | 2 2 2 2 2 |
</code></pre>
<p>What is the easiest way to achieve this with <code>Numpy</code>?</p>
|
<p><code>np.arange</code> and <code>np.repeat</code> will do:</p>
<pre><code>a = 5
limits = -(a//2), a//2 + 1
col = np.c_[np.arange(*limits)]
Y = np.repeat(col, repeats=a, axis=1)
X = Y.T
</code></pre>
|
python|numpy|numpy-ndarray
| 1
|
4,574
| 66,991,234
|
Python image_dataset_loader Module Instances are inconsistent
|
<p>I want to import an image dataset into Numpy arrays with images and labels. I am trying to use the <code>image_dataset_loader</code> to do this and have written this so far:</p>
<pre><code>import image_dataset_loader
(x_train, y_train), (x_test, y_test) = image_dataset_loader.load('./data', ['train', 'test'])
</code></pre>
<p>I also have my data directory structured as follows:</p>
<pre><code>data
-train
-male
-male_1.jpg
-male_2.jpg
-male_3.jpg
-male_4.jpg
-......
-female
-female_1.jpg
-female_2.jpg
-female_3.jpg
-female_4.jpg
-......
-test
-male
-male_1.jpg
-male_2.jpg
-male_3.jpg
-male_4.jpg
-......
-female
-female_1.jpg
-female_2.jpg
-female_3.jpg
-female_4.jpg
-......
</code></pre>
<p>I have formatted all my images to be 120x120 and named them exactly as shown above. I have about 56000 files per category. When I run the script above, it throws the following error:</p>
<pre><code>Traceback (most recent call last):
File "main.py", line 33, in <module>
(x_train, y_train), (x_test, y_test) = image_dataset_loader.load('./data', ['train', 'test'])
File "/home/user/anaconda3/envs/AIOS/lib/python3.8/site-packages/image_dataset_loader.py", line 44, in load
raise RuntimeError('Instance shapes are not consistent.')
RuntimeError: Instance shapes are not consistent.
</code></pre>
<p>Can someone please help me sort these images into Numpy arrays?</p>
|
<p>Check the color of your images. Chances are that some of your images may be grayscale.</p>
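<p>The reason mixed color modes break the loader: a grayscale file decodes to a 2-D array while an RGB one decodes to a 3-D array, so the per-image shapes disagree. A minimal illustration with dummy arrays standing in for decoded images:</p>

```python
import numpy as np

rgb_img = np.zeros((120, 120, 3))  # color image: height x width x 3 channels
gray_img = np.zeros((120, 120))    # grayscale image: no channel axis

print(rgb_img.shape, gray_img.shape)  # (120, 120, 3) (120, 120)
# this mismatch is what the loader reports as
# "Instance shapes are not consistent."
```

<p>Converting every file to one mode before loading (for example with Pillow's <code>Image.convert("RGB")</code>) is one way to make the shapes uniform.</p>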
|
arrays|python-3.x|image|numpy|artificial-intelligence
| 0
|
4,575
| 47,128,212
|
Is it possible to have dynamic batchsize in keras?
|
<p>Keras code I have looked at or written has a fixed batch size during training (i.e. 32, 64, 128 ...). I am wondering if it is possible to have a dynamic batch size. (For example, 104 in the first iteration, 82 in the next iteration, 95 in the next, and so on.)</p>
<p>I am currently using tensorflow backend.</p>
|
<p>It is possible if you train in a loop instead of with a single <code>fit</code> call. An example:</p>
<pre><code>from random import shuffle

dataSlices = [(0,104),(104,186),(186,218)]

for epochs in range(0,10):
    shuffle(dataSlices)
    for i in dataSlices:
        x,y = X[i[0]:i[1],:],Y[i[0]:i[1],:]
        model.fit(x,y,epochs=1,batch_size=x.shape[0])  # note: batch_size, not batchsize
        #OR as suggested by Daniel Moller
        #model.train_on_batch(x,y)
<p>This would assume your data is 2d numpy arrays. This idea can be further expanded to use a <code>fit_generator()</code> in place of the for loop if you so choose (see <a href="https://keras.io/models/sequential/" rel="nofollow noreferrer">docs</a>).</p>
|
tensorflow|deep-learning|keras
| 4
|
4,576
| 68,112,182
|
Replace the string in pandas dataframe
|
<p>I have the following dataframe (df):</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>shape</th>
<th>data</th>
</tr>
</thead>
<tbody>
<tr>
<td>POINT</td>
<td>POINT (4495 33442)</td>
</tr>
<tr>
<td>POLYGON</td>
<td>POLYGON ((6324 32691, 6326 32691, 6330 32691, 6333 32693, 6332 32696, 6329 32700, 6328 32704, 6327 32707, 6325 32710, 6322 32713, 6319 32716, 6316 32719, 6313 32722, 6310 32725, 6307 32728, 6303 32728, 6299 32727, 6295 32727, 6291 32730, 6288 32733, 6285 32735, 6281 32735, 6277 32735, 6275 32732, 6274 32729, 6274 32725, 6272 32722, 6269 32720, 6265 32719, 6261 32719, 6258 32716, 6257 32712, 6259 32708, 6262 32705, 6265 32702, 6268 32701, 6272 32701, 6276 32701, 6279 32702, 6283 32702, 6287 32702, 6291 32699, 6294 32696, 6297 32693, 6300 32692, 6304 32692, 6308 32692, 6312 32692, 6316 32692, 6320 32693, 6324 32691))</td>
</tr>
<tr>
<td>POINT</td>
<td>POINT (4673 33465)</td>
</tr>
<tr>
<td>POLYGON</td>
<td>POLYGON ((5810 33296, 5813 33297, 5816 33299, 5819 33301, 5822 33303, 5826 33306, 5829 33307, 5833 33307, 5836 33308, 5837 33312, 5837 33316, 5836 33319, 5834 33323, 5832 33327, 5830 33330, 5828 33333, 5826 33336, 5824 33339, 5821 33342, 5817 33342, 5813 33341, 5808 33340, 5803 33339, 5800 33338))</td>
</tr>
</tbody>
</table>
</div>
<p>I would like to convert it into the following format: if POINT then (4495, 33442) if POLYGON then [(5810, 33296), (5813, 33297), (5816, 33299), (5819, 33301), (5822, 33303), (5826, 33306), (5829, 33307), (5833, 33307), (5836, 33308), (5837, 33312), (5837, 33316), (5836, 33319), (5834, 33323), (5832, 33327), (5830, 33330), (5828, 33333), (5826, 33336), (5824, 33339), (5821, 33342), (5817, 33342), (5813, 33341), (5808, 33340), (5803, 33339), (5800, 33338)]. How do I do that?</p>
<p>What I tried so far?</p>
<pre><code>op2=[]
for st, shape in zip(df['data'],df['shape']):
    if 'POINT' in shape:
        val=re.findall('\([0-9., ]+\)', st)[-1]
        op2.append("({})".format(", ".join(re.findall(r"\d+", val))))
        #op2_list = [ast.literal_eval(l) for l in op2]
        #poi = [Point(i).wkt for i in op2_list]
    else: # Polygon
        val=re.findall('\([0-9., ]+\)', st)[-1]
        paran=val.replace(', ', '),(')
        fin=paran.replace(' ', ',')
        op2.append(fin)

data['converted']=pd.DataFrame(op2)
</code></pre>
<p>Desired output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>shape</th>
<th>data</th>
<th>converted</th>
</tr>
</thead>
<tbody>
<tr>
<td>POINT</td>
<td>POINT (4495 33442)</td>
<td>(4495, 33442)</td>
</tr>
<tr>
<td>POLYGON</td>
<td>POLYGON ((6324 32691, 6326 32691, 6330 32691, 6333 32693, 6332 32696, 6329 32700, 6328 32704, 6327 32707, 6325 32710, 6322 32713, 6319 32716, 6316 32719, 6313 32722, 6310 32725, 6307 32728, 6303 32728, 6299 32727, 6295 32727, 6291 32730, 6288 32733, 6285 32735, 6281 32735, 6277 32735, 6275 32732, 6274 32729, 6274 32725, 6272 32722, 6269 32720, 6265 32719, 6261 32719, 6258 32716, 6257 32712, 6259 32708, 6262 32705, 6265 32702, 6268 32701, 6272 32701, 6276 32701, 6279 32702, 6283 32702, 6287 32702, 6291 32699, 6294 32696, 6297 32693, 6300 32692, 6304 32692, 6308 32692, 6312 32692, 6316 32692, 6320 32693, 6324 32691))</td>
<td>[(6324, 32691), (6326, 32691), (6330, 32691), (6333, 32693), (6332, 32696), (6329, 32700), (6328, 32704), (6327, 32707), (6325, 32710), (6322, 32713), (6319, 32716), (6316, 32719), (6313, 32722), (6310, 32725), (6307, 32728), (6303, 32728), (6299, 32727), (6295, 32727), (6291, 32730), (6288 ,32733), (6285, 32735), (6281, 32735), (6277, 32735), (6275, 32732), (6274, 32729), (6274, 32725), (6272, 32722), (6269, 32720), (6265, 32719), (6261, 32719), (6258, 32716), (6257, 32712), (6259, 32708), (6262, 32705), (6265, 32702), (6268, 32701), (6272, 32701), (6276, 32701), (6279, 32702), (6283, 32702), (6287, 32702), (6291, 32699), (6294, 32696), (6297, 32693), (6300, 32692), (6304, 32692), (6308, 32692), (6312, 32692), (6316, 32692), (6320, 32693), (6324, 32691)]</td>
</tr>
<tr>
<td>POINT</td>
<td>POINT (4673 33465)</td>
<td>(4673, 33465)</td>
</tr>
<tr>
<td>POLYGON</td>
<td>POLYGON ((5810 33296, 5813 33297, 5816 33299, 5819 33301, 5822 33303, 5826 33306, 5829 33307, 5833 33307, 5836 33308, 5837 33312, 5837 33316, 5836 33319, 5834 33323, 5832 33327, 5830 33330, 5828 33333, 5826 33336, 5824 33339, 5821 33342, 5817 33342, 5813 33341, 5808 33340, 5803 33339, 5800 33338))</td>
<td>[(5810, 33296), (5813, 33297), (5816, 33299), (5819, 33301), (5822, 33303), (5826, 33306), (5829, 33307), (5833, 33307), (5836, 33308), (5837, 33312), (5837, 33316), (5836, 33319), (5834, 33323), (5832, 33327), (5830, 33330), (5828, 33333), (5826, 33336), (5824, 33339), (5821, 33342), (5817, 33342), (5813, 33341), (5808, 33340), (5803, 33339), (5800, 33338)]</td>
</tr>
</tbody>
</table>
</div>
<p>This does not convert the polygons. How do I do that?</p>
|
<p>This function will format the polygon strings correctly:</p>
<pre><code>def format_polygon(s):
    return [tuple([float(i) for i in x.split(" ")]) for x in s[10:-2].split(", ")]
</code></pre>
<p>and this code will format the point strings correctly:</p>
<pre><code>def format_point(s):
    return tuple([float(i) for i in s[7:-1].split(" ")])
</code></pre>
<p>they can then be applied to your dataframe with <code>.loc</code> (plain chained indexing such as <code>df[mask]["data"] = ...</code> assigns to a temporary copy and silently leaves <code>df</code> unchanged):</p>
<pre><code>df.loc[df["shape"]=="POINT", "data"] = df.loc[df["shape"]=="POINT", "data"].apply(format_point)
df.loc[df["shape"]=="POLYGON", "data"] = df.loc[df["shape"]=="POLYGON", "data"].apply(format_polygon)
</code></pre>
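<p>As a quick check, running the two helpers on shortened versions of the question's sample strings (the helpers are repeated here so the snippet is self-contained):</p>

```python
def format_point(s):
    # strip the "POINT (" prefix and ")" suffix, then split on the space
    return tuple(float(i) for i in s[7:-1].split(" "))

def format_polygon(s):
    # strip "POLYGON ((" and "))", then split into coordinate pairs
    return [tuple(float(i) for i in x.split(" ")) for x in s[10:-2].split(", ")]

print(format_point("POINT (4495 33442)"))
# (4495.0, 33442.0)
print(format_polygon("POLYGON ((5810 33296, 5813 33297))"))
# [(5810.0, 33296.0), (5813.0, 33297.0)]
```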
|
python|pandas|replace|python-re|parentheses
| 0
|
4,577
| 68,397,820
|
List of lists to array conversion. Mixed strings and floats
|
<p>I have an array (150,40) that looks like:</p>
<pre><code>list_of_lists= ['name_1' 0.0123
'name_2' 0.1234
... 'name_40' 0.213241
Name: 2015-03-26 16:02:42.117000, dtype: float64,
and so on, 149 more ]
</code></pre>
<p>I have two questions:</p>
<p>The 40 names are all the same for all 150 lists; how can I convert these to the columns of an array, and the values (which differ across all 150 lists) to rows corresponding to each column name?
Example:</p>
<pre><code>array= [ 'name_1', 'name_2',... 'name_40
0.0123, 0.1234, 0.213241]
</code></pre>
<p>Second, the <code>Name: 2015-03-26 16:02:42.117000</code> is actually a timestamp, and I would need this to be the 0 column, with 150 rows
like:</p>
<pre><code>array= [ 'timestamp' 'name_1', 'name_2',... 'name_40
16:02:42.117000 0.0123, 0.1234, 0.213241]
</code></pre>
<p>I have no clue why the timestamp is the Name of the list in the first place
And I have no idea how to convert this to an array for further processing</p>
|
<p>In regards to the first question, one way you could do this strictly using only <code>numpy</code> would be to create a new <code>numpy</code> structured array and use two nested for loops (<code>for x in list</code>), the top level looping over the arrays and the nested one looping over each array's elements, appending them to the new structured array. That being said, this approach will result in code that has poor readability and is overly complicated.</p>
<p>A better option is to use a <code>pandas</code> DataFrame. You should be able to convert all your <code>numpy</code> structured arrays directly into pandas as simply as this <code>pandas.DataFrame(mylist)</code> or read in the data directly into a dataframe if you're reading it in from external files.</p>
<p>The approach below starts from your structured arrays (but you could just swap <code>pd.DataFrame</code> to something like <code>pd.read_csv(...)</code> if using csv files, or another function for other file formats).</p>
<pre><code>import pandas as pd
# need a list of those arrays to allow easy looping
listofArrays = [array1, array2, ...]
# now using list comprehension, converting all those to dataframes
listofDataFrames = [pd.DataFrame(arrayX) for arrayX in listofArrays]
# now we can just use pd.Concat to put all those together
completeSet = pd.concat(listofDataFrames)
</code></pre>
<p>And with that you have a dataframe with everything pulled together. Which if you strictly need it, can be converted back to a purely <code>numpy</code> data structure.</p>
<hr />
<p>In regards to the second question, this can be solved using regular expression, or since its a fairly simple string, using <code>split</code> and <code>join</code>.</p>
<p>It's not clear what your input data format is, but if I'm reading it correctly, this is not in an array yet. So I take it the input looks something like</p>
<pre><code>name: date1 timestamp data: 213123 12313
name2: date2 timestamp2 data: 21311223 313
</code></pre>
<p>Even if it is different, the same principle applies.</p>
<p>Since this is relatively simple, <code>split</code> and <code>join</code> would be the easiest approach</p>
<pre><code>originalnamestring = "Name: 2015-03-26 16:02:42.117000"
splitbyspace = originalnamestring.split(" ")
name = splitbyspace[0]
# and if you want to remove the : at end
name = name[:-1]
# if you want separate date and time
date = splitbyspace[1]
time = splitbyspace[2]
# or combined timestamp
timestamp = " ".join(splitbyspace[1:])
</code></pre>
|
python|arrays|list|numpy
| 0
|
4,578
| 68,073,270
|
Pandas: full outer join with filled-in blanks
|
<p>I have the following df:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Product</th>
<th>Region</th>
<th>MTD PnL</th>
<th>FYTD PnL</th>
</tr>
</thead>
<tbody>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Northeast</td>
<td>$300</td>
<td>$15,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Southeast</td>
<td>$200</td>
<td>$10,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Mid-Atlantic</td>
<td>$375</td>
<td>$12,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Northeast</td>
<td>$150</td>
<td>$7,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Southwest</td>
<td>$70</td>
<td>$4,000</td>
</tr>
</tbody>
</table>
</div>
<p>And I have a separate, exhaustive list of Regions:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Region</th>
</tr>
</thead>
<tbody>
<tr>
<td>Northeast</td>
</tr>
<tr>
<td>Southeast</td>
</tr>
<tr>
<td>Mid-Atlantic</td>
</tr>
<tr>
<td>Midwest</td>
</tr>
<tr>
<td>Prairies</td>
</tr>
<tr>
<td>Southwest</td>
</tr>
<tr>
<td>Northwest</td>
</tr>
<tr>
<td>California</td>
</tr>
</tbody>
</table>
</div>
<p>I need to join this list to the df so that for each given product, all of the regions are shown (with zeroes in the PnL columns if no values are present)</p>
<p>So far I have used <code>pd.merge</code> to (full outer) join the df and region list and I got the following output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Product</th>
<th>Region</th>
<th>MTD PnL</th>
<th>FYTD PnL</th>
</tr>
</thead>
<tbody>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Northeast</td>
<td>$300</td>
<td>$15,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Southeast</td>
<td>$200</td>
<td>$10,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Mid-Atlantic</td>
<td>$375</td>
<td>$12,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Midwest</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Prairies</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Southwest</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Northwest</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>California</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Northeast</td>
<td>$150</td>
<td>$7,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Southeast</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Mid-Atlantic</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Midwest</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Prairies</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Southwest</td>
<td>$70</td>
<td>$4,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>Northwest</td>
<td>NaN</td>
<td>NaN</td>
</tr>
<tr>
<td>21/6/1</td>
<td></td>
<td>California</td>
<td>NaN</td>
<td>NaN</td>
</tr>
</tbody>
</table>
</div>
<p>But I need the Product column to list the product in every row, as well as the two PnL columns to show $0 instead of NaN. Like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Date</th>
<th>Product</th>
<th>Region</th>
<th>MTD PnL</th>
<th>FYTD PnL</th>
</tr>
</thead>
<tbody>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Northeast</td>
<td>$300</td>
<td>$15,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Southeast</td>
<td>$200</td>
<td>$10,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Mid-Atlantic</td>
<td>$375</td>
<td>$12,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Midwest</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Prairies</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Southwest</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>Northwest</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Coke</td>
<td>California</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Northeast</td>
<td>$150</td>
<td>$7,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Southeast</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Mid-Atlantic</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Midwest</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Prairies</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Southwest</td>
<td>$70</td>
<td>$4,000</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>Northwest</td>
<td>$0</td>
<td>$0</td>
</tr>
<tr>
<td>21/6/1</td>
<td>Pepsi</td>
<td>California</td>
<td>$0</td>
<td>$0</td>
</tr>
</tbody>
</table>
</div>
<p>So far I have used <code>.fillna(0)</code> to replace the NaNs, but I cannot find a way to fill in the product names programmatically. What is the simplest way to do this?</p>
|
<p>Use <code>.reindex</code> with the Cartesian product of the unique values for each level, so that every missing Date/Product/Region combination is created. <code>.reindex</code> also supports a <code>fill_value</code> for the non-index columns.</p>
<pre><code>import pandas as pd
idx = pd.MultiIndex.from_product([df1.Date.unique(), df1.Product.unique(), df2.Region.unique()],
names=['Date', 'Product', 'Region'])
df1 = (df1.set_index(['Date', 'Product', 'Region'])
.reindex(idx, fill_value='$0')
.reset_index())
</code></pre>
<hr />
<pre><code> Date Product Region MTD PnL FYTD PnL
0 21/6/1 Coke Northeast $300 $15,000
1 21/6/1 Coke Southeast $200 $10,000
2 21/6/1 Coke Mid-Atlantic $375 $12,000
3 21/6/1 Coke Midwest $0 $0
4 21/6/1 Coke Prairies $0 $0
5 21/6/1 Coke Southwest $0 $0
6 21/6/1 Coke Northwest $0 $0
7 21/6/1 Coke California $0 $0
8 21/6/1 Pepsi Northeast $150 $7,000
9 21/6/1 Pepsi Southeast $0 $0
10 21/6/1 Pepsi Mid-Atlantic $0 $0
11 21/6/1 Pepsi Midwest $0 $0
12 21/6/1 Pepsi Prairies $0 $0
13 21/6/1 Pepsi Southwest $70 $4,000
14 21/6/1 Pepsi Northwest $0 $0
15 21/6/1 Pepsi California $0 $0
</code></pre>
|
python|pandas|dataframe|join
| 0
|
4,579
| 59,119,925
|
Exclude column from being read using pd.ExcelFile().parse()
|
<p>I would like to exclude certain columns from being read when using pd.ExcelFile('my.xls').parse()</p>
<p>The Excel file I am trying to parse has too many columns to list them all in the <code>usecols</code> argument, since I only need to get rid of a single column that is causing trouble.</p>
<p>Is there like a simple way to ~ invert list (I know you can't do that) passed to usecols or something?</p>
|
<p>We can usually do </p>
<pre><code>head = list(pd.read_excel('your.xls', nrows = 1))
df = pd.read_excel('your.xls', usecols = [col for col in head if col != 'the one drop'])
</code></pre>
<p>However, why not read the whole file and then <code>drop</code> the column:</p>
<pre><code>df = pd.read_excel('your.xls').drop('the col drop', axis = 1)
</code></pre>
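<p>As an aside, recent pandas versions also accept a callable for <code>usecols</code>, which avoids reading the header separately. Sketched here against an in-memory CSV; the same parameter is accepted by <code>read_excel</code>:</p>

```python
from io import StringIO

import pandas as pd

buf = StringIO("a,b,drop_me\n1,2,3\n")
# Keep every column except the troublesome one
df = pd.read_csv(buf, usecols=lambda col: col != "drop_me")
print(list(df.columns))  # ['a', 'b']
```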
|
python|pandas|python-3.7
| 2
|
4,580
| 46,004,393
|
Convolutional layer output size
|
<p>I'm currently trying to get into deep learning and I have a minor problem in understanding concerning CNNs. </p>
<p>According to <a href="http://cs231n.github.io/convolutional-networks" rel="nofollow noreferrer">CS231n</a>, the common formula for computing the output size of a conv. layer is <code>W'=(W−F+2P)/S+1</code>, where <code>W</code> is the input size, <code>F</code> is the receptive field, <code>P</code> is the padding and <code>S</code> is the stride. So far so good and I can perfectly comprehend that formula.</p>
<p>But then there's the <a href="https://www.tensorflow.org/get_started/mnist/pros#first_convolutional_layer" rel="nofollow noreferrer">TensorFlow tutorial</a>. According to the tutorial, the output size of the first convolutional layer is 28x28x32. Why not (28–5)/1 + 1 = 24 → 24x24x32 so that the first pooling layer would reduce it to 12x12x32? What am I doing wrong here?</p>
|
<p>In the tutorial, the conv layer uses <code>SAME</code> padding by default, for which <code>P = floor(F/2)</code> (with stride 1). So <code>(28 - 5 + 2*2)/1 + 1 = 28</code>.</p>
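<p>The two padding modes can be compared with a small helper (a sketch; <code>SAME</code> uses the TF convention that the output is <code>ceil(W/S)</code>, which for stride 1 matches <code>P = floor(F/2)</code>):</p>

```python
import math

def conv_output_size(w, f, s=1, padding="VALID"):
    """Spatial output size of a conv layer for input w, filter f, stride s."""
    if padding == "SAME":
        return math.ceil(w / s)        # TF-style SAME padding
    return (w - f) // s + 1            # VALID: (W - F)/S + 1

print(conv_output_size(28, 5, padding="SAME"))   # 28
print(conv_output_size(28, 5, padding="VALID"))  # 24
```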
|
tensorflow|deep-learning
| 2
|
4,581
| 45,768,126
|
When we do supervised classification with NN, why do we train for cross-entropy and not for classification error?
|
<p>The standard supervised classification setup: we have a bunch of samples, each with the correct label out of <code>N</code> labels. We build a NN with N outputs, transform those to probabilities with softmax, and the loss is the mean <code>cross-entropy</code> between each NN output and the corresponding true label, represented as a <code>1-hot</code> vector with <code>1</code> in the true label and <code>0</code> elsewhere. We then optimize this loss by following its gradient. The classification error is used just to measure our model quality. </p>
<p>HOWEVER, I know that when doing <code>policy gradient</code> we can use the <a href="http://adv-ml-2017.wdfiles.com/local--files/course-schedule/rl_class1.pdf" rel="nofollow noreferrer">likelihood ratio trick</a>, and we no longer need to use <code>cross-entropy</code>! our loss simply <code>tf.gather</code> the NN output corresponding to the correct label. E.g. <a href="https://gist.github.com/shanest/535acf4c62ee2a71da498281c2dfc4f4" rel="nofollow noreferrer">this solution of OpenAI gym CartPole</a>. </p>
<p>WHY can't we use the same trick when doing supervised learning? I was thinking that the reason we used <code>cross-entropy</code> is because it is differentiable, but apparently <code>tf.gather</code> is <a href="https://stackoverflow.com/questions/45701722/tensorflow-how-come-gather-nd-is-differentiable">differentiable as well</a>. </p>
<p>I mean - IF we measure ourselves on classification error, and we CAN optimize for classification error as it's differentiable, isn't it BETTER to also optimize for classification error instead of this weird <code>cross-entropy</code> proxy? </p>
|
<p>Policy gradient <strong>is</strong> using cross entropy (or KL divergence, as Ishant pointed out). For supervised learning, <code>tf.gather</code> is really just an implementation trick, nothing else. For RL, on the other hand, it is a must, because you do not know "what would happen" if you were to execute another action. Consequently you end up with a high-variance estimator of your gradients, something you would like to avoid at all costs, if possible.</p>
<p>Going back to supervised learning, though:</p>
<pre><code>CE(p||q) = - SUM_i q_i log p_i
</code></pre>
<p>Let's assume that q is one-hot encoded, with a 1 at the k'th position; then</p>
<pre><code>CE(p||q) = - q_k log p_k = - log p_k
</code></pre>
<p>So if you want, you can implement this as tf.gather, it simply does not matter. The cross-entropy is simply more generic because it handles more complex targets. In particular, in TF you have <strong>sparse</strong> cross entropy which does exactly what you describe - exploits one hot encoding, that's it. Mathematically there is no difference, there is small difference computation-wise, and there are functions doing exactly what you want.</p>
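<p>The equivalence is easy to check numerically; a small <code>numpy</code> sketch, independent of TF:</p>

```python
import numpy as np

logits = np.array([2.0, 1.0, 0.1])
p = np.exp(logits) / np.exp(logits).sum()  # softmax probabilities
k = 0                                      # index of the true class
q = np.eye(3)[k]                           # one-hot target

full_ce = -(q * np.log(p)).sum()           # - SUM_i q_i log p_i
gathered = -np.log(p[k])                   # the "gather" shortcut
print(np.isclose(full_ce, gathered))       # True
```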
|
tensorflow|neural-network|gradient-descent|reinforcement-learning
| 1
|
4,582
| 45,742,265
|
Matrix m1 multiplied by tf.inverse(m1) does not yield identity matrix
|
<p>Using TensorFlow in python, I have the following code:</p>
<pre><code>sess = tf.InteractiveSession() # so I can eval()
t1 = tf.convert_to_tensor([[1,4,5],[34,5,1],[53,1,4]],dtype=tensorflow.float32)
t1.eval()
OUTPUT>> array([[ 1., 4., 5.],
[ 34., 5., 1.],
[ 53., 1., 4.]], dtype=float32)
# so far, so good!
t1_inverse = tf.matrix_inverse(t1)
t1_inverse.eval()
OUTPUT>> array([[-0.01294278, 0.00749319, 0.01430518],
[ 0.05653951, 0.17779292, -0.11512262],
[ 0.15735695, -0.14373296, 0.08923706]], dtype=float32)
# I'm not a math whiz but this looks like an inverted matrix to me!
(t1*t1_inverse).eval() # should yield identity matrix, but..
OUTPUT>> array([[-0.01294278, 0.02997275, 0.07152588],
[ 1.92234337, 0.88896459, -0.11512262],
[ 8.33991814, -0.14373296, 0.35694823]], dtype=float32)
</code></pre>
<p>So my question is, why does matrix t1 multiplied by its inverse not yield the identity matrix, or [[1,0,0],[0,1,0],[0,0,1]] ?</p>
|
<p>Here <code>t1*t1_inverse</code> is element-wise multiplication, you need to use <code>tf.matmul</code></p>
<pre><code>identity_mat = tf.matmul(t1, t1_inverse)
sess.run(identity_mat)
# Results: array([[ 1.00000000e+00, 5.96046448e-08, 0.00000000e+00],
[ 0.00000000e+00, 1.00000000e+00, -6.70552254e-08],
[ 0.00000000e+00, 5.96046448e-08, 9.99999881e-01]], dtype=float32)
</code></pre>
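<p>The same distinction exists in plain <code>numpy</code>, where <code>*</code> is element-wise and <code>@</code> is the matrix product:</p>

```python
import numpy as np

m = np.array([[1., 4., 5.],
              [34., 5., 1.],
              [53., 1., 4.]])
m_inv = np.linalg.inv(m)

elementwise = m * m_inv   # element-wise product, NOT the identity
true_product = m @ m_inv  # matrix product, numerically the identity

print(np.allclose(true_product, np.eye(3)))  # True
```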
|
python|tensorflow|matrix-multiplication|matrix-inverse
| 1
|
4,583
| 45,866,555
|
Pycharm not displaying wide Dataframe in Jupyter Notebook
|
<p>In Pycharm, I'm using a Jupyter notebook, but when the pandas dataframe I'm working with gets wider than the width of the cell it doesn't display the dataframe anymore. Instead there's just a horizontal line across the output cell. I've tried setting the max columns, width, and every other pandas display option and it's still happening. The dataframe displays fine if I widen the Pycharm window, but for dataframes wider than my screen that's not possible.</p>
<p><a href="https://i.stack.imgur.com/kEmzq.png" rel="noreferrer">Here's a screengrab of what it looks like</a></p>
<pre><code>import pandas as pd
import numpy as np
from IPython.display import display, HTML
display(HTML("<style>.container { width:95% !important; }</style>"))
pd.options.display.max_rows = 50
pd.options.display.max_columns = 200
desired_width = 320
pd.set_option('display.width', desired_width)
pd.set_option('display.height', 1000)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.max_columns', None)
dates = pd.date_range('20130101',periods=6)
dates
df = pd.DataFrame(np.random.randn(6,19),index=dates,columns=list('ABCDEFGHIJKLMNOPQRS'))
#df
df = pd.DataFrame(np.random.randn(6,11),index=dates,columns=list('ABCDEFGHIJK'))
df
#df = pd.DataFrame(np.random.randn(6,8),index=dates,columns=list('ABCDEFGH'))
</code></pre>
|
<p>This is a PyCharm bug and it still has not been resolved in the latest (2017.3) version. </p>
<p>Only 1D dataframes are shown as expected, multidimensional dataframes are shown as a horizontal line. </p>
<p>You can vote this issue in IntelliJ's issue tracker: <a href="https://youtrack.jetbrains.com/issue/PY-25931" rel="nofollow noreferrer">https://youtrack.jetbrains.com/issue/PY-25931</a> </p>
|
python|pandas|ipython|pycharm|jupyter
| 1
|
4,584
| 46,144,951
|
Sci-kit-learn Normalization removes column headers
|
<p>I have a pandas data frame with 22 columns, where the index is datetime. </p>
<p>I am trying to normalize this data using the following code:</p>
<pre><code>from sklearn.preprocessing import MinMaxScaler
# Normalization
scaler = MinMaxScaler(copy = False)
normal_data = scaler.fit_transform(all_data2)
</code></pre>
<p>The problem is that I lose a lot of data by applying this function, for example, here is the before:</p>
<pre><code>all_data2.head(n = 5)
Out[105]:
btc_price btc_change btc_change_label eth_price \
time
2017-09-02 21:54:00 4537.8338 -0.066307 0 330.727
2017-09-02 22:29:00 4577.6050 -0.056294 0 337.804
2017-09-02 23:04:00 4566.3600 -0.059716 0 336.938
2017-09-02 23:39:00 4590.0313 -0.056242 0 342.929
2017-09-03 00:14:00 4676.1925 -0.035857 0 354.171
block_size difficulty estimated_btc_sent \
time
2017-09-02 21:54:00 142521291.0 8.880000e+11 2.040000e+13
2017-09-02 22:29:00 136524566.0 8.880000e+11 2.030000e+13
2017-09-02 23:04:00 134845546.0 8.880000e+11 2.010000e+13
2017-09-02 23:39:00 133910638.0 8.880000e+11 1.990000e+13
2017-09-03 00:14:00 130678099.0 8.880000e+11 2.010000e+13
estimated_transaction_volume_usd hash_rate \
time
2017-09-02 21:54:00 923315359.5 7.417412e+09
2017-09-02 22:29:00 918188066.9 7.152505e+09
2017-09-02 23:04:00 910440915.6 7.240807e+09
2017-09-02 23:39:00 901565929.9 7.284958e+09
2017-09-03 00:14:00 922422228.4 7.152505e+09
miners_revenue_btc ... n_blocks_mined \
time ...
2017-09-02 21:54:00 2395.0 ... 168.0
2017-09-02 22:29:00 2317.0 ... 162.0
2017-09-02 23:04:00 2342.0 ... 164.0
2017-09-02 23:39:00 2352.0 ... 165.0
2017-09-03 00:14:00 2316.0 ... 162.0
n_blocks_total n_btc_mined n_tx nextretarget \
time
2017-09-02 21:54:00 483207.0 2.100000e+11 241558.0 483839.0
2017-09-02 22:29:00 483208.0 2.030000e+11 236661.0 483839.0
2017-09-02 23:04:00 483216.0 2.050000e+11 238682.0 483839.0
2017-09-02 23:39:00 483220.0 2.060000e+11 237159.0 483839.0
2017-09-03 00:14:00 483223.0 2.030000e+11 237464.0 483839.0
total_btc_sent total_fees_btc totalbtc \
time
2017-09-02 21:54:00 1.620000e+14 2.959788e+10 1.650000e+15
2017-09-02 22:29:00 1.600000e+14 2.920230e+10 1.650000e+15
2017-09-02 23:04:00 1.600000e+14 2.923498e+10 1.650000e+15
2017-09-02 23:39:00 1.580000e+14 2.899158e+10 1.650000e+15
2017-09-03 00:14:00 1.580000e+14 2.917904e+10 1.650000e+15
trade_volume_btc trade_volume_usd
time
2017-09-02 21:54:00 102451.92 463497284.7
2017-09-02 22:29:00 102451.92 463497284.7
2017-09-02 23:04:00 102451.92 463497284.7
2017-09-02 23:39:00 102451.92 463497284.7
2017-09-03 00:14:00 96216.78 440710136.1
[5 rows x 22 columns]
</code></pre>
<p>Afterwards, I get a <code>numpy</code> array where the new index has been normalized (which is not good as it is the date column) and also all of the column headers are removed. </p>
<p>Can I somehow normalize only select columns of the original data frame while keeping them in-place? </p>
<p>If not, then how can I select only the desired columns from the normalized numpy array and insert them back into the original df? </p>
|
<p>Try <a href="http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.scale.html" rel="nofollow noreferrer"><code>sklearn.preprocessing.scale</code></a>. No need for the class-based scaler here.</p>
<blockquote>
<p>Standardize a dataset along any axis. Center to the mean and component
wise scale to unit variance.</p>
</blockquote>
<p>You can use this like so:</p>
<pre><code>from sklearn.preprocessing import scale
df = pd.DataFrame({'col1' : np.random.randn(10),
'col2' : np.arange(10, 30, 2),
'col3' : np.arange(10)},
index=pd.date_range('2017', periods=10))
# Specify columns to scale to N~(0,1)
to_scale = ['col2', 'col3']
df.loc[:, to_scale] = scale(df[to_scale])
print(df)
col1 col2 col3
2017-01-01 -0.28292 -1.56670 -1.56670
2017-01-02 -1.55172 -1.21854 -1.21854
2017-01-03 0.51800 -0.87039 -0.87039
2017-01-04 -1.75596 -0.52223 -0.52223
2017-01-05 1.34857 -0.17408 -0.17408
2017-01-06 0.12600 0.17408 0.17408
2017-01-07 0.21887 0.52223 0.52223
2017-01-08 0.84924 0.87039 0.87039
2017-01-09 0.32555 1.21854 1.21854
2017-01-10 0.54095 1.56670 1.56670
</code></pre>
<p>To return a modified copy:</p>
<pre><code>new_df = df.copy()
new_df.loc[:, to_scale] = scale(df[to_scale])
</code></pre>
<p>As for the warning: hard to say without seeing your data, but it does look like you have some large values (7.417412e+09). That warning is from <a href="https://github.com/scikit-learn/scikit-learn/blob/ef5cb84a/sklearn/preprocessing/data.py#L164" rel="nofollow noreferrer">here</a>, and I would venture to say it's safe to ignore--it's being thrown because there's some tolerance test, testing whether your new mean is equal to 0, that's failing. To see if it's actually failing, just use <code>new_df.mean()</code> and <code>new_df.std()</code> to check that your columns have been normalized to N~(0,1).</p>
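<p>If you do want to stick with <code>MinMaxScaler</code>, the same select-columns trick keeps the datetime index and headers intact, since you only assign back into the frame. A sketch with made-up data:</p>

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({"btc_price": [4537.8, 4577.6, 4676.1],
                   "hash_rate": [7.4e9, 7.1e9, 7.2e9]},
                  index=pd.date_range("2017-09-02", periods=3))

to_scale = ["btc_price", "hash_rate"]
df[to_scale] = MinMaxScaler().fit_transform(df[to_scale])
print(df["btc_price"].min(), df["btc_price"].max())  # 0.0 1.0
```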
|
python|pandas|numpy|scikit-learn
| 1
|
4,585
| 66,680,988
|
Cumulative Sum of Grouped Strings in Pandas
|
<p>I have a pandas data frame that I want to group by two columns and then return the cumulative sum of a third column of strings as a list within one of these groups.</p>
<p>Example:</p>
<pre><code>Year Bucket Name
2000 1 A
2001 1 B
2003 1 C
2000 2 B
2002 2 C
</code></pre>
<p>The output I want is:</p>
<pre><code>Year Bucket Cum_Sum
2000 1 [A]
2001 1 [A,B]
2002 1 [A,B]
2003 1 [A,B,C]
2000 2 [B]
2001 2 [B]
2002 2 [B,C]
2003 2 [B,C]
</code></pre>
<p>I tried to piece together an answer from two responses:
<a href="https://stackoverflow.com/a/39623235/5143841">https://stackoverflow.com/a/39623235/5143841</a>
<a href="https://stackoverflow.com/a/22651188/5143841">https://stackoverflow.com/a/22651188/5143841</a></p>
<p>But I can't quite get there.</p>
|
<p>My Dr. Frankenstein Answer</p>
<pre><code>dat = []
rng = range(df.Year.min(), df.Year.max() + 1)
for b, d in df.groupby('Bucket'):
for y in rng:
dat.append([y, b, [*d.Name[d.Year <= y]]])
pd.DataFrame(dat, columns=[*df])
Year Bucket Name
0 2000 1 [A]
1 2001 1 [A, B]
2 2002 1 [A, B]
3 2003 1 [A, B, C]
4 2000 2 [B]
5 2001 2 [B]
6 2002 2 [B, C]
7 2003 2 [B, C]
</code></pre>
<p>Another freaky answer</p>
<pre><code>rng = range(df.Year.min(), df.Year.max() + 1)
i = [(y, b) for b, d in df.groupby('Bucket') for y in rng]
s = df.set_index(['Year', 'Bucket']).Name.map(lambda x: [x])
s.reindex(i, fill_value=[]).groupby(level=1).apply(pd.Series.cumsum).reset_index()
Year Bucket Name
0 2000 1 [A]
1 2001 1 [A, B]
2 2002 1 [A, B]
3 2003 1 [A, B, C]
4 2000 2 [B]
5 2001 2 [B]
6 2002 2 [B, C]
7 2003 2 [B, C]
</code></pre>
|
python|pandas|cumsum
| 3
|
4,586
| 66,649,154
|
pandas read_excel adds fractional seconds that don't appear in original xlsx file
|
<p>I am reading an excel spreadsheet into pandas as:</p>
<p><code>input_df: pd.DataFrame = pd.read_excel(data_filename, engine='openpyxl')</code></p>
<p>Here's a screenshot of the beginning of the excel file:</p>
<p><a href="https://i.stack.imgur.com/GYKTB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GYKTB.png" alt="enter image description here" /></a></p>
<p>However, when I exam the dataframe, fractional parts are added to two out of the three time columns.</p>
<pre><code>Out[6]:
Real Time Current(nA) Unnamed: 2 Unnamed: 3 Sensor 4 Time Sensor 4 Current nA Unnamed: 6 FS Time FS Value
0 11:58:03.111700 119.400 NaN NaN 10:53:39 119.428 NaN 10:43:12 101.0
1 11:58:04.681197 119.439 NaN NaN 10:53:40.795800 119.474 NaN 10:44:06 103.0
2 11:58:07.246866 119.417 NaN NaN 10:53:43.214300 119.447 NaN 10:51:36 88.0
3 11:58:09.388763 119.416 NaN NaN 10:53:45.294400 119.439 NaN 10:53:39 88.0
4 11:58:11.454134 119.411 NaN NaN 10:53:47.302400 119.451 NaN 11:06:58 83.0
</code></pre>
<p>These don't appear in the original excel file as evidenced by the screenshot below:</p>
<p><a href="https://i.stack.imgur.com/Q5uDv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Q5uDv.png" alt="enter image description here" /></a>
I have no idea where these fractions come from. They don't appear in the original file. Why is this happening, and how can I read in the correct times?</p>
|
<p>OK. This was my fault. It turns out that there is a fractional part to the timestamps. Google sheets needed to be configured to show that fractional part. In summary, it appears that there is agreement between the xlsx file and the pandas dataframe.</p>
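<p>For anyone who wants to drop the fractional part after reading, it can be floored away. A sketch, assuming the column parses as datetimes:</p>

```python
import pandas as pd

s = pd.to_datetime(pd.Series(["11:58:03.111700", "11:58:04.681197"]))
whole_seconds = s.dt.floor("s")  # truncate to whole seconds
print([str(t) for t in whole_seconds.dt.time])  # ['11:58:03', '11:58:04']
```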
|
python|excel|pandas|dataframe
| 0
|
4,587
| 66,385,270
|
Getting groups by group index
|
<p>I want to access the group by group index. My dataframe is as given below</p>
<pre><code>import pandas as pd
from io import StringIO
import numpy as np
data = """
id,name
100,A
100,B
100,C
100,D
100,pp;
212,E
212,F
212,ds
212,G
212, dsds
212, sas
300,Endüstrisi`
"""
df = pd.read_csv(StringIO(data))
</code></pre>
<p>I want to groupby 'id' and access the groups by its group index.</p>
<pre><code>dfg=df.groupby('id',sort=False,as_index=False)
dfg.get_group(0)
</code></pre>
<p>I was expecting this to return the first group which is the group for <code>id =1</code> (which is the first group)</p>
|
<p>You need to pass the value of <code>id</code>:</p>
<pre><code>dfg=df.groupby('id',sort=False)
a = dfg.get_group(100)
print (a)
id name
0 100 A
1 100 B
2 100 C
3 100 D
4 100 pp;
</code></pre>
<hr />
<pre><code>dfg=df.groupby('id',sort=False)
a = dfg.get_group(df.loc[0, 'id'])
print (a)
id name
0 100 A
1 100 B
2 100 C
3 100 D
4 100 pp;
</code></pre>
<p>If you need to enumerate the groups, you can use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.ngroup.html" rel="nofollow noreferrer"><code>GroupBy.ngroup</code></a>:</p>
<pre><code>dfg=df.groupby('id',sort=False)
a = df[dfg.ngroup() == 0]
print (a)
id name
0 100 A
1 100 B
2 100 C
3 100 D
4 100 pp;
</code></pre>
<p><strong>Detail</strong>:</p>
<pre><code>print (dfg.ngroup())
0 0
1 0
2 0
3 0
4 0
5 1
6 1
7 1
8 1
9 1
10 1
11 2
dtype: int64
</code></pre>
<p>EDIT: Another idea, if you need to select groups by position (all <code>id</code> values form consecutive groups), is to compare against the unique values of <code>id</code> selected by position:</p>
<pre><code>ids = df['id'].unique()
print (ids)
[100 212 300]
print (df[df['id'].eq(ids[0])])
id name
0 100 A
1 100 B
2 100 C
3 100 D
4 100 pp;
print (df[df['id'].eq(ids[1])])
id name
5 212 E
6 212 F
7 212 ds
8 212 G
9 212 dsds
10 212 sas
</code></pre>
|
pandas|group-by|python-3.8
| 1
|
4,588
| 66,413,040
|
Converting a naive datetime column to a new timezone Pandas Dataframe
|
<p>I have the following dataframe, named 'ORDdataM', with a DateTimeIndex column 'date', and a price point column 'ORDprice'. The date column has no timezone associated with it (and is naive) but is actually in 'Australia/ACT'. I want to convert it into 'America/New_York' time.</p>
<pre><code> ORDprice
date
2021-02-23 18:09:00 24.01
2021-02-23 18:14:00 23.91
2021-02-23 18:19:00 23.98
2021-02-23 18:24:00 24.00
2021-02-23 18:29:00 24.04
... ...
2021-02-25 23:44:00 23.92
2021-02-25 23:49:00 23.88
2021-02-25 23:54:00 23.92
2021-02-25 23:59:00 23.91
2021-02-26 00:09:00 23.82
</code></pre>
<p>The line below is one that I have played around with quite a bit, but I cannot figure out what is wrong. The only error message is <code>KeyError: 'date'</code>.</p>
<p><code>ORDdataM['date'] = ORDdataM['date'].dt.tz_localize('Australia/ACT').dt.tz_convert('America/New_York')</code></p>
<p>I have also tried</p>
<p><code>ORDdataM.date = ORDdataM.date.dt.tz_localize('Australia/ACT').dt.tz_convert('America/New_York')</code></p>
<p>What is the issue here?</p>
|
<p>Your <code>date</code> is the index, not a column; try:</p>
<pre><code>df.index = df.index.tz_localize('Australia/ACT').tz_convert('America/New_York')
df
# ORDprice
#date
#2021-02-23 02:09:00-05:00 24.01
#2021-02-23 02:14:00-05:00 23.91
#2021-02-23 02:19:00-05:00 23.98
#2021-02-23 02:24:00-05:00 24.00
#2021-02-23 02:29:00-05:00 24.04
#2021-02-25 07:44:00-05:00 23.92
#2021-02-25 07:49:00-05:00 23.88
#2021-02-25 07:54:00-05:00 23.92
#2021-02-25 07:59:00-05:00 23.91
#2021-02-25 08:09:00-05:00 23.82
</code></pre>
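<p>For completeness, if <code>date</code> were a regular column rather than the index, the original <code>.dt</code> accessor version would have worked; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"date": pd.to_datetime(["2021-02-23 18:09:00"]),
                   "ORDprice": [24.01]})
df["date"] = (df["date"].dt.tz_localize("Australia/ACT")
                        .dt.tz_convert("America/New_York"))
print(str(df["date"].iloc[0]))  # 2021-02-23 02:09:00-05:00
```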
|
python|pandas|datetime|timezone|datetimeindex
| 4
|
4,589
| 66,681,835
|
Pandas: Remove rows except the first new occurrence of a value
|
<p>I have a dataframe</p>
<pre><code>x y
a 1
b 1
c 1
d 0
e 0
f 0
g 1
h 1
i 0
j 0
</code></pre>
<p>I want to remove the rows with 0, except for every first new occurrence of 0 after a 1, so the resultant dataframe should be</p>
<pre><code>x y
a 1
b 1
c 1
d 0
g 1
h 1
i 0
</code></pre>
<p>Is it possible to do it without creating groups or row by row iteration to make it faster since I have a big dataframe.</p>
|
<p>Check for consecutive repeats using <code>shift()</code>: keep a row if <code>y</code> is non-zero, or if it is zero but the previous value was non-zero.</p>
<pre><code> df[df.y.ne(0)|(df.y.eq(0)&df.y.shift(1).ne(0))]
x y
0 a 1
1 b 1
2 c 1
3 d 0
6 g 1
7 h 1
8 i 0
</code></pre>
|
python|pandas|dataframe|numpy
| 2
|
4,590
| 72,996,734
|
Create columns from another column which is a list of items
|
<p>Let's say that I have a <code>DataFrame</code> with column <code>A</code> which is a list of strings of the form "Type:Value" where <code>Type</code> can have 5 different values and <code>Value</code> can be anything. What I would like to do is to create new 5 columns (each having appropriate <code>Type</code> name) where the value in each column would be the list of items which has a given <code>Type</code>. So if I have (1 row for simplicity):</p>
<pre><code>df = pd.DataFrame({"A": [["Type1:Value1", "Type2:Value2", "Type1:Value3"]]})
</code></pre>
<p>then the result should be:</p>
<pre><code>df = pd.DataFrame({"Type1": [["Value1", "Value3"]], "Type2": [["Value2"]]})
</code></pre>
|
<p>One solution. This could be done in a loop as well, but since the number of columns is small, the code is written out explicitly.</p>
<pre><code>df = pd.DataFrame({"A": [["Type1:Value1", "Type2:Value2", "Type1:Value3"]]})
df[['x','y','z']] = df.A[0]
df['type1'] = df.x.str.split(':').str[1]
df['type2'] = "[[" + df.y.str.split(':').str[1] + "]]"
df['type1'] = "[[" + df['type1'] + "," + df.z.str.split(':').str[1] + "]]"
print(df.drop(['A','x','y','z'], axis='columns'))

               type1       type2
0  [[Value1,Value3]]  [[Value2]]
</code></pre>
|
python-3.x|pandas
| 0
|
4,591
| 70,631,234
|
creating series_route from multiple chunks of string in list
|
<p>I am working on route-building code and have half a million records, which take around 3-4 hours to process.</p>
<p>For creating dataframe:</p>
<pre><code># initialize list of lists
data = [[['1027', '(K)', 'TRIM']], [['SJCL', '(K)', 'EJ00', '(K)', 'ZQFC', '(K)', 'DYWH']]]
# Create the pandas DataFrame
df = pd.DataFrame(data, columns = ['route'])
</code></pre>
<p>It will look something like this:</p>
<pre><code>route
[1027, (K), TRIM]
[SJCL, (K), EJ00, (K), ZQFC, (K), DYWH]
</code></pre>
<p>The code I have used:</p>
<pre><code>def func_list(hd1):
required_list=[]
for j,i in enumerate(hd1):
#print(i,j)
if j==0:
req=i
else:
if (i[0].isupper() or i[0].isdigit()):
required_list.append(req)
req=i
else:
req=req+i
required_list.append(req)
return required_list
df['route2'] = df.route.apply(func_list)
#op
route2
[1027(K), TRIM]
[SJCL(K), EJ00(K), ZQFC(K), DYWH]
</code></pre>
<p>For half a million rows it takes 3-4 hours; I don't know how to reduce that, please help.</p>
|
<p>Use <code>explode</code> to flatten your dataframe:</p>
<pre><code>sr1 = df['route'].explode()
sr2 = pd.Series(np.where(sr1.str[0] == '(', sr1.shift() + sr1, sr1), index=sr1.index)
df['route'] = sr2[sr1.eq(sr2).shift(-1, fill_value=True)].groupby(level=0).apply(list)
print(df)
# Output:
0 [1027(K), TRIM]
1 [SJCL(K), EJ00(K), ZQFC(K), DYWH]
dtype: object
</code></pre>
<p>For 500K records:</p>
<pre><code>7.46 s ± 97.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
</code></pre>
|
python|pandas
| 1
|
4,592
| 70,396,644
|
VCF file is missing mandatory header line ("#CHROM...")
|
<p>I am getting an error when reading a VCF file with the <strong>scikit-allel</strong> library inside a Docker image (Ubuntu 18.04). It shows:</p>
<pre><code>raise RuntimeError('VCF file is missing mandatory header line ("#CHROM...")')
RuntimeError: VCF file is missing mandatory header line ("#CHROM...")
</code></pre>
<p>But in the VCF file is well-formatted.</p>
<p>Here is the code showing how I called it:</p>
<pre><code>import pandas as pd
import os
import numpy as np
import allel
import tkinter as tk
from tkinter import filedialog
import matplotlib.pyplot as plt
from scipy.stats import norm
GenomeVariantsInput = allel.read_vcf('quartet_variants_annotated.vcf', samples=['ISDBM322015'],fields=[ 'variants/CHROM', 'variants/ID', 'variants/REF',
'variants/ALT','calldata/GT'])
</code></pre>
<p>version what Installed :
Python 3.6.9
Numpy 1.19.5
pandas 1.1.5
scikit-allel 1.3.5</p>
|
<p>You need to add a line like this as the header line (the first non-<code>##</code> line) of the file:</p>
<p><code>#CHROM POS ID REF ALT QUAL FILTER INFO FORMAT NA00001 NA00002 NA00003 </code></p>
<p>but it is not the same for every file; you have to build a <code>#CHROM</code> header line like the one above that matches your file. (I suggest trying this header first, and customising it if you still get an error.)</p>
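<p>A quick sanity check before handing the file to <code>scikit-allel</code> is to scan for the first non-meta line; a sketch:</p>

```python
def has_chrom_header(vcf_lines):
    """True if the first non-'##' line is the mandatory '#CHROM...' header."""
    for line in vcf_lines:
        if line.startswith("##"):  # meta-information lines
            continue
        return line.startswith("#CHROM")
    return False

good = ["##fileformat=VCFv4.2",
        "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
        "1\t100\t.\tA\tT\t.\tPASS\t."]
print(has_chrom_header(good))  # True
```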
|
pandas|numpy|scikit-learn|python-3.6|vcftools
| 0
|
4,593
| 51,496,017
|
Pandas >0.20 indexing with labels and position for a writing operation
|
<p>Since the <code>ix</code> indexer has been deprecated since version 0.20, how should I update this line?</p>
<pre><code>df_final.ix[int(len(df_final)/2):, 'type'] = 1
</code></pre>
<p>I tried this:</p>
<pre><code>df_final['type'][int(len(df_final)/2):]
</code></pre>
<p>and works well for reading operations (not the most efficient because of the double indexing... but works). But for writing</p>
<pre><code>df_final['type'][int(len(df_final)/2):] = 0
</code></pre>
<p>I got</p>
<blockquote>
<p>SettingWithCopyWarning: A value is trying to be set on a copy of a
slice from a DataFrame See the caveats in the documentation:
<a href="http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy" rel="nofollow noreferrer">http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy</a></p>
</blockquote>
<p>I somehow overcame this limitation by doing this:</p>
<pre><code>target_feature_index = list(df_final.columns).index('type')
df_final.iloc[int(len(df_final)/2):, target_feature_index] = 0
</code></pre>
<p>It looks to me like a workaround. Is there a better way?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>Index.get_loc</code></a> for position of column <code>type</code>:</p>
<pre><code>df_final = pd.DataFrame({
'A': ['a','a','a','a','b','b','b'],
'type': list(range(7))
})
print (df_final)
A type
0 a 0
1 a 1
2 a 2
3 a 3
4 b 4
5 b 5
6 b 6
df_final.iloc[int(len(df_final)/2):, df_final.columns.get_loc('type')] = 0
print(df_final)
A type
0 a 0
1 a 1
2 a 2
3 a 0
4 b 0
5 b 0
6 b 0
</code></pre>
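<p>A label-based alternative sketch, in case you prefer not to look up the column position: <code>.loc</code> with a list of row labels and the column name writes in a single step, so no <code>SettingWithCopyWarning</code> is raised.</p>

```python
import pandas as pd

df_final = pd.DataFrame({
    'A': ['a', 'a', 'a', 'a', 'b', 'b', 'b'],
    'type': list(range(7))
})

# df_final.index[half:] yields the row labels for the second half,
# which .loc accepts directly alongside the column label.
half = len(df_final) // 2
df_final.loc[df_final.index[half:], 'type'] = 0
```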
|
python|pandas
| 0
|
4,594
| 51,215,569
|
Optimizing shuffle buffer size in tensorflow dataset api
|
<p>I'm trying to use the <code>dataset</code> API to load data and find that I'm spending a majority of the time loading data into the shuffle buffer. How might I optimize this pipeline in order to minimize the amount of time spent populating the shuffle buffer?</p>
<pre><code>(tf.data.Dataset.list_files(path)
.shuffle(num_files) # number of tfrecord files
.apply(tf.contrib.data.parallel_interleave(lambda f: tf.data.TFRecordDataset(f), cycle_length=num_files))
.shuffle(num_items) # number of images in the dataset
.map(parse_func, num_parallel_calls=8)
.map(get_patches, num_parallel_calls=8)
.apply(tf.contrib.data.unbatch())
# Patch buffer is currently the number of patches extracted per image
.apply(tf.contrib.data.shuffle_and_repeat(patch_buffer))
.batch(64)
.prefetch(1)
.make_one_shot_iterator())
</code></pre>
|
<p>Since I have at most thousands of images, my solution to this problem was to have a separate tfrecord file per image. That way individual images could be shuffled without having to load them into memory first. This drastically reduced the buffering that needed to occur.</p>
|
python|tensorflow|tensorflow-datasets
| 2
|
4,595
| 51,402,943
|
Tensorflow Saver restores all variables no matter which ones I specified
|
<p>I'm trying to save and restore a subset of variables from a Tensorflow graph, so that everything I don't need is discarded and its weights don't take up memory. The common advice to pass a list or dict of the desired variables to <code>tf.train.Saver</code> doesn't work: the saver restores all the variables no matter what. </p>
<p>A minimal working example:</p>
<pre><code>import os
import tensorflow as tf
sess = tf.Session()
with sess.as_default():
v1 = tf.get_variable("v1", [5, 5, 3])
v2 = tf.get_variable("v2", [5, 5, 3])
saver = tf.train.Saver([v2])
initializer2 = tf.variables_initializer([v1, v2])
sess.run(initializer2)
saver.save(sess, '/path/to/tf_model')
sess2 = tf.Session()
checkpoint = '/path/to/tf_model.meta'
saver.restore(sess2, tf.train.latest_checkpoint(os.path.dirname(checkpoint)))
with sess2.as_default(), sess2.graph.as_default():
loaded_vars = tf.trainable_variables()
print(loaded_vars)
</code></pre>
<p>outputs </p>
<pre><code>[<tf.Variable 'v1:0' shape=(5, 5, 3) dtype=float32_ref>,
<tf.Variable 'v2:0' shape=(5, 5, 3) dtype=float32_ref>]
</code></pre>
<p>Nevertheless, <code>print(saver._var_list)</code> outputs </p>
<p><code>[<tf.Variable 'v2:0' shape=(5, 5, 3) dtype=float32_ref>]</code></p>
<p>What's wrong here?</p>
|
<p>This is what you want to do. Note that <code>tf.trainable_variables()</code> lists every variable defined in the graph, regardless of which ones the saver actually restored, which is why your print shows both <code>v1</code> and <code>v2</code>. Please examine the code carefully.</p>
<h3>To save the selected variables</h3>
<pre><code>import tensorflow as tf
tf.reset_default_graph()
# =============================================================================
# to save
# =============================================================================
# create variables
v1 = tf.get_variable(name="v1", initializer=[5, 5, 3])
v2 = tf.get_variable(name="v2", initializer=[5, 5, 3])
# initialize variables
init_op = tf.global_variables_initializer()
# ops to save variable v2
saver = tf.train.Saver({"my_v2": v2})
with tf.Session() as sess:
sess.run(init_op)
save_path = saver.save(sess, './tf_vars/model.ckpt')
print("Model saved in file: %s" % save_path)
'Output':
Model saved in file: ./tf_vars/model.ckpt
</code></pre>
<h3>To restore the saved variables</h3>
<pre><code># =============================================================================
# to restore
# =============================================================================
# Create some variables.
v1 = tf.Variable(initial_value=[0, 0, 0], name="v1")
v2 = tf.Variable(initial_value=[0, 0, 0], name="v2")
# initialize variables
init_op = tf.global_variables_initializer()
# ops to restore variable v2.
saver = tf.train.Saver({"my_v2": v2})
with tf.Session() as sess:
sess.run(init_op)
# Restore variables from disk.
saver.restore(sess, './tf_vars/model.ckpt')
print("v1: %s" % v1.eval())
print("v2: %s" % v2.eval())
print("V2 variable restored.")
'Output':
v1: [0 0 0]
v2: [5 5 3]
V2 variable restored.
</code></pre>
|
python|tensorflow
| 1
|
4,596
| 51,753,964
|
How to simultaneously sort columns in pandas dataframe
|
<p>Suppose that I want to sort a data frame in Pandas and my data frame looks like this</p>
<pre><code> First Name Last Name Street Address Type
0 Joe Smith 123 Main St. Property Address
1 Gregory Stanton 124 Main St. X Old Property Address
2 Phill Allen 38 Maple St. Alternate Address
3 Joe Smith PO Box 3165 Alternate Address
4 Xi Dong 183 Main St. Property Address
5 Phill Allen 128 Main St. Property Address
</code></pre>
<p>I want to first sort the data frame by last name so that it will look like this:</p>
<pre><code> First Name Last Name Street Address Type
0 Phill Allen 38 Maple St. Alternate Address
1 Phill Allen 128 Main St. Property Address
2 Xi Dong 183 Main St. Property Address
3 Joe Smith 123 Main St. Property Address
4 Joe Smith PO Box 3165 Alternate Address
5 Gregory Stanton 124 Main St. X Old Property Address
</code></pre>
<p>Now, for each person, I want the property address to come before the alternate address (if the person has both a property and an alternate address), so that the dataframe would look like this:</p>
<pre><code> First Name Last Name Street Address Type
0 Phill Allen 128 Main St Property Address
1 Phill Allen 38 Maple St. Alternate Address
2 Xi Dong 183 Main St. Property Address
3 Joe Smith 123 Main St. Property Address
4 Joe Smith PO Box 3165 Alternate Address
5 Gregory Stanton 124 Main St. X Old Property Address
</code></pre>
<p>Notice that Phill Allen's entries got switched in the above data frame because his alternate address came before his property address.
My code looks like this: </p>
<pre><code>duplicates = df[df.duplicated(['Last Name'], keep=False)]
duplicates = duplicates.sort_values(['Last Name'], ascending = True)
duplicates = duplicates.sort_values(['Address Type'], ascending = True)
</code></pre>
<p>I have already tried using</p>
<pre><code>duplicates = df.sort_values(['last', 'Address Type'], ascending = True)
</code></pre>
<p>This does not work because Address Type can take many different values, not just primary/alternate, so a plain ascending or descending sort will not always produce the desired order. </p>
<p>But it does not switch the property address and alternate address into the correct order, because Python first sorts the dataframe by Last Name and then re-sorts it by Address Type. I am looking for code that sorts by last name first and then, within each last name, by address type. Any help would be appreciated.
Thanks!</p>
|
<p>You can sort by multiple columns. Just put both columns in the list.</p>
<pre><code>duplicates = duplicates.sort_values(['Last Name', 'Address Type'], ascending = True)
</code></pre>
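<p>If the address types have a domain-specific precedence rather than an alphabetical one, an ordered categorical makes the two-column sort respect it. A sketch (the column name and category order below are assumed from the example data):</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Last Name': ['Allen', 'Allen', 'Smith', 'Smith'],
    'Type': ['Alternate Address', 'Property Address',
             'Property Address', 'Alternate Address'],
})

# Spell out the desired precedence explicitly; values not listed become NaN.
order = ['Property Address', 'Alternate Address', 'Old Property Address']
df['Type'] = pd.Categorical(df['Type'], categories=order, ordered=True)

result = df.sort_values(['Last Name', 'Type'])
```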
|
python|pandas|sorting|for-loop|columnsorting
| 0
|
4,597
| 51,874,054
|
Python data frame manipulation
|
<p>Maybe my question is too simple, and if so I'm sorry:</p>
<p>I have the following sample data frame (My actual data frame has many rows and columns):</p>
<pre><code>Months =("JAN","FEB","MAR","APR","MAY","JUN")
df = pd.DataFrame(np.random.randn(2, 6), columns=Months).round(1)
</code></pre>
<p><code>df</code></p>
<pre><code> JAN FEB MAR APR MAY JUN
0,1 0,1 1,3 -0,5 -0,3 0,4
-1,2 0,1 1,1 -1,2 0,4 -0,6
</code></pre>
<p>I am trying to create a new data frame which has as values the difference between a month value and the month value 3 months ago. Therefore the output from the specific sample data frame should be:</p>
<pre><code> APR MAY JUN
-0,6 -0,4 -0,9
0 0,3 -1,7
</code></pre>
<p>So the first APR value is (-0,5 - 0,1) = -0,6, etc.</p>
<p>I have tried this:</p>
<pre><code>new_df=pd.DataFrame(0,index = df.index.values, columns = df.columns.values)
for i in list(df.index.values):
for j in list(df.columns.values):
new_df.iloc[i,j] = df.iloc[i,j+3] - df.iloc[i,j]
</code></pre>
<p>I get this error:</p>
<pre><code>----> 3 new_df.iloc[i,j] = df.iloc[i,j+3] - df.iloc[i,j]
TypeError: must be str, not int
</code></pre>
<p>Any help on how I can do this?
Thanks in advance</p>
|
<p>Don't use loops, because they are slow; a vectorized solution exists:</p>
<pre><code>df1 = df.sub(df.shift(3, axis=1)).iloc[:, 3:]
print (df1)
APR MAY JUN
0 -0.6 -0.4 -0.9
1 0.0 0.3 -1.7
</code></pre>
<p><strong>Details</strong>:</p>
<p>First <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.shift.html" rel="nofollow noreferrer"><code>shift</code></a> values:</p>
<pre><code>print (df.shift(3, axis=1))
JAN FEB MAR APR MAY JUN
0 NaN NaN NaN 0.1 0.1 1.3
1 NaN NaN NaN -1.2 0.1 1.1
</code></pre>
<p>Then subtract by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sub.html" rel="nofollow noreferrer"><code>sub</code></a>:</p>
<pre><code>print (df.sub(df.shift(3, axis=1)))
JAN FEB MAR APR MAY JUN
0 NaN NaN NaN -0.6 -0.4 -0.9
1 NaN NaN NaN 0.0 0.3 -1.7
</code></pre>
<p>And last remove first <code>3</code> columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a>:</p>
<pre><code>df1 = df.sub(df.shift(3, axis=1)).iloc[:, 3:]
</code></pre>
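<p>As a side note, <code>DataFrame.diff</code> along <code>axis=1</code> computes the same shift-and-subtract in a single call; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame([[0.1, 0.1, 1.3, -0.5, -0.3, 0.4],
                   [-1.2, 0.1, 1.1, -1.2, 0.4, -0.6]],
                  columns=["JAN", "FEB", "MAR", "APR", "MAY", "JUN"])

# diff(3, axis=1) equals df - df.shift(3, axis=1); drop the 3 NaN columns.
df1 = df.diff(3, axis=1).iloc[:, 3:]
```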
|
python|pandas
| 1
|
4,598
| 36,195,904
|
Combine two rows with the same class using Pandas
|
<p>Here is my question. </p>
<pre><code> ## data for example
Name type Value1 Value2 Value3 Value4
A unemp 1.733275e+09 2.067889e+09 3.279421e+09 3.223396e+09
B unemp 1.413758e+09 2.004171e+09 2.383106e+09 2.540857e+09
C unemp 1.287548e+09 1.462072e+09 2.831217e+09 3.528558e+09
A unemp 2.651480e+09 2.846055e+09 5.882084e+09 5.247459e+09
D unemp 2.257016e+09 4.121532e+09 4.961291e+09 5.330930e+09
C unemp 7.156784e+08 1.182770e+09 1.704251e+09 2.587171e+09
E emp 6.012397e+09 9.692455e+09 2.288822e+10 3.215460e+10
F emp 5.647393e+09 9.597211e+09 2.121828e+10 3.107219e+10
G emp 4.617047e+09 8.030113e+09 2.005203e+10 2.755665e+10
</code></pre>
<p>My target: <em>compare the "Name" column and combine rows with the same "Name"</em>. </p>
<p>Using code below: </p>
<pre><code> f_test = pd.read_clipboard()
f_test.groupby('Name').sum().reset_index()
</code></pre>
<p><a href="https://i.stack.imgur.com/35wFY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/35wFY.png" alt="enter image description here"></a></p>
<p>The result looks like this,
but how do I retain the "type" column? Any advice would be appreciated!</p>
|
<p>You can <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow"><code>merge</code></a> the result with a column-subset of the original DataFrame:</p>
<pre><code>>>> pd.merge(
f_test.groupby('Name').sum().reset_index(),
f_test[['Name', 'type']].drop_duplicates(),
how='right')
Name Value1 Value2 Value3 Value4 type
0 A 4.384755e+09 4.913944e+09 9.161505e+09 8.470855e+09 unemp
1 B 1.413758e+09 2.004171e+09 2.383106e+09 2.540857e+09 unemp
2 C 2.003226e+09 2.644842e+09 4.535468e+09 6.115729e+09 unemp
3 D 2.257016e+09 4.121532e+09 4.961291e+09 5.330930e+09 unemp
4 E 6.012397e+09 9.692455e+09 2.288822e+10 3.215460e+10 emp
5 F 5.647393e+09 9.597211e+09 2.121828e+10 3.107219e+10 emp
6 G 4.617047e+09 8.030113e+09 2.005203e+10 2.755665e+10 emp
</code></pre>
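<p>Alternatively, since each <code>Name</code> maps to a single <code>type</code> in this data, you can include <code>type</code> in the grouping key and skip the merge entirely. A sketch with made-up numbers:</p>

```python
import pandas as pd

f_test = pd.DataFrame({
    'Name': ['A', 'B', 'A'],
    'type': ['unemp', 'unemp', 'unemp'],
    'Value1': [1.0, 2.0, 3.0],
})

# Grouping on both columns keeps 'type' in the summed result.
out = f_test.groupby(['Name', 'type'], as_index=False).sum()
```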
|
python|pandas
| 2
|
4,599
| 37,533,921
|
Google Appengine application written in Java with access to Python+Numpy+Scipy?
|
<p>I have a rather big App Engine application written entirely in Java. I need to obtain results from functions written entirely in Python (3.x if possible) that use numpy and similar packages. </p>
<p>What is the best way to do it?</p>
|
<p>I can think of two options.</p>
<ol>
<li>You can write an App Engine Python module (now called <a href="https://cloud.google.com/appengine/docs/java/an-overview-of-app-engine#services_the_building_blocks_of_app_engine" rel="nofollow">services</a>) using the standard or <a href="https://cloud.google.com/appengine/docs/flexible/" rel="nofollow" title="Flexible Environment">Flexible Environment</a>. Your default module can then access this special module via HTTP requests, synchronously via the <a href="https://cloud.google.com/appengine/docs/java/outbound-requests" rel="nofollow">URL Fetch API</a> or asynchronously via the <a href="https://cloud.google.com/appengine/docs/java/taskqueue/push/" rel="nofollow">Push Queue API</a>; either way, this can only be done via *.appspot.com URLs.
<a href="https://developers.google.com/appengine/docs/java/modules/#Java_Communication_between_modules" rel="nofollow">Here</a> are the official docs on communication between modules; they use the ModuleService API, which resolves module addresses to *.appspot.com addresses.</li>
<li>You can try to execute Python code using the PythonInterpreter class (from Jython), but I'm not sure whether the sandbox allows it.</li>
</ol>
|
java|python|google-app-engine|numpy|scipy
| 1
|