Unnamed: 0 | id | title | question | answer | tags | score
|---|---|---|---|---|---|---|
5,300
| 60,563,115
|
Pytorch error: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDATensorId' backend
|
<p>I am training a CNN on a CUDA GPU which takes 3D medical images as input and outputs a classifier. I suspect there may be a bug in PyTorch. I am running PyTorch 1.4.0. The GPU is a 'Tesla P100-PCIE-16GB'. When I run the model on CUDA I get the error</p>
<pre><code>Traceback (most recent call last):
File "/home/ub/miniconda3/lib/python3.7/site-packages/IPython/core/interactiveshell.py", line 3331, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-55-cc0dd3d9cbb7>", line 1, in <module>
net(cc)
File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "<ipython-input-2-19e11966d1cd>", line 181, in forward
out = self.layer1(x)
File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 100, in forward
input = module(input)
File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
result = self.forward(*input, **kwargs)
File "/home/ub/miniconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 480, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Could not run 'aten::slow_conv3d_forward' with arguments from the 'CUDATensorId' backend. 'aten::slow_conv3d_forward' is only available for these backends: [CPUTensorId, VariableTensorId].
</code></pre>
<p>To replicate the issue:</p>
<pre><code>#input is a 64,64,64 3d image batch with 2 channels
class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.layer1 = nn.Sequential(
            nn.Conv3d(2, 32, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2, stride=2))
        self.layer2 = nn.Sequential(
            nn.Conv3d(32, 64, kernel_size=5, stride=1, padding=2),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2, stride=2))
        self.drop_out = nn.Dropout()
        self.fc1 = nn.Linear(16 * 16*16 * 64, 1000)
        self.fc2 = nn.Linear(1000, 2)
        # self.softmax = nn.LogSoftmax(dim=1)

    def forward(self, x):
        # print(out.shape)
        out = self.layer1(x)
        # print(out.shape)
        out = self.layer2(out)
        # print(out.shape)
        out = out.reshape(out.size(0), -1)
        # print(out.shape)
        out = self.drop_out(out)
        # print(out.shape)
        out = self.fc1(out)
        # print(out.shape)
        out = self.fc2(out)
        # out = self.softmax(out)
        # print(out.shape)
        return out

net = ConvNet()
input = torch.randn(16, 2, 64, 64, 64)
net(input)
</code></pre>
|
<p>Initially, I thought the error message indicated that <code>'aten::slow_conv3d_forward'</code> is not implemented for GPU (CUDA). But after looking at your network, that did not make sense to me, since Conv3D is a very basic op and the PyTorch team has certainly implemented it in CUDA.</p>
<p>Then I dug into the source code a bit and found that the input is not a CUDA tensor, which causes the problem.</p>
<p>Here is a working sample:</p>
<pre class="lang-py prettyprint-override"><code>import torch
from torch import nn
#input is a 64,64,64 3d image batch with 2 channels
class ConvNet(nn.Module):
def __init__(self):
super(ConvNet, self).__init__()
self.layer1 = nn.Sequential(
nn.Conv3d(2, 32, kernel_size=5, stride=1, padding=2),
nn.ReLU(),
nn.MaxPool3d(kernel_size=2, stride=2))
self.layer2 = nn.Sequential(
nn.Conv3d(32, 64, kernel_size=5, stride=1, padding=2),
nn.ReLU(),
nn.MaxPool3d(kernel_size=2, stride=2))
self.drop_out = nn.Dropout()
self.fc1 = nn.Linear(16 * 16*16 * 64, 1000)
self.fc2 = nn.Linear(1000, 2)
# self.softmax = nn.LogSoftmax(dim=1)
def forward(self, x):
# print(out.shape)
out = self.layer1(x)
# print(out.shape)
out = self.layer2(out)
# print(out.shape)
out = out.reshape(out.size(0), -1)
# print(out.shape)
out = self.drop_out(out)
# print(out.shape)
out = self.fc1(out)
# print(out.shape)
out = self.fc2(out)
# out = self.softmax(out)
# print(out.shape)
return out
net = ConvNet()
input = torch.randn(16, 2, 64, 64, 64)
net.cuda()
input = input.cuda() # IMPORTANT to reassign your tensor
net(input)
</code></pre>
<p>Remember that when you move a model from CPU to GPU you can call <code>.cuda()</code> directly, but when you move a tensor from CPU to GPU you need to reassign it, e.g. <code>tensor = tensor.cuda()</code>, instead of only calling <code>tensor.cuda()</code>. Hope that helps.</p>
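<p>As a side note (not part of the original answer), an equivalent pattern that makes this asymmetry harder to miss is <code>.to(device)</code>, which works for both modules and tensors; a minimal sketch:</p>
<pre class="lang-py prettyprint-override"><code>import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

net = ConvNet().to(device)                           # moves the module's parameters in place
input = torch.randn(16, 2, 64, 64, 64).to(device)    # returns a NEW tensor; must be assigned
out = net(input)
</code></pre>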
<p>Output: </p>
<pre class="lang-py prettyprint-override"><code>tensor([[-0.1588, 0.0680],
[ 0.1514, 0.2078],
[-0.2272, -0.2835],
[-0.1105, 0.0585],
[-0.2300, 0.2517],
[-0.2497, -0.1019],
[ 0.1357, -0.0475],
[-0.0341, -0.3267],
[-0.0207, -0.0451],
[-0.4821, -0.0107],
[-0.1779, 0.1247],
[ 0.1281, 0.1830],
[-0.0595, -0.1259],
[-0.0545, 0.1838],
[-0.0033, -0.1353],
[ 0.0098, -0.0957]], device='cuda:0', grad_fn=<AddmmBackward>)
</code></pre>
|
debugging|pytorch
| 18
|
5,301
| 72,792,770
|
How to save weights in tensorflow federated
|
<p>I want to save weights only when loss is getting lower and reuse them for evaluation.</p>
<pre><code>lowest_loss = float('inf')
if loss[round] < lowest_loss:
    lowest_loss = loss[round]
    model_weights = transfer_learning_iterative_process.get_model_weights(state)
    eval_metric = federated_eval(model_weights, [fed_valid_data])
</code></pre>
<p>where:</p>
<pre><code> federated_eval = tff.learning.build_federated_evaluation(model_fn)
</code></pre>
<p>Is there a possible way to save server weights in hdf5 format or as a checkpoint and reuse it?</p>
|
<p>Yes, this can be done with helpers in TFF. Generally, this kind of functionality is implemented by <a href="https://www.tensorflow.org/federated/api_docs/python/tff/program/ProgramStateManager" rel="nofollow noreferrer"><code>tff.program.ProgramStateManagers</code></a>. An implementation which saves to a filesystem can be found <a href="https://www.tensorflow.org/federated/api_docs/python/tff/program/FileProgramStateManager" rel="nofollow noreferrer">here</a>, and example usages can be found in the implementation of <a href="https://github.com/tensorflow/federated/blob/v0.28.0/tensorflow_federated/python/simulation/training_loop.py#L68-L185" rel="nofollow noreferrer"><code>tff.simulation.run_training_process</code></a>.</p>
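<p>If you specifically want an HDF5 file rather than the TFF program-state machinery, a minimal sketch (assuming your <code>model_fn</code> wraps a Keras model built by a <code>create_keras_model()</code> helper of yours) is to push the server weights into a Keras model and use its own saving API:</p>
<pre><code>keras_model = create_keras_model()  # same architecture used inside model_fn (your helper)

# copy the current server weights into the Keras model
model_weights = transfer_learning_iterative_process.get_model_weights(state)
model_weights.assign_weights_to(keras_model)

keras_model.save_weights('best_weights.h5')   # HDF5 checkpoint
# later: keras_model.load_weights('best_weights.h5') and re-wrap it for evaluation
</code></pre>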
|
python|tensorflow|tensorflow-federated|federated-learning
| 1
|
5,302
| 72,825,479
|
PyTorch index in a batch
|
<p>Given tensor <code>IN</code> of shape <code>(A, B, C, D)</code> and index tensor <code>IDX</code> of shape <code>[A, B, C]</code> with <code>torch.long</code> values in <code>[0, C)</code>, how can I get a tensor <code>OUT</code> of shape <code>(A, B, C, D)</code> such that:</p>
<pre><code>OUT[a, b, c, :] == IN[a, b, IDX[a, b, c], :]
</code></pre>
<p>This is trivial without dimensions <code>A</code> and <code>B</code>:</p>
<pre class="lang-py prettyprint-override"><code># C = 2, D = 3
IN = torch.arange(6).view(2, 3)
IDX = torch.tensor([0,0])
print(IN[IDX])
# tensor([[0, 1, 2],
# [0, 1, 2]])
</code></pre>
<p>Obviously, I can write a nested for loop over A and B. But surely there must be a vectorized way to do it?</p>
|
<p>This is the perfect use case for <a href="https://pytorch.org/docs/stable/generated/torch.gather.html" rel="nofollow noreferrer"><code>torch.gather</code></a>. Given two 4d tensors, <code>input</code> the input tensor and <code>index</code> the tensor containing the indices for <code>input</code>, calling <code>torch.gather</code> on <code>dim=2</code> will return a tensor out shaped like <code>input</code> such that:</p>
<pre><code>out[i][j][k][l] = input[i][j][index[i][j][k][l]][l]
</code></pre>
<p>In other words, <code>index</code> indexes dimension n°3 of <code>input</code>.</p>
<p>Before applying such function though, notice all tensors must have the same number of dimensions. Since <code>index</code> is only 3d, we need to insert and expand an additional 4th dimension on it. We can do so with the following lines:</p>
<pre><code>>>> idx_ = idx[...,None].expand_as(x)
</code></pre>
<p>Then call the <code>torch.gather</code> function</p>
<pre><code>>>> x.gather(dim=2, index=idx_)
</code></pre>
<hr />
<p>You can try out the solution with this code:</p>
<pre><code>>>> A = 1; B = 2; C=3; D=2
>>> x = torch.rand(A,B,C,D)
tensor([[[[0.6490, 0.7670],
[0.7847, 0.9058],
[0.3606, 0.7843]],
[[0.0666, 0.7306],
[0.1923, 0.3513],
[0.5287, 0.3680]]]])
>>> idx = torch.randint(0, C, (A,B,C))
tensor([[[1, 2, 2],
[0, 0, 1]]])
>>> x.gather(dim=2, index=idx[...,None].expand_as(x))
tensor([[[[0.7847, 0.9058],
[0.3606, 0.7843],
[0.3606, 0.7843]],
[[0.0666, 0.7306],
[0.0666, 0.7306],
[0.1923, 0.3513]]]])
</code></pre>
|
python|pytorch
| 1
|
5,303
| 72,785,289
|
python script call another script with dataframes as arguments and return a new dataframe
|
<p>I need to make a script in Python that calls a list of scripts that I have stored in a dictionary this way:</p>
<pre><code>TestsDictionary = {
"Test_1": 1,
"Test_2": 0,
"Test_3": 0,
"Test_4": 1,
"Test_5": 0,
"Test_6": 0
}
</code></pre>
<p>In this example the script will only have to execute the Test_1 and Test_4 scripts. These scripts receive 2 dataframes as arguments and return a new dataframe.</p>
<p>I loop in the dictionary to store the test to run in a list this way:</p>
<pre><code>TestsToRun = []
for test in TestsDictionary:
    if(TestsDictionary[test]):
        TestsToRun.append(test)
</code></pre>
<p>So I'll have to execute the tests stored in TestsToRun, but I don't know how to run them and pass the needed arguments.</p>
|
<p>Assuming all the scripts are in the same directory:</p>
<pre class="lang-py prettyprint-override"><code>import os
TestsToRun = []
for test in TestsDictionary:
if(TestsDictionary[test]):
TestsToRun.append("python "+test+".py")
for test in TestsToRun:
os.system(test)
</code></pre>
|
python|pandas|dataframe|scripting|script
| 0
|
5,304
| 59,547,109
|
Tensorflow 2.0 save preprocessing tokenizer for nlp into tensorflow serving
|
<p>I have trained a tensorflow 2.0 keras model to do some natural language processing.</p>
<p>What I am doing basically is getting the titles of different news articles and predicting what category they belong to. In order to do that I have to tokenize the sentences and then add 0s to fill the array so it has the same length that I defined:</p>
<pre><code>from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

max_words = 1500

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(x.values)
X = tokenizer.texts_to_sequences(x.values)
X = pad_sequences(X, maxlen=32)

from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM, GRU, InputLayer

numero_clases = 5

modelo_sentimiento = Sequential()
modelo_sentimiento.add(InputLayer(input_tensor=tokenizer.texts_to_sequences, input_shape=(None, 32)))
modelo_sentimiento.add(Embedding(max_words, 128, input_length=X.shape[1]))
modelo_sentimiento.add(LSTM(256, dropout=0.2, recurrent_dropout=0.2, return_sequences=True))
modelo_sentimiento.add(LSTM(256, dropout=0.2, recurrent_dropout=0.2))
modelo_sentimiento.add(Dense(numero_clases, activation='softmax'))

modelo_sentimiento.compile(loss='categorical_crossentropy', optimizer='adam',
                           metrics=['acc', f1_m, precision_m, recall_m])
print(modelo_sentimiento.summary())
</code></pre>
<p>Now that it is trained, I want to deploy it, for example with TensorFlow Serving, but I don't know how to save this preprocessing (the tokenizer) to the server, like making a scikit-learn pipeline. Is it possible to do that here, or do I have to save the tokenizer, do the preprocessing myself, and then call the trained model to predict?</p>
|
<p>Unfortunately, you won't easily be able to do something as elegant as a <code>sklearn</code> Pipeline with Keras models (at least not that I'm aware of). Of course you'd be able to create your own Transformer which will achieve the preprocessing you need. But given my experience trying to incorporate custom objects in sklearn pipelines, I don't think it's worth the effort.</p>
<p>What you can do is save the tokenizer along with metadata using,</p>
<pre><code>import pickle

with open('tokenizer_data.pkl', 'wb') as handle:
    pickle.dump(
        {'tokenizer': tokenizer, 'num_words': num_words, 'maxlen': pad_len}, handle)
</code></pre>
<p>And then load it when you want to use it,</p>
<pre><code>with open("tokenizer_data.pkl", 'rb') as f:
data = pickle.load(f)
tokenizer = data['tokenizer']
num_words = data['num_words']
maxlen = data['maxlen']
</code></pre>
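<p>From there, serving a prediction "by yourself" boils down to repeating the same preprocessing before calling the model. A minimal sketch (the <code>titles</code> list and the <code>model</code> variable are placeholders for your own data and loaded model):</p>
<pre><code>titles = ["some news headline to classify"]

sequences = tokenizer.texts_to_sequences(titles)
padded = pad_sequences(sequences, maxlen=maxlen)

predictions = model.predict(padded)   # shape: (len(titles), numero_clases)
</code></pre>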
|
tensorflow|machine-learning|deep-learning|tensorflow-serving
| 3
|
5,305
| 59,604,783
|
Fast way of turning categorical Pandas series to string
|
<p>I have a series that is categorical.</p>
<p>At the moment I am mapping to string using the following code.</p>
<pre><code>import pandas as pd
import numpy as np
test = np.random.rand(int(5e6))
test[0] = np.nan
test_cut = pd.cut(test,(-np.inf,0.2,0.4,np.inf))
test_str = test_cut.astype('str')
test_str[test_str.isna()] = 'missing'
</code></pre>
<p>This astype('str') operation is very slow, is there a way to speed this up?</p>
<p>Based on the link below, I understand that apply is faster than astype. I tried the following.</p>
<pre><code>test_str = test_cut.apply(str)
#AttributeError: 'Categorical' object has no attribute 'apply'
test_str = test_cut.map(str)
# still categorical type
test_str = test_cut.values.astype(str)
# AttributeError: 'Categorical' object has no attribute 'values'
</code></pre>
<p><a href="https://stackoverflow.com/questions/49371629/converting-a-series-of-ints-to-strings-why-is-apply-much-faster-than-astype">Converting a series of ints to strings - Why is apply much faster than astype?</a></p>
<p>I do not care about the exact string representations of the categories, only that the groups are preserved and converted to strings.</p>
<p>As an alternative, is there a way to define a new category in the test_cut categorical 'Missing' (or something else), and set the 'missing' cases in 'test' to this category?</p>
<pre><code># some code to create 'MISSING' category
test_cat[test_str.isna()] = 'MISSING'
</code></pre>
|
<p>Use the <code>labels</code> parameter to generate strings instead of <code>pd.Interval</code> objects:</p>
<pre><code>breaks = [-np.inf, .2, .4, np.inf]
test_cut = pd.cut(test,breaks, labels=pd.IntervalIndex.from_breaks(breaks).astype(str))
</code></pre>
<p>Try timings with this code.</p>
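<p>Regarding the alternative in the question (adding a 'MISSING' category instead of converting to strings), a minimal sketch using the categorical accessor would be:</p>
<pre><code>test_cat = pd.Series(test_cut)
test_cat = test_cat.cat.add_categories('MISSING').fillna('MISSING')
</code></pre>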
|
python|pandas|optimization|categories
| 1
|
5,306
| 59,821,801
|
how to let pandas show every data row
|
<p>When I use pandas and I have a really large number of rows, it shortens the output by telling me: <code>and 543512 more rows</code>,
but I want to write all the rows to a file. How is that possible?</p>
|
<p>There is an option called <code>display.max_rows</code>. Setting it to <code>None</code> means unlimited:</p>
<pre><code>pd.set_option('display.max_rows', None)
</code></pre>
<p>But writing the data to a file can also be done with pandas' <code>to_csv</code> function or with <code>np.savetxt</code>. This depends on which format you want.</p>
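<p>For example, a minimal sketch (the file name is just a placeholder):</p>
<pre><code>df.to_csv('all_rows.csv', index=False)   # writes every row, no display truncation
</code></pre>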
|
python|python-3.x|pandas
| 1
|
5,307
| 61,989,691
|
Python Pandas join a few files
|
<p>I import a few xlsx files into a pandas dataframe. It works fine, but my problem is that it copies all the data under each other (so I have 10 excel files with 100 lines = 1000 lines).</p>
<p>I need the Dataframe with 100 lines and 10 columns, so each file will be copied next to each other and not below.</p>
<p>Are there any ideas how to do it?</p>
<pre><code>import os
import pandas as pd

os.chdir('C:/Users/folder/')
path = ('C:/Users/folder/')
files = os.listdir(path)

allNames = pd.DataFrame()
for f in files:
    info = pd.read_excel(f, 'Sheet1')
    allNames = allNames.append(info)

writer = pd.ExcelWriter('Output.xlsx')
allNames.to_excel(writer, 'Copy')
writer.save()
</code></pre>
|
<p>You can feed your spreadsheets as an array of dataframes directly to <code>pd.concat()</code>:</p>
<pre><code>import os
import pandas as pd
os.chdir('C:/Users/folder/')
path = ('C:/Users/folder/')
files = os.listdir(path)
allNames = pd.concat([pd.read_excel(f,'Sheet1') for f in files], axis=1)
writer = pd.ExcelWriter ('Output.xlsx')
allNames.to_excel(writer, 'Copy')
writer.save()
</code></pre>
|
python|pandas
| 1
|
5,308
| 58,133,430
|
How to substitute `keras.layers.merge._Merge` in `tensorflow.keras`
|
<p>I want to create a custom Merge layer using the <code>tf.keras</code> API. However, the new API hides the <code>keras.layers.merge._Merge</code> class that I want to inherit from.</p>
<p>The purpose of this is to create a Layer that can perform a weighted sum/merge of the outputs of two different layers. Before, and in <code>keras</code> python API (not the one included in <code>tensorflow.keras</code>) I could inherit from <code>keras.layers.merge._Merge</code> class, which is now not accessible from <code>tensorflow.keras</code>.</p>
<p>Where before I could do this</p>
<pre class="lang-py prettyprint-override"><code>class RandomWeightedAverage(keras.layers.merge._Merge):
def __init__(self, batch_size):
super().__init__()
self.batch_size = batch_size
def _merge_function(self, inputs):
alpha = K.random_uniform((self.batch_size, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
</code></pre>
<p>Now I cannot use the same logic if using <code>tensorflow.keras</code></p>
<pre class="lang-py prettyprint-override"><code>class RandomWeightedAverage(tf.keras.layers.merge._Merge):
def __init__(self, batch_size):
super().__init__()
self.batch_size = batch_size
def _merge_function(self, inputs):
alpha = K.random_uniform((self.batch_size, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
</code></pre>
<p>Produces</p>
<pre><code>AttributeError: module 'tensorflow.python.keras.api._v1.keras.layers' has no attribute 'merge'
</code></pre>
<p>I have also tried inheriting from <code>Layer</code> class instead</p>
<pre class="lang-py prettyprint-override"><code>class RandomWeightedAverage(tensorflow.keras.layers.Layer):
def __init__(self, batch_size):
super().__init__()
self.batch_size = batch_size
def call(self, inputs):
alpha = K.random_uniform((self.batch_size, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
</code></pre>
<p>which gives me a layer with an output shape equal to <code>multiple</code>, whereas I want the output shape to be well defined. I further attempted</p>
<pre class="lang-py prettyprint-override"><code>class RandomWeightedAverage(tensorflow.keras.layers.Layer):
def __init__(self, batch_size):
super().__init__()
self.batch_size = batch_size
def call(self, inputs):
alpha = K.random_uniform((self.batch_size, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
def compute_output_shape(self, input_shape):
return input_shape[0]
</code></pre>
<p>But this did not solve the <code>multiple</code> ambiguity as output shape.</p>
|
<p>I have slightly modified your code to use <code>tf.random_uniform</code> instead of <code>K.random_uniform</code> and it's working fine on 1.13.1 and 1.14.0 (full snippet and resulting <code>model.summary()</code> below). </p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
print(tf.__version__)
class RandomWeightedAverage(tf.keras.layers.Layer):
def __init__(self, batch_size):
super().__init__()
self.batch_size = batch_size
def call(self, inputs, **kwargs):
alpha = tf.random_uniform((self.batch_size, 1, 1, 1))
return (alpha * inputs[0]) + ((1 - alpha) * inputs[1])
def compute_output_shape(self, input_shape):
return input_shape[0]
x1 = tf.keras.layers.Input((32, 32, 1))
x2 = tf.keras.layers.Input((32, 32, 1))
y = RandomWeightedAverage(4)(inputs=[x1, x2])
model = tf.keras.Model(inputs=[x1, x2], outputs=[y])
print(model.summary())
</code></pre>
<p><a href="https://i.stack.imgur.com/lcmrt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lcmrt.png" alt="model summary"></a></p>
|
python|tensorflow|keras|tf.keras
| 5
|
5,309
| 58,134,672
|
Converting single column to multiple columns based on unique values
|
<p>I have the following <code>pandas.DataFrame</code> with shape <code>(1464, 2)</code>:</p>
<pre><code>import numpy as np
import pandas as pd

date_rng = pd.date_range('2018-01-01', '2019-01-01')  # assumed daily index (366 days); not shown in the original

df = pd.DataFrame()
for name in list('ABCD'):
    temp_df = pd.DataFrame(np.random.randint(0, 100, size=(len(date_rng), 1)), columns=['value'], index=date_rng)
    temp_df['name'] = name
    df = df.append(temp_df)
</code></pre>
<p>The index column has each data duplicated 4 times: one for each string <code>('ABCD')</code> in the <code>name</code> column.</p>
<p>The dataframe head and tail look like so:</p>
<p><strong>Head</strong></p>
<pre><code>value name
2018-01-01 47 A
2018-01-02 22 A
2018-01-03 13 A
2018-01-04 66 A
2018-01-05 19 A
</code></pre>
<p><strong>Tail</strong></p>
<pre><code> value name
2018-12-28 32 D
2018-12-29 1 D
2018-12-30 5 D
2018-12-31 50 D
2019-01-01 75 D
</code></pre>
<p>I would like to convert this <code>(1464, 2)</code> dataframe to shape <code>(366, 4)</code>, such that each of the 4 columns is one of the 4 unique values in <code>df.name.unique()</code> (i.e. <code>A, B, C, D</code>). The values for each column are the respective integers in the <code>df.value</code> column.</p>
<p>The final DataFrame should look something like this:</p>
<pre><code> A B C D
2018-12-28 32 22 21 4
2018-12-29 1 16 2 12
2018-12-30 5 1 65 26
2018-12-31 50 92 21 75
2019-01-01 75 55 33 34
</code></pre>
<p>I am sure there must be a nice reindex function or something of this sort to perform the task efficiently, as opposed to looping and recreating the dataframe.</p>
|
<p>You can use this:</p>
<pre><code>df.pivot(columns='name',values='value')
</code></pre>
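<p>A quick way to check it did what you expect (shape only, since the values are random) is something along these lines:</p>
<pre><code>wide = df.pivot(columns='name', values='value')
print(wide.shape)              # (366, 4)
print(wide.columns.tolist())   # ['A', 'B', 'C', 'D']
</code></pre>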
|
pandas
| 1
|
5,310
| 55,058,546
|
How is get_updates() of optimizers.SGD used in Keras during training?
|
<p>I am not familiar with the inner workings of Keras and have difficulty understanding how Keras uses the <code>get_updates()</code> function of optimizers.SGD during training. </p>
<p>I searched quite a while on the internet, but only got few details. Specifically, my understanding is that the parameters/weights update rule of SGD is defined in the <code>get_updates()</code> function. But it appears that <code>get_updates()</code> isn't <em>literally</em> called in every iteration during training; otherwise 'moments' wouldn't carry from one iteration to the next to implement momentum correctly, as it's reset in every call, c.f. optimizers.py:</p>
<pre class="lang-python prettyprint-override"><code>shapes = [K.get_variable_shape(p) for p in params]
moments = [K.zeros(shape) for shape in shapes]
self.weights = [self.iterations] + moments
for p, g, m in zip(params, grads, moments):
v = self.momentum * m - lr * g # velocity
self.updates.append(K.update(m, v))
</code></pre>
<p>As pointed out in <a href="https://github.com/keras-team/keras/issues/7502" rel="nofollow noreferrer">https://github.com/keras-team/keras/issues/7502</a>, get_updates() only defines 'a symbolic computation graph'. I'm not sure what that means. Can someone give a more detailed explanation of how it works? </p>
<p>For example, how is the 'v' computed in one iteration got passed to 'moments' in the next iteration to implement momentum? I'd also appreciate it if someone can point me to some tutorial about how this works. </p>
<p>Thanks a lot! (BTW, I'm using tensorflow, if it matters.)</p>
|
<p>get_updates() defines the graph operations that apply the gradient updates.
When the graph is evaluated for training it will look somewhat like this:</p>
<ul>
<li>forward passes compute a prediction value</li>
<li>loss computes a cost</li>
<li>backward passes compute gradients</li>
<li>gradients are updated</li>
</ul>
<p>Updating the gradients is a graph computation itself; i.e. the snippet of code that you quote defines how to perform the operation by specifying which tensors are involved and what math operations occur. The math operations themselves are not occurring at that point.</p>
<p>moments is a list of tensors defined in the code above. The code creates a graph operation that updates each element of moments.</p>
<p>Every iteration of the graph will run this update operation.</p>
<p>The following link tries to explain the concept of the computational graph in TensorFlow:
<a href="https://www.tensorflow.org/guide/graphs" rel="nofollow noreferrer">https://www.tensorflow.org/guide/graphs</a></p>
<p>Keras uses the same underlying ideas but abstracts the user away from having to deal with the low-level details. Defining a model in the traditional TensorFlow 1.0 API requires a much higher level of detail.</p>
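<p>To make "define once, run every iteration" concrete, here is a toy sketch (not Keras internals, just the same graph idea in TF 1.x style): the update op is built a single time, yet the variable it touches keeps accumulating state across runs, exactly like the <code>moments</code> above.</p>
<pre class="lang-python prettyprint-override"><code>import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

m = tf.Variable(0.0)                  # plays the role of one "moment"
update_op = tf.assign(m, m + 1.0)     # defined ONCE, like get_updates()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(update_op)           # executed EVERY iteration; state persists
    print(sess.run(m))                # 3.0
</code></pre>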
|
tensorflow|keras
| 1
|
5,311
| 54,787,164
|
pandas.series.rolling.apply method seems to implicitly convert Series into numpy array
|
<p>I want to compute the rolling volatility of a net value curve. </p>
<pre><code># demo
import pandas as pd

def get_rolling_vol(s: pd.Series) -> float:
    return s.pct_change().iloc[1:].std()

s = pd.Series([1, 1.2, 1.15, 1.19, 1.23, 1.3])
rolling = s.rolling(window=2)
stds = rolling.apply(lambda s: get_rolling_vol(s))
</code></pre>
<p>Throws error:</p>
<pre><code>FutureWarning: Currently, 'apply' passes the values as ndarrays to the applied function. In the future, this will change to passing it as Series objects. You need to specify 'raw=True' to keep the current behaviour, and you can pass 'raw=False' to silence this warning
stds = rolling.apply(lambda s: get_rolling_vol(s))
... (omits intermediate tracebacks)
AttributeError: 'numpy.ndarray' object has no attribute 'pct_change'
</code></pre>
<p>Is there any way to make the argument pass as <code>Series</code> instead of <code>ndarrays</code> in <code>apply</code>? The <code>FutureWarning</code> says it's gonna be the case in the future, what if I want it now? (Don't want to modify the <code>get_rolling_vol</code> function since there are many other functions that also assume the argument is <code>Series</code> and modifying all of them will be tedious.) Thanks.</p>
|
<p>Yes, this is possible, as stated in the warning message: use <code>raw=False</code> as an argument in <code>rolling.apply</code>.</p>
<p>This works at least in pandas 0.24.1.</p>
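<p>Applied to the snippet from the question, that would be:</p>
<pre><code>stds = rolling.apply(get_rolling_vol, raw=False)  # each window is passed as a pd.Series
</code></pre>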
|
python|pandas|numpy|type-conversion|numpy-ndarray
| 2
|
5,312
| 55,087,855
|
Python | Automatic DataFrame generation
|
<p>I have two folders with images of city skylines at two different times of day (day and night). I want to read in all the images in the corresponding folders in different color spaces, and then I want to calculate statistics for all the color channels. Then I want to create a pandas data frame containing all the statistics.</p>
<p>In order to prevent unnecessarily repeated code, I am trying to use dictionaries. At the moment I am able to print out all the statistics for all the combinations of color space x channel x statistic. But I conceptually fail to get this stuff into a pandas DataFrame with rows (separate images) and columns (filename, color_space x channel x statistic).</p>
<p>I would appreciate any help.</p>
<pre><code>import os
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pandas as pd

dictionary_of_color_spaces = {
    'RGB': cv2.COLOR_BGR2RGB,  # Red, Green, Blue
    'HSV': cv2.COLOR_BGR2HSV,  # Hue, Saturation, Value
    'HLS': cv2.COLOR_BGR2HLS,  # Hue, Lightness, Saturation
    'YUV': cv2.COLOR_BGR2YUV,  # Y = Luminance , U, V = Chrominance color components
}

dictionary_of_channels = {
    'channel_1': 0,
    'channel_2': 1,
    'channel_3': 2,
}

dictionary_of_statistics = {
    'min': np.min,
    'max': np.max,
    'mean': np.mean,
    'median': np.median,
    'std': np.std,
}

# get filenames inside training folders for day and night
path_training_day = './day_night_images/training/day/'
path_training_night = './day_night_images/training/night/'
filenames_training_day = [file for file in os.listdir(path_training_day)]
filenames_training_night = [file for file in os.listdir(path_training_night)]

for filename in filenames_training_day:
    image = cv2.imread(path_training_day + filename)
    for color_space in dictionary_of_color_spaces:
        image = cv2.cvtColor(image, dictionary_of_color_spaces[color_space])
        for channel in dictionary_of_channels:
            for statistic in dictionary_of_statistics:
                print(dictionary_of_statistics[statistic](image[:,:,dictionary_of_channels[channel]]))
</code></pre>
|
<p>The easiest thing I can think of without changing the bulk of your code would be:</p>
<ul>
<li>create an empty df whose columns are all combinations of statistic x channel x color_space (easily done with a list comprehension);</li>
<li>for each image, append all statistics to a variable (<code>row</code>):</li>
<li>cast <code>row</code> into a pd.Series object, using <code>row</code> as the values, columns of the dataframe as index and <code>filename</code> as its name;</li>
<li>append the row to your empty df.</li>
</ul>
<p>The most important detail is to get the df column names right, i.e. in the same order as the values which populate the <code>row</code> variable. The loops in the list comprehension for the column names must nest in the same order as the loops in the main code (color space, then channel, then statistic), so that the values match later when we append <code>row</code> to the df.</p>
<p>This should work:</p>
<pre><code>import os
import numpy as np
import matplotlib.pyplot as plt
import cv2
import pandas as pd

dictionary_of_color_spaces = {
    'RGB': cv2.COLOR_BGR2RGB,  # Red, Green, Blue
    'HSV': cv2.COLOR_BGR2HSV,  # Hue, Saturation, Value
    'HLS': cv2.COLOR_BGR2HLS,  # Hue, Lightness, Saturation
    'YUV': cv2.COLOR_BGR2YUV,  # Y = Luminance , U, V = Chrominance color components
}

dictionary_of_channels = {
    'channel_1': 0,
    'channel_2': 1,
    'channel_3': 2,
}

dictionary_of_statistics = {
    'min': np.min,
    'max': np.max,
    'mean': np.mean,
    'median': np.median,
    'std': np.std,
}

# creates column names in the same nested order as the loops below
cols = [f'{s}_{c}_{cs}'
        for cs in dictionary_of_color_spaces
        for c in dictionary_of_channels
        for s in dictionary_of_statistics]

# creates empty df
df = pd.DataFrame(columns=cols)

# get filenames inside training folders for day and night
path_training_day = './day_night_images/training/day/'
path_training_night = './day_night_images/training/night/'
filenames_training_day = [file for file in os.listdir(path_training_day)]
filenames_training_night = [file for file in os.listdir(path_training_night)]

for filename in filenames_training_day:
    row = []  # row for the current image - to be populated with stat values
    image = cv2.imread(path_training_day + filename)
    for color_space in dictionary_of_color_spaces:
        image = cv2.cvtColor(image, dictionary_of_color_spaces[color_space])
        for channel in dictionary_of_channels:
            for statistic in dictionary_of_statistics:
                row.append(dictionary_of_statistics[statistic](image[:,:,dictionary_of_channels[channel]]))
    row_series = pd.Series(row, index=cols, name=filename)
    df = df.append(row_series)
</code></pre>
<p>This code uses the filename of each image as the index of each row in the final df. If you don't want that, copy the index to a new column (<code>df['filename'] = df.index</code>) and use <code>reset_index</code> afterwards (<code>df = df.reset_index(drop=True)</code>).</p>
|
python|pandas
| 1
|
5,313
| 49,718,162
|
tfjs-converter html javascript trouble importing class
|
<p>I'm trying to use the tensorflow.js API, and I want to import a saved python tensorflow model. I'm using <a href="https://github.com/tensorflow/tfjs-converter" rel="nofollow noreferrer">this github library</a> for the conversion. I've got these script imports in my html file:</p>
<pre><code><script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.8.0"></script>
<script type="module" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
<script type="module" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
</code></pre>
<p>Then when I do this:</p>
<pre><code>const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
</code></pre>
<p>It says the class "loadFrozenModel is not defined".</p>
<p>In the github page it said to import using these:</p>
<pre><code>import * as tfc from '@tensorflow/tfjs-core';
import {loadFrozenModel} from '@tensorflow/tfjs-converter';
</code></pre>
<p>When I do that, it gives:<br>
" Uncaught SyntaxError: Unexpected token * "
<br>and<br>
" Uncaught SyntaxError: Unexpected token { " <br>
respectively. This error is given even when I install the libraries using npm. Note: I'm using Windows 10 and installed a third-party npm.
<br><br><br>
These are the two files in their entirety in case I missed some important details: <br>
index.html :</p>
<pre><code><html>
<head>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.8.0"></script>
    <script type="module" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-core"></script>
    <script type="module" src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
</head>
<body>
    Tiny TFJS example.<hr>
    <div id="micro_out_div"></div>
    <script src="index.js"></script>
</body>
</html>
</code></pre>
<p>index.js :</p>
<pre><code>async function myFirstTfjs() {
    const MODEL_URL = "PATH/TO/tensorflowjs_model.pb";
    const WEIGHTS_URL = 'PATH/TO/weights_manifest.json';
    const model = await loadFrozenModel(MODEL_URL, WEIGHTS_URL);
    const feed = {
        'op_to_restore': tf.tensor1d([0, 0, 0, 0])
    };
    document.getElementById('micro_out_div').innerText += model.execute(feed);
}
myFirstTfjs();
</code></pre>
|
<p>The problem is solved now; here's the final version.</p>
<pre><code><!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs@0.11.2"></script>
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow/tfjs-converter"></script>
    <script type="text/javascript">
        const MODEL_URL = 'http://localhost:8000/tensorflowjs_model.pb';
        const WEIGHTS_URL = 'http://localhost:8000/weights_manifest.json';
        const model = tf.loadFrozenModel(MODEL_URL, WEIGHTS_URL);
        console.log("model loaded");
    </script>
</head>
<body>
</body>
</html>
</code></pre>
<p>Do remember to add <a href="https://chrome.google.com/webstore/detail/allow-control-allow-origi/nlfbmbojpeacfghkpbjhddihlkkiljbi?hl=en-US" rel="nofollow noreferrer">this</a> extension to disable the CORS warning.</p>
|
javascript|html|tensorflow
| 2
|
5,314
| 49,402,326
|
cosine similarity pandas dataframe interpretation
|
<pre><code>import numpy as np; import pandas as pd
from sklearn.metrics.pairwise import cosine_similarity
df_flask = pd.DataFrame([[100,152,70,80,2,10]],columns=['weight','height','wc','hc','sex','age'])
df_flask2 = pd.DataFrame([[55.6,154,92,27,1,70]],columns=['weight','height','wc','hc','sex','age'])
print (cosine_similarity(df_flask2.iloc[[0]],df_flask.iloc[[0]]))
</code></pre>
<p>I have this sample code to try to get the cosine similarity, as my objective is to find the most similar person. I want to know if this is applicable as a similarity metric. I have seen papers use Pearson correlation and other algorithms for person-person comparison, but I want to try cosine similarity if it applies.</p>
|
<p>Yes, but with potential problems.</p>
<p>As you probably know, cosine similarity computes the dot product between the two entries. Since the ranges of the values are not similar, the components that reach higher values will dominate the result. In this case that will be height and weight. Compare that to sex (which reaches at most 2) and you'll see that sex will not matter much (unless everything else is the same).</p>
<p>This is probably not what you want. To make the similarity more uniform with respect to the different dimensions, consider normalizing the values to be in similar ranges (say 0 to 1).</p>
<p>If you do want some features to matter more than others, you can scale them up or down to get something that works for your application.</p>
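<p>A minimal sketch of that normalization idea, using scikit-learn's <code>MinMaxScaler</code> (here <code>df_people</code> is a placeholder for your full dataset of people, on which the scaler should be fitted):</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics.pairwise import cosine_similarity

cols = ['weight', 'height', 'wc', 'hc', 'sex', 'age']
df_flask = pd.DataFrame([[100, 152, 70, 80, 2, 10]], columns=cols)
df_flask2 = pd.DataFrame([[55.6, 154, 92, 27, 1, 70]], columns=cols)

# fit the scaler on the whole population so every feature lands in [0, 1]
scaler = MinMaxScaler().fit(df_people[cols])

a = scaler.transform(df_flask[cols])
b = scaler.transform(df_flask2[cols])
print(cosine_similarity(b, a))
</code></pre>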
|
python|pandas
| 2
|
5,315
| 49,372,880
|
Write Pandas DataFrame to String Buffer with Chunking
|
<p>I have a 10k row csv that I want to write to s3 in chunks of 1k rows.</p>
<pre><code>from io import StringIO
import boto3
import pandas as pd

csv_buffer = StringIO()
df.to_csv(csv_buffer, chunksize=1000)

s3_resource = boto3.resource('s3')
s3_resource.Object(bucket, 'df.csv').put(Body=csv_buffer.getvalue())
</code></pre>
<p>This gives me the first 1k rows in a string buffer to write to s3, but it doesn't seem like the csv buffer is an iterator that I can loop over.</p>
<p>Does anyone know how to achieve this?</p>
|
<p>It looks like <code>StringIO</code> isn't really heeding the chunksize. (<code>.readline()</code> will always just return one line, never a chunk of lines.)</p>
<p>I'm not too familiar with boto3, but <code>itertools.islice</code> may work for you here in terms of needing to slice an iterable without creating some intermediate data structure.</p>
<p>If this looks like it may suit your needs, I can add some explanation alongside the code:</p>
<pre><code>>>> from io import StringIO
... from itertools import islice
... import sys
...
... import numpy as np
... import pandas as pd
...
... df = pd.DataFrame(np.arange(300).reshape(100, -1))
... csv_buffer = StringIO()
... df.to_csv(csv_buffer)
... csv_buffer.seek(0)
...
... # Account for indivisibility (scoop up a remainder on the final slice).
... chunksize = 33
... rowsize = df.shape[1]
... slices = [(0, chunksize)] * (rowsize - 1) + [(0, sys.maxsize)]
... chunks = (tuple(islice(csv_buffer, i, j)) for i, j in slices)
...
>>> next(chunks)
(',0,1,2\n',
'0,0,1,2\n',
'1,3,4,5\n',
'2,6,7,8\n',
'3,9,10,11\n',
'4,12,13,14\n',
'5,15,16,17\n',
'6,18,19,20\n',
'7,21,22,23\n',
'8,24,25,26\n',
'9,27,28,29\n',
'10,30,31,32\n',
'11,33,34,35\n',
'12,36,37,38\n',
'13,39,40,41\n',
'14,42,43,44\n',
'15,45,46,47\n',
'16,48,49,50\n',
'17,51,52,53\n',
'18,54,55,56\n',
'19,57,58,59\n',
'20,60,61,62\n',
'21,63,64,65\n',
'22,66,67,68\n',
'23,69,70,71\n',
'24,72,73,74\n',
'25,75,76,77\n',
'26,78,79,80\n',
'27,81,82,83\n',
'28,84,85,86\n',
'29,87,88,89\n',
'30,90,91,92\n',
'31,93,94,95\n')
</code></pre>
|
python|pandas|amazon-s3
| 4
|
5,316
| 73,253,642
|
importing numpy from different directory
|
<p>I have a numpy folder placed in abc and added it to the path</p>
<pre><code>>>> from abc import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\ahsin\Desktop\naresh\abc\numpy\__init__.py", line 166, in <module>
core.getlimits._register_known_types()
File "C:\Users\ahsin\Desktop\naresh\abc\numpy\core\getlimits.py", line 109, in _register_known_types
eps=exp2(f16(-10)),
TypeError: can only be called with ndarray object
</code></pre>
<p>I tried replicating the issue at the Python prompt and observed this</p>
<pre><code>>>> exp2(f16(-10))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can only be called with ndarray object
>>> f16(-10)
-10.0
>>> exp2(-10.0)
0.0009765625
</code></pre>
<p>Can anyone help, please?</p>
|
<p>The question is slightly off, I think - it is not "importing numpy from a different directory" but "a problem with importing numpy from a non-standard location".</p>
<p>It does not work because you are using patches/tricks - that is a bad idea in programs :)</p>
<p>Do this the normal way - set the correct path and import:</p>
<pre><code>import sys

# change path before import
saved_path = sys.path
sys.path = ['abc']

import numpy

# restore path
sys.path = saved_path
</code></pre>
<p>Your fault, I think, is that you are changing the module name, so dependencies in this module cannot be imported - but I am not sure.</p>
|
python|numpy
| 0
|
5,317
| 73,303,432
|
Find trigrams for all groupby clusters in a Pandas Dataframe and return in a new column
|
<p>I'm trying to return the highest frequency trigram in a new column in a pandas dataframe for each group of keywords. (Essentially something like a groupby with transform, returning the highest trigram in a new column).</p>
<p><strong>An example dataframe with dummy data</strong></p>
<pre><code> cluster_name keyword
0 summer summer dresses size 10
1 summer summer dresses size 12
2 summer large summer dresses
3 summer summer dresses size 14
4 strappy ladies strappy summer dresses
5 strappy strappy summer dresses uk 2022
6 strappy strappy summer dress
7 strappy strappy summer dresses
8 strappy thin strap summer dresses
</code></pre>
<p><strong>Desired Output</strong></p>
<pre><code> cluster_name trigram
0 summer summer dresses size
4 strappy strappy summer dresses
</code></pre>
<p><strong>Minimum Reproducible Example</strong></p>
<pre><code>import pandas as pd
data = [
["summer", "summer dresses size 10"],
["summer", "summer dresses size 12"],
["summer", "large summer dresses"],
["summer", "summer dresses size 14"],
["strappy", "ladies strappy summer dresses"],
["strappy", "strappy summer dresses uk 2022"],
["strappy", "strappy summer dress"],
["strappy", "strappy summer dresses"],
["strappy", "thin strap summer dresses"],
]
df = pd.DataFrame(data, columns=['cluster_name', 'keyword'])
print(df)
</code></pre>
<p><strong>What I've tried.</strong></p>
<p>I have working code to find bigrams but it's a bit hacky. It is fast though (much faster than iterrows, which I'd be keen to avoid). It was taken from this solution: <a href="https://stackoverflow.com/questions/55348500/how-to-get-group-by-and-get-most-frequent-words-and-bigrams-for-each-group-panda">How to get group-by and get most frequent words and bigrams for each group pandas</a></p>
<p>The ideal outcome would be a universal solution I could tinker slightly to return unigrams, bigrams or trigrams etc just by changing a single value.</p>
<pre><code>from collections import Counter

bigrams = []

def bigram(row):
    lst = row['keyword'].split(' ')
    return bigrams.append([(lst[x].strip(), lst[x+1].strip()) for x in range(len(lst)-1)])

df['parent_cluster'] = df.apply(lambda row: bigram(row), axis=1)
df2 = df.groupby('cluster_name').agg({'parent_cluster': 'sum'})
df3 = df2.parent_cluster.apply(lambda row: Counter(row)).to_frame().astype(str)
df3["parent_cluster"] = (df3["parent_cluster"].str.split(',').str[0])

# clean up the unigram column to remove the string of the Counter library.
df3["parent_cluster"] = df3["parent_cluster"].str.replace("Counter\({\('", '')
df3["parent_cluster"] = df3["parent_cluster"].str.replace("'", '')
</code></pre>
|
<p>You can use <code>nltk.ngrams</code> combined with <code>explode</code>/<code>groupby</code>/<code>mode</code>:</p>
<pre><code>from nltk import ngrams # or use a custom function
out = (df
.assign(keyword=[list(ngrams(s.split(), n=3)) for s in df['keyword']])
.explode('keyword')
.groupby('cluster_name')['keyword'].apply(lambda g: g.mode()[0])
)
</code></pre>
<p>output:</p>
<pre><code>cluster_name
strappy (strappy, summer, dresses)
summer (summer, dresses, size)
Name: keyword, dtype: object
</code></pre>
<p>As strings:</p>
<pre><code>out = (df
.assign(keyword=[[' '.join(x) for x in ngrams(s.split(), n=3)]
for s in df['keyword']])
.explode('keyword')
.groupby('cluster_name')['keyword'].apply(lambda g: g.mode()[0])
.reset_index(name='trigram')
)
</code></pre>
<p>output:</p>
<pre><code> cluster_name trigram
0 strappy strappy summer dresses
1 summer summer dresses size
</code></pre>
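<p>If you'd rather not depend on nltk, the <code>ngrams</code> call above can be swapped for a small helper (a sketch of the "custom function" mentioned in the comment):</p>
<pre><code>def my_ngrams(tokens, n=3):
    """Return the list of n-grams (as tuples) of a token list."""
    return list(zip(*(tokens[i:] for i in range(n))))

# e.g. my_ngrams("summer dresses size 10".split(), n=3)
# [('summer', 'dresses', 'size'), ('dresses', 'size', '10')]
</code></pre>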
|
python|pandas|dataframe|nltk|n-gram
| 1
|
5,318
| 73,378,839
|
Loop through excel sheets and save each sheet into a csv based on a condition
|
<p>I have an excel file that has multiple sheets. I would like to iterate through each sheet and check against a string to decide whether to read the sheet starting after 0 rows or after 4 rows (as some of the sheets' datasets start after the first 4 rows). After the sheet gets read I want to save it as a csv.</p>
<p>This is my code so far, but I am not sure if I am doing the loop correctly.</p>
<pre><code>import pandas as pd

def converToCsv(excel_file):
    df = pd.read_excel(excel_file, sheet_name = None)
    for sheets in df.items():
        if sheets[df.items()] == 'Shipment':
            newdf = pd.read_excel(excel_file, sheet_name = sheets[df.items(), header = 4]
            newdf.to_csv('path', decimal = ',', index = False)
        else:
            newdf = pd.read_excel(excel_file, sheet_name = sheets[df.items(), header = 0]
            newdf.to_csv('path', decimal = ',', index = False)
</code></pre>
|
<p>There are several things not working with the snippet you posted:</p>
<ul>
<li><code>sheets[df.items()]</code> is not valid. <code>df.items()</code> returns a dict like : <code>{<sheet_name>: <sheet_content>}</code>, so you can not use it as an index</li>
<li>missing parenthesis and misplacement of square brackets</li>
<li>even assuming the loop worked, you are always saving the data to the same file <code>path</code>; by doing so you overwrite the previously saved sheet's csv on each loop turn.</li>
</ul>
<p>Did you try running this code before posting it ?</p>
<p>You could do something along those lines :</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
def converToCsv(excel_file):
workbook = pd.read_excel(excel_file, sheet_name = None)
for sheet_name in workbook.keys():
header = 0
if sheet_name == 'Shipment':
header = 4
newdf = pd.read_excel(excel_file, sheet_name = sheet_name, header=header)
# TODO: handle the case where sheet name is not a valid file name
newdf.to_csv(f"{sheet_name}.csv", decimal = ',', index = False)
converToCsv("test.xlsx")
</code></pre>
|
python|excel|pandas
| 1
|
5,319
| 67,257,473
|
Pytorch install fails on conda with error as -CondaHTTPError: HTTP 403 FORBIDDEN for url <https..........>
|
<p>I have installed the Anaconda 2019 version from the repository, as the latest version was throwing an error on Windows 10.
After installing Anaconda (conda 4.10.1), I am unable to install PyTorch using the command 'conda install pytorch-cpu -c pytorch' at the Anaconda prompt. It throws the error below. I believe it is trying to look for the file 'win-64/pytorch-cpu-1.1.0-py3.7_cpu_1.tar.bz2' and, strangely, when I check on <a href="https://anaconda.org/pytorch/pytorch/files" rel="nofollow noreferrer">https://anaconda.org/pytorch/pytorch/files</a> I can't locate this file. Please help.</p>
<p>Error message :-</p>
<p>CondaHTTPError: HTTP 403 FORBIDDEN for url <a href="https://conda.anaconda.org/pytorch/win-64/pytorch-cpu-1.1.0-py3.7_cpu_1.tar.bz2" rel="nofollow noreferrer">https://conda.anaconda.org/pytorch/win-64/pytorch-cpu-1.1.0-py3.7_cpu_1.tar.bz2</a>
Elapsed: 01:00.706682</p>
<p>An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.</p>
|
<p>I experienced the same issue, but it turned out the error message was misleading. In my case, extending the remote connection timeout parameter in the <code>.condarc</code> file fixed the issue.</p>
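<p>For reference, a minimal sketch of what that can look like (the exact values are just examples, adjust to taste):</p>
<pre><code># either edit ~/.condarc directly ...
remote_connect_timeout_secs: 30.0
remote_read_timeout_secs: 120.0

# ... or set the same keys from the command line:
# conda config --set remote_connect_timeout_secs 30
# conda config --set remote_read_timeout_secs 120
</code></pre>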
|
anaconda|pytorch
| 0
|
5,320
| 59,973,758
|
Get an error claiming my 2D array is not 2D in impyute
|
<p>Here is the shape of my array</p>
<pre><code>b = data[0].values
print(b.shape)
(5126, 4229)
</code></pre>
<p>I get this error when I run this code:</p>
<pre><code>from impyute.imputation.cs import mice
# start the MICE training
a=mice(b)
</code></pre>
<p>Error:</p>
<pre><code>ValueError: Expected 2D array, got 1D array instead:
array=[].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
</code></pre>
<p>I am confused by this error message, any recommendations?</p>
|
<p>First, you must change your input data to a 2D array, so you must specify the number of features in your data using the reshape function.</p>
<p>Please try b.reshape(5126, 4229); if that does not work, try to follow this example until you figure out the problem.</p>
<p><a href="https://i.stack.imgur.com/MnRxT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MnRxT.png" alt=""></a></p>
|
arrays|pandas|numpy
| 1
|
5,321
| 59,946,211
|
How to convert list of tuple (tf.constant, tf.constant) as tensorflow dataset?
|
<p>I'm trying to create a Transformer model. I have 2 <code>np.arrays</code>, both containing strings, and I used them to create a list of tuples.</p>
<p>The format of the tuple is : </p>
<pre><code>class 'tuple' (tf.Tensor: shape=(), dtype=string, numpy=b'abc', tf.Tensor: shape=(), dtype=string, numpy=b'xyz')
</code></pre>
<p>I want to combine these tuples to form <code>tensorflow.python.data.ops.dataset_ops._OptionsDataset</code>, how can I do that?</p>
<p>Or is there any other way I could do it?</p>
<p>New to this, thanks for your help! </p>
|
<p>You can use <code>from_tensor_slices</code>. <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#from_tensor_slices" rel="nofollow noreferrer">Here</a>.</p>
<p><strong>EDIT</strong></p>
<pre><code># Example
# Two tensors can be combined into one Dataset object.
values1 = tf.constant(['A', 'B', 'A'])  # ==> 3-element tensor
values2 = tf.constant(['A', 'B', 'A'])  # ==> 3-element tensor
dataset = tf.data.Dataset.from_tensor_slices((values1, values2))
</code></pre>
|
python|numpy|tensorflow|dataset
| 2
|
5,322
| 65,484,859
|
Error when checking target: expected dense_192 to have 3 dimensions, but got array with shape (37118, 1)
|
<p>Dear all: I'm very new to deep learning. I was trying to add a for loop to test all the possible combinations to get the best result. Currently what I have is the following.</p>
<pre><code>def coeff_determination(y_true, y_pred):
    SS_res = K.sum(K.square(y_true - y_pred))
    SS_tot = K.sum(K.square(y_true - K.mean(y_true)))
    return (1 - SS_res / (SS_tot + K.epsilon()))

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3)

x_train = x_train.to_numpy()
x_test = x_test.to_numpy()
y_train = y_train.to_numpy()
y_test = y_test.to_numpy()

print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
# (37118, 105)
# (37118,)
# (15908, 105)
# (15908,)

timesteps = 3
features = 35  # this is the number of features

x_train = x_train.reshape((x_train.shape[0], timesteps, features))
x_test = x_test.reshape((x_test.shape[0], timesteps, features))

dense_layers = [0, 1, 2]
layer_sizes = [32, 64, 128]
LSTM_layers = [1, 2, 3]

for dense_layer in dense_layers:
    for layer_size in layer_sizes:
        for LSTM_layer in LSTM_layers:
            NAME = "{}-lstm-{}-nodes-{}-dense-{}".format(LSTM_layer, layer_size, dense_layer, int(time.time()))
            tensorboard = TensorBoard(log_dir=f"LSTM_logs\\{NAME}")
            print(NAME)

            model = Sequential()
            model.add(LSTM(layer_size, input_shape=(x_train.shape[1], x_train.shape[2]), return_sequences=True))
            for i in range(LSTM_layer-1):
                model.add(LSTM(layer_size, input_shape=(x_train.shape[1], x_train.shape[2]), return_sequences=True))
            for i in range(dense_layer):
                model.add(Dense(layer_size))
            model.add(Dense(1))

            model.compile(loss='mae', optimizer='adam', metrics=[coeff_determination])
            epochs = 10
            result = model.fit(x_train, y_train, epochs=epochs, batch_size=72, validation_data=(x_test, y_test), verbose=2, shuffle=False)
</code></pre>
<p>However, I got a traceback that says the following</p>
<pre><code>ValueError: Error when checking target: expected dense_192 to have 3 dimensions, but got array with shape (37118, 1)
</code></pre>
<p>and the error occurs in the following line.</p>
<pre><code>---> 19 result = model.fit(x_train, y_train, epochs=epochs, batch_size=72, validation_data=(x_test, y_test), verbose=2, shuffle=False)
</code></pre>
<p>Could anyone please kindly give me some hint regarding how to solve the problem. Thanks a lot for your time and support.</p>
<p>Sincerely</p>
<p>Wilson</p>
|
<p>Use <code>return_sequences=False</code> for your last LSTM layer so it only returns a vector with the last hidden state.</p>
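<p>A minimal sketch of how the loop from the question could be adjusted so that only the final stacked LSTM collapses the time dimension (variable names taken from the question):</p>
<pre><code>model = Sequential()
for i in range(LSTM_layer):
    is_last = (i == LSTM_layer - 1)
    if i == 0:
        model.add(LSTM(layer_size,
                       input_shape=(x_train.shape[1], x_train.shape[2]),
                       return_sequences=not is_last))
    else:
        model.add(LSTM(layer_size, return_sequences=not is_last))
for i in range(dense_layer):
    model.add(Dense(layer_size))
model.add(Dense(1))
</code></pre>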
<p>Sincerely,</p>
<p>Alexander</p>
<p>more details: <a href="https://stackoverflow.com/questions/42755820/how-to-use-return-sequences-option-and-timedistributed-layer-in-keras">How to use return_sequences option and TimeDistributed layer in Keras?</a></p>
|
python|tensorflow|keras|deep-learning
| 1
|
5,323
| 65,175,404
|
Pandas Pivot table get max with column name
|
<p>I have the following pivot table</p>
<p><a href="https://i.stack.imgur.com/Z6isR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z6isR.png" alt="enter image description here" /></a></p>
<p>I want to get the max value from each row, but I also need to get the column it came from.
So far I know how to get the max value of every row using this:</p>
<pre><code>dff['State'] = stateRace.max(axis=1)
dff
</code></pre>
<p>I get this:
<a href="https://i.stack.imgur.com/YnPJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YnPJs.png" alt="enter image description here" /></a></p>
<p>which is returning the correct max value but not the column it came from.</p>
|
<p>You suffer a disadvantage in getting help because you have supplied images and the question is not clear. Happy to help further if the answer below doesn't work.</p>
<pre><code>stateRace = stateRace.assign(
    max_value=stateRace.select_dtypes(exclude='object').max(axis=1),
    max_column=stateRace.select_dtypes(exclude='object').idxmax(axis=1))
</code></pre>
|
pandas|database|pivot|pivot-table|data-science
| 0
|
5,324
| 50,076,514
|
Create a pandas dataframe from dictionary whilst maintaining order of columns
|
<p>When creating a dataframe as below (instructions from <a href="https://pythonprogramming.net/basics-data-analysis-python-pandas-tutorial/" rel="nofollow noreferrer">here</a>), the order of the columns changes from "Day, Visitors, Bounce Rate" to "Bounce Rate, Day, Visitors"</p>
<pre><code>import pandas as pd
web_stats = {'Day':[1,2,3,4,5,6],
'Visitors':[43,34,65,56,29,76],
'Bounce Rate':[65,67,78,65,45,52]}
df = pd.DataFrame(web_stats)
</code></pre>
<p>Gives: </p>
<pre><code>Bounce Rate Day Visitors
0 65 1 43
1 67 2 34
2 78 3 65
3 65 4 56
4 45 5 29
5 52 6 76
</code></pre>
<p>How can the order be kept in tact? (i.e. Day, Visitors, Bounce Rate)</p>
|
<p>One approach is to use <code>columns</code></p>
<p><strong>Ex:</strong></p>
<pre><code>import pandas as pd
web_stats = {'Day':[1,2,3,4,5,6],
'Visitors':[43,34,65,56,29,76],
'Bounce Rate':[65,67,78,65,45,52]}
df = pd.DataFrame(web_stats, columns = ['Day', 'Visitors', 'Bounce Rate'])
print(df)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code> Day Visitors Bounce Rate
0 1 43 65
1 2 34 67
2 3 65 78
3 4 56 65
4 5 29 45
5 6 76 52
</code></pre>
|
python|pandas|dictionary|dataframe
| 6
|
5,325
| 63,865,856
|
Python set value in Dataframe to Nan Based on Value
|
<p>I am trying to set the values of certain columns in a dataframe to NaN based on the value of the cell. I am having problems getting it to work. Here is what I have tried.
I need to set all cells where the windspeed is < -200 to NaN.</p>
<pre><code>filterA.loc[filterA['WindSpeedMPH'] < -200, 'WindSpeedMPH'] == NaN
filterA.loc[filterA['WindSpeedMPH'].le -200, 'WindSpeedMPH'] == Nan
</code></pre>
<p>but it doesn't work. I am sure it is something fairly simple, but I can't figure it out and haven't found the answer by googling it.
I have tried multiple approaches.</p>
|
<p>You can also try to do it as simple as this:</p>
<pre><code>import numpy as np

filterA.loc[filterA['WindSpeedMPH'] < -200, 'WindSpeedMPH'] = np.nan
</code></pre>
<p>See documentation <a href="https://datatofish.com/if-condition-in-pandas-dataframe/" rel="nofollow noreferrer">here</a> for 5 ways to apply an IF condition in pandas DataFrame.</p>
|
python-3.x|pandas|dataframe|nan
| 0
|
5,326
| 63,821,314
|
Create new column in Pandas DataFrame based on other columns
|
<p>I have Dataframe like:</p>
<p><a href="https://i.stack.imgur.com/zrhhC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zrhhC.png" alt="enter image description here" /></a></p>
<p>Now I would like to add column V based on conditions on I1, I2 and I3. Conditions are like:</p>
<pre><code>v = 1 if I1>23 and I2.str.contains('abc')
v = 2 if I3 == 20
v == ...............
...................
</code></pre>
<p>One row can satisfy multiple conditions; what I want is to duplicate such rows, and to filter out the rows that do not satisfy any condition. For example, suppose N1 satisfies the conditions for V=1, 2 and 3, while N2 does not satisfy any and N3 satisfies V=2.
I want the final dataframe to look like:</p>
<p><a href="https://i.stack.imgur.com/i6nun.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i6nun.png" alt="enter image description here" /></a></p>
<p>Could someone please help me with this?
Thanks.</p>
|
<p>If I understood your question correctly, suppose you have a dataframe like the below:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
"NAME": [ "N1", "N2", "N3" ],
"I1": [ 1, 4, 4 ],
"I2": [ 2, 5, 2 ],
"I3": [ 3, 6, 6 ]
})
</code></pre>
<p>i.e:</p>
<pre class="lang-py prettyprint-override"><code>>>> df
NAME I1 I2 I3
0 N1 1 2 3
1 N2 4 5 6
2 N3 4 2 6
</code></pre>
<p>To reproduce your example, I will assume that the conditions are <code>I1 = 1</code>, <code>I2 = 2</code>, and <code>I3 = 3</code>:</p>
<pre class="lang-py prettyprint-override"><code>cond1 = df["I1"] == 1
cond2 = df["I2"] == 2
cond3 = df["I3"] == 3
</code></pre>
<p>To build the expected dataframe, you can do:</p>
<pre class="lang-py prettyprint-override"><code>result = pd.concat([
df[cond1].assign(V=1),
df[cond2].assign(V=2),
df[cond3].assign(V=3)
])
</code></pre>
<p>Result:</p>
<pre class="lang-py prettyprint-override"><code>>>> result
NAME I1 I2 I3 V
0 N1 1 2 3 1
0 N1 1 2 3 2
2 N3 4 2 6 2
0 N1 1 2 3 3
</code></pre>
|
python|pandas|dataframe
| 3
|
5,327
| 64,075,052
|
Fastest way to get count of rows of strings containing a substring in python between two data frames
|
<p>I have two data frames, 1 has words and the other one has the text. I want to get the count of all the rows containing the word in the first data frame.</p>
<p>Word =</p>
<pre><code>ID | Word
------------
1 | Introduction
2 | database
3 | country
4 | search
</code></pre>
<p>Text =</p>
<pre><code>ID | Text
------------
1 | Introduction to python
2 | sql is a database
3 | Introduction to python in our country
4 | search for a python teacher in our country
</code></pre>
<p>What I want as final output is</p>
<pre><code>ID | Word | Count
---------------------
1 | Introduction | 2
2 | database | 1
3 | country | 1
4 | search | 2
</code></pre>
<p>I have 200000 rows in the word df and 55000 rows in the text (length of each text is around 2000 words) df. It takes approx 76 hours to complete the entire process with below code</p>
<p>'''</p>
<pre><code>def docCount(docdf, worddf):
final_dict = {}
for i in tqdm(worddf.itertuples()):
docdf["Count"] = docdf.Text.str.contains(i[2])
temp_dict = {i[2]: docdf.Count.sum()}
final_dict = dict(Counter(final_dict)+Counter(temp_dict))
return final_dict
</code></pre>
<p>'''</p>
|
<h3>Here is a simple solution</h3>
<pre><code>world_count = pd.DataFrame(
{'words': Word['Word'].tolist(),
     'count': [Text['Text'].str.contains(w).sum() for w in Word['Word']],
}).rename_axis('ID')
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>world_count.head()
'''
words count
ID
0 Introduction 2
1 database 1
2 country 2
3 search 1
'''
</code></pre>
<p><strong>Step by step solution:</strong></p>
<pre><code># Convert column to list
words = Word['Word'].tolist()
# Get the count
count = [Text['Text'].str.contains(w).sum() for w in words]
world_count = pd.DataFrame(
{'words': words,
'count': count,
}).rename_axis('ID')
</code></pre>
<p>Tip:</p>
<p>I would suggest converting to lower case so that you don't miss any counts due to upper/lower case differences:</p>
<pre><code>import re
import pandas as pd
world_count = pd.DataFrame(
{'words': Word['Word'].str.lower().str.strip().tolist(),
     'count': [Text['Text'].str.contains(w, flags=re.IGNORECASE, regex=True).sum() for w in Word['Word']],
}).rename_axis('ID')
</code></pre>
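<p>If <code>str.contains</code> over 200k words is still too slow, counting row-presence of whitespace-separated tokens with a <code>Counter</code> can be much faster, at the cost of matching whole tokens only rather than arbitrary substrings. A sketch (untested on your data):</p>
<pre><code>from collections import Counter

# For every token, count in how many rows of Text it appears at least once
presence = Counter()
for text in Text['Text'].str.lower():
    presence.update(set(text.split()))

word_count = Word.assign(
    Count=Word['Word'].str.lower().map(presence).fillna(0).astype(int)
)
</code></pre>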
|
python|python-3.x|pandas|nltk
| 2
|
5,328
| 46,991,578
|
IndexError: index 666 is out of bounds for axis 1 with size 501
|
<pre><code>import math
import numpy as np
def ExplicitMethod(S0, K, r, q, T, Sigma, M, N, Option):
M = int(M)
N = int(N)
dt = T / N
K = float(K)
Smax = 2 * K
dS = Smax / N
FGrid = np.zeros(shape=(N+1, M+1))
if Option == 'Call':
FGrid[-1, :] = np.maximum(np.arange(0, M+1) * dS - K, 0)
elif Option == 'Put':
FGrid[-1, :] = np.maximum(K - np.arange(0, M+1) * dS, 0)
A = np.zeros(shape=(M+1, M+1))
A[0,0] = 1
A[-1,-1] = 1
for j in range(1, M-1):
A[j, (j-1,j, j+1)] = \
[0.5 * dt * (Sigma ** 2 * j ** 2 - (r - q) * j ), \
1 - dt * (Sigma ** 2 * j ** 2 + r), \
0.5 * dt * (Sigma **2 * j ** 2 + (r - q) * j)]
for i in range(N):
Fhat = FGrid [i, ]
FGrid[i, :] = np.dot(A , Fhat)
Fhat[0] = 0
Fhat[-1] = Smax - K * math.exp(-r * (N - i) * dt)
k = math.floor(S0 / dS)
V = FGrid[0, k] + (FGrid[0, k] - FGrid[0, k]) / dS * [(S0 - k * dS)]
print (V)
ExplicitMethod (50, 30, 0.1, 0.05, 2, 0.3, 400, 800, Option = 'Call')
</code></pre>
<p>And I got this error: </p>
<blockquote>
<p>IndexError Traceback (most recent call
last) in ()
37 print (V)
38
---> 39 ExplicitMethod (50, 30, 0.1, 0.05, 2, 0.3, 400, 800, Option = 'Call')</p>
<p> in ExplicitMethod(S0, K, r, q, T,
Sigma, M, N, Option)
33 k = math.floor(S0 / dS)
34
---> 35 V = FGrid[0, k] + (FGrid[0, k] - FGrid[0, k]) / dS * [(S0 - k * dS)]
36
37 print (V)</p>
<p>IndexError: index 666 is out of bounds for axis 1 with size 401</p>
</blockquote>
<p>And FYI I'm coding Option Pricing Explicit Method: </p>
<p><img src="https://i.stack.imgur.com/AMZxs.png" alt="enter image description here"></p>
<p>Please help, thank you!</p>
|
<p>You have initialized <code>FGrid</code> to be (801, 401). The traceback tells us that, for some reason, <code>k=666</code>:</p>
<pre><code>k = math.floor(S0 / dS)
V = FGrid[0, k] + (FGrid[0, k] - FGrid[0, k]) / dS * [(S0 - k * dS)]
</code></pre>
<p>You need to refine how <code>k</code> is set. Maybe the math is wrong. At the very least you need to ensure that it does not get above 400.</p>
<pre><code>In [200]: S0=50; K=30; N=800
In [201]: dS = 2*K/N
In [202]: dS
Out[202]: 0.075
In [203]: S0/dS
Out[203]: 666.6666666666667
</code></pre>
<p>There's your <code>666</code>.</p>
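<p>One possible culprit (just a guess at the intended discretisation): the grid has <code>M+1</code> price nodes, but the spacing is computed from <code>N</code>. Using <code>M</code> instead keeps <code>k</code> on the grid for any <code>S0 < Smax</code>:</p>
<pre><code>dS = Smax / M            # 60 / 400 = 0.15
k = math.floor(S0 / dS)  # floor(50 / 0.15) = 333, which is within 0..400
</code></pre>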
|
python|algorithm|numpy
| 2
|
5,329
| 63,038,234
|
How do I plot steps_per_epoch against loss using fit_generator in Keras?
|
<p>I have the following code and I would like to plot the graph of <code>loss</code> against <code>steps_per_epoch</code></p>
<pre><code>model = unet(pretrained=False)
model.compile(optimizer=Adam(0.005), loss="binary_crossentropy",
metrics=["accuracy"])
history = model.fit_generator(train_gen, steps_per_epoch=500, epochs=5,
callbacks=[dynamic_lr, chkp])
</code></pre>
<p>where <code>lr</code> and <code>chkp</code> are my callbacks for the model:</p>
<pre><code>def lr_scheduler(epoch, lr):
if epoch <= 2:
lr = 0.002
return lr
lr = 0.001
return lr
chkp = keras.callbacks.ModelCheckpoint(
filepath="mypath/model.hdf5",
monitor="loss",
verbose=1,
save_best_only=True,
mode="min",
)
dynamic_lr = LearningRateScheduler(lr_scheduler, verbose=1)
</code></pre>
<p>I do not think the <code>history</code> dict holds the <code>loss</code> for each step in epoch, but is there any way?</p>
|
<p>You can get the values of training accuracy, training loss, validation accuracy and validation loss from the history object. See the code below.</p>
<pre><code>training_accuracy=history.history['accuracy']
training_loss=history.history['loss']
valid_accuracy=history.history['val_accuracy']
valid_loss=history.history['val_loss']
</code></pre>
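<p>The history object only stores per-epoch values. If you really want the loss at every step (batch), a small custom callback can record it; a sketch using the standard <code>tf.keras</code> callback hooks (on older Keras versions the hook is <code>on_batch_end</code>):</p>
<pre><code>import matplotlib.pyplot as plt
from tensorflow import keras

class BatchLossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.batch_losses = []

    def on_train_batch_end(self, batch, logs=None):
        self.batch_losses.append(logs.get('loss'))

batch_history = BatchLossHistory()
# pass it along with your other callbacks: callbacks=[dynamic_lr, chkp, batch_history]

# after training, plot loss against step number
plt.plot(batch_history.batch_losses)
plt.xlabel('step')
plt.ylabel('loss')
plt.show()
</code></pre>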
|
python|tensorflow|matplotlib|keras|deep-learning
| 1
|
5,330
| 67,924,575
|
How to change the value of a column items using pandas?
|
<p>This is my fist question on stackoverflow.</p>
<p>I'm implementing a Machine Learning classification algorithm and I want to generalize it for any input dataset that have their target class in the last column. For that, I want to modify all values of this column without needing to know the names of each column or rows using pandas in python.</p>
<p>For example, let's suppose I load a dataset:</p>
<pre><code>dataset = pd.read_csv('random_dataset.csv')
</code></pre>
<p>Let's say the last column has the following data:</p>
<pre><code>0 dog
1 dog
2 cat
3 dog
4 cat
</code></pre>
<p>I want to change each "dog" appearence to 1 and each cat appearance to 0, so that the column would look:</p>
<pre><code>0 1
1 1
2 0
3 1
4 0
</code></pre>
<p>I have found some ways of changing the values of specific cells using pandas, but for this case, what would be the best way to do that?</p>
<p>I appreciate each answer.</p>
|
<p>use the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="nofollow noreferrer">map</a> and map the values as per requirement:</p>
<pre><code>df['col_name'] = df['col_name'].map({'dog' : 1 , 'cat': 0})
</code></pre>
<p>OR -> use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.factorize.html" rel="nofollow noreferrer">factorize</a> (encode the object as an enumerated type) if you want to assign arbitrary numeric codes instead:</p>
<pre><code>df['col_name'] = df['col_name'].factorize()[0]
</code></pre>
<h4>OUTPUT:</h4>
<pre><code>0 1
1 1
2 0
3 1
4 0
</code></pre>
|
python|pandas|dataframe
| 1
|
5,331
| 67,620,499
|
AgGrid in Python giving blank grid
|
<p>AgGrid in Python giving blank grid when run with Justpy to display a Dataframe on the webpage.</p>
<p>Please find below the python code I am trying to run... It is giving a blank grid can you please help me debug???</p>
<pre><code>import pandas as pd
import justpy as jp
w1=pd.DataFrame([[1,2,3],[2,3,4],[3,4,5]])
def grid_test():
print(w1)
wp = jp.WebPage()
jp.Strong(text=str(w1), a=wp)
grid = jp.AgGrid(a=wp)
grid.load_pandas_frame(w1)
return wp
jp.justpy(grid_test)
</code></pre>
|
<p>I encountered the same issue. It resolved for me once I made certain the recordset contained no null values; my original recordset had some nulls.</p>
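<p>A minimal way to guard against that in the example above (assuming it is acceptable to show missing values as empty strings) would be:</p>
<pre><code>grid.load_pandas_frame(w1.fillna(''))   # or w1.dropna() if dropping those rows is fine
</code></pre>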
|
python|pandas|ag-grid|justpy
| 1
|
5,332
| 67,927,429
|
Mark last date record in Pandas/Dask dataframes
|
<p>In the Dask dataset below I have a list of ids (for example <code>1</code> and <code>2</code>) and dates (the last column).</p>
<p>What I need is to add a new column to the dataframe that will have <code>1</code> if the date is the last one for that id. For example, for id <code>2</code> the last date is <code>2021-05-01</code>. If it's not the last, the column should be None or 0.</p>
<p>The entire set is ordered by id and date, in case that's necessary.</p>
<p>I used to do this with SQL, having in the where clause <code>where NOT EXISTS date > (select max(date)) ...</code></p>
<p>Is this possible with Pandas and/or Dask?</p>
<pre><code> pdf = pd.DataFrame({
'id': [1, 1, 1, 2, 2],
'balance': [150, 140, 130, 280, 260],
'date' : ['2021-03-01', '2021-04-01', '2021-05-01', '2021-01-01', '2021-02-01']
})
print(pdf)
id balance date
0 1 150 2021-03-01
1 1 140 2021-04-01
2 1 130 2021-05-01
3 2 280 2021-04-01
4 2 260 2021-05-01
pdf['date2'] = pd.to_datetime(pdf['date'])
ddf = dd.from_pandas(pdf, npartitions=1)
ddf.compute()
id balance date date2
0 1 150 2021-03-01 2021-03-01
1 1 140 2021-04-01 2021-04-01
2 1 130 2021-05-01 2021-05-01
3 2 280 2021-04-01 2021-04-01
4 2 260 2021-05-01 2021-05-01
</code></pre>
<p>For example the end result would be</p>
<pre><code>id balance date date2 last_date_flag
0 1 150 2021-03-01 2021-03-01 0
1 1 140 2021-04-01 2021-04-01 0
2 1 130 2021-05-01 2021-05-01 1
3 2 280 2021-04-01 2021-04-01 0
4 2 260 2021-05-01 2021-05-01 1
</code></pre>
|
<p>This is the approach I would use for pandas, not sure about Dask.</p>
<pre class="lang-py prettyprint-override"><code>pdf['last_date_flag'] = pdf.groupby('id')['date'].transform(lambda x: x == x.max()).astype(int)
</code></pre>
<p>Gives</p>
<pre><code>
id balance date date2 last_date_flag
0 1 150 2021-03-01 2021-03-01 0
1 1 140 2021-04-01 2021-04-01 0
2 1 130 2021-05-01 2021-05-01 1
3 2 280 2021-01-01 2021-01-01 0
4 2 260 2021-02-01 2021-02-01 1
</code></pre>
<p>If you are concerned with scalability this performs faster on large datasets:</p>
<pre><code>
pdf['last_date_flag'] = (pdf['date'] == pdf['id'].map(pdf.groupby('id')['date'].max().to_dict())).astype(int)
</code></pre>
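<p>For Dask, a possible (untested) equivalent is to compute the per-id maximum date, merge it back and compare:</p>
<pre><code>max_dates = (
    ddf.groupby('id')['date2'].max()
       .reset_index()
       .rename(columns={'date2': 'max_date'})
)
ddf = ddf.merge(max_dates, on='id')
ddf['last_date_flag'] = (ddf['date2'] == ddf['max_date']).astype(int)
ddf.compute()
</code></pre>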
|
python|pandas|dataframe|dask|dask-distributed
| 1
|
5,333
| 61,514,746
|
Multiple condition in pandas dataframe - np.where
|
<p>I have the following dataframe</p>
<pre><code>Year M
1991-1990 10
1992-1993 9
</code></pre>
<p>What I am trying to do is an if statement: <strong>=IF(M>9,LEFT(Year),RIGHT(C2,4))*1</strong></p>
<p>So basically, if M is 10, choose the left value of the Year column, else choose the second value.</p>
<p>I tried using np.where but I have no idea how to choose between two values in the same column. </p>
<p>Help?</p>
|
<p>You can do this:</p>
<pre><code>In [448]: df['val'] = np.where( df['M'].gt(9),\
     ...:                 df.Year.str.split('-').str[0],\
     ...:                 df.Year.str.split('-').str[1] )
In [444]: df
Out[444]:
Year M val
0 1991-1990 10 1991
1 1992-1993 9 1993
</code></pre>
|
python|pandas|dataframe|if-statement
| 1
|
5,334
| 68,716,810
|
Checking for a string in two different dataframes and copy the corresponding rows to calculate statistics in Pandas
|
<p>I want to write a python code and have for example 2 different DataFrames (the number of dataframes can be more than 2) as follows:</p>
<pre><code>df1 =
Index Name Age Height
0 Tom 20 166
1 Bill 27 170
2 Jacob 39 180
3 Vivian 26 155
</code></pre>
<pre><code>df2 =
Index Name Age Height
0 Mary 20 166
1 Tom 27 170
2 Bill 39 180
3 Jack 26 155
</code></pre>
<p>I want to check the names in both the dataframes and if they match add corresponding entries in the columns so that the final result looks like a third dataframe:</p>
<pre><code>result =
Index Name Age Height
0 Tom 47 336
1 Bill 66 350
2 Jacob 39 180
3 Vivian 26 155
4 Mary 20 166
5 Jack 26 155
</code></pre>
<p>Tom and Bill have 2 entries in 2 dataframes, so their Age and Height get added and others have a single entry, so the original number is displayed. Thank you in advance.</p>
|
<p>You can concatenate using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>pd.concat</code></a> both the DataFrames then GroupBy name using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>DataFrame.groupby</code></a>.</p>
<pre><code># Assuming `Index` is not a column. If it's a column
# set it as index using `df.set_index("Index")
out = (
pd.concat([df1, df2], ignore_index=True)
.groupby("Name", as_index=False, sort=False)
.sum()
)
out
# Name Age Height
# 0 Tom 47 336
# 1 Bill 66 350
# 2 Jacob 39 180
# 3 Vivian 26 155
# 4 Mary 20 166
# 5 Jack 26 155
</code></pre>
|
python|pandas|dataframe
| 2
|
5,335
| 68,512,446
|
Numpy element-wise isin for two 2d arrays
|
<p>I have two arrays:</p>
<pre class="lang-py prettyprint-override"><code>a = np.array([[1, 2], [3, 4], [5, 6]])
b = np.array([[1, 1, 1, 3, 3],
[1, 2, 4, 5, 9],
[1, 2, 3, 4, 5]])
</code></pre>
<p>The expected output would match the shape of array 'a' and would be:</p>
<pre class="lang-py prettyprint-override"><code>array([True, False], [False, True], [True, False])
</code></pre>
<p>The first dimension size of arrays a and b always matches (in this case 3).</p>
<p>What I wish to calculate is for each index of each array (0 to 2 as there are 3 dimensions here) is if each number in the array 'a' exists in the corresponding second dimension of array 'b'.</p>
<p>I can solve this in a loop with the following code, but I would like to vectorise it to gain a speed boost but having sat here for several hours, I cannot figure it out:</p>
<pre class="lang-py prettyprint-override"><code>output = np.full(a.shape, False)
assert len(a) == len(b)
for i in range(len(a)):
output[i] = np.isin(a[i], b[i])
</code></pre>
<p>Thank you for any guidance! Anything would be very appreciated :)</p>
|
<p>Properly reshape the arrays so they can broadcast correctly while comparing:</p>
<pre><code>(a[...,None] == b[:,None]).any(2)
#[[ True False]
# [False True]
# [ True False]]
</code></pre>
<ul>
<li><code>a[...,None]</code> adds an extra dimension to the end, with shape <code>(3, 2, 1)</code>;</li>
<li><code>b[:,None]</code> inserts a dimension as 2nd axis, with shape <code>(3, 1, 5)</code>;</li>
<li>When you compare the two arrays, both will be broadcasted to <code>(3, 2, 5)</code> so essentially you compare each element in the row of <code>a</code> to each element in the corresponding row of <code>b</code>;</li>
<li>finally you can check if there's any match for every element in <code>a</code>;</li>
</ul>
|
python|arrays|numpy|isin
| 4
|
5,336
| 68,556,561
|
How to get the indexes of the greatest N values greater than a threshold in Numpy?
|
<p>For a project I need to be able to get, from a vector with shape <code>(k, m)</code>, the indexes of the N greatest values of each row greater than a fixed threshold.
For example, if k=3, m=5, N=3 and the threshold is 5 and the vector is :</p>
<pre class="lang-py prettyprint-override"><code>[[3 2 6 7 0],
[4 1 6 4 0],
[7 10 6 9 8]]
</code></pre>
<p>I should get the result (or the flattened version, I don't care) :</p>
<pre class="lang-py prettyprint-override"><code>[[2, 3],
[2],
[1, 3, 4]]
</code></pre>
<p>The indexes don't have to be sorted.</p>
<p>My code is currently :</p>
<pre class="lang-py prettyprint-override"><code>indexes = []
for row, inds in enumerate(np.argsort(results, axis=1)[:, -N:]):
for index in inds:
if results[row, index] > threshold:
indexes.append(index)
</code></pre>
<p>but I feel like I am not using Numpy to its full capacity.</p>
<p>Does anybody know a better and more elegant solution ?</p>
|
<p>How about this method:</p>
<pre><code>import numpy as np
arr = np.array(
[[3, 2, 6, 7, 0],
[4, 1, 6, 4, 0],
[7, 10, 6, 9, 8]]
)
t = 5
n = 3
sorted_idxs = arr.argsort(1)[:, -n:]
sorted_arr = np.sort(arr, 1)[:, -n:]
item_nums = np.cumsum((sorted_arr > t).sum(1))
masked_idxs = sorted_idxs[sorted_arr > t]
idx_lists = np.split(masked_idxs, item_nums[:-1])  # drop the last cut point to avoid a trailing empty array
</code></pre>
<p>output:</p>
<pre><code>[array([2, 3]), array([2]), array([4, 3, 1])]
</code></pre>
|
python|python-3.x|numpy
| 1
|
5,337
| 53,017,598
|
best curve fitting the distribution
|
<p>I tried to use a polynomial (3-degrees) to fit a data series, but it seems that it's still not the best fit (some points are off in graph shown below). I also tried to add a log function to help plot. But result is not improved either.</p>
<p>What would be the best curve fitting here?</p>
<p>Here are the raw data points I have:
<code>
x_values = [ 0.51,0.56444444,0.61888889 , 0.67333333 , 0.72777778, 0.78222222, 0.83666667, 0.89111111 , 0.94555556 , 1. ]
y_values = [0.67154591, 0.66657266, 0.65878351, 0.6488696, 0.63499979, 0.6202393, 0.59887225, 0.56689689, 0.51768976, 0.33029004]
</code></p>
<p>Results with polynomial fit:
<a href="https://i.stack.imgur.com/V5xDA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V5xDA.png" alt="enter image description here"></a></p>
|
<p>It would be better if your curve-fitting procedure were hypothesis-driven, i.e. if you already had an idea of what kind of relationship to expect. The shape looks to me more like an exponential function:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
#the function that describes the data
def func(x, a, b, c, d):
return a * np.exp(b * x + c) + d
x_values = [0.51,0.56444444, 0.61888889, 0.67333333 , 0.72777778, 0.78222222, 0.83666667, 0.89111111 , 0.94555556 , 1. ]
y_values = [0.67154591, 0.66657266, 0.65878351, 0.6488696, 0.63499979, 0.6202393, 0.59887225, 0.56689689, 0.51768976, 0.33029004]
#start values [a, b, c, d]
start = [-.1, 1, 0, .1]
#curve fitting
popt, pcov = curve_fit(func, x_values, y_values, p0 = start)
#output [a, b, c, d]
print(popt)
#calculating the fit curve at a better resolution
x_fit = np.linspace(min(x_values), max(x_values), 1000)
y_fit = func(x_fit, *popt)
#plot data and fit
plt.scatter(x_values, y_values, label = "data")
plt.plot(x_fit, y_fit, label = "fit")
plt.legend()
plt.show()
</code></pre>
<p>This gives the following output:</p>
<p><a href="https://i.stack.imgur.com/jO9ol.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jO9ol.jpg" alt="enter image description here" /></a></p>
<p>This still does not look correct: the first part seems to have a linear offset. If we take this into consideration:</p>
<pre><code>from matplotlib import pyplot as plt
import numpy as np
from scipy.optimize import curve_fit
def func(x, a, b, c, d, e):
return a * np.exp(b * x + c) + d * x + e
x_values = [0.51,0.56444444, 0.61888889, 0.67333333 , 0.72777778, 0.78222222, 0.83666667, 0.89111111 , 0.94555556 , 1. ]
y_values = [0.67154591, 0.66657266, 0.65878351, 0.6488696, 0.63499979, 0.6202393, 0.59887225, 0.56689689, 0.51768976, 0.33029004]
start = [-.1, 1, 0, .1, 1]
popt, pcov = curve_fit(func, x_values, y_values, p0 = start)
print(popt)
x_fit = np.linspace(min(x_values), max(x_values), 1000)
y_fit = func(x_fit, *popt)
plt.scatter(x_values, y_values, label = "data")
plt.plot(x_fit, y_fit, label = "fit")
plt.legend()
plt.show()
</code></pre>
<p>we have the following output:
<a href="https://i.stack.imgur.com/7nzeI.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7nzeI.jpg" alt="enter image description here" /></a></p>
<p>This is now closer to your data points.
But you should go back to your data and think about which model is most likely to reflect reality, then implement that model. You can always construct more complicated functions that fit your data better, but they do not necessarily reflect reality better.</p>
|
numpy|plot|scipy|model-fitting
| 0
|
5,338
| 53,200,722
|
Using str.contains across multiple rows
|
<p>I have a dataframe with five rows that looks like this:</p>
<pre><code>index col1 col2 col3 col4 col5
1 word1 None word1 None None
2 None word1 word2 None None
3 None None None word2 word2
4 word1 word2 None None None
</code></pre>
<p>I'm trying to find all rows that contain both strings in <em>any</em> combination of columns---in this case, rows 2 and 4. Normally I would use the <code>str.contains</code> method to filter by string:</p>
<pre><code>df[df['col1'].str.contains('word1 | word2'), case=False)
</code></pre>
<p>But this only gives me A) results for one column, and B) a True if the column has one word. I intuitively tried <code>df[df[['col1', 'col2', 'col3', 'col4', 'col5']].str.contains('word1' & 'word2'), case=False)</code> but <code>.str.contains</code> doesn't work on DataFrame objects.</p>
<p>Is there a way to do this without resorting to a for loop?</p>
|
<p>Using <code>any</code> </p>
<pre><code>s1=df.apply(lambda x : x.str.contains(r'word1')).any(1)
s2=df.apply(lambda x : x.str.contains(r'word2')).any(1)
df[s1&s2]
Out[452]:
col1 col2 col3 col4 col5
index
2 None word1 word2 None None
4 word1 word2 None None None
</code></pre>
|
python|pandas
| 4
|
5,339
| 65,632,154
|
Group By and Count occurences of values in list of nested dicts
|
<p>I have a JSON file that looks structurally like this:</p>
<pre><code>{
"content": [
{
"name": "New York",
"id": "1234",
"Tags": {
"hierarchy": "CITY"
}
},
{
"name": "Los Angeles",
"id": "1234",
"Tags": {
"hierarchy": "CITY"
}
},
{
"name": "California",
"id": "1234",
"Tags": {
"hierarchy": "STATE"
}
}
]
}
</code></pre>
<p>And as an outcome I would like a table view in CSV like so:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">tag.key</th>
<th style="text-align: left;">tag.value</th>
<th style="text-align: left;">occurrance</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">hierarchy</td>
<td style="text-align: left;">CITY</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">hierarchy</td>
<td style="text-align: left;">STATE</td>
<td style="text-align: left;">1</td>
</tr>
</tbody>
</table>
</div>
<p>Meaning I want to count the occurance of each unique "tag" in my json file and create an output csv that shows this. My original json is a pretty large file.</p>
|
<p>First construct a dictionary object using <code>ast.literal_eval</code>, then split it into key/value tuples in order to create a dataframe with <code>zip</code>. Apply <code>groupby</code> to the newly formed dataframe, and finally write the <code>.csv</code> file with <code>df_agg.to_csv</code>, such as:</p>
<pre><code>import json
import ast
import pandas as pd
Js= """{
"content": [
{
"name": "New York",
"id": "1234",
"Tags": {
"hierarchy": "CITY"
}
},
....
....
{
"name": "California",
"id": "1234",
"Tags": {
"hierarchy": "STATE"
}
}
]
}"""
data = ast.literal_eval(Js)
key = []
value=[]
for i in list(range(0,len(data['content']))):
value.append(data['content'][i]['Tags']['hierarchy'])
for j in data['content'][i]['Tags']:
key.append(j)
df = pd.DataFrame(list(zip(key, value)), columns =['tag.key', 'tag.value'])
df_agg=df.groupby(['tag.key', 'tag.value']).size().reset_index(name='occurrance')
df_agg.to_csv(r'ThePath\\to\\your\\file\\result.csv',index = False)
</code></pre>
|
python|json|python-3.x|pandas|dataframe
| 0
|
5,340
| 65,561,503
|
Re-shaping pandas dataframe to match specific output
|
<p>I am struggling to find an <code>elegant</code> solution to get what I need from my data. I am able to get what I want, but with <code>too much</code> effort; I believe it can be done much better, and that's what I am looking for.</p>
<p>So here is sample of my DataFrame</p>
<pre><code>>>> df = pd.DataFrame({'device_name': ['tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1'], 'interface': ['ethernet3', 'ethernet4', 'ethernet38', 'ethernet7', 'ethernet8', 'ethernet31', 'ethernet1', 'ethernet12', 'ethernet20'], 'tap_port': ['1-tx-a-rx', '1-tx-b-rx', '1-b', '2-tx-a-rx', '2-tx-b-rx', '2-b', '3-tx-a-rx', '3-tx-b-rx', '3-b'], 'switch_name': ['sw_ag1', 'sw_ag1', 'sw_client1', 'sw_ag1', 'sw_ag1', 'sw_client2', 'sw_ag1', 'sw_ag1', 'sw_client3']})
</code></pre>
<h3>Current DataFrame shape:</h3>
<pre><code>device_name interface tap_port switch_name
tap_switch_1 ethernet3 1-tx-a-rx sw_ag1
tap_switch_1 ethernet4 1-tx-b-rx sw_ag1
tap_switch_1 ethernet38 1-b sw_client1
tap_switch_1 ethernet7 2-tx-a-rx sw_ag1
tap_switch_1 ethernet8 2-tx-b-rx sw_ag1
tap_switch_1 ethernet31 2-b sw_client2
tap_switch_1 ethernet1 3-tx-a-rx sw_ag1
tap_switch_1 ethernet12 3-tx-b-rx sw_ag1
tap_switch_1 ethernet20 3-b sw_client3
</code></pre>
<h3>The desired output I am looking to get is this:</h3>
<pre><code>device_name id agg_switch rx_int tx_int client_switch client_port
tap_switch_1 1 sw_ag1 ethernet3 ethernt4 sw_client1 ethernet38
tap_switch_1 2 sw_ag1 ethernet7 ethernt8 sw_client2 ethernet31
tap_switch_1 3 sw_ag1 ethernet1 ethernt12 sw_client3 ethernet20
</code></pre>
<h3>The logic</h3>
<p>So basically I have network setup where one switch is used to tap multiple interfaces. For every <code>client switch port</code> there are two <code>device_name</code> interfaces - one for each direction (RX/TX). I am using <code>tap_port</code> name to combine all that into one group, based on first integer I see, which indicates the "tap_group".</p>
<h3>Current "solution"</h3>
<p>Here is the way I am doing it now, which I don't quite and it doesn't give me the <code>desired output</code>:</p>
<pre><code># Add new `id` column
>>> df['id']=df.tap_port.str[0]
# Get RX/TX direction as new column `direction`
>>> df['direction']=df.tap_port.apply(lambda x: x[-4:] if 'x' in x else '-')
# Trying to get the desired output
>>> df.pivot(index='id', columns='direction')[['switch_name','interface']]
switch_name interface
direction - a-rx b-rx - a-rx b-rx
id
1 sw_client1 sw_ag1 sw_ag1 ethernet38 ethernet3 ethernet4
2 sw_client2 sw_ag1 sw_ag1 ethernet31 ethernet7 ethernet8
3 sw_client3 sw_ag1 sw_ag1 ethernet20 ethernet1 ethernet12
</code></pre>
<p>This is very close to what I need, but not quite as per the <code>desired output</code>.</p>
<p>Many thanks in advance for your help!</p>
|
<p>I don't think this is an optimal solution, but it produces the desired output.</p>
<pre><code>df = pd.DataFrame({'device_name': ['tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1', 'tap_switch_1'], 'interface': ['ethernet3', 'ethernet4', 'ethernet38', 'ethernet7', 'ethernet8', 'ethernet31', 'ethernet1', 'ethernet12', 'ethernet20'], 'tap_port': ['1-tx-a-rx', '1-tx-b-rx', '1-b', '2-tx-a-rx', '2-tx-b-rx', '2-b', '3-tx-a-rx', '3-tx-b-rx', '3-b'], 'switch_name': ['sw_ag1', 'sw_ag1', 'sw_client1', 'sw_ag1', 'sw_ag1', 'sw_client2', 'sw_ag1', 'sw_ag1', 'sw_client3']})
df['col']=pd.DataFrame([i.split('-') for i in df['tap_port']])[2].fillna('').replace({'a':'rx_int','b':'tx_int','':'client port'})
df['id']=[i.split('-')[0] for i in df['tap_port']]
df_pvt=df.pivot(index='id',columns='col',values='interface').reset_index()
x=df_pvt.join(df[['switch_name','device_name']][df['col']=='client port'].rename(columns={'switch_name':'client switch'}).reset_index(drop=True))
final=x.join(df[['switch_name']][df['col']=='tx_int'].rename(columns={'switch_name':'agg_switch'}).reset_index(drop=True))
Out[126]:
id client port rx_int tx_int client switch device_name agg_switch
0 1 ethernet38 ethernet3 ethernet4 sw_client1 tap_switch_1 sw_ag1
1 2 ethernet31 ethernet7 ethernet8 sw_client2 tap_switch_1 sw_ag1
2 3 ethernet20 ethernet1 ethernet12 sw_client3 tap_switch_1 sw_ag1
</code></pre>
|
python|pandas
| 1
|
5,341
| 63,689,848
|
Pandas group by find minimum of column if it doesn't exist return NaN
|
<p>Suppose I have the following dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id': [1,1,1,2,3,2], 'year': ['2020', '2014', '2002', '2020', '2016', '2014'], 'e': [True, False, True, True, False, True]})
df.info()
id year e
1 2020 True
1 2014 False
1 2002 True
2 2020 True
3 2016 False
2 2014 True
</code></pre>
<p>And I want to find the minimum year of each id where <em>e</em> is True, if there isn't any True in <em>e</em> for that id return NaN. The end result would be:</p>
<pre><code>id year
1 2002
2 2014
3 NaN
</code></pre>
|
<p>Try filtering before the <code>groupby</code>, then <code>reindex</code> back to restore the ids that were filtered out:</p>
<pre><code>s = df.loc[df.e].groupby('id').year.min().reindex(df.id.unique()).reset_index()
s
Out[307]:
id year
0 1 2002
1 2 2014
2 3 NaN
</code></pre>
<hr />
<p>Or convert to <code>Categorical</code></p>
<pre><code>df['id'] = pd.Categorical(df['id'])
df.loc[df.e].groupby('id').year.min()
Out[309]:
id
1 2002
2 2014
3 None
Name: year, dtype: object
</code></pre>
|
python|pandas|dataframe
| 1
|
5,342
| 53,679,716
|
pandas isin not filtering data on multiple columns
|
<p>nifty</p>
<pre><code> name date time open high low close
0 NIFTY 20180903 09:16 11736.05 11736.10 11699.35 11700.15
1 NIFTY 20180903 09:17 11699.00 11707.60 11699.00 11701.85
2 NIFTY 20180903 09:18 11702.65 11702.65 11690.95 11692.40
3 NIFTY 20180903 09:19 11692.55 11698.10 11688.65 11698.10
4 NIFTY 20180903 09:20 11698.40 11698.40 11687.25 11687.70
</code></pre>
<p>option</p>
<pre><code> date time option_type strike_price open high low close volume
0 20180903 09:15 CE 11500 313.65 319.10 296.00 299.80 5250
1 20180903 09:16 CE 11500 299.00 303.85 299.00 300.60 3975
2 20180903 09:17 CE 11500 299.05 302.30 290.65 293.25 4500
3 20180903 09:18 CE 11500 294.95 300.00 291.00 300.00 1500
4 20180903 09:19 CE 11500 300.50 300.50 295.60 295.60 975
</code></pre>
<p>In both dfs, I want to keep only those rows whose date and time are present in both. I have tried isin for this, but it is not working as expected.</p>
<pre><code>option=option[(option.date.isin(nifty.date)) & (option.time.isin(nifty.time))]
nifty=nifty[(nifty.date.isin(option.date)) & (nifty.time.isin(option.time))]
</code></pre>
<p>Can anybody help me on this. My expected output is:</p>
<p>nifty</p>
<pre><code> name date time open high low close
0 NIFTY 20180903 09:16 11736.05 11736.10 11699.35 11700.15
1 NIFTY 20180903 09:17 11699.00 11707.60 11699.00 11701.85
2 NIFTY 20180903 09:18 11702.65 11702.65 11690.95 11692.40
3 NIFTY 20180903 09:19 11692.55 11698.10 11688.65 11698.10
</code></pre>
<p>option</p>
<pre><code> date time option_type strike_price open high low close volume
1 20180903 09:16 CE 11500 299.00 303.85 299.00 300.60 3975
2 20180903 09:17 CE 11500 299.05 302.30 290.65 293.25 4500
3 20180903 09:18 CE 11500 294.95 300.00 291.00 300.00 1500
4 20180903 09:19 CE 11500 300.50 300.50 295.60 295.60 975
</code></pre>
|
<p>Use <code>merge</code> on 'date' and 'time' only; this way each DataFrame is reduced to the subset that matches the other:</p>
<pre><code>nifty_ = nifty.merge(option[['date','time']])
option_ = option.merge(nifty[['date', 'time']])
</code></pre>
|
python|pandas
| 1
|
5,343
| 55,166,874
|
Faster pytorch dataset file
|
<p>I have the following problem, I have many files of 3D volumes that I open to extract a bunch of numpy arrays.
I want to get those arrays randomly, i.e. in the worst case I open as many 3D volumes as numpy arrays I want to get, if all those arrays are in separate files.
The IO here isn't great, I open a big file only to get a small numpy array from it.
Any idea how I can store all these arrays so that the IO is better?
I can't pre-read all the arrays and save them all in one file because then that file would be too big to open for RAM.</p>
<p>I looked up LMDB but it all seems to be about Caffe.
Any idea how I can achieve this?</p>
|
<p>I iterated through my dataset, created an HDF5 file and stored the elements in it. It turns out that when the HDF5 file is opened, it doesn't load all the data into RAM; it loads the header instead.
The header is then used to fetch the data on request, and that's how I solved my problem.</p>
<p>Reference:
<a href="http://www.machinelearninguru.com/deep_learning/data_preparation/hdf5/hdf5.html" rel="nofollow noreferrer">http://www.machinelearninguru.com/deep_learning/data_preparation/hdf5/hdf5.html</a></p>
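<p>A minimal sketch of the same idea with <code>h5py</code> (file and dataset names are just placeholders):</p>
<pre><code>import h5py

# Write once: one dataset per array
with h5py.File('volumes.h5', 'w') as f:
    for i, arr in enumerate(list_of_arrays):          # list_of_arrays: your extracted numpy arrays
        f.create_dataset('array_%d' % i, data=arr, compression='gzip')

# Read: opening the file only loads metadata; each array is fetched lazily on access
with h5py.File('volumes.h5', 'r') as f:
    sample = f['array_42'][...]                       # only this array is read from disk
</code></pre>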
|
python|machine-learning|dataset|pytorch|lmdb
| 1
|
5,344
| 55,556,562
|
Dict[str, Any] or Dict[str, Field] in pytext
|
<p>I'm reading the document of pytext (NLP modeling framework built on PyTorch) and this simple method <code>from_config</code>, a factory method to create a component from a config, has lines like <code>Dict[str, Field] = {ExtraField.TOKEN_RANGE: RawField()}</code>.</p>
<pre><code>@classmethod
def from_config(cls, config: Config, model_input_config, target_config, **kwargs):
model_input_fields: Dict[str, Field] = create_fields(
model_input_config,
{
ModelInput.WORD_FEAT: TextFeatureField,
ModelInput.DICT_FEAT: DictFeatureField,
ModelInput.CHAR_FEAT: CharFeatureField,
},
)
target_fields: Dict[str, Field] = {WordLabelConfig._name: WordLabelField.from_config(target_config)}
extra_fields: Dict[str, Field] = {ExtraField.TOKEN_RANGE: RawField()}
kwargs.update(config.items())
return cls(
raw_columns=config.columns_to_read,
targets=target_fields,
features=model_input_fields,
extra_fields=extra_fields,
**kwargs,
)
</code></pre>
<p>and </p>
<pre><code> def preprocess(self, data: List[Dict[str, Any]]):
tokens = []
for row in data:
tokens.extend(self.preprocess_row(row))
return [{"text": tokens}]
</code></pre>
<p>How can a dictionary have keys with 2 items? What exactly is this?</p>
<p>I would appreciate any pointer! </p>
|
<p>What you're seeing are python type annotations. You can read about the syntax, design and rationale <a href="https://www.python.org/dev/peps/pep-0484/" rel="nofollow noreferrer">here</a> and about the actual implementation (possible types, how to construct custom ones, etc) <a href="https://docs.python.org/3/library/typing.html" rel="nofollow noreferrer">here</a>. Note that here <code>List</code> and <code>Dict</code> are upper cased - <code>Dict[str, Any]</code> is meant to construct the <em>type</em> "a dictionary with string keys and Any values" and not to access an instance of that type.</p>
<p>Those are optional and by default are not used for anything (so you can just ignore them when reading your code, because python also does). However, there are tools like <a href="http://mypy-lang.org/" rel="nofollow noreferrer">mypy</a> which can interpret these type annotations and check whether they are consistent.</p>
<p>I don't know for sure how they are used in <code>torchtext</code> - I don't use it myself and I haven't found anything quickly searching the documentation - but they are likely helpful to the developers who use some special tooling. But they can also be helpful to you! From your perspective, they are best treated as <em>comments</em> rather than code. Reading the signature of <code>preprocess</code> you know that <code>data</code> should be a <code>list</code> of <code>dict</code>s with <code>str</code> keys and any value type. If you have bugs in your code and find that <code>data</code> is a <code>str</code> itself, you know for sure that it is a bug (perhaps not the only one).</p>
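<p>A toy example of the same annotation style (nothing pytext-specific) that a checker such as mypy could verify:</p>
<pre><code>from typing import Any, Dict, List

def preprocess(data: List[Dict[str, Any]]) -> List[Dict[str, List[str]]]:
    # 'data' is declared as a list of dicts with str keys and values of any type
    tokens: List[str] = []
    for row in data:
        tokens.extend(str(value) for value in row.values())
    return [{"text": tokens}]

preprocess([{"text": "hello", "length": 5}])   # OK
# preprocess("not a list")                     # mypy would flag this argument type
</code></pre>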
|
python|pytorch|pytext
| 2
|
5,345
| 56,563,041
|
CalledProcessError while installing Tensorflow using Bazel
|
<p>I am trying to install Tensorflow from source using Bazel on Raspberry pi. I am following the official documentation as given <a href="https://www.tensorflow.org/install/source" rel="nofollow noreferrer">here</a>. When I run the <code>./configure</code> in Tensorflow directory after completing all the steps written for Bazel, I get the following error</p>
<pre><code>/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: cannot execute binary file: Exec format error
/home/cvit/bin/bazel: line 88: /home/cvit/.bazel/bin/bazel-real: Success
Traceback (most recent call last):
File "./configure.py", line 1552, in <module>
main()
File "./configure.py", line 1432, in main
check_bazel_version('0.15.0')
File "./configure.py", line 450, in check_bazel_version
curr_version = run_shell(['bazel', '--batch', '--bazelrc=/dev/null', 'version'])
File "./configure.py", line 141, in run_shell
output = subprocess.check_output(cmd)
File "/usr/lib/python2.7/subprocess.py", line 223, in check_output
raise CalledProcessError(retcode, cmd, output=output)
subprocess.CalledProcessError: Command '['bazel', '--batch', '--bazelrc=/dev/null', 'version']' returned non-zero exit status 1
</code></pre>
<p>I didn't put the user flag in the bazel installation. So, I think this might be bazelrc error so I tried to set <code>$PATH=$BAZEL/bin</code> but nothing happened.</p>
<p>Please give any suggestion !!</p>
|
<p>Probably the problem is that an inappropriate version of Bazel is installed.
Run <code>bazel version</code> in the tensorflow directory and see if there is an error.
If there is a problem with the Bazel version, check the <code>.bazelversion</code> file; if it contains a version that isn't installable with apt, download the installer from <a href="https://github.com/bazelbuild/bazel/releases" rel="nofollow noreferrer">https://github.com/bazelbuild/bazel/releases</a> and install it, otherwise install it with apt.
After that everything should work fine.</p>
|
tensorflow|raspberry-pi|bazel
| 4
|
5,346
| 66,816,984
|
Colors assosiated with dataframe column in folium
|
<p>I have a dataframe looking like this:</p>
<pre><code>Lat | Long | Label
x1 | y1 | id1
x2 | y2 | id2
x3 | y3 | id3
</code></pre>
<p>and I want to plot <code>Lat</code> and <code>Long</code> in a folium map, where the markers are colored based on the value of <code>Label</code>. The problem is that <code>Label</code> is a <code>string</code>. If it was an <code>int</code>, I could do the following</p>
<pre><code>m = folium.Map(location=[21.37000, -158.08000], zoom_start=1000)
color_pallete = sns.color_palette()
color_pallete = sns.color_palette("Set2", 8000)
color_pallete = color_pallete.as_hex()
for index, row in test.iterrows():
c = color_pallete[int(row['label'])]
folium.CircleMarker([row['Lat'], row['Long']], fill_color=c, radius=2, fill=True, color=c).add_to(m)
</code></pre>
<p>Can anyone help me approach this when the Labels are strings? Ie. could I create another column of integers, based on the values of <code>Label</code>? If so, how to do that?</p>
<p>Edit: Just adding <code>color=row['label']</code> in the args of <code>CircleMarker</code> doesn't help, cause that way everything is a hue of black, not very distinguisable.</p>
<p>Edit2: changed labels to represent uuids</p>
|
<p>For anyone with the same problem, I used the following to create random color hexes,</p>
<pre><code>color = "#%06x" % random.randint(0, 0xFFFFFF)  # leading '#' so folium accepts it as a hex color
</code></pre>
<p>as proposed <a href="https://stackoverflow.com/questions/13998901/generating-a-random-hex-color-in-python/18035471">here</a>, and then create a dictionary with the labels as keys and the hexes as values, to pass to folium.</p>
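<p>Putting it together with the loop from the question (column names taken from there), a sketch:</p>
<pre><code>import random
import folium

labels = test['label'].unique()
color_map = {lab: "#%06x" % random.randint(0, 0xFFFFFF) for lab in labels}

m = folium.Map(location=[21.37000, -158.08000], zoom_start=10)
for index, row in test.iterrows():
    c = color_map[row['label']]
    folium.CircleMarker([row['Lat'], row['Long']], radius=2, fill=True,
                        color=c, fill_color=c).add_to(m)
</code></pre>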
|
pandas|folium
| 0
|
5,347
| 47,199,871
|
What is b flops in tfprof (tensorflow profiler) model analysis report?
|
<p>Eg: </p>
<pre><code>_TFProfRoot (--/3163.86b flops)
InceptionResnetV2/InceptionResnetV2/Mixed_6a/Branch_1/Conv2d_0b_3x3/convolution (173.41b/173.41b flops)
</code></pre>
<p>What does <code>b flops</code> mean?
I guess <code>m flops</code> means <code>mega flops</code>. But, what does <code>'b' flops</code> mean?
Apparently, <code>b flops</code> is bigger than <code>m flops</code> since I know that model analysis report prints flops values in descending order.</p>
|
<p>It means billions :)</p>
<p>Quoted from the source:</p>
<pre><code>return strings::Printf("%.2fb", n / 1000000000.0);
</code></pre>
|
tensorflow|profiling|flops
| 1
|
5,348
| 47,515,996
|
Apply function to every column value of each row using pandas
|
<p>For this dataframe : </p>
<pre><code>columns = ['A','B', 'C']
data = np.array([[1,2,2] , [4,5,4], [7,8,18]])
df2 = pd.DataFrame(data,columns=columns)
df2['C']
</code></pre>
<p>If the difference between consecutive rows for column C is <= 2 then the previous and current row should be returned. So I'm attempting to filter out rows where difference for previous row > 2.</p>
<p>So expecting these array values to be returned : </p>
<pre><code> [1,2,2]
[4,5,4]
[7,8,18]
</code></pre>
<p>I'm attempting to implement this functionality using the shift function : </p>
<pre><code>df2[(df2.A - df2.shift(1).A >= 2)]
</code></pre>
<p>The result of which is : </p>
<pre><code> A B C
1 4 5 4
2 7 8 18
</code></pre>
<p>I think need to apply function to each row in order to achieve this ?</p>
<p>Update : </p>
<p>Alternative use case :</p>
<pre><code>columns = ['A','B', 'C']
data = np.array([[1,2,2] , [2,5,3], [7,8,16]])
df2 = pd.DataFrame(data,columns=columns)
df2[df2.A.diff().shift(-1) >= 2]
</code></pre>
<p>Returned is : </p>
<pre><code> A B C
1 2 5 3
</code></pre>
<p>but expecting</p>
<pre><code> A B C
1 2 5 3
1 7 8 16
</code></pre>
<p>so in this case expecting the next and current row to be returned as difference between 2 & 8 in
<code>2 5 3</code> & <code>8 8 18</code> is > 2</p>
<p>Update 2 :</p>
<p>Edge case : if the last value being compared is < 2 then the row is ignored</p>
<pre><code>columns = ['A','B', 'C']
data = np.array([[2,2,2] , [3,5,3], [5,8,16], [6,8,16]])
df2 = pd.DataFrame(data,columns=columns)
df2[df2.A.diff().shift(-1).ffill() >= 2]
</code></pre>
<p>returns : </p>
<pre><code>A B C
1 3 5 3
</code></pre>
|
<p>I believe you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.shift.html" rel="nofollow noreferrer"><code>shift</code></a> and last <code>NaN</code>s replace by <code>ffill</code>:</p>
<pre><code>a = df2[df2.A.diff().shift(-1).ffill() >= 2]
#same as
a = df2[df2.A.diff().shift(-1).ffill().ge(2)]
print (a)
A B C
1 2 5 3
2 7 8 16
</code></pre>
|
python|pandas
| 1
|
5,349
| 47,410,517
|
Concat pandas dataframes without following a certain sequence
|
<p>I have data files which are converted to pandas dataframes which sometimes share column names while others sharing time series index, which all I wish to combine as one dataframe based on both column and index whenever matching. Since there is no sequence in naming they appear randomly for concatenation. If two dataframe have different columns are concatenated along <code>axis=1</code> it works well, but if the resulting dataframe is combined with new df with the column name from one of the earlier merged pandas dataframe, it fails to concat. For example with these data <a href="https://www.dropbox.com/s/tldju1uil4yfjqs/example_files.zip?dl=0" rel="nofollow noreferrer">files</a> :</p>
<pre><code>import pandas as pd
df1 = pd.read_csv('0.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df2 = pd.read_csv('1.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df3 = pd.read_csv('2.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
data1 = pd.DataFrame()
file_list = [df1, df2, df3] # fails
# file_list = [df2, df3,df1] # works
for fn in file_list:
if data1.empty==True or fn.columns[1] in data1.columns:
data1 = pd.concat([data1,fn])
else:
data1 = pd.concat([data1,fn], axis=1)
</code></pre>
<p>I get <code>ValueError: Plan shapes are not aligned</code> when I try to do that. In my case there is no way to first load all the DataFrames and check their column names. Having that I could combine all <code>df</code> with same column names to later only <code>concat</code> these resulting dataframes with different column names along <code>axis=1</code> which I know always works as shown below. However, a solution which requires preloading all the DataFrames and rearranging the sequence of concatenation is not possible in my case (it was only done for a working example above). I need a flexibility in terms of in whichever sequence the information comes it can be concatenated with the larger dataframe <code>data1</code>. Please let me know if you have a suggested suitable approach.</p>
|
<p>If you go through the loop step by step, you can see that in the first iteration it goes into the <code>if</code>, so <code>data1</code> is equal to <code>df1</code>. In the second iteration it goes to the <code>else</code>, since <code>data1</code> is not empty and <code>'Temperature product barrel ValueY'</code> is not in <code>data1.columns</code>.
After the else, <code>data1</code> has some duplicated column names; in every row, one of the two duplicated columns is <code>NaN</code> and the other holds a float. This is the reason why <code>pd.concat()</code> fails.</p>
<p>You can aggregate the duplicated columns before you try to concatenate, to get rid of them:</p>
<pre><code>import numpy as np

for fn in file_list:
    if data1.empty==True or fn.columns[1] in data1.columns:
        # new: collapse duplicated column names before concatenating
        data1 = data1.groupby(data1.columns, axis=1).agg(np.nansum)
data1 = pd.concat([data1,fn])
else:
data1 = pd.concat([data1,fn], axis=1)
</code></pre>
<p>After that, you would get </p>
<pre><code>data1.shape
(30, 23)
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
5,350
| 68,030,403
|
Create multiple columns from one column (with the same data)
|
<p>I have this column (similar, but with a lot more entries)</p>
<pre><code>import pandas as pd
numbers = range(1,16)
sequence = []
for number in numbers:
sequence.append(number)
df = pd.DataFrame(sequence).rename(columns={0: 'sequence'})
</code></pre>
<p><a href="https://i.stack.imgur.com/yv3WB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yv3WB.png" alt="df" /></a></p>
<p>and I want to distribute the same values into lots of more columns periodically (and automatically) to get something like this (but with a bunch of values)</p>
<p><a href="https://i.stack.imgur.com/x7MU6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x7MU6.png" alt="df2" /></a></p>
<p>Thanks</p>
|
<p>Use <code>reshape</code> with <code>5</code> for the number of new rows (after the transpose); <code>-1</code> lets NumPy count the number of columns automatically:</p>
<pre><code>numbers = range(1,16)
df = pd.DataFrame(np.array(numbers).reshape(-1, 5).T)
print (df)
0 1 2
0 1 6 11
1 2 7 12
2 3 8 13
3 4 9 14
4 5 10 15
</code></pre>
<p>If length of values in <code>range</code> cannot be filled to <code>N</code> rows here is possible solution:</p>
<pre><code>L = range(1,22)
N = 5
filled = 0
arr = np.full(((len(L) - 1)//N + 1)*N, filled)
arr[:len(L)] = L
df = pd.DataFrame(arr.reshape((-1, N)).T)
print(df)
0 1 2 3 4
0 1 6 11 16 21
1 2 7 12 17 0
2 3 8 13 18 0
3 4 9 14 19 0
4 5 10 15 20 0
</code></pre>
|
python|pandas|dataframe
| 3
|
5,351
| 68,355,371
|
Match the first 3 characters of a string to specific column
|
<p>I have a dataframe,df, where I would like to take the first 3 characters of a string from a specific column and place these characters under another column</p>
<p><strong>Data</strong></p>
<pre><code>id value stat
aaa 10 aaa123
aaa 20
aaa 500 aaa123
bbb 20
bbb 10 bbb123
aaa 5 aaa123
aaa123
ccc123
</code></pre>
<p><strong>Desired</strong></p>
<pre><code> id value stat
aaa 10 aaa123
aaa 20
aaa 500 aaa123
bbb 20
bbb 10 bbb123
aaa 5 aaa123
aaa aaa123
ccc ccc123
</code></pre>
<p><strong>Doing</strong></p>
<pre><code> df.append({'aaa':aaa123}, ignore_index=True)
</code></pre>
<p>I believe I have to append the values, perhaps using a mapping or append function, however, not sure how to specify first 3 characters. Any suggestion is appreciated</p>
|
<p>One option would be <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> + <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.html" rel="nofollow noreferrer"><code>Series.str</code></a> to slice the first 3 values:</p>
<pre><code>df['id'] = df['id'].fillna(df['stat'].str[:3])
</code></pre>
<pre><code> id value stat
0 aaa 10.0 aaa123
1 aaa 20.0 NaN
2 aaa 500.0 aaa123
3 bbb 20.0 NaN
4 bbb 10.0 bbb123
5 aaa 5.0 aaa123
6 aaa NaN aaa123
7 ccc NaN ccc123
</code></pre>
<p>Probably overkill for this situation, but <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.extract.html" rel="nofollow noreferrer"><code>Series.str.extract</code></a> could also be used:</p>
<pre><code>df['id'] = df['id'].fillna(df['stat'].str.extract(r'(^.{3})')[0])
</code></pre>
<hr />
<p><code>mask</code> if those are empty strings and not <code>NaN</code>:</p>
<pre><code>df['id'] = df['id'].mask(df['id'].eq('')).fillna(df['stat'].str[:3])
</code></pre>
|
python|pandas|numpy
| 5
|
5,352
| 59,395,609
|
Remove row when within group we reach a treshold in pandas
|
<p>Hello, I need help with pandas.</p>
<p>Here is the table :</p>
<pre><code>Col1 Col2
Grp1 80.3
Grp1 129.2
Grp1 356.0
Grp1 435.3
Grp2 20.2
Grp2 34.0
Grp2 67.0
Grp3 130.3
Grp3 167.9
</code></pre>
<p>And the idea is to remove rows when, within each Grp, the number in Col2 has already gone above 100.
Here is what I should get:</p>
<pre><code>Col1 Col2
Grp1 80.3
Grp1 129.2
Grp2 20.2
Grp2 34.0
Grp2 67.0
Grp3 130.3
</code></pre>
<p>Does someone have an idea using pandas? I guess we should use groupby? Thank you</p>
|
<p>You could do this with <code>groupby</code>:</p>
<pre><code>s = df['Col2'].gt(100).groupby(df['Col1']).transform('idxmax')
df[df.index <= s]
</code></pre>
<p>Output:</p>
<pre><code> Col1 Col2
0 Grp1 80.3
1 Grp1 129.2
4 Grp2 20.2
7 Grp3 130.3
</code></pre>
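<p>If groups that never exceed 100 should keep all their rows (as in the expected output, where Grp2 keeps three rows), a cumulative-sum mask is one way to express "keep rows up to and including the first value above 100"; a sketch (needs pandas 0.24+ for <code>fill_value</code> in <code>shift</code>):</p>
<pre><code>mask = df.groupby('Col1')['Col2'].transform(
    lambda x: x.gt(100).cumsum().shift(fill_value=0).eq(0)
)
df[mask.astype(bool)]
</code></pre>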
|
pandas
| 3
|
5,353
| 57,112,346
|
Sort a Dataframe with a column from another Dataframe
|
<p>I have got two dataframes which looks like these. </p>
<pre><code>df1 =
Name Order
John 2
Alice 3
Alisha 1
Mike 5
Katie 6
Steve 4
df2 =
Name Condition Action
Mike Stable Out
Mike Unstable In
Steve Stable Out
Steve Unstable In
Katie Stable Out
Katie Unstable In
Alisha Stable Out
Alisha Unstable In
John Stable Out
John Unstable In
Alice Stable Out
Alice Unstable In
</code></pre>
<p>I want to sort df2 based on the order number given in df1.</p>
<p>I have tried using .index() and .reindex() but since there are repeating rows in df2, it is giving an error.
ValueError: cannot reindex from a duplicate axis</p>
<p>The expected outcome should be something like this.</p>
<pre><code>df_sort =
Name Condition Action
Alisha Stable Out
Alisha Unstable In
John Stable Out
John Unstable In
Alice Stable Out
Alice Unstable In
Steve Stable Out
Steve Unstable In
Mike Stable Out
Mike Unstable In
Katie Stable Out
Katie Unstable In
</code></pre>
|
<p>First add the <code>Order</code> column to df2:</p>
<pre><code>df2['Order'] = df2.Name.map(df1.set_index('Name').Order)
</code></pre>
<p>Then do the sort and remove the Order column:</p>
<pre><code>df2.sort_values('Order').drop('Order', 1)
</code></pre>
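<p>An alternative (sketch) is to make <code>Name</code> an ordered categorical so that no helper column is needed:</p>
<pre><code>order = df1.sort_values('Order')['Name']
df_sort = (df2.assign(Name=pd.Categorical(df2['Name'], categories=order, ordered=True))
              .sort_values('Name'))
</code></pre>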
|
python-3.x|pandas|dataframe
| 1
|
5,354
| 57,035,248
|
How to optimize searching and comparing rows in pandas?
|
<p>I have two dfs. Base is 100k rows, Snps is 54k rows.</p>
<p>This is structure of dfs:</p>
<p>base:</p>
<pre><code>SampleNum SampleIdInt SecondName
1 ASA2123313 A2123313
2 ARR4112234 R4112234
3 AFG4234122 G4234122
4 GGF412233 F412233
5 GTF423512 F423512
6 POL23523552 L23523552
...
</code></pre>
<p>And this is <code>Snps</code> df:</p>
<pre><code> SampleNum SampleIdInt
1 ART2114155
2 KWW4112234
3 AFG4234122
4 GGR9999999
5 YUU33434324
6 POL23523552
...
</code></pre>
<p>And now look for example on 2nd row in Snps and base. They have a same numbers (the first 3 chars are not important to me now). </p>
<p>So I created a <code>common</code> list containing the numbers from <code>snps</code> which also appear in <code>base</code>, i.e. all rows with the SAME numbers between the dfs (common has length 15k).</p>
<pre><code>common_list = [4112234, 4234122, 23523552]
</code></pre>
<p>And now I want create three new lists. </p>
<p><code>confirmedSnps</code> = where whole SampleIdInt is identical as in base. In this example: <code>AFG4234122</code>. For this I have a sure that <code>secondName</code> will be proper.</p>
<p>un_comfirmedSnpS = where I have a good number but first three chars are different. Example: <code>KWW4112234</code> in <code>SnpS</code> and <code>ARR4112234</code> in base. In this case, I'm not sure that <code>SecondName</code> is proper, so I need to check it later. </p>
<p>And last <code>moreThanOne</code> list. That list should append all duplicate rows. For example If in base I will have KWW4112234 and AFG4112234 both should go to that list. </p>
<p>I wrote some code. It works fine, but the problem is time: I have 15k elements to filter, and each element takes about 4 seconds to process. That means the whole loop would run for 17h!
I am looking for help optimizing that code.</p>
<p>That's my code:</p>
<pre><code>comfirmedSnps = []
un_comfirmedSnps = []
moreThanOne = []
for i in range(len(common)):
testa = baza[baza['SampleIdInt'].str.contains(common[i])]
testa = testa.SampleIdInt.unique()
print("StepOne")
testb = snps[snps['SampleIdInt'].str.contains(common[i])]
testb = testb.SampleIdInt.unique()
print("StepTwo")
if len(testa) == 1 and len(testb) == 1:
if (testa == testb) == True:
comfirmedSnps.append(testb)
else:
un_comfirmedSnps.append(testb)
else:
print("testa has more than one contains records. ")
moreThanOne.append(testb)
print("StepTHREE")
print(i,"/", range(len(common)))
</code></pre>
<p>I added the Step prints to check which part takes most of the time. It's the code between <code>StepOne</code> and <code>StepTwo</code>; the first and third steps run instantly.</p>
<p>Can someone help me with this case? I'm sure most of you will see a better solution to this problem.</p>
|
<p>What you are trying to do is commonly called <a href="https://en.wikipedia.org/wiki/Join_(SQL)" rel="nofollow noreferrer">join</a>, which annoyingly enough is called <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a> in pandas. There's just the minor annoyance of the three initial letters to deal with, but that's easy:</p>
<pre><code>snps['numeric_id'] = snps.SampleIdInt.apply(lambda s: s[3:])
base['numeric_id'] = base.SampleIdInt.apply(lambda s: s[3:])
</code></pre>
<p>now you can compute the three dataframes:</p>
<pre><code>confirmed = snps.merge(base, on='SampleIdInt')

merged = snps.merge(base, on='numeric_id')  # default suffixes: _x (snps) and _y (base)
unconfirmed = merged[merged.SampleIdInt_x != merged.SampleIdInt_y]

more_than_one = snps.groupby('numeric_id').filter(lambda g: len(g) > 1)
</code></pre>
<p>This is untested against your real data, but hopefully you get the idea.</p>
|
python|pandas|dataframe
| 1
|
5,355
| 57,139,676
|
SavedModel - TFLite - SignatureDef - TensorInfo - Get intermediate Layer outputs
|
<p>I would like to get intermediate layers output of a TFLite graph. Something in the lines of below.</p>
<p><a href="https://stackoverflow.com/questions/56885007/visualize-tflite-graph-and-get-intermediate-values-of-a-particular-node">Visualize TFLite graph and get intermediate values of a particular node?</a></p>
<p>The above solution works on frozen graphs only. Since SavedModel is the preferred way of serializing the model in TF 2.0, I would like to have a solution with a saved model. I tried to pass --output_arrays for "toco" with savedModelDir as input. This is not helping.</p>
<p>From the documentation, it looks like SignatureDefs in SavedModel is the option to achieve this. But, I could not get it working. </p>
<pre><code>x = test_images[0:1]
output = model.predict(x, batch_size=1)
signature_def = signature_def_utils.build_signature_def(
inputs={name:"x:0", dtype: DT_FLOAT, tensor_shape: (1, 28,28, 1)})
outputs = {{name: "output:0", dtype: DT_FLOAT, tensor_shape: (1, 10)},
{name:"Dense_1:0", dtype: DT_FLOAT, tensor_shape: (1, 10)}})
tf.saved_model.save(model, './tf-saved-model-sigdefs', signature_def)
</code></pre>
<p>Can you share an example usage of SignatureDefs for this purpose?
BTW, I have been playing around with the below tutorial for this experiment.
<a href="https://www.tensorflow.org/beta/tutorials/images/intro_to_cnns" rel="nofollow noreferrer">https://www.tensorflow.org/beta/tutorials/images/intro_to_cnns</a></p>
|
<p><code>tf.lite.Interpreter</code> has a new parameter as of TF 2.5: <code>experimental_preserve_all_tensors</code>. When set to <code>True</code>, it allows you to query the output of any node. Here is how I used it:</p>
<pre><code>import numpy as np
import tensorflow as tf
import cv2
import argparse
parser = argparse.ArgumentParser(description='Use tensorflow framework to determine class')
parser.add_argument('--model-path', '-m', default='model.tflite', type=str, help='path to the model')
parser.add_argument('--nodes', '-n', default='concat,convert_scores', type=str, help='comma separated list of node names')
parser.add_argument('--image', '-i', default='test.jpg', type=str, help='image to process')
args = parser.parse_args()
# read in model
interpreter = tf.lite.Interpreter(
args.model_path, experimental_preserve_all_tensors=True
)
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# prepare image
img = cv2.imread(args.image)
inp = cv2.resize(img, tuple(input_details[0]["shape"][1:3]))
inp = inp[:, :, [2, 1, 0]] # BGR2RGB
inp = inp[np.newaxis, :, :, :].astype(input_details[0]["dtype"])
# invoke model
interpreter.set_tensor(input_details[0]["index"], inp)
interpreter.invoke()
# write out layer output to files
for node in args.nodes.split(','):
for tensor_details in interpreter.get_tensor_details():
if tensor_details["name"] == node:
tensor = interpreter.get_tensor(tensor_details["index"])
np.save(node, tensor)
break
</code></pre>
<p>You can use <code>netron</code> to graphically view the network described by your tflite model file; the node names can be found by clicking on a node.</p>
|
tensorflow2.0
| 0
|
5,356
| 56,976,671
|
Time series plot of categorical or binary variables in pandas or matplotlib
|
<p>I have data that represent a time series of categorical variables. I want to display the transitions in categories below a traditional line plot of related continuous time series to show off context as time evolves. I'd like to know the best way to do this. My attempt was in terms of Rectangles. The appearance is a bit weird, and importantly the axis labels for the x axis don't render as dates.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
from pandas.plotting import register_matplotlib_converters
import matplotlib.dates as mdates
register_matplotlib_converters()
t0 = pd.DatetimeIndex(["2017-06-01 00:00","2017-06-17 00:00","2017-07-03 00:00","2017-08-02 00:00","2017-08-09 00:00","2017-09-01 00:00"])
t1 = pd.DatetimeIndex(["2017-06-01 00:00","2017-08-15 00:00","2017-09-01 00:00"])
df0 = pd.DataFrame({"cat":[0,2,1,2,0,1]},index = t0)
df1 = pd.DataFrame({"op":[0,1,0]},index=t1)
# Create new plot
fig,ax = plt.subplots(1,figsize=(8,3))
data_layout = {
"cat" : {0: ('bisque','Low'),
1: ('lightseagreen','Medium'),
2: ('rebeccapurple','High')},
"op" : {0: ('darkturquoise','Open'),
1: ('tomato','Close')}
}
vars =("cat","op")
dfs = [df0,df1]
all_ticks = []
leg = []
for j,(v,d) in enumerate(zip(vars,dfs)):
dvals = d[v][:].astype("d")
normal = mpl.colors.Normalize(vmin=0, vmax=2.)
colors = plt.cm.Set1(0.75*normal(dvals.as_matrix()))
handles = []
for i in range(d.count()-1):
s = d[v].index.to_pydatetime()
level = d[v][i]
base = d[v].index[i]
w = s[i+1] - s[i]
patch=mpl.patches.Rectangle((base,float(j)),width=w,color=data_layout[v][level][0],height=1,fill=True)
ax.add_patch(patch)
for lev in data_layout[v]:
print data_layout[v][level]
handles.append(mpl.patches.Patch(color=data_layout[v][lev][0],label=data_layout[v][lev][1]))
all_ticks.append(j+0.5)
leg.append( plt.legend(handles=handles,loc = (3-3*j+1)))
plt.axhline(y=1.,linewidth=3,color="gray")
plt.xlim(pd.Timestamp(2017,6,1).to_pydatetime(),pd.Timestamp(2017,9,1).to_pydatetime())
plt.ylim(0,2)
ax.add_artist(leg[0]) # two legends on one axis
ax.format_xdata = mdates.DateFormatter('%Y-%m-%d') # This fails
plt.yticks(all_ticks,vars)
plt.show()
</code></pre>
<p>which produces this with no dates and has jittery lines:<a href="https://i.stack.imgur.com/GlUwH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GlUwH.png" alt="mock up"></a>. How do I fix this? Is there a better way entirely?</p>
|
<p>This is a way to display dates on x-axis:</p>
<p>In your code substitute the line that fails with this one:</p>
<pre><code>ax.xaxis.set_major_formatter((mdates.DateFormatter('%Y-%m-%d')))
</code></pre>
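<p>Depending on your date range you may also want a tick locator and rotated labels, for example:</p>
<pre><code>ax.xaxis.set_major_locator(mdates.MonthLocator())  # one tick per month
fig.autofmt_xdate()                                 # rotate the date labels for readability
</code></pre>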
<p>But I don't remember how it should look; can you show us the end result again?</p>
|
pandas|matplotlib|plot|time-series|categorical-data
| 1
|
5,357
| 56,895,221
|
Following a TensorFlow tutorial and hitting issues with model.predict
|
<p><a href="https://towardsdatascience.com/all-the-steps-to-build-your-first-image-classifier-with-code-cf244b015799" rel="nofollow noreferrer">I am following a tutorial for TensorFlow</a> and I am having problems during the model prediction phase.</p>
<p>The final bit of code is :</p>
<pre><code>import cv2
import tensorflow as tf
CATEGORIES = ["bishopB", "bishopW", "empty", "kingB", "kingW",
"knightB", "knightW", "pawnB", "pawnW",
"queenB", "queenW", "rookB", "rookW"]
def prepare(file):
IMG_SIZE = 50
img_array = cv2.imread(file, cv2.IMREAD_GRAYSCALE)
new_array = cv2.resize(img_array, (IMG_SIZE, IMG_SIZE))
return new_array.reshape(-1, IMG_SIZE, IMG_SIZE, 1)
model = tf.keras.models.load_model("CNN.model")
image = "test.jpg" #your image path
prediction = model.predict([image])
prediction = list(prediction[0])
print(CATEGORIES[prediction.index(max(prediction))])
</code></pre>
<p>This should allow me to get a prediction based upon a file input.</p>
<p>However when I run it, I get the following error:</p>
<pre><code> prediction = model.predict([image])
File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 1060, in predict
x, check_steps=True, steps_name='steps', steps=steps)
File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training.py", line 2651, in _standardize_user_data
exception_prefix='input')
File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training_utils.py", line 334, in standardize_input_data
standardize_single_array(x, shape) for (x, shape) in zip(data, shapes)
File "/Users/stuff/Library/Python/2.7/lib/python/site-packages/tensorflow/python/keras/engine/training_utils.py", line 265, in standardize_single_array
if (x.shape is not None and len(x.shape) == 1 and
AttributeError: 'str' object has no attribute 'shape'
</code></pre>
<p>Can anybody please help me understand what I have done wrong here? I don't believe it is even getting to the point where it processes my test image.</p>
|
<p>You are passing a plain file path string to <code>model.predict</code>. Pass the preprocessed array instead, i.e. <code>prepare(image)</code> (using the <code>prepare</code> helper defined in the question), rather than just a string.</p>
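<p>For example, reusing the code from the question:</p>
<pre><code>model = tf.keras.models.load_model("CNN.model")

image = "test.jpg"  # your image path
prediction = model.predict(prepare(image))  # pass the preprocessed array, not the path string
prediction = list(prediction[0])
print(CATEGORIES[prediction.index(max(prediction))])
</code></pre>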
|
python|tensorflow|keras|artificial-intelligence
| 1
|
5,358
| 57,172,340
|
Change column value based on conditions in other columns in Pandas
|
<p>I want to change the value in 1 column in the data frame based on the conditions and comparison of values in other columns.</p>
<p>This is the original data frame:</p>
<pre><code> start end diff
0 2016-05-08 unknown 3
1 2016-05-08 2017-09-08 5
2 2018-09-01 2017-09-01 5
</code></pre>
<p>This is the data frame that I want:</p>
<pre><code> start end diff
0 2016-05-08 unknown 3
1 2016-05-08 2017-09-08 1
2 2018-09-01 2017-09-01 -1
</code></pre>
<p>Basically, I want the values in diff column to remain the same if end is unknown, otherwise, I want it to be the value of year value of end - year value of start.</p>
<p>Can anyone suggest a piece of code?</p>
<p>Thanks in advance!</p>
|
<p>Here is one way using <code>np.where</code>, after converting the dates with <code>to_datetime</code>. Also, please do not name columns after built-in method names like <code>diff</code>, <code>sum</code>, <code>min</code>, <code>max</code> and <code>cumsum</code>. </p>
<pre><code>df.start=pd.to_datetime(df.start)
df.end=pd.to_datetime(df.end,errors = 'coerce')
df['diff']=np.where(df.end.isnull(),df['diff'],df.end.dt.year-df.start.dt.year)
df
Out[135]:
start end diff
0 2016-05-08 NaT 3.0
1 2016-05-08 2017-09-08 1.0
2 2018-09-01 2017-09-01 -1.0
</code></pre>
|
python|pandas
| 1
|
5,359
| 45,853,387
|
When I restore the saved graph and variables. how can I get the placehold in TF
|
<p>I have used </p>
<pre><code> tf.add_to_collection('Input', X)
tf.add_to_collection('TrueLabel', Y)
tf.add_to_collection('loss', loss)
tf.add_to_collection('accuracy', accuracy)
saver0 = tf.train.Saver()
saver0.save(sess, './save/model')
saver0.export_meta_graph('./save/model.meta')
</code></pre>
<p>to save my model in one session scope. Then I restore it from another session scope. Currently I only have the training data, and I have saved the placeholders X and Y, but I cannot use them at this point:</p>
<pre><code>train_data, train_label = get_data()
with tf.Session() as sess:
new_saver = tf.train.import_meta_graph('./save/model.meta')
new_saver.restore(sess, './save/model')
graph = sess.graph
X = graph.get_collection('Input')
Y = graph.get_collection('TrueLabel')
loss = graph.get_collection('loss')
accuracy = graph.get_collection('accuracy')
for _ in range(5):
loss_str, accuracy_str = sess.run([loss, accuracy], {X:train_data, Y:train_label})
print('loss:{}, accuracy:{}'.format(loss_str, accuracy_str))
</code></pre>
<p>How can I do that? I found that the tutorial docs do not give a complete example.</p>
|
<p>I solved this myself. Once the graph and variables are loaded, obtain the placeholder by name, e.g. <code>graph.get_tensor_by_name('Input:0')</code> if the placeholder was created with <code>name='Input'</code>, or — since the question adds the tensors to collections — via <code>graph.get_collection('Input')[0]</code>. Obtain the loss, accuracy and anything else you collected the same way.</p>
<p>A full example could be found from <a href="https://github.com/sunkevin1214/TF_implementation/blob/master/test_funs/test_save_load.py" rel="nofollow noreferrer">https://github.com/sunkevin1214/TF_implementation/blob/master/test_funs/test_save_load.py</a></p>
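<p>A minimal sketch based on the save code in the question (assuming the tensors were added to the collections exactly as shown):</p>
<pre><code>train_data, train_label = get_data()
with tf.Session() as sess:
    new_saver = tf.train.import_meta_graph('./save/model.meta')
    new_saver.restore(sess, './save/model')
    graph = sess.graph
    # each collection holds a single tensor, so take the first element
    X = graph.get_collection('Input')[0]
    Y = graph.get_collection('TrueLabel')[0]
    loss = graph.get_collection('loss')[0]
    accuracy = graph.get_collection('accuracy')[0]
    loss_val, acc_val = sess.run([loss, accuracy], {X: train_data, Y: train_label})
    print('loss:{}, accuracy:{}'.format(loss_val, acc_val))
</code></pre>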
|
tensorflow
| 0
|
5,360
| 51,100,406
|
Creating python function to create categorical bins in pandas
|
<p>I'm trying to create a reusable function in python 2.7(pandas) to form categorical bins, i.e. group less-value categories as 'other'. Can someone help me to create a function for the below: col1, col2, etc. are different categorical variable columns.</p>
<pre><code>##Reducing categories by binning categorical variables - column1
a = df.col1.value_counts()
#get top 5 values of index
vals = a[:5].index
df['col1_new'] = df.col1.where(df.col1.isin(vals), 'other')
df = df.drop(['col1'],axis=1)
##Reducing categories by binning categorical variables - column2
a = df.col2.value_counts()
#get top 6 values of index
vals = a[:6].index
df['col2_new'] = df.col2.where(df.col2.isin(vals), 'other')
df = df.drop(['col2'],axis=1)
</code></pre>
|
<p>You can use:</p>
<pre><code>df = pd.DataFrame({'A':list('abcdefabcdefabffeg'),
'D':[1,3,5,7,1,0,1,3,5,7,1,0,1,3,5,7,1,0]})
print (df)
A D
0 a 1
1 b 3
2 c 5
3 d 7
4 e 1
5 f 0
6 a 1
7 b 3
8 c 5
9 d 7
10 e 1
11 f 0
12 a 1
13 b 3
14 f 5
15 f 7
16 e 1
17 g 0
</code></pre>
<hr>
<pre><code>def replace_under_top(df, c, n):
a = df[c].value_counts()
#get top n values of index
vals = a[:n].index
#assign columns back
df[c] = df[c].where(df[c].isin(vals), 'other')
#rename processes column
df = df.rename(columns={c : c + '_new'})
return df
</code></pre>
<hr>
<p>Test:</p>
<pre><code>df1 = replace_under_top(df, 'A', 3)
print (df1)
A_new D
0 other 1
1 b 3
2 other 5
3 other 7
4 e 1
5 f 0
6 other 1
7 b 3
8 other 5
9 other 7
10 e 1
11 f 0
12 other 1
13 b 3
14 f 5
15 f 7
16 e 1
17 other 0
</code></pre>
<hr>
<pre><code>df2 = replace_under_top(df, 'D', 4)
print (df2)
A D_new
0 other 1
1 b 3
2 other 5
3 other 7
4 e 1
5 f other
6 other 1
7 b 3
8 other 5
9 other 7
10 e 1
11 f other
12 other 1
13 b 3
14 f 5
15 f 7
16 e 1
17 other other
</code></pre>
|
python|python-2.7|pandas|dataframe
| 3
|
5,361
| 66,702,577
|
Compare a column in 2 different dataframes in pandas (only 1 column is same in both dataframes)
|
<p>I have 2 dataframes <code>df1</code> and <code>df2</code> and I want to compare <code>'col1'</code> of both dataframes and get the rows from <code>df1</code> where <code>'col1'</code> values don't match. Only <code>'col1'</code> is common in both dataframes.</p>
<p>Suppose I have:</p>
<pre><code>df1 = pd.DataFrame({
'col1': range(1, 6),
'col2': range(10, 60, 10),
'col3': [*'abcde']
})
df2 = pd.DataFrame({
'col1': range(1, 4),
'cola': ['Aa', 'bcd', 'h'],
'colb': [12, 'sadf', 'dd']
})
print(df1)
col1 col2 col3
0 1 10 a
1 2 20 b
2 3 30 c
3 4 40 d
4 5 50 e
print(df2)
col1 cola colb
0 1 Aa 12
1 2 bcd sadf
2 3 h dd
</code></pre>
<p>I want to get:</p>
<pre><code> col1 col2 col3
0 4 40 d
1 5 50 e
</code></pre>
|
<h3>Quick and Dirty</h3>
<pre><code>df1.append(df1.merge(df2.col1)).drop_duplicates(keep=False)
col1 col2 col3
3 4 40 d
4 5 50 e
</code></pre>
|
python|pandas|dataframe
| 1
|
5,362
| 57,707,999
|
python pandas: convert/transform between iat/iloc and at/loc indexing
|
<p>I have the iloc index in a Dataframe and want to get the corresponding loc index. In other words: I would like to have a function <code>ilocIndex_to_locIndex</code> converting the <code>ilocIndex</code> to <code>locIndex</code></p>
<pre><code>df = pd.DataFrame({1 : [1,2,3,4], 2 : [5,6,7,8]})
df = df.drop([1])
ilocIndex = 2
df.iloc[ilocIndex]
locIndex = df.ilocIndex_to_locIndex(ilocIndex)
df.loc[locIndex]
</code></pre>
|
<p>You can subscript the <code>.index</code>:</p>
<pre><code>>>> df<b>.index[2]</b>
3</code></pre>
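<p>With the question's variables, a minimal sketch would be:</p>
<pre><code>ilocIndex = 2
locIndex = df.index[ilocIndex]  # positional index -> label (here: 3)
df.iloc[ilocIndex]
df.loc[locIndex]                # same row
</code></pre>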
|
python|pandas
| 3
|
5,363
| 57,587,738
|
Search for a Pattern in a Pandas Column, extract the value on the left of the pattern
|
<p>I have a column in a Pandas Dataframe like this (dtype = "O"): </p>
<pre><code>Column_string
! 111 PATTERN1 .......,,,,,,.... !444PATTERN2
! 222 PATTERN3 .......,,,,,,.... !555 PATTERN3
! 333 PATTERN4 .......,,,,,,.... !666 PATTERN5
</code></pre>
<p>I want to <strong>extract a value on the left side of a pattern, until the '!'.</strong> For example, if I am looking for PATTERN1, the result that I want is: 111. </p>
<p>I want to create new columns, based on a specific pattern. So the desired output (if I am only looking for PATTERN1 and PATTERN2): </p>
<pre><code>Column_string PATTERN1 PATTERN2
! 111 PATTERN1 .......,,,,,,.... !444PATTERN2 111 444
! 222 PATTERN3 .......,,,,,,.... !555 PATTERN3 none none
! 333 PATTERN4 .......,,,,,,.... !666 PATTERN5 none none
</code></pre>
|
<p>use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.findall.html" rel="nofollow noreferrer">str.findall</a></p>
<pre><code>##sample df
Column_string
0 ! 111 PATTERN1 .......,,,,,,.... !444PATTERN2
1 ! 222 PATTERN3 .......,,,,,,.... !555 PATTERN3
2 ! 333 PATTERN4 .......,,,,,,.... !666 PATTERN5
3 3434 PATTERN .................... 435 PATTERN
</code></pre>
<hr>
<pre><code>patterns = pd.DataFrame(df['Column_string '].str.findall('((?<=!)\s*\d+\s*(?=PATTERN))').tolist()).rename({0:'PATTERN1',1:'PATTERN2'},axis=1)
df.join(patterns)
</code></pre>
<hr>
<pre><code> Column_string PATTERN1 PATTERN2
0 ! 111 PATTERN1 .......,,,,,,.... !444PATTERN2 111 444
1 ! 222 PATTERN3 .......,,,,,,.... !555 PATTERN3 222 555
2 ! 333 PATTERN4 .......,,,,,,.... !666 PATTERN5 333 666
3 3434 PATTERN .................... 435 PATTERN None None
</code></pre>
<p><strong>Note</strong>: If the PATTERN keywords in the string refer to specific patterns of interest, then the below works</p>
<pre><code>##extract the number value where pattern1 and pattern2 is present
print(df.join(pd.DataFrame(df['Column_string '].str.findall('((?<=!)\s*\d+\s*(?=PATTERN1|PATTERN2))').tolist()).rename({0:'PATTERN1',1:'PATTERN2'},axis=1)))
</code></pre>
<hr>
<pre><code> Column_string PATTERN1 PATTERN2
0 ! 111 PATTERN1 .......,,,,,,.... !444PATTERN2 111 444
1 ! 222 PATTERN3 .......,,,,,,.... !555 PATTERN3 None None
2 ! 333 PATTERN4 .......,,,,,,.... !666 PATTERN5 None None
3 3434 PATTERN .................... 435 PATTERN None None
</code></pre>
|
python|string|pandas
| 1
|
5,364
| 73,172,757
|
Local variable referenced before assignment in If statement when calculating mean absolute error
|
<p>I'm trying to add a weight so that the error is penalized less when the prediction is greater than the actual value in a forecast. Here's my code; however, I keep getting:</p>
<blockquote>
<p>UnboundLocalError: local variable 'under' referenced before assignment</p>
</blockquote>
<pre><code>import numpy as np
def mae(y, y_hat):
if np.where(y_hat >= y):
over = np.mean(0.5*(np.abs(y - y_hat)))
elif np.where(y_hat < y):
under = np.mean(np.abs(y - y_hat))
return (over + under) / 2
</code></pre>
<p>I've tried setting 'under' to global but that doesn't work either. This is probably an easy fix though I'm more of an R user.</p>
|
<p>Because of the <code>if</code> and <code>elif</code> statements, by the time you reach the return statement either <code>under</code> or <code>over</code> will not be defined. Therefore, you either need to initialize <code>under</code> and <code>over</code> with starting values or rework the logic, because with your current logic only one of those variables will be defined.</p>
<h2>EDIT</h2>
<p>So you changed it to <code>(over + under) / 2</code> as the return statement. Still one of them isn't going to be defined. So you should initialize them as 0. Such as:</p>
<pre><code>import numpy as np
def mae(y, y_hat):
under = 0
over = 0
if np.where(y_hat >= y):
over = np.mean(0.5*(np.abs(y - y_hat)))
elif np.where(y_hat < y):
under = np.mean(np.abs(y - y_hat))
return (over + under) / 2
</code></pre>
<p>Then they won't affect the output at all when not in use.</p>
|
python|numpy|if-statement|mse
| 1
|
5,365
| 51,812,972
|
Extracting quantitative Information out of Strings
|
<p>I am analyzing the Open Food Facts dataset.
The dataset is very messy and has a column called 'quantity' with entries like the following: </p>
<p>'100 g ',<br>
'5 oz (142 g)',<br>
'12 oz',<br>
'200 g ',<br>
'12 oz (340 g)',<br>
'10 f oz (296ml) ',<br>
'750 ml',<br>
'1 l',<br>
'250 ml',
'8 OZ',<br>
'10.5 oz (750 g)',<br>
'1 gallon (3.78 L) ',<br>
'27 OZ (1 LB 11 OZ) 765g ',<br>
'75 cl',</p>
<p>As you can see the values and units of measurement are all over the place! Sometimes the quantity is given in two different measurements...
My goal is to create a new column 'quantity_in_g' in my pandas data frame where I extract the information out of the string and create an integer value based on the number of grams from the 'quantity' column.
So if the quantity column has '200 g' I want the integer 200 and if it says '1 kg' I want the integer 1000. I would also like to convert the other units of measurements to grams. For '2 oz' I want the integer 56 and for 1 L I would like to get 1000.<br>
Could someone help me to convert this column?
I would really appreciate it!<br>
Thanks in advance</p>
|
<pre><code>raw_data_lst = ['100 g ','5 oz (142 g)','12 oz','200 g ','12 oz (340 g)','10 f oz (296ml)','750 ml','1 l','250 ml', '8 OZ',]
# 10 f oz (296ml) don't know what f is
# if there is more data like this, loop over gram_conv_dict.keys() instead of hard-coding the units as done below
in_grams_colm = []
gram_conv_dict ={
'g':1,
'oz': 28.3495,
'kg':1000,
'l': 1000 # assuming 1 litre of water --> grams
}
# ml --> g is tricky as density varies
def convert2num(string_num):
try:
return int(string_num)
except ValueError:
return float(string_num)
def get_in_grams(unit):
try:
return gram_conv_dict[unit.lower()]
except:
print('don\'t know how much grams is present in 1',unit+'.')
return 1
for data in raw_data_lst:
i = 0
quantity_str =''
quantity_num = 0
while i < len(data):
if 47 < ord(data[i]) < 58 or data[i] == '.':
quantity_str+= data[i]
else:
            # most abbreviations have length <= 2, hence data[i+1:i+3]; you could also pass the whole data[i+1:]
            # alternatively, check directly whether gram_conv_dict has the key data[i+1:i+3].strip()
break
i+=1
quantity_num = convert2num(quantity_str)*get_in_grams(data[i+1:i+3].strip()) # assuming each data has this format numberspace-- len 2 abbrv
in_grams_colm.append(quantity_num) # if u want only integer int(quantity_num)
#print(in_grams_colm)
def nice_print():
for _ in in_grams_colm:
print('{:.2f}'.format(_))
nice_print()
'''
output
don't know how much grams is present in 1 f.
don't know how much grams is present in 1 ml.
don't know how much grams is present in 1 ml.
100.00
141.75
340.19
200.00
340.19
10.00
750.00
1000.00
250.00
226.80'''
</code></pre>
|
python|regex|pandas|feature-extraction|text-extraction
| 0
|
5,366
| 36,195,485
|
Setting column types while reading csv with pandas
|
<p>Trying to read <strong>csv</strong> file into <strong><em>pandas</em></strong> dataframe with the following formatting</p>
<pre><code>dp = pd.read_csv('products.csv', header = 0, dtype = {'name': str,'review': str,
'rating': int,'word_count': dict}, engine = 'c')
print dp.shape
for col in dp.columns:
print 'column', col,':', type(col[0])
print type(dp['rating'][0])
dp.head(3)
</code></pre>
<p>This is the output:</p>
<pre><code>(183531, 4)
column name : <type 'str'>
column review : <type 'str'>
column rating : <type 'str'>
column word_count : <type 'str'>
<type 'numpy.int64'>
</code></pre>
<p><a href="https://i.stack.imgur.com/764Kv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/764Kv.png" alt="enter image description here"></a></p>
<p>I can sort of understand that <strong><em>pandas</em></strong> might be finding it difficult to convert a string representation of a dictionary into a dictionary given <a href="https://stackoverflow.com/questions/988228/converting-a-string-to-dictionary">this</a> and <a href="https://stackoverflow.com/questions/13675942/converting-string-to-dict?lq=1">this</a>. But how can the content of the "rating" column be both str and numpy.int64???</p>
<p>By the way, tweaks like not specifying an engine or header do not change anything.</p>
<p>Thanks and regards</p>
|
<p>In your loop you are doing:</p>
<pre><code>for col in dp.columns:
print 'column', col,':', type(col[0])
</code></pre>
<p>and you are correctly seeing <code>str</code> as the output everywhere because <code>col[0]</code> is the first letter of the name of the column, which is a string.</p>
<p>For example, if you run this loop:</p>
<pre><code>for col in dp.columns:
print 'column', col,':', col[0]
</code></pre>
<p>you will see the first letter of the string of each column name is printed out - this is what <code>col[0]</code> is.</p>
<p>Your loop only iterates on the <em>column names</em>, not on the <em>series data</em>.</p>
<p>What you really want is to check the type of each column's data (not its header or part of its header) in a loop.</p>
<p>So do this instead to get the types of the column data (non-header data):</p>
<pre><code>for col in dp.columns:
print 'column', col,':', type(dp[col][0])
</code></pre>
<p>This is similar to what you did when printing the type of the <code>rating</code> column separately.</p>
|
python|csv|dictionary|pandas|types
| 6
|
5,367
| 36,104,016
|
Pandas: pivot with rows and columns in a given order
|
<p>I have a dataframe with hierarchical rows, which is the result of a transposed pivot. The second row index is a number and gets sorted in ascending order (which is what I want), but the first row index is a string and gets sorted alphabetically (which I don't want). Similarly, the column names are strings, and are sorted alphabetically, which I don't want.</p>
<p>How can I reorganise the dataframe so that rows and columns appear in the order I want? The only thing I though of is to rename them adding a number at the beginning, but it's messy and I'd rather avoid it, if possible.</p>
<p>A minimal example of what my data looks like is below.</p>
<pre><code>import pandas as pd
import numpy as np
consdf=pd.DataFrame()
for mylocation in ['North','South']:
for scenario in np.arange(1,4):
df= pd.DataFrame()
df['mylocation'] = [mylocation]
df['scenario']= [scenario]
df['this'] = np.random.randint(10,100)
df['that'] = df['this'] * 2
df['something else'] = df['this'] * 3
consdf=pd.concat((consdf, df ), axis=0, ignore_index=True)
mypiv = consdf.pivot('mylocation','scenario').transpose()
</code></pre>
<h2>Note on potential duplicates</h2>
<p>I understand similar questions have been asked, but I couldn't apply any of those solutions to my multi-index.</p>
|
<p>The function <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow"><code>reindex</code></a> is made for that; just adapt it to handle the columns as well:</p>
<pre><code>In [1]: mypiv
Out[1]:
mylocation North South
scenario
this 1 24 11
2 19 53
3 11 92
that 1 48 22
2 38 106
3 22 184
something else 1 72 33
2 57 159
3 33 276
In [2]: mypiv.reindex(['that', 'this', 'something else'], level=0) \
.T.reindex(['South','North']).T
Out[2]:
mylocation South North
scenario
that           1        22     48
               2       106     38
               3       184     22
this           1        11     24
               2        53     19
               3        92     11
something else 1        33     72
               2       159     57
               3       276     33
</code></pre>
|
pandas|dataframe|pivot
| 3
|
5,368
| 7,716,092
|
Creating greyscale video using Python, OpenCV, and numpy arrays
|
<p>I am using 32-bit python with OpenCV 2.3.1. I am trying to write 2-dimensional numpy arrays to a opencv video writer. My code is similar to :</p>
<pre><code>import cv2 as cv
import numpy as np
import sys
fourcc = cv.cv.CV_FOURCC('D', 'I', 'V', 'X')
writer = cv.cv.CreateVideoWriter("test.mpg", fourcc, 10, (256,256))
if not writer:
print "Error"
sys.exit(1)
for ii in range(numberOfFrames):
numpy_image = GetFrame(ii) #Gets a random image
cv_image = cv.cv.CreateImage((256,256), cv.IPL_DEPTH_8U, 1)
    cv.cv.SetData(cv_image, numpy_image.tostring(), numpy_image.dtype.itemsize*1*256)
cv.cv.WriteFrame(writer, cv_image)
del writer
</code></pre>
<p>I can see that I have the appropriate data in my numpy array. And if I try reading the data back from the iplImage I see it is still there. However, writing the frame does not appear to do anything. No file is being made or throwing any error. What could I be doing wrong? Thanks in advance.</p>
|
<p>I've never used OpenCV, but FWIW I write numpy arrays to video files by piping them to mencoder (based on VokkiCoder's <code>VideoSink</code> class <a href="http://vokicodder.blogspot.co.uk/2011/02/numpy-arrays-to-video.html" rel="nofollow">here</a>). It's very fast and seems to work pretty reliably, and it will also write RGB video.</p>
<p><a href="http://pastebin.com/aaKgjsST" rel="nofollow">Here's a paste</a>. It also includes a <code>VideoSource</code> class for reading movie files to numpy arrays using a very similar approach.</p>
|
python|opencv|numpy
| 1
|
5,369
| 37,933,925
|
Pandas: Can pandas groupby filter work on original object?
|
<p>Starting with this question as base.</p>
<p><a href="https://stackoverflow.com/questions/13446480/python-pandas-remove-entries-based-on-the-number-of-occurrences">Python Pandas: remove entries based on the number of occurrences</a></p>
<pre><code>data = pandas.DataFrame(
{'pid' : [1,1,1,2,2,3,3,3],
'tag' : [23,45,62,24,45,34,25,62],
})
# pid tag
# 0 1 23
# 1 1 45
# 2 1 62
# 3 2 24
# 4 2 45
# 5 3 34
# 6 3 25
# 7 3 62
g = data.groupby('tag')
g.filter(lambda x: len(x) > 1) # keeps only groups with more than one row
# pid tag
# 1 1 45
# 2 1 62
# 4 2 45
# 7 3 62
#This would create a new object g:
g = g.filter(lambda x: len(x) > 1) #where g is now a dataframe.
</code></pre>
<p>I was wondering: is there a way to filter out 'groups' by deleting
them from the original object <code>g</code>? And would it be faster than creating a new <code>groupby</code> object from the filtered result? </p>
|
<p>A couple of options (yours is at the bottom):</p>
<p>The first one is in place and as quick as I could make it. It's a bit quicker than your solution, but not by virtue of dropping rows in place. I get even better performance with the second option, which does not modify in place.</p>
<pre><code>%%timeit
data = pd.DataFrame(
{'pid' : [1,1,1,2,2,3,3,3],
'tag' : [23,45,62,24,45,34,25,62],
})
mask = ~data.duplicated(subset=['tag'], keep=False)
data.drop(mask[mask].index, inplace=True)
data
1000 loops, best of 3: 1.16 ms per loop
</code></pre>
<hr>
<pre><code>%%timeit
data = pd.DataFrame(
{'pid' : [1,1,1,2,2,3,3,3],
'tag' : [23,45,62,24,45,34,25,62],
})
data = data.loc[data.duplicated(subset=['tag'], keep=False)]
data
1000 loops, best of 3: 719 µs per loop
</code></pre>
<hr>
<pre><code>%%timeit
data = pd.DataFrame(
{'pid' : [1,1,1,2,2,3,3,3],
'tag' : [23,45,62,24,45,34,25,62],
})
g = data.groupby('tag')
g = g.filter(lambda x: len(x) > 1)
g
1000 loops, best of 3: 1.55 ms per loop
</code></pre>
|
python|pandas|filter|group-by
| 1
|
5,370
| 37,820,107
|
Efficiently reshape numpy array
|
<p>I am working with NumPy arrays.</p>
<p>I have a <code>2N</code> length vector <code>D</code> and want to reshape part of it into an <code>N x N</code> array <code>C</code>.</p>
<p>Right now this code does what I want, but is a bottleneck for larger <code>N</code>:</p>
<pre><code>import numpy as np
M = 1000
t = np.arange(M)
D = np.sin(t) # initial vector is a sin() function
N = M / 2
C = np.zeros((N,N))
for a in xrange(N):
for b in xrange(N):
C[a,b] = D[N + a - b]
</code></pre>
<p>Once <code>C</code> is made I go ahead and do some matrix arithmetic on it, etc.</p>
<p>This nested loop is pretty slow, but since this operation is essentially a change in indexing I figured that I could use NumPy's builtin reshape (<code>numpy.reshape</code>) to speed this part up. </p>
<p>Unfortunately, I cannot seem to figure out a good way of transforming these indices.</p>
<p>Any help speeding this part up?</p>
|
<p>You can use <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="noreferrer"><code>NumPy broadcasting</code></a> to remove those nested loops -</p>
<pre><code>C = D[N + np.arange(N)[:,None] - np.arange(N)]
</code></pre>
<p>One can also use <a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.take.html" rel="noreferrer"><code>np.take</code></a> to replace the indexing, like so -</p>
<pre><code>C = np.take(D,N + np.arange(N)[:,None] - np.arange(N))
</code></pre>
<p>A closer look reveals the pattern to be close to <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.toeplitz.html" rel="noreferrer"><code>toeplitz</code></a> and <a href="http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.linalg.hankel.html" rel="noreferrer"><code>hankel</code></a> matrices. So, using those, we would have two more approaches to solve it, though with comparable speedups as with broadcasting. The implementations would look something like these -</p>
<pre><code>from scipy.linalg import toeplitz
from scipy.linalg import hankel
C = toeplitz(D[N:],np.hstack((D[0],D[N-1:0:-1])))
C = hankel(D[1:N+1],D[N:])[:,::-1]
</code></pre>
<p><strong>Runtime test</strong></p>
<pre><code>In [230]: M = 1000
...: t = np.arange(M)
...: D = np.sin(t) # initial vector is a sin() function
...: N = M / 2
...:
In [231]: def org_app(D,N):
...: C = np.zeros((N,N))
...: for a in xrange(N):
...: for b in xrange(N):
...: C[a,b] = D[N + a - b]
...: return C
...:
In [232]: %timeit org_app(D,N)
...: %timeit D[N + np.arange(N)[:,None] - np.arange(N)]
...: %timeit np.take(D,N + np.arange(N)[:,None] - np.arange(N))
...: %timeit toeplitz(D[N:],np.hstack((D[0],D[N-1:0:-1])))
...: %timeit hankel(D[1:N+1],D[N:])[:,::-1]
...:
10 loops, best of 3: 83 ms per loop
100 loops, best of 3: 2.82 ms per loop
100 loops, best of 3: 2.84 ms per loop
100 loops, best of 3: 2.95 ms per loop
100 loops, best of 3: 2.93 ms per loop
</code></pre>
|
python|arrays|performance|numpy|vectorization
| 9
|
5,371
| 37,724,208
|
assign values to dataframe defined by multiindex
|
<p>I have a 5-dimensional df created by</p>
<pre><code>factor_list = ['factor1', 'factor2', 'factor3']
method_list = ['method1', 'method2', 'method3']
grouping_list = ['group1', 'group2', 'group3']
parameter_list = [1, 5, 10, 20, 40]
iterables = [factor_list, method_list, parameter_list, grouping_list]
axis_names = ['factor', 'method', 'param', 'grouping']
multi_index = pd.MultiIndex.from_product(iterables, names=axis_names)
column_list = ['a', 'b', 'c', 'd', 'e']
results = pd.DataFrame(index=multi_index, columns=column_list)
results.sort_index(inplace=True)
</code></pre>
<p>Then I do</p>
<pre><code>slice = results.loc['factor2'].copy()
</code></pre>
<p>and fill the computed values into slice (done by a function).<br>
Then I find that I can not copy the result back</p>
<pre><code>results.loc['factor2'] = slice
</code></pre>
<p>This line raises the following error: </p>
<pre><code>cannot align on a multi-index with out specifying the join levels
</code></pre>
<p>The exact question is:<br>
How to copy the content in slice back to the <code>factor2</code> part of the results <code>DataFrame</code>? </p>
|
<p>This works for me <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow">using slicers</a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html" rel="nofollow"><code>loc</code></a>:</p>
<pre><code>import pandas as pd, numpy as np
factor_list = ['factor1', 'factor2', 'factor3']
method_list = ['method1', 'method2', 'method3']
grouping_list = ['group1', 'group2', 'group3']
strategy_list = ['s1', 's2', 's3']
parameter_list = [1, 5, 10, 20, 40] # -999 mean not applicable
iterables = [factor_list, strategy_list, parameter_list, grouping_list]
axis_names = ['factor', 'method', 'param', 'grouping']
multi_index = pd.MultiIndex.from_product(iterables, names=axis_names)
column_list = ['a', 'b', 'c', 'd', 'e']
results = pd.DataFrame(np.arange(len(multi_index)*len(column_list)).reshape((len(multi_index),len(column_list))),
index=multi_index, columns=column_list)
results.sort_index(inplace=True)
</code></pre>
<pre><code>print (results.head())
a b c d e
factor method param grouping
factor1 s1 1 group1 0 1 2 3 4
group2 5 6 7 8 9
group3 10 11 12 13 14
5 group1 15 16 17 18 19
group2 20 21 22 23 24
idx = pd.IndexSlice
sli = results.loc[idx['factor1',:,:,['group1','group3']],:]
#some test function
sli = sli + 0.1
print (sli.head())
a b c d e
factor method param grouping
factor1 s1 1 group1 0.1 1.1 2.1 3.1 4.1
group3 10.1 11.1 12.1 13.1 14.1
5 group1 15.1 16.1 17.1 18.1 19.1
group3 25.1 26.1 27.1 28.1 29.1
10 group1 30.1 31.1 32.1 33.1 34.1
results.loc[idx['factor1',:,:,['group1','group3']],:] = sli
print (results.head())
a b c d e
factor method param grouping
factor1 s1 1 group1 0.1 1.1 2.1 3.1 4.1
group2 5.0 6.0 7.0 8.0 9.0
group3 10.1 11.1 12.1 13.1 14.1
5 group1 15.1 16.1 17.1 18.1 19.1
group2 20.0 21.0 22.0 23.0 24.0
</code></pre>
|
python|pandas|dataframe|multi-index
| 2
|
5,372
| 37,766,012
|
Adding multiple ndarry using numpy
|
<p>I am quite new to python numpy.</p>
<p>If I have a list of numpy vectors, what is the best way to ensure the computation is fast? </p>
<p>I am currently doing this, which I find to be too slow.</p>
<pre><code>vec = sum(list of numpy vectors) # 4 vectors of 500 dimensions each
</code></pre>
<p>It does take up quite a noticeable amount of time using sum.</p>
|
<p>Is this what you are trying to do (but with much larger arrays)?</p>
<pre><code>In [193]: sum([np.ones((2,3)),np.arange(6).reshape(2,3)])
Out[193]:
array([[ 1., 2., 3.],
[ 4., 5., 6.]])
</code></pre>
<p><code>500 dimensions each</code> is an unclear description. Do you mean an array with shape <code>(500,)</code> or with <code>ndim==500</code>? If the latter, just how many elements total are there.</p>
<p>The fact that it is a list of 4 of these arrays shouldn't be a big deal. What's the time for <code>array1 + array2</code>?</p>
<p>If the arrays just have 500 elements each, the sum time is trivial:</p>
<pre><code>In [195]: timeit sum([np.arange(500),np.arange(500),np.arange(500),np.arange(500)])
10000 loops, best of 3: 20.9 µs per loop
</code></pre>
<p>on the other hand a sum of arrays with many small dimensions is slower simply because such an array is much larger</p>
<pre><code>In [204]: x=np.ones((3,)*10)
In [205]: timeit z=sum([x,x,x,x])
1000 loops, best of 3: 1.6 ms per loop
</code></pre>
|
python|numpy|vector
| 2
|
5,373
| 38,008,512
|
How can I get the value of the error during training in Tensorflow?
|
<p>In the TensorFlow MNIST beginners tutorial, code excerpts here: </p>
<pre><code>cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
sess = tf.Session()
sess.run(init)
#-----training loop starts here-----
for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>Is it possible to access/retrieve the values of the cross_entropy error, Weights, and biases while inside the loop? I want to plot the error, and possibly a histogram of the weights.</p>
<p>Thanks!</p>
|
<p>As others have said, TensorBoard is the tool for this purpose.</p>
<p>Here I can give you how to.</p>
<p>First, let's define a function for logging min, max, mean and std-dev for the tensor.</p>
<pre><code>def variable_summaries(var, name):
with tf.name_scope("summaries"):
mean = tf.reduce_mean(var)
tf.scalar_summary('mean/' + name, mean)
with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
tf.scalar_summary('stddev/' + name, stddev)
tf.scalar_summary('max/' + name, tf.reduce_max(var))
tf.scalar_summary('min/' + name, tf.reduce_min(var))
tf.histogram_summary(name, var)
</code></pre>
<p>Then, create a summary operation after you build the graph, as shown below.
This code saves the weights and biases of the first layer, along with the cross-entropy, in the "mnist_tf_log" directory.</p>
<pre><code>variable_summaries(W_fc1, "W_fc1")
variable_summaries(b_fc1, "b_fc1")
tf.scalar_summary("cross_entropy:", cross_entropy)
summary_op = tf.merge_all_summaries()
summary_writer = tf.train.SummaryWriter("mnist_tf_log", graph_def=sess.graph)
</code></pre>
<p>Now you're all set.
You can log those values by running <code>summary_op</code> and passing the result to <code>summary_writer</code>.</p>
<p>Here is an example for logging every 10 training steps.</p>
<pre><code>for i in range(1000):
batch_xs, batch_ys = mnist.train.next_batch(100)
if i % 10 == 0:
_, summary_str = sess.run( [train_step, summary_op], feed_dict={x: batch_xs, y_: batch_ys})
summary_writer.add_summary(summary_str, i)
summary_writer.flush()
else:
sess.run( train_step, feed_dict={x: batch_xs, y_: batch_ys})
</code></pre>
<p>Execute TensorBoard after you run the code.</p>
<pre><code>python /path/to/tensorboard/tensorboard.py --logdir=mnist_tf_log
</code></pre>
<p>Then you can see the result by opening <a href="http://localhost:6006" rel="nofollow noreferrer">http://localhost:6006</a> with your web browser.</p>
<p><a href="https://i.stack.imgur.com/1iewu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1iewu.png" alt="enter image description here"></a></p>
|
machine-learning|tensorflow
| 4
|
5,374
| 37,949,588
|
Is there an no-op (pass-through) operation in tensorflow?
|
<p>As per title. I'd like to make use of such operation to rename the nodes and better organize a graph. Or is there other recommended practice for renaming an existing node in the graph? Thanks!</p>
|
<p>There is <a href="https://www.tensorflow.org/api_docs/python/tf/no_op" rel="noreferrer"><code>tf.no_op</code></a> which allows you to add an operation which does nothing.</p>
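<p>A minimal usage sketch (the name is just illustrative):</p>
<pre><code>import tensorflow as tf

# an operation with no inputs and no outputs; giving it a name lets it serve
# as a labelled marker while you organize the graph
noop = tf.no_op(name="checkpoint_marker")
</code></pre>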
|
tensorflow
| 18
|
5,375
| 64,345,639
|
I don't understand what is wrong with this line
|
<pre><code>stockxx["Date"]=pd.to_datetime(stockxx.Date, format = '%m/%d/%Y %H:%M:%S.%f'
stockxx.index=stockxx['Date']
plt.figure(figsize=(16,8))
plt.plot(stockxx["Close/Last"], label= 'Close Price History')
</code></pre>
<p>I get this</p>
<pre><code>File "<ipython-input-13-e522099fd646>", line 2
stockxx.index=stockxx['Date']
^
SyntaxError: invalid syntax
</code></pre>
|
<p>In the first line, you are missing a <code>)</code></p>
<p>This</p>
<pre><code>stockxx["Date"]=pd.to_datetime(stockxx.Date, format = '%m/%d/%Y %H:%M:%S.%f'
</code></pre>
<p>should be</p>
<pre><code>stockxx["Date"]=pd.to_datetime(stockxx.Date, format = '%m/%d/%Y %H:%M:%S.%f')
</code></pre>
<p>with the <code>)</code> at the end.</p>
|
pandas|jupyter-notebook
| 1
|
5,376
| 64,556,993
|
Sum a colum per x and per y in a Datafame
|
<p>I have:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({
"ID": [55218,55218,55218,55218,55222],
"Product": [10,10,22,22,21],
"Cluster": [0,0,1,2,1]
"Rating":[-1,2,0,1,2]})
</code></pre>
<p>I want to sum "Rating" for every combination of "ID", "Product" and "Cluster" (where "Cluster" takes the values 0, 1 or 2).</p>
<p>My expected output is:</p>
<pre class="lang-py prettyprint-override"><code>df_new = pd.DataFrame({"ID": [55218,55218,55218,55222],
"Product": [10,22,22,21],
"Cluster": [0,1,2,1],
"Sum": [[1,0,1,2] })
</code></pre>
|
<p>This will give you the desired output:</p>
<pre><code>import pandas as pd
# Query DF
df = pd.DataFrame({
"ID": [[55218],[55218],[55218],[55218],[55222]],
"Product": [[10],[10],[22],[22],[21]],
"Cluster": [[0],[0],[1],[2],[1]],
"Rating":[[-1],[2],[0],[1],[2]]})
print(df)
unique_list =[]
sum = []
for i in range(df.shape[0]):
entry =[[df.loc[i]['ID'][0]],[df.loc[i]['Product'][0]],[df.loc[i]['Cluster'][0]]]
if entry not in unique_list:
unique_list.append(entry)
sum.append([df.loc[i]['Rating'][0]])
else:
ind = unique_list.index(entry)
sum[ind] = [sum[ind][0] + df.loc[i]['Rating'][0]]
# Required DF
df_new = pd.DataFrame(unique_list)
df_new['Sum'] = sum
df_new.columns = ['ID' ,'Product', 'Cluster', 'Sum']
print(df_new)
</code></pre>
<p>Input</p>
<pre><code> ID Product Cluster Rating
0 [55218] [10] [0] [-1]
1 [55218] [10] [0] [2]
2 [55218] [22] [1] [0]
3 [55218] [22] [2] [1]
4 [55222] [21] [1] [2]
</code></pre>
<p>Output</p>
<pre><code> ID Product Cluster Sum
0 [55218] [10] [0] [1]
1 [55218] [22] [1] [0]
2 [55218] [22] [2] [1]
3 [55222] [21] [1] [2]
</code></pre>
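<p>For comparison, a groupby-based sketch (assuming the columns hold plain scalars rather than single-element lists) would be:</p>
<pre><code>df_new = (df.groupby(['ID', 'Product', 'Cluster'], as_index=False)['Rating']
            .sum()
            .rename(columns={'Rating': 'Sum'}))
</code></pre>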
|
python|pandas|dataframe|sum
| 1
|
5,377
| 64,494,042
|
What does the shape of a multi-dimensional array signify?
|
<p>What does the shape in an n-dimensional array mean? Ex:</p>
<pre><code>import numpy as np
arr = np.array([[1]])
print(arr) # output: [[1]]
print(arr.ndim) # output: 2
print(arr.shape) # output: (1, 1)
</code></pre>
|
<p>You yourself said that it's a 2D array, which means it has 2 dimensions.<br />
<code>[[1]]</code> is a 1 x 1 matrix, which is why you get <code>(1, 1)</code> as output.</p>
<p><strong>Edit:</strong><br />
<em>Question:</em> "For [[1]], there's no element along the 2nd dimension, shouldn't the size be zero along second dimension"<br />
<em>Answer:</em> In an n dimensional array you need n coordinates to represent any element of the array. Since <code>[[1]]</code> is a 2D array you'll need 2 coordinates, <code>(0,0)</code> here, to represent the only element in the array.</p>
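<p>A few quick examples:</p>
<pre><code>import numpy as np

print(np.array([1]).shape)          # (1,)    one axis with one element
print(np.array([[1]]).shape)        # (1, 1)  two axes, one element along each
print(np.array([[1, 2, 3]]).shape)  # (1, 3)  one row, three columns
</code></pre>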
|
python|numpy|multidimensional-array
| 3
|
5,378
| 47,551,449
|
How does tensorflow scale RNNCell weight tensors when changing their dimensions?
|
<p>I'm trying to understand how the weights are scaled in a RNNCell when going from training to inference in tensorflow.</p>
<p>Consider the following placeholders defined as:</p>
<p><code>data = tf.placeholder(tf.int32,[None,max_seq_len])
targets = tf.placeholder(tf.int32,[None,max_seq_len])</code></p>
<p>During training the batch_size is set to 10, i.e. both tensors have shape [10, max_seq_len]. However, during inference only one example is used, not a batch of ten, so the tensors have shape [1, max_seq_len].</p>
<p>TensorFlow handles this dimension change seamlessly; however, I'm uncertain how it does this. </p>
<p>My hypothesis is that the weight tensors in the RNNCell are actually of shape [1, hidden_dim], and scaling to larger batch sizes is achieved by broadcasting, but I'm unable to find anything that reflects this in the source. I've read through the <a href="https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/python/ops/rnn.py" rel="nofollow noreferrer">rnn source</a> and the <a href="https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/python/ops/rnn_cell_impl.py" rel="nofollow noreferrer">rnn cell source</a>. Any help with understanding this would be much appreciated. </p>
|
<ul>
<li>you have defined your <code>data</code> tensor as <code>data = tf.placeholder(tf.int32,[None,max_seq_len])</code> which means that the first dimension will change according to the input but the second dimension will always remain <code>max_seq_len</code></li>
<li>So if <code>max_seq_len = 5</code> than you feed shape can be either [1, 5], [2, 5], [3, 5] which means you can change the first dimension but not the second one</li>
<li>If you change the second dimension to a number other than 5, it will throw a shape mismatch (or similar) error </li>
<li>Your input's first dimension, which is the batch size, won't affect the weight matrix of any of the neurons in your network; a sketch follows below </li>
</ul>
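<p>A minimal sketch of that last point (TF 1.x style, matching the question):</p>
<pre><code>import numpy as np
import tensorflow as tf

max_seq_len = 5
data = tf.placeholder(tf.int32, [None, max_seq_len])
doubled = data * 2  # stands in for the RNN; the weights never see the batch dimension

with tf.Session() as sess:
    # any first dimension works because it was declared as None
    print(sess.run(doubled, {data: np.zeros((10, max_seq_len), np.int32)}).shape)  # (10, 5)
    print(sess.run(doubled, {data: np.zeros((1, max_seq_len), np.int32)}).shape)   # (1, 5)
</code></pre>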
|
python|tensorflow
| 0
|
5,379
| 47,770,723
|
Find two dimensional complement in pandas
|
<p>I have two pandas dataframes, a, and b. a and b share two common colums, say x and y, containing english language strings. Each combination of x and y is uniq within a and b. There is a common subset of x and y, which I can compute like</p>
<pre><code>c = pandas.merge(a, b, on=['x', 'y'])
</code></pre>
<p>What I am interested in is the rest, d = a - c, which should be the rows in a not in b, with respect to the two columns x and y.</p>
<p>What I currently do is I add another colum xy:</p>
<pre><code>a['xy'] = a['x'] + a['y']
c['xy'] = c['x'] + c['y']
</code></pre>
<p>and then</p>
<pre><code>d = a[~a['xy'].isin(c['xy'])]
</code></pre>
<p>This seems clumsy to me, is there a more elegant way to do this?</p>
|
<p>Pandas <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">merge</a> has an option to add an indicator column which tells you where the data comes from. Combining this with an outer merge should give you what you are looking for.</p>
<pre><code>a_b = pd.merge(a, b, on=['x', 'y'],how="outer",indicator="string")
a.loc[~(a_b.string=="both"),:]
</code></pre>
<p>Testing on some made up dataframes</p>
<pre><code>a_rand = np.reshape(np.random.randint(8,size=40),[10,4])
b_rand = np.reshape(np.random.randint(8,size=40),[10,4])
a = pd.DataFrame(a_rand, columns = ['x','y','a1','a2'])
b = pd.DataFrame(b_rand, columns = ['x','y','b1','b2'])
</code></pre>
<p>The shared rows</p>
<pre><code>pd.merge(a, b, on=['x', 'y'])
x y a1 a2 b1 b2
0 0 6 2 3 1 6
1 3 1 5 5 0 5
2 3 0 4 0 3 2
</code></pre>
<p>The outer join showing where the rows come from</p>
<pre><code>pd.merge(a, b, on=['x', 'y'],how="outer",indicator="string")
x y a1 a2 b1 b2 string
0 0 4 1.0 7.0 NaN NaN left_only
1 0 4 2.0 1.0 NaN NaN left_only
2 0 6 2.0 3.0 1.0 6.0 both
3 5 7 0.0 6.0 NaN NaN left_only
4 5 7 2.0 5.0 NaN NaN left_only
5 3 1 5.0 5.0 0.0 5.0 both
6 3 0 4.0 0.0 3.0 2.0 both
7 1 5 2.0 5.0 NaN NaN left_only
8 6 2 0.0 2.0 NaN NaN left_only
9 4 6 6.0 5.0 NaN NaN left_only
10 0 5 NaN NaN 0.0 2.0 right_only
11 1 4 NaN NaN 4.0 4.0 right_only
12 2 7 NaN NaN 4.0 1.0 right_only
13 5 6 NaN NaN 7.0 1.0 right_only
14 3 5 NaN NaN 0.0 0.0 right_only
15 4 7 NaN NaN 3.0 4.0 right_only
16 7 2 NaN NaN 3.0 4.0 right_only
</code></pre>
<p>Finally, your desired output</p>
<pre><code>a.loc[~(a_b.string=="both"),:]
x y a1 a2
0 0 4 1 7
1 0 6 2 3
3 0 4 2 1
4 3 1 5 5
7 1 5 2 5
8 6 2 0 2
9 4 6 6 5
</code></pre>
|
python|pandas|dataframe
| 2
|
5,380
| 49,079,464
|
Handling mixed string with double quotations and numbers to save in csv
|
<p>I have three lines that I need to save as header of my csv files. They should look like this:</p>
<pre><code>title = "dataset test"
variables = "X", "Y", "Z", "V"
zone t = "Data Field", i = 134, j = 293, k = 5, f=point
</code></pre>
<p>I am using the following code to create the pandas dataframe:</p>
<pre><code>info = pd.DataFrame(['title = "dataset test"',
'variables = "X", "Y", "Z", "V"',
'zone t = "Data Field", i = 134, j = 293, k = 5, f=point'])
</code></pre>
<p>And using the following code to write the csv file: </p>
<pre><code>with open(fpath, 'w') as myfile:
info.to_csv(myfile, header=None, index=False)
</code></pre>
<p>However the output in the csv file is as:</p>
<pre><code>"title = ""dataset test"""
"variables = ""X"", ""Y"", ""Z"", ""V"""
"zone t = ""Data Field"", i = 134, j = 293, k = 5, f=point"
</code></pre>
<p>Below this header there are three columns of numbers which will be added afterward; the final output should be like this:</p>
<pre><code>title = "dataset test"
variables = "X", "Y", "Z", "V"
zone t = "Data Field", i = 134, j = 293, k = 5, f=point
6.1961335E+06 2.3218804E+06 1.3564390E+03
6.1961547E+06 2.3218672E+06 1.3473630E+03
6.1961759E+06 2.3218540E+06 1.3382290E+03
6.1961972E+06 2.3218408E+06 1.3322720E+03
</code></pre>
<p>which I do it using <code>df.to_csv(myfile, header=None, index=False, sep='\t',float_format='%.7E')</code></p>
|
<p>Have you tried using the <code>\</code> escape character on this line</p>
<pre><code>'variables = "X", "Y", "Z", "V"'
</code></pre>
<p>Like this</p>
<pre><code>'variables = \"X\", \"Y\", \"Z\", \"V\"'
</code></pre>
|
python|pandas|csv|dataframe
| 1
|
5,381
| 58,859,385
|
Getting listed column names of all not nan rows
|
<p>I have a pandas dataframe based on a pivot table with an index and columns. Each index row has values that are not nan in at least one column, while the rest are nans.</p>
<pre><code> col_1 col_2 col_3 col_4 ... col_100
index_1 1 2 nan nan ... 5
index_2 nan nan 1 1 ... 10
... ... ... ... ... ... ...
index_100 nan 9 4 ... ... nan
</code></pre>
<p>How can I get the column names of all non-nan values in each row and put them into automatically suffixed lists, one per index?
I need to get this:</p>
<pre><code>list_1=[col_1, col_2, col_100]
list_2=[col_3, col_4, col_100]
list_100=[col_2, col_3]
</code></pre>
|
<p>You can use <code>stack</code> to remove <code>nan</code> and <code>groupby</code> to gather all column names:</p>
<pre><code>(df.stack()
.reset_index(level=1)
.groupby(level=0, sort=False)
['level_1'].apply(list)
)
</code></pre>
<p>Output:</p>
<pre><code>index_1 [col_1, col_2, col_100]
index_2 [col_3, col_4, col_100]
index_100 [col_2, col_3]
Name: level_1, dtype: object
</code></pre>
|
python|pandas
| 3
|
5,382
| 70,187,074
|
divide by zero encountered in true_divide error without having zeros in my data
|
<p>This is my code, my data, and the output of the code. I've tried adding one to the values on the x axis, thinking that such small values might be interpreted as zeros. I've no idea what true_divide could be, and I cannot explain this divide-by-zero error since there is not a single zero in my data; I checked all of my 2500 data points. Hoping that some of you could provide some clarification. Thanks in advance.</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
from iminuit import cost, Minuit
import numpy as np
frame = pd.read_excel('/Users/lorenzotecchia/Desktop/Analisi Laboratorio/Analisi dati/Quinta Esperienza/500Hz/F0000CH2.xlsx', 'F0000CH2')
data = pd.read_excel('/Users/lorenzotecchia/Desktop/Analisi Laboratorio/Analisi dati/Quinta Esperienza/500Hz/F0000CH1.xlsx', 'F0000CH1')
# tempi_500Hz = pd.DataFrame(frame,columns=['x'])
# Vout_500Hz = pd.DataFrame(frame,columns=['y'])
tempi_500Hz = pd.DataFrame(frame,columns=['x1'])
Vout_500Hz = pd.DataFrame(frame,columns=['y1'])
# Vin_500Hz = pd.DataFrame(data,columns=['y'])
def fit_esponenziale(x, α, β):
return α * (1 - np.exp(-x / β))
plt.xlabel('ω(Hz)')
plt.ylabel('Attenuazioni')
plt.title('Fit Parabolico')
plt.scatter(tempi_500Hz, Vout_500Hz)
least_squares = cost.LeastSquares(tempi_500Hz, Vout_500Hz, np.sqrt(Vout_500Hz), fit_esponenziale)
m = Minuit(least_squares, α=0, β=0)
m.migrad()
m.hesse()
plt.errorbar(tempi_500Hz, Vout_500Hz, fmt="o", label="data")
plt.plot(tempi_500Hz, fit_esponenziale(tempi_500Hz, *m.values), label="fit")
fit_info = [
f"$\\chi^2$ / $n_\\mathrm{{dof}}$ = {m.fval:.1f} / {len(tempi_500Hz) - m.nfit}",]
for p, v, e in zip(m.parameters, m.values, m.errors):
fit_info.append(f"{p} = ${v:.3f} \\pm {e:.3f}$")
plt.legend()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/cPq9l.jpg" rel="nofollow noreferrer">input</a></p>
<p><a href="https://i.stack.imgur.com/WzrzH.png" rel="nofollow noreferrer">output and example of data </a></p>
|
<p>Here is a working <code>Minuit</code> vs <code>curve_fit</code> example. I scaled the function such that the decay in the exponential is of the order of 1 (generally a good idea for non-linear fits). Both methods give very similar results.</p>
<p>Note: I leave it open whether the error makes sense like this or not. Starting values equal to zero were definitely a bad idea.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import matplotlib.pyplot as plt
from iminuit import cost, Minuit
from scipy.optimize import curve_fit
import numpy as np
def fit_exp(x, a, b, c):
return a * (1 - np.exp(- 1000 * b * x) ) + c
nn = 170
xl = np.linspace( 0, 0.001, nn )
yl = fit_exp( xl, 15, 5.3, -8.1 ) + np.random.normal( size=nn, scale=0.05 )
#######################
### Minuit
#######################
least_squares = cost.LeastSquares(xl, yl, np.sqrt( np.abs( yl ) ), fit_exp )
print(least_squares)
m = Minuit(least_squares, a=1, b=5, c=-7)
print( "grad: ")
print( m.migrad() ) ### needs to be called to get fit values
print( m.values )### gives slightly different output
print("Hesse:")
print( m.hesse() )
#######################
### curve_fit
#######################
opt, cov = curve_fit(
fit_exp, xl, yl, sigma=np.sqrt( np.abs( yl ) ),
absolute_sigma=True
)
print( " curve_fit: ")
print( opt )
print( " covariance ")
print( cov )
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1 )
ax.plot(xl, yl, marker ='+', ls='' )
ax.plot(xl, fit_exp(xl, *m.values), ls="--")
ax.plot(xl, fit_exp(xl, *opt), ls=":")
plt.show()
</code></pre>
|
pandas|numpy|data-analysis|curve-fitting|iminuit
| 0
|
5,383
| 56,132,629
|
Assign arr with numpy fancy indexing
|
<p>I really hope this is not a duplicate and this is probably a very stupid question. Sorry ;) </p>
<p>Problem:
I have a greyscale image with values/classes 1 and 2 and I want to convert/map this to a color image where 1 equals yellow and 2 equals blue. </p>
<pre><code>import numpy as np
import cv2
result=cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
result[result==2]=[15,100,100]
result[result==1]=[130,255,255]
</code></pre>
<p>But this is failing with the error <code>ValueError: NumPy boolean array indexing assignment cannot assign 3 input values to the 1995594 output values where the mask is true</code></p>
<p>I think I am very close to the solution, but I don't get it.
Thanks in advance for your help!</p>
|
<p><code>result</code> is a Numpy array and is <em>typed</em>, its dtype being an integer, and you are trying to assign a triple of integers to a single integer slot… no good. </p>
<p>What you want to do is create an empty color image with the same height and width as <code>result</code> plus a third axis, and assign the requested triples along that last axis.</p>
<p>I have not installed <code>cv2</code>, but you can look at the following code to get an idea of how to proceed.</p>
<p>Equivalent to what you have done, the same error</p>
<pre><code>In [36]: import numpy as np
In [37]: a = np.random.randint(0,2,(2,4))
In [38]: a
Out[38]:
array([[1, 0, 0, 0],
[0, 1, 0, 1]])
In [39]: a[a==1] = (1,2,3)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-39-24af4c8dbf5a> in <module>
----> 1 a[a==1] = (1,1)
ValueError: NumPy boolean array indexing assignment cannot assign 2 input values to the 3 output values where the mask is true
</code></pre>
<p>Now, allocate a 3D array and apply indexing to it, assigning by default to the last axis</p>
<pre><code>In [40]: b = np.zeros((2,4,3))
In [41]: b[a==1] = (1,2,3)
In [42]: b
Out[42]:
array([[[1., 2., 3.],
[0., 0., 0.],
[0., 0., 0.],
[0., 0., 0.]],
[[0., 0., 0.],
[1., 2., 3.],
[0., 0., 0.],
[1., 2., 3.]]])
</code></pre>
<p>We have two inner matrices (corresponding to the two rows of <code>a</code>), in each matrix four rows (corresponding to the four columns of <code>a</code>) and finally the columns are the RGB triples that you need.</p>
<p>I don't know exactly how the data is arranged in a <code>cv2</code> image but I think you have to do minor adjustments, if any at all.</p>
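<p>Applied to the question's code, a sketch could look like this (assuming the colour-space interpretation of the triples is what you intend):</p>
<pre><code>import numpy as np
import cv2

grey = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)
color = np.zeros(grey.shape + (3,), dtype=np.uint8)  # 3-channel image, same height/width
color[grey == 2] = (15, 100, 100)
color[grey == 1] = (130, 255, 255)
cv2.imwrite("colored.png", color)
</code></pre>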
|
python|numpy|opencv|cv2
| 1
|
5,384
| 55,784,018
|
How to convert list of lists to bytes?
|
<p>I have a list of lists of floats and I want to convert it into bytes. Can someone please help me do this?
For example:</p>
<pre><code>l = [[0.1, 1.0, 2.0], [2.0, 3.1, 4.1]]
</code></pre>
<p>and I want something like</p>
<pre><code>bytes(l) -> b'\x01\x02\x03.......'
</code></pre>
|
<p>Since you've tagged this <code>numpy</code>, this is simply <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.tobytes.html" rel="noreferrer"><code>tobytes</code></a></p>
<pre><code>a = np.array(l)
a.tobytes()
</code></pre>
<p></p>
<pre><code>b'\x9a\x99\x99\x99\x99\x99\xb9?\x00\x00\x00\x00\x00\x00\xf0?\x00\x00\x00\x00\x00\x00\x00@\x00\x00\x00\x00\x00\x00\x00@\xcd\xcc\xcc\xcc\xcc\xcc\x08@ffffff\x10@'
</code></pre>
<hr>
<p>This result can be re-processed as an <code>ndarray</code> using <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.frombuffer.html" rel="noreferrer"><code>frombuffer</code></a>, but the original shape will not be maintained.</p>
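<p>For instance, a round-trip sketch (the shape has to be re-supplied by hand):</p>
<pre><code>import numpy as np

l = [[0.1, 1.0, 2.0], [2.0, 3.1, 4.1]]
a = np.array(l)
raw = a.tobytes()

# frombuffer gives a flat array; reshape restores the original layout
restored = np.frombuffer(raw, dtype=a.dtype).reshape(a.shape)
</code></pre>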
|
python|python-3.x|numpy
| 7
|
5,385
| 55,990,574
|
Pandas rolling apply function to entire window dataframe
|
<p>I want to apply a function to a rolling window. All the answers I saw here are focused on applying to a single row / column, but I would like to apply my function to the entire window. Here is a simplified example:</p>
<pre><code>import pandas as pd
data = [ [1,2], [3,4], [3,4], [6,6], [9,1], [11,2] ]
df = pd.DataFrame(columns=list('AB'), data=data)
</code></pre>
<p>This is <code>df</code>:</p>
<pre><code> A B
0 1 2
1 3 4
2 3 4
3 6 6
4 9 1
5 11 2
</code></pre>
<p>Take some function to apply to the <em>entire</em> window:</p>
<pre><code>df.rolling(3).apply(lambda x: x.shape)
</code></pre>
<p>In this example, I would like to get something like:</p>
<pre><code> some_name
0 NA
1 NA
2 (3,2)
3 (3,2)
4 (3,2)
5 (3,2)
</code></pre>
<p>Of course, the shape is used as an example showing <code>f</code> treats the entire window as the object of calculation, not just a row / column. I tried playing with the <code>axis</code> keyword for <code>rolling</code>, as well as with the <code>raw</code> keyword for <code>apply</code> but with no success. Other methods (<code>agg, transform</code>) do not seem to deliver either. </p>
<p>Sure, I can do this with a list comprehension. Just thought there is an easier / cleaner way of doing this.</p>
|
<p>Not with <code>pd.DataFrame.rolling</code>: that function is applied column by column, taking in a series of floats/NaNs and returning floats/NaNs one at a time. I think you'll have better luck with your list-comprehension intuition:</p>
<pre><code>def rolling_pipe(dataframe, window, fctn):
return pd.Series([dataframe.iloc[i-window: i].pipe(fctn)
if i >= window else None
for i in range(1, len(dataframe)+1)],
index = dataframe.index)
df.pipe(rolling_pipe, 3, lambda x: x.shape)
</code></pre>
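<p>With the sample frame above this gives (roughly) the output you sketched: <code>None</code> for the first two rows, then the window shape:</p>
<pre><code>0      None
1      None
2    (3, 2)
3    (3, 2)
4    (3, 2)
5    (3, 2)
dtype: object
</code></pre>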
|
python|pandas|apply|rolling-computation
| 11
|
5,386
| 55,810,419
|
How to remove 'b' character from ndarray that is added by np.genfromtxt
|
<p>I have a text file which contains rows of information in the form of both strings, integers and floats, separated by white space, e.g.</p>
<p>HIP893 23_10 7 0.028 4<br>
HIP1074 43_20 20 0.0141 1<br>
HIP1325 23_10 7 0.02388 5<br>
...</p>
<p>I've imported this data using the following line:</p>
<pre><code>data=np.genfromtxt('98_info.txt', dtype=(object, object, int,float,float))
</code></pre>
<p>However when I do this I get an output of </p>
<pre><code>[(b'HIP893', b'23_10', 7, 0.028, 4)
(b'HIP1074', b'43_20', 20, 0.0141, 1)
(b'HIP1325', b'23_10', 7, 0.02388, 5)
... ]
</code></pre>
<p>Whereas I would like there to be no 'b' and instead:</p>
<pre><code>[('HIP893', '23_10', 7, 0.028, 4.0)
('HIP1074', '43_20', 20, 0.0141, 1.0)
('HIP1325', '23_10', 7, 0.02388, 5.0)
... ]
</code></pre>
<p>I have tried NumPy's core.defchararray but that gave me the error 'string operation on non-string array', I guess because my data is a combination of both strings and numbers maybe? </p>
<p>Is there some way to either remove the character but keep the data in an array or perhaps another way to load in the information that will keep the strings in quotation marks and the numbers without them? </p>
<p>If there is a way to import it in that form as a 2d np array even better, but that is not an issue if not.</p>
<p>Thanks!</p>
|
<p>You can pass <code>converters=</code> with a function that decodes your bytes strings, eg:</p>
<pre><code>convs = dict.fromkeys([0, 1], bytes.decode)
data = np.genfromtxt('98_info.txt', dtype=(object, object, int, float, float), converters=convs)
</code></pre>
<p>Which gives you <code>data</code> of:</p>
<pre><code>array([('HIP893', '23_10', 7, 0.028 , 4.),
('HIP1074', '43_20', 20, 0.0141 , 1.),
('HIP1325', '23_10', 7, 0.02388, 5.)],
dtype=[('f0', 'O'), ('f1', 'O'), ('f2', '<i8'), ('f3', '<f8'), ('f4', '<f8')])
</code></pre>
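<p>As a side note, if your NumPy is recent enough (1.14+, if I remember correctly), passing <code>encoding</code> together with <code>dtype=None</code> makes <code>genfromtxt</code> return real text strings (fixed-width unicode fields) instead of byte strings, so no converters are needed. A sketch:</p>
<pre><code>data = np.genfromtxt('98_info.txt', dtype=None, encoding='utf-8')
</code></pre>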
|
python|python-3.x|numpy
| 3
|
5,387
| 55,785,696
|
Split dataset by every other day
|
<p>I have a dataset that I collected over many days and is indexed by calendar day. Each day has a different number of entries in it. I want to see if the odd days (e.g. day 1, day 3, day 5, etc...) are correlated with the even days (e.g. day 2, day 4, day 6 etc...) and to do this, I have to split my dataset into two.</p>
<p>I can't use day % 2 because I have missing days and weekends in the set that throw it off. I have tried using resample like this:</p>
<pre><code>df_odd = df.resample('2D')
lowest_date = df['date_minus_time'].min()
df_even = df.query('date_minus_time != @lowest_date).resample('2D')
</code></pre>
<p>But this insists on aggregating the data by day. I want to keep all the rows so I can perform further operations (e.g. groupby) on the resulting datasets.</p>
<p>How can I create two dataframes, one with all the rows with an "even" date and one with all the rows with an "odd" date with even and odd being relative to first day of my data set?</p>
<p>Here are some example data:</p>
<pre><code>Date var
2018-12-10 1
2018-12-10 0
2018-12-10 1
2018-12-10 0
2018-12-11 1
2018-12-11 1
2018-12-12 0
2018-12-12 1
2018-12-12 1
2018-12-14 1
2018-12-14 0
2018-12-14 1
2018-12-16 1
2018-12-16 1
2018-12-16 1
</code></pre>
<p>And the expected output:</p>
<p>df_odd:</p>
<pre><code>Date var
2018-12-10 1
2018-12-10 0
2018-12-10 1
2018-12-10 0
2018-12-12 0
2018-12-12 1
2018-12-12 1
2018-12-16 1
2018-12-16 1
2018-12-16 1
</code></pre>
<p>df_even:</p>
<pre><code>Date var
2018-12-11 1
2018-12-11 1
2018-12-14 1
2018-12-14 0
2018-12-14 1
</code></pre>
|
<p>Use <code>pd.Categorical</code> with <code>.codes</code></p>
<pre><code>num = pd.Categorical(df.Date).codes + 1
df_odd = df[num%2 == 1]
df_even = df[num%2 == 0]
df_odd
Date var
0 2018-12-10 1
1 2018-12-10 0
2 2018-12-10 1
3 2018-12-10 0
6 2018-12-12 0
7 2018-12-12 1
8 2018-12-12 1
12 2018-12-16 1
13 2018-12-16 1
14 2018-12-16 1
df_even
Date var
4 2018-12-11 1
5 2018-12-11 1
9 2018-12-14 1
10 2018-12-14 0
11 2018-12-14 1
</code></pre>
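<p>An equivalent sketch with <code>pd.factorize</code>, which numbers the distinct dates in order of appearance (the same as the sorted order here, since the data is already sorted by date):</p>
<pre><code>codes, _ = pd.factorize(df.Date)   # 0 for the first distinct date, 1 for the second, ...
df_odd = df[codes % 2 == 0]        # 1st, 3rd, 5th, ... distinct date
df_even = df[codes % 2 == 1]       # 2nd, 4th, ... distinct date
</code></pre>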
|
pandas
| 1
|
5,388
| 64,900,758
|
Create df column concatenated value over each row
|
<p>How can I create a new column in my DataFrame which is a json string equivalent to concatenated column values over each row in the below format?</p>
<p>Code so far:</p>
<pre><code>import pandas as pd
data = {'Name':['Tom', 'Nat', 'Harry', 'Jack'],'Age':[20, 21, 22, 23]}
df = pd.DataFrame(data)
</code></pre>
<p>Input df:</p>
<pre><code> Name Age
0 Tom 20
1 Nat 21
2 Harry 22
3 Jack 23
</code></pre>
<p>Output df:</p>
<pre><code> Name Age Combined
0 Tom 20 [{"Name":"Tom","Age":20}]
1 Nat 21 [{"Name":"Nat","Age":21}]
2 Harry 22 [{"Name":"Harry","Age":22}]
3 Jack 23 [{"Name":"Jack","Age":23}]
</code></pre>
|
<p>Here is one way</p>
<pre><code>import pandas as pd
data = {'Name':['Tom', 'Nat', 'Harry', 'Jack'],'Age':[20, 21, 22, 23]}
df = pd.DataFrame(data)
df['Combined'] = '[{"'+str(df.columns[0])+'": "'+df['Name']+'", "'+str(df.columns[1])+'": '+df['Age'].apply(str)+'}]'
</code></pre>
<p>It works, but there may be better ways to do this:</p>
<pre><code> Name Age Combined
0 Tom 20 [{"Name": "Tom", "Age": 20}]
1 Nat 21 [{"Name": "Nat", "Age": 21}]
2 Harry 22 [{"Name": "Harry", "Age": 22}]
3 Jack 23 [{"Name": "Jack", "Age": 23}]
</code></pre>
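<p>A possibly more robust variant (a sketch) is to let pandas build the JSON per row with <code>Series.to_json</code>, which handles the quoting and the numeric types for you:</p>
<pre><code>df['Combined'] = df[['Name', 'Age']].apply(lambda row: '[' + row.to_json() + ']', axis=1)
</code></pre>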
|
python|pandas|dataframe
| 1
|
5,389
| 64,732,291
|
Sum rows in 2d numpy
|
<p>Assume the following 2d numpy is given:</p>
<pre><code>myNP = np.array([[5., 2., 1.],
[3., 3., 3.],
[3., 3., 3.]])
</code></pre>
<p>One has to find the weight of each point in each row in relation to the sum of the row.</p>
<p>in reference to the example above the expected result need to be:</p>
<pre><code>([[0.625, 0.25 , 0.125],
[0.333, 0.333, 0.333],
[0.333, 0.333, 0.333]])
</code></pre>
|
<p>The <code>sum</code> function gives you what you need, you just have to specify the axis: <code>myNP.sum(axis=1)</code> returns the row sums. Dividing each row by its row sum (keeping the dimensions so broadcasting works) then gives the weights.</p>
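<p>A small sketch of the full computation (<code>keepdims=True</code> keeps the row sums as a column vector so the division broadcasts row-wise):</p>
<pre><code>import numpy as np

myNP = np.array([[5., 2., 1.],
                 [3., 3., 3.],
                 [3., 3., 3.]])

weights = myNP / myNP.sum(axis=1, keepdims=True)
# array([[0.625, 0.25 , 0.125],
#        [0.333..., 0.333..., 0.333...],
#        [0.333..., 0.333..., 0.333...]])
</code></pre>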
|
python|numpy
| 1
|
5,390
| 64,841,964
|
Python: Passing in a tuple as an argument in a function
|
<p>I am creating new columns for a <code>Pandas DataFrame</code> using <code>np.select(condition, choices)</code>. I would like to modularize my code into a function to do so, and my cumbersome way is as follows:</p>
<pre><code>def selection(
df: pd.DataFrame,
conditions: Optional[List] = None,
choices: Optional[List] = None,
column_names: Optional[List] = None,
):
if conditions is not None: # if its none, then don't run this, implies choices and column names are none too
for condition, choice, col_name in zip(conditions, choices, column_names):
df[col_name] = np.select(condition, choice, default=" ")
return df
</code></pre>
<p>To run this function, I merely do this:</p>
<pre><code>conditions = [...]
choices = [...]
column_names = [...]
my_tuple = (conditions, choices, column_names)
df = selection(df, *my_tuple)
</code></pre>
<p>I want to improve my coding skills, and I feel this approach is suboptimal; in particular, I feel the arguments <code>conditions, choices, column_names</code> could be passed in together as a single tuple. I welcome any suggestions for improving this code.</p>
|
<pre><code>In [53]: def foo(df, conditions=None, choices=None):
...: print(df, conditions, choices)
...:
In [54]: foo('df')
df None None
</code></pre>
<p>With keywords, you can supply arguments with a dict:</p>
<pre><code>In [55]: adict={'conditions':[1,2,3], 'choices':['yes','no']}
In [56]: foo('df', **adict)
df [1, 2, 3] ['yes', 'no']
</code></pre>
<p>or a tuple of values:</p>
<pre><code>In [57]: foo('df', *adict.values())
df [1, 2, 3] ['yes', 'no']
</code></pre>
<p>More on argument syntax and unpacking:</p>
<p><a href="https://docs.python.org/3/tutorial/controlflow.html#more-on-defining-functions" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/controlflow.html#more-on-defining-functions</a></p>
|
python|pandas|numpy
| 1
|
5,391
| 64,747,351
|
Parent and child with Panda in python
|
<p>I have a CSV file with more than 10000 rows. Now I want to create a new column which shows the dependency between parent and child, based on the rules and policies below:</p>
<ol>
<li>The unique code which determines which family the data belongs to is <em><strong>Team</strong></em>.</li>
<li>If the <em><strong>Position</strong></em> is C, the <em><strong>Parent</strong></em> field must be filled with its relative position.</li>
<li>If the <em><strong>Position</strong></em> is PF, the <em><strong>Child</strong></em> field must be filled with its position, but if any other <em><strong>Position</strong></em> exists (except C and PF) it must be filled with its relative position too.</li>
<li>For any other <em><strong>Position</strong></em>, the Child must be filled.</li>
</ol>
<p>Existing data in CSV:</p>
<pre><code>Team Position
Atlanta Hawks C
Atlanta Hawks PF
Atlanta Hawks PG
Atlanta Hawks SF
Atlanta Hawks SG
</code></pre>
<p>Result:</p>
<pre><code>Parent Child
C PF
PF PG
PF SF
PF SG
</code></pre>
<p>I tried to do this with pandas. The code below runs fine for the first condition. I would appreciate it if anyone could help me modify this code.</p>
<pre><code> import pandas as pd
df = pd.read_csv("C:\\Users\\Desktop\\nba.csv")
gdf = df['Team']
gdf.to_csv('C:\\Users\\Desktop\\nba-dependency.csv')
for g in gdf:
df.loc[df['Position']=='C','Parent']=df['Position']
df.loc[df['Position']=='PF','Parent']=df['Position']
df.to_csv('C:\\Users\\Desktop\\result.csv')
</code></pre>
|
<p>If I understood it correctly, this should be the correct approach:</p>
<pre><code>import pandas as pd
import numpy as np
data = {"team":["ah","ah","ah","ah","ah"],"position":["C","PF","PG","SF","SG"]}
df = pd.DataFrame(data)
df["Parent"] = ["C" if x=="C" or x=="PF" else "PF" for x in df["position"]]
df["Child"] = df["position"]
print(df)
</code></pre>
<p>Output here will be</p>
<blockquote>
<pre><code> team position Parent Child
0 ah C C C
1 ah PF C PF
2 ah PG PF PG
3 ah SF PF SF
4 ah SG PF SG
</code></pre>
</blockquote>
<p>I am not entirely sure the first row is what you expect; you probably want to remove it (you do not show it in the result, so drop it if so). The rest looks as you defined.</p>
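<p>As a side note, the <code>Parent</code> column can also be built with <code>np.where</code> (a sketch of an equivalent vectorized form, which is why <code>numpy</code> is imported above):</p>
<pre><code>df["Parent"] = np.where(df["position"].isin(["C", "PF"]), "C", "PF")
</code></pre>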
|
python|python-3.x|pandas|dataframe|pandas-groupby
| 0
|
5,392
| 64,731,501
|
how can I resample pandas dataframe by day on period time?
|
<p>I have a dataframe like this:<br></p>
<pre><code>df.head()
Out[2]:
price sale_date
0 477,000,000 1396/10/30
1 608,700,000 1396/10/30
2 580,000,000 1396/10/03
3 350,000,000 1396/10/03
4 328,000,000 1396/03/18
</code></pre>
<p>It has out-of-bounds datetimes,<br>
so I did the following to convert them to period dtype:<br></p>
<pre><code>df['sale_date']=df['sale_date'].str.replace('/','').astype(int)
def conv(x):
return pd.Period(year=x // 10000,
month=x // 100 % 100,
day=x % 100, freq='D')
df['sale_date'] = df['sale_date'].str.replace('/','').astype(int).apply(conv)
</code></pre>
<p>Now I want to resample them by day like below:<br></p>
<pre><code>df.resample(freq='d', on='sale_date').sum()
</code></pre>
<p>but it gives me this error:<br></p>
<pre><code>resample() got an unexpected keyword argument 'freq'
</code></pre>
|
<p>It seems <code>resample</code> and <code>Grouper</code> do not work with <code>Periods</code> for me here in pandas 1.1.3 (I guess it is a bug):</p>
<pre><code>df['sale_date']=df['sale_date'].str.replace('/','').astype(int)
df['price'] = df['price'].str.replace(',','').astype(int)
def conv(x):
return pd.Period(year=x // 10000,
month=x // 100 % 100,
day=x % 100, freq='D')
df['sale_date'] = df['sale_date'].apply(conv)
# df = df.set_index('sale_date').resample('D')['price'].sum()
#OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1396-03-18 00:00:00
# df = df.set_index('sale_date').groupby(pd.Grouper(freq='D'))['price'].sum()
#OutOfBoundsDatetime: Out of bounds nanosecond timestamp: 1396-03-18 00:00:00
</code></pre>
<p>Possible solution is aggregate by <code>sum</code>, so if duplicated <code>sale_date</code> then <code>price</code> values are summed:</p>
<pre><code>df = df.groupby('sale_date')['price'].sum().reset_index()
print (df)
sale_date price
0 1396-03-18 328000000
1 1396-10-03 580000000
2 1396-10-30 477000000
3 1396-11-25 608700000
4 1396-12-05 350000000
</code></pre>
<p>EDIT: It is possible by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.period_range.html" rel="nofollow noreferrer"><code>period_range</code></a>:</p>
<pre><code>s = df.groupby('sale_date')['price'].sum()
rng = pd.period_range(s.index.min(), s.index.max(), name='sale_date')
df = s.reindex(rng, fill_value=0).reset_index()
print (df)
sale_date price
0 1396-03-18 328000000
1 1396-03-19 0
2 1396-03-20 0
3 1396-03-21 0
4 1396-03-22 0
.. ... ...
258 1396-12-01 0
259 1396-12-02 0
260 1396-12-03 0
261 1396-12-04 0
262 1396-12-05 350000000
[263 rows x 2 columns]
</code></pre>
|
python|pandas|python-datetime
| 1
|
5,393
| 40,313,369
|
How do I use the avx flag when compiling Fortran code using f2py?
|
<p>In doing some performance testing in Python, I compared the timing for different methods to calculate the Euclidean distance between an array of coordinates. I found my Fortran code compiled with <a href="https://docs.scipy.org/doc/numpy-dev/f2py/" rel="nofollow">F2PY</a> to be roughly 4x slower than the C implementation used by <a href="https://github.com/scipy/scipy/blob/04bce8a934a049729543df93ce0b81370d52cff2/scipy/spatial/src/distance_impl.h" rel="nofollow">SciPy</a>. Comparing that C code to my Fortran code, I see no fundamental difference that would lead to the factor-of-4 difference. Here is my code (with some comments explaining its use):</p>
<pre><code> subroutine distance(coor,dist,n)
double precision coor(n,3),dist(n,n)
integer n,i,j
double precision xij,yij,zij
cf2py intent(in):: coor,n
cf2py intent(in,out):: dist
cf2py intent(hide):: xij,yij,zij,
do 200,i=1,n-1
do 300,j=i+1,n
xij=coor(i,1)-coor(j,1)
yij=coor(i,2)-coor(j,2)
zij=coor(i,3)-coor(j,3)
dist(i,j)=dsqrt(xij*xij+yij*yij+zij*zij)
300 continue
200 continue
end
c 1 2 3 4 5 6 7
c123456789012345678901234567890123456789012345678901234567890123456789012
c
c to setup and incorporate into python (requires numpy):
c
c # python setup_distance.py build
c # cp build/lib*/distance.so ./
c
c to call this from python add the following lines:
c
c >>> import sys ; sys.path.append('./')
c >>> from distance import distance
c
c >>> dist = distance(coor, dist)
</code></pre>
<p>Looking at the compile command run by F2PY, I noticed there is no <code>avx</code> compile flag. I tried adding it in the Python setup file using <code>extra_compile_args=['-mavx']</code>, but this made no change to the compile command run by F2PY:</p>
<pre><code>compiling Fortran sources
Fortran f77 compiler: /usr/bin/gfortran -Wall -g -ffixed-form -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran f90 compiler: /usr/bin/gfortran -Wall -g -fno-second-underscore -fPIC -O3 -funroll-loops
Fortran fix compiler: /usr/bin/gfortran -Wall -g -ffixed-form -fno-second-underscore -Wall -g -fno-second-underscore -fPIC -O3 -funroll-loops
compile options: '-I/home/user/anaconda/lib/python2.7/site-packages/numpy/core/include -Ibuild/src.linux-x86_64-2.7 -I/home/user/anaconda/lib/python2.7/site-packages/numpy/core/include -I/home/user/anaconda/include/python2.7 -c'
gfortran:f77: ./distance.f
creating build/lib.linux-x86_64-2.7
/usr/bin/gfortran -Wall -g -Wall -g -shared build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/distancemodule.o build/temp.linux-x86_64-2.7/build/src.linux-x86_64-2.7/fortranobject.o build/temp.linux-x86_64-2.7/distance.o -L/home/user/anaconda/lib -lpython2.7 -lgfortran -o build/lib.linux-x86_64-2.7/distance.so
</code></pre>
|
<p>To answer how to add the <code>avx</code> flag to the compiler options:<br>
In your case the f77 compiler is the one being picked; <code>gfortran:f77: ./distance.f</code> is the key line.<br>
You could try specifying <code>--f77flags=-mavx</code>.</p>
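<p>For example, building the module directly with f2py (the exact set of flags is illustrative):</p>
<pre><code>f2py -c distance.f -m distance --f77flags="-mavx -O3 -funroll-loops"
</code></pre>
<p>If you want to keep the <code>setup.py</code> route, I believe <code>numpy.distutils</code>'s <code>Extension</code> accepts <code>extra_f77_compile_args</code> (the Fortran counterpart of <code>extra_compile_args</code>), but I have not verified that on your setup.</p>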
|
python|numpy|scipy|fortran|f2py
| 1
|
5,394
| 40,233,697
|
Speed up double for loop in numpy
|
<p>I currently have the following double loop in my Python code:</p>
<pre><code>for i in range(a):
for j in range(b):
A[:,i]*=B[j][:,C[i,j]]
</code></pre>
<p>(A is a float matrix. B is a list of float matrices. C is a matrix of integers. By matrices I mean m x n np.arrays.)</p>
<p>To be precise, the sizes are: A: m x a; B: b matrices of size m x l (with l different for each matrix); C: a x b. Here m is very large, a is very large, b is small, and the l's are even smaller than b.</p>
<p>I tried to speed it up by doing</p>
<pre><code>for j in range(b):
A[:,:]*=B[j][:,C[:,j]]
</code></pre>
<p>but surprisingly to me this performed worse. </p>
<p>More precisely, this did improve performance for small values of m and a (the "large" numbers), but from m=7000, a=700 onwards the first approach is roughly twice as fast.</p>
<p>Is there anything else I can do?</p>
<p>Maybe I could parallelize? But I don't really know how.</p>
<p>(I am not committed to either Python 2 or 3)</p>
|
<p>Here's a vectorized approach assuming <code>B</code> as a list of arrays that are of the same shape -</p>
<pre><code># Convert B to a 3D array
B_arr = np.asarray(B)
# Use advanced indexing to index into the last axis of B array with C
# and then do product-reduction along the second axis.
# Finally, we perform elementwise multiplication with A
A *= B_arr[np.arange(B_arr.shape[0]),:,C].prod(1).T
</code></pre>
<hr>
<p>For cases with smaller <code>a</code>, we could run a loop that iterates through the length of <code>a</code> instead. Also, for more performance, it might be a better idea to store those elements into a separate 2D array instead and perform the elementwise multiplication only once after we get out of the loop.</p>
<p>Thus, we would have an alternative implementation like so -</p>
<pre><code>range_arr = np.arange(B_arr.shape[0])
out = np.empty_like(A)
for i in range(a):
out[:,i] = B_arr[range_arr,:,C[i,:]].prod(0)
A *= out
</code></pre>
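<p>If it helps, here is a quick sanity check I would run (a sketch with made-up sizes) to confirm the vectorized version matches the double loop:</p>
<pre><code>import numpy as np

# small random instance (sizes are only for checking correctness)
m, a, b, l = 50, 20, 4, 6
A = np.random.rand(m, a)
B = [np.random.rand(m, l) for _ in range(b)]
C = np.random.randint(0, l, (a, b))

# original double loop
A_loop = A.copy()
for i in range(a):
    for j in range(b):
        A_loop[:, i] *= B[j][:, C[i, j]]

# vectorized version from above
B_arr = np.asarray(B)
A_vec = A * B_arr[np.arange(B_arr.shape[0]), :, C].prod(1).T

print(np.allclose(A_loop, A_vec))   # expected: True
</code></pre>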
|
python|performance|numpy|parallel-processing|vectorization
| 1
|
5,395
| 44,185,104
|
zip rows of pandas DataFrame with list/array of values
|
<p>My current code is</p>
<pre><code>from numpy import *
def buildRealDataObject(x):
loc = array(x[0])
trueClass = x[1]
evid = ones(len(loc))
evid[isnan(loc)] = 0
loc[isnan(loc)] = 0
return DataObject(location=loc, trueClass=trueClass, evidence=evid)
if trueClasses is None:
trueClasses = zeros(len(dataset), dtype=int8).tolist()
realObjects = list(map(lambda x: buildRealDataObject(x), zip(dataset, trueClasses)))
</code></pre>
<p>and it is working. What I expect is to create for each row of the DataFrame <code>dataset</code> each combined with the corresponding entry of <code>trueClasses</code> a <code>realObject</code>. I am not really sure though why it is working because if run <code>list(zip(dataset, trueClasses))</code> I just get something like <code>[(0, 0.0), (1, 0.0)]</code>. The two columns of <code>dataset</code> are called <code>0</code> and <code>1</code>. So my first question is: <strong>Why is this working and what is happening here?</strong></p>
<p>However I think this might still be wrong on some level, because it might only work due to "clever implicit transformation" on side of pandas. Also, for the line <code>evid[isnan(loc)] = 0</code> I now got the error</p>
<pre><code>TypeError: ufunc 'isnan' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule ''safe''
</code></pre>
<p><strong>How should I rewrite this code instead?</strong></p>
|
<p>Currently the zip works on columns instead of rows. Use one of the method from <a href="https://stackoverflow.com/questions/9758450/pandas-convert-dataframe-to-array-of-tuples">Pandas convert dataframe to array of tuples</a> to make the zip work on rows instead of columns. For example substitute</p>
<pre><code>zip(dataset, trueClasses)
</code></pre>
<p>with</p>
<pre><code>zip(dataset.values, trueClasses)
</code></pre>
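<p>To see what each <code>zip</code> iterates over, a tiny illustration (hypothetical two-column frame):</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

list(zip(df, [0.0, 0.0]))         # [('a', 0.0), ('b', 0.0)]  -> column labels
list(zip(df.values, [0.0, 0.0]))  # [(array([1, 3]), 0.0), (array([2, 4]), 0.0)]  -> rows
</code></pre>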
<p>Considering <a href="https://stackoverflow.com/a/35232645/4533188">this post</a>, if you already have <code>l = list(dataset.values)</code> for some reason, then <code>zip(l, trueClasses)</code> is faster than <code>zip(dataset.values, trueClasses)</code>. However, if you don't, the conversion to a list takes too much time to make it worth it in my tests.</p>
|
python|pandas
| 4
|
5,396
| 44,187,426
|
Unable to install Scipy in windows?
|
<p><a href="https://i.stack.imgur.com/1xpL6.jpg" rel="nofollow noreferrer">1</a>When I am installing Scipy using
<code>pip install scipy</code> I am getting this error as shown in Image.
I've tried this many time and also I've tried <code>scikit-learn</code> but it also requires this Scipy. Please help me, I've to submit Project tomorrow. :(</p>
<p><strong>ERROR</strong></p>
<blockquote>
<p>Command ""c:\users\siraj munir\appdata\local\programs\python\python36\python.exe" -u -c "import setuptools, tokenize;<strong>file</strong>='C:\Users\SIRAJM~1\AppData\Local\Temp\pip-build-7mua6674\scipy\setup.py';f=getattr(tokenize, 'open', open)(file);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, file, 'exec'))" install --record C:\Users\SIRAJM~1\AppData\Local\Temp\pip-0x99qqd0-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\SIRAJM~1\AppData\Local\Temp\pip-build-7mua6674\scipy\ </p>
</blockquote>
|
<p>Try installing scipy from here:</p>
<p><a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy" rel="nofollow noreferrer">http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy</a></p>
<p>You'll need to know your version of python to choose correctly (I see you have 3.6). Also you'll need to know if it is 32 or 64 bits. You can do it by trial and error ;) or you can check the output of <code>python.exe</code>:</p>
<p>Python 3.5.1 (v3.5.1:37a07cee5969, Dec 6 2015, 01:38:48) [MSC v.1900 <strong>32 bit</strong> (Intel)] on win32</p>
<p>The bold part indicates which version it is (don't get mislead by <code>win32</code> part).</p>
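<p>Once you have downloaded the wheel that matches your Python, install it with pip; the file name below is only an illustration, use the one you actually downloaded:</p>
<pre><code>pip install scipy-0.19.0-cp36-cp36m-win_amd64.whl
</code></pre>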
<p>Another option is to install anaconda. It is much heavier but you don't need to know anything.</p>
|
python|windows|numpy|scipy
| 1
|
5,397
| 69,321,285
|
Cannot load lvis via tfds
|
<p>I am trying to load built-in dataset <a href="https://www.tensorflow.org/datasets/catalog/lvis" rel="nofollow noreferrer">lvis</a>. It turns out that the <code>tfds</code> and <code>lvis</code> should be imported and installed respectively, however, I did possible all, it still does not work.</p>
<pre class="lang-py prettyprint-override"><code>import os
import tensorflow as tf
from matplotlib import pyplot as plt
%matplotlib inline
!pip install lvis
!pip install tfds-nightly
import tensorflow_datasets as tfds
train_data, info = tfds.load('lvis', split='train', as_supervised=True, with_info=True)
validation_data = tfds.load('lvis', split='validation', as_supervised=True)
test_data = tfds.load('lvis', split='test', as_supervised=True)
</code></pre>
<p>There are some odd outputs after running upon codes in colab.</p>
<pre class="lang-py prettyprint-override"><code>otFoundError Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/utils/py_utils.py in try_reraise(*args, **kwargs)
391 try:
--> 392 yield
393 except Exception as e: # pylint: disable=broad-except
15 frames
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/load.py in builder(name, try_gcs, **builder_kwargs)
167 with py_utils.try_reraise(prefix=f'Failed to construct dataset {name}: '):
--> 168 return cls(**builder_kwargs) # pytype: disable=not-instantiable
169
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/dataset_builder.py in __init__(self, file_format, **kwargs)
917 """
--> 918 super().__init__(**kwargs)
919 self.info.set_file_format(file_format)
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/dataset_builder.py in __init__(self, data_dir, config, version)
184 else: # Use the code version (do not restore data)
--> 185 self.info.initialize_from_bucket()
186
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/utils/py_utils.py in __get__(self, obj, objtype)
145 if cached is None:
--> 146 cached = self.fget(obj) # pytype: disable=attribute-error
147 setattr(obj, attr, cached)
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/dataset_builder.py in info(self)
328 "the restored dataset.")
--> 329 info = self._info()
330 if not isinstance(info, dataset_info.DatasetInfo):
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/object_detection/lvis/lvis.py in _info(self)
94 names_file=tfds.core.tfds_path(
---> 95 'object_detection/lvis/lvis_classes.txt'))
96 return tfds.core.DatasetInfo(
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/features/class_label_feature.py in __init__(self, num_classes, names, names_file)
67 else:
---> 68 self.names = _load_names_from_file(names_file)
69
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/features/class_label_feature.py in _load_names_from_file(names_filepath)
198 name.strip()
--> 199 for name in tf.compat.as_text(f.read()).split("\n")
200 if name.strip() # Filter empty names
/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py in read(self, n)
116 """
--> 117 self._preread_check()
118 if n == -1:
/usr/local/lib/python3.7/dist-packages/tensorflow/python/lib/io/file_io.py in _preread_check(self)
79 self._read_buf = _pywrap_file_io.BufferedInputStream(
---> 80 compat.path_to_str(self.__name), 1024 * 512)
81
NotFoundError: /usr/local/lib/python3.7/dist-packages/tensorflow_datasets/object_detection/lvis/lvis_classes.txt; No such file or directory
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-4-b8c819fe5c62> in <module>()
----> 1 train_data, info = tfds.load('lvis', split='train', as_supervised=True, with_info=True)
2 validation_data = tfds.load('lvis', split='validation', as_supervised=True)
3 test_data = tfds.load('lvis', split='test', as_supervised=True)
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/load.py in load(name, split, data_dir, batch_size, shuffle_files, download, as_supervised, decoders, read_config, with_info, builder_kwargs, download_and_prepare_kwargs, as_dataset_kwargs, try_gcs)
315 builder_kwargs = {}
316
--> 317 dbuilder = builder(name, data_dir=data_dir, try_gcs=try_gcs, **builder_kwargs)
318 if download:
319 download_and_prepare_kwargs = download_and_prepare_kwargs or {}
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/load.py in builder(name, try_gcs, **builder_kwargs)
166 if cls:
167 with py_utils.try_reraise(prefix=f'Failed to construct dataset {name}: '):
--> 168 return cls(**builder_kwargs) # pytype: disable=not-instantiable
169
170 # If neither the code nor the files are found, raise DatasetNotFoundError
/usr/lib/python3.7/contextlib.py in __exit__(self, type, value, traceback)
128 value = type()
129 try:
--> 130 self.gen.throw(type, value, traceback)
131 except StopIteration as exc:
132 # Suppress StopIteration *unless* it's the same exception that
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/utils/py_utils.py in try_reraise(*args, **kwargs)
392 yield
393 except Exception as e: # pylint: disable=broad-except
--> 394 reraise(e, *args, **kwargs)
395
396
/usr/local/lib/python3.7/dist-packages/tensorflow_datasets/core/utils/py_utils.py in reraise(e, prefix, suffix)
359 else:
360 exception = RuntimeError(f'{type(e).__name__}: {msg}')
--> 361 raise exception from e
362 # Otherwise, modify the exception in-place
363 elif len(e.args) <= 1:
RuntimeError: NotFoundError: Failed to construct dataset lvis: /usr/local/lib/python3.7/dist-packages/tensorflow_datasets/object_detection/lvis/lvis_classes.txt; No such file or directory
</code></pre>
|
<p>This is what I did to get it to work on Colab Notebook:</p>
<pre><code>!pip install -q tfds-nightly tensorflow tensorflow-datasets matplotlib lvis pycocotools apache_beam
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
import tensorflow_datasets as tfds
</code></pre>
<p>Since the tfds object detection lvis folder isn't up to date, I deleted that folder then redownloaded it from the tfds github page.</p>
<p>First install github-clone so we can download specific repo subfolders</p>
<pre><code>!pip install github-clone
</code></pre>
<p>Then remove the lvis folder and redownload it from github:</p>
<pre><code>!rm -rf ../usr/local/lib/python3.7/dist-packages/tensorflow_datasets/object_detection/lvis
!ghclone https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/object_detection/lvis
!mv ./lvis ../usr/local/lib/python3.7/dist-packages/tensorflow_datasets/object_detection/
</code></pre>
<p>After that I could get it to work, this next chunk of code worked for me:</p>
<pre><code>ds, info = tfds.load('lvis', split='train[:25%]', with_info=True,
data_dir= '../content/tensorflow_datasets/',
decoders=tfds.decode.PartialDecoding({
'image': True,
'features': tfds.features.FeaturesDict({'image/id':True,
'objects':tfds.features.Sequence({
'id': True,
'bbox': True,
'label': tfds.features.ClassLabel(names=['skateboard','shoe'])
})
})
})
)
</code></pre>
|
tensorflow|keras|tensorflow2.0|tensorflow-datasets
| 2
|
5,398
| 69,420,951
|
Creating custom column for week number of a column value
|
<p>If I have a df such as</p>
<pre><code> Date | User
2019-08-05 Bob
2019-07-01 Chris
2019-08-13 Bob
2019-08-20 Chris
2019-09-24 Bob
</code></pre>
<p>Expected output</p>
<pre><code> Date | User | Week_number
2019-08-05 Bob 1
2019-07-01 Chris 1
2019-08-13 Bob 2
2019-08-20 Chris 9
2019-09-24 Bob 8
</code></pre>
<p>How can I create a new column that gives me the week # for each row?
(The week # is per user: that user's first date falls under week 1.)</p>
<p>In the past I have used methods such as <code>df['Date'].dt.day</code> and then used that to cut bins, but this is different: here I am assigning the week number based on each user's own date range.</p>
<p>Thanks for taking time to read my post</p>
|
<p>If you mean how many weeks have passed since the first date of the user:</p>
<pre><code>df.Date = pd.to_datetime(df.Date)
first_date = df.groupby('User').Date.transform('min')
df['WeekNumber'] = (df.Date - first_date).dt.days // 7 + 1
df
        Date   User  WeekNumber
0 2019-08-05    Bob           1
1 2019-07-01  Chris           1
2 2019-08-13    Bob           2
3 2019-08-20  Chris           8
4 2019-09-24    Bob           8
</code></pre>
|
python|python-3.x|pandas
| 1
|
5,399
| 69,420,061
|
How to use lambda function just in null rows of a column in pandas
|
<p>I am trying to put 0 or 1 in place of the null rows of a column using a lambda function, but my code doesn't make any changes to the data.</p>
<pre><code>df[df['a'].isnull()]['a']=df[df['a'].isnull()].apply(lambda x:1 if (x.b==0 and x.c==0) else
0,axis=1)
</code></pre>
<p>Where am I going wrong here?
<a href="https://i.stack.imgur.com/0Auxj.png" rel="nofollow noreferrer">sample table</a></p>
|
<p>You can use <code>loc</code> to specifically fill the null-value rows in your DataFrame. When you're using the <code>apply</code> method you can use it on the entire DataFrame; you do not need to filter for NULL values there. The <code>loc</code> selection takes care of only filling the rows which meet the NULL condition (the assigned Series is aligned on the index). This should work:</p>
<pre><code>df.loc[df['a'].isnull(), 'a'] = df.apply(lambda x: 1 if (x.b == 0 and x.c == 0) else 0,
                                         axis=1)
</code></pre>
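<p>A vectorized alternative (a sketch, same logic without <code>apply</code>): build the 0/1 values from <code>b</code> and <code>c</code> and let <code>fillna</code> place them only where <code>a</code> is null:</p>
<pre><code>df['a'] = df['a'].fillna(((df['b'] == 0) & (df['c'] == 0)).astype(int))
</code></pre>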
|
python|pandas|dataframe|lambda
| 1
|