| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, -10–5.87k) |
|---|---|---|---|---|---|---|
7,000
| 65,731,662
|
Does Python support declaring a matrix column-wise?
|
<p>In <em>Python numpy</em>, when declaring matrices, I use the <em>np.array([[row 1], [row 2], . . . [row n]])</em> form. This declares a matrix row-wise. Is there any facility in <em>Python</em> to declare a matrix column-wise? I would expect something like <em>np.array([[col 1], [col 2], . . . [col n]], parameter = 'column-wise')</em> so that a matrix with n columns is produced.</p>
<p>I know such a thing can be achieved via transposing. But is there a way for <em>np.array([...], parameter = '...')</em> being considered as a row or column based on the <em>parameter</em> value I provide?</p>
<p><em>np.array()</em> is just used as a dummy here. Any function with the desired facility described above will do.</p>
|
<p>At the time of array creation itself, you could use <code>numpy.transpose()</code> instead of <code>numpy.array()</code>, because <code>numpy.transpose()</code> takes any "array-like" object as input:</p>
<pre><code>import numpy as np

my_array = np.transpose([[1, 2, 3], [4, 5, 6]])
print(my_array)
</code></pre>
<p><strong>Output:</strong></p>
<pre><code>[[1 4]
[2 5]
[3 6]]
</code></pre>
|
python|arrays|numpy|matrix
| 2
|
7,001
| 65,560,510
|
Import RSS with FeedParser and Get Both Posts and General Information to Single Pandas DataFrame
|
<p>I am working, as a Python novice, on an exercise to practice importing data in Python. Eventually I want to analyze data from different podcasts (information on the podcasts themselves <em>and</em> every episode) by putting the data into a coherent dataframe and working on it with NLP.</p>
<p>So far I have managed to read a list of RSS feeds and get the information on every single episode of the RSS feed (a post).</p>
<p>But I am having trouble finding an <em>integrated</em> workflow in Python to gather both</p>
<ol>
<li>information on every single episode of the RSS feed (a post)</li>
<li>and general information about the RSS feed (like the title of the podcast),
in one go.</li>
</ol>
<p><strong>Code</strong>
This is what I have so far:</p>
<pre><code>import feedparser
import pandas as pd

rss_feeds = ['http://feeds.feedburner.com/TEDTalks_audio',
             'https://joelhooks.com/rss.xml',
             'https://www.sciencemag.org/rss/podcast.xml',
             ]
# number of feeds is reduced for testing

posts = []
feed = []
for url in rss_feeds:
    feed = feedparser.parse(url)
    for post in feed.entries:
        posts.append((post.title, post.link, post.summary))

df = pd.DataFrame(posts, columns=['title', 'link', 'summary'])
</code></pre>
<p><strong>Output</strong>
The dataframe includes 652 non-null objects for three columns (as intended) - basically every post made in every podcast. The column <em>title</em> refers to the title of the episode but <em>not</em> to the title of the podcast (which in this example is 'Ted Talk Daily').</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>title</th>
<th>link</th>
<th>summary</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>3 questions to ask yourself about everything y...</td>
<td><a href="https://www.ted.com/talks/stacey_abrams_3_ques.." rel="nofollow noreferrer">https://www.ted.com/talks/stacey_abrams_3_ques..</a>.</td>
<td>How you respond to setbacks is what defines yo...</td>
</tr>
<tr>
<td>1</td>
<td>What your sleep patterns say about your relati...</td>
<td><a href="https://www.ted.com/talks/tedx_shorts_what_you.." rel="nofollow noreferrer">https://www.ted.com/talks/tedx_shorts_what_you..</a>.</td>
<td>Wendy Troxel looks at the cultural expectation...</td>
</tr>
<tr>
<td>2</td>
<td>How we can actually pay people enough -- with ...</td>
<td><a href="https://www.ted.com/talks/ted_business_how_we_.." rel="nofollow noreferrer">https://www.ted.com/talks/ted_business_how_we_..</a>.</td>
<td>Capitalism urgently needs an upgrade, says Pay...</td>
</tr>
</tbody>
</table>
</div>
<p>I am struggling to find a way to include the title of the podcasts in this dataframe, too. I always get an error when selecting parts of the whole feed information, e.g. ['feed']['title'].</p>
<p><em>Thanks for every hint with this!</em></p>
<p><strong>Source</strong>
I adapted what I have so far from this source: <a href="https://stackoverflow.com/questions/45701053/get-feeds-from-feedparser-and-import-to-pandas-dataframe">Get Feeds from FeedParser and Import to Pandas DataFrame</a></p>
|
<p>Feed title can be accessed in this case with <code>feed.feed.title</code>:</p>
<pre><code># ...
for url in rss_feeds:
    feed = feedparser.parse(url)
    for post in feed.entries:
        posts.append((feed.feed.title, post.title, post.link, post.summary))

df = pd.DataFrame(posts, columns=['feed_title', 'title', 'link', 'summary'])
df
</code></pre>
<p>Output:</p>
<pre><code> feed_title title link summary
0 TED Talks Daily 3 ways compa... https://www.... When we expe...
1 TED Talks Daily How we could... https://www.... Concrete is ...
2 TED Talks Daily 3 questions ... https://www.... How you resp...
3 TED Talks Daily What your sl... https://www.... Wendy Troxel...
4 TED Talks Daily How we can a... https://www.... Capitalism u...
.. ... ... ... ...
649 Science Maga... Science Podc... https://traf... Fear-enhance...
650 Science Maga... Science Podc... https://traf... Discussing t...
651 Science Maga... Science Podc... https://traf... Talking kids...
652 Science Maga... Science Podc... https://traf... The minimum ...
653 Science Maga... Science Podc... https://traf... The origin o...
</code></pre>
|
python|pandas|rss|feedparser
| 1
|
7,002
| 63,648,837
|
Excel with pandas - once read, pandas do not take changes made to xlsx file into account
|
<p>I need to convert an xlsx file into csv. After googling, I found this satisfying answer:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
read_file = pd.read_excel("./data/myxlsxfiles.xlsx" )
read_file.to_csv("./data/mycsv.csv", index=None, header=True, sep=";")
</code></pre>
<p>This works fine. However, something surprising occurs and I could not find any suitable solution on the internet.</p>
<p>The above code is in a script, and each time the script is called, I get a csv from the xlsx file.
Now I correct my Excel file, close it, erase the csv file, and start the process again. And there it is! The csv file does not take the changes made to the Excel file into account. It seems that the previous version of the Excel file was cached somewhere in memory and pandas is using it.
For the time being, the only workaround I have found is to rename the xlsx file, which is not very convenient.</p>
<p>Does anyone have an idea of what's happening and how to solve this?</p>
<p>Thanks in advance</p>
|
<p>Maybe you can try to clear the read_file variable first, but I think it's a problem on Windows.</p>
<p>Otherwise, you can just duplicate the file into a temp folder, read the duplicate, proceed it and then delete the duplicate. Like this (not tested):</p>
<pre><code>import os
import shutil
import time

import pandas as pd

path = "./data/"
temp_path = "./data/temp/"
filename_original = path + "myxlsxfiles.xlsx"
filename_temp = temp_path + "myxlsxfiles_" + str(int(time.time())) + ".xlsx"

# Create the temp folder if it does not exist
if not os.path.exists(temp_path):
    os.mkdir(temp_path)

# Copy file
shutil.copyfile(filename_original, filename_temp)

# Do your stuff
read_file = pd.read_excel(filename_temp)
read_file.to_csv("./data/mycsv.csv", index=None, header=True, sep=";")

# Remove temp file
os.remove(filename_temp)
</code></pre>
|
python|excel|pandas|csv
| 0
|
7,003
| 63,420,936
|
How to select rows from DataFrame based on subset inclusion?
|
<p>I have a DataFrame with a column of sets and a column of numbers:</p>
<pre><code>df
ant cons
0 ("Q1A_3") 2
1 ("Q1A_2", "Q2A_4") 3
2 ("Q2A_5") 6
</code></pre>
<p>Ideally, I'd like to be able to retrieve all the rows based on them being a subset of some provided set</p>
<pre><code>selection = set(["Q1A_3","Q2A_4"])
</code></pre>
<p>such that the result of such a function/set of operations would look as follows:</p>
<pre><code>df[df.func(selection)]
ant cons
0 ("Q1A_3") 2
1 ("Q1A_3" "Q2A__4") 3
</code></pre>
|
<p>Try combining two <code>str.contains</code> checks:</p>
<pre><code>subdf=df[df['ant'].str.contains("Q1A_3") & df['ant'].str.contains("Q2A_4")]
</code></pre>
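<p>Note that combining <code>str.contains</code> checks selects rows that mention <em>both</em> values. If <code>ant</code> actually holds Python sets or tuples, a minimal sketch of a true subset test (column name and selection taken from the question) would be:</p>
<pre><code>selection = {"Q1A_3", "Q2A_4"}
# keep the rows whose set of tags is entirely contained in `selection`
mask = df['ant'].apply(lambda s: set(s).issubset(selection))
subdf = df[mask]
</code></pre>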
|
python|pandas
| 0
|
7,004
| 63,365,653
|
Preserving unknown batch dimension for custom static tensors in Tensorflow
|
<p>Some notes: I'm using tensorflow 2.3.0, python 3.8.2, and numpy 1.18.5 (not sure if that one matters though)</p>
<p>I'm writing a custom layer that stores a non-trainable tensor N of shape (a, b) internally, where a, b are known values (this tensor is created during init). When called on an input tensor, it flattens the input tensor, flattens its stored tensor, and concatenates the two together. Unfortunately, I can't seem to figure out how to preserve the unknown batch dimension during this concatenation. Here's minimal code:</p>
<pre><code>import tensorflow as tf
from tensorflow.keras.layers import Layer, Flatten

class CustomLayer(Layer):
    def __init__(self, N):  # N is a tensor of shape (a, b), where a, b > 1
        super(CustomLayer, self).__init__()
        self.N = self.add_weight(name="N", shape=N.shape, trainable=False,
                                 initializer=lambda *args, **kwargs: N)
        # correct me if I'm wrong in using this initializer approach, but for some reason, when I
        # just do self.N = N, this variable would disappear when I saved and loaded the model

    def build(self, input_shape):
        pass  # my reasoning is that all the necessary stuff is handled in init

    def call(self, input_tensor):
        input_flattened = Flatten()(input_tensor)
        N_flattened = Flatten()(self.N)
        return tf.concat((input_flattened, N_flattened), axis=-1)
</code></pre>
<p>The first problem I noticed was that <code>Flatten()(self.N)</code> would return a tensor with the same shape (a, b) as the original <code>self.N</code>, and as a result, the returned value would have a shape of (a, num_input_tensor_values+b). My reasoning for this was that the first dimension, a, was treated as the batch size. I modified the <code>call</code> function:</p>
<pre><code>    def call(self, input_tensor):
        input_flattened = Flatten()(input_tensor)
        N = tf.expand_dims(self.N, axis=0)  # N would now be shape (1, a, b)
        N_flattened = Flatten()(N)
        return tf.concat((input_flattened, N_flattened), axis=-1)
</code></pre>
<p>This would return a tensor with shape (1, num_input_vals + a*b), which is great, but now the batch dimension is permanently 1, which I realized when I started training a model with this layer and it would only work for a batch size of 1. This is also really apparent in the model summary - if I were to put this layer after an input and add some other layers afterwards, the first dimension of the output tensors goes like <code>None, 1, 1, 1, 1...</code>. Is there a way to store this internal tensor and use it in <code>call</code> while preserving the variable batch size? (For example, with a batch size of 4, a copy of the same flattened N would be concatenated onto the end of each of the 4 flattened input tensors.)</p>
|
<p>You have to have as many flattened <code>N</code> vectors as you have samples in your input, because you are concatenating to every sample. Think of it like pairing up rows and concatenating them. If you have only one <code>N</code> vector, then only one pair can be concatenated.
To solve this, you should use <code>tf.tile()</code> to repeat <code>N</code> as many times as there are samples in your batch.</p>
<p>Example:</p>
<pre><code>    def call(self, input_tensor):
        input_flattened = Flatten()(input_tensor)  # input_flattened shape: (None, ..)
        N = tf.expand_dims(self.N, axis=0)         # N shape: (1, a, b)
        N_flattened = Flatten()(N)                 # N_flattened shape: (1, a*b)
        # repeat along the first dim as many times as there are samples; leave the second dim alone
        N_tiled = tf.tile(N_flattened, [tf.shape(input_tensor)[0], 1])
        return tf.concat((input_flattened, N_tiled), axis=-1)
</code></pre>
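<p>As a quick sanity check (a sketch with made-up shapes, not from the original post):</p>
<pre><code># batch of 4 inputs of shape (5, 6), stored tensor N of shape (2, 3)
# -> output shape should be (4, 5*6 + 2*3) = (4, 36)
layer = CustomLayer(tf.ones((2, 3)))
out = layer(tf.ones((4, 5, 6)))
print(out.shape)  # (4, 36)
</code></pre>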
|
python|tensorflow|keras
| 3
|
7,005
| 63,507,023
|
How to make a Keras Dense Layer deal with 3D tensor as input for this Softmax Fully Connected Layer?
|
<p>I am working on a custom problem, and I have to change the fully connected layer (Dense with softmax). My model code is something like this (with the Keras framework):</p>
<pre><code>.......
batch_size = 8
inputs = tf.random.uniform(shape=[batch_size,1024,256],dtype=tf.dtypes.float32)
preds = Dense(num_classes,activation='softmax')(x) #final layer with softmax activation
....
model = Model(inputs=base_model.input,outputs=preds)
</code></pre>
<p>So, I have to change the code of the Dense layer to output a tensor of probabilities with the shape [batch_size, 1024, num_classes], without using a for loop. I need it to be optimized and not a time-consuming function.</p>
<p>The Dense layer code that I want to change:</p>
<pre><code>class Dense(Layer):
    """Just your regular densely-connected NN layer.

    `Dense` implements the operation:
    `output = activation(dot(input, kernel) + bias)`
    where `activation` is the element-wise activation function
    passed as the `activation` argument, `kernel` is a weights matrix
    created by the layer, and `bias` is a bias vector created by the layer
    (only applicable if `use_bias` is `True`).

    Note: if the input to the layer has a rank greater than 2, then
    it is flattened prior to the initial dot product with `kernel`.

    # Example

    ```python
        # as first layer in a sequential model:
        model = Sequential()
        model.add(Dense(32, input_shape=(16,)))
        # now the model will take as input arrays of shape (*, 16)
        # and output arrays of shape (*, 32)

        # after the first layer, you don't need to specify
        # the size of the input anymore:
        model.add(Dense(32))
    ```

    # Arguments
        units: Positive integer, dimensionality of the output space.
        activation: Activation function to use
            (see [activations](../activations.md)).
            If you don't specify anything, no activation is applied
            (ie. "linear" activation: `a(x) = x`).
        use_bias: Boolean, whether the layer uses a bias vector.
        kernel_initializer: Initializer for the `kernel` weights matrix
            (see [initializers](../initializers.md)).
        bias_initializer: Initializer for the bias vector
            (see [initializers](../initializers.md)).
        kernel_regularizer: Regularizer function applied to
            the `kernel` weights matrix
            (see [regularizer](../regularizers.md)).
        bias_regularizer: Regularizer function applied to the bias vector
            (see [regularizer](../regularizers.md)).
        activity_regularizer: Regularizer function applied to
            the output of the layer (its "activation").
            (see [regularizer](../regularizers.md)).
        kernel_constraint: Constraint function applied to
            the `kernel` weights matrix
            (see [constraints](../constraints.md)).
        bias_constraint: Constraint function applied to the bias vector
            (see [constraints](../constraints.md)).

    # Input shape
        nD tensor with shape: `(batch_size, ..., input_dim)`.
        The most common situation would be
        a 2D input with shape `(batch_size, input_dim)`.

    # Output shape
        nD tensor with shape: `(batch_size, ..., units)`.
        For instance, for a 2D input with shape `(batch_size, input_dim)`,
        the output would have shape `(batch_size, units)`.
    """

    def __init__(self, units,
                 activation=None,
                 use_bias=True,
                 kernel_initializer='glorot_uniform',
                 bias_initializer='zeros',
                 kernel_regularizer=None,
                 bias_regularizer=None,
                 activity_regularizer=None,
                 kernel_constraint=None,
                 bias_constraint=None,
                 **kwargs):
        if 'input_shape' not in kwargs and 'input_dim' in kwargs:
            kwargs['input_shape'] = (kwargs.pop('input_dim'),)
        super(Dense, self).__init__(**kwargs)
        self.units = units
        self.activation = activations.get(activation)
        self.use_bias = use_bias
        self.kernel_initializer = initializers.get(kernel_initializer)
        self.bias_initializer = initializers.get(bias_initializer)
        self.kernel_regularizer = regularizers.get(kernel_regularizer)
        self.bias_regularizer = regularizers.get(bias_regularizer)
        self.activity_regularizer = regularizers.get(activity_regularizer)
        self.kernel_constraint = constraints.get(kernel_constraint)
        self.bias_constraint = constraints.get(bias_constraint)
        self.input_spec = InputSpec(min_ndim=2)
        self.supports_masking = True

    def build(self, input_shape):
        assert len(input_shape) >= 2
        input_dim = input_shape[-1]

        self.kernel = self.add_weight(shape=(input_dim, self.units),
                                      initializer=self.kernel_initializer,
                                      name='kernel',
                                      regularizer=self.kernel_regularizer,
                                      constraint=self.kernel_constraint)
        if self.use_bias:
            self.bias = self.add_weight(shape=(self.units,),
                                        initializer=self.bias_initializer,
                                        name='bias',
                                        regularizer=self.bias_regularizer,
                                        constraint=self.bias_constraint)
        else:
            self.bias = None
        self.input_spec = InputSpec(min_ndim=2, axes={-1: input_dim})
        self.built = True

    def call(self, inputs):
        output = K.dot(inputs, self.kernel)
        if self.use_bias:
            output = K.bias_add(output, self.bias)
        if self.activation is not None:
            output = self.activation(output)
        return output

    def compute_output_shape(self, input_shape):
        assert input_shape and len(input_shape) >= 2
        assert input_shape[-1]
        output_shape = list(input_shape)
        output_shape[-1] = self.units
        return tuple(output_shape)

    def get_config(self):
        config = {
            'units': self.units,
            'activation': activations.serialize(self.activation),
            'use_bias': self.use_bias,
            'kernel_initializer': initializers.serialize(self.kernel_initializer),
            'bias_initializer': initializers.serialize(self.bias_initializer),
            'kernel_regularizer': regularizers.serialize(self.kernel_regularizer),
            'bias_regularizer': regularizers.serialize(self.bias_regularizer),
            'activity_regularizer': regularizers.serialize(self.activity_regularizer),
            'kernel_constraint': constraints.serialize(self.kernel_constraint),
            'bias_constraint': constraints.serialize(self.bias_constraint)
        }
        base_config = super(Dense, self).get_config()
        return dict(list(base_config.items()) + list(config.items()))
</code></pre>
|
<p>There are three different ways in which this can be done (that I can think of). If you want to have a single dense layer, that maps a vector of 256 elements to a vector of <code>num_classes</code> elements, and apply it all across your batch of data (that is, use the same <code>256 x num_classes</code> matrix of weights for every sample), then you don't need to do anything special, just use a regular <code>Dense</code> layer:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import Dense
batch_size = 8
num_classes = 10
inp = Input(shape=(1024, 256))
layer = Dense(num_classes, activation='softmax')
out = layer(inp)
print(out.shape)
# (None, 1024, 10)
print(layer.count_params())
# 2570
</code></pre>
<p>Another way would be to have a single huge <code>Dense</code> layer that takes all <code>1024 * 256</code> values in at the same time and produces all <code>1024 * num_classes</code> values at the output, that is, a layer with a matrix of weights with shape <code>(1024 * 256) x (1024 * num_classes)</code> (on the order of gigabytes of memory!). This is easy to do too, although it seems unlikely to be what you need:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import Flatten, Dense, Reshape, Softmax
batch_size = 8
num_classes = 10
inp = Input(shape=(1024, 256))
res = Flatten()(inp)
# This takes _a lot_ of memory!
layer = Dense(1024 * num_classes, activation=None)
out_res = layer(res)
# Apply softmax after reshaping
out_preact = Reshape((-1, num_classes))(out_res)
out = Softmax()(out_preact)
print(out.shape)
# (None, 1024, 10)
print(layer.count_params())
# 2684364800
</code></pre>
<p>Finally, you may want to have a set of 1024 weight matrices, each one applied to the corresponding sample in the input, which would imply an array of weights with shape <code>(1024, 256, num_classes)</code>. I don't think this can be done with one of the standard Keras layers (or don't know how to)<sup>1</sup>, but it's easy enough to write a custom layer based on <code>Dense</code> to do that:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras.layers import Dense, InputSpec
class Dense2D(Dense):
def __init__(self, *args, **kwargs):
super(Dense2D, self).__init__(*args, **kwargs)
def build(self, input_shape):
assert len(input_shape) >= 3
input_dim1 = input_shape[-2]
input_dim2 = input_shape[-1]
self.kernel = self.add_weight(shape=(input_dim1, input_dim2, self.units),
initializer=self.kernel_initializer,
name='kernel',
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint)
if self.use_bias:
self.bias = self.add_weight(shape=(input_dim1, self.units),
initializer=self.bias_initializer,
name='bias',
regularizer=self.bias_regularizer,
constraint=self.bias_constraint)
else:
self.bias = None
self.input_spec = InputSpec(min_ndim=3, axes={-2: input_dim1, -1: input_dim2})
self.built = True
def call(self, inputs):
# Multiply each set of weights with each input element
output = tf.einsum('...ij,ijk->...ik', inputs, self.kernel)
if self.use_bias:
output += self.bias
if self.activation is not None:
output = self.activation(output)
return output
def compute_output_shape(self, input_shape):
assert input_shape and len(input_shape) >= 3
assert input_shape[-1]
output_shape = list(input_shape)
output_shape[-1] = self.units
return tuple(output_shape)
</code></pre>
<p>You would then use it like this:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import Input
batch_size = 8
num_classes = 10
inp = Input(shape=(1024, 256))
layer = Dense2D(num_classes, activation='softmax')
out = layer(inp)
print(out.shape)
# (None, 1024, 10)
print(layer.count_params())
# 2631680
</code></pre>
<hr />
<p><sup>1</sup>: As <a href="https://stackoverflow.com/users/2099607">today</a> points out in the comments, you can actually use a <a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/LocallyConnected1D" rel="nofollow noreferrer"><code>LocallyConnected1D</code></a> layer to do the same that I tried to do with my <code>Dense2D</code> layer. It is as simple as this:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import Input
from tensorflow.keras.layers import LocallyConnected1D
batch_size = 8
num_classes = 10
inp = Input(shape=(1024, 256))
layer = LocallyConnected1D(num_classes, 1, activation='softmax')
out = layer(inp)
print(out.shape)
# (None, 1024, 10)
print(layer.count_params())
# 2631680
</code></pre>
|
tensorflow|keras|deep-learning|neural-network|tensor
| 1
|
7,006
| 63,380,691
|
How to model.predict inside loss function? (Tensorflow, Keras)
|
<p>I am trying to construct a custom loss for a regression problem with the following structure, following this answer:
<a href="https://stackoverflow.com/questions/46858016/keras-custom-loss-function-to-pass-arguments-other-than-y-true-and-y-pred">Keras Custom loss function to pass arguments other than y_true and y_pred</a></p>
<p>Now, my function is like the following:</p>
<pre><code>def CustomLoss(model, X_valid, y_valid, batch_size):
    def Loss(y_true, y_pred):
        n_samples = 5
        mc_predictions = np.zeros((n_samples, 256, 256))
        for i in range(n_samples):
            y_p = model.predict(X_valid, verbose=1, batch_size=batch_size)
        # (Other operations...)
        return LossValue
    return Loss
</code></pre>
<p>When trying to execute the line
<code>y_p = model.predict(X_valid, verbose=1, batch_size=batch_size)</code>, I get the following error:</p>
<p><em>Method requires being in cross-replica context, use get_replica_context().merge_call()</em></p>
<p>From what I gathered, I cannot use model.predict inside a loss function. Is there a workaround or solution for this?
Please let me know if my question is clear or if you need any additional information. Thanks!</p>
|
<p>Sounds like you can use <code>model.add_loss</code> for this. You can use it to specify the loss function inside of the model. It also removes the restriction that the loss function only take in <code>y_true</code> and <code>y_pred</code>.
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_loss" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/keras/layers/Layer#add_loss</a></p>
<p>Some pseudo-code:</p>
<pre><code>class YourModel(tf.keras.Model):
    ...
    def call(self, inputs):
        unpack, any, extra, stuff = inputs
        # (your network code goes here)
        loss = (other operations)
        self.add_loss(loss)
        return output
</code></pre>
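<p>A minimal usage sketch (the class above is pseudo-code, so the input names here are assumptions): once the loss is registered with <code>add_loss</code>, <code>compile()</code> no longer needs a <code>loss</code> argument:</p>
<pre><code>model = YourModel()
model.compile(optimizer='adam')        # no loss= needed; add_loss supplies it
model.fit(inputs, epochs=10)           # 'inputs' is whatever your call() unpacks
</code></pre>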
<p>(In case you don't know, model.predict is basically just model.call but with some extra bells and whistles attached.)</p>
|
python|tensorflow|loss-function
| 1
|
7,007
| 24,601,014
|
Behaviour of custom NaN floats in Python and Numpy
|
<p>I need to pack some extra information into floating point NaN values. I am using single-precision IEEE 754 floats (32-bit floats) in Python. How do Python and NumPy treat these values?</p>
<p><strong>Theory</strong></p>
<p>The IEEE 754-2008 standard considers a value not-a-number if all of the exponent bits (23..30) are set and at least one of the significand bits is set. Thus, if we transform the float into a 32-bit integer representation, anything satisfying the following conditions goes:</p>
<ul>
<li><code>i & 0x7f800000 == 0x7f800000</code></li>
<li><code>i & 0x007fffff != 0</code></li>
</ul>
<p>This would leave me with plenty of choice. However, the standard seems to say that the highest bit of the significand is the <em>is_quiet</em> flag and should be set to avoid exceptions in calculations.</p>
<p><strong>Practical tests</strong></p>
<p><em>Python 2.7</em></p>
<p>In order to be sure, I ran some tests with interesting results:</p>
<pre><code>import math
import struct

std_nan   = struct.unpack("f", struct.pack("I", 0x7fc00000))[0]
spec_nan  = struct.unpack("f", struct.pack("I", 0x7f800001))[0]
spec2_nan = struct.unpack("f", struct.pack("I", 0x7fc00001))[0]

print "{:08x}".format(struct.unpack("I", struct.pack("f", std_nan))[0])
print "{:08x}".format(struct.unpack("I", struct.pack("f", spec_nan))[0])
print "{:08x}".format(struct.unpack("I", struct.pack("f", spec2_nan))[0])
</code></pre>
<p>This gives:</p>
<pre><code>7fc00000
7fc00001 <<< should be 7f800001
7fc00001
</code></pre>
<p>This and some further testing seems to suggest that something (<code>struct.unpack</code>?) always sets the <em>is_quiet</em> bit.</p>
<p><em>NumPy</em></p>
<p>I tried the same with NumPy, because there I can always rely on conversions not changing a single bit:</p>
<pre><code>import numpy as np
intarr = np.array([0x7f800001], dtype='uint32')
f = np.fromstring(intarr.tostring(), dtype='f4')
print np.isnan(f)
</code></pre>
<p>This gives:</p>
<pre><code>RuntimeWarning: invalid value encountered in isnan
[True]
</code></pre>
<p>but if the value is replaced by <code>0x7fc00001</code>, there is no error.</p>
<p><strong>Hypothesis</strong></p>
<p>Both Python and NumPy will be happy, if I set the <em>is_quiet</em> and use the rest of the bits for my own purposes. Python handles the bit by itself, NumPy relies on lower-level language implementations and/or the hardware FP implementation.</p>
<p><strong>Question</strong></p>
<p>Is my hypothesis correct, and can it be proved or disproved by some official documentation? Or is it one of those platform-dependent things?</p>
<p>I found something quite related here: <a href="https://stackoverflow.com/questions/3886988/how-to-distinguish-different-types-of-nan-float-in-python">How to distinguish different types of NaN float in Python</a>, but I could not find any official word on how extra-information-carrying NaNs should be handled in Python or NumPy.</p>
|
<p>After thinking of this for some time and having a look at the source code ad then rethinking a bit, I think I can answer my own question. My hypotheses are almost correct but not the whole story.</p>
<p>As NumPy and Python handle numbers quite differently, this answer has two parts.</p>
<p><strong>What really happens in Python and NumPy with NaNs</strong></p>
<p><em>NumPy</em></p>
<p>This may be slightly platform-specific, but on most platforms NumPy uses the <code>gcc</code> builtin <code>isnan</code>, which in turn does something fast. The runtime warnings come from the deeper levels, from the hardware in most cases. (NumPy may use one of several methods of determining the NaN status, such as <code>x != x</code>, which works on at least AMD64 platforms, but with <code>gcc</code> it is down to <code>gcc</code>, which probably uses some pretty short code for the purpose.)</p>
<p>So, in theory there is no way to guarantee how NumPy handles NaNs, but in practice on the more common platforms it will do as the standard says because that's what the hardware does. NumPy itself does not care about the NaN types at all. (Except for some NumPy-specific non-hw-supported data types and platforms.)</p>
<p><em>Python</em></p>
<p>Here the story becomes interesting. If the platform supports IEEE floats (most do), Python uses the C library for floating point arithmetics, and thus almost directly hardware instructions in most cases. So there should not be any difference to NumPy.</p>
<p>Except for... There is usually no such thing as a 32-bit float in Python. Python float objects use C <code>double</code>, which is a 64-bit format. How does one transform special NaNs between these formats? In order to see what happens in practice, the following little C code helps:</p>
<pre><code>/* nantest.c - Test floating point nan behaviour with type casts */

#include <stdio.h>
#include <stdint.h>

static uint32_t u1 = 0x7fc00000;
static uint32_t u2 = 0x7f800001;
static uint32_t u3 = 0x7fc00001;

int main(void)
{
    float f1, f2, f3;
    float f1p, f2p, f3p;
    double d1, d2, d3;
    uint32_t u1p, u2p, u3p;
    uint64_t l1, l2, l3;

    // Convert uint32 -> float
    f1 = *(float *)&u1; f2 = *(float *)&u2; f3 = *(float *)&u3;

    // Convert float -> double (type cast, real conversion)
    d1 = (double)f1; d2 = (double)f2; d3 = (double)f3;

    // Convert the doubles into long ints
    l1 = *(uint64_t *)&d1; l2 = *(uint64_t *)&d2; l3 = *(uint64_t *)&d3;

    // Convert the doubles back to floats
    f1p = (float)d1; f2p = (float)d2; f3p = (float)d3;

    // Convert the floats back to uints
    u1p = *(uint32_t *)&f1p; u2p = *(uint32_t *)&f2p; u3p = *(uint32_t *)&f3p;

    printf("%f (%08x) -> %lf (%016llx) -> %f (%08x)\n", f1, u1, d1, l1, f1p, u1p);
    printf("%f (%08x) -> %lf (%016llx) -> %f (%08x)\n", f2, u2, d2, l2, f2p, u2p);
    printf("%f (%08x) -> %lf (%016llx) -> %f (%08x)\n", f3, u3, d3, l3, f3p, u3p);

    return 0;
}
</code></pre>
<p>This prints:</p>
<pre><code>nan (7fc00000) -> nan (7ff8000000000000) -> nan (7fc00000)
nan (7f800001) -> nan (7ff8000020000000) -> nan (7fc00001)
nan (7fc00001) -> nan (7ff8000020000000) -> nan (7fc00001)
</code></pre>
<p>By looking at row 2 it is obvious that we have the same phenomenon as we had with Python. So, it is the conversion to <code>double</code> that introduces the extra <em>is_quiet</em> bit immediately after the exponent in the 64-bit version.</p>
<p>This sounds a bit strange, but actually the standard says (IEEE 754-2008, section 6.2.3):</p>
<p><em>Conversion of a quiet NaN from a narrower format to a wider format in the same radix, and then back to the same narrower format, should not change the quiet NaN payload in any way except to make it canonical.</em></p>
<p>This does not say anything about the propagation of signaling NaNs. However, that is explained by section 6.2.1:</p>
<p><em>For binary formats, the payload is encoded in the p − 2 least significant bits of the trailing significand field.</em></p>
<p>The <em>p</em> above is the precision, 24 bits for a 32-bit float. So, my mistake was to use signaling NaNs for the payload.</p>
<p><strong>Summary</strong></p>
<p>I got the following take home points:</p>
<ul>
<li>the use of qNaNs (quiet NaNs) is supported and encouraged by the IEEE 754-2008</li>
<li>the odd results occurred because I tried to use sNaNs, and type conversions resulted in the <em>is_quiet</em> bit being set</li>
<li>both NumPy and Python act according to IEEE 754 on the most common platforms</li>
<li>the implementation leans heavily on the underlying C implementation and thus guarantees very little (there is even some code in Python which acknowledges that NaNs are not handled as they should be on some platforms)</li>
<li>the only safe way to handle this is to do a bit of DIY with the payload</li>
</ul>
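<p>For illustration, a minimal NumPy sketch of that DIY approach (my own, not from the discussion above): keep the <em>is_quiet</em> bit set and move the payload in and out through an integer view, so no float conversion ever touches the bits:</p>
<pre><code>import numpy as np

PAYLOAD = 12345  # must fit in the 22 payload bits
bits = np.array([0x7fc00000 | PAYLOAD], dtype=np.uint32)
f = bits.view(np.float32)                  # reinterpret the bits, no conversion
print(np.isnan(f))                         # [ True]
print(f.view(np.uint32)[0] & 0x003fffff)   # 12345 recovered
</code></pre>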
<p>There is, however, one thing that is implemented in neither Python nor NumPy (nor any other language I have come across). Section 5.12.1:</p>
<p><em>Language standards should provide an optional conversion of NaNs in a supported format to external character sequences which appends to the basic NaN character sequences a suffix that can represent the NaN payload (see 6.2). The form and interpretation of the payload suffix is language-defined. The language standard shall require that any such optional output sequences be accepted as input in conversion of external character sequences to supported formats.</em></p>
|
python|python-2.7|numpy
| 8
|
7,008
| 29,926,772
|
putting headers into an array, python
|
<p>I have a set of data that is below some metadata. I'm looking to put the headers into a numpy array to be used later. However, the first header needs to be ignored, as that is the x data header; the other columns are the y headers. How do I read this?</p>
|
<p>Assuming I have understood what you mean by headers (it would be easier to tell with a few complete lines, even if you had to scale it down from your actual file)...</p>
<p>I would first read the irregular lines with plain Python, then, for the regular lines, use <code>genfromtxt</code> with <code>skip_header</code> and <code>usecols</code> (make a tuple like <code>tuple(range(2, 102))</code>), as in the sketch below.</p>
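<p>A minimal sketch of that idea (the filename, the number of metadata lines, and the column range are assumptions):</p>
<pre><code>import numpy as np

# skip the metadata lines; names=True reads the header row,
# usecols drops column 0 (the x data) and keeps the y columns
data = np.genfromtxt("data.csv", delimiter=",", skip_header=5,
                     names=True, usecols=tuple(range(1, 101)))
y_headers = np.array(data.dtype.names)
</code></pre>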
|
python|csv|numpy
| 0
|
7,009
| 30,041,286
|
Sum rows where value equal in column
|
<p>How can I sum across rows that have equal values in the first column of a numpy array? For example: </p>
<pre><code>In:  np.array([[1,2,3],
               [1,4,6],
               [2,3,5],
               [2,6,2],
               [3,4,8]])

Out: [[1,6,9], [2,9,7], [3,4,8]]
</code></pre>
<p>Any help would be greatly appreciated.</p>
|
<p>Pandas has a very very powerful groupby function which makes this very simple.</p>
<pre><code>import numpy as np
import pandas as pd

n = np.array([[1,2,3],
              [1,4,6],
              [2,3,5],
              [2,6,2],
              [3,4,8]])

df = pd.DataFrame(n, columns=["First Col", "Second Col", "Third Col"])
df.groupby("First Col").sum()
</code></pre>
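<p>For a pure-NumPy alternative, a sketch using <code>np.unique</code> with <code>return_inverse</code> to group rows by the first column:</p>
<pre><code>import numpy as np

n = np.array([[1,2,3],[1,4,6],[2,3,5],[2,6,2],[3,4,8]])
keys, inv = np.unique(n[:, 0], return_inverse=True)
sums = np.zeros((keys.size, n.shape[1] - 1))
np.add.at(sums, inv, n[:, 1:])           # accumulate rows sharing a key
result = np.column_stack([keys, sums])   # [[1,6,9],[2,9,7],[3,4,8]] (as floats)
</code></pre>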
|
python|numpy|sum|row
| 16
|
7,010
| 29,920,114
|
How to gauss-filter (blur) a floating point numpy array
|
<p>I have got a numpy array <code>a</code> of type <code>float64</code>. How can I blur this data with a Gauss filter?</p>
<p>I have tried</p>
<pre><code>from PIL import Image, ImageFilter
image = Image.fromarray(a)
filtered = image.filter(ImageFilter.GaussianBlur(radius=7))
</code></pre>
<p>, but this yields <code>ValueError: 'image has wrong mode'</code>. (It has mode <code>F</code>.)</p>
<p>I could create an image of suitable mode by multiplying <code>a</code> with some constant, then rounding to integer. That should work, but I would like to have a more direct way.</p>
<p>(I am using Pillow 2.7.0.)</p>
|
<p>If you have a two-dimensional numpy array <code>a</code>, you can use a Gaussian filter on it directly without using Pillow to convert it to an image first. scipy has a function <a href="http://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.ndimage.filters.gaussian_filter.html"><code>gaussian_filter</code></a> that does the same.</p>
<pre><code>from scipy.ndimage.filters import gaussian_filter
blurred = gaussian_filter(a, sigma=7)
</code></pre>
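<p>(In newer SciPy versions the <code>scipy.ndimage.filters</code> namespace is deprecated; the same function is available as <code>from scipy.ndimage import gaussian_filter</code>.)</p>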
|
python|numpy|opencv|python-imaging-library|filtering
| 58
|
7,011
| 53,476,876
|
np.around for array with none and integer values
|
<p>I have an array:</p>
<pre><code>MDP= [[0.705,.655,0.614,0.388],[0.762,None,0.660,-1],[0.812,.868,0.918,+1]]
</code></pre>
<p>How can I apply np.around on above array without getting the error for None and -1, +1 values?</p>
<p>TIA</p>
|
<p>Make sure that you work with a numpy array, not lists of lists:</p>
<pre><code>np.around(np.array(MDP).astype(float))
#array([[ 1., 1., 1., 0.],
# [ 1., nan, 1., -1.],
# [ 1., 1., 1., 1.]])
</code></pre>
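<p>If you want to round to a fixed number of decimals instead of integers, <code>np.around</code> also takes a <code>decimals</code> argument, e.g. <code>np.around(np.array(MDP).astype(float), decimals=2)</code>.</p>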
<p>You can convert the result back to a nested list with <code>.tolist()</code>, if needed.</p>
|
python|numpy
| 1
|
7,012
| 53,657,152
|
Looping Range of Numbers & Appending to df.col using pandas or itertools
|
<p>I would like to iterate a range of numbers through a dataframe column.</p>
<pre><code>data = {'NAME': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy', 'Tina3', 'Jake2', 'Amy1', 'Jake3', 'Amy2'],
        'REPORTS': [4, 24, 31, 2, 3, 12, 13, 63, 22, 64]}
df = pd.DataFrame(data)
df['col'] = 0
range = [1,2,3]
</code></pre>
<p>I would like the output to look like the below:</p>
<pre><code>Jason 4 1
Molly 24 2
Tina 31 3
Jake 2 1
Amy 3 2
</code></pre>
<p>I've tried:</p>
<pre><code>for row in df['col']:
    d['col'].append(range)

df['col'] = df.apply(lambda x: df['col']+range)
</code></pre>
|
<p>IIUC, you can use <code>itertools.cycle</code> to cycle through your range for the length of the dataframe:</p>
<pre><code>from itertools import cycle
c = cycle(range(1,4))
df['new_column'] = [next(c) for _ in range(len(df))]
>>> df
NAME REPORTS new_column
0 Jason 4 1
1 Molly 24 2
2 Tina 31 3
3 Jake 2 1
4 Amy 3 2
5 Tina3 12 3
6 Jake2 13 1
7 Amy1 63 2
8 Jake3 22 3
9 Amy2 64 1
</code></pre>
<p>An alternative would be to use <code>np.tile</code> to repeat your range, but this seems less readable to me:</p>
<pre><code>df['new_column'] = pd.np.tile(range(1,4), (len(df)//3)+1)[:len(df)]
</code></pre>
|
python|pandas|loops|apply
| 1
|
7,013
| 53,618,106
|
Performing computations with values of two dataframes
|
<p>I have two pandas data frames.</p>
<p><strong>DataFrame 1</strong></p>
<pre><code>Index_Col Col1 Col2 Col3 Col4 Col5
Row1 0.64 0.89 0.76 0.22 1.34
Row2 0.54 0.56 0.82 0.46 0.23
and so on.
</code></pre>
<p>DataFrame 2 has Thresholds for each of the columns in dataframe1 as a range.</p>
<p><strong>DataFrame 2</strong></p>
<pre><code>Column_Name Group Min Max
col1 G1 0.5 1
col2 G1 0.1 2
col3 G2 0.3 0.9
col4 G1 0.3 1
col5 G2 0.7 2
and so on
</code></pre>
<p>I am trying to compute <code>value = ((value - Min)/(Max - Min))*100</code> for every value in every column of DataFrame1. For example, the value of Row1 of Col1 will be
<code>((0.64-0.5)/(1-0.5))*100</code>.</p>
<p>I tried converting everything to lists and compute using multiple for loops. But I would like to know if there's any simpler method.</p>
|
<pre><code>import pandas as pd
import io
# Sample data
df1 = pd.read_table(io.StringIO("""
Index_Col Col1 Col2 Col3 Col4 Col5
Row1 0.64 0.89 0.76 0.22 1.34
Row2 0.54 0.56 0.82 0.46 0.23
"""),delim_whitespace=True)
df2 = pd.read_table(io.StringIO("""
Column_Name Group Min Max
col1 G1 0.5 1
col2 G1 0.1 2
col3 G2 0.3 0.9
col4 G1 0.3 1
col5 G2 0.7 2
"""), delim_whitespace=True)
# Melt the wide data frame so that each cell is a row
df1m = pd.melt(df1, id_vars=["Index_Col"], var_name="Col")
# Lowercase the column name to match with df2
df1m['Column_Name'] = df1m['Col'].str.lower()
# Join the melted dataframe with the thresholds in df2
df1mj = df1m.merge(df2, left_on="Column_Name", right_on="Column_Name")
# Calculate
df1mj['new_value'] = ((df1mj['value'] - df1mj['Min'])/(df1mj['Max'] - df1mj['Min']))*100
# Use pivot to reassemble the wide dataframe
result = df1mj.pivot(index = "Index_Col", columns="Col", values="new_value")
</code></pre>
<p>Result:</p>
<pre><code>Col Col1 Col2 Col3 Col4 Col5
Index_Col
Row1 28.0 41.578947 76.666667 -11.428571 49.230769
Row2 8.0 24.210526 86.666667 22.857143 -36.153846
</code></pre>
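<p>A more direct alternative (a sketch, assuming the same frames as above) is to index df2 by column name and let pandas broadcast the <code>Min</code>/<code>Max</code> Series across the columns:</p>
<pre><code>t = df2.set_index("Column_Name")
vals = df1.set_index("Index_Col")
vals.columns = vals.columns.str.lower()   # align 'Col1' with 'col1'
result2 = (vals - t["Min"]) / (t["Max"] - t["Min"]) * 100
</code></pre>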
|
python|python-3.x|pandas|dataframe
| 0
|
7,014
| 53,378,909
|
How to change the first value in the tuple of a list?
|
<p>This is my matrix:</p>
<pre><code>b = [[(1, 0.044), (2, 0.042)], [(4, 0.18), (6, 0.023)],
     [(4, 0.03), (5, 0.023)]]
</code></pre>
<p>And I want to let it to be a </p>
<pre><code>b = [[(6, 0.044), (7, 0.042)], [(9, 0.18), (11, 0.023)],
     [(9, 0.03), (10, 0.023)]]
</code></pre>
<p>That is, I want to add n to the first value of each tuple. I tried:</p>
<pre><code>for n in b:
    for ee, ww in n:
        ee == ee + 2903
</code></pre>
<p>It doesn't work.
How can I apply the change to the original matrix b?</p>
|
<p>Tuples are immutable. You can use a list comprehension instead:</p>
<pre><code>res = [[(i+5, j) for i, j in tup] for tup in b]
# [[(6, 0.044), (7, 0.042)], [(9, 0.18), (11, 0.023)], [(9, 0.03), (10, 0.023)]]
</code></pre>
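<p>If you need to mutate the original <code>b</code> in place instead of building a new list, rebuild each tuple and assign it back by index:</p>
<pre><code>for row in b:
    for idx, (ee, ww) in enumerate(row):
        row[idx] = (ee + 5, ww)   # tuples can't be edited; replace them
</code></pre>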
|
python|numpy|replace|tuples
| 3
|
7,015
| 19,863,964
|
fastest way to find the magnitude (length) squared of a vector field
|
<p>I have a large vector field, where the field is large (e.g. 512^3; but not necessarily square) and the vectors are either 2D or 3D (e.g. shapes are [512, 512, 512, 2] or [512, 512, 512, 3]).</p>
<p>What is the fastest way to compute a scalar field of the squared-magnitude of the vectors?</p>
<p>I could just loop over each direction, i.e.</p>
<pre><code>import numpy as np

shp = [256, 256, 256, 3]                 # Shape of vector field
vf = np.arange(3*(256**3)).reshape(shp)  # Create vector field
sf = np.zeros(shp[:3])                   # Create scalar field for result

for ii in range(shp[0]):
    for jj in range(shp[1]):
        for kk in range(shp[2]):
            sf[ii,jj,kk] = np.dot(vf[ii,jj,kk,:], vf[ii,jj,kk,:])
</code></pre>
<p>but that is fairly slow, is there anything faster?</p>
|
<p>The fastest is probably going to be <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="noreferrer"><code>np.einsum</code></a>:</p>
<pre><code>np.einsum('...j,...j->...', vf, vf)
</code></pre>
<p>The above code tells numpy to grab its two inputs and reduce the last dimension of each by multiplying corresponding values and adding them together. With your dataset there is a problem of overflow, since the magnitudes will not fit in a 32 bit integer, which is the default return of <code>np.arange</code>. You can solve that by specifying the return dtype, as either <code>np.int64</code> or <code>np.double</code>:</p>
<pre><code>>>> np.einsum('...j,...j->...', vf,vf)[-1, -1, -1]
-603979762
>>> np.einsum('...j,...j->...', vf,vf).dtype
dtype('int32')
>>> np.einsum('...j,...j->...', vf,vf, dtype=np.int64)[-1, -1, -1]
7599823767207950
>>> np.einsum('...j,...j->...', vf,vf, dtype=np.double)[-1, -1, -1]
7599823767207950.0
</code></pre>
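<p>For comparison, <code>(vf.astype(np.int64) ** 2).sum(axis=-1)</code> is also fully vectorized and avoids the overflow, but it materializes an intermediate squared array, which <code>np.einsum</code> does not.</p>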
|
python|math|optimization|vector|numpy
| 6
|
7,016
| 71,792,666
|
How to replace missing value with NA using for loop in Python
|
<p>I have a data frame with 2 features which I have created using python code:</p>
<pre><code>data_df = {"Age" : [10, 20, 30, 40, 50, np.NaN, np.NaN, np.NaN, np.NaN],
"Name" : ["A", "B", "C", "D", "E", "F", "G", "H", "I"]}
data_df = pd.DataFrame(data_df)
data_df.head(7)
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Age</th>
<th>Name</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>10.0</td>
<td>A</td>
</tr>
<tr>
<td>1</td>
<td>20.0</td>
<td>B</td>
</tr>
<tr>
<td>2</td>
<td>30.0</td>
<td>C</td>
</tr>
<tr>
<td>3</td>
<td>40.0</td>
<td>D</td>
</tr>
<tr>
<td>4</td>
<td>50.0</td>
<td>E</td>
</tr>
<tr>
<td>5</td>
<td>NaN</td>
<td>F</td>
</tr>
<tr>
<td>6</td>
<td>NaN</td>
<td>G</td>
</tr>
</tbody>
</table>
</div>
<p>Now I want to replace the Name value with NA wherever Age is NA, so I use a for loop as shown below:</p>
<pre><code>am_decision = []
for (x, y) in zip(data_df['Age'], data_df['Name']):
    if x == np.NaN:
        am_decision.append(np.NaN)
    else:
        am_decision.append(y)

print(len(am_decision))
print(am_decision)
</code></pre>
<p>Output: 9 <br>
['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I']</p>
<p>As you can see, the above for loop is not working. Is there something I missed?</p>
|
<p>For test missing values use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.isna.html" rel="nofollow noreferrer"><code>pandas.isna</code></a>:</p>
<pre><code>am_decision = []
for (x, y) in zip(data_df['Age'], data_df['Name']):
    if pd.isna(x):
        am_decision.append(np.NaN)
    else:
        am_decision.append(y)

print(len(am_decision))
print(am_decision)
# ['A', 'B', 'C', 'D', 'E', nan, nan, nan, nan]
</code></pre>
<p>A non-loop solution is faster and simpler - use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.mask.html" rel="nofollow noreferrer"><code>Series.mask</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.isna.html" rel="nofollow noreferrer"><code>Series.isna</code></a>:</p>
<pre><code>out = data_df['Name'].mask(data_df['Age'].isna())
print (out)
0 A
1 B
2 C
3 D
4 E
5 NaN
6 NaN
7 NaN
8 NaN
Name: Name, dtype: object
out = data_df['Name'].mask(data_df['Age'].isna()).tolist()
print (out)
['A', 'B', 'C', 'D', 'E', nan, nan, nan, nan]
</code></pre>
|
python|python-3.x|pandas|numpy
| 3
|
7,017
| 72,023,608
|
python reading a csv file with panda and multiple filters
|
<p>I want to read a CSV with Python and pandas, using multiple filters.</p>
<p>example csv file with the name passwd.csv:</p>
<pre><code>Funktion, Benutzer, Kennwort
user_p, user1, test1
user_f, user2, test2
user, bla, blup
</code></pre>
<p>python code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
d = pandas.read_csv('C:\\tmp\\python\\passwd.csv')
res = d.query('Funktion == "user_f" ')
print (res)
</code></pre>
<p>this works fine</p>
<p>When I changed the filter to two arguments, I got an error:
<code>pandas.core.computation.ops.UndefinedVariableError: name 'User' is not defined</code></p>
<pre class="lang-py prettyprint-override"><code>res = d.query('Funktion == "user_f" ') | d.query('Benutzer == "user2" ')
</code></pre>
<p>I can't find the error.</p>
<p>Kind regards
Jens</p>
|
<p>I don't know if this is a typo, but you're trying to combine two dataframes (returned by <code>query</code>) with the <code>|</code> operator. It belongs in your expression:</p>
<pre><code>res = d.query('Funktion == "user_f" | Benutzer == "user2" ')
</code></pre>
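<p>Equivalently, without <code>query</code>, you can combine boolean masks (a sketch, assuming the column names parsed without stray whitespace):</p>
<pre><code>res = d[(d["Funktion"] == "user_f") | (d["Benutzer"] == "user2")]
</code></pre>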
|
python|pandas|filter
| 0
|
7,018
| 71,909,917
|
"ModuleNotFoundError: No module named 'keras'" after computer restart
|
<p>Over the weekend Windows restarted my computer for updates. Now I can no longer run large amounts of code!</p>
<p>I'm running this segment of <code>Jupyter</code> code in <code>VS Code</code></p>
<pre><code>from tensorflow import keras
normalizer = keras.layers.experimental.preprocessing.Normalization(axis=-1)
normalizer.adapt(ImageData)
ImageDataNorm = normalizer(ImageData)
print("var: %.4f" % np.var(ImageDataNorm))
print("mean: %.4f" % np.mean(ImageDataNorm))
</code></pre>
<p>But I get: <code>ModuleNotFoundError: No module named 'keras'</code></p>
<p>I'm using the proper interpreter and <code>conda list</code> includes the entire <code>tensorflow</code> package.</p>
<p>This is not the first time I've had modules go missing after restarts. My last solution was a complete removal of Python and conda but that's not really a workable solution.</p>
<p>Any help is appreciated, thanks folks!</p>
|
<p>A <code>ModuleNotFoundError</code> is triggered when a package can't be found or is not installed. As Windows has recently updated, I assume your Windows installation of Python was automatically upgraded.</p>
<blockquote>
<p>I'm using the proper interpreter and <strong><code>conda list</code> includes the entire <code>tensorflow</code> package.</strong></p>
</blockquote>
<p>I don't think <code>conda list</code> will show you the packages in the Python installation that VS Code is actually using.</p>
<p>You can check your Python installed packages by using <code>pip list</code> in terminal. If <code>tensorflow</code> is not there, try the methods below.</p>
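<p>A quick way to confirm which interpreter the notebook is actually using is to print it from a cell:</p>
<pre><code>import sys
print(sys.executable)  # should point into the environment where tensorflow is installed
</code></pre>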
<hr />
<p>VS Code will take its modules from Python directly. You can install modules with:</p>
<pre><code>pip install tensorflow
</code></pre>
<p>And Anaconda uses:</p>
<pre><code>conda install tensorflow
</code></pre>
<blockquote>
<p><a href="https://www.tensorflow.org/install/pip#system-install" rel="nofollow noreferrer">https://www.tensorflow.org/install/pip#system-install</a></p>
</blockquote>
|
python|tensorflow|anaconda3
| 0
|
7,019
| 71,929,046
|
Convert an image from RGB to index in palette using Tensorflow
|
<p>I want to convert an RGB image to one with a single channel, whose value is an integer index from a palette (which has already been extracted).</p>
<p>An example:</p>
<pre class="lang-py prettyprint-override"><code>import tensorflow as tf
# image shape (height=2, width=2, channels=3)
image = tf.constant([
[
[1., 1., 1.], [1., 0., 0.]
],
[
[0., 0., 1.], [1., 0., 0.]
]
])
# palette is a tensor with the extracted colors
# palette shape (num_colors_in_palette, 3)
palette = tf.constant([
[1., 0., 0.],
[0., 0., 1.],
[1., 1., 1.]
])
indexed_image = rgb_to_indexed(image, palette)
# desired result: [[2, 0], [1, 0]]
# result shape (height, width)
</code></pre>
<p>I can imagine a few ways to implement <code>rgb_to_indexed(image, palette)</code> in pure python, but I'm having trouble finding out <strong>how to implement it the Tensorflow way</strong> (using @tf.function for AutoGraph and avoiding for loops), using only (or mostly) vectorized operations.</p>
<h2>Edit 1: showing sample python/numpy code</h2>
<p>If the code need not use Tensorflow, a non-vectorized implementation could be:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def rgb_to_indexed(image, palette):
result = np.ndarray(shape=[image.shape[0], image.shape[1]])
for i, row in enumerate(image):
for j, color in enumerate(row):
index, = np.where(np.all(palette == color, axis=1))
result[i, j] = index
return result
indexed_image = rgb_to_indexed(image.numpy(), palette.numpy())
# indexed_image is [[2, 0], [1, 0]]
</code></pre>
|
<p>I used the technique described in this other question (<a href="https://stackoverflow.com/questions/64930665/find-indices-of-rows-of-numpy-2d-array-in-another-2d-array">Find indices of rows of numpy 2d array in another 2D array</a>) and adapted it from Numpy to Tensorflow. It is fully vectorized and executes very fast.</p>
<p>First in Numpy (vectorized):</p>
<pre class="lang-py prettyprint-override"><code>def rgb_to_indexed(image, palette):
original_shape = image.shape
# flattens the image to the shape (height*width, channels)
flattened_image = image.reshape(original_shape[0]*original_shape[1], -1)
num_pixels, num_channels = flattened_image.shape[0], flattened_image.shape[1]
# creates a mask of pixel and color matches and reduces it to two lists of indices:
# a) color in the palette, and b) pixel in the image
indices = flattened_image == palette[:, None]
row_sums = indices.sum(axis=2)
color_indices, pixel_indices = np.where(row_sums == num_channels)
# sets -42 as the default value in case some color is not in the palette,
# then replaces the values for which some index has been found in the palette
INDEX_OF_COLOR_NOT_FOUND = -42
indexed_image = np.ones(num_pixels, dtype="int64") * -1
indexed_image[pixel_indices] = color_indices
# reshapes to "deflatten" the indexed_image and give it a single channel (the index)
indexed_image = indexed_image.reshape([*original_shape[0:2]])
return indexed_image
</code></pre>
<p>Then my translation to Tensorflow:</p>
<pre class="lang-py prettyprint-override"><code>@tf.function
def rgba_to_indexed(image, palette):
original_shape = tf.shape(image)
# flattens the image to have (height*width, channels)
# so it has the same rank as the palette
flattened_image = tf.reshape(image, [original_shape[0]*original_shape[1], -1])
num_pixels, num_channels = tf.shape(flattened_image)[0], tf.shape(flattened_image)[1]
# does the mask magic but using tensorflow ops
indices = flattened_image == palette[:, None]
row_sums = tf.reduce_sum(tf.cast(indices, "int32"), axis=2)
results = tf.cast(tf.where(row_sums == num_channels), "int32")
color_indices, pixel_indices = results[:, 0], results[:, 1]
pixel_indices = tf.expand_dims(pixel_indices, -1)
# fills with default value then updates the palette color indices of the pixels
# with colors present in the palette
INDEX_OF_COLOR_NOT_FOUND = -42
indexed_image = tf.fill([num_pixels], INDEX_OF_COLOR_NOT_FOUND)
indexed_image = tf.tensor_scatter_nd_add(
indexed_image,
pixel_indices,
color_indices - INDEX_OF_COLOR_NOT_FOUND,
tf.shape(indexed_image))
# reshapes the image back to (height, width)
indexed_image = tf.reshape(indexed_image, [original_shape[0], original_shape[1]])
return indexed_image
</code></pre>
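<p>A quick check against the example from the question (a sketch):</p>
<pre><code>indexed = rgba_to_indexed(image, palette)
print(indexed.numpy())  # [[2 0]
                        #  [1 0]]
</code></pre>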
|
python|tensorflow
| 0
|
7,020
| 17,090,577
|
eulerian-magnification tuple index out of range python
|
<p>I'm making an attempt to build and work with this video project in Python.</p>
<p><a href="https://github.com/brycedrennan/eulerian-magnification" rel="nofollow">https://github.com/brycedrennan/eulerian-magnification</a></p>
<p>The command that I'm trying to run is:</p>
<pre><code>eulerian_magnification('media/face.mp4', image_processing='gaussian', freq_min=50.0 / 60.0, freq_max=1.0, amplification=50, pyramid_levels=4)
</code></pre>
<p>I get back the error:</p>
<pre><code>Loading media/face.mp4
Applying bandpass between 0.833333333333 and 1.0 Hz
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "eulerian_magnify.py", line 19, in eulerian_magnification
    vid_data = temporal_bandpass_filter(vid_data, fps, freq_min=freq_min, freq_max=freq_max)
  File "eulerian_magnify.py", line 60, in temporal_bandpass_filter
    fft = scipy.fftpack.fft(data, axis=axis)
  File "/usr/lib/python2.7/dist-packages/scipy/fftpack/basic.py", line 225, in fft
    n = tmp.shape[axis]
IndexError: tuple index out of range
</code></pre>
<p>I installed openCV and SciPy in order to get the program to run but after searching around I haven't been able to solve this issue.</p>
<p>Does anyone know what I can play around with to cure this?</p>
|
<p>This may indicate that the OpenCV bin directory is not in the path.</p>
|
python|opencv|numpy|scipy
| 1
|
7,021
| 19,221,694
|
How many columns in pandas, python?
|
<p>Does anyone know the maximum number of columns in pandas (Python)?
I have just created a pandas dataframe with more than 20,000 columns, but I got a memory error.</p>
<p>Thanks a lot</p>
|
<p>You get an out of memory error because you run out of memory, not because there is a limit on the number of columns.</p>
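<p>There is no hard column limit in pandas itself. You can inspect how much memory a frame uses with <code>df.memory_usage(deep=True).sum()</code> and reduce it by choosing smaller dtypes.</p>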
|
python|pandas
| 6
|
7,022
| 55,434,047
|
Passing values to function using numpy
|
<p>Suppose I have a function prob(x, mu, sig) that takes three inputs,
with sizes:</p>
<pre><code>x = 1 x 3
mu = 1 x 3
sig = 3 x 3
</code></pre>
<p>Now, I have a dataset X, a mean matrix M, and a std. deviation matrix sigma.
The sizes are:</p>
<pre><code>X : m x 3.
mean : k x 3.
sigma : k x 3 x 3
</code></pre>
<p>For each of the m samples, I want to pass all k (mean, sigma) pairs to the function prob to calculate my responsibility values.</p>
<p>I can pass the values one by one using for loops.
What would be a better way of doing this in numpy?</p>
<p>The related code for reference: </p>
<pre><code>responsibility = np.zeros((X.shape[0], k))
s = np.zeros(k)
for i in np.arange(X.shape[0]):
    for j in np.arange(k):
        s[j] = prob(X[i], MU[j], SIGMA[j])
    s = s/s.sum()
    responsibility[i] = s

responsibility = np.transpose(responsibility)
</code></pre>
|
<p>If using a single for loop is acceptable, then you can probably use the following (fixing the unpacking of the zipped pairs and the indexing into the result matrix):</p>
<pre><code>import itertools
import numpy as np

sigma.shape = k, 9                      # flatten each 3x3 sigma into a row of 9
zipped_array = list(zip(mean, sigma))   # pair each mean row with its sigma row
all_possible_combo = list(itertools.product(X, zipped_array))
list_len = len(all_possible_combo)      # = m * k

responsibility = np.zeros((X.shape[0], k))
for i in range(list_len):
    X_arow = all_possible_combo[i][0]
    mean_single, sigma_single = all_possible_combo[i][1]
    sigma_single = sigma_single.reshape((3, 3))
    # product() iterates X in the outer position, so row = i // k, col = i % k
    responsibility[i // k, i % k] = prob(X_arow, mean_single, sigma_single)

# normalize each row so the responsibilities of a sample sum to 1
responsibility = responsibility / responsibility.sum(axis=1, keepdims=True)
responsibility = np.transpose(responsibility)
</code></pre>
|
python|function|numpy
| 0
|
7,023
| 55,191,194
|
pandas version impact on tables
|
<p>I have an HTML file with tables (Wikipedia links).
I am trying to access the tables using pandas.</p>
<p>My code is :</p>
<pre><code>dfs = pd.read_html(url1)
for i in range(0, 5):
    print(dfs[i])
</code></pre>
<p>This works in pandas version 0.23.0</p>
<p>but the same does not work on 0.23.4 version.
I get the error</p>
<pre><code>dfs = pd.read_html(url1)
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\html.py", line 987, in read_html
    displayed_only=displayed_only)
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\html.py", line 815, in _parse
    raise_with_traceback(retained)
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\html.py", line 797, in _parse
    tables = p.parse_tables()
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\html.py", line 213, in parse_tables
    tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
  File "C:\ProgramData\Anaconda2\lib\site-packages\pandas\io\html.py", line 471, in _parse_tables
    raise ValueError('No tables found')
ValueError: No tables found
</code></pre>
<p>How can I resolve this?</p>
|
<p>Use Beautiful Soup with pandas:</p>
<pre><code>import pandas as pd
import requests
from bs4 import BeautifulSoup
res = requests.get("https://en.wikipedia.org/wiki/List_of_bicycle-sharing_systems")
soup = BeautifulSoup(res.content,'html.parser')
table = soup.find_all('table')[0]
df = pd.read_html(str(table))
</code></pre>
<p>This uses the default html.parser. You can use one of the alternative parsers instead (first install them with pip):</p>
<ul>
<li><p>lxml</p></li>
<li><p>lxml-xml / xml</p></li>
<li><p>html5lib</p></li>
</ul>
|
python|pandas
| 0
|
7,024
| 56,841,907
|
How to set the columns in pandas
|
<p>Here is my dataframe:</p>
<pre><code> Dec-18 Jan-19 Feb-19 Mar-19 Apr-19 May-19
Saturday 2540.0 2441.0 3832.0 4093.0 1455.0 2552.0
Sunday 1313.0 1891.0 2968.0 2260.0 1454.0 1798.0
Monday 1360.0 1558.0 2967.0 2156.0 1564.0 1752.0
Tuesday 1089.0 2105.0 2476.0 1577.0 1744.0 1457.0
Wednesday 1329.0 1658.0 2073.0 2403.0 1231.0 874.0
Thursday 798.0 1195.0 2183.0 1287.0 1460.0 1269.0
</code></pre>
<p>I have tried some pandas ops but I am not able to do that.</p>
<p>This is what I want to do:</p>
<pre><code> items
Saturday 2540.0
Sunday 1313.0
Monday 1360.0
Tuesday 1089.0
Wednesday 1329.0
Thursday 798.0
Saturday 2441.0
Sunday 1891.0
Monday 1558.0
Tuesday 2105.0
Wednesday 1658.0
Thursday 1195.0 ............ and so on
</code></pre>
<p>I want to stack the monthly columns below one another into a single column. How can I do that?</p>
|
<pre><code>df.reset_index().melt(id_vars='index').drop('variable',1)
</code></pre>
<p><code>reset_index</code> turns the day names into a column, <code>melt</code> stacks the month columns into one, and <code>drop</code> removes the month-name column. Output:</p>
<pre><code> index value
0 Saturday 2540.0
1 Sunday 1313.0
2 Monday 1360.0
3 Tuesday 1089.0
4 Wednesday 1329.0
5 Thursday 798.0
6 Saturday 2441.0
7 Sunday 1891.0
8 Monday 1558.0
9 Tuesday 2105.0
10 Wednesday 1658.0
11 Thursday 1195.0
12 Saturday 3832.0
13 Sunday 2968.0
14 Monday 2967.0
15 Tuesday 2476.0
16 Wednesday 2073.0
17 Thursday 2183.0
18 Saturday 4093.0
19 Sunday 2260.0
20 Monday 2156.0
21 Tuesday 1577.0
22 Wednesday 2403.0
23 Thursday 1287.0
24 Saturday 1455.0
25 Sunday 1454.0
26 Monday 1564.0
27 Tuesday 1744.0
28 Wednesday 1231.0
29 Thursday 1460.0
30 Saturday 2552.0
31 Sunday 1798.0
32 Monday 1752.0
33 Tuesday 1457.0
34 Wednesday 874.0
35 Thursday 1269.0
</code></pre>
<p>Note: I just noticed a comment suggesting the same thing; I will delete my post if requested :)</p>
|
python|pandas|dataframe
| 9
|
7,025
| 56,670,223
|
Remove square brackets from cells using pandas
|
<p>I have a Pandas Dataframe with data as below</p>
<pre><code>id, name, date
[101],[test_name],[2019-06-13T13:45:00.000Z]
[103],[test_name3],[2019-06-14T13:45:00.000Z, 2019-06-14T17:45:00.000Z]
[104],[],[]
</code></pre>
<p>I am trying to convert it to a format as below with no square brackets</p>
<p>Expected output:</p>
<pre><code>id, name, date
101,test_name,2019-06-13T13:45:00.000Z
103,test_name3,2019-06-14T13:45:00.000Z, 2019-06-14T17:45:00.000Z
104,,
</code></pre>
<p>I tried using regex as below but it gave me an error <code>TypeError: expected string or bytes-like object</code></p>
<pre><code>re.search(r"\[([A-Za-z0-9_]+)\]", df['id'])
</code></pre>
|
<p>Loop through the dataframe to access each string, then use:</p>
<pre><code>newstring = oldstring[1:len(oldstring)-1]
</code></pre>
<p>to replace the cell in the dataframe.</p>
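<p>If the cells hold actual Python lists (which the <code>TypeError</code> suggests), a compact sketch that joins the elements and also handles empty lists:</p>
<pre><code>for col in ['id', 'name', 'date']:
    df[col] = df[col].apply(lambda x: ', '.join(map(str, x)))
</code></pre>
<p>If they are instead strings that merely look like lists, <code>df[col].astype(str).str.strip('[]')</code> is an alternative.</p>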
|
regex|pandas
| 0
|
7,026
| 56,496,731
|
Coloring entries in an Matrix/2D-numpy array?
|
<p>I'm learning python3 and I'd like to print a matrix/2d-array which is color-coded (CLI). So let's say I'd like to assign each of these integers a certain background color, creating a mosaic-style look.</p>
<p>I've figured out how to fill a matrix of a given size with random integers, but I can't wrap my head around how to continue from here to achieve background coloring for each individual entry in the matrix, depending on its value. This is how far I've come:</p>
<pre class="lang-py prettyprint-override"><code>from random import randint
import numpy as np
def generate():
n = 10
m = 0
map = np.random.randint(4 + 1, size=(n, n))
print(map)
    for element in np.nditer(map):
        pass  # iterating over each column is probably not the way to go...

generate()
</code></pre>
<p>Is there a way to do this? I was thinking of iterating through every entry of the matrix, checking with several if conditions whether the entry is 0, 1, 2, 3 or 4 and, based on the condition, appending that value with a certain background color to a new matrix, but I assume there is a far more elegant way to do this...</p>
|
<p>The following will <code>print</code> a colored output on console... </p>
<pre><code>>>> map = np.random.randint(4 + 1, size=(10, 10))
>>> def get_color_coded_str(i):
... return "\033[3{}m{}\033[0m".format(i+1, i)
...
>>> map_modified = np.vectorize(get_color_coded_str)(map)
>>> print("\n".join([" ".join(["{}"]*10)]*10).format(*[x for y in map_modified.tolist() for x in y]))
>>>
</code></pre>
<p><a href="https://i.stack.imgur.com/QMHWC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QMHWC.png" alt="enter image description here"></a></p>
<p>To add a background color, use the following function:</p>
<pre><code>>>> def get_color_coded_str(i):
... return "\033[4{}m{}\033[0m".format(i+1, i)
</code></pre>
<p><a href="https://i.stack.imgur.com/w84Mi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w84Mi.png" alt="enter image description here"></a></p>
<pre><code>from random import randint
import numpy as np
def get_color_coded_str(i):
return "\033[3{}m{}\033[0m".format(i+1, i)
def get_color_coded_background(i):
return "\033[4{}m {} \033[0m".format(i+1, i)
def print_a_ndarray(map, row_sep=" "):
n, m = map.shape
fmt_str = "\n".join([row_sep.join(["{}"]*m)]*n)
print(fmt_str.format(*map.ravel()))
n = 10
m = 20
map = np.random.randint(4 + 1, size=(n, m))
map_modified = np.vectorize(get_color_coded_str)(map)
print_a_ndarray(map_modified)
back_map_modified = np.vectorize(get_color_coded_background)(map)
print("-------------------------------------------------------")
print_a_ndarray(back_map_modified, row_sep="")
</code></pre>
<p>PS: print function modified as suggested by @hpaulj</p>
|
arrays|python-3.x|numpy|matrix|colors
| 4
|
7,027
| 66,917,947
|
creating a range of numbers in pandas based on single column
|
<p>I have a pandas dataframe:</p>
<pre><code>df2 = pd.DataFrame({'ID':['A','B','C','D','E'], 'loc':['Lon','Tok','Ber','Ams','Rom'], 'start':[20,10,30,40,43]})
ID loc start
0 A Lon 20
1 B Tok 10
2 C Ber 30
3 D Ams 40
4 E Rom 43
</code></pre>
<p>I'm looking to add a column called range which takes the value in 'start' and produces a range of values from that initial value down to 10 less than it (inclusive), all in the same row.</p>
<p>The desired output:</p>
<pre><code> ID loc start range
0 A Lon 20 20,19,18,17,16,15,14,13,12,11,10
1 B Tok 10 10,9,8,7,6,5,4,3,2,1,0
2 C Ber 30 30,29,28,27,26,25,24,23,22,21,20
3 D Ams 40 40,39,38,37,36,35,34,33,32,31,30
4 E Rom 43 43,42,41,40,39,38,37,36,35,34,33
</code></pre>
<p>I have tried:</p>
<pre><code>df2['range'] = [i for i in range(df2.start, df2.start -10)]
</code></pre>
<p>and</p>
<pre><code>def create_range2(row):
return df2['start'].between(df2.start, df2.start - 10)
df2.loc[:, 'range'] = df2.apply(create_range2, axis = 1)
</code></pre>
<p>however I can't seem to get the desired output. I intend to apply this solution to multiple dataframes, one of which has > 2,000,000 rows.</p>
<p>thanks</p>
|
<p>You can prepare a range-creating function and <code>.apply</code> it to the <code>start</code> column as follows:</p>
<pre><code>import pandas as pd
df2 = pd.DataFrame({'ID':['A','B','C','D','E'], 'loc':['Lon','Tok','Ber','Ams','Rom'], 'start':[20,10,30,40,43]})
def make_10(x):
return list(range(x, x-10-1, -1))
df2["range"] = df2["start"].apply(make_10)
print(df2)
</code></pre>
<p>output</p>
<pre><code> ID loc start range
0 A Lon 20 [20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10]
1 B Tok 10 [10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
2 C Ber 30 [30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20]
3 D Ams 40 [40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30]
4 E Rom 43 [43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33]
</code></pre>
<p>Explanation: the <code>.apply</code> method of a <code>pandas.Series</code> (a column of a <code>pandas.DataFrame</code>) accepts a function which is applied element-wise. Note the <code>x-10-1</code> endpoint, because <code>range</code> is inclusive-exclusive, and the step of <code>-1</code>, because you want descending values.</p>
|
python|pandas
| 1
|
7,028
| 66,993,314
|
Create a single categorical column based on conditions on many numerical columns (pandas)
|
<p>I have a pandas dataframe like this</p>
<p>df:</p>
<pre><code>sEXT | sNEU | sAGR | sCON | sOPN
2.4 | 3 | 2 | 2 | 5
3 | 1 | 4 | 2.7 | 1.5
</code></pre>
<p>I want to create a column "type" according the following rules. If sEXT > 2.5 add string "E" to status, else "I". If sNEU > 2.5 add string "N" to status, else "S". If sAGR > 2.5 add string "A" to status, else "H". If sCON > 2.5 add string "C" to status, else "S". If sOPN > 2.5 add string "O" to status, else "C".</p>
<p>My expected output is:</p>
<pre><code>sEXT | sNEU | sAGR | sCON | sOPN | type
2.4 | 3 | 2 | 2 | 5 | "INHSO"
3 | 1 | 4 | 2.7 | 1.5 | "ESACC"
</code></pre>
<p>I was trying</p>
<pre><code>df['type']=None
df['type'].loc[df['sEXT']>2.5]='E'
df['type'].loc[df['sEXT']<2.5]='I'
</code></pre>
<p>But I don't know how to go on. Can you help me?</p>
|
<p>You can write a function that creates the string, and then apply the dataframe to that function:</p>
<pre><code>import pandas as pd
data = [ { "sEXT": 2.4, "sNEU": 3, "sAGR": 2, "sCON": 2, "sOPN": 5 }, { "sEXT": 3, "sNEU": 1, "sAGR": 4, "sCON": 2.7, "sOPN": 1.5 } ]
df = pd.DataFrame(data)
def generate_type(row):
text = ''
if row['sEXT'] > 2.5:
text += 'E'
else:
text += 'I'
if row['sNEU'] > 2.5:
text += 'N'
else:
text += 'S'
if row['sAGR'] > 2.5:
text += 'A'
else:
text += 'H'
if row['sCON'] > 2.5:
text += 'C'
else:
text += 'S'
if row['sOPN'] > 2.5:
text += 'O'
else:
text += 'C'
return text
df['type']= df.apply(generate_type, axis=1)
</code></pre>
<p>Result:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">sEXT</th>
<th style="text-align: right;">sNEU</th>
<th style="text-align: right;">sAGR</th>
<th style="text-align: right;">sCON</th>
<th style="text-align: right;">sOPN</th>
<th style="text-align: left;">type</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">2.4</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">2</td>
<td style="text-align: right;">5</td>
<td style="text-align: left;">INHSO</td>
</tr>
<tr>
<td style="text-align: right;">1</td>
<td style="text-align: right;">3</td>
<td style="text-align: right;">1</td>
<td style="text-align: right;">4</td>
<td style="text-align: right;">2.7</td>
<td style="text-align: right;">1.5</td>
<td style="text-align: left;">ESACC</td>
</tr>
</tbody>
</table>
</div>
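<p>A more compact alternative, a sketch using <code>numpy.where</code> to build the string column-by-column instead of a row-wise function:</p>
<pre><code>import numpy as np

rules = [('sEXT', 'E', 'I'), ('sNEU', 'N', 'S'), ('sAGR', 'A', 'H'),
         ('sCON', 'C', 'S'), ('sOPN', 'O', 'C')]
df['type'] = ''
for col, above, below in rules:
    df['type'] += np.where(df[col] > 2.5, above, below)  # vectorized per column
</code></pre>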
|
python|python-3.x|pandas
| 1
|
7,029
| 67,054,207
|
Difference between different lenght timestamp-indexed DataFrames
|
<p>I have two dataframes, both are indexed with timestamp values like '2021-03-23 13:04:00.134000+00:00'.</p>
<p>I would like to compute the difference between them on some columns, but the problem is that they are not time-aligned and have different numbers of rows.</p>
<p>Is there a good way to take the difference for all the elements whose time delta is less than a specified amount, and put NaN in the other cases?</p>
<p>Edit:</p>
<p>Dataframe 1:</p>
<pre><code>|index| val1 | val 2 |
|--------------------------------| ---- | ---- |
|2021-03-23 13:04:00.134000+00:00| 200 | 50 |
|2021-03-23 13:34:00.134000+00:00| 100 | 10 |
|2021-03-23 14:04:00.134000+00:00| 100 | 10 |
</code></pre>
<p>Dataframe 2:</p>
<pre><code>|index| val1 | val 2 |
|--------------------------------| ---- | ---- |
|2021-03-23 13:24:00.134000+00:00| 200 | 50 |
|2021-03-23 14:34:00.134000+00:00| 100 | 10 |
</code></pre>
<p>Expected output (difference between columns of Dataframe 1 and Dataframe 2) supposing that the time delta is 20 min:</p>
<pre><code>|index| val1 | val 2 |
|--------------------------------| ---- | ---- |
|2021-03-23 13:04:00.134000+00:00| 0 | 0 |
|2021-03-23 13:44:00.134000+00:00| NaN | NaN |
|2021-03-23 15:04:00.134000+00:00| NaN | NaN |
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="nofollow noreferrer"><code>merge_asof</code></a> first:</p>
<pre><code>df = pd.merge_asof(df1,
df2,
left_index=True,
right_index=True,
tolerance=pd.Timedelta('20Min'),
direction='forward',
suffixes=('','_'))
print (df)
val1 val2 val1_ val2_
index
2021-03-23 13:04:00.134000+00:00 200 50 200.0 50.0
2021-03-23 13:34:00.134000+00:00 100 10 NaN NaN
2021-03-23 14:04:00.134000+00:00 100 10 NaN NaN
</code></pre>
<p>Then subtract the matching columns (their names differ only by the trailing <code>_</code>):</p>
<pre><code>new = df.columns[df.columns.str.endswith('_')]
print (new)
Index(['val1_', 'val2_'], dtype='object')
orig = new.str.replace('_','')
print (orig)
Index(['val1', 'val2'], dtype='object')
df[orig] = df[orig].sub(df[new].to_numpy())
df = df.drop(new, axis=1)
print (df)
val1 val2
index
2021-03-23 13:04:00.134000+00:00 0.0 0.0
2021-03-23 13:34:00.134000+00:00 NaN NaN
2021-03-23 14:04:00.134000+00:00 NaN NaN
</code></pre>
|
python|pandas|dataframe|timestamp
| 1
|
7,030
| 66,926,140
|
Why tf.keras loss becomes NaN when number of train images increases from 100 to 9000?
|
<p>I am following a CNN example in <a href="https://www.tensorflow.org/tutorials/images/cnn" rel="nofollow noreferrer">here</a>.
Here is my code to prepare the CNN model:</p>
<pre><code>model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(7)) # 7 outputs for 7 classes which are 1, 2, 3, ..., 7
</code></pre>
<p>And this is how I train the model:</p>
<pre><code>model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
history = model.fit(trainGenerator, epochs=10,
validation_data=validationGenerator)
</code></pre>
<p>When <code>trainGenerator</code> has 80 images & <code>validationGenerator</code> has 20 images, everything is ok for the <code>val_loss</code> & <code>loss</code>, like below from one of the epochs:</p>
<pre><code>Epoch 1/10
3/3 [==============================] - 1s 736ms/step - loss: 1.8475 - accuracy: 0.2500 - val_loss: 2.4287 - val_accuracy: 0.5500
</code></pre>
<p>When the <code>trainGenerator</code> got 9817 images & <code>validationGenerator</code> got 2454 images, the <code>val_loss</code> & <code>loss</code> become NaN</p>
<pre><code>Epoch 1/10
307/307 [==============================] - 20s 63ms/step - loss: nan - accuracy: 0.0090 - val_loss: nan - val_accuracy: 0.0000e+00
</code></pre>
<p>The batch size in <code>trainGenerator</code> & <code>validationGenerator</code> is 32 (the default value) in both scenarios above.</p>
<p>I have rescaled the images when importing them into <code>trainGenerator</code> & <code>validationGenerator</code> using</p>
<pre><code>trainDataGen=ImageDataGenerator(
rescale=1./255,
validation_split=0.2
)
</code></pre>
<p>After that I create <code>trainGenerator</code> using <code>flow_from_dataframe</code> like below:</p>
<pre><code> trainGenerator = trainDataGen.flow_from_dataframe(
dataframe=train_df,
directory=trainingFilepath,
x_col="filename",
y_col="label",
target_size=(100,100),
class_mode="raw",
subset="training"
)
</code></pre>
<p><code>validationGenerator</code> is created using the code above by replacing <code>subset</code> with <code>"validation"</code>.</p>
<p>A similar <a href="https://stackoverflow.com/questions/55328966/tf-keras-loss-becomes-nan">question</a> has been asked, but it does not apply to my case, as the problem persists when the number of training images increases & I use <code>sparse_categorical_crossentropy</code>.</p>
<ol>
<li>Why do I get <code>NaN</code> in <code>val_loss</code> & <code>val_accuracy</code> is 0?</li>
<li>How do I fix it so it can work with more images in the train set?</li>
</ol>
|
<p>What I would do is use categorical cross entropy. In your generators change <code>class_mode</code> to <code>'categorical'</code>. In <code>model.compile</code> set <code>loss='categorical_crossentropy'</code>. Not sure this will fix it, but it can't hurt. It could be that when you use more images there are some NaN labels. Check your dataframe for NaNs.</p>
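<p>A quick sketch for that last check (column names taken from the question's <code>flow_from_dataframe</code> call):</p>
<pre><code># count and drop rows with missing labels before building the generators
print(train_df[['filename', 'label']].isna().sum())
train_df = train_df.dropna(subset=['filename', 'label'])
</code></pre>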
|
python|image|tensorflow|keras|computer-vision
| 0
|
7,031
| 66,923,714
|
Make a pandas dataframe using one available series
|
<p>I have a pandas series for players as follows:</p>
<pre><code>0 1
1 1
2 3
3 4
</code></pre>
<p>My expected output is:</p>
<pre><code> players teams night morning
0 1 [] [] []
1 1 [] [] []
2 3 [] [] []
3 4 [] [] []
</code></pre>
<p>This is what I have tried:</p>
<pre><code>players_ids = players_ids()
df = pd.DataFrame({'players': players_ids, 'teams': None, 'night': None, 'morning': None})
</code></pre>
<p>But what I get is this:</p>
<pre><code> players teams night morning
0 1 None None None
1 1 None None None
2 3 None None None
3 4 None None None
</code></pre>
<p>but this just shows <code>None</code> rather than <code>[]</code>. Is there any easy solution in pandas for this?</p>
|
<p>The purpose eludes me, but this can be done by filling the <code>None</code> values with empty lists (test for <code>None</code> explicitly so legitimate falsy values such as 0 are left alone):</p>
<pre><code>df = pd.DataFrame({'players': [1, 1, 3, 4], 'teams': None, 'night': None, 'morning': None})
df = df.applymap(lambda x: [] if x is None else x)
</code></pre>
<p>Result:</p>
<pre><code> players teams night morning
0 1 [] [] []
1 1 [] [] []
2 3 [] [] []
3 4 [] [] []
</code></pre>
|
python|pandas
| 0
|
7,032
| 68,066,429
|
Keep non-numerical columns with resample
|
<p>I have a data frame structure that looks like this:</p>
<pre><code>df =
ds col1 col2 col3 col4
2021-04-11 17:41:55 foo1 bar1 7263 1234
2021-04-11 17:46:55 foo1 bar1 8464 5726
2021-04-11 17:51:55 foo1 bar1 3321 2345
2021-04-11 17:41:55 foo2 bar2 7263 1234
2021-04-11 17:46:55 foo2 bar2 8464 5726
2021-04-11 17:51:55 foo2 bar2 3321 2345
</code></pre>
<p>What I would like to do is to resample this into 60-minute bins and get the mean - while keeping <code>col1</code> and <code>col2</code>. However, if I just do this:</p>
<pre><code>df_new = df.resample('60min', on='ds').mean()
</code></pre>
<p>The output will be without the first two columns. The wanted output should look something like:</p>
<pre><code>df =
ds col1 col2 col3 col4
2021-04-11 17:00:00 foo1 bar1 6349 3101
2021-04-11 17:00:00 foo2 bar2 7263 3101
</code></pre>
|
<p>You can use these columns for grouping (if possible):</p>
<pre><code>df_new = df.groupby([pd.Grouper(freq='60min', key='ds'), 'col1','col2']).mean()
print (df_new)
col3 col4
ds col1 col2
2021-04-11 17:00:00 foo1 bar1 6349.333333 3101.666667
foo2 bar2 6349.333333 3101.666667
</code></pre>
<p>Another approach is to define an aggregate function for non-numeric values, e.g. taking the first value, in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.resample.Resampler.aggregate.html" rel="nofollow noreferrer"><code>Resampler.agg</code></a>:</p>
<pre><code>f = lambda x: x.mean() if np.issubdtype(x.dtype, np.number) else x.iat[0]
df_new = df.set_index('ds').resample('60min').agg(f)
print (df_new)
col1 col2 col3 col4
ds
2021-04-11 17:00:00 foo1 bar1 6349.333333 3101.666667
</code></pre>
|
python|pandas|dataframe
| 1
|
7,033
| 68,038,312
|
Extract data from a column in pandas DataFrame
|
<p>how can we extract the car model from the dataframe below:</p>
<p>Output:</p>
<p>2015 Maruti Swift VDI ABS........................... VDI ABS</p>
<p>2012 Maruti Swift Dzire VDI BS IV..............VDI BS IV</p>
<p>2013 Maruti Swift VDI......................................VDI</p>
<p>2012 Maruti Swift VDI......................................VDI</p>
<p>2010 Honda City S MT PETROL................... S MT PETROL</p>
<p>2013 Maruti Swift Dzire VDI BS IV..............VDI BS IV</p>
<p>2014 Maruti Swift Dzire VDI BS IV.............. VDI BS IV</p>
<p>2014 Maruti Swift VDI.......................... ...........VDI</p>
<p>2014 Maruti Swift VXI...................................... VXI</p>
<p>2016 Hyundai Grand i10 SPORTZ 1.2 KAPPA VTVT............................. SPORTZ 1.2 KAPPA VTVT</p>
<p>2013 Maruti Swift Dzire ZDI..........................ZDI</p>
<p>2012 Hyundai i20 SPORTZ 1.4 CRDI......................1.4 CRDI</p>
<p>2010 Hyundai i10 MAGNA 1.2 KAPPA2......................MAGNA 1.2KAPPA2</p>
<p>2019 Maruti Vitara Brezza VDI ................VDI</p>
<p>2017 Maruti Vitara Brezza VDI OPT......................VDI OPT</p>
<p>2013 Maruti Swift Dzire VXI 1.2 BS IV..................VXI 1.2 B</p>
<p>2017 Maruti Alto 800 LXI...............................800 LXI</p>
<p>2009 Hyundai i10 MAGNA 1.2.............................MAGNA 1.2</p>
|
<p>If you have this dataframe:</p>
<pre class="lang-none prettyprint-override"><code> column
0 2015 Maruti Swift VDI ABS
1 2012 Maruti Swift Dzire VDI BS IV
2 2013 Maruti Swift VDI
3 2012 Maruti Swift VDI
4 2010 Honda City S MT PETROL
5 2013 Maruti Swift Dzire VDI BS IV
6 2014 Maruti Swift Dzire VDI BS IV
7 2014 Maruti Swift VDI
8 2014 Maruti Swift VXI
9 2016 Hyundai Grand i10 SPORTZ 1.2 KAPPA VTVT
10 2013 Maruti Swift Dzire ZDI
11 2012 Hyundai i20 SPORTZ 1.4 CRDI
12 2010 Hyundai i10 MAGNA 1.2 KAPPA2
13 2019 Maruti Vitara Brezza VDI
14 2017 Maruti Vitara Brezza VDI OPT
15 2013 Maruti Swift Dzire VXI 1.2 BS IV
16 2017 Maruti Alto 800 LXI
17 2009 Hyundai i10 MAGNA 1.2
</code></pre>
<p>Then:</p>
<pre class="lang-py prettyprint-override"><code>df["result"] = df["column"].str.extract(r".*?(\s\d{3}\s.*|[A-Z]{2,}.*)")
print(df)
</code></pre>
<p>Prints:</p>
<pre class="lang-none prettyprint-override"><code> column result
0 2015 Maruti Swift VDI ABS VDI ABS
1 2012 Maruti Swift Dzire VDI BS IV VDI BS IV
2 2013 Maruti Swift VDI VDI
3 2012 Maruti Swift VDI VDI
4 2010 Honda City S MT PETROL MT PETROL
5 2013 Maruti Swift Dzire VDI BS IV VDI BS IV
6 2014 Maruti Swift Dzire VDI BS IV VDI BS IV
7 2014 Maruti Swift VDI VDI
8 2014 Maruti Swift VXI VXI
9 2016 Hyundai Grand i10 SPORTZ 1.2 KAPPA VTVT SPORTZ 1.2 KAPPA VTVT
10 2013 Maruti Swift Dzire ZDI ZDI
11 2012 Hyundai i20 SPORTZ 1.4 CRDI SPORTZ 1.4 CRDI
12 2010 Hyundai i10 MAGNA 1.2 KAPPA2 MAGNA 1.2 KAPPA2
13 2019 Maruti Vitara Brezza VDI VDI
14 2017 Maruti Vitara Brezza VDI OPT VDI OPT
15 2013 Maruti Swift Dzire VXI 1.2 BS IV VXI 1.2 BS IV
16 2017 Maruti Alto 800 LXI 800 LXI
17 2009 Hyundai i10 MAGNA 1.2 MAGNA 1.2
</code></pre>
|
python|pandas|dataframe
| 0
|
7,034
| 68,077,392
|
How do you create a datetime/timestamp from multiple columns in a csv file
|
<p>I am using pandas to read in a csv file which contains the year in the first column, the month in the second, the day in the third, the hour in the fourth, and the sea level in the fifth (<a href="https://i.stack.imgur.com/Mk7Bs.png" rel="nofollow noreferrer">csv layout</a>).</p>
<p>I would like to use the columns that I have imported to calculate a ‘datetime’ or ‘timestamp’ and then save this as a new column in my data frame. This new column should be formatted like this example here: 1985-01-01 01:00:00+00:00.</p>
|
<p><a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html#pandas-to-datetime" rel="nofollow noreferrer"><code>pd.to_datetime</code></a> is pretty handy. Assuming columns are named appropriately they can be easily passed in.</p>
<p>Given this DataFrame:</p>
<pre><code>df = pd.DataFrame([[1973, 3, 1, 6, 740], [1973, 3, 1, 7, 750]],
columns=list('ABCDE'))
A B C D E
0 1973 3 1 6 740
1 1973 3 1 7 750
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html#pandas-dataframe-rename" rel="nofollow noreferrer"><code>rename</code></a> if needed:</p>
<pre><code>df = df.rename(columns={'A': 'year', 'B': 'month', 'C': 'day', 'D': 'hour'})
year month day hour E
0 1973 3 1 6 740
1 1973 3 1 7 750
</code></pre>
<p>Then call <code>pd.to_datetime</code> on <code>year</code>, <code>month</code>, <code>day</code>, <code>hour</code>:</p>
<pre><code>df['new_col'] = pd.to_datetime(df[['year', 'month', 'day', 'hour']])
</code></pre>
<pre class="lang-none prettyprint-override"><code> year month day hour E new_col
0 1973 3 1 6 740 1973-03-01 06:00:00
1 1973 3 1 7 750 1973-03-01 07:00:00
</code></pre>
<p>All Together:</p>
<pre><code>df = pd.DataFrame([[1973, 3, 1, 6, 740], [1973, 3, 1, 7, 750]],
columns=list('ABCDE'))
df = df.rename(columns={'A': 'year', 'B': 'month', 'C': 'day', 'D': 'hour'})
df['new_col'] = pd.to_datetime(df[['year', 'month', 'day', 'hour']])
</code></pre>
<hr />
<p>Or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html#pandas-dataframe-rename" rel="nofollow noreferrer"><code>rename</code></a> + <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_datetime.html#pandas-to-datetime" rel="nofollow noreferrer"><code>pd.to_datetime</code></a> without affecting <code>df</code>:</p>
<pre><code>df = pd.DataFrame([[1973, 3, 1, 6, 740], [1973, 3, 1, 7, 750]],
columns=list('ABCDE'))
df['new_col'] = pd.to_datetime(
df[['A', 'B', 'C', 'D']]
.rename(columns={'A': 'year', 'B': 'month', 'C': 'day', 'D': 'hour'})
)
</code></pre>
<p>Notice <code>df</code> columns <code>A</code>, <code>B</code>, <code>C</code>, <code>D</code> not affected outside of datetime call:</p>
<pre class="lang-none prettyprint-override"><code> A B C D E new_col
0 1973 3 1 6 740 1973-03-01 06:00:00
1 1973 3 1 7 750 1973-03-01 07:00:00
</code></pre>
|
python|pandas|datetime|timestamp
| 2
|
7,035
| 68,092,351
|
Pandas: LDA Top n keywords and topics with weights
|
<p>I am doing a topic modelling task with LDA, and I am getting 10 components with 15 top words each:</p>
<pre><code>for index, topic in enumerate(lda.components_):
print(f'Top 10 words for Topic #{index}')
print([vectorizer.get_feature_names()[i] for i in topic.argsort()[-10:]])
print('\n')
</code></pre>
<p>prints:</p>
<pre><code>Top 10 words for Topic #0
['compile', 'describes', 'info', 'extent', 'changing', 'reader', 'reservation', 'countries', 'printed', 'clear', 'line', 'passwords', 'situation', 'tables', 'downloads']
</code></pre>
<p>Now I would like to create a pandas dataframe to show each topic (index) with all the keywords (rows) and see their weights.
I'd like the keywords not present in a topic to have 0 weight, but I can't get it to work. I have this so far, but it prints all the feature names (around 1700). How can I keep only the top 10 for each topic?</p>
<pre><code>topicnames = ['Topic' + str(i) for i in range(lda.n_components)]
# Topic-Keyword Matrix
df_topic_keywords = pd.DataFrame(lda_model.components_)
# Assign Column and Index
df_topic_keywords.columns = vectorizer.get_feature_names()
df_topic_keywords.index = topicnames
# View
df_topic_keywords.head()
</code></pre>
|
<p>If I understand correctly, you have a dataframe with all values and you want to keep the top 10 in each row, and have 0s on remaining values.</p>
<p>Here we <code>transform</code> each row by:</p>
<ul>
<li>keeping the 10 highest values</li>
<li>reindexing to the original index of the row (thus the columns of the dataframe) and filling with 0s:</li>
</ul>
<pre><code>>>> df.transform(lambda s: s.nlargest(10).reindex(s.index, fill_value=0), axis='columns')
a b c d e f g h i j k l m n o p q r s t u v w x y
a 0 0 63 98 0 0 73 0 78 0 94 0 0 63 68 98 0 0 0 67 0 77 0 0 0
z 76 0 0 0 84 0 62 61 0 93 0 0 82 70 0 0 0 91 0 0 48 95 0 0 0
</code></pre>
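<p>For reference, a hypothetical setup that reproduces the shape of the example above (random integer weights, two topics as rows):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(2, 25)),
                  index=['a', 'z'], columns=list('abcdefghijklmnopqrstuvwxy'))
</code></pre>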
|
python|pandas|dataframe|lda|topic-modeling
| 0
|
7,036
| 59,214,988
|
Convert CSV file to HTML and display in browser with Pandas
|
<p>How can I convert a <code>CSV</code> file to <code>HTML</code> and open it in a web browser via <code>Python</code> using <code>pandas</code>?
Below is my program, but it does not display the data in a web page:</p>
<pre><code>import pandas
import webbrowser
data = pandas.read_csv(r'C:\Users\issao\Downloads\data.csv')
data = data.to_html()
webbrowser.open('data.html')
</code></pre>
|
<p>You need to pass a <code>url</code> to <code>webbrowser</code>.</p>
<p>Save the html content into a local file and pass its path to webbrowser:</p>
<pre><code>import os
import webbrowser
import pandas
data = pandas.read_csv(r'C:\Users\issao\Downloads\data.csv')
html = data.to_html()
path = os.path.abspath('data.html')
url = 'file://' + path
with open(path, 'w') as f:
f.write(html)
webbrowser.open(url)
</code></pre>
|
python|html|pandas|csv|web
| 2
|
7,037
| 59,433,911
|
How to convert string to datetime format if it is in a list?
|
<p>I am trying to plot a graph with dates. The thing is that I have a problem with the format of the dates column.</p>
<p>I have tried to use the solution like this:</p>
<pre class="lang-py prettyprint-override"><code>df['Date'] = pd.to_datetime(df['Date'])
</code></pre>
<p>It works. But the problem is that when I append the values of the <code>Date</code> column into a list, the format of my <code>Date</code> column turns back to <code>String</code>. How do I solve this?</p>
|
<p>Hope this will work.</p>
<p>Solution 1:</p>
<pre><code>df['Date'] = df['Date'].astype('datetime64[ns]')
</code></pre>
<p>Solution 2: <code>dates</code> is a list of date strings; to convert them into datetime objects, try this (note the format string must not contain literal quote characters):</p>
<pre><code>import datetime as dt

dates_list = [dt.datetime.strptime(date, '%Y-%m-%d').date() for date in dates]
</code></pre>
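<p>As for the original problem: once the column has been converted with <code>pd.to_datetime</code>, taking values out of it yields <code>Timestamp</code> objects rather than strings; a quick sketch to verify:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'Date': ['2019-12-01', '2019-12-02']})
df['Date'] = pd.to_datetime(df['Date'])
dates = df['Date'].tolist()
print(type(dates[0]))  # <class 'pandas._libs.tslibs.timestamps.Timestamp'>
</code></pre>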
|
python-3.x|pandas|datetime
| 0
|
7,038
| 59,200,373
|
Padding and reshaping pandas dataframe
|
<p>I have a dataframe with the following form:</p>
<pre><code>data = pd.DataFrame({'ID':[1,1,1,2,2,2,2,3,3],'Time':[0,1,2,0,1,2,3,0,1],
'sig':[2,3,1,4,2,0,2,3,5],'sig2':[9,2,8,0,4,5,1,1,0],
'group':['A','A','A','B','B','B','B','A','A']})
print(data)
ID Time sig sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 2 0 4 0 B
4 2 1 2 4 B
5 2 2 0 5 B
6 2 3 2 1 B
7 3 0 3 1 A
8 3 1 5 0 A
</code></pre>
<p>I want to reshape and pad such that each 'ID' has the same number of Time values, the sig1, sig2 columns are padded with zeros (or the mean value within the ID), and the group carries the same letter value. The output after padding would be:</p>
<pre><code>data_pad = pd.DataFrame({'ID':[1,1,1,1,2,2,2,2,3,3,3,3],'Time':[0,1,2,3,0,1,2,3,0,1,2,3],
'sig1':[2,3,1,0,4,2,0,2,3,5,0,0],'sig2':[9,2,8,0,0,4,5,1,1,0,0,0],
'group':['A','A','A','A','B','B','B','B','A','A','A','A']})
print(data_pad)
ID Time sig1 sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 1 3 0 0 A
4 2 0 4 0 B
5 2 1 2 4 B
6 2 2 0 5 B
7 2 3 2 1 B
8 3 0 3 1 A
9 3 1 5 0 A
10 3 2 0 0 A
11 3 3 0 0 A
</code></pre>
<p>My end goal is to ultimately reshape this into something with shape (number of ID, number of time points, number of sequences {2 here}).</p>
<p>It seems that if I pivot <code>data</code>, it fills in with nan values, which is fine for the signal values, but not the groups. I am also hoping to avoid looping through data.groupby('ID'), since my actual data has a large number of groups and the looping would likely be very slow.</p>
|
<p>Here's one approach creating the new index with <code>pd.MultiIndex.from_product</code> and using it to <code>reindex</code> on the <code>Time</code> column:</p>
<pre><code>df = data.set_index(['ID', 'Time'])
# define a the new index
ix = pd.MultiIndex.from_product([df.index.levels[0],
df.index.levels[1]],
names=['ID', 'Time'])
# reindex using the above multiindex
df = df.reindex(ix, fill_value=0)
# forward fill the missing values in group
df['group'] = df.group.mask(df.group.eq(0)).ffill()
</code></pre>
<hr>
<pre><code>print(df.reset_index())
ID Time sig sig2 group
0 1 0 2 9 A
1 1 1 3 2 A
2 1 2 1 8 A
3 1 3 0 0 A
4 2 0 4 0 B
5 2 1 2 4 B
6 2 2 0 5 B
7 2 3 2 1 B
8 3 0 3 1 A
9 3 1 5 0 A
10 3 2 0 0 A
11 3 3 0 0 A
</code></pre>
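<p>For the stated end goal, a short sketch of the final 3-D reshape, assuming <code>df</code> keeps the MultiIndex built above (rows sorted by <code>ID</code> then <code>Time</code>):</p>
<pre><code>n_ids = df.index.levels[0].size     # 3 IDs
n_times = df.index.levels[1].size   # 4 time points
arr = df[['sig', 'sig2']].to_numpy().reshape(n_ids, n_times, 2)
print(arr.shape)                    # (3, 4, 2)
</code></pre>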
|
python|pandas|padding
| 1
|
7,039
| 59,445,281
|
How can I read from a file with Python from a specific location to a specific location?
|
<p>Currently, I'm doing: </p>
<pre><code> source_noise = np.fromfile('data/noise/' + source + '_16k.dat', sep='\n')
source_noise_start = np.random.randint(
0, len(source_noise) - len(audio_array))
source_noise = source_noise[source_noise_start:
source_noise_start + len(audio_array)]
</code></pre>
<p>My file looks like:</p>
<pre><code> -5.3302745e+02
-5.3985005e+02
-5.8963920e+02
-6.5875741e+02
-5.7371864e+02
-2.0796765e+02
2.8152341e+02
6.5398089e+02
8.6053581e+02
</code></pre>
<p>.. and on and on.</p>
<p>This requires that I read the entire file, when all I want to do is read a part of a file. Is there any way for me to do this with Python that will be FASTER than what I'm doing now?</p>
|
<p>You can use the <code>seek</code> method to move within the file and read from specific positions.</p>
<p>file data -> "hello world"</p>
<pre><code>start_read = 6
with open("filename", 'rb') as file:
    file.seek(start_read)
    output = file.read(5)
    print(output)
    # will display b'world' (the file was opened in binary mode)
</code></pre>
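<p>For line-oriented numeric text like the file in the question, another sketch that avoids parsing the whole file (<code>start</code> and <code>length</code> are placeholder values, and 'source' is a placeholder name):</p>
<pre><code>from itertools import islice
import numpy as np

start, length = 1000, 256                      # hypothetical window
with open('data/noise/source_16k.dat') as f:
    window = list(islice(f, start, start + length))  # skips lines lazily
source_noise = np.array(window, dtype=float)
</code></pre>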
|
python|numpy|file
| 1
|
7,040
| 46,008,310
|
Why does outputing numpy.dot to memmap does not work?
|
<p>If I do:</p>
<pre><code>a = np.ones((10,1))
b = np.ones((10,1))
c = np.memmap('zeros.mat', dtype=np.float64, mode='w+', shape=(10,10), order='C')
a.dot(b.T, out=c)
</code></pre>
<p>I am getting:</p>
<blockquote>
<p>ValueError: output array is not acceptable (must have the right type,
nr dimensions, and be a C-Array)</p>
</blockquote>
<p>I check all conditions from the error message and they seem to fit:</p>
<pre><code>>>> print(a.dtype == b.dtype == c.dtype)
>>> print(np.dot(a, b.T).shape == c.shape)
>>> print(c.flags['C_CONTIGUOUS'])
True
True
True
</code></pre>
<p>When I replace c with:</p>
<pre><code>c = np.zeros((10,10))
</code></pre>
<p>it works. </p>
<p>What am I doing wrong?</p>
|
<p>It doesn't just have to match the dtype; it also has to have the right <em>type</em>, as in <code>type(c)</code>. <code>c</code> is a <code>numpy.memmap</code> instance, not a <code>numpy.ndarray</code>, so that check fails.</p>
<p>As recommended in the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.memmap.html" rel="nofollow noreferrer"><code>numpy.memmap</code> docs</a>, you could instead use <code>mmap.mmap</code> to map the file and create a <code>numpy.ndarray</code> backed by the mmap as its buffer. You can look at the <a href="https://github.com/numpy/numpy/blob/v1.13.0/numpy/core/memmap.py#L202" rel="nofollow noreferrer"><code>numpy.memmap</code> implementation</a> to see what might be involved in doing that.</p>
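<p>A minimal sketch of that workaround, assuming the same 10x10 float64 shape as in the question:</p>
<pre><code>import mmap
import numpy as np

n_bytes = 10 * 10 * np.dtype(np.float64).itemsize
with open('zeros.mat', 'wb') as fh:   # pre-size the backing file
    fh.write(b'\x00' * n_bytes)

fh = open('zeros.mat', 'r+b')
mm = mmap.mmap(fh.fileno(), n_bytes)
c = np.ndarray((10, 10), dtype=np.float64, buffer=mm, order='C')

a = np.ones((10, 1))
b = np.ones((10, 1))
a.dot(b.T, out=c)  # works: c is a plain ndarray backed by the mmap
</code></pre>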
|
python|numpy
| 3
|
7,041
| 45,925,327
|
Dynamically filtering a pandas dataframe
|
<p>I am trying to filter a pandas data frame using thresholds for three columns</p>
<pre><code>import pandas as pd
df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
df = df.loc[(df.A > 0) & (df.B > 2) & (df.C > -1)].reset_index(drop = True)
df
A B C
0 2 5 2
1 10 3 1
2 3 6 2
</code></pre>
<p>However, I want to do this inside a function where the names of the columns and their thresholds are given to me in a dictionary. Here's my first try that works ok. Essentially I am putting the filter inside the <code>cond</code> variable and just running it:</p>
<pre><code>df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
limits_dic = {"A" : 0, "B" : 2, "C" : -1}
cond = "df = df.loc["
for key in limits_dic.keys():
cond += "(df." + key + " > " + str(limits_dic[key])+ ") & "
cond = cond[:-2] + "].reset_index(drop = True)"
exec(cond)
df
A B C
0 2 5 2
1 10 3 1
2 3 6 2
</code></pre>
<p>Now, finally I put everything inside a function and it stops working (perhaps <code>exec</code> function does not like to be used inside a function!):</p>
<pre><code>df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
limits_dic = {"A" : 0, "B" : 2, "C" : -1}
def filtering(df, limits_dic):
cond = "df = df.loc["
for key in limits_dic.keys():
cond += "(df." + key + " > " + str(limits_dic[key])+ ") & "
cond = cond[:-2] + "].reset_index(drop = True)"
exec(cond)
return(df)
df = filtering(df, limits_dic)
df
A B C
0 6 2 -5
1 2 5 2
2 10 3 1
3 -5 2 8
4 3 6 2
</code></pre>
<p>I know that the <code>exec</code> function acts differently when used inside a function, but I was not sure how to address the problem. Also, I am wondering if there is a more elegant way to define a function to do the filtering given two inputs: 1) <code>df</code> and 2) <code>limits_dic = {"A" : 0, "B" : 2, "C" : -1}</code>. I would appreciate any thoughts on this.</p>
|
<p>If you're trying to build a dynamic query, there are easier ways. Here's one using a list comprehension and <code>str.join</code>:</p>
<pre><code>query = ' & '.join(['{}>{}'.format(k, v) for k, v in limits_dic.items()])
</code></pre>
<p>Or, using <code>f</code>-strings with python-3.6+, </p>
<pre><code>query = ' & '.join([f'{k}>{v}' for k, v in limits_dic.items()])
</code></pre>
<pre><code>print(query)
'A>0 & C>-1 & B>2'
</code></pre>
<p>Pass the query string to <code>df.query</code>, it's meant for this very purpose:</p>
<pre><code>out = df.query(query)
print(out)
A B C
1 2 5 2
2 10 3 1
4 3 6 2
</code></pre>
<h3>What if my column names have whitespace, or other weird characters?</h3>
<p>From pandas 0.25, you can wrap your column name in backticks so this works:</p>
<pre><code>query = ' & '.join([f'`{k}`>{v}' for k, v in limits_dic.items()])
</code></pre>
<p>See <a href="https://stackoverflow.com/questions/50697536/pandas-query-function-not-working-with-spaces-in-column-names">this Stack Overflow post</a> for more.</p>
<hr>
<p>You could also use <code>df.eval</code> if you want to obtain a boolean mask for your query, and then indexing becomes straightforward after that:</p>
<pre><code>mask = df.eval(query)
print(mask)
0 False
1 True
2 True
3 False
4 True
dtype: bool
out = df[mask]
print(out)
A B C
1 2 5 2
2 10 3 1
4 3 6 2
</code></pre>
<hr>
<h3>String Data</h3>
<p>If you need to query columns that use string data, the code above will need a slight modification.</p>
<p>Consider (data from <a href="https://stackoverflow.com/a/50692578/4909087">this answer</a>):</p>
<pre><code>df = pd.DataFrame({'gender':list('MMMFFF'),
'height':[4,5,4,5,5,4],
'age':[70,80,90,40,2,3]})
print (df)
gender height age
0 M 4 70
1 M 5 80
2 M 4 90
3 F 5 40
4 F 5 2
5 F 4 3
</code></pre>
<p>And a list of columns, operators, and values:</p>
<pre><code>column = ['height', 'age', 'gender']
equal = ['>', '>', '==']
condition = [1.68, 20, 'F']
</code></pre>
<p>The appropriate modification here is:</p>
<pre><code>query = ' & '.join(f'{i} {j} {repr(k)}' for i, j, k in zip(column, equal, condition))
df.query(query)
age gender height
3 40 F 5
</code></pre>
<hr>
<p>For information on the <code>pd.eval()</code> family of functions, their features and use cases, please visit <a href="https://stackoverflow.com/questions/53779986/dynamic-expression-evaluation-in-pandas-using-pd-eval">Dynamic Expression Evaluation in pandas using pd.eval()</a>.</p>
|
python|pandas|dataframe|filter|exec
| 76
|
7,042
| 45,999,895
|
Tensorboard is not populating graph on windows
|
<p>I have written a simple Python program to multiply two values, expecting it to populate the TensorBoard graph.</p>
<p>I am using Windows - CPU machine.</p>
<p>Then after executing my program it generated required graph event file in the log directory path with the name <code>events.out.tfevents.1504266616.L7</code></p>
<p>I use the below command to start tensorboard:</p>
<pre><code>tensorboard --logdir C:\\Users\\SIMBU\\python_pgm\\TensorFlow\\graph --host 127.0.0.1 --port 5626
</code></pre>
<p>However, there is no graph under <code>http://127.0.0.1:5626/#graphs</code>.</p>
<p>What have I done wrong?</p>
|
<p>Tensorboard requires that you use linux style paths with forward slashes, e.g.</p>
<pre><code>tensorboard --logdir C:/Users/SIMBU/python_pgm/TensorFlow/graph
</code></pre>
|
python|tensorflow|tensorboard|tensorflow-xla
| 3
|
7,043
| 35,745,992
|
Apply a threshold on a Pandas DataFrame column
|
<p>I have a Daframe that looks like this</p>
<pre><code>In [52]: f
Out[52]:
Date
2015-02-23 12:00:00 0.172517
2015-02-23 13:00:00 0.172414
2015-02-23 14:00:00 0.172516
2015-02-23 15:00:00 0.173261
2015-02-23 16:00:00 0.172921
2015-02-23 17:00:00 0.172371
2015-02-23 18:00:00 0.176374
2015-02-23 19:00:00 0.177480
...
</code></pre>
<p>and I want to apply a threshold to the series so that if the values exceed it, I would just substitute the threshold's value for the actual one.</p>
<p>I am trying to definte a boolean dataframe like</p>
<p>Bool = f > Threshold </p>
<p>but I am not sure how to go on. Thanks in Advance.</p>
|
<p>IIUC then the following should work:</p>
<pre><code>f[f> Threshold] = some_val
</code></pre>
<p>Or you can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.clip_upper.html" rel="noreferrer"><code>clip_upper</code></a>:</p>
<pre><code>f = f.clip_upper(Threshold)
</code></pre>
<p>This will limit the upper values to your threshold value</p>
<pre><code>In [147]:
df[df['val'] > 0.175] = 0.175
df
Out[147]:
val
Date
2015-02-23 12:00:00 0.172517
2015-02-23 13:00:00 0.172414
2015-02-23 14:00:00 0.172516
2015-02-23 15:00:00 0.173261
2015-02-23 16:00:00 0.172921
2015-02-23 17:00:00 0.172371
2015-02-23 18:00:00 0.175000
2015-02-23 19:00:00 0.175000
In [149]:
df['val'].clip_upper(0.175)
Out[149]:
Date
2015-02-23 12:00:00 0.172517
2015-02-23 13:00:00 0.172414
2015-02-23 14:00:00 0.172516
2015-02-23 15:00:00 0.173261
2015-02-23 16:00:00 0.172921
2015-02-23 17:00:00 0.172371
2015-02-23 18:00:00 0.175000
2015-02-23 19:00:00 0.175000
Name: val, dtype: float64
</code></pre>
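<p>Note for newer pandas: <code>clip_upper</code> was deprecated and removed in pandas 1.0; the equivalent call is:</p>
<pre><code>f = f.clip(upper=Threshold)
</code></pre>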
|
python|pandas|boolean|time-series
| 13
|
7,044
| 50,797,615
|
Exact string search using lambda function
|
<p>How can one search for exact string using a lambda function? The data frame looks as follows:</p>
<pre><code>A B
10 Mini
20 Mini Van
15 Mini
13 Mini Bus
</code></pre>
<p>Desired results</p>
<pre><code>A B
10 Mini
15 Mini
</code></pre>
<p>I have tried the following, but all fail:</p>
<pre><code>df_temp = df_temp[df_temp['B'].apply(lambda x: 'mini' in x)] and
df_temp = df_temp[df_temp['B'].apply(lambda x: 'mini' in x.str.match())]
</code></pre>
<p>Thank you</p>
|
<p>Just check for equality:</p>
<pre><code>df_temp = df_temp[df_temp['B'] == 'Mini']
</code></pre>
<p>This works because <code>df_temp['B'] == 'Mini'</code> returns a Boolean series, which is then used to index <code>df_temp</code>.</p>
<p>Or you can use <a href="http://pandas.pydata.org/pandas-docs/version/0.22/generated/pandas.DataFrame.query.html" rel="noreferrer"><code>pd.DataFrame.query</code></a> for more intuitive syntax:</p>
<pre><code>df_temp = df_temp.query('B == "Mini"')
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html" rel="noreferrer"><code>pd.Series.apply</code></a> is just a thinly veiled loop; it should be reserved for when you <em>need</em> to explicitly operate on a series one item at a time in a loop. It is inefficient and verbose versus indexing via the above methods.</p>
|
python|pandas
| 5
|
7,045
| 51,109,471
|
cannot replace [''] with method pad on a DataFrame
|
<p>running python2.7</p>
<p>Question1</p>
<p>I want to replace empty string '' with None in my dataframe "test": </p>
<pre><code> from numpy.random import randn
test = pd.DataFrame(randn(3,2))
test.iloc[0,0]=''
test.replace('', None)
</code></pre>
<p>This gives me a TypeError:</p>
<p>cannot replace [''] with method pad on a DataFrame.</p>
<p>What went wrong?</p>
<p>Question 2:</p>
<pre><code>from numpy.random import randn

test = pd.DataFrame(randn(3,2))
# this works
test.iloc[0,1]= 'A'
test[1] = test[1].replace('A','b')
# this does not
test.iloc[0,0]=''
test.loc[0] = test[0].replace('', None)
test
0 1
0 b
1 0.042052 -1.44156
2 0.462131 -0.303288
</code></pre>
<p>I am expecting</p>
<pre><code>test
0 1
0 None b
1 0.042052 -1.44156
2 0.462131 -0.303288
</code></pre>
|
<p>None is being interpreted as a lack of an argument. Try:</p>
<pre><code>test.replace({'':None})
</code></pre>
<p>Question 2:</p>
<pre><code>test.where(test != '', None)
</code></pre>
|
python|pandas
| 6
|
7,046
| 50,936,819
|
Python - How to plot data from multiple text files in a single graph
|
<p>I am trying to plot a graph by importing data from multiple text files in a single graph (multiple lines). For that, I wrote the following code: </p>
<pre><code>import glob
import matplotlib.pyplot as plt
import numpy as np
filenames=glob.glob("FHGM3168-01G2-*#1.txt")
for f in filenames:
print(f)
data = np.loadtxt(f, skiprows=12)
plt.figure(figsize=(8,6), dpi=100, frameon=True, clear=False)
plt.plot(data[:,0],data[:,1])
plt.axis([-20000, 20000, -0.3, 0.3])
plt.axvline(x=0, color="black", linestyle='-')
plt.axhline(y=0, color="black", linestyle='-')
plt.title("Test")
plt.xlabel("Field (G)")
plt.ylabel("Moment(memu)")
plt.legend()
plt.show()
</code></pre>
<p>The problem with the above code is that I can't plot the data in a single graph: I get 50 individual graphs when I import 50 text files. Could someone please help me by correcting the code?</p>
|
<p>Initialize the graph outside the for loop (plt.figure() or the like). If you need plt.show(), do it after the loop.</p>
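<p>A sketch of the restructured loop (same filenames and styling as in the question):</p>
<pre><code>import glob
import numpy as np
import matplotlib.pyplot as plt

plt.figure(figsize=(8, 6), dpi=100)            # create the figure once
for f in glob.glob("FHGM3168-01G2-*#1.txt"):
    data = np.loadtxt(f, skiprows=12)
    plt.plot(data[:, 0], data[:, 1], label=f)  # every file draws into the same axes

plt.axis([-20000, 20000, -0.3, 0.3])
plt.axvline(x=0, color="black", linestyle='-')
plt.axhline(y=0, color="black", linestyle='-')
plt.title("Test")
plt.xlabel("Field (G)")
plt.ylabel("Moment(memu)")
plt.legend()
plt.show()                                     # once, after the loop
</code></pre>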
|
python|numpy|matplotlib|plot|spyder
| 0
|
7,047
| 50,840,255
|
Can I make numpy.sum return 0 when array is 0 rows?
|
<p>I have a sum function that, in a simplified version, looks like this. The row arrays used as indexers change dynamically within my program, but this is a heavily reduced version to demonstrate the issue:</p>
<p>This runs perfectly fine if I throw a few integers into the Row1 array, which is obviously intended, but since there are instances where one of the row arrays will be empty in my program, I was thinking about whether I can make numpy execute this task without throwing an error. Let's say:</p>
<pre><code>arr = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]])
Row1 = np.array([])
Row2 = np.array([1,2])
Col = np.array([0,2])
Result = np.sum(arr[Row1[:, None], Col]) + np.sum(arr[Row2[:, None], Col])
</code></pre>
<p>This could obviously be interpreted to return 4 (0 + 4), but numpy will throw an error as I'm trying to index with a zero-length array. I could solve this by doing:</p>
<pre><code>arr = np.array([[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]])
Row1 = np.array([])
Row2 = np.array([1,2])
Col = np.array([0,2])
sum1 = np.sum(arr[Row1[:, None], Col]) if len(Row1) > 0 else 0
sum2 = np.sum(arr[Row2[:, None], Col]) if len(Row2) > 0 else 0
Result = sum1 + sum2
</code></pre>
<p>This would work just fine but would require hundreds of extra lines in my code that feel a bit unnecessary, so I was just wondering if anyone has a more efficient way around this issue. Thank you! </p>
|
<blockquote>
<p>"This could obviously be interpreted to return 4 (0 + 4) <em>but numpy will obviously throw an error as I'm trying to indice a 0 dimension array</em>. I could solve this by doing:"</p>
</blockquote>
<p>Nope, as long as you make sure the empty array has dtype <code>int</code> it works just fine.</p>
<pre><code>arr = np.array(([1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]))
Row1 = np.array([], dtype=int)
Row2 = np.array([1,2])
Col = np.array([0,2])
Result = np.sum(arr[Row1[:, None], Col]) + np.sum(arr[Row2[:, None], Col])
Result
# 4
</code></pre>
|
python|numpy
| 2
|
7,048
| 33,232,265
|
How to get column value as percentage of other column value in pandas dataframe
|
<p>I have a question. I have a relatively huge pandas dataframe like this one:</p>
<pre><code>df:
Column1 Column2 Column3 Column4
0 100 50 25 10
1 200 100 50 10
2 10 10 5 5
3 20 15 10 5
4 10 7 7 7
</code></pre>
<p>What I now would like to do with it is adding strings to each value as follows:
For each value in Column2, add a string displaying this value as percentage of the value in Column1. Then, for all values in Columns3 until the end (ColumnN) add to each value a string displaying this value as percentage of Column2. The final result would look like this:</p>
<pre><code>df:
Column1 Column2 Column3 Column4
0 100 50 (50%) 25 (50%) 10 (20%)
1 200 100(50%) 50 (50%) 10 (10%)
2 10 10 (100%) 5 (50%) 5 (50%)
3 20 15 (75%) 10 (66,6%) 5 (33,3%)
4 10 7 (70%) 7 (100%) 7 (100%)
</code></pre>
<p>My idea of finally adding the strings to the corresponding values would maybe be something like <code>df['col'] = 'str' + df['col'].astype(str)</code>, but I don't really know how to start with it; for example, how to get the percentage values for each value in the first place. Any help on this would be really appreciated.</p>
|
<p>Something like this?</p>
<pre><code>In [95]: (df.astype(str) +
' (' +
df.apply(lambda x: (100 * x / x['Column1']), axis=1).astype(str) +
'%)')
Out[95]:
Column1 Column2 Column3 Column4
0 100 (100.0%) 50 (50.0%) 25 (25.0%) 10 (10.0%)
1 200 (100.0%) 100 (50.0%) 50 (25.0%) 10 (5.0%)
2 10 (100.0%) 10 (100.0%) 5 (50.0%) 5 (50.0%)
3 20 (100.0%) 15 (75.0%) 10 (50.0%) 5 (25.0%)
4 10 (100.0%) 7 (70.0%) 7 (70.0%) 7 (70.0%)
</code></pre>
|
python|pandas|dataframe
| 3
|
7,049
| 33,481,440
|
Substitute numpy array elements using dictionary
|
<p>I have this numpy array </p>
<pre><code>message = [ 97 98 114 97]
</code></pre>
<p>and this dictionary </p>
<pre><code>codes = {97: '1', 98: '01', 114: '000'}
</code></pre>
<p>and I am now iterating through the numpy array and converting those numbers to the ones corresponding in the dictionary like this:</p>
<pre><code>[codes[i] for i in message]
</code></pre>
<p>But this is really slow and takes a lot of memory, since I am creating a new list. Is there a better approach? Maybe one in which I still have the same numpy array, but with the new numbers, like this?</p>
<pre><code>message = [1 01 000 1]
</code></pre>
|
<p>Here's a <em>NumPythonic</em> solution using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow"><code>np.searchsorted</code></a> -</p>
<pre><code>np.asarray(list(codes.values()))[np.searchsorted(list(codes.keys()), message)]
</code></pre>
<p>Please note that the output would be a NumPy array as well. If you would like to have a list output, wrap it with <code>.tolist()</code> -</p>
<pre><code>np.asarray(list(codes.values()))[np.searchsorted(list(codes.keys()), message)].tolist()
</code></pre>
<p>I would think the only bottleneck to this approach would be the conversion to a NumPy array with <code>np.asarray()</code>, as usually <code>np.searchsorted</code> is pretty efficient. Note that <code>np.searchsorted</code> assumes the keys are sorted (true here), and in Python 3 the dict views need the explicit <code>list(...)</code> wrapping shown above.</p>
<p>Sample run -</p>
<pre><code>In [36]: message = [ 97, 98, 114, 97]
In [37]: codes = {97: '1', 98: '01', 114: '000'}
In [38]: [codes[i] for i in message]
Out[38]: ['1', '01', '000', '1']
In [39]: np.asarray(list(codes.values()))[np.searchsorted(list(codes.keys()), message)]
Out[39]:
array(['1', '01', '000', '1'],
dtype='|S3')
</code></pre>
|
python|arrays|numpy|dictionary
| 1
|
7,050
| 66,531,167
|
Calculate z-score for multiple columns of dataset on groupby and transform to original shape in pandas without using loop
|
<p>I have a data frame</p>
<pre><code>df = pd.DataFrame([["A",1,98,56,61], ["B",1,99,54,36], ["C",1,97,32,83],["B",1,96,31,90], ["C",1,45,32,12], ["A",1,67,33,55], ["C",1,54,65,73], ["A",1,34,84,98], ["B",1,76,12,99]], columns=["id","date","c1","c2","c3"])
</code></pre>
<p>I need to calculate the z-score for columns "c1", "c2", "c3" using a groupby on "id", and transform it back to the original shape without using a loop.</p>
<p><strong>Expected output:</strong></p>
<pre><code>df_out = pd.DataFrame([["A",1,1.21179,-0.079921,-0.543442], ["B",1,0.84893,1.26172,-1.401826], ["C",1,1.395551,-0.707107,0.860437],["B",1,0.55507,-0.077644,0.539164], ["C",1,-0.89609,-0.707107,-1.402194], ["A",1,0.025511,-1.182827,-0.858988], ["C",1,-0.49946,1.414214,0.541757], ["A",1,-1.237301,1.262748,1.40243], ["B",1,-1.404,-1.184075,0.862662]], columns=["id","date","c1","c2","c3"])
</code></pre>
<p>How to do it?</p>
|
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.DataFrameGroupBy.transform.html" rel="nofollow noreferrer"><code>GroupBy.transform</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>DataFrame.join</code></a>:</p>
<pre><code>from scipy.stats import zscore
df = df[['id','date']].join(df.groupby(['id','date']).transform(zscore))
print (df)
id date c1 c2 c3
0 A 1 1.211790 -0.079921 -0.543442
1 B 1 0.848930 1.261720 -1.401826
2 C 1 1.395551 -0.707107 0.860437
3 B 1 0.555070 -0.077644 0.539164
4 C 1 -0.896090 -0.707107 -1.402194
5 A 1 0.025511 -1.182827 -0.858988
6 C 1 -0.499460 1.414214 0.541757
7 A 1 -1.237301 1.262748 1.402430
8 B 1 -1.404000 -1.184075 0.862662
</code></pre>
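<p>An equivalent without SciPy, a sketch using a lambda (<code>ddof=0</code> matches the default of <code>scipy.stats.zscore</code>):</p>
<pre><code>cols = ['c1', 'c2', 'c3']
df[cols] = df.groupby(['id', 'date'])[cols].transform(
    lambda s: (s - s.mean()) / s.std(ddof=0))
</code></pre>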
|
python|python-3.x|pandas|dataframe
| 2
|
7,051
| 66,603,096
|
Creating a Heatmap of Cartisan X,Y,Z values using Python
|
<p>Here is what I am looking to do:</p>
<ol>
<li>Import .csv file containing Cartesian Coordinates of X, Y, and Z values, as well as nominal Z values.</li>
<li>Create a 2D heatmap image of those points where X, Y are locations of known points and the Z value determines the color at that location. Specifically the deviation of the Z value from the nominal Z value for that specific X, Y point.</li>
<li>Save the image to the same directory as the .csv</li>
</ol>
<p>I understand how to read in and save files; where I am getting hung up is on where to begin preparing the data for a heatmap. I have been messing around using <code>matplotlib.pyplot</code>, <code>numpy</code>, and <code>pandas</code> and trying to use the <code>plt.contourf()</code> function to generate the heatmap, but I don't know enough to get it to work correctly. Most examples and tutorials I have found use Z as some mathematical function of X, Y to keep the examples simple, instead of referencing data from a file like I am trying to do. Another issue I am facing is that the data I am working with is not necessarily rectangular and does not fit nicely into a grid. The points could be at random X, Y locations which are not evenly distributed.</p>
<p>Anyway here is some of the stuff I have so far (Just plots the X Y Data):</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import matplotlib.mlab as ml
import seaborn as sns
csv_name = 'Test.csv'
# import the .csv for heatmap
print('CSV FORMAT SHOULD BE: Point_Number, X_Coord, Y_Coord, Z_Coord, Z_Coord_Nominal, Upper_Tol, Lower_Tol')
specify_filename = input("Enter a filename for data:")
myfile = open(specify_filename)
heatmap_data = myfile.read()
myfile.close()
print('Data File Contents:')
print(heatmap_data)
# data preparation
df = pd.read_csv(specify_filename, header=None)
df.columns = ['Point_Number', 'X_Coord', 'Y_Coord', 'Z_Coord', 'Z_Coord_Nominal', 'Upper_Tol', 'Lower_Tol']
plt.gca().set_aspect('equal', adjustable='box')
plt.plot(df['X_Coord'], df['Y_Coord'], 'o')
# This is where I don't know what to do with the data to make it plot properly using plt.contourf()
# save heatmap image
print('Data File:')
print(specify_filename)
output_path = specify_filename.replace(csv_name, 'heatmap.png', 1)
print('Output File:')
print(output_path)
plt.savefig(output_path)
</code></pre>
<p>This creates a plot that looks like this: (Note that the points are not in a rectangular grid)
<a href="https://i.stack.imgur.com/iDota.png" rel="nofollow noreferrer">Output Image with X Y coordinates.</a></p>
<p>Ideally the generated heatmap image will interpolate the Z value between points and look something like <a href="https://i.stack.imgur.com/VPbJG.png" rel="nofollow noreferrer">this </a> or <a href="https://i.stack.imgur.com/IeZDL.png" rel="nofollow noreferrer">this</a>. I would like to use the deviation from the nominal Z values for the colors on the heatmap.</p>
<p>Any help or <strong>examples</strong> would be greatly appreciated. If I am using the wrong tools for this, I would love to know what the better alternatives are. I barely use python so I am not extremely familiar with the nuances, but am open to any help I can get. Thank you!</p>
|
<p>If your data is ordered then you can use a pcolormesh.</p>
<pre class="lang-py prettyprint-override"><code>
import matplotlib.pyplot as plt
import numpy as np
theta, r = np.meshgrid(np.linspace(0, 2*np.pi, 50), np.linspace(0, 5, 50));
X_coord = r * np.cos(theta);
Y_coord = r * np.sin(theta);
Z_coord = np.sin(3*theta) * np.cos(r)
plt.pcolormesh(X_coord, Y_coord, Z_coord, shading='nearest')
</code></pre>
<p><a href="https://i.stack.imgur.com/M2q1n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M2q1n.png" alt="nearest" /></a></p>
<p>You can smooth the image somewhat by changing the shading parameter</p>
<pre class="lang-py prettyprint-override"><code>plt.pcolormesh(X_coord, Y_coord, Z_coord, shading='gouraud')
</code></pre>
<p><a href="https://i.stack.imgur.com/ZuuWR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZuuWR.png" alt="enter image description here" /></a></p>
|
python|pandas|numpy|matplotlib|heatmap
| 0
|
7,052
| 66,514,218
|
Is pytorch 1.7 officially enabled for cuda 10.0?
|
<p>I had to stay on CUDA 10.0 for personal projects.</p>
<p>Rather than installing Pytorch with versions appropriate for CUDA 10.0, I accidentally installed Pytorch 1.7 supported with CUDA 10.1. In particular, I installed by</p>
<pre><code>pip install torch==1.7.1+cu101 torchvision==0.8.2+cu101 torchaudio==0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
</code></pre>
<p>Surprisingly, everything works fine so far although the CUDA versions do not match.</p>
<p>To verify my installation, I've run the code given in <a href="https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py" rel="nofollow noreferrer"><code>collect_env.py</code></a>, and it was fine.</p>
<p>I am just wondering few things.</p>
<ol>
<li>Did Pytorch team officially comment Pytorch 1.7 is compatible with CUDA 10.0?</li>
<li>Would there be more rigorous ways to verify my Pytorch installation?</li>
</ol>
|
<blockquote>
<p>Surprisingly, everything works fine so far although the CUDA versions
do not match.</p>
</blockquote>
<p>Changes between minor versions should work (mismatch like this worked in my case), although there is no promise of compatibility in <code>10.x</code> release (<a href="https://docs.nvidia.com/deploy/cuda-compatibility/index.html" rel="nofollow noreferrer">source</a>), only since <code>11.x</code> there will be <a href="https://en.wikipedia.org/wiki/Binary-code_compatibility" rel="nofollow noreferrer">binary compatibility</a>.</p>
<blockquote>
<p>Did the Pytorch team officially comment that Pytorch 1.7 is compatible with
CUDA 10.0?</p>
</blockquote>
<p>Not that I'm aware of, but <a href="https://download.pytorch.org/whl/torch_stable.html" rel="nofollow noreferrer">listed wheels</a> <strong>do not include</strong> <code>10.0</code> CUDA and PyTorch <code>1.7.0</code> (latest with <code>10.0</code> support seems to be <code>1.4.0</code>).</p>
<blockquote>
<p>Would there be a more rigorous way to verify my Pytorch installation?</p>
</blockquote>
<p>As above, maybe cloning PyTorch's github repo, reverting to tagged release and running tests (folder <a href="https://github.com/pytorch/pytorch/tree/master/test" rel="nofollow noreferrer">here</a>, one of cuda test files <a href="https://github.com/pytorch/pytorch/blob/master/test/test_cuda.py" rel="nofollow noreferrer">here</a>), but for personal projects might be excessive.</p>
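<p>As a quick, hands-on check beyond <code>collect_env.py</code>, a small sketch that exercises the CUDA runtime directly (the printed CUDA version is the one the wheel was built against, not the system toolkit):</p>
<pre><code>import torch

print(torch.__version__)          # e.g. 1.7.1+cu101
print(torch.version.cuda)         # CUDA version the wheel was compiled with
print(torch.cuda.is_available())  # True if the driver/runtime can actually be used

x = torch.randn(1000, 1000, device='cuda')
y = x @ x                         # run a real kernel on the GPU
torch.cuda.synchronize()
print(y.sum().item())
</code></pre>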
|
installation|pytorch
| 2
|
7,053
| 66,498,690
|
How to take the first non null element, row-wise, from a column that consists of lists?
|
<p>Suppose a generic dataframe with 4 numeric columns and one final column that, row-wise, gathers all observations of the past four columns inside a list.</p>
<p>Let me provide a code example of the initial dataframe:</p>
<pre><code>import pandas as pd
import numpy as np
x = pd.DataFrame({'col_1': [np.nan, 35, 27, 50],
'col_2': [15,12,np.nan, np.nan],
'col_3': [12,15,40,np.nan],
'col_4': [np.nan,np.nan,np.nan,5],
})
col_names = x.filter(regex='col', axis='columns').columns.tolist()
x['fifth_col'] = x[col_names].values.tolist()
col_1 col_2 col_3 col_4 fifth_col
0 NaN 15.0 12.0 NaN [nan, 15.0, 12.0, nan]
1 35.0 12.0 15.0 NaN [35.0, 12.0, 15.0, nan]
2 27.0 NaN 40.0 NaN [27.0, nan, 40.0, nan]
3 50.0 NaN NaN 5.0 [50.0, nan, nan, 5.0]
</code></pre>
<p>I need to create a sixth column that shows the first non-null element in every list of the fifth column.
I tried with the following statement but it does not work:</p>
<pre><code>x['sixth_col'] = x['fifth_col'].notna().apply(lambda x: x.first)
</code></pre>
|
<p><code>explode</code> the list then take the <code>first</code> value along the index.</p>
<pre><code>x['sixth_col'] = x['fifth_col'].explode().groupby(level=0).first()
</code></pre>
<hr />
<pre><code> col_1 col_2 col_3 col_4 fifth_col sixth_col
0 NaN 15.0 12.0 NaN [nan, 15.0, 12.0, nan] 15.0
1 35.0 12.0 15.0 NaN [35.0, 12.0, 15.0, nan] 35.0
2 27.0 NaN 40.0 NaN [27.0, nan, 40.0, nan] 27.0
3 50.0 NaN NaN 5.0 [50.0, nan, nan, 5.0] 50.0
</code></pre>
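<p>An alternative sketch that skips the list column entirely: back-fill across the original numeric columns and take the first column (this reuses the question's <code>col_names</code> list):</p>
<pre><code>x['sixth_col'] = x[col_names].bfill(axis=1).iloc[:, 0]
</code></pre>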
|
python|pandas|list|dataframe
| 1
|
7,054
| 66,661,238
|
Save model.summary() as pdf
|
<p>I'm trying to print my model.summary() to pdf.</p>
<p>I tried following the question asked here: <a href="https://stackoverflow.com/questions/45199047/how-to-save-model-summary-to-file-in-keras">How to save model.summary() to file in Keras?</a></p>
<pre><code>def myprint(s):
with open('/content/drive/My Drive/xxx/yyy/zzz.pdf','w') as f:
print(s, file=f)
model.summary(print_fn=myprint)
</code></pre>
<p>But the file is unreadable.</p>
<p>Can someone give me a simple working solution? Thanks in advance.</p>
|
<p>Do you need to save the summary as a PDF? If so, you need to use a wrapper library like <a href="https://pyfpdf.readthedocs.io/en/latest/FAQ/index.html" rel="nofollow noreferrer">fpdf</a> or <a href="https://pypi.org/project/PyPDF2/" rel="nofollow noreferrer">PyPDF2</a> in order to generate the required PDF metadata so that the file is properly formatted.</p>
<p>Otherwise, you can write to a <code>.txt</code> file using your approach:</p>
<pre><code>def myprint(s):
with open('/content/drive/My Drive/xxx/yyy/zzz.txt','w') as f:
print(s, file=f)
model.summary(print_fn=myprint)
</code></pre>
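<p>If a real PDF is required, a minimal sketch with the <code>fpdf</code> package could look like this — the output path is the one from the question, and a monospaced font keeps the summary's column alignment:</p>
<pre><code>from fpdf import FPDF

lines = []
model.summary(print_fn=lambda s: lines.append(s))

pdf = FPDF()
pdf.add_page()
pdf.set_font('Courier', size=8)
for line in lines:
    pdf.multi_cell(0, 4, line)   # one row of the summary per cell
pdf.output('/content/drive/My Drive/xxx/yyy/zzz.pdf')
</code></pre>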
|
python|tensorflow|pdf|keras
| 0
|
7,055
| 16,452,182
|
Error when importing numba in Python 3
|
<p>I have just installed numba in my Ubuntu 13.04 via pip-3.3, as an alternative to numpy and cython to make calculations, but every time I try to import it in Python I get a "Segmentation fault (core dumped)" error and Python exits:</p>
<pre><code>esteban@esteban-Inspiron-1525:~$ python3
Python 3.3.1 (default, Apr 17 2013, 22:30:32)
[GCC 4.7.3] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import numba
Segmentation fault (core dumped)
esteban@esteban-Inspiron-1525:~$
</code></pre>
<p>Does anyone know what could be happening? Could it be a problem in the installation, or is it that numba is not supported in Python 3 yet? I had these packages installed before numba:</p>
<pre><code>llvm
cython
llvmpy
sphinx (for doc)
</code></pre>
<p>Thanks a lot!</p>
|
<p>Numba has preliminary support for Python 3. It should work, but I don't think it's received as much testing. Which version of numba did you try? </p>
<p>Also, how did you install llvm and llvmpy and which version of numpy do you have installed? </p>
|
python|numpy|python-3.x|installation|cython
| 4
|
7,056
| 16,563,552
|
Pandas: fancy indexing a dataframe
|
<p>I have a Pandas dataframe, df1, that is a year-long <em>5 minute</em> timeseries with columns A-Z.</p>
<pre><code>df1.shape
(105121, 26)
df1.index
<class 'pandas.tseries.index.DatetimeIndex'>
[2002-01-02 00:00:00, ..., 2003-01-02 00:00:00]
Length: 105121, Freq: 5T, Timezone: None
</code></pre>
<p>I have a second dataframe, df2, that is a year-long <em>daily</em> timeseries (over the same period) with matching columns. The values of this second frame are Booleans.</p>
<pre><code>df2.shape
(365, 26)
df2.index
<class 'pandas.tseries.index.DatetimeIndex'>
[2002-01-02 00:00:00, ..., 2003-01-01 00:00:00]
Length: 365, Freq: D, Timezone: None
</code></pre>
<p>I want to use df2 as a fancy index to df1, i.e. "df1.ix[df2]" or somesuch, such that I get back a subset of df1's columns for each date -- i.e. those which df2 says are True on that date (with all timestamps thereon). Thus the shape of the result should be (105121, width), where width is the number of distinct columns the Booleans imply (width<=26).</p>
<p>Currently, df1.ix[df2] only partially works. Only the 00:00 values for each day are picked out, which makes sense in the light of df2's 'point-like' time series.</p>
<p>I next tried time spans as the df2 index:</p>
<pre><code>df2.index
PeriodIndex: 365 entries, 2002-01-02 to 2003-01-01
</code></pre>
<p>This time, I get an error:</p>
<pre><code>/home/wchapman/.local/lib/python2.7/site-packages/pandas-0.11.0-py2.7-linux-x86_64.egg/pandas/core/index.pyc in get_indexer(self, target, method, limit)
844 this = self.astype(object)
845 target = target.astype(object)
--> 846 return this.get_indexer(target, method=method, limit=limit)
847
848 if not self.is_unique:
AttributeError: 'numpy.ndarray' object has no attribute 'get_indexer'
</code></pre>
<p>My interim solution is to loop by date, but this seems inefficient. Is Pandas capable of this kind of fancy indexing? I don't see examples anywhere in the documentation.</p>
|
<p>Here's one way to do this:</p>
<pre><code>t_index = df1.index
d_index = df2.index
mask = t_index.map(lambda t: t.date() in d_index)
df1[mask]
</code></pre>
<p>And slightly faster (but with the same idea) would be to use:</p>
<pre><code>mask = pd.to_datetime([datetime.date(*t_tuple)
for t_tuple in zip(t_index.year,
t_index.month,
t_index.day)]).isin(d_index)
</code></pre>
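<p>On recent pandas versions the same mask can also be built without a Python-level loop by normalizing the timestamps to midnight — a short sketch:</p>
<pre><code>mask = df1.index.normalize().isin(df2.index)
df1[mask]
</code></pre>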
|
numpy|pandas
| 0
|
7,057
| 57,622,868
|
Elegant way to do fuzzy map based on a mix of substring and string in pandas
|
<p>I have two dataframes <code>mapp</code> and <code>data</code> like as shown below</p>
<pre><code>mapp = pd.DataFrame({'variable': ['d22','Studyid','noofsons','Level','d21'],'concept_id':[1,2,3,4,5]})
data = pd.DataFrame({'sourcevalue': ['d22heartabcd','Studyid','noofsons','Level','d21abcdef']})
</code></pre>
<p><a href="https://i.stack.imgur.com/gNHcs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gNHcs.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/kTk9X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kTk9X.png" alt="enter image description here"></a></p>
<p>I would like to fetch a value from <code>data</code> and check whether it is present in <code>mapp</code>; if yes, then get the corresponding <code>concept_id</code> value. The priority is to first look for an <code>exact match</code>. If no match is found, then go for a <code>substring match</code>. As I am dealing with more than a million records, any scalable solution is helpful.</p>
<pre><code>s = mapp.set_index('variable')['concept_id']
data['concept_id'] = data['sourcevalue'].map(s)
</code></pre>
<p>produces an output like below</p>
<p><a href="https://i.stack.imgur.com/f5Wr0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/f5Wr0.png" alt="enter image description here"></a></p>
<p>When I do substring match, valid records also become NA as shown below</p>
<pre><code>data['concept_id'] = data['sourcevalue'].str[:3].map(s)
</code></pre>
<p><a href="https://i.stack.imgur.com/BuO4Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BuO4Z.png" alt="enter image description here"></a></p>
<p>I don't know why it's giving <code>NA</code> for valid records now </p>
<p>How can I do these two checks at once in an elegant and efficient manner?</p>
<p>I expect my output to be like as shown below</p>
<p><a href="https://i.stack.imgur.com/1bSmp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1bSmp.png" alt="enter image description here"></a></p>
|
<p>If need map by strings and first 3 letters create 2 separate Series and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.fillna.html" rel="nofollow noreferrer"><code>Series.fillna</code></a> or <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.combine_first.html" rel="nofollow noreferrer"><code>Series.combine_first</code></a> for replace missing values from <code>a</code> by <code>b</code>:</p>
<pre><code>s = mapp.set_index('variable')['concept_id']
a = data['sourcevalue'].map(s)
b = data['sourcevalue'].str[:3].map(s)
data['concept_id'] = a.fillna(b)
#alternative
#data['concept_id'] = a.combine_first(b)
print (data)
sourcevalue concept_id
0 d22heartabcd 1.0
1 Studyid 2.0
2 noofsons 3.0
3 Level 4.0
4 d21abcdef 5.0
</code></pre>
<p>EDIT:</p>
<pre><code>#all strings map Series
s = mapp.set_index('variable')['concept_id']
print (s)
variable
d22 1
Studyid 2
noofsons 3
Level 4
d21 5
Name: concept_id, dtype: int64
#first 3 letters map Series
s1 = mapp.assign(variable = mapp['variable'].str[:3]).set_index('variable')['concept_id']
print (s1)
variable
d22 1
Stu 2
noo 3
Lev 4
d21 5
Name: concept_id, dtype: int64
</code></pre>
<hr>
<pre><code>#first 3 letters map by all strings
print (data['sourcevalue'].str[:3].map(s))
0 1.0
1 NaN
2 NaN
3 NaN
4 5.0
Name: sourcevalue, dtype: float64
#first 3 letters match by 3 first letters map Series
print (data['sourcevalue'].str[:3].map(s1))
0 1
1 2
2 3
3 4
4 5
Name: sourcevalue, dtype: int64
</code></pre>
|
python|python-3.x|pandas|dataframe
| 3
|
7,058
| 57,696,446
|
how to replace timestamp with 1 and 0?
|
<p>I would like to replace the enrolment time log with 1 and the null cells with 0 on a large dataset, below is a sample:</p>
<pre><code>data = [['tom', '10', "2014-02-05 21:24:44 UTC"], ['nick', '',''], ['juli', 14, '2014-02-15 21:55:43 UTC']]
BD = pd.DataFrame(data, columns = ['Name', 'Age', 'Enrolled_at'])
</code></pre>
<p>I've tried the following code but they are for replacing a certain value and in my dateset, the timestamps are not unique.</p>
<p>1</p>
<pre><code>BD['enrolled_at'].replace('', "1", inplace=True)
BD.head()
</code></pre>
<p>2</p>
<pre><code>BD.loc[(BD['enrolled_at'] > 1990)] = 1
</code></pre>
<p>3</p>
<pre><code>BD['enrolled_at'].replace("$20$", "1", regex=True, inplace=True)
BD
</code></pre>
<p>.</p>
<p>The current situation</p>
<p><a href="https://i.stack.imgur.com/ga41p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ga41p.png" alt="The current situation"></a></p>
<p>.</p>
<p>Expected result</p>
<p><a href="https://i.stack.imgur.com/ckWdz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ckWdz.png" alt="Expected result"></a></p>
|
<pre><code>BD['Enrolled_at'] = pd.to_datetime(BD['Enrolled_at'])
BD['Enrolled_at'] = np.where(BD['Enrolled_at'] > '1990-01-01', 1, 0)
</code></pre>
<p>You can set the 1990 date to the lowest value of dates in your data</p>
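<p>If the goal is simply 1 for "has a timestamp" and 0 for an empty cell, an alternative sketch that avoids picking an arbitrary cutoff date is to let the parser turn empty strings into <code>NaT</code>:</p>
<pre><code>parsed = pd.to_datetime(BD['Enrolled_at'], errors='coerce')  # '' becomes NaT
BD['Enrolled_at'] = parsed.notna().astype(int)
</code></pre>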
|
python|pandas|numpy
| 3
|
7,059
| 57,558,956
|
How can I add functions to aggregations in groupby in python?
|
<p>I'm trying to get groupby stats with additional math operations between the aggregations</p>
<p>I tried</p>
<pre><code>...agg({
'id':"count",
'repair':"count",
('repair':"count")/('id':"count")
})
</code></pre>
<blockquote>
<pre><code>yr id repair
2016 37 27
2017 53 28
</code></pre>
</blockquote>
<p>After grouping I'm able to get this stat by</p>
<pre><code>gr['repair']/gr['id']*100
</code></pre>
<blockquote>
<pre><code>yr
2016 0.73
2017 0.53
</code></pre>
</blockquote>
<p>How can I get this type of calculation within the groupby?</p>
|
<p>Consider a custom function that returns an aggregated data set:</p>
<pre><code>def agg_func(g):
g['id'] = g['id'].count()
g['repair'] = g['repair'].count()
g['repair_per_id'] = (g['repair'] / g['id']) * 100
return g.aggregate('max') # CAN ALSO USE: min, max, mean, median, mode
agg_df = (df.groupby(['group'])
.apply(agg_func)
.reset_index(drop=True)
)
</code></pre>
<hr>
<p>To demonstrate with seeded, random data:</p>
<pre><code>import numpy as np
import pandas as pd
data_tools = ['sas', 'stata', 'spss', 'python', 'r', 'julia']
np.random.seed(8192019)
random_df = pd.DataFrame({'group': np.random.choice(data_tools, 500),
'id': np.random.randint(1, 10, 500),
'repair': np.random.uniform(0, 100, 500)
})
# RANDOMLY ASSIGN NANs
random_df['repair'].loc[np.random.choice(random_df.index, 75)] = np.nan
# RUN AGGREGATIONS
agg_df = (random_df.groupby(['group'])
.apply(agg_func)
.reset_index(drop=True)
)
print(agg_df)
# group id repair repair_per_id
# 0 julia 79 70 88.607595
# 1 python 89 74 83.146067
# 2 r 82 69 84.146341
# 3 sas 74 66 89.189189
# 4 spss 77 69 89.610390
# 5 stata 99 84 84.848485
</code></pre>
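<p>On pandas 0.25+ the same result can be produced without <code>apply</code>, using named aggregation and deriving the ratio afterwards — a sketch keeping the same column names (it only keeps the counts and the ratio):</p>
<pre><code>agg_df = (df.groupby('group')
            .agg(id=('id', 'count'), repair=('repair', 'count'))
            .assign(repair_per_id=lambda d: d['repair'] / d['id'] * 100)
            .reset_index())
</code></pre>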
|
python|pandas|group-by|aggregation
| 3
|
7,060
| 57,705,746
|
Python Pandas: Append column names in each row
|
<p>Is there a way to append column names in dataframe rows?</p>
<p>input:</p>
<pre><code>cv cv mg mg
5g 5g 0% zinsenzin
</code></pre>
<p>output:</p>
<pre><code>cv cv col_name mg mg col_name
5g 5g cv 0% zinsenzin mg
</code></pre>
<p>I tried this, but it's not working:</p>
<pre><code>list_col = list(df)
for i in list_col:
if i != i.shift(1)
df['new_col'] = i
</code></pre>
<p>I got stuck here and can't find any solution.</p>
|
<p>In pandas, working with duplicated column names is not easy, but it is possible:</p>
<pre><code>c = 'cv cv mg mg sa sa ta ta at at ad ad an av av ar ar ai ai ca ca ch ch ks ks ct ct ce ce cw cw dt dt fr fr fs fs fm fm it it lg lg mk mk md md mt mt ob ob ph ph pb pb rt rt sz sz tg tg tt tt vv vv yq yq fr fr ms ms lp lp ts ts mv mv'.split()
</code></pre>
<hr>
<pre><code>df = pd.DataFrame([range(77)], columns=c)
print (df)
cv cv mg mg sa sa ta ta at at ... fr fr ms ms lp lp ts \
0 0 1 2 3 4 5 6 7 8 9 ... 67 68 69 70 71 72 73
ts mv mv
0 74 75 76
[1 rows x 77 columns]
df = pd.concat([v.assign(new_col=k) for k, v in df.groupby(axis=1,level=0,sort=False)],axis=1)
print (df)
cv cv new_col mg mg new_col sa sa new_col ta ... new_col lp lp \
0 0 1 cv 2 3 mg 4 5 sa 6 ... ms 71 72
new_col ts ts new_col mv mv new_col
0 lp 73 74 ts 75 76 mv
[1 rows x 115 columns]
</code></pre>
|
pandas
| 2
|
7,061
| 24,340,785
|
Python plot values at nodes in meshgrid
|
<p>I have from</p>
<pre><code> numpy.meshgrid(xx,yy)
</code></pre>
<p>a rectangular grid.</p>
<p>To get the coordinates (nodes) I split it into two lists X and Y with values:</p>
<pre><code> X = (0.0 , 0.2 , 0.4 , 0.6 , 0.8 , 1.0)*6
Y = (0.0 , 0.2 , 0.4 , 0.6 , 0.8 , 1.0)*6
</code></pre>
<p>Which gives a grid with 36 points. (think of it as a unit square)</p>
<p>Now i have from solving a linear system of equation another list which have the size (36,1).</p>
<p>I want to plot the values from the (36,1) list at the corresponding nodes in my grid.</p>
<p>So the first 6 points from the (36,1) list lie on the x-axis (Y = 0), the following 6 lie on Y = 0.2, and so on.
Does anyone have any idea how to do this?</p>
|
<p>Take your output array and:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
disparray = myarray + (np.arange(6) * .2)[:, None]
plt.plot(X.flatten(), disparray.flatten(), '.')
</code></pre>
<p>This should do.</p>
<p>And, of course you can plot with a for loop.</p>
<pre><code>plt.figure()
for r in range(myarray.shape[0]):
    plt.plot(X[0], myarray[r] + 0.2*r, 'k')
</code></pre>
<p>This uses the X values from the first row of your mesh as the X values in the plot and plots each row of your result array <code>myarray</code> at offsets 0, 0.2, 0.4... with a black line</p>
|
python|numpy|matplotlib|plot|mesh
| 1
|
7,062
| 24,109,603
|
Vectorize over only one axis in a 2D array with numpy vectorize
|
<p>I have the following function to get the Euclidean distance between two vectors <code>a</code> and <code>b</code>. </p>
<pre><code>def distance_func(a,b):
distance = np.linalg.norm(b-a)
return distance
</code></pre>
<p>Here, I want <code>a</code> to be an element of an array of vectors. So I used numpy vectorize to iterate over the array. (In order to get a better speed than iterating with a for loop)</p>
<pre><code>vfunc = np.vectorize(distance_func)
</code></pre>
<p>I used this as follows to get an array of Euclidean distances</p>
<pre><code>a = np.array([[1,2],[2,3],[3,4],[4,5],[5,6]])
b = np.array([1,2])
vfunc(a,b)
</code></pre>
<p>But this function returns:</p>
<blockquote>
<p>array([[ 0., 0.],
[ 1., 1.],
[ 2., 2.],
[ 3., 3.],
[ 4., 4.]])</p>
</blockquote>
<p>This is the result of performing the operation <code>np.linalg.norm(a-b)</code> individually for the second vector.
How do I use numpy vectorize to get the array of Euclidean distance in this way? </p>
|
<p>You don't need to use <code>vectorize</code>, you can just do:</p>
<pre><code>a = np.array([[1,2],[2,3],[3,4],[4,5],[5,6]])
b = np.array([1,2])
np.linalg.norm(a-b, axis=1)
</code></pre>
<p>which gives:</p>
<pre><code>[ 0. 1.41421356 2.82842712 4.24264069 5.65685425]
</code></pre>
<p>(I assume this is what you want, but if not, please also show the result you expect for your example.)</p>
|
python|arrays|python-2.7|numpy|vectorization
| 4
|
7,063
| 24,381,090
|
Performance issue with reading integers from a binary file at specific locations
|
<p>I have a file with integers stored as binary and I'm trying to extract values at specific locations. It's one big serialized integer array for which I need values at specific indexes. I've created the following code but it's terribly slow compared to the F# version I created before.</p>
<pre><code>import os, struct
def read_values(filename, indices):
# indices are sorted and unique
values = []
with open(filename, 'rb') as f:
for index in indices:
f.seek(index*4L, os.SEEK_SET)
b = f.read(4)
v = struct.unpack("@i", b)[0]
values.append(v)
return values
</code></pre>
<p>For comparison here is the F# version:</p>
<pre><code>open System
open System.IO
let readValue (reader:BinaryReader) cellIndex =
// set stream to correct location
reader.BaseStream.Position <- cellIndex*4L
match reader.ReadInt32() with
| Int32.MinValue -> None
| v -> Some(v)
let readValues fileName indices =
use reader = new BinaryReader(File.Open(fileName, FileMode.Open, FileAccess.Read, FileShare.Read))
// Use list or array to force creation of values (otherwise reader gets disposed before the values are read)
let values = List.map (readValue reader) (List.ofSeq indices)
values
</code></pre>
<p>Any tips on how to improve the performance of the python version, e.g. by usage of numpy ?</p>
<p><strong>Update</strong></p>
<p>Hdf5 works very good (from 5 seconds to 0.8 seconds on my test file): </p>
<pre><code>import tables
def read_values_hdf5(filename, indices):
values = []
with tables.open_file(filename) as f:
dset = f.root.raster
return dset[indices]
</code></pre>
<p><strong>Update 2</strong></p>
<p>I went with the np.memmap because the performance is similar to hdf5 and I already have numpy in production.</p>
|
<p>Depending heavily on your index file size, you might want to read it completely into a numpy array. If the file is not large, a complete sequential read may be faster than a large number of seeks.</p>
<p>One problem with the seek operations is that python operates on buffered input. If the program were written in some lower-level language, the use of unbuffered I/O would be a good idea, as you only need a few values.</p>
<pre><code>import numpy as np
# read the complete index into memory
index_array = np.fromfile("my_index", dtype=np.uint32)
# look up the indices you need (indices being a list of indices)
return index_array[indices]
</code></pre>
<p>If you would anyway read almost all pages (i.e. your indices are random and at a frequency of 1/1000 or more), this is probably faster. On the other hand, if you have a large index file, and you only want to pick a few indices, this is not so fast.</p>
<p>Then one more possibility - which might be the fastest - is to use the python <code>mmap</code> module. Then the file is memory-mapped, and only the pages really required are accessed.</p>
<p>It should be something like this:</p>
<pre><code>import mmap
import struct

with open("my_index", "rb") as f:
    memory_map = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    for i in indices:
        # the index at position i (unpack returns a 1-tuple):
        idx_value = struct.unpack('I', memory_map[4*i:4*i+4])[0]
</code></pre>
<p>(Note, I did not actually test that one, so there may be typing errors. Also, I did not care about endianess, so please check it is correct.) </p>
<p>Happily, these can be combined by using <code>numpy.memmap</code>. It should keep your array on disk but give you numpyish indexing. It should be as easy as:</p>
<pre><code>import numpy as np
index_arr = np.memmap(filename, dtype='uint32', mode='rb')
return index_arr[indices]
</code></pre>
<p>I think this should be the easiest and fastest alternative. However, if "fast" is important, please test and profile.</p>
<hr>
<p>EDIT: As the <code>mmap</code> solution seems to gain some popularity, I'll add a few words about memory mapped files.</p>
<p><strong>What is mmap?</strong></p>
<p>Memory mapped files are not something uniquely pythonic, because memory mapping is something defined in the POSIX standard. Memory mapping is a way to use devices or files as if they were just areas in memory.</p>
<p>File memory mapping is a very efficient way to randomly access fixed-length data files. It uses the same technology as is used with virtual memory. The reads and writes are ordinary memory operations. If they point to a memory location which is not in the physical RAM memory ("page fault" occurs), the required file block (page) is read into memory.</p>
<p>The delay in random file access is mostly due to the physical rotation of the disks (SSD is another story). In average, the block you need is half a rotation away; for a typical HDD this delay is approximately 5 ms plus any data handling delay. The overhead introduced by using python instead of a compiled language is negligible compared to this delay.</p>
<p>If the file is read sequentially, the operating system usually uses a read-ahead cache to buffer the file before you even know you need it. For a randomly accessed big file this does not help at all. Memory mapping provides a very efficient way, because all blocks are loaded exactly when you need and remain in the cache for further use. (This could in principle happen with <code>fseek</code>, as well, because it might use the same technology behind the scenes. However, there is no guarantee, and there is anyway some overhead as the call wanders through the operating system.)</p>
<p><code>mmap</code> can also be used to write files. It is very flexible in the sense that a single memory mapped file can be shared by several processes. This may be very useful and efficient in some situations, and <code>mmap</code> can also be used in inter-process communication. In that case usually no file is specified for <code>mmap</code>, instead the memory map is created with no file behind it.</p>
<p><code>mmap</code> is not very well-known despite its usefulness and relative ease of use. It has, however, one important 'gotcha'. The file size has to remain constant. If it changes during <code>mmap</code>, odd things may happen.</p>
|
python|numpy|binaryfiles
| 4
|
7,064
| 43,772,218
|
fastest way to use numpy.interp on a 2-D array
|
<p>I have the following problem. I am trying to find the fastest way to use the interpolation method of numpy on a 2-D array of x-coordinates.</p>
<pre><code>import numpy as np
xp = [0.0, 0.25, 0.5, 0.75, 1.0]
np.random.seed(100)
x = np.random.rand(10)
fp = np.random.rand(10, 5)
</code></pre>
<p>So basically, <code>xp</code> would be the x-coordinates of the data points, <code>x</code> would be an array containing the x-coordinates of the values I want to interpolate, and <code>fp</code> would be a 2-D array containing y-coordinates of the datapoints.</p>
<pre><code>xp
[0.0, 0.25, 0.5, 0.75, 1.0]
x
array([ 0.54340494, 0.27836939, 0.42451759, 0.84477613, 0.00471886,
0.12156912, 0.67074908, 0.82585276, 0.13670659, 0.57509333])
fp
array([[ 0.89132195, 0.20920212, 0.18532822, 0.10837689, 0.21969749],
[ 0.97862378, 0.81168315, 0.17194101, 0.81622475, 0.27407375],
[ 0.43170418, 0.94002982, 0.81764938, 0.33611195, 0.17541045],
[ 0.37283205, 0.00568851, 0.25242635, 0.79566251, 0.01525497],
[ 0.59884338, 0.60380454, 0.10514769, 0.38194344, 0.03647606],
[ 0.89041156, 0.98092086, 0.05994199, 0.89054594, 0.5769015 ],
[ 0.74247969, 0.63018394, 0.58184219, 0.02043913, 0.21002658],
[ 0.54468488, 0.76911517, 0.25069523, 0.28589569, 0.85239509],
[ 0.97500649, 0.88485329, 0.35950784, 0.59885895, 0.35479561],
[ 0.34019022, 0.17808099, 0.23769421, 0.04486228, 0.50543143]])
</code></pre>
<p>The desired outcome should look like this:</p>
<pre><code>array([ 0.17196795, 0.73908678, 0.85459966, 0.49980648, 0.59893702,
0.9344241 , 0.19840596, 0.45777785, 0.92570835, 0.17977264])
</code></pre>
<p>Again, I'm looking for the fastest way to do this, because it's a simplified version of my problem, which has a length of about 1 million rather than 10.</p>
<p>Thanks</p>
|
<p>So basically you want output equivalent to</p>
<pre><code>np.array([np.interp(x[i], xp, fp[i]) for i in range(x.size)])
</code></pre>
<p>But that <code>for</code> loop is going to make that pretty slow for large <code>x.size</code></p>
<p>This should work:</p>
<pre><code>def multiInterp(x, xp, fp):
i, j = np.nonzero(np.diff(np.array(xp)[None,:] < x[:,None]))
d = (x - xp[j]) / np.diff(xp)[j]
return fp[i, j] + np.diff(fp)[i, j] * d
</code></pre>
<p><strong>EDIT:</strong> This works even better and can handle bigger arrays:</p>
<pre><code>def multiInterp2(x, xp, fp):
i = np.arange(x.size)
j = np.searchsorted(xp, x) - 1
d = (x - xp[j]) / (xp[j + 1] - xp[j])
return (1 - d) * fp[i, j] + fp[i, j + 1] * d
</code></pre>
<p>Testing:</p>
<pre><code>multiInterp2(x, xp, fp)
Out:
array([ 0.17196795, 0.73908678, 0.85459966, 0.49980648, 0.59893702,
0.9344241 , 0.19840596, 0.45777785, 0.92570835, 0.17977264])
</code></pre>
<p>Timing tests with original data:</p>
<pre><code> %timeit multiInterp2(x, xp, fp)
The slowest run took 6.87 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 25.5 µs per loop
%timeit np.concatenate([compiled_interp(x[[i]], xp, fp[i]) for i in range(fp.shape[0])])
The slowest run took 4.03 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 3: 39.3 µs per loop
</code></pre>
<p>Seems to be faster even for a small size of <code>x</code></p>
<p>Let's try something much, much bigger:</p>
<pre><code>n = 10000
m = 10000
xp = np.linspace(0, 1, n)
x = np.random.rand(m)
fp = np.random.rand(m, n)
%timeit b() # kazemakase's above
10 loops, best of 3: 38.4 ms per loop
%timeit multiInterp2(x, xp, fp)
100 loops, best of 3: 2.4 ms per loop
</code></pre>
<p>The advantages scale a lot better even than the complied version of <code>np.interp</code></p>
|
numpy|interpolation|linear-interpolation
| 10
|
7,065
| 72,869,547
|
from_logits in SparseCategoricalCrossEntropy loss function not working as expected
|
<p>After researching, my understanding of logits is that when <code>from_logits=True</code> the output is not normalized (not a probability distribution), and when <code>from_logits=False</code> it is normalized by the softmax function. So if <code>from_logits=False</code>, isn't it supposed to output a probability distribution over the classes?</p>
<p>Here is my code:</p>
<pre><code>(img_train, label_train), (img_test, label_test) = tf.keras.datasets.fashion_mnist.load_data()
train_ds = tf.data.Dataset.from_tensor_slices((img_train/255.0, label_train)).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((img_test/255.0, label_test)).batch(32)
inputs = tf.keras.Input(shape=(28, 28), batch_size=32)
flatten_layer = tf.keras.layers.Flatten()(inputs)
dense = tf.keras.layers.Dense(units=512, activation='relu')(flatten_layer)
outputs = tf.keras.layers.Dense(units=10)(dense)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
history = model.fit(train_ds, validation_data=test_ds, epochs=10)
metric = tf.keras.metrics.SparseCategoricalAccuracy()
for x, y in test_ds:
logits = model(x)
metric.update_state(y, logits)
metric.result()
</code></pre>
<p>printing out logits of last batch gives:</p>
<pre><code>print(logits[0])
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([ 1.3062842 , 2.253938 , -5.295599 , 7.0740013 ,
-17.184162 , -19.801863 , 0.29550672, -81.3132 ,
-9.149338 , -46.353527 ], dtype=float32)>
</code></pre>
<p>Now when I train it with <code>from_logits=False</code></p>
<pre><code>logit_false_model = tf.keras.Model(inputs=inputs, outputs=outputs)
logit_false_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
metric2 = tf.keras.metrics.SparseCategoricalAccuracy()
for x, y in test_ds:
false_logits = logit_false_model(x)
metric2.update_state(y, false_logits)
metric2.result()
</code></pre>
<p>printing out last batch gives:</p>
<pre><code>print(false_logits[0])
<tf.Tensor: shape=(10,), dtype=float32, numpy=
array([-125.248 , -211.52843, -243.62004, -230.45828, -336.3651 ,
-369.41864, -177.14006, -871.6252 , -401.76608, -529.4581 ],
dtype=float32)>
</code></pre>
<p>Why is this the case?</p>
|
<p>You need to use a <code>Softmax</code> activation function in the last layer if you are using <code>from_logits=False</code> (which is the default if not specified) in order to get a probability distribution over the classes as the output.</p>
<pre><code>tf.Tensor(
[1.2676346e-01 1.1662615e-02 1.2230438e-03 8.4634316e-01 6.9824701e-08
9.0843132e-06 1.2575756e-02 2.9462583e-18 1.4229250e-03 5.9792920e-13], shape=(10,), dtype=float32)
</code></pre>
<p><code>Logits</code> are the unnormalized outputs of a neural network.
<code>Softmax</code> is a normalization function that squashes the outputs of a neural network so that they are all between 0 and 1 and sum to 1.</p>
<p>So when you do not use a <code>Softmax</code> activation function in the last layer, the model outputs unnormalized logits regardless of whether you set <code>from_logits=True</code> or <code>from_logits=False</code> in the loss.</p>
<pre><code>tf.Tensor(
[ 2.1156607 -1.1930232 -3.6891053 3.455227 -9.048524 -5.6176786
0.9181027 -70.903275 -2.887693 -38.01756 ], shape=(10,), dtype=float32)
</code></pre>
<p>Please check <a href="https://colab.sandbox.google.com/gist/RenuPatelGoogle/12d3e6bfa7b46839c606bd8a3f5d27ce/from_logits.ipynb" rel="nofollow noreferrer">this</a> gist for your reference.</p>
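<p>To make the two settings line up, here is a sketch of the two equivalent configurations built on the question's <code>dense</code> layer — only the last layer and the loss change, and probabilities can always be recovered from logits explicitly:</p>
<pre><code># Option A: raw logits out of the model, the loss normalizes internally
outputs = tf.keras.layers.Dense(10)(dense)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Option B: softmax inside the model, the loss expects probabilities
outputs = tf.keras.layers.Dense(10, activation='softmax')(dense)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False)

# Either way, probabilities can be computed from logits on demand
probs = tf.nn.softmax(logits)
</code></pre>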
|
python|tensorflow|keras|deep-learning
| 0
|
7,066
| 73,169,558
|
pd.DataFrame.to_sql() is prepending the server name and username to the table name
|
<p>I have a Pandas dataframe <code>df</code> which I want to push to a relational database as a table. I setup a connection object (<code><Connection></code>) using SQLAlchemy (pyodbc is the connection engine), and called the command</p>
<p><code>df.to_sql(<Table_Name>, <Connection>)</code></p>
<p>which I was able to confirm was written as a table to the desired relational database by visual examination of it in SQL Server Management Studio (SSMS). But in the left-hand-side list of databases and their tables in SSMS I see that it has named it</p>
<p><code><Sender_Server>\<Username>.<Table_Name></code></p>
<p>where <code><Sender_Server></code> is (I think) related to the name of the server I ran the Python command from, <code><Username></code> is my username on that server, and <code><Table_Name></code> is the desired table name.</p>
<p>When I right-click on the table and select to query the top one thousand rows I get a query of the form</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM [<Data_Base_Name>].[<Sender_Server>\<Username>].[<Table_Name>]
</code></pre>
<p>which also has the <code><Sender_Server>\<Username></code> info in it. The inclusion of <code><Sender_Server>\<Username></code> is undesired behaviour in our use case.</p>
<p>How can I instead have the data pushed such that</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM [<Data_Base_Name>].[<Table_Name>]
</code></pre>
<p>is the appropriate query?</p>
|
<p>By default, <code>.to_sql()</code> assumes the default schema for the current user unless <code>schema="schema_name"</code> is provided. Say, for example, the database contains a table named <code>dbo.thing</code> and the database user named <code>joan</code> has a default schema named <code>engineering</code>. If Joan does</p>
<pre class="lang-py prettyprint-override"><code>df.to_sql("thing", engine, if_exists="append", index=False)
</code></pre>
<p>it will not append to <code>dbo.thing</code> but will instead try to create an <code>engineering.thing</code> table and append to that. If Joan wants to append to <code>dbo.thing</code> she needs to do</p>
<pre class="lang-py prettyprint-override"><code>df.to_sql("thing", engine, schema="dbo", if_exists="append", index=False)
</code></pre>
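<p>Put together with the question's setup, a sketch of the full call — the server, database and table names are placeholders, as in the question:</p>
<pre><code>from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://<server>/<database>?driver=ODBC+Driver+17+for+SQL+Server",
    fast_executemany=True,
)
df.to_sql("<Table_Name>", engine, schema="dbo", if_exists="replace", index=False)
</code></pre>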
|
python|pandas|sqlalchemy|pyodbc
| 2
|
7,067
| 70,473,374
|
How to scale the x-axis (in datetime format) of dataframes graphs to the same scale ? Python Pandas
|
<p>I have several dataframe graphics on a single figure. The X axis is a timestamp in the format: dd/mm/yy HH:MM:SS</p>
<p>The problem is that the time axis is not on the same scale and I can't put them on the same scale. I tried this but it doesn't work:</p>
<pre><code>df1:
Timestamp,Value
2018-11-13 00:26:43.267725,68.9999980926514
2018-11-13 00:26:52.194564,488.389312744141
2018-11-13 00:26:52.479555,549.0
2018-11-13 00:27:11.812900,535.6854553222661
2018-11-13 00:27:12.080380,549.0
2018-11-13 00:27:12.348114,509.51171875
2018-11-13 00:27:20.346217,47.54024255275726
2018-11-13 00:28:39.572289,68.9999980926514
2018-11-13 00:28:46.264423,86.6078643798828
2018-11-13 00:28:50.782171,549.0
2018-11-13 00:29:12.807073,68.9999980926514
df2:
Timestamp,Value
2018-12-10 20:22:30.088260,120.8003616333008
2018-12-10 20:22:31.893382,549.0
2018-12-10 20:22:49.872620,478.66650390625
2018-12-10 20:22:50.129706,427.010375976562
2018-12-10 20:22:50.437430,353.003936767578
2018-12-10 20:22:50.762730,277.003540039062
2018-12-10 20:22:51.081120,232.50846862793
2018-12-10 20:22:51.338931,198.633895874023
2018-12-10 20:22:51.677225,164.06259918212902
2018-12-10 20:22:52.002505,147.7807312011719
</code></pre>
<pre><code>cols = 1
rows = 2
nb_figs = rows
# create the figure with multiple axes
fig, axes = plt.subplots(nrows=rows, ncols=cols, figsize=(10, 13))
ax = df1.plot(x='Timestamp', y='Value', ax=axes[0])
xlocator = mdates.SecondLocator(interval = 15)
ax.xaxis.set_major_locator(xlocator)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y %H:%M:%S'))
ax = df2.plot(x='Timestamp', y='Value', ax=axes[1])
xlocator = mdates.SecondLocator(interval = 15)
ax.xaxis.set_major_locator(xlocator)
ax.xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y %H:%M:%S'))
</code></pre>
<p>The result is:</p>
<p><a href="https://i.stack.imgur.com/FLAdi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FLAdi.png" alt="enter image description here" /></a></p>
<p>I want the two plots to span the same duration (say 20 minutes), not necessarily show the same time interval. Am I missing a parameter? Or do I need another method?</p>
|
<p>You can <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axis.Axis.get_view_interval.html" rel="nofollow noreferrer">take</a> the view interval of both Axes and enlarge the smaller one to match the bigger, then <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xlim.html" rel="nofollow noreferrer">set</a> the new interval around its center:</p>
<pre><code>fig, axes = plt.subplots(nrows=2, figsize=(10, 8), layout='constrained')
ax1 = df1.plot(x='Timestamp', y='Value', ax=axes[0])
xlocator = mdates.SecondLocator(interval = 15)
ax1.xaxis.set_major_locator(xlocator)
ax1.xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y %H:%M:%S'))
ax2 = df2.plot(x='Timestamp', y='Value', ax=axes[1])
xlocator = mdates.SecondLocator(interval = 15)
ax2.xaxis.set_major_locator(xlocator)
ax2.xaxis.set_major_formatter(mdates.DateFormatter('%d/%m/%y %H:%M:%S'))
vi1 = ax1.xaxis.get_view_interval()
vi2 = ax2.xaxis.get_view_interval()
if vi1.ptp() > vi2.ptp():
ax2.set_xlim(vi2.mean() - vi1.ptp() / 2, vi2.mean() + vi1.ptp() / 2)
else:
ax1.set_xlim(vi1.mean() - vi2.ptp() / 2, vi1.mean() + vi2.ptp() / 2)
</code></pre>
<p><a href="https://i.stack.imgur.com/3MuSD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3MuSD.png" alt="enter image description here" /></a></p>
<p>Instead of <code>get_view_interval()</code> you could also use <code>get_xlim()</code> but the former has the advantage that it returns the interval as a numpy array that makes the calculation of the new interval a bit easier.</p>
<br>
If you don't want to center the data but instead have both plots start at the left edge, you can simply use:
<pre><code>if vi1.ptp() > vi2.ptp():
ax2.set_xlim(vi2[0], vi2[0] + vi1.ptp())
else:
ax1.set_xlim(vi1[0], vi1[0] + vi2.ptp())
</code></pre>
|
python|pandas|datetime|matplotlib
| 0
|
7,068
| 70,650,372
|
why python pandas not able to generate xlsx file?
|
<p>I am trying to achieve below tasks:</p>
<p><strong>traverse_dir() function</strong></p>
<pre><code>- read a root directory, and get the names of the sub directories.
- read the sub directories and see if 'installed-files.json' file is present.
- if the 'installed-files.json' file is present in all the directories, then open them and create an Excel file out of the JSON files that are present in all the sub directories.
</code></pre>
<p><strong>filter_apk() function</strong></p>
<pre><code>- read the excel file generated in the first function and create another excel file that will store only file names ending with '.apk'.
</code></pre>
<p>Below is the code snippet:</p>
<pre><code>def traverse_dir(rootDir, file_name):
dir_names = []
for names in os.listdir(rootDir):
entry_path = os.path.join(names)
if os.path.isdir(entry_path):
dir_names.append(entry_path)
for i in dir_names:
if file_name in i:
with open(file_name) as jf:
data = json.load(jf)
df = pd.DataFrame(data)
new_df = df[df.columns.difference(['SHA256'])]
new_df.to_excel('abc.xlsx')
def filter_apk():
traverse_dir(rootDir, file_name)
old_xl = pd.read_excel('abc.xlsx')
a = old_xl[old_xl["Name"].str.contains("\.apk")]
a.to_excel('zybg.xlsx')
rootDir = '<root path where sub folders resides>'
file_name = 'installed-files.json'
filter_apk()
</code></pre>
<p>Note:</p>
<ul>
<li><p>I have tested the code separately on a single folder, and it's working like a charm. I am only facing issues when I am trying to work with multiple directories.</p>
</li>
<li><p>In fact, in the 1st function <code>traverse_dir()</code>, I am able to list the sub directories.</p>
</li>
</ul>
<p>I am getting below errors while executing the program.</p>
<pre><code>Traceback (most recent call last):
File "Jenkins.py", line 36, in <module>
filter_apk()
File "Jenkins.py", line 30, in filter_apk
old_xl = pd.read_excel('abc.xlsx')
with open(filename, "rb") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'abc.xlsx'
</code></pre>
<p>Why is the file not getting generated? Any suggestions?</p>
<p><strong>modified code</strong></p>
<pre><code>def traverse_dir(rootDir, file_name):
dir_names = []
for names in os.listdir(rootDir):
entry_path = os.path.join(rootDir, names)
if os.path.isdir(entry_path):
dir_names.append(entry_path)
for fil_name in dir_names:
file_path = os.path.join(entry_path, fil_name, file_name)
print(file_path)
if os.path.isfile(file_path):
with open(file_path) as jf:
data = json.load(jf)
df = pd.DataFrame(data)
df1 = pd.DataFrame(data)
new_df = df[df.columns.difference(['SHA256'])]
new_df1 = df1[df.columns.difference(['SHA256'])]
with pd.ExcelWriter('abc.xlsx') as writer:
new_df.to_excel(writer, sheet_name='BRA', index=False)
new_df1.to_excel(writer, sheet_name='CNA', index=False)
else:
raise FileNotFoundError
rootDir = <path to subdirs
file_name = 'installed-files.json'
traverse_dir(rootDir, file_name)
</code></pre>
|
<p>The main issue is that <code>if file_name in i:</code> is always false, hence no xlsx file is created.
You may need to make some changes to test for the file existence, for example:</p>
<pre><code>import os
def traverse_dir(rootDir, file_name):
dir_names = []
for names in os.listdir(rootDir):
entry_path = os.path.join(names)
if os.path.isdir(entry_path):
dir_names.append(entry_path)
for i in dir_names:
file_path=os.path.join(rootDir,i,file_name)
if os.path.isfile(file_path):
with open(file_path) as jf:
data = json.load(jf)
df = pd.DataFrame(data)
new_df = df[df.columns.difference(['SHA256'])]
new_df.to_excel('abc.xlsx')
def filter_apk():
traverse_dir(rootDir, file_name)
old_xl = pd.read_excel('abc.xlsx')
a = old_xl[old_xl["Name"].str.contains("\.apk")]
a.to_excel('zybg.xlsx')
rootDir = '<root path where sub folders resides>'
file_name = 'installed-files.json'
filter_apk()
</code></pre>
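<p>A slightly more compact variant of the directory walk with <code>pathlib</code>, shown here as a sketch that keeps the question's variable names and collects every matching JSON before writing a single Excel file:</p>
<pre><code>import json
from pathlib import Path
import pandas as pd

frames = []
for sub_dir in Path(rootDir).iterdir():
    json_path = sub_dir / file_name
    if sub_dir.is_dir() and json_path.is_file():
        with open(json_path) as jf:
            frames.append(pd.DataFrame(json.load(jf)))

if frames:
    combined = pd.concat(frames, ignore_index=True)
    combined.drop(columns=['SHA256'], errors='ignore').to_excel('abc.xlsx', index=False)
</code></pre>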
|
python|python-3.x|pandas|dataframe
| 1
|
7,069
| 42,832,829
|
How to use Tensorflow's batch_sequences_with_states utility
|
<p>I am trying to build a generative RNN using Tensorflow. I have a preprocessed dataset which is a list of <code>sequence_length x 2048 x 2</code> numpy arrays. The sequences have different lengths. I have been looking through examples and documentation but I really couldn't understand, for example, what <code>key</code> is, or how I should create the <code>input_sequences</code> dictionary, etc.</p>
<p>So how should one format a list of numpy arrays, each of which represent a sequence of rank n (2 in this case) tensors, in order to be able to use this <code>batch_sequences_with_states</code> method?</p>
|
<p><strong>Toy Implementations</strong></p>
<p>I tried this and I will be glad to share my findings with you. It is a toy example. I attempted to create an example that works and observe how the output varies. In particular I used a case study of lstm. For you, you can define a conv net. Feel free to add more input and adjust as usual and follow the doc.
<a href="https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.training/splitting_sequence_inputs_into_minibatches_with_state_saving#batch_sequences_with_states" rel="noreferrer">https://www.tensorflow.org/versions/r0.11/api_docs/python/contrib.training/splitting_sequence_inputs_into_minibatches_with_state_saving#batch_sequences_with_states</a>
There are other more subtle examples I tried but I keep this simple version to show how the operation can be useful. In particular add more elements to the dictionaries (input sequence and context sequence) and observe the changes.</p>
<p><strong>Two Approaches</strong></p>
<p>Basically I will use two approaches:</p>
<ol>
<li>tf.contrib.training.batch_sequences_with_states</li>
<li>tf.train.batch( )</li>
</ol>
<p>I will start with the first one because it will directly helpful then I will show how to solve similar problem with train.batch.</p>
<p>I will basically be generate toy numpy arrays and tensors and use it for testing the operations</p>
<pre><code>import tensorflow as tf
batch_size = 32
num_unroll = 20
num_enqueue_threads = 20
lstm_size = 8
cell = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)
#state size
state_size = cell.state_size[0];
# Initial states
initial_state_values = tf.zeros((state_size,), dtype=tf.float32)
initial_states = {"lstm_state": initial_state_values}
# Key should be string
#I used x as input sequence and y as input context. So that the
# keys should be 2.
key = ["1","2"]
#Toy data for our sample
x = tf.range(0, 12, name="x")
y = tf.range(12,24,name="y")
# convert to float
#I converted to float so as not to raise a type mismatch error
x=tf.to_float(x)
y=tf.to_float(y)
#the input sequence as dictionary
#This is needed according to the tensorflow doc
sequences = {"x": x }
#Context Input
context = {"batch1": y}
# Train batch with sequence state
batch_new = tf.contrib.training.batch_sequences_with_states(
input_key=key,
input_sequences=sequences,
input_context=context,
initial_states=initial_states,
num_unroll=num_unroll,
batch_size=batch_size,
input_length = None,
pad = True,
num_threads=num_enqueue_threads,
capacity=batch_size * num_enqueue_threads * 2)
# To test what we have got type and observe the output of
# the following
# In short once in ipython notebook
# type batch_new.[press tab] to see all options
batch_new.key
batch_new.sequences
#splitting of input. This generates the input per unroll step
inputs = batch_new.sequences["x"]
inputs_by_time = tf.split(inputs, num_unroll)
assert len(inputs_by_time) == num_unroll
# Get lstm or conv net output
lstm_output, _ = tf.contrib.rnn.static_state_saving_rnn(
cell,
inputs_by_time,
state_saver=batch_new,
state_name=("lstm_state","lstm_state"))
</code></pre>
<p><strong>Create Graph and Queue as Usual</strong></p>
<p>The parts with # and * can be further adapted to suit requirement.</p>
<pre><code> # Create the graph, etc.
init_op = tf.global_variables_initializer()
#Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (like the epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# For the part below uncomment
#*those comments with asterics to do other operations
#*try:
#* while not coord.should_stop():
#*Run training steps or whatever
#*sess.run(train_op) # uncomment to run other ops
#*except tf.errors.OutOfRangeError:
#print('Done training -- epoch limit reached')
#*finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
</code></pre>
<p><strong>Second Approach</strong></p>
<p>You can also use train.batch in a very interesting way:</p>
<pre><code>import tensorflow as tf
#[0, 1, 2, 3, 4 ,...]
x = tf.range(0, 11, name="x")
# A queue that outputs 0,1,2,3,..
# slice end is useful for dequeuing
slice_end = 10
# instantiate variable y
y = tf.slice(x, [0], [slice_end], name="y")
# Reshape y
y = tf.reshape(y,[10,1])
y=tf.to_float(y, name='ToFloat')
</code></pre>
<p><strong>Important</strong>
Note the use of dynamic and enqueue many with padding. Feel free to play with both options. And compare output!</p>
<pre><code>batched_data = tf.train.batch(
tensors=[y],
batch_size=10,
dynamic_pad=True,
#enqueue_many=True,
name="y_batch"
)
batch_size = 128 ;
lstm_cell = tf.contrib.rnn.LSTMCell(batch_size,forget_bias=1,state_is_tuple=True)
val, state = tf.nn.dynamic_rnn(lstm_cell, batched_data, dtype=tf.float32)
</code></pre>
<p><strong>Conclusion</strong></p>
<p>The aim is to show that by simple examples we can get insight into the
details of the operations. You can adapt it to convolutional net in your case. </p>
<p>Hope this helps!</p>
|
python|numpy|tensorflow
| 5
|
7,070
| 42,816,129
|
Select first few rows of pandas dataframe with a certain value in the column
|
<p>I am trying to set values of a column in a pandas dataframe based on the value of another column, </p>
<pre><code>df2.loc[df2['col1',len] == val, 'col2'] = df1['col2']
</code></pre>
<p>Above code works fine, however, now the problem is that I want to set values only for first few rows, something like below:</p>
<pre><code>len1 = len(df1.index)
df2.loc[df2['col1',len1] == val, 'col2'] = df1['col2']
</code></pre>
<p>But I am getting following error:</p>
<blockquote>
<pre><code>Traceback (most recent call last):
File "...\lib\site-packages\pandas\indexes\base.py", line 1945, in get_loc
return self._engine.get_loc(key)
File "pandas\index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas\index.c:4154)
File "pandas\index.pyx", line 159, in pandas.index.IndexEngine.get_loc (pandas\index.c:4018)
File "pandas\hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas\hashtable.c:12368)
</code></pre>
</blockquote>
<p>Any help will be highly appreciated.</p>
|
<p>Change it to this:</p>
<pre><code>df2.iloc[:len(df1.index),].ix[df2.col1 == val, 'col2'] = df1['col2']
</code></pre>
<p>Here it is working not sure what's wrong with it. </p>
<pre><code> name gender age occupation years_of_school married
0 Bob M 37 Dentist 20 N
1 Sally F 21 Student 16 N
2 Jim M 55 Carpenter 12 Y
3 Dan M 27 Teacher 18 Y
4 Rose F 14 Student 9 N
5 Emily F 65 Retired 22 Y
age_range
0 mid
1 young
2 old
3 young
4 young
</code></pre>
<p>Here is a sample query on it:</p>
<pre><code>df.iloc[:len(df1.index),].ix[df.age > 25, 'occupation'] = df1['age_range']
</code></pre>
<p>Here is what it returns:</p>
<pre><code> name gender age occupation years_of_school married
0 Bob M 37 mid 20 N
1 Sally F 21 Student 16 N
2 Jim M 55 old 12 Y
3 Dan M 27 young 18 Y
4 Rose F 14 Student 9 N
5 Emily F 65 Retired 22 Y
</code></pre>
<p>I am not receiving any "setting a value on a copy of a slice" warnings. That might be because of the way you created or massaged your DataFrames previously, but I just did this with no errors and no problems. Unless I misunderstood your original question, I don't understand the downvotes; and even then, no explanation was given so that I can fix it.</p>
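<p>On current pandas, where <code>.ix</code> has been removed, an equivalent sketch with <code>.loc</code> on the positional head of the frame would be:</p>
<pre><code>n = len(df1.index)
head = df2.iloc[:n]                       # first few rows only
rows = head.index[head['col1'] == val]    # rows in the head matching the condition
df2.loc[rows, 'col2'] = df1['col2']
</code></pre>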
|
python|pandas
| 1
|
7,071
| 42,713,161
|
pandas unstack the list
|
<p>I encounter a problem:</p>
<pre><code>import pandas
data=pandas.DataFrame({'data1':[[('m',2)],[('n',3),('y',4)],[('x',3),('y',5)],[('m',3)]]},
index=[['a','a','c','d'],[1,1,3,4]])
</code></pre>
<p>the data like this:</p>
<pre><code> data1
a 1 [(m, 2)]
1 [(n, 3), (y, 4)]
c 3 [(x, 3), (y, 5)]
d 4 [(m, 3)]
</code></pre>
<p>I wanted the result like this:</p>
<pre><code> key value
a 1 m 2
1 n 3
1 y 4
c 3 x 3
3 y 5
d 4 m 3
</code></pre>
<p>thx!</p>
|
<p>You can use list comprehension for creating <code>df</code> by tuples and then reshape by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="nofollow noreferrer"><code>stack</code></a>:</p>
<pre><code>df = pd.DataFrame([dict(x) for x in data.data1], index=data.index)
print (df)
m n x y
a 1 2.0 NaN NaN NaN
1 NaN 3.0 NaN 4.0
c 3 NaN NaN 3.0 5.0
d 4 3.0 NaN NaN NaN
df = df.stack().astype(int).reset_index(level=2)
df.columns = ['key','value']
print (df)
key value
a 1 m 2
1 n 3
1 y 4
c 3 x 3
3 y 5
d 4 m 3
</code></pre>
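<p>On pandas 0.25+ the intermediate wide frame can be skipped by exploding the lists and splitting the tuples — a sketch that produces the requested <code>key</code>/<code>value</code> columns directly:</p>
<pre><code>exploded = data['data1'].explode()        # one (key, value) tuple per row
df = pd.DataFrame(exploded.tolist(), index=exploded.index,
                  columns=['key', 'value'])
</code></pre>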
|
python|pandas
| 1
|
7,072
| 42,608,621
|
How to create layers of arrays (arrays in arrays) with given sizes
|
<p>I want to create arrays in arrays in an array in a flexible way. The question is probably best illustrated by an example.</p>
<p>We want to construct an array with three layers. The size of each layer is given by the following:</p>
<pre><code>S1, S2, S3 = 3, [3,2,2], 4
</code></pre>
<p>Each constant represents a size of the array I want to construct. S1 represents the outer layer and corresponds to len(S2). S2 represents the next layer, where each array must have the size given by the corresponding element in the list. Lastly, S3 represents the size of each array in the innermost layer.</p>
<p>I know one way to do this. </p>
<pre><code>V = np.array([
np.array([np.zeros(S3), np.zeros(S3), np.zeros(S3)]),
np.array([np.zeros(S3), np.zeros(S3)]),
np.array([np.zeros(S3), np.zeros(S3)])
])
</code></pre>
<p>But this is quite impractical if either S1 or S2 changes. How do you create such an array for any S1 and S2 as efficiently as possible? </p>
<p>I hope the question made some sense. If not, I will do my best to expand or reformulate the question. </p>
<p>Thanks in advance. </p>
|
<p>You could do:</p>
<pre><code>[[[0]*s3 for _ in range(size)] for size in s2]
</code></pre>
<p><code>s1</code> is really not needed as information, since <code>s1 == len(s2)</code>.</p>
|
python|arrays|numpy|multidimensional-array
| 2
|
7,073
| 27,242,390
|
Finding the sum of grouped data by column
|
<p>My grouped data looks like:</p>
<pre><code>deviceid time total_sent
022009f075929be71975ce70db19cd47780b112f 1980-January 36 4
52 1
94 1
211 1
278 1
318 2
370 1
426 1
430 1
435 1
560 1
674 1
797 1
813 4
816 1
ff5b22df4ab9207bb6709cddef6d95c655565578 2013-August 11308408 4
12075616 1
17933654 1
22754808 12
22754987 1
22755166 3
22755345 4
22788586 4
22788765 2
22788944 2
22791830 1
22792546 1
22796843 1
22797201 2
22797380 2
</code></pre>
<p>Where the last column represents the count. I obtained this grouped representation using the expression: </p>
<pre><code>data1.groupby(['deviceid', 'time', 'total_sent'])
</code></pre>
<p>How do I sum the total_sent per month?</p>
<pre><code>deviceid time sum
022009f075929be71975ce70db19cd47780b112f 1980-January 6210
ff5b22df4ab9207bb6709cddef6d95c655565578 2013-August XXXX
</code></pre>
|
<p>Since <code>total_sent</code> column is to be summed, it shouldn't be within the groupby keys. You can try the following:</p>
<pre><code>data1.groupby(['deviceid', 'time']).agg({'total_sent': sum})
</code></pre>
<p>which will sum the <code>total_sent</code> column for each group, indexed by <code>deviceid</code> and <code>time</code>.</p>
|
python|pandas|data-analysis
| 1
|
7,074
| 14,444,916
|
Pandas obtain share from DataFrame
|
<p>I need a smart and concise way to get from the data_1 dataframe to the data_3 dataframe.
Right now I can easily get only as far as dataframe 2.</p>
<pre><code>DATA_1
key SEGM1 SEGM2 VAL
A K X 1
B K X 2
C K X 3
D K Y 4
E K Y 5
F J Y 6
G J Z 7
H J Z 8
I J Z 9
DATA_2
SEGM1 SEGM2 VAL
K X 6
Y 9
J Y 6
Z 24
DATA_3
SEGM1 SEGM2 VAL
K X 40%
Y 60%
J Y 20%
Z 80%
</code></pre>
<p>Thanks a lot!</p>
<p>M</p>
|
<p>Here's a one-liner:</p>
<pre><code>In [1]: df
Out[1]:
SEGM1 SEGM2 VAL
key
A K X 1
B K X 2
C K X 3
D K Y 4
E K Y 5
F J Y 6
G J Z 7
H J Z 8
I J Z 9
</code></pre>
<p>Use the <code>DataFrame.div</code> function to divide two dataframes. The first dataframe is grouped by the "inner levels" for which you want to calculate shares and then summed. The second dataframe is grouped by the "outer level" which serves as the denominator for the share calculation. You have to pass <code>level=0</code> to the <code>div</code> function which refers to the multi-index level SEGM1.</p>
<pre><code>In [2]: df.groupby(['SEGM1','SEGM2'])[['VAL']].sum().div(df.groupby('SEGM1').sum(),level=0)
Out[2]:
VAL
SEGM1 SEGM2
J Y 0.2
Z 0.8
K X 0.4
Y 0.6
</code></pre>
<p>Numerator DataFrame:</p>
<pre><code>In [1]: df.groupby(['SEGM1','SEGM2'])[['VAL']].sum()
Out[1]:
VAL
SEGM1 SEGM2
J Y 6
Z 24
K X 6
Y 9
</code></pre>
<p>Denominator DataFrame:</p>
<pre><code>In [2]: df.groupby('SEGM1').sum()
Out[2]:
VAL
SEGM1
J 30
K 15
</code></pre>
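<p>If percentage strings like those in DATA_3 are wanted rather than fractions, the shares can be scaled and formatted afterwards, e.g. (continuing with <code>df</code> from above):</p>
<pre><code>shares = (df.groupby(['SEGM1', 'SEGM2'])[['VAL']].sum()
            .div(df.groupby('SEGM1')[['VAL']].sum(), level=0))
shares['VAL'] = (shares['VAL'] * 100).map('{:.0f}%'.format)
print(shares)
#              VAL
# SEGM1 SEGM2
# J     Y      20%
#       Z      80%
# K     X      40%
#       Y      60%
</code></pre>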
|
dataframe|grouping|pandas
| 1
|
7,075
| 25,169,066
|
Is there a Pandas equivalent to each_slice to operate on dataframes
|
<p>I am wondering if there is a Python or Pandas function that approximates the Ruby #each_slice method. In this example, the Ruby #each_slice method will take the array or hash and break it into groups of 100. </p>
<pre><code>var.each_slice(100) do |batch|
# do some work on each batch
</code></pre>
<p>I am trying to do this same operation on a Pandas dataframe. Is there a Pythonic way to accomplish the same thing?</p>
<p>I have checked out this answer: <a href="https://stackoverflow.com/questions/3833589/python-equivalent-of-rubys-each-slicecount">Python equivalent of Ruby's each_slice(count)</a></p>
<p>However, it is old and is not Pandas specific. I am checking it out but am wondering if there is a more direct method.</p>
|
<p>There isn't a built-in method as such, but you can use numpy's <code>array_split</code>: you can pass the dataframe to it along with the number of slices.</p>
<p>In order to get ~100 size slices you'll have to calculate this which is simply the number of rows/100:</p>
<pre><code>import numpy as np
# df.shape returns the dimensions as a tuple; the first dimension is the number of rows
np.array_split(df, df.shape[0] // 100)
</code></pre>
<p>This returns a list of dataframes sliced as evenly as possible</p>
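<p>A short runnable sketch of how the resulting slices could be consumed, mirroring the Ruby loop (the <code>max(..., 1)</code> guard is just an assumption to avoid asking for zero sections on small frames):</p>
<pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'a': range(1000)})

for batch in np.array_split(df, max(df.shape[0] // 100, 1)):
    # do some work on each batch; each `batch` is itself a DataFrame
    print(len(batch))
</code></pre>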
|
python|pandas
| 1
|
7,076
| 26,790,050
|
pandas conditional aggregation
|
<p>I want to group the below dataframe based on 'id', then have the aggregate sums of 'flow' for all values of 'id' except 0; those should stay independent. What is the best solution?</p>
<p>Original:</p>
<pre><code>id flow
0 1
0 1
1 1
1 1
2 1
2 1
</code></pre>
<p>Aggregated:</p>
<pre><code>id flow
0 1
0 1
1 2
2 2
</code></pre>
|
<p>One way would be to use <code>transform</code> to assign the new flow values back and then drop duplicates:</p>
<pre><code>In [48]:
df.loc[df['id'] != 0, 'flow'] = df.groupby('id')['flow'].transform('sum')
df.drop(df[df['id']!=0].drop_duplicates().index)
Out[48]:
id flow
0 0 1
1 0 1
3 1 2
5 2 2
</code></pre>
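<p>An alternative sketch that avoids the drop step: keep the <code>id == 0</code> rows as they are and concatenate them with the grouped sums of the remaining ids:</p>
<pre><code>import pandas as pd

df = pd.DataFrame({'id': [0, 0, 1, 1, 2, 2], 'flow': [1, 1, 1, 1, 1, 1]})

zeros = df[df['id'] == 0]                                          # left untouched
summed = df[df['id'] != 0].groupby('id', as_index=False)['flow'].sum()
result = pd.concat([zeros, summed], ignore_index=True)
print(result)
#    id  flow
# 0   0     1
# 1   0     1
# 2   1     2
# 3   2     2
</code></pre>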
|
python|pandas|aggregation
| 2
|
7,077
| 39,198,108
|
pandas standalone series and from dataframe different behavior
|
<p>Here is my code and warning message. If I change <code>s</code> to be a standalone <code>Series</code> by using <code>s = pd.Series(np.random.randn(5))</code>, there will no such errors. Using Python 2.7 on Windows.</p>
<p>It seems Series created from standalone and Series created from a column of a data frame are different behavior? Thanks.</p>
<p>My purpose is to change the Series value itself, other than change on a copy.</p>
<p><strong>Source code</strong>,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
#s = pd.Series(np.random.randn(5))
for i in range(len(s)):
if s.iloc[i] > 0:
s.iloc[i] = s.iloc[i] + 1
else:
s.iloc[i] = s.iloc[i] - 1
</code></pre>
<p><strong>Warning message</strong>,</p>
<pre><code>C:\Python27\lib\site-packages\pandas\core\indexing.py:132: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self._setitem_with_indexer(indexer, value)
</code></pre>
<p><strong>Content of 123.csv</strong>,</p>
<pre><code>c_a,c_b,c_c,c_d
hello,python,numpy,0.0
hi,python,pandas,1.0
ho,c++,vector,0.0
ho,c++,std,1.0
go,c++,std,0.0
</code></pre>
<p><strong>Edit 1</strong>, seems lambda solution does not work, tried to print <code>s</code> before and after, the same value,</p>
<pre><code>import pandas as pd
sample = pd.read_csv('123.csv', header=None, skiprows=1,
dtype={0:str, 1:str, 2:str, 3:float})
sample.columns = pd.Index(data=['c_a', 'c_b', 'c_c', 'c_d'])
sample['c_d'] = sample['c_d'].astype('int64')
s = sample['c_d']
print s
s.apply(lambda x:x+1 if x>0 else x-1)
print s
0 0
1 1
2 0
3 1
4 0
Name: c_d, dtype: int64
Backend TkAgg is interactive backend. Turning interactive mode on.
0 0
1 1
2 0
3 1
4 0
</code></pre>
<p>regards,
Lin</p>
|
<p>By doing <code>s = sample['c_d']</code>, if you make a change to the value of <code>s</code> then your original Dataframe <code>sample</code> also changes. That's why you got the warning.</p>
<p>You can do <code>s = sample['c_d'].copy()</code> instead, so that changing the value of <code>s</code> doesn't change the value of the <code>c_d</code> column of the Dataframe <code>sample</code>.</p>
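<p>Regarding the edit: <code>apply</code> does not modify the Series in place; it returns a new one, which has to be assigned back (using the <code>sample</code> frame from the question):</p>
<pre><code>s = sample['c_d'].copy()
s = s.apply(lambda x: x + 1 if x > 0 else x - 1)   # assign the result back

# or write the result straight back into the DataFrame column
sample['c_d'] = sample['c_d'].apply(lambda x: x + 1 if x > 0 else x - 1)
</code></pre>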
|
python|python-2.7|pandas|numpy|dataframe
| 1
|
7,078
| 19,614,379
|
How do you calculate expanding mean on time series using pandas?
|
<p>How would you create a column(s) in the below pandas DataFrame where the new columns are the expanding mean/median of 'val' for each 'Mod_ID_x'. Imagine this as if were time series data and 'ID' 1-2 was on Day 1 and 'ID' 3-4 was on Day 2.</p>
<p>I have tried every way I could think of but just can't seem to get it right.</p>
<pre><code>left4 = pd.DataFrame({'ID': [1,2,3,4],'val': [10000, 25000, 20000, 40000],
'Mod_ID': [15, 35, 15, 42],'car': ['ford','honda', 'ford', 'lexus']})
right4 = pd.DataFrame({'ID': [3,1,2,4],'color': ['red', 'green', 'blue', 'grey'], 'wheel': ['4wheel','4wheel', '2wheel', '2wheel'],
'Mod_ID': [15, 15, 35, 42]})
df1 = pd.merge(left4, right4, on='ID').drop('Mod_ID_y', axis=1)
</code></pre>
<p><img src="https://i.stack.imgur.com/tIgBH.gif" alt="Pandas DataFrame"></p>
|
<p>Hard to test properly on your DataFrame, but you can use something like this:</p>
<pre><code>>>> df1["exp_mean"] = df1[["Mod_ID_x","val"]].groupby("Mod_ID_x").transform(pd.expanding_mean)
>>> df1
ID Mod_ID_x car val color wheel exp_mean
0 1 15 ford 10000 green 4wheel 10000
1 2 35 honda 25000 blue 2wheel 25000
2 3 15 ford 20000 red 4wheel 15000
3 4 42 lexus 40000 grey 2wheel 40000
</code></pre>
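<p>Note that <code>pd.expanding_mean</code> has since been removed from pandas; in recent versions the equivalent per-group expanding mean can be written as a sketch like:</p>
<pre><code>df1["exp_mean"] = (df1.groupby("Mod_ID_x")["val"]
                      .transform(lambda s: s.expanding().mean()))
</code></pre>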
|
python-2.7|pandas|dataframe|time-series
| 2
|
7,079
| 29,087,284
|
create a feature vector using pandas or python
|
<p>i have an a binary classifier which takes a 200 element input feature vector as shown below </p>
<pre><code> [ id, v1, v2, ...,v190, v200, class]
[ 7, 0, 0, ..., 0, 0, 0 ],
[ 8, 0, 1, ..., 0, 0, 1 ],
[ 9, 0, 0, ..., 0, 0, 1 ],
</code></pre>
<p>For each element X it may have any set of attributes in v1-v200</p>
<pre><code> sql = 'SELECT x_id, x_attr FROM elements WHERE x_hash = %s'
cur.execute(sql, (x_hash,))
x1 = cur.fetchone()
x1 # x1 returns the id and a list of attributes
(123, [v2,v56,v200])
</code></pre>
<p>given that output i want to create a feature vector such as the one above, if a attribute in the list matches the any attribute in set v1- v200 then it will be set as a 1. </p>
<pre><code> [ id, v1, v2,...,v56,...,v190, v200, class ],
[ 123, 0, 1,...,1,..., 0, 1, ? ],
</code></pre>
<p>how can i do it in pandas or python? </p>
|
<p>First initializing a pandas dataframe and then building on your example:</p>
<pre><code>df = pd.DataFrame(None, columns=['v'+str(i) for i in range(1,201)])
sql = 'SELECT x_id, x_attr FROM elements WHERE x_hash = %s'
cur.execute(sql, (x_hash,))
x1_id, features = cur.fetchone()
df.loc[x1_id] = 0 # Initializes all values for the ID to zero.
df.loc[x1_id, features] = 1 # Sets relevant features to a value of one.
</code></pre>
<p>I haven't included class, as I wasn't sure how you were using it.</p>
|
python|numpy|pandas
| 2
|
7,080
| 33,891,923
|
How do you get the number of masked rows in a numpy masked array?
|
<p>So I have a numpy array that contains a number of numpy arrays where some of them have masked values that looks like the one below:</p>
<pre><code>[[1 2 3]
[-- -- --]
[7 8 9]]
</code></pre>
<p>What is the most efficient way to get the number of masked numpy arrays (meaning something like [-- -- --]) in the bigger numpy array (in this case it would be 1).</p>
<p>Thank you!</p>
|
<p><a href="http://docs.scipy.org/doc/numpy-1.10.0/reference/maskedarray.generic.html#accessing-the-mask" rel="nofollow">Masked arrays have a <code>.mask</code> attribute</a> consisting of a boolean array that is <code>True</code> wherever a value is masked. If you want to know how many rows contained <em>at least one</em> masked value, you could use:</p>
<pre><code>x.mask.any(axis=1).sum()
</code></pre>
<p>where <code>x</code> is your masked array. If you are only interested in rows where <em>all</em> of the values are masked, you could use:</p>
<pre><code>x.mask.all(axis=1).sum()
</code></pre>
<p>Obviously in your example these would both give a result of 1.</p>
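<p>A quick runnable check of both counts on the example data:</p>
<pre><code>import numpy.ma as ma

x = ma.masked_array([[1, 2, 3], [4, 5, 6], [7, 8, 9]],
                    mask=[[0, 0, 0], [1, 1, 1], [0, 0, 0]])

print(x.mask.all(axis=1).sum())   # 1 -> rows that are fully masked
print(x.mask.any(axis=1).sum())   # 1 -> rows with at least one masked value
</code></pre>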
|
python|arrays|numpy
| 4
|
7,081
| 23,719,203
|
Pandas Dataframe selecting groups with minimal cardinality
|
<p>I have a problem where I need to take groups of rows from a data frame where the number of items in a group exceeds a certain number (cutoff). For those groups, I need to take some head rows and the tail row.</p>
<p>I am using the code below</p>
<pre><code>train = train[train.groupby('id').id.transform(len) > headRows]
groups = pd.concat([train.groupby('id').head(headRows),train.groupby('id').tail(1)]).sort_index()
</code></pre>
<p>This works. But the first line, it is very slow :(. 30 minutes or more.</p>
<p>Is there any way to make the first line faster ? If I do not use the first line, there are duplicate indices from the result of the second line, which messes up things.</p>
<p>Thanks in advance
Regards</p>
<p>Note:
My train data frame has around 70,000 groups of varying group size over around 700,000 rows . It actually follows from my other question as can be seen here <a href="https://stackoverflow.com/questions/23580009/data-processing-with-adding-columns-dynamically-in-python-pandas-dataframe">Data processing with adding columns dynamically in Python Pandas Dataframe</a>.
Jeff gave a great answer there, but it fails if the group size is less or equal to parameter I pass in head(parameter) when concatenating my rows as in Jeffs answer : In [31]: groups = concat.....</p>
|
<p>Use <code>groupby/filter</code>:</p>
<pre><code>>>> df.groupby('id').filter(lambda x: len(x) > cutoff)
</code></pre>
<p>This will just return the rows of your dataframe where the size of the group is greater than your cutoff. Also, it should perform quite a bit better. I timed <code>filter</code> here with a dataframe with 30,039 'id' groups and a little over 4 million observations:</p>
<pre><code>In [9]: %timeit df.groupby('id').filter(lambda x: len(x) > 12)
1 loops, best of 3: 12.6 s per loop
</code></pre>
|
python|pandas|dataframe|data-processing
| 7
|
7,082
| 62,115,259
|
Pandas drop rows in time series with less than x observation
|
<p>I am working with timeseries data in Pandas (timestamp used as index). I am doing some filtering on my dataset and end up with a dataframe that mostly contains consecutive observations (one-minute data). However, there are also time intervals with only one or a few minutes of obervations. These I would like to exclude. How can I get hold of those short intervals using sth like:</p>
<pre><code>df = df.drop(df[<some boolean condition>].index)
</code></pre>
<pre><code>timestamp value
2018-01-08 06:13:00 143
2018-01-08 06:14:00 324
2018-01-08 06:15:00 324
2018-01-08 06:16:00 324
2018-01-08 06:17:00 324
2018-01-08 06:20:00 324(remove)
2018-01-08 06:35:00 324
2018-01-08 06:36:00 324
2018-01-08 06:37:00 324
2018-01-08 06:38:00 324
2018-01-08 06:39:00 324
2018-01-08 06:40:00 324
</code></pre>
|
<p>Use:</p>
<pre><code>#convert index to Series
s = df.index.to_series()
#test if 1 Minute difference, then cumulative sum
a = s.diff().ne(pd.Timedelta(1, unit='Min')).cumsum()
#filter if counts of cumulative value greater like N, e.g. 3
N = 3
df = df[a.map(a.value_counts()).gt(N)]
print (df)
value
timestamp
2018-01-08 06:13:00 143
2018-01-08 06:14:00 324
2018-01-08 06:15:00 324
2018-01-08 06:16:00 324
2018-01-08 06:17:00 324
2018-01-08 06:35:00 324
2018-01-08 06:36:00 324
2018-01-08 06:37:00 324
2018-01-08 06:38:00 324
2018-01-08 06:39:00 324
2018-01-08 06:40:00 324
</code></pre>
|
python|pandas|time-series
| 1
|
7,083
| 62,284,095
|
What are the parameters to tf.GradientTape()'s __exit__ function?
|
<p>According to the <a href="https://www.tensorflow.org/api_docs/python/tf/GradientTape" rel="nofollow noreferrer">documentation</a> for <code>tf.GradientTape</code>, its <code>__exit__()</code> method takes three positional arguments: <code>typ, value, traceback</code>.</p>
<p><strong>What exactly are these parameters?</strong> </p>
<p>How does the <code>with</code> statement infer them? </p>
<p>What values should I give them in the code below (where I'm <em>not</em> using a <code>with</code> statement):</p>
<pre><code>x = tf.Variable(5)
gt = tf.GradientTape()
gt.__enter__()
y = x ** 2
gt.__exit__(typ = __, value = __, traceback = __)
</code></pre>
|
<p><code>sys.exc_info()</code> returns a tuple with three values <code>(type, value, traceback)</code>.</p>
<ol>
<li>Here <code>type</code> gets the exception type of the Exception being handled</li>
<li><code>value</code> is the arguments that are being passed to the constructor of an exception class.</li>
<li><code>traceback</code> contains the stack information like where the exception occurred etc. </li>
</ol>
<p>In the GradientTape context, when an exception occurs the <code>sys.exc_info()</code> details are passed to the <code>__exit__()</code> function, which exits the recording context so that no further operations are traced. </p>
<p>Below is the example to illustrate the same. </p>
<p>Let's consider a simple function.</p>
<pre><code>def f(w1, w2):
return 3 * w1 ** 2 + 2 * w1 * w2
</code></pre>
<p><strong>By not using <code>with</code> statement:</strong></p>
<pre><code>w1, w2 = tf.Variable(5.), tf.Variable(3.)
tape = tf.GradientTape()
z = f(w1, w2)
tape.__enter__()
dz_dw1 = tape.gradient(z, w1)
try:
dz_dw2 = tape.gradient(z, w2)
except Exception as ex:
print(ex)
exec_tup = sys.exc_info()
tape.__exit__(exec_tup[0],exec_tup[1],exec_tup[2])
</code></pre>
<p><strong>Prints:</strong></p>
<blockquote>
<p>GradientTape.gradient can only be called once on non-persistent tapes.</p>
</blockquote>
<p>Even if you don't call <code>__exit__</code> explicitly with these values, the program will pass them itself to end the GradientTape recording; below is the example. </p>
<pre><code>w1, w2 = tf.Variable(5.), tf.Variable(3.)
tape = tf.GradientTape()
z = f(w1, w2)
tape.__enter__()
dz_dw1 = tape.gradient(z, w1)
try:
dz_dw2 = tape.gradient(z, w2)
except Exception as ex:
print(ex)
</code></pre>
<p>prints the same exception message.</p>
<p><strong>By using <code>with</code> statement.</strong> </p>
<pre><code>with tf.GradientTape() as tape:
z = f(w1, w2)
dz_dw1 = tape.gradient(z, w1)
try:
dz_dw2 = tape.gradient(z, w2)
except Exception as ex:
print(ex)
exec_tup = sys.exc_info()
tape.__exit__(exec_tup[0],exec_tup[1],exec_tup[2])
</code></pre>
<p>Below is the <code>sys.exc_info()</code> response for the above exception. </p>
<pre><code>(RuntimeError,
RuntimeError('GradientTape.gradient can only be called once on non-persistent tapes.'),
<traceback at 0x7fcd42dd4208>)
</code></pre>
<p><strong>Edit 1:</strong> </p>
<p>As <code>user2357112 supports Monica</code> mentioned in the comment. Providing the solution for non-exception cases. </p>
<p>In the non-exception case, the spec mandates that the values passed to <code>__exit__</code> should all be <code>None</code>.</p>
<p><strong>Example 1:</strong></p>
<pre><code>x = tf.constant(3.0)
g = tf.GradientTape()
g.__enter__()
g.watch(x)
y = x * x
g.__exit__(None,None,None)
z = x*x
dy_dx = g.gradient(y, x)
# dz_dx = g.gradient(z, x)
print(dy_dx)
# print(dz_dx)
</code></pre>
<p><strong>Prints:</strong></p>
<pre><code>tf.Tensor(6.0, shape=(), dtype=float32)
</code></pre>
<p>Since <code>y</code> was captured before <code>__exit__</code>, the gradient value is returned. </p>
<p><strong>Example 2:</strong> </p>
<pre><code>x = tf.constant(3.0)
g = tf.GradientTape()
g.__enter__()
g.watch(x)
y = x * x
g.__exit__(None,None,None)
z = x*x
# dy_dx = g.gradient(y, x)
dz_dx = g.gradient(z, x)
# print(dy_dx)
print(dz_dx)
</code></pre>
<p><strong>Prints:</strong></p>
<pre><code>None
</code></pre>
<p>This is because <code>z</code> is captured after <code>__exit__</code>, by which point the tape has stopped recording. </p>
|
python|tensorflow|oop|with-statement|automatic-differentiation
| -1
|
7,084
| 51,470,186
|
scipy.optimize.minimize changes values at low decimal place
|
<p>this is my code for the optimisation.</p>
<pre><code> initialGuess = D.Matrix[:,D.menge]
bnds = D.Matrix[:,(D.mengenMin,D.mengenMax)]
con1 = {'type': 'eq', 'fun': PercentSum}
con2 = {'type': 'eq', 'fun': MinMaxProportion}
cons = ([con1,con2])
solution = minimize(rootfunc,initialGuess,method='SLSQP',\
bounds=bnds,constraints=cons)
</code></pre>
<p>The Problem is, the algorithm changes values at low decimal place.
e.g. this is my initial guess. I already tried to change from float to integers, to have a work around. </p>
<pre><code> [ 0. 0. 123. 0. 0. 622. 245. 0. 0. 0.]
</code></pre>
<p>The first try of the of the optimizer looks like this: </p>
<pre><code>[1.49011612e-08 0.00000000e+00 1.23000000e+02 0.00000000e+00
0.00000000e+00 6.22000000e+02 2.45000000e+02 0.00000000e+00
0.00000000e+00 0.00000000e+00]
</code></pre>
<p>Another is this:</p>
<pre><code>[ 0. 0. 123.00000001 0. 0.
622. 245. 0. 0. 0. ]
</code></pre>
<p>An finally the optimization finishes with this error:</p>
<pre><code>status 6
message Singular matrix C in LSQ subproblem
</code></pre>
<p>I think the problem is the tiny difference. Is there a possibility to tell the SLSQP algorithm only to try changes on the first two decimal places or higher? </p>
<p>kind regards </p>
<p>edit: I have found an option, but it does not solve my problem. The new call of scipy.optimize.minimize:</p>
<pre><code>solution = minimize(rootfunc,initialGuess,method='SLSQP',\
bounds=bnds,constraints=con2,options={'eps':1,'disp':True})
</code></pre>
|
<p>Those "tiny steps" are not values selected by the solver in each iteration, they are from finite differencing. Gradient-based solvers like this one require gradients. Since you didn't provide the gradients as functions, it defaults to calculating them for you with finite difference. Your real problem, as the error stated, is most likely a singular matrix.</p>
|
python|numpy|scipy|nonlinear-optimization
| 0
|
7,085
| 51,548,551
|
reading nested .h5 group into numpy array
|
<p>I received this .h5 file from a friend and I need to use the data in it for some work. All the data is numerical. This is the first time I have worked with this kind of file. I found many questions and answers here about reading these files, but I couldn't find a way to get to the lower levels of the groups or folders the file contains.
The file contains two main folders, i.e. X and Y.
X contains a folder named 0 which contains two folders named A and B.
Y contains ten folders named 1-10.
The data I want to read is in A,B,1,2,..,10
for instance I start with </p>
<pre><code>f = h5py.File(filename, 'r')
f.keys()
</code></pre>
<p>Now f returns <i> [u'X', u'Y'] </i> The two main folders </p>
<p>Then I try to read X and Y using read_direct but I get the error </p>
<p><i> AttributeError: 'Group' object has no attribute 'read_direct' </i></p>
<p>I try to create an object for X and Y as follows</p>
<pre><code>obj1 = f['X']
obj2 = f['Y']
</code></pre>
<p>Then if I use command like </p>
<pre><code>obj1.shape
obj1.dtype
</code></pre>
<p>I get an error </p>
<p><i>AttributeError: 'Group' object has no attribute 'shape'</i></p>
<p>I can see that these commands don't work because I use them on X and Y, which are folders that contain no data, only other folders. </p>
<p>So my question is how to get down to the folders named A, B,1-10 to read the data</p>
<p>I couldn't find a way to do that even in the documentation <a href="http://docs.h5py.org/en/latest/quick.html" rel="noreferrer">http://docs.h5py.org/en/latest/quick.html</a></p>
|
<p>You need to traverse down your HDF5 hierarchy until you reach a dataset. Groups do not have a shape or type, datasets do.</p>
<p>Assuming you do not know your hierarchy structure in advance, you can use a recursive algorithm to yield, via an iterator, full paths to all available datasets in the form <code>group1/group2/.../dataset</code>. Below is an example.</p>
<pre><code>import h5py
def traverse_datasets(hdf_file):
def h5py_dataset_iterator(g, prefix=''):
for key in g.keys():
item = g[key]
path = f'{prefix}/{key}'
if isinstance(item, h5py.Dataset): # test for dataset
yield (path, item)
elif isinstance(item, h5py.Group): # test for group (go down)
yield from h5py_dataset_iterator(item, path)
for path, _ in h5py_dataset_iterator(hdf_file):
yield path
</code></pre>
<p>You can, for example, iterate all dataset paths and output attributes which interest you:</p>
<pre><code>with h5py.File(filename, 'r') as f:
for dset in traverse_datasets(f):
print('Path:', dset)
print('Shape:', f[dset].shape)
print('Data type:', f[dset].dtype)
</code></pre>
<p>Remember that, by default, arrays in HDF5 are not read entirely in memory. You can read into memory via <code>arr = f[dset][:]</code>, where <code>dset</code> is the full path.</p>
|
python|arrays|numpy|hdf5|h5py
| 13
|
7,086
| 51,417,282
|
ValueError: ndarray is not contiguous
|
<p>when I build a matrix using the last row of my dataframe:</p>
<pre><code>x = w.iloc[-1, :]
a = np.mat(x).T
</code></pre>
<p>it goes:</p>
<pre><code>ValueError: ndarray is not contiguous
</code></pre>
<p>`print the x shows(I have 61 columns in my dataframe):</p>
<pre><code>print(x)
cdl2crows 0.000000
cdl3blackcrows 0.000000
cdl3inside 0.000000
cdl3linestrike 0.000000
cdl3outside 0.191465
cdl3starsinsouth 0.000000
cdl3whitesoldiers_x 0.000000
cdl3whitesoldiers_y 0.000000
cdladvanceblock 0.000000
cdlhighwave 0.233690
cdlhikkake 0.218209
cdlhikkakemod 0.000000
...
cdlidentical3crows 0.000000
cdlinneck 0.000000
cdlinvertedhammer 0.351235
cdlkicking 0.000000
cdlkickingbylength 0.000000
cdlladderbottom 0.002259
cdllongleggeddoji 0.629053
cdllongline 0.588480
cdlmarubozu 0.065362
cdlmatchinglow 0.032838
cdlmathold 0.000000
cdlmorningdojistar 0.000000
cdlmorningstar 0.327749
cdlonneck 0.000000
cdlpiercing 0.251690
cdlrickshawman 0.471466
cdlrisefall3methods 0.000000
Name: 2010-01-04, Length: 61, dtype: float64
</code></pre>
<p>how to solve it? so many thanks</p>
|
<p><code>np.mat</code> expects an array-like input; refer to the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.mat.html" rel="nofollow noreferrer">documentation</a>.</p>
<p>So your code should be</p>
<pre><code>x = w.iloc[-1, :].values
a = np.mat(x).T
</code></pre>
<p><code>.values</code> returns the underlying NumPy array of the row, so <code>np.mat</code> will work.</p>
|
python|pandas|numpy
| 1
|
7,087
| 51,177,262
|
Where can I find documentation on NumPy's delineation of its directories?
|
<p>This may be a stupid question, but...</p>
<p>Where can I find a simple explanation of what goes under 'lib,' vs 'core'? How do I know whether a function goes under fromnumeric.py, or function_base.py? Some of the .py files have explanation strings at the beginning, but others do not. </p>
|
<p>You may double check the <a href="https://docs.scipy.org/doc/numpy/reference/" rel="nofollow noreferrer">numpy/reference</a> and <a href="https://docs.scipy.org/doc/numpy/user/" rel="nofollow noreferrer">numpy/user</a> guides.</p>
|
numpy
| 0
|
7,088
| 48,595,802
|
Tensorflow: Finding index of first occurrence of elements in a tensor
|
<p>Suppose I have a tensor, x = [1, 2, 6, 6, 4, 2, 3, 2]<br>
I want to find the index of the first occurrence of every unique element in x.<br>
The output should be [0, 1, 6, 4, 2].
I basically want the second output of numpy.unique(x,return_index=True). This functionality doesn't seem to be supported in tf.unique.
Is there a workaround to this in tensorflow, without using any loops?</p>
|
<pre><code>x = [1, 2, 6, 6, 4, 2, 3, 2]
# position of every element: [0, 1, 2, 3, 4, 5, 6, 7]
x_count = tf.cumsum(tf.ones_like(x)) - 1
unique, unique_id = tf.unique(x)
# for each unique value, keep the smallest position among its occurrences
unique_first = tf.unsorted_segment_min(x_count, unique_id, tf.shape(unique)[0])
with tf.Session() as sess:
    print(sess.run(tf.stack([unique, unique_first], 0)))
</code></pre>
<p>Gives:</p>
<pre><code>[[1 2 6 4 3]
[0 1 2 4 6]]
</code></pre>
|
python|tensorflow
| 1
|
7,089
| 48,766,275
|
Appending a column to data frame using Pandas in python
|
<p>I'm trying some operations on Excel file using pandas. I want to extract some columns from a excel file and add another column to those extracted columns. And want to write all the columns to new excel file. To do this I have to append new column to old columns.</p>
<p>Here is my code- </p>
<pre><code>import pandas as pd
#Reading ExcelFIle
#Work.xlsx is input file
ex_file = 'Work.xlsx'
data = pd.read_excel(ex_file,'Data')
#Create subset of columns by extracting columns D,I,J,AU from the file
data_subset_columns = pd.read_excel(ex_file, 'Data', parse_cols="D,I,J,AU")
#Compute new column 'Percentage'
#'Num Labels' and 'Num Tracks' are two different columns in given file
data['Percentage'] = data['Num Labels'] / data['Num Tracks']
data1 = data['Percentage']
print data1
#Here I'm trying to append data['Percentage'] to data_subset_columns
Final_data = data_subset_columns.append(data1)
print Final_data
Final_data.to_excel('111.xlsx')
</code></pre>
<p>No error is shown. But Final_data is not giving me expected results. ( Data not getting appended) </p>
|
<p>There is no need to explicitly append columns in <code>pandas</code>. When you calculate a new column, it is included in the dataframe. When you export it to excel, the new column will be included.</p>
<p>Try this, assuming 'Num Labels' and 'Num Tracks' are in "D,I,J,AU" [otherwise add them]:</p>
<pre><code>import pandas as pd
data_subset = pd.read_excel(ex_file, 'Data', parse_cols="D,I,J,AU")
data_subset['Percentage'] = data_subset['Num Labels'] / data_subset['Num Tracks']
data_subset.to_excel('111.xlsx')
</code></pre>
|
python|pandas
| 3
|
7,090
| 48,871,043
|
How to update row by row of dataframe using python pandas
|
<p>I don't know whether it can be achieved or not using python pandas. Here is the scenario I'm trying to do</p>
<p>I created a databases connection to MSSQL using python (pyodbc, sqlalchemy) </p>
<p>I read one table and saved it as dataframe like this </p>
<pre><code>data = pd.read_sql_table('ENCOUNTERP1', conn)
</code></pre>
<p>and the dataframe looks like this </p>
<pre><code>ENCOUNTERID DIAGCODE DIAGSEQNO POA DIAGVERFLAG
0 78841 3GRNFC 3 P
1 89960 6
2 86479 N18BZON 9 K
3 69135 MPPY3 9 9 0
4 32422 DS6SBT 2 P
5 69135 4 D H
6 92019 PP0 1
7 42105 2 L
8 99256 U 1 J
9 33940 II9ZODF 3 2
10 33940 OH 1
11 65108 CI6COE 8 U
12 77871 Y3ZHN1 7 S
13 65108 73BJBZV 8 7
14 99256 7 1 T
</code></pre>
<p>Now I have one more dataframe (<code>dp = pd.read_sql_table('tblDiagnosis', conn)</code>)which has <strong>DIAGCODE</strong> column in it and they all are unique </p>
<p>I want to get those DIAGCODE from dataframe <code>dp</code> and update it to dataframe <code>data['DIAGCODE']</code> </p>
<p>I tried to iterate over each row and update the other dataframe row by row, but in this code the inner loop starts from index 0 every time, so in the end the whole column is filled with a single value.</p>
<pre><code>for index, row in dp.iterrows():
for i, r in data.iterrows():
r['DIAGCODE'] = row['Code']
</code></pre>
<p>First of all, the two dataframes are not equal in size. This is the dataframe for <code>dp</code>:</p>
<pre><code>Code         Description                      Category                        IcdSet
0     001               001 - CHOLERA                         CHOLERA       9
1    0010  0010 - CHOLERA D/T V. CHOLERAE                     CHOLERA       9
2    0011    0011 - CHOLERA D/T V. EL TOR                     CHOLERA       9
3    0019              0019 - CHOLERA NOS                     CHOLERA      10
4     002    002 - TYPHOID/PARATYPHOID FEV  TYPHOID AND PARATYPHOID FEVERS       9
5    0020         0020 - TYPHOID FEVER  TYPHOID AND PARATYPHOID FEVERS       9
</code></pre>
<p>and the output should be something like this </p>
<pre><code>ENCOUNTERID DIAGCODE  DIAGSEQNO POA DIAGVERFLAG
0   78841   001  3       P
1   89960   0010  6
2   86479   0011  9      K
3   69135   0019  9  9   0
4   32422   002  2      P
5   69135   0020  4  D   H
</code></pre>
<p>I would like to add one condition from dataframe dp like this </p>
<pre><code>for index, row in dp.iterrows():
    for i, r in data.iterrows():
        if row['Code'] == 10:
           r['DIAGCODE'] = row['Code']
</code></pre>
|
<p>I assume that the two tables have the same number of rows and are both in the order you want. If that's correct, then you can simply use:</p>
<pre><code>df = pd.concat([data, dp], axis=1)
</code></pre>
<p>Then extract the columns you wanted:</p>
<pre><code>df = df.ix[:, ['ENCOUNTERID', 'Code', 'DIAGSEQNO', 'POA', 'DIAGVERFLAG']].rename(columns={'Code': 'DIAGCODE'})
</code></pre>
<p>If this meets your requirement, please vote.</p>
<hr>
<p>Sorry, <code>.ix</code> has been deprecated, even though it can still be used without problems. So please use</p>
<pre><code>df = df[['ENCOUNTERID', 'Code', 'DIAGSEQNO', 'POA', 'DIAGVERFLAG']].rename(columns={'Code': 'DIAGCODE'})
</code></pre>
<hr>
<p>BTW, the issue in your code is that you were using two nested loops, which means the inner loop's values keep being overwritten, so the last value of the outer loop ends up everywhere. Here is a solution:</p>
<pre><code>for row, r in zip(dp.iterrows(), data.iterrows()):
    r[1]['DIAGCODE'] = row[1]['Code']
</code></pre>
|
python|pandas|dataframe
| 1
|
7,091
| 70,892,948
|
Transform not getting applied on CustomDataset Pytorch
|
<p>I have images with a folder structure as follows:</p>
<pre><code>root_dir
│
└───folder1
│ │ file011.png
│ │ file012.png
│
└───folder2
| │ file021.png
| │ file022.png
|
└───folder2
│ file031.png
│ file032.png
...
</code></pre>
<p>Now I wanted to create a CustomDataset without labels in PyTorch (since I am using it for GANs), so I did the following:</p>
<pre class="lang-py prettyprint-override"><code>class CurrencyDataset(Dataset):
'''
Currency Dataset with no labels
'''
def __init__(self, type, transform):
'''
Parameters
type : "Train" or "Test"
transform : Transformations to be applied
'''
root_dir = "indian-currency-notes-classifier/"
# Storing images in a list
self.data = []
self.transform = transform
dir = os.path.join(root_dir, type)
for note in os.listdir(dir) :
note_dir = os.path.join(dir, note)
for img_name in os.listdir(note_dir):
img = io.imread(os.path.join(note_dir, img_name))
self.data.append(img)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
x = self.data[idx]
if self.transform :
x = self.transform(x)
return x
</code></pre>
<p>and used the following transformations :</p>
<pre class="lang-py prettyprint-override"><code>transform = transforms.Compose([
transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
transforms.RandomRotation(15),
transforms.Resize((224, 224)),
transforms.ToTensor(),
])
train_ds = CurrencyDataset("Train", transform)
</code></pre>
<p>but on checking the shape and values of the Tensors, I found out that transformations were not getting applied</p>
<pre class="lang-py prettyprint-override"><code>train_ds.data[0].shape
>> (1072, 1154, 3)
</code></pre>
<p>I am a bit new to PyTorch so please let me know If I am doing something wrong here or What needs to be done to make it correct ?</p>
|
<p>You forgot to assign the <code>transform</code> object as an attribute of the instance. This, in turn, means <code>self.transform</code> evaluates to <code>None</code> in the <code>__getitem__</code> function. Simply add the following in the <code>__init__</code>:</p>
<pre><code>self.transform = transform
</code></pre>
<hr />
<p>Additonally, you are not calling the proper function (<code>__getitem__</code>) with <code>train_ds.data[0].shape</code>, instead it should be <code>train_ds[0]</code> (in other words: <code>train_ds.__getitem__(0)</code>).</p>
<hr />
<p>Finally, your pipeline doesn't have the correct order of transforms: <code>T.Normalize</code> expects a <code>torch.Tensor</code>, so <code>T.ToTensor</code> has to come before it:</p>
<pre><code>transform = T.Compose([
T.RandomRotation(15),
T.Resize((224, 224)),
T.ToTensor(),
T.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])])
</code></pre>
|
python|pytorch
| 1
|
7,092
| 71,091,025
|
How to Combine CSV files according to some conditions in pandas python?
|
<p>I have 4 CSV Files:
<a href="https://i.stack.imgur.com/BtNLa.png" rel="nofollow noreferrer">CSV Files Picture</a></p>
<p>I want to combine the 4 files into one data frame. I have to Use the Invoices.Customer_ID and Customers.ID. When combining, I also have to make sure that the result set only contains customers and articles for which there are invoices and invoice items.</p>
<p>I have this simple code that reads the CSV file and displays the data.</p>
<pre><code>from datetime import date, datetime
import os
import pandas as pd
article_csv = pd.read_csv('Input/Artikel.csv')
Invoices_items_csv = pd.read_csv('Input/Rechnungen_Positionen.csv')
Customers_csv = pd.read_csv('Input/Kunden.csv')
Invoices_csv = pd.read_csv('Input/Rechnungen.csv')
</code></pre>
<p>Can someone help me here to achieve this goal? Thanks in advance</p>
|
<p>According to <a href="https://stackoverflow.com/questions/44781633/join-pandas-dataframes-based-on-column-values">this post</a>, you can merge dataframes per column-definitions like this:</p>
<pre><code>df = pd.merge(df1, df2, on=['document_id','item_id'])
</code></pre>
<p>So for your case, I think you would have to do this from left to right, meaning from article_csv to Invoices_items_csv to Customers_csv etc., because you can only merge 2 dataframes at a time. Keep in mind that this represents an inner join, meaning only rows whose key values exist in both dataframes will be merged, which is what you described. Then you can simply use <code>ID</code> for the first 3 dataframes, and <code>ID</code> and <code>Invoice_ID</code> for the last merge.</p>
<pre><code>import pandas as pd
# sample data
article_csv = pd.DataFrame({
"ID": [1, 2, 3],
"Code": ["abc", "def", "elk"],
"Discount": [2, None, 1.5]
})
Customers_csv = pd.DataFrame({
"ID": [1, 2, 3, 4, 5, 6, 7, 8],
"Country": ['AT']*8
})
Invoices_csv = pd.DataFrame({
"ID": [1, 3],
"Customer_id": [7, 8]
})
Invoices_items_csv = pd.DataFrame({
"Invoice_ID": [1, 1, 3, 3],
"Quantity": [5, 2, 7, 8]
})
# merge dataframes
merged_df = pd.merge(Customers_csv, article_csv, on="ID")
merged_df = pd.merge(merged_df, Invoices_csv, on="ID")
merged_df = pd.merge(merged_df, Invoices_items_csv, left_on="ID", right_on="Invoice_ID")
</code></pre>
|
python|pandas|csv
| 0
|
7,093
| 51,950,226
|
Capped / Constrained Weights
|
<p>I have a dataframe of weights, in which I want to constrain the maximum weight for any one element to 30%. However in doing this, the sum of the weights becomes less than 1, so the weights of all other elements should be uniformly increased, and then repetitively capped at 30% until the sum of all weights is 1.</p>
<p>For example:</p>
<p><a href="https://i.stack.imgur.com/AYmsT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AYmsT.png" alt=""></a></p>
<p>If my data is in a pandas data frame, how can I do this efficiently?
Note: in reality I have like 20 elements which I want to cap at 10%... so there is much more processing involved. I also intend to run this step 1000s of times.</p>
|
<p>@jpp </p>
<p>The following is a rough approach, modified from your answer to iteratively solve and re-cap. It doesn't produce a perfect answer though... and having a while loop makes it inefficient. Any ideas how this could be improved?</p>
<pre><code>import pandas as pd
import numpy as np
cap = 0.1
df = pd.DataFrame({'Elements': list('ABCDEFGHIJKLMNO'),
'Values': [17,11,7,5,4,4,3,2,1.5,1,1,1,0.8,0.6,0.5]})
df['Uncon'] = df['Values']/df['Values'].sum()
df['Con'] = np.minimum(cap, df['Uncon'])
while df['Con'].sum() < 1 or len(df['Con'][df['Con']>cap]) >=1:
df['Con'] = np.minimum(cap, df['Con'])
nonmax = df['Con'].ne(cap)
    adj = ((1 - df['Con'].sum()) * df['Con'].loc[nonmax]
           / df['Uncon'].loc[nonmax].sum())
df['Con'] = df['Con'].mask(nonmax, df['Con'] + adj)
print(df)
print(df['Con'].sum())
</code></pre>
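<p>A possible refinement of the loop above: clip each offender once, remember it, and rescale only the still-uncapped weights to absorb the shortfall. This sketch assumes <code>cap * number_of_elements >= 1</code>, otherwise no feasible set of weights exists:</p>
<pre><code>import pandas as pd

def cap_weights(values, cap):
    """Normalise `values` to weights summing to 1 with no weight above `cap`."""
    w = values / values.sum()
    capped = pd.Series(False, index=w.index)
    while True:
        over = (w > cap) & ~capped
        if not over.any():
            break
        w[over] = cap                       # clip the offenders once
        capped |= over
        free = ~capped
        # rescale the uncapped weights so the total comes back to 1
        w[free] = w[free] / w[free].sum() * (1 - w[capped].sum())
    return w

df = pd.DataFrame({'Elements': list('ABCDEFGHIJKLMNO'),
                   'Values': [17, 11, 7, 5, 4, 4, 3, 2, 1.5, 1, 1, 1, 0.8, 0.6, 0.5]})
df['Con'] = cap_weights(df['Values'], 0.1)
print(df['Con'].sum().round(6), df['Con'].max())   # 1.0 0.1
</code></pre>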
|
python|pandas|weighted
| 1
|
7,094
| 51,700,232
|
ClipByValue error when deploying to ML Engine
|
<p>When trying to deploy a keras model to ML Engine I get</p>
<pre><code>$ gcloud ml-engine versions create v2 --model=plantDisease01 --origin=gs://${BUCKET_NAME}/
plantDisease01 --runtime-version=1.4
Creating version (this might take a few minutes)......failed. ERROR: (gcloud.ml-engine.versions.create) Bad model detected with error: "Failed to load model: Loading servable: {name: default version: 1} failed: Not found: Op type not registere
d 'ClipByValue' in binary running on localhost. Make sure the Op and Kernel are registered
in the binary running in this process.\n\n (Error code: 0)"
FAIL
</code></pre>
<p>my storage looks like</p>
<pre><code>$ gsutil ls gs://keras-class-191806/plantDisease01/export [23:29:38]
gs://keras-class-191806/plantDisease01/export/
gs://keras-class-191806/plantDisease01/export/saved_model.pb
</code></pre>
<p>I built the protocol buffer version using this approach <a href="https://stackoverflow.com/a/44232441/630752">https://stackoverflow.com/a/44232441/630752</a></p>
|
<p><code>ClipByValue</code> was <a href="https://github.com/tensorflow/tensorflow/commits/700a6698e634391cf96a314f378a8de973b49995/tensorflow/compiler/tf2xla/kernels/clip_by_value_op.cc" rel="nofollow noreferrer">introduced in TensorFlow 1.8</a>. You can either <a href="https://www.tensorflow.org/extend/adding_an_op#op_registration" rel="nofollow noreferrer">register</a> the op yourself or just change the <code>--runtime-version</code> flag's value to 1.8. </p>
|
tensorflow|keras|google-cloud-ml
| 1
|
7,095
| 51,872,431
|
pandas cumulative subtraction in a column
|
<p>I have a dataframe where I need to do a burndown starting from the baseline and subtracting all the values along, essentially I'm looking for an <strong>opposite of DataFrame().cumsum(0)</strong>:</p>
<pre><code> In Use
Baseline 3705.0
February 2018 0.0
March 2018 2.0
April 2018 15.0
May 2018 30.0
June 2018 14.0
July 2018 797.0
August 2018 1393.0
September 2018 86.0
October 2018 374.0
November 2018 21.0
December 2018 0.0
January 2019 0.0
February 2019 0.0
March 2019 0.0
April 2019 2.0
unknown 971.0
</code></pre>
<p>I cannot find a function to do or, or I'm not looking by the right tags / names.</p>
<p>How can this be achieved?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.diff.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.diff</code></a> by groups created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.diff.html" rel="nofollow noreferrer"><code>diff</code></a>, comparing by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.lt.html" rel="nofollow noreferrer"><code>lt</code></a> <code><</code> and cumulative sum:</p>
<pre><code>g = df['Use'].diff().lt(0).cumsum()
df['new'] = df['Use'].groupby(g).diff().fillna(df['Use'])
print (df)
In Use new
0 Baseline 3705.0 3705.0
1 February 2018 0.0 0.0
2 March 2018 2.0 2.0
3 April 2018 15.0 13.0
4 May 2018 30.0 15.0
5 June 2018 14.0 14.0
6 July 2018 797.0 783.0
7 August 2018 1393.0 596.0
8 September 2018 86.0 86.0
9 October 2018 374.0 288.0
10 November 2018 21.0 21.0
11 December 2018 0.0 0.0
12 January 2019 0.0 0.0
13 February 2019 0.0 0.0
14 March 2019 0.0 0.0
15 April 2019 2.0 2.0
16 unknown 971.0 969.0
</code></pre>
|
python|python-3.x|pandas|numpy
| 3
|
7,096
| 64,345,662
|
Connect column of nodes based on another column
|
<p>I would need to build a network where nodes are websites and should be grouped based on a score assigned. If the website is new, then it will have a label 1, otherwise 0.</p>
<p>Example of data:</p>
<pre><code>url score label
web1 5 1
web2 10 1
web3 5 0
web4 2 0
...
</code></pre>
<p>I tried to use networkx to build the net. To group together the webs based on their score, I just used score as a common node (but probably there would be a better way to represent it).
I would like to colour the webs based on label column, but I do not know how to do that.
My code is:</p>
<pre><code>import networkx as nx
G = nx.from_pandas_edgelist(df, 'url', 'score')
nodes = G.nodes()
plt.figure(figsize=(40,50))
pos = nx.draw(G, with_labels=True,
nodelist=nodes,
node_size=1000)
</code></pre>
<p>I hope you can give me some tips.</p>
|
<p>Probably a partition graph might be a good idea if you want to include the <code>score</code> as a node too. You can start by creating the graph with <code>nx.from_pandas_edgelist</code> as you did, and update the node attributes as:</p>
<pre><code>B = nx.from_pandas_edgelist(df, source='url', target='score')
node_view = B.nodes(data=True)
for partition_nodes, partition in zip((df.url, df.score), (0,1)):
for node in partition_nodes.to_numpy():
node_view[node]['bipartite'] = partition
</code></pre>
<p>Now we have the partition attributes for each node:</p>
<pre><code>B.nodes(data=True)
NodeDataView({'web1': {'bipartite': 0}, 5: {'bipartite': 1}, 'web2':
{'bipartite': 0}, 10: {'bipartite': 1}, 'web3': {'bipartite': 0},
'web4': {'bipartite': 0}, 2: {'bipartite': 1}})
</code></pre>
<p>The graph can be represented with a partition layout:</p>
<pre><code>part1_nodes = [node for node, attr in B.nodes(data=True) if attr['bipartite']==0]
fig = plt.figure(figsize=(12,8))
plt.box(False)
nx.draw_networkx(
B,
pos = nx.drawing.layout.bipartite_layout(B, part1_nodes),
    node_size=800)
</code></pre>
<p><a href="https://i.stack.imgur.com/XaZfL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XaZfL.png" alt="enter image description here" /></a></p>
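<p>To colour the url nodes by the <code>label</code> column, as asked, one possible sketch is to map each url to its label and give the score nodes a separate value, then pass the list to <code>node_color</code> (the colormap choice here is just an assumption):</p>
<pre><code># map url -> label; score nodes fall back to a third value
label_map = dict(zip(df.url, df.label))
node_colors = [label_map.get(node, 2) for node in B.nodes()]

nx.draw_networkx(
    B,
    pos=nx.drawing.layout.bipartite_layout(B, part1_nodes),
    node_color=node_colors,
    cmap=plt.cm.Set1,
    node_size=800)
</code></pre>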
|
python|pandas|networkx
| 1
|
7,097
| 47,703,606
|
pandas groupby: TOP 3 values for each group
|
<p><strong>A new and more generic question has been posted in <a href="https://stackoverflow.com/questions/47714181/pandas-groupby-top-3-values-in-each-group-and-store-in-dataframe?noredirect=1#comment82388261_47714181">pandas groupby: TOP 3 values in each group and store in DataFrame</a> and a working solution has been answered there.</strong></p>
<p>In this example I create a dataframe <code>df</code> with some random data spaced 5 minutes.
I want to create a dataframe <code>gdf</code> (<em>grouped df</em>) where the <strong>3 highest values</strong> for each hour are listed.</p>
<p>I.e.: from this series of values</p>
<pre><code> VAL
TIME
2017-12-08 00:00:00 29
2017-12-08 00:05:00 56
2017-12-08 00:10:00 82
2017-12-08 00:15:00 13
2017-12-08 00:20:00 35
2017-12-08 00:25:00 53
2017-12-08 00:30:00 25
2017-12-08 00:35:00 23
2017-12-08 00:40:00 21
2017-12-08 00:45:00 12
2017-12-08 00:50:00 15
2017-12-08 00:55:00 9
2017-12-08 01:00:00 13
2017-12-08 01:05:00 87
2017-12-08 01:10:00 9
2017-12-08 01:15:00 63
2017-12-08 01:20:00 62
2017-12-08 01:25:00 52
2017-12-08 01:30:00 43
2017-12-08 01:35:00 77
2017-12-08 01:40:00 95
2017-12-08 01:45:00 79
2017-12-08 01:50:00 77
2017-12-08 01:55:00 5
2017-12-08 02:00:00 78
2017-12-08 02:05:00 41
2017-12-08 02:10:00 10
2017-12-08 02:15:00 10
2017-12-08 02:20:00 88
</code></pre>
<p>I am very close to the solution but I cannot find the correct syntax for the last step. What I get up to now (<code>largest3</code>) is:</p>
<pre><code> VAL
TIME TIME
2017-12-08 00:00:00 2017-12-08 00:10:00 82
2017-12-08 00:05:00 56
2017-12-08 00:25:00 53
2017-12-08 01:00:00 2017-12-08 01:40:00 95
2017-12-08 01:05:00 87
2017-12-08 01:45:00 79
2017-12-08 02:00:00 2017-12-08 02:20:00 88
2017-12-08 02:00:00 78
2017-12-08 02:05:00 41
</code></pre>
<p>from which I would like to obtain this <code>gdf</code> (the time when each maximum was reached is not important):</p>
<pre><code> VAL1 VAL2 VAL3
TIME
2017-12-08 00:00:00 82 56 53
2017-12-08 01:00:00 95 87 79
2017-12-08 02:00:00 88 78 41
</code></pre>
<p>This is the code:</p>
<pre><code>import pandas as pd
from datetime import *
import numpy as np
# test data
df = pd.DataFrame()
date_ref = datetime(2017,12,8,0,0,0)
days = pd.date_range(date_ref, date_ref + timedelta(0.1), freq='5min')
np.random.seed(seed=1111)
data1 = np.random.randint(1, high=100, size=len(days))
df = pd.DataFrame({'TIME': days, 'VAL': data1})
df = df.set_index('TIME')
print(df)
print("----")
# groupby
group1 = df.groupby(pd.Grouper(freq='1H'))
largest3 = pd.DataFrame(group1['VAL'].nlargest(3))
print(largest3)
gdf = pd.DataFrame()
# ???? <-------------------
</code></pre>
<p>Thank you in advance.</p>
|
<p><strong>NOTE: This solution works only if each group has at least 3 rows</strong></p>
<p>Try the following approach:</p>
<pre><code>In [59]: x = (df.groupby(pd.Grouper(freq='H'))['VAL']
.apply(lambda x: x.nlargest(3))
.reset_index(level=1, drop=True)
.to_frame('VAL'))
In [60]: x
Out[60]:
VAL
TIME
2017-12-08 00:00:00 82
2017-12-08 00:00:00 56
2017-12-08 00:00:00 53
2017-12-08 01:00:00 95
2017-12-08 01:00:00 87
2017-12-08 01:00:00 79
2017-12-08 02:00:00 88
2017-12-08 02:00:00 78
2017-12-08 02:00:00 41
In [61]: x.set_index(np.arange(len(x)) % 3, append=True)['VAL'].unstack().add_prefix('VAL')
Out[61]:
VAL0 VAL1 VAL2
TIME
2017-12-08 00:00:00 82 56 53
2017-12-08 01:00:00 95 87 79
2017-12-08 02:00:00 88 78 41
</code></pre>
<p>Some explanation:</p>
<pre><code>In [94]: x.set_index(np.arange(len(x)) % 3, append=True)
Out[94]:
VAL
TIME
2017-12-08 00:00:00 0 82
1 56
2 53
2017-12-08 01:00:00 0 95
1 87
2 79
2017-12-08 02:00:00 0 88
1 78
2 41
In [95]: x.set_index(np.arange(len(x)) % 3, append=True)['VAL'].unstack()
Out[95]:
0 1 2
TIME
2017-12-08 00:00:00 82 56 53
2017-12-08 01:00:00 95 87 79
2017-12-08 02:00:00 88 78 41
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 4
|
7,098
| 48,935,971
|
Getting top 3 values from multi-index pandas dataframe
|
<p>I have a multi-level grouped pandas dataframe which looks something like this:</p>
<pre><code>date AccountNum ProgramName Duration
2017-11-12 12345 program1 200
program2 300
program4 100
program5 250
45678 program7 200
program2 300
program8 100
program5 250
.... more accounts for 2017-11-12
2017-11-18 12345 program6 200
program2 300
program3 100
program5 250
45678 program6 200
program3 300
program4 100
program5 250
etc-etc
</code></pre>
<p>The duration is aggregated already, it is an average by the date, by account number, and by program name.
Here was the code to get the dataframe above:</p>
<pre><code>grouped = programs.groupby([pd.Grouper(freq='W'),'AccNum','ProgramName'])['Duration'].agg('mean')
</code></pre>
<p>The Duration column is the average per week for each account (and for each week).
I need to select top 3 programs for each account per week.</p>
<p>I tried nlargest() function but it does not seem to be working for me because I am either getting only 3 accounts back or losing the date column. Any help would be greatly appreciated.</p>
<p>EDIT:
Here is what I want the result to look like:</p>
<pre><code> date AccountNum ProgramName Duration
2017-11-12 12345 program2 300
program5 250
program1 200
45678 program2 300
program5 259
program7 200
.... more accounts for 2017-11-12
2017-11-18 12345 program2 300
program5 250
program6 200
45678 program3 300
program5 250
program6 200
.... more dates and more accounts ..
</code></pre>
<p>Essentially, I need to keep the group structure -- by date/by account/top 3 programs based on duration. The end goal of this exercise is to be able to see the change in duration week after week for top 3 programs for each account. </p>
|
<p>Is that what you want?</p>
<pre><code>In [143]: df.groupby(level=[0,1], as_index=False).apply(lambda x: x.nlargest(3, columns=['Duration'])).reset_index(level=0, drop=True)
Out[143]:
ProgramName Duration
date AccountNum
2017-11-12 12345.0 program2 300
12345.0 program5 250
12345.0 program1 200
45678.0 program2 300
45678.0 program5 250
45678.0 program7 200
2017-11-18 12345.0 program2 300
12345.0 program5 250
12345.0 program6 200
45678.0 program3 300
45678.0 program5 250
45678.0 program6 200
</code></pre>
|
python|pandas
| 0
|
7,099
| 48,951,946
|
Adding a row of totals to a dataframe
|
<p>I have a data frame and I am trying to figure out how to add a row to each client that sums up the hours for each client. Here is an example of my data frame:</p>
<pre><code> hours
client month
A January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
B January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
C January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
</code></pre>
<p>I would like to add a new row to each of the clients that sums up the hours. It would look like this:</p>
<pre><code> hours
client month
A January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
Total 1000.34
B January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
Total 1000.34
C January 203.50
February 227.75
March 159.75
April 203.25
May 199.90
Total 1000.34
</code></pre>
<p>I have tired writing a for loop that goes through each client, sums up the hours, then appends the new row to each client. The loop I am trying looks something like this</p>
<pre><code>for hours in df:
    df.append(pd.Series(vp.sum(), name='Total'))
</code></pre>
<p>However, this doesn't work. Any help would be greatly appreciated!!</p>
|
<p>IIUC, you can using <code>concat</code></p>
<pre><code>pd.concat([df,df.sum(level=0).assign(month='Total').set_index('month',append=True)]).sort_index()
Out[1754]:
hours
client month
A April 203.25
February 227.75
January 203.50
March 159.75
May 199.90
Total 994.15
B April 203.25
February 227.75
January 203.50
March 159.75
May 199.90
Total 994.15
C April 203.25
February 227.75
January 203.50
March 159.75
May 199.90
Total 994.15
</code></pre>
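<p>In recent pandas versions <code>sum(level=0)</code> has been removed, so the same idea would be written with an explicit groupby on the index level (same logic as above):</p>
<pre><code>totals = (df.groupby(level=0).sum()
            .assign(month='Total')
            .set_index('month', append=True))
out = pd.concat([df, totals]).sort_index()
</code></pre>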
|
pandas|sum|append
| 2
|