Dataset schema: Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, lengths 15 to 150), question (string, lengths 37 to 64.2k), answer (string, lengths 37 to 44.1k), tags (string, lengths 5 to 106), score (int64, -10 to 5.87k)
377,500
47,172,754
Tensorflow - Using tf.losses.hinge_loss causing Shapes Incompatible error
<p>My current code using <code>sparse_softmax_cross_entropy</code> works fine:</p> <pre class="lang-py prettyprint-override"><code>loss_normal = ( tf.reduce_mean(tf.losses .sparse_softmax_cross_entropy(labels=labels, logits=logits, weights=class_weights)) ) </code></pre> <p>However, when I try to use <code>hinge_loss</code>:</p> <pre class="lang-py prettyprint-override"><code>loss_normal = ( tf.reduce_mean(tf.losses .hinge_loss(labels=labels, logits=logits, weights=class_weights)) ) </code></pre> <p>it reports an error saying:</p> <pre><code>ValueError: Shapes (1024, 2) and (1024,) are incompatible </code></pre> <p>The error seems to originate from this function in the <code>losses_impl.py</code> file:</p> <pre class="lang-py prettyprint-override"><code> with ops.name_scope(scope, "hinge_loss", (logits, labels)) as scope: ... logits.get_shape().assert_is_compatible_with(labels.get_shape()) ... </code></pre> <p>I modified my code as below to extract just one column of the logits tensor:</p> <pre class="lang-py prettyprint-override"><code>loss_normal = ( tf.reduce_mean(tf.losses .hinge_loss(labels=labels, logits=logits[:,1:], weights=class_weights )) ) </code></pre> <p>But it still reports a similar error:</p> <pre><code>ValueError: Shapes (1024, 1) and (1024,) are incompatible. </code></pre> <p>Can someone please help point out why my code works fine with the <code>sparse_softmax_cross_entropy</code> loss but not with <code>hinge_loss</code>?</p>
<p>The tensor <code>labels</code> has the shape <code>[1024]</code>, while the tensor <code>logits</code> has the shape <code>[1024, 2]</code>. This works fine for <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>tf.nn.sparse_softmax_cross_entropy_with_logits</code></a>:</p> <blockquote> <ul> <li><strong>labels</strong>: Tensor of shape [d_0, d_1, ..., d_{r-1}] (where r is rank of labels and result) and dtype int32 or int64. Each entry in labels must be an index in [0, num_classes). Other values will raise an exception when this op is run on CPU, and return NaN for corresponding loss and gradient rows on GPU. </li> <li><strong>logits</strong>: Unscaled log probabilities of shape [d_0, d_1, ..., d_{r-1}, num_classes] and dtype float32 or float64.</li> </ul> </blockquote> <p>But the <a href="https://www.tensorflow.org/api_docs/python/tf/losses/hinge_loss" rel="nofollow noreferrer"><code>tf.losses.hinge_loss</code></a> requirements are different:</p> <blockquote> <ul> <li><strong>labels</strong>: The ground truth output tensor. Its shape should match the shape of logits. The values of the tensor are expected to be 0.0 or 1.0. </li> <li><strong>logits</strong>: The logits, a float tensor.</li> </ul> </blockquote> <p>You can resolve this in two ways:</p> <ul> <li><p>Reshape the labels to <code>[1024, 1]</code> and use just one column of <code>logits</code>, like you did - <code>logits[:,1:]</code>:</p> <pre><code>labels = tf.reshape(labels, [-1, 1]) hinge_loss = ( tf.reduce_mean(tf.losses.hinge_loss(labels=labels, logits=logits[:,1:], weights=class_weights)) ) </code></pre> <p>I think you'll also need to reshape the <code>class_weights</code> the same way.</p></li> <li><p>Use all of the learned <code>logits</code> features via <code>tf.reduce_sum</code>, which will produce a flat <code>(1024,)</code> tensor:</p> <pre><code>logits = tf.reduce_sum(logits, axis=1) hinge_loss = ( tf.reduce_mean(tf.losses.hinge_loss(labels=labels, logits=logits, weights=class_weights)) ) </code></pre> <p>This way you don't need to reshape <code>labels</code> or <code>class_weights</code>.</p></li> </ul>
machine-learning|tensorflow|cross-entropy
1
377,501
47,373,002
Fine-tuning from ./checkpoints/ssd_300_vgg.ckpt. Ignoring missing vars: False
<p>i want to train SSD for num_classes=2 . link of this code is <a href="https://github.com/balancap/SSD-Tensorflow" rel="nofollow noreferrer">here</a> . I write this code in command prompt:</p> <pre><code>python train_ssd_network.py \ --train_dir=./logs/ \ --dataset_dir=./tfrecords\ --dataset_name=pascalvoc_2012 \ --dataset_split_name=train \ --model_name=ssd_300_vgg \ --checkpoint_path=./checkpoints/ssd_300_vgg.ckpt \ --save_summaries_secs=60 \ --save_interval_secs=600 \ --weight_decay=0.0005 \ --optimizer=adam \ --learning_rate=0.001 \ --batch_size=32 </code></pre> <p>but I got this error :</p> <pre><code> start_standard_services=start_standard_services) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\supervisor.py", line 706, in prepare_or_wait_for_session init_feed_dict=self._init_feed_dict, init_fn=self._init_fn) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\session_manager.py", line 264, in prepare_session init_fn(sess) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\contrib\framework\python\ops\variables.py", line 655, in callback saver.restore(session, model_path) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 1457, in restore {self.saver_def.filename_tensor_name: save_path}) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\client\session.py", line 778, in run run_metadata_ptr) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run feed_dict_string, options, run_metadata) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run target_list, options, run_metadata) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call raise type(e)(node_def, op, message) tensorflow.python.framework.errors_impl.InvalidArgumentError: Assign requires shapes of both tensors to match. 
lhs shape= [12] rhs shape= [126] [[Node: save_1/Assign_29 = Assign[T=DT_FLOAT, _class=["loc:@ssd_300_vgg/block8_box/conv_cls/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](s sd_300_vgg/block8_box/conv_cls/biases, save_1/RestoreV2_29)]] Caused by op 'save_1/Assign_29', defined at: File "train_ssd_network.py", line 390, in &lt;module&gt; tf.app.run() File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\platform\app.py", line 48, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "train_ssd_network.py", line 378, in main init_fn=tf_utils.get_init_fn(FLAGS), File "F:\Downloads\SSD-Tensorflow-master-\tf_utils.py", line 235, in get_init_fn ignore_missing_vars=flags.ignore_missing_vars) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\contrib\framework\python\ops\variables.py", line 653, in assign_from_checkpoint_fn saver = tf_saver.Saver(var_list, reshape=reshape_variables) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 1056, in __init__ self.build() File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 1086, in build restore_sequentially=self._restore_sequentially) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 691, in build restore_sequentially, reshape) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 419, in _AddRestoreOps assign_ops.append(saveable.restore(tensors, shapes)) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\training\saver.py", line 155, in restore self.op.get_shape().is_fully_defined()) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\ops\state_ops.py", line 270, in assign validate_shape=validate_shape) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\ops\gen_state_ops.py", line 47, in assign use_locking=use_locking, name=name) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op op_def=op_def) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op original_op=self._default_original_op, op_def=op_def) File "C:\Users\User\Anaconda3\envs\python35\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__ self._traceback = _extract_stack() InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [12] rhs shape= [126] [[Node: save_1/Assign_29 = Assign[T=DT_FLOAT, _class=["loc:@ssd_300_vgg/block8_box/conv_cls/biases"], use_locking=true, validate_shape=true, _device="/job:localhost/replica:0/task:0/cpu:0"](s sd_300_vgg/block8_box/conv_cls/biases, save_1/RestoreV2_29)]] </code></pre> <p>can Somebody help me why...according to some guidance I changed checkpoint_path=./checkpoints/ssd_300_vgg.ckpt to checkpoint_path=./checkpoints/ssd_300_vgg.ckpt/ssd_300_vgg.ckpt, but I get error too. </p>
<p>You should define "checkpoint_exclude_scopes" if you want to fine-tune a network trained on ImageNet: the box-prediction layers whose shapes depend on the number of classes cannot be restored from the checkpoint, which is why the restore fails with a shape mismatch. More detail is described in the SSD-Tensorflow <a href="https://github.com/balancap/SSD-Tensorflow" rel="nofollow noreferrer">README.md</a>.</p>
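<p>A sketch of what that might look like in the command line from the question (the exact scope names are assumptions: <code>block8_box</code> comes from the error message above, the other <code>*_box</code> scopes are inferred by analogy and should be checked against the model definition):</p> <pre><code>python train_ssd_network.py \
    --train_dir=./logs/ \
    --dataset_dir=./tfrecords \
    --dataset_name=pascalvoc_2012 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=./checkpoints/ssd_300_vgg.ckpt \
    --checkpoint_exclude_scopes=ssd_300_vgg/block4_box,ssd_300_vgg/block7_box,ssd_300_vgg/block8_box,ssd_300_vgg/block9_box,ssd_300_vgg/block10_box,ssd_300_vgg/block11_box \
    --batch_size=32
</code></pre>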
tensorflow|deep-learning|jupyter-notebook|object-detection
0
377,502
47,444,999
Check if column contains type string (object)
<p>I have a huge dataset with thousands of rows and hundreds of columns. At least one of these columns must contain a string, because I am getting an error. All my columns are supposed to be float values, however one of these columns has a value of type <code>str</code> somewhere, and I want to locate it.</p> <p>How can I loop through a particular column using <code>Pandas</code> and print only the rows whose values are of type <code>str</code>? I want to find out what the string(s) are so I can convert them to their numerical equivalent.</p>
<p>Using <code>applymap</code> with <code>type</code></p> <pre><code>df = pd.DataFrame({'C1': [1,2,3,'4'], 'C2': [10, 20, '3',40]}) df.applymap(type)==str Out[73]: C1 C2 0 False False 1 False False 2 False True 3 True False </code></pre> <p>Here you know the str cell. Then we using <code>np.where</code> to locate it </p> <pre><code>np.where((df.applymap(type)==str)) Out[75]: (array([2, 3], dtype=int64), array([1, 0], dtype=int64)) </code></pre>
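<p>As a follow-up usage sketch (reusing the example frame above, and not part of the original answer): once you know which column is affected, the same type check as a boolean mask pulls out the offending values directly, with their row index:</p> <pre><code>mask = df['C2'].map(type) == str
print(df.loc[mask, 'C2'])
# 2    3
# Name: C2, dtype: object
</code></pre>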
python|python-3.x|pandas|dataframe|dataset
5
377,503
47,273,887
pandas Date get numeric Time intervall
<p>I have the following dataframe example with the Date and Interval:</p> <pre><code> Date Interval 0 2013-08-01 14:00:00 1 2013-08-01 14:15:00 2 2013-08-01 14:30:00 3 2013-08-01 14:45:00 4 2013-08-01 15:00:00 ... </code></pre> <p>What I want is a new column where the Interval is mapped like:</p> <pre><code>00:00:00 = 1 00:15:00 = 2 00:30:00 = 3 00:45:00 = 4 ... 23:45:00 = 96 </code></pre> <p>So every 15 minutes is one interval. The rows in the dataframe are mixed, so I can't start a counter and increase the value; I need to use the time value in the Interval column to get the mapped value in the new column.</p> <p>I tried:</p> <pre><code>dates = pandas.to_datetime(df['Interval']) df['IntervalMapped']= dates.dt.hour * 2 + dates.dt.minute//15 + 1 </code></pre> <p>but that's wrong.</p>
<p>IIUC, I think you want this:</p> <pre><code> df['timestamp'] = pd.to_datetime(df['Date'] + ' ' + df['Interval']) df['IntervalMap'] = df['timestamp'].dt.hour.mul(4) + df['timestamp'].dt.minute.floordiv(15) + 1 </code></pre> <p>Output:</p> <pre><code> Date Interval timestamp IntervalMap 0 2013-08-01 14:00:00 2013-08-01 14:00:00 57 1 2013-08-01 14:15:00 2013-08-01 14:15:00 58 2 2013-08-01 14:30:00 2013-08-01 14:30:00 59 3 2013-08-01 14:45:00 2013-08-01 14:45:00 60 4 2013-08-01 15:00:00 2013-08-01 15:00:00 61 </code></pre>
python|pandas|dataframe
0
377,504
47,087,741
Use TQDM Progress Bar with Pandas
<p>Is it possible to use a TQDM progress bar when importing and indexing large datasets using Pandas?</p> <p>Here is an example of some 5-minute data I am importing, indexing, and converting with to_datetime. It takes a while and it would be nice to see a progress bar.</p> <pre><code>#Import csv file into a Pandas dataframe, convert to Pandas datetime and set to index eurusd_ask = pd.read_csv('EURUSD_Candlestick_5_m_ASK_01.01.2012-05.08.2017.csv') eurusd_ask.index = pd.to_datetime(eurusd_ask.pop('Gmt time')) </code></pre>
<p>Find the total length from the dataframe's shape and pass it to <code>tqdm</code>, which needs a total to render a proper bar:</p> <pre><code>for index, row in tqdm(df.iterrows(), total=df.shape[0]): print("index",index) print("row",row) </code></pre>
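<p>If you want the bar on the import itself rather than on row iteration, a sketch (assuming the same file; the chunk size is an arbitrary choice) is to read the CSV in chunks and wrap the chunk iterator in <code>tqdm</code>:</p> <pre><code>import pandas as pd
from tqdm import tqdm

chunks = pd.read_csv('EURUSD_Candlestick_5_m_ASK_01.01.2012-05.08.2017.csv',
                     chunksize=100000)
eurusd_ask = pd.concat(tqdm(chunks, desc='reading csv'))
eurusd_ask.index = pd.to_datetime(eurusd_ask.pop('Gmt time'))
</code></pre> <p>Without a known total number of chunks, tqdm shows a running counter rather than a percentage bar; pass <code>total=</code> if you know the row count in advance.</p>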
python|pandas|tqdm
179
377,505
47,426,764
Custom Keras Loss Function throws 'ValueError None'
<h2>This custom Keras Loss function:</h2> <pre><code>def thresholdLoss (actual, predicted): rel = predicted / safeActual relAbove = tf.greater(rel, 1.50) relBelow = tf.less(rel, .50) isOutsideThresh = tf.logical_or(relAbove, relBelow, name='outsideRange') errCounts = tf.to_float(tf.count_nonzero(isOutsideThresh), name='countToFloat') return errCounts </code></pre> <p>Throws the following exception in the call to fit():</p> <pre><code>ValueError: Tried to convert 'x' to a tensor and failed. Error: None values not supported. </code></pre> <h2>What causes <code>x</code> to be the value <code>None</code> in square(x) called by the rmsProp optimizer?</h2> <p>Edit: It is caused by a gradient of all None values.</p> <h2>How can the loss function be modified to ensure a valid gradient?</h2> <p>as computed by Keras <code>gradients_impl.py:gradients()</code>?</p>
<p>This is a result of the loss function not being differentiable. It returns a count - a piecewise-constant value - rather than a smooth, continuously-varying, real-valued error.</p> <p>Keras computes a gradient from the loss function. When that function has no usable gradient, Keras can return None values as the gradient vector, and that is the source of the cryptic error message.</p> <p>So the problem is only with that final 'countToFloat' computation, which reduces the error to a discontinuous count.</p> <p>The solution is to compute a smooth error. Something like this (note that <code>rel</code> has to be defined, as in the original loss function):</p> <pre><code>rel = predicted / actual # assuming actual plays the role of the question's safeActual delta = predicted - actual withinBand = tf.logical_and(tf.greater_equal(rel, .50), tf.less_equal(rel, 1.50)) err = tf.square(delta) err = tf.where(withinBand, .10 * tf.ones_like(actual), err) # reduce relevance of small error mse = K.mean(err) return mse </code></pre>
tensorflow|keras
1
377,506
47,260,715
How to get the value of a feature in a layer that matches the state dict in PyTorch?
<p>I have a CNN, and I want to fetch the value of some intermediate layer corresponding to some key from the state dict. How could this be done? Thanks.</p>
<p>I think you need to create a new class that redefines the forward pass through a given model. However, <strong>most probably you will need to create the code regarding the architecture of your model</strong>. You can find here an example:</p> <pre><code>class extract_layers(): def __init__(self, model, target_layer): self.model = model self.target_layer = target_layer def __call__(self, x): return self.forward(x) def forward(self, x): module = self.model._modules[self.target_layer] # get output of the desired layer features = module(x) # get output of the whole model x = self.model(x) return x, features model = models.vgg19(pretrained=True) target_layer = 'features' extractor = extract_layers(model, target_layer) image = Variable(torch.randn(1, 3, 244, 244)) x, features = extractor(image) </code></pre> <p>In this case, I am using the pre-defined vgg19 network given in the pytorch models zoo. The network has the layers structured in two modules the <code>features</code> for the convolutional part and the <code>classifier</code> for the fully-connected part. In this case, since <code>features</code> wraps all the convolutional layers of the network it is straightforward. If your architecture has several layers with different names, you will need to store their output using something similar to this:</p> <pre><code> for name, module in self.model._modules.items(): x = module(x) # forward the module individually if name in self.target_layer: features = x # store the output of the desired layer </code></pre> <p>Also, you should keep in mind that you need to reshape the output of the layer that connects the convolutional part to the fully-connected one. It should be easy to do if you know the name of that layer.</p>
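<p>An alternative sketch (not from the original answer) that avoids redefining the forward pass: register a forward hook on the submodule whose name matches your state-dict key. The layer index <code>35</code> here is an arbitrary example; pick the module that corresponds to your key:</p> <pre><code>import torch
from torch.autograd import Variable
import torchvision.models as models

model = models.vgg19(pretrained=True)
captured = {}

def hook(module, inp, out):
    captured['feat'] = out  # output of the hooked layer

handle = model.features[35].register_forward_hook(hook)
x = model(Variable(torch.randn(1, 3, 224, 224)))  # forward pass fills captured['feat']
handle.remove()
print(captured['feat'].size())
</code></pre>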
python|pytorch
2
377,507
47,124,007
How to sum appearances in columns with pandas without lambda
<p>I have the following dataframe</p> <pre><code> a b c d e 0 0 0 -1 1 -1 1 0 1 -1 1 -1 2 -1 0 -1 1 1 3 -1 1 1 -1 1 4 1 0 1 -1 1 5 1 0 0 0 1 6 1 1 0 0 -1 7 1 1 -1 0 0 </code></pre> <p>For each number that appears in a, b, c, d, e, I want to count and store in a column the amount of times that it appears in the row, so the result should be something like this:</p> <pre><code> a b c d e Sum1 Sum0 Sum_1 0 0 0 -1 1 -1 1 2 2 1 0 1 -1 1 -1 2 1 2 2 -1 0 -1 1 1 2 1 2 3 -1 1 1 -1 1 3 0 2 4 1 0 1 -1 1 3 1 1 5 1 0 0 0 1 2 3 0 6 1 1 0 0 -1 2 2 1 7 1 1 -1 0 0 2 2 1 </code></pre> <p>So in the first row, the number "1" appears once in a,b,c,d,e, so we store that in the Sum1 column. Then, the number "0" appears two times, and we store that in Sum0, and the number "-1" appears 2 times and we store it in Sum_1.</p> <p>How could these columns be calculated without using lambda functions (to get better performance)? I guess that numpy is involved here but I don't see how to do it.</p>
<p>Using <code>get_dummies</code>:</p> <pre><code>df=df.astype(str) pd.get_dummies(df.stack()).sum(level=0) Out[667]: -1 0 1 0 2 2 1 1 2 1 2 2 2 1 2 3 2 0 3 4 1 1 3 5 0 3 2 6 1 2 2 7 1 2 2 </code></pre> <p>More info:</p> <pre><code>pd.concat([df,pd.get_dummies(df.stack()).sum(level=0).add_prefix('Sum')],1) Out[669]: a b c d e Sum-1 Sum0 Sum1 0 0 0 -1 1 -1 2 2 1 1 0 1 -1 1 -1 2 1 2 2 -1 0 -1 1 1 2 1 2 3 -1 1 1 -1 1 2 0 3 4 1 0 1 -1 1 1 1 3 5 1 0 0 0 1 0 3 2 6 1 1 0 0 -1 1 2 2 7 1 1 -1 0 0 1 2 2 </code></pre> <p><strong><em>Another method may also solve it, and it does not need the conversion to str (though it does use a lambda, which the question wanted to avoid):</em></strong></p> <pre><code>df.apply(lambda x : x.value_counts(),1).fillna(0) Out[674]: -1 0 1 0 2.0 2.0 1.0 1 2.0 1.0 2.0 2 2.0 1.0 2.0 3 2.0 0.0 3.0 4 1.0 1.0 3.0 5 0.0 3.0 2.0 6 1.0 2.0 2.0 7 1.0 2.0 2.0 </code></pre>
python|pandas|numpy|dataframe|lambda
2
377,508
47,183,159
How to set weights in Keras with a numpy array?
<p>I am having trouble with the Keras backend functions for setting values. I am trying to convert a model from PyTorch to Keras and am trying to set the weights of the Keras model, but the weights do not appear to be getting set. Note: I am not actually setting with np.ones just using that for an example.</p> <p>I have tried...</p> <p>Loading an existing model</p> <pre><code>import keras from keras.models import load_model, Model model = load_model(model_dir+file_name) keras_layer = [layer for layer in model.layers if layer.name=='conv2d_1'][0] </code></pre> <p>Creating a simple model</p> <pre><code>img_input = keras.layers.Input(shape=(3,3,3)) x = keras.layers.Conv2D(1, kernel_size=1, strides=1, padding="valid", use_bias=False, name='conv1')(img_input) model = Model(img_input, x) keras_layer = [layer for layer in model.layers if layer.name=='conv1'][0] </code></pre> <p>Then using set_weights or set_value</p> <pre><code>keras_layer.set_weights([np.ones((1, 1, 3, 1))]) </code></pre> <p>or...</p> <pre><code>K.batch_set_value([(weight,np.ones((1, 1, 3, 1))) for weight in keras_layer.weights]) </code></pre> <p>afterwards I call either one of the following:</p> <pre><code>K.batch_get_value([weight for weight in keras_layer.weights]) keras_layer.get_weights() </code></pre> <p>And None of the weights appear to have been set. The same values as before are returned.</p> <pre><code>[array([[[[ 1.61547325e-06], [ 2.97779252e-06], [ 1.50160542e-06]]]], dtype=float32)] </code></pre> <p>How do I set the weights of a layer in Keras with a numpy array of values?</p>
<p>What is <code>keras_layer</code> in your code?</p> <p>You can set weights these ways:</p> <pre><code>model.layers[i].set_weights(listOfNumpyArrays) model.get_layer(layerName).set_weights(...) model.set_weights(listOfNumpyArrays) </code></pre> <p>Where <code>model</code> is an instance of an existing model. You can see the expected length of the list and its array shapes using the method <code>get_weights()</code> from the same instances above.</p>
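<p>For reference, a minimal runnable sketch with the simple model from the question (layer name <code>conv1</code>); setting and reading back the weights should round-trip:</p> <pre><code>import numpy as np
import keras
from keras.models import Model

img_input = keras.layers.Input(shape=(3, 3, 3))
x = keras.layers.Conv2D(1, kernel_size=1, strides=1, padding="valid",
                        use_bias=False, name='conv1')(img_input)
model = Model(img_input, x)

w = np.ones((1, 1, 3, 1))  # (kernel_h, kernel_w, in_channels, out_channels)
model.get_layer('conv1').set_weights([w])
print(model.get_layer('conv1').get_weights()[0].ravel())  # expect [1. 1. 1.]
</code></pre>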
python|tensorflow|machine-learning|keras|deep-learning
52
377,509
47,521,393
Best way in Pytorch to upsample a Tensor and transform it to rgb?
<p>For a nice output in Tensorboard I want to show a batch of input images, corresponding target masks and output masks in a grid. Input images have a different size than the masks. Furthermore the images are obviously RGB. From a batch of e.g. 32 or 64 I only want to show the first 4 images.</p> <p>After some fiddling around I came up with the following example code. Good thing: It works. But I am really not sure if I missed something in Pytorch. It just looks much longer than I expected. Especially the upsampling and transformation to RGB seems wild. But the other transformations I found would not work for a whole batch.</p> <pre><code>import torch from torch.autograd import Variable import torch.nn.functional as FN import torchvision.utils as vutils from tensorboardX import SummaryWriter import time batch = 32 i_size = 192 o_size = 112 nr_imgs = 4 # Tensorboard init writer = SummaryWriter('runs/' + time.strftime('%Y%m%d_%H%M%S')) input_image=Variable(torch.rand(batch,3,i_size,i_size)) target_mask=Variable(torch.rand(batch,o_size,o_size)) output_mask=Variable(torch.rand(batch,o_size,o_size)) # upsample target_mask, add dim to have gray2rgb tm = FN.upsample(target_mask[:nr_imgs,None], size=[i_size, i_size], mode='bilinear') tm = torch.cat( (tm,tm,tm), dim=1) # grayscale plane to rgb # upsample output_mask, add dim to have gray2rgb om = FN.upsample(output_mask[:nr_imgs,None], size=[i_size, i_size], mode='bilinear') om = torch.cat( (om,om,om), dim=1) # grayscale plane to rgb # add up all images and make grid imgs = torch.cat( ( input_image[:nr_imgs].data, tm.data, om.data ) ) x = vutils.make_grid(imgs, nrow=nr_imgs, normalize=True, scale_each=True) # Tensorboard img output writer.add_image('Image', x, 0) </code></pre> <p><strong>EDIT</strong>: Found <a href="https://github.com/pytorch/vision/issues/157" rel="nofollow noreferrer">this</a> on Pytorch's issues list. It's about <code>Batch support for Transform</code>. Seems there are no plans to add batch transforms in the future. So my current code might be the best solution for the time being, anyway?</p>
<p>Maybe you can just convert your tensors to numpy arrays (<code>.data.cpu().numpy()</code>) and use OpenCV to do the upsampling? The OpenCV implementation should be quite fast.</p>
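<p>A minimal sketch of that idea for a single mask (not from the original answer; it assumes a 2D float mask as in the question, and note that <code>cv2.resize</code> takes the target size as (width, height) while the gray-to-RGB step is just a channel repeat):</p> <pre><code>import cv2
import numpy as np

mask = output_mask[0].data.cpu().numpy()              # (112, 112)
mask_up = cv2.resize(mask, (192, 192))                # bilinear interpolation by default
mask_rgb = np.repeat(mask_up[None, :, :], 3, axis=0)  # (3, 192, 192)
</code></pre>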
python|python-3.x|deep-learning|tensorboard|pytorch
2
377,510
47,384,508
KeyError: "The name 'predictions/accuracy/Mean:0' refers to a Tensor which does not exist...."
<p>I am using the TensorFlow 1.4 CPU version on Ubuntu 16.04. I have previously trained a convnet model on the MNIST dataset.</p> <p>Now I want to access the model again and predict accuracy on mnist.test.images. I successfully load the model:</p> <pre><code>import tensorflow as tf from tensorflow.examples.tutorials.mnist import input_data mnist = input_data.read_data_sets('MNIST_data', one_hot=True) images = mnist.test.images[0:1000] labels = mnist.test.labels[0:1000] sess=tf.Session() saver = tf.train.import_meta_graph('tradConv_mnistModel/tradConvMnist-10000.meta') saver.restore(sess, tf.train.latest_checkpoint('./tradConv_mnistModel')) graph = tf.get_default_graph() </code></pre> <p>In the previously trained graph I printed the name of the accuracy tensor, which is:</p> <pre><code>with tf.name_scope('predictions'): correct_prediction = tf.equal( tf.argmax(y_conv, 1), tf.argmax(y_, 1)) with tf.name_scope('accuracy'): accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) print(accuracy.name) "predictions/accuracy/Mean:0" </code></pre> <p>But when I try to evaluate accuracy with the newly loaded graph I get the following error:</p> <pre><code>tensor_name = "predictions/accuracy/Mean:0" accuracy = graph.get_tensor_by_name(tensor_name) print("test accuracy %g"%accuracy.eval(feed_dict={ x: images, y_: labels, keep_prob: 1.0})) KeyError: "The name 'predictions/accuracy/Mean:0' refers to a Tensor which does not exist. The operation, 'predictions/accuracy/Mean', does not exist in the graph." </code></pre> <p>I also printed all the tensor names in the saved graph, and indeed that tensor name does not exist.</p> <p>So my question is: how can I evaluate the accuracy of a trained model on test data after reloading?</p>
<p>The method <code>tf.train.Saver().restore</code> works, but the other method, <code>tf.train.import_meta_graph('tradConv_mnistModel/tradConvMnist-10000.meta')</code>, does not.</p>
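<p>A sketch of that approach (assuming the code that originally built the graph is still available, so the variable names match the checkpoint; <code>build_graph()</code> is a hypothetical helper wrapping that original model-building code):</p> <pre><code>import tensorflow as tf

# Rebuild the same graph instead of importing the meta graph.
x, y_, keep_prob, accuracy = build_graph()  # hypothetical: returns the original placeholders and the accuracy op

saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('./tradConv_mnistModel'))
    print("test accuracy %g" % sess.run(accuracy,
                                        feed_dict={x: images, y_: labels, keep_prob: 1.0}))
</code></pre>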
tensorflow|mnist
-1
377,511
47,355,141
DatabaseError: ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
<p>I am trying to read data from SQL server into pandas data frame. Below is the code.</p> <pre><code>def get_data(size): con = pyodbc.connect(r'driver={SQL Server}; server=SPROD_RPT01; database=Reporting') cur = con.cursor() db_cmd = &quot;select distinct top %s * from dbo.KrishAnalyticsAllCalls&quot; %size res = cur.execute(db_cmd) sql_out = pd.read_sql_query(db_cmd, con, chunksize=10**6) frames = [chunk for chunk in sql_out] df_sql = pd.concat(frames) return df_sql df = get_data(5000000) </code></pre> <p>I am getting following error:</p> <blockquote> <p>pandas.io.sql.DatabaseError: Execution failed on sql 'select distinct top 500000 * from dbo.KrishAnalyticsAllCalls': ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')</p> </blockquote> <p>I had executed the function before and interrupted the execution with <code>ctrl+k</code> as I wanted to make a change in the function. Now, after making the change when I'm trying to execute the function I am getting the above error.</p> <p>How can I kill that connection/IPython Kernel since I don't know of any IPython Kernel running executing the query in the function?</p>
<p>I was facing the same issue. It was fixed when I used the <code>fetchall()</code> function. The following is the code that I used:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import pypyodbc as pyodbc def connect(connection_string, query): con = pyodbc.connect(connection_string) cursor = con.cursor() print('Connection to db successful') results = cursor.execute(query).fetchall() df = pd.read_sql(query, con) return df, results </code></pre> <p>Using <code>cursor.execute(query).fetchall()</code> instead of <code>cursor.execute(query)</code> resolved it, since <code>fetchall()</code> consumes the pending result set, so the connection is no longer busy. Hope this helps.</p>
python|sql-server|pandas|pyodbc
9
377,512
47,519,802
does tensorflow convert_to_tensor do memory copy?
<p>I am creating a TensorFlow constant from an ndarray built from a list object. My understanding was that the tensor itself wouldn't make a memory copy of the underlying data, but would create a Python object using the same underlying ndarray data. However, after running a little test, it seems like it does copy the data:</p> <pre><code>def mem_test(): printMemUsed("before r list") r = ['Supporter'] * 100000000 printMemUsed("after r list") r_arr = np.array(r) printMemUsed("after nd_array") tf.convert_to_tensor(r_arr) printMemUsed("after tensor conversion") def printMemUsed(discript): print("{}:\t{}".format(discript, psutil.virtual_memory().used)) </code></pre> <p>Here's the output:</p> <pre><code>before r list: 727310336 -&gt; 727 Mb after r list: 1528782848 -&gt; 1.5 GB after nd_array: 2430574592 -&gt; 2.4 GB after tensor conversion: 8925667328 -&gt; 8.9 GB </code></pre> <p>Edit: r_arr had a dtype of 'S9' (null-terminated string). After changing the input array elements to type 'unicode' (U9), the virtual memory consumption bumped up to 5 GB after nd_array.</p>
<p><code>tf.convert_to_tensor</code> uses a <code>_tensor_conversion_func_registry</code> where conversion functions are registered for different input types. For Python lists, tuples and numpy ndarrays this happens in <a href="https://github.com/tensorflow/tensorflow/blob/5a810ea2b3ff056a8d7dc5eef19c8193ffc0f8f6/tensorflow/python/framework/constant_op.py#L237" rel="nofollow noreferrer">constant_op.py</a>. And according to <a href="https://stackoverflow.com/a/42450418/5304427">this</a> answer, <code>tf.constant</code> creates at least two copies:</p> <blockquote> <p>Why is the value of a tf.constant() stored multiple times in memory?</p> <p>Because data for a constant tensor is embedded into the graph definition. This means this data is stored both in the client, which maintains the graph definition, and in the runtime, which allocates its own memory for all tensors.</p> </blockquote> <p>Edit:</p> <p>According to <a href="https://stackoverflow.com/questions/35687678/using-a-pre-trained-word-embedding-word2vec-or-glove-in-tensorflow/35688187#35688187">this</a> answer, one option is to use <code>tf.Variable</code> and initialize it with a <code>tf.placeholder</code>, as sketched below.</p>
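<p>A minimal sketch of that placeholder-initialized variable pattern (assuming the <code>r_arr</code> from the question; the data then lives in the numpy array and in the runtime's variable buffer, but is not also embedded in the graph definition):</p> <pre><code>import tensorflow as tf

ph = tf.placeholder(tf.string, shape=r_arr.shape)
var = tf.Variable(ph, trainable=False, collections=[])

sess = tf.Session()
sess.run(var.initializer, feed_dict={ph: r_arr})  # data is fed, not baked into the graph
</code></pre>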
python|numpy|tensorflow
0
377,513
47,329,860
Pandas, are there any faster ways to update values?
<p>Currently, my table has over 10000000 records, and there is a column named <code>ID</code>, and I want to update column named '3rd_col' with a new value if the <code>ID</code> is in the given list.</p> <p>I use <code>.loc</code> and here is my code</p> <pre><code>for _id in given_ids: df.loc[df.ID == _id, '3rd_col'] = new_value </code></pre> <p>But the performance of the above code is slow, how can I improve the performance of updating value?</p> <p>Sorry, here I want to be more specific on my problem, different id has different values to be assigned based on a function and there are about 4 columns to be assigned.</p> <pre><code>for _id in given_ids: df.loc[df.ID == _id, '3rd_col'] = return_new_val_1(id) df.loc[df.ID == _id, '4rd_col'] = return_new_val_2(id) df.loc[df.ID == _id, '5rd_col'] = return_new_val_3(id) df.loc[df.ID == _id, '6rd_col'] = return_new_val_4(id) </code></pre>
<p>You can create <code>dictionary</code> first and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>replace</code></a>:</p> <pre><code>#sample function def return_new_val(x): return x * 3 given_ids = list('abc') d = {_id: return_new_val(_id) for _id in given_ids} print (d) {'a': 'aaa', 'c': 'ccc', 'b': 'bbb'} df = pd.DataFrame({'ID':list('abdefc'), 'M':[4,5,4,5,5,4]}) df['3rd_col'] = df['ID'].replace(d) print (df) ID M 3rd_col 0 a 4 aaa 1 b 5 bbb 2 d 4 d 3 e 5 e 4 f 5 f 5 c 4 ccc </code></pre> <p>Or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a>, but then get <code>NaN</code>s for no match:</p> <pre><code>df['3rd_col'] = df['ID'].map(d) print (df) ID M 3rd_col 0 a 4 aaa 1 b 5 bbb 2 d 4 NaN 3 e 5 NaN 4 f 5 NaN 5 c 4 ccc </code></pre> <p>EDIT:</p> <p>If need append data by multiple functions first create new <code>DataFrame</code> and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.join.html" rel="nofollow noreferrer"><code>join</code></a> to original:</p> <pre><code>def return_new_val1(x): return x * 2 def return_new_val2(x): return x * 3 given_ids = list('abc') df2 = pd.DataFrame({'ID':given_ids}) df2['3rd_col'] = df2['ID'].map(return_new_val1) df2['4rd_col'] = df2['ID'].map(return_new_val2) df2 = df2.set_index('ID') print (df2) 3rd_col 4rd_col ID a aa aaa b bb bbb c cc ccc </code></pre> <hr> <pre><code>df = pd.DataFrame({'ID':list('abdefc'), 'M':[4,5,4,5,5,4]}) df = df.join(df2, on='ID') print (df) ID M 3rd_col 4rd_col 0 a 4 aa aaa 1 b 5 bb bbb 2 d 4 NaN NaN 3 e 5 NaN NaN 4 f 5 NaN NaN 5 c 4 cc ccc #bur replace NaNs by values in `ID` cols = ['3rd_col','4rd_col'] df[cols] = df[cols].mask(df[cols].isnull(), df['ID'], axis=0) print (df) ID M 3rd_col 4rd_col 0 a 4 aa aaa 1 b 5 bb bbb 2 d 4 d d 3 e 5 e e 4 f 5 f f 5 c 4 cc ccc </code></pre>
python|pandas
7
377,514
47,433,199
How can I feed a batch into LSTM without reordering them by length in PyTorch?
<p>I am new to Pytorch and I have got into some trouble. I want to build a rank-model to judge the similarity of question and its answers (including right answers and wrong answers). And I use LSTM as an encoder. </p> <p>There are two LSTMs in my model and they share the weights. So my model’s input is two sequences (question and answer). But if I use batch, reordering will disrupt the correspondence between questions and answers. What should I do?</p>
<p>Maybe a starting point could be to use a similar RNN wrapper to the one here: <a href="https://github.com/facebookresearch/DrQA/blob/master/drqa/reader/layers.py#L20" rel="nofollow noreferrer">https://github.com/facebookresearch/DrQA/blob/master/drqa/reader/layers.py#L20</a>. You can encode question and answer separately (this module will pack and unpack internally and will take care of the sorting), and then you can work on the encoded representations.</p>
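<p>A sketch of the underlying sort-pack-unsort trick (my own illustration, not taken from the wrapper linked above): sort each batch by length only for the packed LSTM call, then restore the original order, so question-answer pairs stay aligned:</p> <pre><code>import torch
from torch.nn.utils.rnn import pack_padded_sequence

def encode(lstm, padded, lengths):
    # padded: (batch, max_len, dim); lengths: (batch,) original sequence lengths
    lengths, sort_idx = lengths.sort(descending=True)
    _, unsort_idx = sort_idx.sort()                  # inverse permutation
    packed = pack_padded_sequence(padded[sort_idx], lengths.tolist(), batch_first=True)
    _, (h, _) = lstm(packed)                         # h: (num_layers, batch, hidden)
    return h[-1][unsort_idx]                         # back to the original batch order
</code></pre> <p>Run it once for the questions and once for the answers with the same (shared) LSTM; because each call unsorts its own output, row <code>i</code> of both encodings still refers to the same pair.</p>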
pytorch
1
377,515
47,535,406
How does numpy allocate memory for nested array?
<p>The following program creates a large array from a nested list of arrays:</p> <pre><code>import numpy as np a = np.arange(6).reshape(2, 3) nested_list = [[a, a + 1], [a + 2, a + 3]] b = np.array(nested_list) </code></pre> <p>Does np.array pre-allocate memory for only once for the result before copying data into the memory in this case?</p> <p>Or, this is similar to:</p> <pre><code>c = np.vstack([np.hstack([a, a + 1]), np.hstack([a + 2, a + 3])]) </code></pre> <p>which would pre-allocate memory for 3 times?</p> <pre><code>&gt;&gt;&gt; b array([[[[0, 1, 2], [3, 4, 5]], [[1, 2, 3], [4, 5, 6]]], [[[2, 3, 4], [5, 6, 7]], [[3, 4, 5], [6, 7, 8]]]]) &gt;&gt;&gt; c array([[0, 1, 2, 1, 2, 3], [3, 4, 5, 4, 5, 6], [2, 3, 4, 3, 4, 5], [5, 6, 7, 6, 7, 8]]) &gt;&gt;&gt; b.shape (2, 2, 2, 3) &gt;&gt;&gt; b.reshape(2*2, 2*3) array([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 6], [2, 3, 4, 5, 6, 7], [3, 4, 5, 6, 7, 8]]) </code></pre>
<p><code>nested_list = [[a, a + 1], [a + 2, a + 3]]</code> produces 3 new arrays (the sums) plus a list of pointers to those arrays. That's just basic Python interpreter action.</p> <p><code>b = np.array(nested_list)</code>: <code>np.array</code> is a complex compiled function, so without some serious digging it is hard to tell exactly what it does. My impression from previous use, and especially errors when components don't exactly match in size, is that it scans the input to determine the highest-dimensional array that it can create, and then plugs the pieces in, with type conversions if needed.</p> <p>It's easy to do time comparisons; harder to track memory use. But assuming that data copying is the biggest time consumer, time tests are probably a good proxy for memory use. And unless we are hitting memory errors, we are usually more concerned with time than memory use.</p> <pre><code>In [565]: alist = [[a,a+1],[a+2,a+3]] In [566]: allist = [[a.tolist(), (a+1).tolist()],[(a+2).tolist(), (a+3).tolist()]] In [567]: timeit np.array(alist) 6.74 µs ± 63.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) In [568]: timeit np.array(allist) 9.92 µs ± 286 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre> <p>Working from the nested list of arrays is a bit faster than working from the pure list equivalent. It may be copying those arrays to the target as blocks.</p> <p>Individual stacks is noticeably slower, though it also creates the <code>a+n</code> arrays as well:</p> <pre><code>In [569]: timeit c = np.vstack([np.hstack([a, a + 1]), np.hstack([a + 2, a + 3])]) 37.8 µs ± 39 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p><code>np.stack</code> acts the same as <code>np.array</code> (with the default axis). It too uses <code>concatenate</code>:</p> <pre><code>In [570]: timeit np.stack(alist) 28.7 µs ± 262 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p>Including the <code>a+n</code> calculations into the timing may be fairer:</p> <pre><code>In [571]: %%timeit ...: alist = [[a,a+1],[a+2,a+3]] ...: np.stack(alist) ...: 38.6 µs ± 509 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each) In [572]: %%timeit ...: alist = [[a,a+1],[a+2,a+3]] ...: np.array(alist) ...: 15.7 µs ± 177 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each) </code></pre> <hr> <p>The new <code>np.block</code> was mentioned - it produces something different and is quite a bit slower</p> <pre><code>In [573]: np.block(alist) Out[573]: array([[0, 1, 2, 1, 2, 3], [3, 4, 5, 4, 5, 6], [2, 3, 4, 3, 4, 5], [5, 6, 7, 6, 7, 8]]) In [574]: timeit np.block(alist) 126 µs ± 2.39 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each) </code></pre> <p><code>block</code> produces the same 2d array as the nested stacks:</p> <pre><code>np.vstack([np.hstack([a, a + 1]), np.hstack([a + 2, a + 3])]) </code></pre> <p><code>np.array</code> and <code>np.stack</code> produce a 4d array. It can be reshaped to 2d, but the order of elements is different. To match we'd need to do some transposing before reshaping. e.g.</p> <pre><code>In [590]: np.array(alist).transpose(0,2,1,3).reshape(4,6) Out[590]: array([[0, 1, 2, 1, 2, 3], [3, 4, 5, 4, 5, 6], [2, 3, 4, 3, 4, 5], [5, 6, 7, 6, 7, 8]]) </code></pre>
python|numpy
1
377,516
47,373,706
EnumDescriptorProto error while importing tensorflow
<p>I'm trying to run the following code:</p> <pre><code>import tensorflow as tf print("Hello TensorFlow version", tf.__Version__) </code></pre> <p>It fires the following error:</p> <blockquote> <p>/Users/anaconda/envs/cnn/bin/python /Users/Downloads/rude-carnie/version.py Traceback (most recent call last): File "/Users/Downloads/rude-carnie/version.py", line 1, in import tensorflow as tf File "/Users/anaconda/envs/cnn/lib/python3.6/site-packages/tensorflow/__init__.py", line 24, in from tensorflow.python import * File "/Users/anaconda/envs/cnn/lib/python3.6/site-packages/tensorflow/python/__init__.py", line 52, in from tensorflow.core.framework.graph_pb2 import * File "/Users/anaconda/envs/cnn/lib/python3.6/site-packages/tensorflow/core/framework/graph_pb2.py", line 10, in from google.protobuf import descriptor_pb2 File "/Users/anaconda/envs/cnn/lib/python3.6/site-packages/google/protobuf/descriptor_pb2.py", line 735, in options=None, file=DESCRIPTOR), File "/Users/anaconda/envs/cnn/lib/python3.6/site-packages/google/protobuf/descriptor.py", line 501, in __new__ return _message.default_pool.FindFieldByName(full_name) KeyError: "Couldn't find field google.protobuf.EnumDescriptorProto.EnumReservedRange.start"</p> </blockquote> <p>How can I sort this out?</p>
<p>Today I came across the same problem. Uninstall all the protobuf packages on your machine (both from anaconda and pip):</p> <pre><code>pip uninstall protobuf conda remove protobuf </code></pre> <p>Then install protobuf using pip:</p> <pre><code>pip install protobuf==3.5.0.post1 </code></pre> <p>Source : <a href="https://github.com/tensorflow/tensorflow/issues/14689" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/14689</a></p>
python|tensorflow
0
377,517
47,516,286
Append arrays of different dimensions to get a single array
<p>I have three vectors (numpy arrays), <code>vector_1, vector_2, vector_3</code>, as follows:</p> <p>Dimension(vector1)=(200,2048)</p> <p>Dimension(vector2)=(200,8192)</p> <p>Dimension(vector3)=(200,32768)</p> <p>I would like to append these vectors to get <strong>vector_4</strong>:</p> <p><strong>Dimension(vector4)= (200,2048+8192+32768)= (200, 43008)</strong></p> <p>appending vector1, then vector2, then vector3, respectively.</p> <p>I tried the following:</p> <pre><code>vector4=numpy.concatenate((vector1,vector2,vector3),axis=0) ValueError: all the input array dimensions except for the concatenation axis must match exactly </code></pre> <p>and</p> <pre><code>vector4=numpy.append(vector4,[vector1,vector2,vectors3],axis=0) TypeError: append() missing 1 required positional argument: 'values' </code></pre>
<p>I believe you are looking for <code>numpy.hstack</code>.</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.arange(4).reshape(2,2) &gt;&gt;&gt; b = np.arange(6).reshape(2,3) &gt;&gt;&gt; c = np.arange(8).reshape(2,4) &gt;&gt;&gt; a array([[0, 1], [2, 3]]) &gt;&gt;&gt; b array([[0, 1, 2], [3, 4, 5]]) &gt;&gt;&gt; c array([[0, 1, 2, 3], [4, 5, 6, 7]]) &gt;&gt;&gt; np.hstack((a,b,c)) array([[0, 1, 0, 1, 2, 0, 1, 2, 3], [2, 3, 3, 4, 5, 4, 5, 6, 7]]) </code></pre>
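<p>As a side note (not part of the original answer): the question's own <code>numpy.concatenate</code> attempt also works once the axis is changed from 0 to 1, since the arrays share their first dimension:</p> <pre><code>vector4 = numpy.concatenate((vector1, vector2, vector3), axis=1)  # shape (200, 43008)
</code></pre>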
python|arrays|numpy|append
1
377,518
47,286,547
Is there a way in pd.read_csv to replace NaN value with other character?
<p>I have some data in a csv file. Because it is collected from a machine, all values should be numbers, but NaN values exist in some lines, and the machine auto-replaces these NaN values with the string '-'.</p> <p>My question is: how do I set the params of <strong><em>pd.read_csv()</em></strong> to automatically replace '-' values with zero when reading the csv file?</p>
<p>While reading the <code>csv</code> file you can use the parameter <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="noreferrer">na_values</a>:</p> <pre><code>df = pd.read_csv('file.csv', na_values='-') </code></pre> <p>Edit: you can then convert the resulting NaNs to 0 by:</p> <pre><code>df.fillna(0, inplace=True) </code></pre>
python|pandas|csv
13
377,519
47,335,625
Reshape array using numpy - ValueError: cannot reshape array
<p>I have an array of floats <code>vec</code> which I want to reshape</p> <pre><code>vec.shape &gt;&gt;&gt; (3,) len(vec[0]) # all 3 rows of vec have 150 columns &gt;&gt;&gt; 150 np.reshape(vec, (3,150)) &gt;&gt;&gt; ValueError: cannot reshape array of size 3 into shape (3,150) </code></pre> <p>but I get the error above.</p> <p>What's going wrong? How can I fix this? Thanks!</p>
<p>The <code>vec.shape</code> means that the array has 3 items. But they are dtype object, that is, pointers to items else where in memory.</p> <p>Apparently the items are arrays themselves. One of the <code>concatenate</code> or <code>stack</code> functions can join them into one array, provided the dimensions match.</p> <p>I'd suggest printing</p> <pre><code>[x.shape for x in vec] </code></pre> <p>to verify the shape. And of course make sure that those sub arrays are not, themselves object dtypes.</p> <hr> <pre><code>In [261]: vec = np.empty(3, object) In [262]: vec[:] = [np.arange(10), np.ones(10), np.zeros(10)] In [263]: vec Out[263]: array([array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]), array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])], dtype=object) In [264]: vec.reshape(3,10) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-264-cd555975140c&gt; in &lt;module&gt;() ----&gt; 1 vec.reshape(3,10) ValueError: cannot reshape array of size 3 into shape (3,10) In [265]: [x.shape for x in vec] Out[265]: [(10,), (10,), (10,)] In [266]: np.stack(vec) Out[266]: array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) In [267]: np.concatenate(vec) Out[267]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]) In [268]: np.concatenate(vec).reshape(3,10) Out[268]: array([[ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], [ 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.], [ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]]) </code></pre> <hr> <p><code>len</code> is not a good test; use <code>shape</code>. For example if I change one array to be 2d</p> <pre><code>In [269]: vec[1]=np.ones((10,1)) In [270]: vec Out[270]: array([array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), array([[ 1.], [ 1.], [ 1.], [ 1.], [ 1.], [ 1.], [ 1.], [ 1.], [ 1.], [ 1.]]), array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.])], dtype=object) In [271]: [len(x) for x in vec] Out[271]: [10, 10, 10] In [272]: [x.shape for x in vec] Out[272]: [(10,), (10, 1), (10,)] In [273]: np.concatenate(vec) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-273-a253d8b9b25d&gt; in &lt;module&gt;() ----&gt; 1 np.concatenate(vec) ValueError: all the input arrays must have same number of dimensions </code></pre>
python|arrays|numpy|reshape
3
377,520
47,085,662
Merge histograms with different ranges
<p>Is it any fast way to merge two numpy histograms with different bin ranges and bin number?</p> <p>For example:</p> <pre><code>x = [1,2,2,3] y = [4,5,5,6] a = np.histogram(x, bins=10) # a[0] = [1, 0, 0, 0, 0, 2, 0, 0, 0, 1] # a[1] = [ 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2, 2.4, 2.6, 2.8, 3. ] b = np.histogram(y, bins=5) # b[0] = [1, 0, 2, 0, 1] # b[1] = [ 4. , 4.4, 4.8, 5.2, 5.6, 6. ] </code></pre> <p>Now I want to have some function like this: </p> <pre><code>def merge(a, b): # some actions here # return merged_a_b_values, merged_a_b_bins </code></pre> <p>Actually I have not <code>x</code> and <code>y</code>, <code>a</code> and <code>b</code> are known only. But the result of <code>merge(a, b)</code> must be equal to <code>np.histogram(x+y, bins=10)</code>:</p> <pre><code>m = merge(a, b) # m[0] = [1, 0, 2, 0, 1, 0, 1, 0, 2, 1] # m[1] = [ 1. , 1.5, 2. , 2.5, 3. , 3.5, 4. , 4.5, 5. , 5.5, 6. ] </code></pre>
<p>I'd actually have added a comment to dangom's answer, but I lack the reputation required. I'm a little confused by your example. You're plotting the histogram of the histogram bins if I'm not mistaken. It should rather be this, right?</p> <pre><code>plt.figure() plt.plot(a[1][:-1], a[0], marker='.', label='a') plt.plot(b[1][:-1], b[0], marker='.', label='b') plt.plot(c[1][:-1], c[0], marker='.', label='c') plt.legend() plt.show() </code></pre> <p>Also a note to your suggestion for combining the histogram. You are of course right, that there's no unique solution as you simply don't know, where the samples would've have been in the finer grid you use for the combination. When having two histograms, which have a significantly differing bin width the suggested merging function may result in a sparse and artificial looking histogram.</p> <p>I tried combining the histograms by interpolation (assuming the samples within the count bin were distributed uniformly in the original bin - which is of course also only an assumption). This leads however to a more natural looking result, at least for data sampled from distributions I typically encounter.</p> <pre><code>import numpy as np def merge_hist(a, b): edgesa = a[1] edgesb = b[1] da = edgesa[1]-edgesa[0] db = edgesb[1]-edgesb[0] dint = np.min([da, db]) min = np.min(np.hstack([edgesa, edgesb])) max = np.max(np.hstack([edgesa, edgesb])) edgesc = np.arange(min, max, dint) def interpolate_hist(edgesint, edges, hist): cumhist = np.hstack([0, np.cumsum(hist)]) cumhistint = np.interp(edgesint, edges, cumhist) histint = np.diff(cumhistint) return histint histaint = interpolate_hist(edgesc, edgesa, a[0]) histbint = interpolate_hist(edgesc, edgesb, b[0]) c = histaint + histbint return c, edgesc </code></pre> <p>An example for two gaussian distributions:</p> <pre><code>import numpy as np a = 5 + 1*np.random.randn(100) b = 10 + 2*np.random.randn(100) hista, edgesa = np.histogram(a, bins=10) histb, edgesb = np.histogram(b, bins=5) histc, edgesc = merge_hist([hista, edgesa], [histb, edgesb]) plt.figure() width = edgesa[1]-edgesa[0] plt.bar(edgesa[:-1], hista, width=width) width = edgesb[1]-edgesb[0] plt.bar(edgesb[:-1], histb, width=width) plt.figure() width = edgesc[1]-edgesc[0] plt.bar(edgesc[:-1], histc, width=width) plt.show() </code></pre> <p>I, however, am no statistician, so please let me know if the suggestes approach is viable.</p>
python|numpy|merge|histogram
3
377,521
47,482,009
Pandas rolling window to return an array
<p>Here is a sample code. </p> <pre><code>df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB')) df['C'] = df.B.rolling(window=3) </code></pre> <p>Output:</p> <pre><code> A B C 0 -0.108897 1.877987 Rolling [window=3,center=False,axis=0] 1 -1.276055 -0.424382 Rolling [window=3,center=False,axis=0] 2 1.578561 -1.094649 Rolling [window=3,center=False,axis=0] 3 -0.443294 1.683261 Rolling [window=3,center=False,axis=0] 4 0.674124 0.281077 Rolling [window=3,center=False,axis=0] 5 0.587773 0.697557 Rolling [window=3,center=False,axis=0] 6 -0.258038 -1.230902 Rolling [window=3,center=False,axis=0] 7 -0.443269 0.647107 Rolling [window=3,center=False,axis=0] 8 0.347187 0.753585 Rolling [window=3,center=False,axis=0] 9 -0.369179 0.975155 Rolling [window=3,center=False,axis=0] </code></pre> <p>I want my 'C' column to be an array like [0.1231, -1.132, 0.8766]. I tried using rolling apply but in vain.</p> <p>Expected Output:</p> <pre><code> A B C 0 -0.108897 1.877987 [] 1 -1.276055 -0.424382 [] 2 1.578561 -1.094649 [-1.094649, -0.424382, 1.877987] 3 -0.443294 1.683261 [1.683261, -1.094649, -0.424382] 4 0.674124 0.281077 [0.281077, 1.683261, -1.094649] 5 0.587773 0.697557 [0.697557, 0.281077, 1.683261] 6 -0.258038 -1.230902 [-1.230902, 0.697557, 0.281077] 7 -0.443269 0.647107 [0.647107, -1.230902, 0.697557] 8 0.347187 0.753585 [0.753585, 0.647107, -1.230902] 9 -0.369179 0.975155 [0.975155, 0.753585, 0.647107] </code></pre>
<p>Since pandas <code>1.1</code> rolling objects are iterable.</p> <p>For a list of lists:</p> <pre><code>df['C'] = [window.to_list() for window in df.B.rolling(window=3)] </code></pre> <p>For a Series of Series's do:</p> <pre><code>df['C'] = pd.Series(df.B.rolling(window=3)) </code></pre> <p>Also checkout the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rolling.html" rel="noreferrer">rolling function</a> for parameters.</p>
python|pandas|numpy|dataframe
23
377,522
11,113,649
How to run a .py module?
<p>I've got zero experience with Python. I have looked around some tutorial materials, but it seems difficult to understand a advanced code. So I came here for a more specific answer. For me the mission is to redo the code in my computer. </p> <p>Here is the scenario:</p> <p>I'm a graduate student studying tensor factorization in relation learning. A paper[1] providing a code to run this algorithm, as follows:</p> <pre><code>import logging, time from numpy import dot, zeros, kron, array, eye, argmax from numpy.linalg import qr, pinv, norm, inv from scipy.linalg import eigh from numpy.random import rand __version__ = "0.1" __all__ = ['rescal', 'rescal_with_random_restarts'] __DEF_MAXITER = 500 __DEF_INIT = 'nvecs' __DEF_PROJ = True __DEF_CONV = 1e-5 __DEF_LMBDA = 0 _log = logging.getLogger('RESCAL') def rescal_with_random_restarts(X, rank, restarts=10, **kwargs): """ Restarts RESCAL multiple time from random starting point and returns factorization with best fit. """ models = [] fits = [] for i in range(restarts): res = rescal(X, rank, init='random', **kwargs) models.append(res) fits.append(res[2]) return models[argmax(fits)] def rescal(X, rank, **kwargs): """ RESCAL Factors a three-way tensor X such that each frontal slice X_k = A * R_k * A.T. The frontal slices of a tensor are N x N matrices that correspond to the adjecency matrices of the relational graph for a particular relation. For a full description of the algorithm see: Maximilian Nickel, Volker Tresp, Hans-Peter-Kriegel, "A Three-Way Model for Collective Learning on Multi-Relational Data", ICML 2011, Bellevue, WA, USA Parameters ---------- X : list List of frontal slices X_k of the tensor X. The shape of each X_k is ('N', 'N') rank : int Rank of the factorization lmbda : float, optional Regularization parameter for A and R_k factor matrices. 0 by default init : string, optional Initialization method of the factor matrices. 'nvecs' (default) initializes A based on the eigenvectors of X. 'random' initializes the factor matrices randomly. proj : boolean, optional Whether or not to use the QR decomposition when computing R_k. True by default maxIter : int, optional Maximium number of iterations of the ALS algorithm. 500 by default. conv : float, optional Stop when residual of factorization is less than conv. 
1e-5 by default Returns ------- A : ndarray array of shape ('N', 'rank') corresponding to the factor matrix A R : list list of 'M' arrays of shape ('rank', 'rank') corresponding to the factor matrices R_k f : float function value of the factorization iter : int number of iterations until convergence exectimes : ndarray execution times to compute the updates in each iteration """ # init options ainit = kwargs.pop('init', __DEF_INIT) proj = kwargs.pop('proj', __DEF_PROJ) maxIter = kwargs.pop('maxIter', __DEF_MAXITER) conv = kwargs.pop('conv', __DEF_CONV) lmbda = kwargs.pop('lmbda', __DEF_LMBDA) if not len(kwargs) == 0: raise ValueError( 'Unknown keywords (%s)' % (kwargs.keys()) ) sz = X[0].shape dtype = X[0].dtype n = sz[0] k = len(X) _log.debug('[Config] rank: %d | maxIter: %d | conv: %7.1e | lmbda: %7.1e' % (rank, maxIter, conv, lmbda)) _log.debug('[Config] dtype: %s' % dtype) # precompute norms of X normX = [norm(M)**2 for M in X] Xflat = [M.flatten() for M in X] sumNormX = sum(normX) # initialize A if ainit == 'random': A = array(rand(n, rank), dtype=dtype) elif ainit == 'nvecs': S = zeros((n, n), dtype=dtype) T = zeros((n, n), dtype=dtype) for i in range(k): T = X[i] S = S + T + T.T evals, A = eigh(S,eigvals=(n-rank,n-1)) else : raise 'Unknown init option ("%s")' % ainit # initialize R if proj: Q, A2 = qr(A) X2 = __projectSlices(X, Q) R = __updateR(X2, A2, lmbda) else : R = __updateR(X, A, lmbda) # compute factorization fit = fitchange = fitold = f = 0 exectimes = [] ARAt = zeros((n,n), dtype=dtype) for iter in xrange(maxIter): tic = time.clock() fitold = fit A = __updateA(X, A, R, lmbda) if proj: Q, A2 = qr(A) X2 = __projectSlices(X, Q) R = __updateR(X2, A2, lmbda) else : R = __updateR(X, A, lmbda) # compute fit value f = lmbda*(norm(A)**2) for i in range(k): ARAt = dot(A, dot(R[i], A.T)) f += normX[i] + norm(ARAt)**2 - 2*dot(Xflat[i], ARAt.flatten()) + lmbda*(R[i].flatten()**2).sum() f *= 0.5 fit = 1 - f / sumNormX fitchange = abs(fitold - fit) toc = time.clock() exectimes.append( toc - tic ) _log.debug('[%3d] fit: %.5f | delta: %7.1e | secs: %.5f' % (iter, fit, fitchange, exectimes[-1])) if iter &gt; 1 and fitchange &lt; conv: break return A, R, f, iter+1, array(exectimes) def __updateA(X, A, R, lmbda): n, rank = A.shape F = zeros((n, rank), dtype=X[0].dtype) E = zeros((rank, rank), dtype=X[0].dtype) AtA = dot(A.T,A) for i in range(len(X)): F += dot(X[i], dot(A, R[i].T)) + dot(X[i].T, dot(A, R[i])) E += dot(R[i], dot(AtA, R[i].T)) + dot(R[i].T, dot(AtA, R[i])) A = dot(F, inv(lmbda * eye(rank) + E)) return A def __updateR(X, A, lmbda): r = A.shape[1] R = [] At = A.T if lmbda == 0: ainv = dot(pinv(dot(At, A)), At) for i in range(len(X)): R.append( dot(ainv, dot(X[i], ainv.T)) ) else : AtA = dot(At, A) tmp = inv(kron(AtA, AtA) + lmbda * eye(r**2)) for i in range(len(X)): AtXA = dot(At, dot(X[i], A)) R.append( dot(AtXA.flatten(), tmp).reshape(r, r) ) return R def __projectSlices(X, Q): q = Q.shape[1] X2 = [] for i in range(len(X)): X2.append( dot(Q.T, dot(X[i], Q)) ) return X2 </code></pre> <p>It's boring to paste such a long code but there is no other way to figure out my problems. 
I'm sorry about this.</p> <p>I import this module and pass them arguments according to the author's <a href="http://www.cip.ifi.lmu.de/~nickel/doc/rescal/index.html" rel="nofollow">website</a>:</p> <pre><code>import pickle, sys from rescal import rescal rank = sys.argv[1] X = pickle.load('us-presidents.pickle') A, R, f, iter, exectimes = rescal(X, rank, lmbda=1.0) </code></pre> <p>The dataset us-presidents.rdf can be found <a href="http://www.cip.ifi.lmu.de/~nickel/" rel="nofollow">here</a>.</p> <p>My questions are:</p> <ol> <li>According to the code note, the tensor X is a list. I don't quite understand this, how do I relate a list to a tensor in Python? Can I understand tensor = list in Python?</li> <li>Should I convert RDF format to a triple(subject, predicate, object) format first? I'm not sure of the data structure of X. How do I assignment values to X by hand?</li> <li>Then, how to run it?</li> </ol> <p>I paste the author's code without his authorization, is it an act of infringement? if so, I am so sorry and I will delete it soon. </p> <p>The problems may be a little bored, but these are important to me. Any help would be greatly appreciated.</p> <p>[1] Maximilian Nickel, Volker Tresp, Hans-Peter Kriegel, A Three-Way Model for Collective Learning on Multi-Relational Data, in Proceedings of the 28th International Conference on Machine Learning, 2011 , Bellevue, WA, USA</p>
<p>To answer Q2: you need to transform the RDF and save it before you can load it from the file 'us-presidents.pickle'. The author of that code probably did that once because the Python native pickle format loads faster. As the pickle format includes the datatype of the data, it is possible that <code>X</code> is some numpy class instance, and you would need either an example pickle file as used by this code, or the code doing the pickle.dump, to figure out how to convert from RDF to this particular pickle file as <code>rescal</code> expects it.</p> <p>This might answer Q1: the tensor consists of a list of elements. From the code you can see that the <code>X</code> parameter to rescal has a length (<code>k = len(X)</code>) and can be indexed (<code>T = X[i]</code>). So its elements are used like a list's (even if it might be some other datatype that just behaves as such).</p> <p>As an aside: if you are not familiar with Python and are just interested in the result of the computation, you might get more help contacting the author of the software.</p>
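<p>For illustration, here is a minimal sketch of how one might build such a list of adjacency slices from (subject, predicate, object) triples and dump it to a pickle file. The exact layout <code>rescal</code> expects (dense numpy arrays vs. scipy sparse matrices) is an assumption here, and the triples are made up; verify against an example pickle file or the author's own conversion code:</p> <pre><code>import pickle
import numpy as np

# Hypothetical triples extracted from the RDF file
triples = [
    ('Lincoln', 'party', 'Republican'),
    ('Lincoln', 'vicePresident', 'Johnson'),
]

entities = sorted({t[0] for t in triples} | {t[2] for t in triples})
predicates = sorted({t[1] for t in triples})
e_idx = {e: i for i, e in enumerate(entities)}
p_idx = {p: i for i, p in enumerate(predicates)}

# One n-by-n slice per predicate: X[k][i, j] = 1 iff (entity_i, predicate_k, entity_j) holds
n = len(entities)
X = [np.zeros((n, n), dtype=np.float32) for _ in predicates]
for s, p, o in triples:
    X[p_idx[p]][e_idx[s], e_idx[o]] = 1.0

with open('us-presidents.pickle', 'wb') as fh:
    pickle.dump(X, fh)
</code></pre>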
python|numpy|scipy|rdf|factorization
1
377,523
11,254,248
Efficiently accumulating a collection of sparse scipy matrices
<p>I've got a collection of O(N) NxN <code>scipy.sparse.csr_matrix</code>, and each sparse matrix has on the order of N elements set. I want to add all these matrices together to get a regular NxN numpy array. (N is on the order of 1000). The arrangement of non-zero elements within the matrices is such that the resulting sum certainly isn't sparse (virtually no zero elements left in fact).</p> <p>At the moment I'm just doing </p> <pre><code>reduce(lambda x,y: x+y,[m.toarray() for m in my_sparse_matrices]) </code></pre> <p>which works but is a bit slow: of course the sheer amount of pointless processing of zeros which is going on there is absolutely horrific.</p> <p>Is there a better way ? There's nothing obvious to me in the <a href="http://docs.scipy.org/doc/scipy/reference/sparse.html" rel="noreferrer">docs</a>.</p> <p><strong>Update:</strong> as per user545424's suggestion, I tried the alternative scheme of summing the sparse matrices, and also summing sparse matrices onto a dense matrix. The code below shows all approaches to run in comparable time (Python 2.6.6 on amd64 Debian/Squeeze on a quad-core i7)</p> <pre><code>import numpy as np import numpy.random import scipy import scipy.sparse import time N=768 S=768 D=3 def mkrandomsparse(): m=np.zeros((S,S),dtype=np.float32) r=np.random.random_integers(0,S-1,D*S) c=np.random.random_integers(0,S-1,D*S) for e in zip(r,c): m[e[0],e[1]]=1.0 return scipy.sparse.csr_matrix(m) M=[mkrandomsparse() for i in xrange(N)] def plus_dense(): return reduce(lambda x,y: x+y,[m.toarray() for m in M]) def plus_sparse(): return reduce(lambda x,y: x+y,M).toarray() def sum_dense(): return sum([m.toarray() for m in M]) def sum_sparse(): return sum(M[1:],M[0]).toarray() def sum_combo(): # Sum the sparse matrices 'onto' a dense matrix? return sum(M,np.zeros((S,S),dtype=np.float32)) def benchmark(fn): t0=time.time() fn() t1=time.time() print "{0:16}: {1:.3f}s".format(fn.__name__,t1-t0) for i in xrange(4): benchmark(plus_dense) benchmark(plus_sparse) benchmark(sum_dense) benchmark(sum_sparse) benchmark(sum_combo) print </code></pre> <p>and logs out</p> <pre><code>plus_dense : 1.368s plus_sparse : 1.405s sum_dense : 1.368s sum_sparse : 1.406s sum_combo : 1.039s </code></pre> <p>although you can get one approach or the other to come out ahead by a factor of 2 or so by messing with N,S,D parameters... but nothing like the order of magnitude improvement you'd hope to see from considering the number of zero adds it should be possible to skip.</p>
<p>I think I've found a way to speed it up by a factor of ~10 if your matrices are very sparse.</p> <pre><code>In [1]: from scipy.sparse import csr_matrix In [2]: def sum_sparse(m): ...: x = np.zeros(m[0].shape) ...: for a in m: ...: # row index of every stored value, recovered from the CSR indptr ...: ri = np.repeat(np.arange(a.shape[0]),np.diff(a.indptr)) ...: # scatter-add only the nonzero entries into the dense result ...: x[ri,a.indices] += a.data ...: return x ...: In [6]: m = [np.zeros((100,100)) for i in range(1000)] In [7]: for x in m: ...: x.ravel()[np.random.randint(0,x.size,10)] = 1.0 ...: m = [csr_matrix(x) for x in m] In [17]: (sum(m[1:],m[0]).todense() == sum_sparse(m)).all() Out[17]: True In [18]: %timeit sum(m[1:],m[0]).todense() 10 loops, best of 3: 145 ms per loop In [19]: %timeit sum_sparse(m) 100 loops, best of 3: 18.5 ms per loop </code></pre>
python|optimization|numpy|scipy|sparse-matrix
4
377,524
11,297,030
Matplotlib - Stepped histogram with already binned data
<p>I am trying to get a histogram with already binned data. I have been trying to use <code>bar()</code> for this, but I can't seem to figure out how to make it a stepped histogram <a href="http://matplotlib.org/mpl_examples/pylab_examples/histogram_demo_extended_02.png" rel="noreferrer">like this one from the examples</a>, instead of a filled histogram.</p> <p><img src="https://i.stack.imgur.com/iX38Z.png" alt="enter image description here"></p>
<p>You could cheat, by offsetting your data and using <code>plot</code> instead:</p> <pre><code>from matplotlib import pyplot import numpy as np #sample data: x = np.arange(30) y = np.cumsum(np.arange(30)) #offset the x for horizontal, repeat the y for vertical #(the list() calls are needed on Python 3, where zip returns an iterator): x = np.ravel(list(zip(x, x+1))) y = np.ravel(list(zip(y, y))) pyplot.plot(x, y) pyplot.savefig('plt.png') </code></pre> <p>the plot:</p> <p><img src="https://i.stack.imgur.com/5stNC.png" alt="enter image description here"></p>
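<p>As a side note, newer matplotlib versions can draw the steps directly, which avoids the offset trick entirely. A small sketch (assuming matplotlib &gt;= 3.4 for <code>stairs</code>, which is designed for already-binned data):</p> <pre><code>import numpy as np
from matplotlib import pyplot as plt

values = np.cumsum(np.arange(30))  # sample bin counts
edges = np.arange(31)              # bin edges, one more than values

plt.stairs(values, edges)  # stepped outline, unfilled by default
# or with plain plot: plt.plot(edges[:-1], values, drawstyle='steps-post')
plt.savefig('plt_stairs.png')
</code></pre>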
python|numpy|matplotlib|scipy
8
377,525
11,373,192
Generating Discrete random variables with specified weights using SciPy or NumPy
<p>I am looking for a simple function that can generate an array of specified random values based on their corresponding (also specified) probabilities. I only need it to generate float values, but I don't see why it shouldn't be able to generate any scalar. I can think of many ways of building this from existing functions, but I think I probably just missed an obvious SciPy or NumPy function. </p> <p>E.g.:</p> <pre><code>&gt;&gt;&gt; values = [1.1, 2.2, 3.3] &gt;&gt;&gt; probabilities = [0.2, 0.5, 0.3] &gt;&gt;&gt; print some_function(values, probabilities, size=10) (2.2, 1.1, 3.3, 3.3, 2.2, 2.2, 1.1, 2.2, 3.3, 2.2) </code></pre> <p>Note: I found <strong><a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.html#scipy.stats.rv_discrete">scipy.stats.rv_discrete</a></strong> but I don't understand how it works. Specifically, I do not understand what this (below) means nor what it should do:</p> <pre><code>numargs = generic.numargs [ &lt;shape(s)&gt; ] = ['Replace with resonable value', ]*numargs </code></pre> <p>If rv_discrete is what I should be using, could you please provide me with a simple example and an explanation of the above "shape" statement?</p>
<p>Drawing from a discrete distribution is directly built into numpy. The function is called <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="noreferrer">random.choice</a> (difficult to find without any reference to discrete distributions in the numpy docs).</p> <pre><code>elements = [1.1, 2.2, 3.3] probabilities = [0.2, 0.5, 0.3] np.random.choice(elements, 10, p=probabilities) </code></pre>
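<p>On newer NumPy (1.17+) the recommended entry point is the <code>Generator</code> API; this is the same draw as above, shown here as a sketch:</p> <pre><code>import numpy as np

rng = np.random.default_rng(seed=0)  # seed only for reproducibility
elements = [1.1, 2.2, 3.3]
probabilities = [0.2, 0.5, 0.3]
print(rng.choice(elements, size=10, p=probabilities))
</code></pre>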
python|random|numpy|scipy
82
377,526
68,188,278
"torch.relu_(input) unknown parameter type" from pytorch
<p>I am trying to run this <a href="https://github.com/ultravideo/Stereo-3D-Pose-Estimation" rel="nofollow noreferrer">3D pose estimation repo</a> in Google Colab on a GPU, but after doing all of the steps and putting in my own left/right cam vids, I get this error in Colab:</p> <pre><code>infering thread started 1 1 : cannot connect to X server Exception in thread Thread-1: Traceback (most recent call last): File &quot;/usr/lib/python3.7/threading.py&quot;, line 926, in _bootstrap_inner self.run() File &quot;/usr/lib/python3.7/threading.py&quot;, line 870, in run self._target(*self._args, **self._kwargs) File &quot;/content/Stereo-3D-Pose-Estimation/poseinferscheduler.py&quot;, line 59, in infer_pose_loop l_pose_t = infer_fast(self.net, l_img, height, self.stride, self.upsample_ratio, self.cpu) File &quot;/content/Stereo-3D-Pose-Estimation/pose3dmodules.py&quot;, line 47, in infer_fast stages_output = net(tensor_img) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/content/Stereo-3D-Pose-Estimation/models/with_mobilenet.py&quot;, line 115, in forward backbone_features = self.model(x) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py&quot;, line 139, in forward input = module(input) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py&quot;, line 139, in forward input = module(input) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py&quot;, line 1051, in _call_impl return forward_call(*input, **kwargs) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/modules/activation.py&quot;, line 102, in forward return F.relu(input, inplace=self.inplace) File &quot;/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py&quot;, line 1296, in relu result = torch.relu_(input) RuntimeError: unknown parameter type </code></pre> <p>I am a bit confused as to why I am seeing it, I have already installed all necessary prerequisites; also can't interpret what it means either.</p>
<p>Since the traceback happens in the pytorch library, I checked the code there on the pytorch github.</p> <p>What the error means is that you are calling the in-place activation function torch.relu_ on some object called input. However, what is happening is that the type of input is not recognized by the torch backend, which is why it is a runtime error.</p> <p>Therefore, I would suggest printing out input and also running</p> <pre><code>type(input) </code></pre> <p>to find out what object input represents and what that variable is. As a further reference, this is the particular script that Pytorch runs in the backend that leads it to throw the unknown parameter type error. From a quick look, it seems to be a switch statement that confirms whether a value falls into a list of types. If it is not in the list of types, then it will run the default block, which throws the unknown parameter type error.</p> <p><a href="https://github.com/pytorch/pytorch/blob/aacc722aeca3de1aedd35adb41e6f8149bd656cd/torch/csrc/utils/python_arg_parser.cpp#L518-L541" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/blob/aacc722aeca3de1aedd35adb41e6f8149bd656cd/torch/csrc/utils/python_arg_parser.cpp#L518-L541</a></p> <h1>EDIT:</h1> <p>If type(input) returns a torch.Tensor then it is probably an issue with the version of Python you are using. I know you said you have the prerequisites, but I think it would be good to double-check that you have Python 3.6, or maybe (but less preferably) Python 3.5 or 3.7. These are the Python versions that work with the repo you just sent me.</p> <p>You can find the Python version in your Colab notebook by typing <code>!python --version</code> in one of the cells. Make sure that it returns a version supported by the software you are running. This error might come from the fact that instead of torch, Python itself is expressing this error in its backend.</p> <p>I found this Stack Overflow question useful, as it shows how some code was unable to recognize a built-in dictionary type in Python: <a href="https://stackoverflow.com/questions/53657686/typeerror-unknown-parameter-type-class-dict-values">&quot;TypeError: Unknown parameter type: &lt;class &#39;dict_values&#39;&gt;&quot;</a> The solution to this was to check Python versions.</p> <p>Sarthak</p>
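<p>As a concrete diagnostic sketch following the suggestion above, one could check the object right before the forward call in pose3dmodules.py; <code>tensor_img</code> is the variable from the traceback's <code>infer_fast</code> call, and the conversion branch is only an illustration of a likely fix:</p> <pre><code>import torch

# add just before stages_output = net(tensor_img)
print(type(tensor_img))  # expected: &lt;class 'torch.Tensor'&gt;
if not isinstance(tensor_img, torch.Tensor):
    # e.g. a numpy array sneaking in -- convert it explicitly
    tensor_img = torch.as_tensor(tensor_img, dtype=torch.float32)
print(tensor_img.dtype, tensor_img.shape)
</code></pre>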
python-3.x|pytorch|google-colaboratory|pose-estimation
1
377,527
68,295,285
Using values from a column for another column
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">County</th> <th style="text-align: center;">State</th> <th style="text-align: center;">County State</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Davis County</td> <td style="text-align: center;">NE</td> <td style="text-align: center;">Davis County, NE</td> </tr> <tr> <td style="text-align: left;">Ark County</td> <td style="text-align: center;">UT</td> <td style="text-align: center;">Ark County, UT</td> </tr> <tr> <td style="text-align: left;">Clay County Party</td> <td style="text-align: center;">WI</td> <td style="text-align: center;">Clay County Party, WI</td> </tr> </tbody> </table> </div> <p>I want to delete the word County and everything that proceeds it in the &quot;County&quot; and &quot;County State column&quot; and then add back the state in the &quot;County State&quot; column</p> <p>This is what I want</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">County</th> <th style="text-align: center;">State</th> <th style="text-align: center;">County State</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Davis</td> <td style="text-align: center;">NE</td> <td style="text-align: center;">Davis, NE</td> </tr> <tr> <td style="text-align: left;">Ark</td> <td style="text-align: center;">UT</td> <td style="text-align: center;">Ark, UT</td> </tr> <tr> <td style="text-align: left;">Clay</td> <td style="text-align: center;">WI</td> <td style="text-align: center;">Clay, WI</td> </tr> </tbody> </table> </div> <p>What I have tried so far:</p> <pre><code>def county(df): df['County'].replace([r' County.*'], '', regex = True, inplace = True) if df.State == 'NE': df['County, State'].replace([r' County.*'], ' ,NE', regex = True, inplace = True) elif df.State == 'UT': df['County, State'].replace([r' County.*'], ' ,UT', regex = True, inplace = True) elif df.State == 'WI': df['County, State'].replace([r' County.*'], ' ,Wi', regex = True, inplace = True) </code></pre>
<p>After the <code>County</code> column is cleaned, it seems like just string-concatenating the two columns would work:</p> <pre><code>def county(df): df['County'].replace([r'\s*?County.*'], '', regex=True, inplace=True) df['County State'] = df['County'] + ', ' + df['State'] return df </code></pre> <p><code>county(df)</code>:</p> <pre><code> County State County State 0 Davis NE Davis, NE 1 Ark UT Ark, UT 2 Clay WI Clay, WI </code></pre> <hr /> <p>Or match everything up to the comma for the <code>County State</code> column:</p> <pre><code>def county(df): df['County'].replace([r'\s*County.*'], '', regex=True, inplace=True) df['County State'].replace([r'\s*County[^,]*'], '', regex=True, inplace=True) return df </code></pre> <p><code>county(df)</code>:</p> <pre><code> County State County State 0 Davis NE Davis, NE 1 Ark UT Ark, UT 2 Clay WI Clay, WI </code></pre> <hr /> <p>Regex: <code>\s*County[^,]*</code></p> <ol> <li><code>\s</code> matches any whitespace character (equivalent to [\r\n\t\f\v ])</li> <li><code>*</code> matches the previous token between zero and unlimited times, as many times as possible, giving back as needed (greedy)</li> <li><code>County</code> matches the characters <code>County</code> literally (case sensitive)</li> <li>Match a single character not present in the list below <code>[^,]</code> <ul> <li><code>*</code> matches the previous token between zero and unlimited times, as many times as possible, giving back as needed (greedy)</li> <li><code>,</code> matches the character <code>,</code> literally (case sensitive)</li> </ul> </li> </ol>
python|pandas
0
377,528
68,060,861
How do I modify an item in a pandas dataframe using python?
<p>I would like to access an Excel spreadsheet and change values like AHU 01-01 to AHU-01-01. That is, I would like to take the &quot;AHU &quot; and change it to &quot;AHU-&quot; without losing the numbers that follow right after.</p> <p>Here is my current code:</p> <pre><code>import pandas as pd df1=pd.read_excel(&quot;MDCC AHU Schedule FOR PYTHON.xlsx&quot;) ahunameonschedule = df1.iloc[0] columns = pd.Index([ahunameonschedule]) </code></pre> <p>Here is the output from the code above:</p> <p>Index([[nan, 'DESIGNATION', nan, 'AHU 01-01', 'AHU 02-01', 'AHU 02-02', 'AHU 02-03', 'AHU 02-04', 'AHU 03-01', 'AHU 03-02', 'AHU 03-03', 'AHU 04-01', 'AHU 04-02', 'AHU 04-03', 'AHU 04-04', 'AHU 04-05', 'AHU 05-01', 'AHU 05-02', 'AHU 05-03', 'AHU 05-04', 'AHU 05-05', 'AHU 06-01', 'AHU 06-02', 'AHU 06-03', 'AHU 06-04', 'AHU 06-05', 'AHU 07-01', 'AHU 07-02', 'AHU 07-03', 'AHU 07-04', 'AHU 07-05', 'AHU 08-01', 'AHU 08-02', 'AHU 08-03', 'AHU 08-04', 'AHU 08-05', 'AHU 09-01', 'AHU 09-02', 'AHU 09-03', 'AHU 09-04', 'AHU 09-05', 'AHU 10-01', 'AHU 10-02', 'AHU 10-03', 'AHU 10-04', 'AHU 10-05']], dtype='object')</p> <p>Here is what I've tried:</p> <pre><code>df1.rename(index={'AHU':'AHU-'}, level = 0, inplace=True) df1 </code></pre> <p>Additionally, how could I add conditionals to handle values that don't have &quot;AHU &quot; in them?</p>
<pre><code>import pandas as pd # build a small example frame mydict = {'row_num': [1,2,3,4,5], 'col_to_change': ['AHU 03-01', 'AHU 03-02', 'AHU 03-03', 'AHU 04-01', 'AHU 01-01']} df = pd.DataFrame(mydict) # split each value on whitespace and re-join with '-', turning 'AHU 03-01' into 'AHU-03-01' df['changed_column'] = df[['col_to_change']].apply(lambda x: &quot;-&quot;.join(x[0].split()), axis=1) </code></pre>
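<p>The question also asks how to handle values that don't contain &quot;AHU &quot;. One way, shown as a sketch on the same hypothetical frame, is a regex anchored at the start of the string, so that non-matching values pass through unchanged:</p> <pre><code>import pandas as pd

df = pd.DataFrame({'col_to_change': ['AHU 03-01', 'AHU 03-02', 'Fan 01', 'AHU 01-01']})

# only rows that start with 'AHU ' are rewritten; 'Fan 01' is left untouched
df['changed_column'] = df['col_to_change'].str.replace(r'^AHU\s+', 'AHU-', regex=True)
print(df)
</code></pre>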
python|pandas|dataframe|indexing
0
377,529
68,246,198
How do I sum values when two conditions are met and then put the summed data in a new data frame?
<p>I have a data frame with dates, categories, and time durations. I want to sum the time durations if the entries have the same date and the same category.</p> <p>Input:</p> <pre><code>Date Duration Category 01/01/2021 0.1 Entertainment 01/01/2021 1.4 Working 01/01/2021 2.1 Entertainment 02/01/2021 7.9 Sleeping 02/01/2021 1.2 Working 02/01/2021 2.8 Working 04/01/2021 6.2 Sleeping </code></pre> <p>Output:</p> <pre><code>Date Entertainment Working Sleeping 01/01/2021 2.2 1.4 0 02/01/2021 0 4.0 7.9 03/01/2021 0 0 0 04/01/2021 0 0 6.2 </code></pre> <p>I have more categories in the real data, so please write it in a way that makes adding new categories easy. The code I have doesn't work at all, so please help me out. Thanks.</p>
<p>you can use <code>pivot_table()</code>:</p> <pre><code>out=(df.pivot_table('Duration','Date','Category',fill_value=0,aggfunc='sum') .rename_axis(columns=None) .reset_index()) </code></pre> <p><strong>OR</strong></p> <p>you can use <code>pd.crosstab()</code>:</p> <pre><code>out=(pd.crosstab(df['Date'],df['Category'],df['Duration'],aggfunc='sum') .fillna(0) .rename_axis(columns=None) .reset_index()) </code></pre> <p>output of <code>out</code>:</p> <pre><code> Date Entertainment Sleeping Working 0 01/01/2021 2.2 0.0 1.4 1 02/01/2021 0.0 7.9 4.0 2 04/01/2021 0.0 6.2 0.0 </code></pre>
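<p>Note that the desired output also contains an all-zero row for 03/01/2021, a date missing from the input. One way to get that is to reindex on a full date range after pivoting; a sketch (assuming the dates are day-first, as the sample suggests):</p> <pre><code>import pandas as pd

# after building `out` as above, fill calendar gaps with all-zero rows
out['Date'] = pd.to_datetime(out['Date'], format='%d/%m/%Y')
full_range = pd.date_range(out['Date'].min(), out['Date'].max(), freq='D')
out = (out.set_index('Date')
          .reindex(full_range, fill_value=0)
          .rename_axis('Date')
          .reset_index())
</code></pre>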
python|pandas|dataframe|sum
1
377,530
68,171,716
What is the input shape of the InputLayer in keras Tensorflow?
<p>I have this data</p> <pre><code>X_regression = tf.range(0, 1000, 5) y_regression = X + 100 X_reg_train, X_reg_test = X_regression[:150], X_regression[150:] y_reg_train, y_reg_test = y_regression[:150], y_regression[150:] </code></pre> <p>I inspect the data input data</p> <pre><code>X_reg_train[0], X_reg_train[0].shape, X_reg_train[0].ndim </code></pre> <p>and it returns:</p> <pre><code>(&lt;tf.Tensor: shape=(), dtype=int32, numpy=0&gt;, TensorShape([]), 0) </code></pre> <p>I build a model:</p> <pre><code># Set the random seed tf.random.set_seed(42) # Create the model model_reg = tf.keras.models.Sequential() # Add Input layer model_reg.add(tf.keras.layers.InputLayer(input_shape=[1])) # Add Hidden layers model_reg.add(tf.keras.layers.Dense(units=10, activation=tf.keras.activations.relu)) # Add last layer model_reg.add(tf.keras.layers.Dense(units=1)) # Compile the model model_reg.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mae, metrics=[tf.keras.metrics.mae]) # Fit the model model_reg.fit(X_reg_train, y_reg_train, epochs=10) </code></pre> <p>The model works.</p> <p>However, I am confused about <code>input_shape</code></p> <p>Why is it <code>[1]</code> in this situation? Why is it sometimes a tuple?</p> <p>Would appreciate an explanation of different formats of <code>input_shape</code> in different situations.</p>
<p><code>InputLayer</code> is actually just the same as specifying the parameter <code>input_shape</code> in a <code>Dense</code> layer. In the background, Keras actually uses <code>InputLayer</code> when you use <code>method 2</code>.</p> <pre><code># Method 1 model_reg.add(tf.keras.layers.InputLayer(input_shape=(1,))) model_reg.add(tf.keras.layers.Dense(units=10, activation=tf.keras.activations.relu)) # Method 2 model_reg.add(tf.keras.layers.Dense(units=10, input_shape=(1,), activation=tf.keras.activations.relu)) </code></pre> <p>The parameter <code>input_shape</code> is actually supposed to be a tuple; if you noticed, I set the <code>input_shape</code> in your example to <code>(1,)</code>, a tuple with a single element in it. As your data is 1D, you pass in a single element at a time, therefore the input shape is <code>(1,)</code>.</p> <p>If your input data were 2D, for example when trying to predict the price of a house based on multiple variables, you would have multiple rows and multiple columns of data. In this case, you pass in the size of the last dimension of <code>X_reg_train</code>, which is the number of inputs. If <code>X_reg_train</code> were <code>(1000,10)</code> then we use an <code>input_shape</code> of <code>(10,)</code>.</p> <pre><code>model_reg.add(tf.keras.layers.Dense(units=10, input_shape=(X_reg_train.shape[1],), activation=tf.keras.activations.relu)) </code></pre> <p>Ignoring the <code>batch_size</code> for a moment, with this we are actually just sending a single row of the data to predict a single house price. The <code>batch_size</code> is just here to chunk multiple rows of data together so that we do not have to load the entire dataset into memory, which is computationally expensive, so we send small chunks, with the default value being <code>32</code>. When running the training you would have noticed that under each epoch it says <code>5/5</code>, which is for the <code>5 batches</code> of data you have, since the training size is <code>150</code> and <code>150 / 32 = 5 (rounded up)</code>.</p> <p>For <code>3D input</code> with the <code>Dense</code> layer, it actually just gets flattened to a <code>2D input</code>, i.e. from <code>(batch_size, sequence_length, dim) -&gt; (batch_size * sequence_length, dim) -&gt; (batch_size, sequence_length, hidden_units)</code>, which is the same as using a <code>Conv1D</code> layer with a <code>kernel</code> of <code>1</code>. So I wouldn't even use the <code>Dense</code> layer in this case.</p>
python|tensorflow|deep-learning|tf.keras
0
377,531
68,340,223
How to batch an object detection dataset?
<p>I am working on implementing a face detection model on the WIDER FACE dataset. I learned it was built into <a href="https://www.tensorflow.org/datasets/catalog/wider_face" rel="nofollow noreferrer">Tensorflow datasets</a> and I am using it. However, I am facing an issue while batching the data. Since an image can have multiple faces, the number of bounding boxes output differs for each image. For example, an image with 2 faces will have 2 bounding boxes, whereas one with 4 will have 4, and so on.</p> <p>But the problem is that this unequal number of bounding boxes causes the Dataset object tensors to have different shapes. And in TensorFlow, as far as I know, we cannot batch tensors of unequal shapes (source - <a href="https://stackoverflow.com/questions/59630011/tensorflow-datasets-make-batches-with-different-shaped-data">Tensorflow Datasets: Make batches with different shaped data</a>). So I am unable to batch the dataset.</p> <p>So after loading the dataset and batching with the following code -</p> <pre><code>ds,info = tfds.load('wider_face', split='train', shuffle_files=True, with_info= True) ds1 = ds.batch(12) for step, (x,y,z) in enumerate(ds1) : print(step) break </code></pre> <p>I get this kind of error when it runs: <a href="https://i.stack.imgur.com/Sf4MY.png" rel="nofollow noreferrer">Link to Error Image</a></p> <p>In general, any help on how I can batch TensorFlow object detection datasets would be very helpful.</p>
<p>It might be a bit late, but I thought I should post this anyway. The padded_batch feature ought to do the trick here: it works around the issue by matching dimensions via zero-padding.</p> <pre><code>ds,info = tfds.load('wider_face', split='train', shuffle_files=True, with_info= True) ds1 = ds.padded_batch(12) for step, (x,y,z) in enumerate(ds1): print(step) break </code></pre> <p>Another solution would be to not use batch at all and instead process with custom buffers in for loops, but that kind of defeats the purpose. Just for posterity I'll add the sample code here as an example of a simple workaround.</p> <pre><code>ds,info = tfds.load('wider_face', split='train', shuffle_files=True, with_info= True) batch_size = 12 image_annotations_pair = [(x['image'], x['faces']['bbox']) for n, x in enumerate(ds) if n &lt; batch_size] </code></pre> <p>Then use a train_step modified for this.</p> <p>For details one may refer to - <a href="https://www.kite.com/python/docs/tensorflow.contrib.autograph.operators.control_flow.dataset_ops.DatasetV2.padded_batch" rel="nofollow noreferrer">https://www.kite.com/python/docs/tensorflow.contrib.autograph.operators.control_flow.dataset_ops.DatasetV2.padded_batch</a></p>
tensorflow|deep-learning|computer-vision|conv-neural-network|object-detection
1
377,532
68,046,110
simple question about LD_LIBRARY_PATH and new entries
<p>In Ubuntu 16, when setting an environment variable like <a href="https://stackoverflow.com/questions/13428910/how-to-set-the-environmental-variable-ld-library-path-in-linux">this</a>, will what we had in LD_LIBRARY_PATH before be erased? I already have ROS and want to add the <a href="https://github.com/GarrickLin/numpy-opencv-converter" rel="nofollow noreferrer">opencv&lt;-&gt;numpy converter</a>. I don't want to lose what I already have in LD_LIBRARY_PATH, though, and everywhere I search I can't find whether this edits what we have in the environment variable or just adds new entries. I don't want to lose anything.</p>
<p>Read about how ${LD_LIBRARY_PATH} is used like this:</p> <pre><code>$ man ld.so </code></pre> <p>You already know that ${PATH} is a list of directories that are polled when just an application name is given:</p> <pre><code>$ foo </code></pre> <p>and ${LD_LIBRARY_PATH} works exactly the same way for the ld.so(1) dynamic linker.</p> <p>The tricky bit is that ld.so(1) also uses a cache of the shared libraries found in the standard locations. This speeds up program start-up, just as any cache is supposed to do. Lots of examples show setting just a single directory to pick up an application-specific library:</p> <pre><code>$ export LD_LIBRARY_PATH=/my/wonderful/library $ foo </code></pre> <p>but that depends on the cache and the default locations already providing every other library the program needs, because the extra search path now names only that &quot;/my/wonderful/library/&quot; location.</p> <p>A better solution is to prepend the new directory to whatever is already set:</p> <pre><code>$ export LD_LIBRARY_PATH=/my/wonderful/library:${LD_LIBRARY_PATH} $ foo </code></pre> <p>which will keep working even if the ld.so(1) cache gets deleted out from under you and ldconfig(1) has to rebuild that cache.</p> <p>Even so, exporting the modified ${LD_LIBRARY_PATH} permanently is a bad, bad idea:</p> <pre><code>$ export LD_LIBRARY_PATH=/my/wonderful/library:${LD_LIBRARY_PATH} $ foo </code></pre> <p>because the custom ${LD_LIBRARY_PATH} will be used for the &quot;foo&quot; application plus <em>any child program in this process tree</em>. Instead, limit the setting to the one command rather than everything run from this shell afterwards. Keep control of where the path is set like this:</p> <pre><code>$ LD_LIBRARY_PATH=/my/wonderful/library:${LD_LIBRARY_PATH} foo </code></pre> <p>which limits visibility of the modified ${LD_LIBRARY_PATH} to only &quot;foo&quot; and any process it spawns.</p>
linux|python-2.7|numpy|environment-variables
1
377,533
68,128,827
How to know if a record has updated its date
<p>I want to know if a record has updated its date in a pandas DataFrame. The dataframe is made up of several columns; for each value of A we have several values of B with start dates and end dates. Thanks to the timestamp we can know whether there is a new record or a previous one has been modified.</p> <p>What I want to know is how to check whether a new record has a date range close to other records in its group (for example, the B1 group). If it has a similar date range, delete the previous record and keep only the updated new one; but if it doesn't share a common range, treat it as a new record.</p> <p>For example,</p> <p>Input Dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> <th>Start</th> <th>End</th> <th>Timestamp</th> </tr> </thead> <tbody> <tr> <td>A1</td> <td>B1</td> <td>2021-05-10 00:00:00</td> <td>2021-05-27 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> <tr> <td>A1</td> <td>B1</td> <td>2021-05-12 00:00:00</td> <td>2021-05-30 00:00:00</td> <td>2021-04-15 00:00:00</td> </tr> <tr> <td>A1</td> <td>B1</td> <td>2021-05-10 00:00:00</td> <td>2021-05-12 00:00:00</td> <td>2021-03-15 00:00:00</td> </tr> <tr> <td>A1</td> <td>B2</td> <td>2021-06-02 00:00:00</td> <td>2021-06-04 00:00:00</td> <td>2021-02-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B3</td> <td>2021-01-01 00:00:00</td> <td>2022-01-01 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B3</td> <td>2021-07-15 00:00:00</td> <td>2021-08-15 00:00:00</td> <td>2021-04-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B4</td> <td>2021-05-30 00:00:00</td> <td>2021-06-15 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B4</td> <td>2021-06-02 00:00:00</td> <td>2021-06-17 00:00:00</td> <td>2021-04-15 00:00:00</td> </tr> </tbody> </table> </div> <p>Expected Output:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>A</th> <th>B</th> <th>Start</th> <th>End</th> <th>Timestamp</th> </tr> </thead> <tbody> <tr> <td>A1</td> <td>B1</td> <td>2021-05-10 00:00:00</td> <td>2021-05-27 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> <tr> <td>A1</td> <td>B2</td> <td>2021-06-02 00:00:00</td> <td>2021-06-04 00:00:00</td> <td>2021-02-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B3</td> <td>2021-01-01 00:00:00</td> <td>2022-01-01 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B3</td> <td>2021-07-15 00:00:00</td> <td>2021-08-15 00:00:00</td> <td>2021-04-15 00:00:00</td> </tr> <tr> <td>A2</td> <td>B4</td> <td>2021-05-30 00:00:00</td> <td>2021-06-15 00:00:00</td> <td>2021-05-15 00:00:00</td> </tr> </tbody> </table> </div> <p>Thank you!</p>
<p>I'm not sure what you mean exactly by a 'close' date range, so this answer won't exactly match the output you listed in the question.</p> <p>For demo purposes I've made a csv file called <code>data.csv</code> with the data in your question</p> <pre><code>A,B,Start,End,Timestamp A1,B1,2021-05-10 00:00:00,2021-05-27 00:00:00,2021-05-15 00:00:00 A1,B1,2021-05-12 00:00:00,2021-05-30 00:00:00,2021-04-15 00:00:00 A1,B1,2021-05-10 00:00:00,2021-05-12 00:00:00,2021-03-15 00:00:00 A1,B2,2021-06-02 00:00:00,2021-06-04 00:00:00,2021-02-15 00:00:00 A2,B3,2021-01-01 00:00:00,2022-01-01 00:00:00,2021-05-15 00:00:00 A2,B3,2021-07-15 00:00:00,2021-08-15 00:00:00,2021-04-15 00:00:00 A2,B4,2021-05-30 00:00:00,2021-06-15 00:00:00,2021-05-15 00:00:00 A2,B4,2021-06-02 00:00:00,2021-06-17 00:00:00,2021-04-15 00:00:00 </code></pre> <p>An approach could be to compare the time differences for every group in the <code>B</code> column. We'll start with a group you mentioned in your question, i.e. where the <code>B</code> column value equals <code>&quot;B1&quot;</code>:</p> <pre><code>import pandas as pd df = pd.read_csv(&quot;data.csv&quot;) dff = df[df[&quot;B&quot;] == &quot;B1&quot;] &gt;&gt;&gt; dff A B ... End Timestamp 0 A1 B1 ... 2021-05-27 00:00:00 2021-05-15 00:00:00 1 A1 B1 ... 2021-05-30 00:00:00 2021-04-15 00:00:00 2 A1 B1 ... 2021-05-12 00:00:00 2021-03-15 00:00:00 # Difference in number of days between start and end date &gt;&gt;&gt; (pd.to_datetime(dff.End) - pd.to_datetime(dff.Start)).dt.days 0 17 1 18 2 2 dtype: int64 # How does each time difference compare to the time difference in the previous row &gt;&gt;&gt; (pd.to_datetime(dff.End) - pd.to_datetime(dff.Start)).dt.days.diff().fillna(0) 0 0.0 1 1.0 2 -16.0 dtype: float64 # Filter where the number of days difference from the previous row is less than 7 &gt;&gt;&gt; abs((pd.to_datetime(dff.End) - pd.to_datetime(dff.Start)).dt.days.diff().fillna(0)) &lt; 7 0 True 1 True 2 False dtype: bool # Filter dff based on earlier condition &gt;&gt;&gt; dff[abs((pd.to_datetime(dff.End) - pd.to_datetime(dff.Start)).dt.days.diff().fillna(0)) &lt; 7] A B Start End Timestamp 0 A1 B1 2021-05-10 00:00:00 2021-05-27 00:00:00 2021-05-15 00:00:00 1 A1 B1 2021-05-12 00:00:00 2021-05-30 00:00:00 2021-04-15 00:00:00 </code></pre> <p>Above we've only compared one group of the <code>B</code> column. To do what we've done above for all groups, we could use a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> on the <code>B</code> column. Then we could iterate through each group and filter it using the filter mentioned earlier. After filtering all groups, the filtered groups are collected in a list and concatenated together.</p> <pre><code>df = pd.concat([ group[ abs( (pd.to_datetime(group.End) - pd.to_datetime(group.Start)) .dt.days.diff() .fillna(0) ) &lt; 7 ] for name, group in df.groupby(&quot;B&quot;) ]) &gt;&gt;&gt; df A B Start End Timestamp 0 A1 B1 2021-05-10 00:00:00 2021-05-27 00:00:00 2021-05-15 00:00:00 1 A1 B1 2021-05-12 00:00:00 2021-05-30 00:00:00 2021-04-15 00:00:00 3 A1 B2 2021-06-02 00:00:00 2021-06-04 00:00:00 2021-02-15 00:00:00 4 A2 B3 2021-01-01 00:00:00 2022-01-01 00:00:00 2021-05-15 00:00:00 6 A2 B4 2021-05-30 00:00:00 2021-06-15 00:00:00 2021-05-15 00:00:00 7 A2 B4 2021-06-02 00:00:00 2021-06-17 00:00:00 2021-04-15 00:00:00 </code></pre> <p>Adjust the degree of closeness according to your needs.
I've used days here as a measurement, but you could use a different one. You could use seconds, microseconds, nanoseconds, etc... Look through the <a href="https://pandas.pydata.org/docs/reference/series.html" rel="nofollow noreferrer"><code>Series</code> documentation</a> for more examples.</p>
python|pandas|dataframe|date|datetime
2
377,534
68,403,233
Adjust figure in Yellowbrick model - python
<p>I am trying to adjust the axes limits on a Yellowbrick figure. However, I can't seem to change them. I can change the axes labels and titles, but not the limits. It works if I don't render the figure with <code>visualizer.show()</code>, but then I lose the labels, titles, legend, etc.</p> <pre><code>from sklearn.linear_model import RidgeClassifier from sklearn.model_selection import train_test_split from sklearn.preprocessing import OrdinalEncoder, LabelEncoder from yellowbrick.classifier import ROCAUC from yellowbrick.datasets import load_game import matplotlib.pyplot as plt X, y = load_game() X = OrdinalEncoder().fit_transform(X) y = LabelEncoder().fit_transform(y) fig, ax = plt.subplots() ax.set_xlim([-0.05, 1.0]) ax.set_ylim([0.0, 1.05]) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) fig, ax = plt.subplots(figsize = (10,6)) model = RidgeClassifier() visualizer = ROCAUC(model, classes=[&quot;win&quot;, &quot;loss&quot;, &quot;draw&quot;], ax = ax) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.show() </code></pre>
<p>Instead of calling the <code>visualizer.show()</code> method, you can try calling the <code>visualizer.finalize()</code> method and then accessing the underlying matplotlib axes to change the limits. You are also overwriting <code>ax</code> which wasn't doing you any favours either.</p> <p>Here is the full code example:</p> <pre class="lang-py prettyprint-override"><code>from sklearn.linear_model import RidgeClassifier from sklearn.model_selection import train_test_split from sklearn.preprocessing import OrdinalEncoder, LabelEncoder from yellowbrick.classifier import ROCAUC from yellowbrick.datasets import load_game import matplotlib.pyplot as plt X, y = load_game() X = OrdinalEncoder().fit_transform(X) y = LabelEncoder().fit_transform(y) X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42) fig, ax = plt.subplots(figsize = (10,6)) model = RidgeClassifier() visualizer = ROCAUC(model, classes=[&quot;win&quot;, &quot;loss&quot;, &quot;draw&quot;], ax=ax) visualizer.fit(X_train, y_train) visualizer.score(X_test, y_test) visualizer.finalize() ax.set_xlim([-0.05, 1.0]) ax.set_ylim([0.0, 1.05]) </code></pre>
python|pandas|yellowbrick
1
377,535
68,285,099
How to avoid overfitting in train data?
<p>I have training data of 700 images for each gesture (5 gestures),<a href="https://i.stack.imgur.com/l4pWq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l4pWq.png" alt="train image" /></a> validation data of 200 images, <a href="https://i.stack.imgur.com/5G60l.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5G60l.jpg" alt="validation image" /></a> and test data of 150 images. <a href="https://i.stack.imgur.com/dazEs.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dazEs.jpg" alt="test image" /></a> My model is:</p> <pre><code>def get_model(): &quot;&quot;&quot; Returns a compiled convolutional neural network model. Assume that the `input_shape` of the first layer is `(IMG_WIDTH, IMG_HEIGHT, 3)`. The output layer should have `NUM_CATEGORIES` units, one for each category. &quot;&quot;&quot; # Create a convolutional neural network model = tf.keras.models.Sequential( [ # Convolutional layer. Learn 32 filters using a 3x3 kernel tf.keras.layers.Conv2D( 32, (5, 5), activation='relu', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3) ), # Max-pooling layer, using 2x2 pool size tf.keras.layers.MaxPool2D(pool_size=(2,2)), tf.keras.layers.Conv2D( 64, (3, 3), activation='relu', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3) ), # Max-pooling layer, using 2x2 pool size tf.keras.layers.MaxPool2D(pool_size=(2,2)), tf.keras.layers.Conv2D( 128, (3, 3), activation='relu', input_shape=((IMG_WIDTH), (IMG_HEIGHT), 3) ), tf.keras.layers.MaxPool2D(pool_size=(2,2)), tf.keras.layers.Conv2D( 256, (3, 3), activation='relu', input_shape=((IMG_WIDTH), (IMG_HEIGHT), 3) ), tf.keras.layers.MaxPool2D(pool_size=(2,2)), tf.keras.layers.Flatten(), # Add a hidden layer with dropout tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.3), # Add an output layer with output units for all 6 gestures tf.keras.layers.Dense(NUM_CATEGORIES, activation='softmax') ]) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) model.compile( optimizer=optimizer , loss=&quot;categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;] ) return model </code></pre> <p>Model fitting part:</p> <pre><code>y_test = tf.keras.utils.to_categorical(labels_test) y_train = tf.keras.utils.to_categorical(labels_train) y_valid = tf.keras.utils.to_categorical(valid_label) x_train = np.array(images_train)/255 x_test = np.array(images_test)/255 x_valid = np.array(valid_image)/255 # Get a compiled neural network fitting_time = datetime.now() model = get_model() # Fit model on training data model.fit(x_train, y_train, epochs=EPOCHS, validation_data = (x_valid, y_valid)) # Evaluate neural network performance model.evaluate(x_test, y_test, verbose=2) </code></pre> <p>I have changed the learning rate many times, but it doesn't work; the model overfits the training data. <a href="https://i.stack.imgur.com/sQfmF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sQfmF.png" alt="result" /></a></p> <p>What can I do to avoid overfitting, and why is the test accuracy so low?</p>
<p>This situation may be occurring because you are setting the input_shape parameter in all convolutional layers. You should define it just in the first one.</p> <p>Your code should look like:</p> <pre><code>def get_model(): &quot;&quot;&quot; Returns a compiled convolutional neural network model. Assume that the `input_shape` of the first layer is `(IMG_WIDTH, IMG_HEIGHT, 3)`. The output layer should have `NUM_CATEGORIES` units, one for each category. &quot;&quot;&quot; # Create a convolutional neural network model = tf.keras.models.Sequential( [ # Convolutional layer. Learn 32 filters using a 3x3 kernel tf.keras.layers.Conv2D( 32, (5, 5), activation='relu', input_shape=(IMG_WIDTH, IMG_HEIGHT, 3) ), # Max-pooling layer, using 2x2 pool size tf.keras.layers.MaxPool2D(pool_size=(2, 2)), tf.keras.layers.Conv2D( 64, (3, 3), activation='relu' ), # Max-pooling layer, using 2x2 pool size tf.keras.layers.MaxPool2D(pool_size=(2, 2)), tf.keras.layers.Conv2D( 128, (3, 3), activation='relu' ), tf.keras.layers.MaxPool2D(pool_size=(2, 2)), tf.keras.layers.Conv2D( 256, (3, 3), activation='relu' ), tf.keras.layers.MaxPool2D(pool_size=(2, 2)), tf.keras.layers.Flatten(), # Add a hidden layer with dropout tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dropout(0.3), # Add an output layer with output units for all 6 gestures tf.keras.layers.Dense(NUM_CATEGORIES, activation='softmax') ]) optimizer = tf.keras.optimizers.Adam(learning_rate=0.001) model.compile( optimizer=optimizer , loss=&quot;categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;] ) return model </code></pre> <p>You should also check that the training and validation sets have the same distribution.</p>
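<p>As a quick way to compare the label distributions mentioned above, one could count the classes in each split. This sketch reuses the asker's variable names (<code>labels_train</code>, <code>valid_label</code>, <code>labels_test</code>) and assumes they are plain integer label arrays:</p> <pre><code>import numpy as np

# class counts per split; the proportions should be roughly similar
print(np.unique(labels_train, return_counts=True))
print(np.unique(valid_label, return_counts=True))
print(np.unique(labels_test, return_counts=True))
</code></pre>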
python|conv-neural-network|tensorflow2.0
-1
377,536
68,222,820
Local Gradient Aggregation for Horovod using Tensorflow 1.X
<p>I am trying to use Horovod for distributing training GPU on different servers. Following the advice <a href="https://www.determined.ai/blog/optimizing-horovod" rel="nofollow noreferrer">Here</a>.</p> <p>I wanted to implement local gradient aggregation. In the explanation the modification looks easy <code>optimizer = hvd.DistributedOptimizer(opt, backward_passes_per_step=4)</code>.<br /> But trying to use it in my example model results in the following error.</p> <pre><code>tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found. [1,4]&lt;stderr&gt;: (0) Failed precondition: Attempting to use uninitialized value aggregation_variables_4/aggregation_counter [1,4]&lt;stderr&gt;: [[node aggregation_variables_4/aggregation_counter/read </code></pre> <p>I am using the native TensorFlow 1.15 not keras or latest tensorflow version.</p> <p>Is there a working example for this? or someone know how to implement it?</p>
<p>I have solved the problem. As indicated in the error message, the <code>aggregation_counter</code> variable is not initialized. I was using <code>sess.run(tf.global_variables_initializer())</code>; to solve the problem I added <code>sess.run(tf.local_variables_initializer())</code> as well. Doing this did the trick. I am not yet sure why the global variable initializer failed to initialize the <code>aggregation_counter</code> variable.</p>
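<p>For reference, a minimal sketch of the initialization order that worked, in native TensorFlow 1.x with Horovod; the optimizer choice and <code>loss</code> are illustrative placeholders, not part of the original setup:</p> <pre><code>import tensorflow as tf
import horovod.tensorflow as hvd

hvd.init()
opt = tf.train.AdamOptimizer(0.001)
# local gradient aggregation: accumulate gradients locally over 4 steps
opt = hvd.DistributedOptimizer(opt, backward_passes_per_step=4)
train_op = opt.minimize(loss)  # `loss` is assumed to be defined elsewhere

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())  # initializes the aggregation counters
    sess.run(hvd.broadcast_global_variables(0))
    sess.run(train_op)
</code></pre>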
python|tensorflow|tensorflow1.15|horovod
0
377,537
68,279,224
Why does running TorchScript model inference on iOS result in a threading error?
<p>I have been trying to integrate a PyTorch model developed in Python into iOS. The example I have looked at is from this <a href="https://github.com/pytorch/ios-demo-app/blob/master/D2Go" rel="nofollow noreferrer">github repo</a>.</p> <p>I used the same d2go model in my own application. One thing I noticed is that if the model inference code isn't wrapped in the global DispatchQueue as shown below</p> <pre><code>DispatchQueue.global().async { guard let outputs = self.inferencer.module.detect(image: &amp;pixelBuffer) else { return } </code></pre> <p>I get an error like <code>Thread 1: EXC_BAD_ACCESS (code=1, address=0x7ffeeb4e0000)</code>, or, if my model takes too long to run inference even though it is wrapped in the DispatchQueue code above, I get an error like <code>Thread 4: EXC_BAD_ACCESS (code=1, address=0x7ff159bed010)</code>.</p> <p>I'm not sure how threading works in such scenarios. I run the code when a button is pressed in the new SwiftUI framework.</p> <p>Any intuition on why this might happen? I have tried the above on simulators.</p>
<p>You should probably declare &quot;pixelBuffer&quot; in the same scope (inside the dispatch block).</p>
ios|multithreading|pytorch|dispatch-queue|torchscript
0
377,538
68,229,187
Keras model concat: Attribute and Value error
<p>This is a keras model I have made based on the paper Liu, Gibson, et al 2017 (<a href="https://arxiv.org/abs/1708.09022" rel="nofollow noreferrer">https://arxiv.org/abs/1708.09022</a>). It can be seen in fig1.</p> <p>I have 3 questions-</p> <ol> <li>I am not sure if I am correctly using concatenate as per the paper.</li> <li>I am getting AttributeError: 'KerasTensor' object has no attribute 'add' on model4.add flatten. This error didn't show up earlier</li> <li>Earlier, the only error was ValueError: A <code>Concatenate</code> layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 310, 1, 16), (None, 310, 1, 32), (None, 310, 1, 64)], which I am also not sure how to deal with.</li> </ol> <pre><code>model1= Sequential() model2= Sequential() model3= Sequential() model4= Sequential() input_sh = (619,2,1) model1.add(Convolution1D(filters=16, kernel_size=21, padding='same', activation='LeakyReLU', input_shape=input_sh)) model1.add(MaxPooling2D(pool_size=(2,2), padding='same')) model1.add(BatchNormalization()) model1.summary() model2.add(Convolution1D(filters=32, kernel_size=11, padding='same', activation='LeakyReLU', input_shape= input_sh)) model2.add(MaxPooling2D(pool_size=(2,2), padding='same')) model2.add(BatchNormalization()) model2.summary() model3.add(Convolution1D(filters=64, kernel_size=5, padding='same', activation='LeakyReLU', input_shape= input_sh)) model3.add(MaxPooling2D(pool_size=(2,2), padding='same')) model3.add(BatchNormalization()) model3.summary() model4 = concatenate([model1.output, model2.output, model3.output], axis= -1) model4.add(Flatten()) # Line with error model4.add(Dense(2048, activation='tanh')) model4.add(Dropout(.5)) model4.add(Dense(len(dic), activation=&quot;softmax&quot;)) #len(dic) = 19 model4.summary() </code></pre> <p>The output is as follows-</p> <pre><code>Model: &quot;sequential_59&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv1d_45 (Conv1D) (None, 619, 2, 16) 352 _________________________________________________________________ max_pooling2d_45 (MaxPooling (None, 310, 1, 16) 0 _________________________________________________________________ batch_normalization_45 (Batc (None, 310, 1, 16) 64 ================================================================= Total params: 416 Trainable params: 384 Non-trainable params: 32 _________________________________________________________________ Model: &quot;sequential_60&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv1d_46 (Conv1D) (None, 619, 2, 32) 384 _________________________________________________________________ max_pooling2d_46 (MaxPooling (None, 310, 1, 32) 0 _________________________________________________________________ batch_normalization_46 (Batc (None, 310, 1, 32) 128 ================================================================= Total params: 512 Trainable params: 448 Non-trainable params: 64 _________________________________________________________________ Model: &quot;sequential_61&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv1d_47 (Conv1D) (None, 619, 2, 64) 384 _________________________________________________________________ max_pooling2d_47 (MaxPooling (None, 
310, 1, 64) 0 _________________________________________________________________ batch_normalization_47 (Batc (None, 310, 1, 64) 256 ================================================================= Total params: 640 Trainable params: 512 Non-trainable params: 128 _________________________________________________________________ --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-25-bf7ad914aa4e&gt; in &lt;module&gt;() 44 model4 = concatenate([model1.output, model2.output, model3.output], axis= -1) 45 ---&gt; 46 model4.add(Flatten()) 47 model4.add(Dense(2048, activation='tanh')) 48 model4.add(Dropout(.5)) AttributeError: 'KerasTensor' object has no attribute 'add' </code></pre>
<p>You can use the <code>Functional()</code> API in order to solve your problem (I haven't read the paper, but here is how you can combine models and get a final output).</p> <p>I used 'relu' activation for simplicity purposes (ensure you use <code>keras</code> inside <code>tensorflow</code>)</p> <p>Here is the code that should work:</p> <pre><code>import tensorflow as tf from tensorflow.keras import * from tensorflow.keras.layers import * model1= Sequential() model2= Sequential() model3= Sequential() input_sh = (619,2,1) model1.add(Convolution1D(filters=16, kernel_size=21, padding='same', activation='relu', input_shape=input_sh)) model1.add(MaxPooling2D(pool_size=(2,2), padding='same')) model1.add(BatchNormalization()) model1.summary() model2.add(Convolution1D(filters=32, kernel_size=11, padding='same', activation='relu', input_shape= input_sh)) model2.add(MaxPooling2D(pool_size=(2,2), padding='same')) model2.add(BatchNormalization()) model2.summary() model3.add(Convolution1D(filters=64, kernel_size=5, padding='same', activation='relu', input_shape= input_sh)) model3.add(MaxPooling2D(pool_size=(2,2), padding='same')) model3.add(BatchNormalization()) model3.summary() concatenated = concatenate([model1.output, model2.output, model3.output], axis=-1) x = Dense(64, activation='relu')(concatenated) x = Flatten()(x) x = Dropout(.5)(x) x = Dense(19, activation=&quot;softmax&quot;)(x) final_model = Model(inputs=[model1.input,model2.input,model3.input],outputs=x) final_model.summary() Model: &quot;functional_3&quot; __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== conv1d_15_input (InputLayer) [(None, 619, 2, 1)] 0 __________________________________________________________________________________________________ conv1d_16_input (InputLayer) [(None, 619, 2, 1)] 0 __________________________________________________________________________________________________ conv1d_17_input (InputLayer) [(None, 619, 2, 1)] 0 __________________________________________________________________________________________________ conv1d_15 (Conv1D) (None, 619, 2, 16) 352 conv1d_15_input[0][0] __________________________________________________________________________________________________ conv1d_16 (Conv1D) (None, 619, 2, 32) 384 conv1d_16_input[0][0] __________________________________________________________________________________________________ conv1d_17 (Conv1D) (None, 619, 2, 64) 384 conv1d_17_input[0][0] __________________________________________________________________________________________________ max_pooling2d_15 (MaxPooling2D) (None, 310, 1, 16) 0 conv1d_15[0][0] __________________________________________________________________________________________________ max_pooling2d_16 (MaxPooling2D) (None, 310, 1, 32) 0 conv1d_16[0][0] __________________________________________________________________________________________________ max_pooling2d_17 (MaxPooling2D) (None, 310, 1, 64) 0 conv1d_17[0][0] __________________________________________________________________________________________________ batch_normalization_15 (BatchNo (None, 310, 1, 16) 64 max_pooling2d_15[0][0] __________________________________________________________________________________________________ batch_normalization_16 (BatchNo (None, 310, 1, 32) 128 max_pooling2d_16[0][0] 
__________________________________________________________________________________________________ batch_normalization_17 (BatchNo (None, 310, 1, 64) 256 max_pooling2d_17[0][0] __________________________________________________________________________________________________ concatenate_5 (Concatenate) (None, 310, 1, 112) 0 batch_normalization_15[0][0] batch_normalization_16[0][0] batch_normalization_17[0][0] __________________________________________________________________________________________________ dense_5 (Dense) (None, 310, 1, 64) 7232 concatenate_5[0][0] __________________________________________________________________________________________________ flatten_3 (Flatten) (None, 19840) 0 dense_5[0][0] __________________________________________________________________________________________________ dropout_3 (Dropout) (None, 19840) 0 flatten_3[0][0] __________________________________________________________________________________________________ dense_6 (Dense) (None, 19) 376979 dropout_3[0][0] ================================================================================================== Total params: 385,779 Trainable params: 385,555 Non-trainable params: 224 </code></pre>
python|tensorflow|keras|deep-learning|concatenation
2
377,539
68,090,137
How can I subset a data frame for unique rows using repeating values from a column in another data frame in python?
<p>I have 2 data frames. I want to subset df_1 based on df_2 so that the rows in the resulting data frame correspond to the rows in df_2. Here are two example data frames:</p> <pre><code>df_1 = pd.DataFrame({ &quot;ID&quot;: [&quot;Lemon&quot;,&quot;Banana&quot;,&quot;Apple&quot;,&quot;Cherry&quot;,&quot;Tomato&quot;,&quot;Blueberry&quot;,&quot;Avocado&quot;,&quot;Lime&quot;], &quot;Color&quot;: [&quot;Yellow&quot;,&quot;Yellow&quot;,&quot;Red&quot;,&quot;Red&quot;,&quot;Red&quot;,&quot;Blue&quot;,&quot;Green&quot;,&quot;Green&quot;]}) df_2 = pd.DataFrame({&quot;Color&quot;: [&quot;Red&quot;,&quot;Blue&quot;,&quot;Yellow&quot;,&quot;Green&quot;,&quot;Red&quot;,&quot;Yellow&quot;]}) </code></pre> <p>My desired output is df_3, where the &quot;Color&quot; column is the same as in df_2:</p> <pre><code>df_3 = pd.DataFrame({ &quot;ID&quot;: [&quot;Apple&quot;,&quot;Blueberry&quot;,&quot;Lemon&quot;,&quot;Avocado&quot;,&quot;Cherry&quot;,&quot;Banana&quot;], &quot;Color&quot;: [&quot;Red&quot;,&quot;Blue&quot;,&quot;Yellow&quot;,&quot;Green&quot;,&quot;Red&quot;,&quot;Yellow&quot;]}) </code></pre> <p>When I merge df_1 and df_2, I get duplicated rows because most of the rows in df_2 have multiple matches in df_1.</p> <pre><code>merged = df_2.merge(df_1, how=&quot;left&quot;, on=&quot;Color&quot;) </code></pre> <p>Dropping duplicates works properly for the &quot;Yellow&quot; color because it has a 2:2 ratio of values in df_2 and options in df_1, but it doesn't work properly for &quot;Red&quot; or &quot;Green&quot; because they have a 2:3 ratio and a 1:2 ratio respectively, resulting in extra rows.</p> <pre><code>no_duplicates = merged.drop_duplicates(subset = &quot;ID&quot;) </code></pre> <p>Is there a way to subset df_1 where the first occurrence of &quot;Red&quot; in df_2 pulls out the first occurrence of &quot;Red&quot; in df_1, the second occurrence of &quot;Red&quot; in df_2 pulls out the second occurrence of &quot;Red&quot; in df_1, etc.? I would rather not use a loop unless I have no other choice. Thank you.</p>
<p>Try adding an indicator column to both <code>df_1</code> and <code>df_2</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>groupby cumcount</code></a> to get position as well:</p> <pre><code>df_1['i'] = df_1.groupby('Color').cumcount() df_2['i'] = df_2.groupby('Color').cumcount() </code></pre> <p><code>df_1</code>:</p> <pre><code> ID Color i 0 Lemon Yellow 0 1 Banana Yellow 1 2 Apple Red 0 3 Cherry Red 1 4 Tomato Red 2 5 Blueberry Blue 0 6 Avocado Green 0 7 Lime Green 1 </code></pre> <p><code>df_2</code>:</p> <pre><code> Color i 0 Red 0 1 Blue 0 2 Yellow 0 3 Green 0 4 Red 1 5 Yellow 1 </code></pre> <p>Then <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> on both the indicator and <code>Color</code>, then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop.html" rel="nofollow noreferrer"><code>drop</code></a> the indicator column:</p> <pre><code>merged_df = df_1.merge(df_2, how='right', on=['Color', 'i']).drop('i', axis=1) </code></pre> <p><code>merged_df</code>:</p> <pre><code> ID Color 0 Apple Red 1 Blueberry Blue 2 Lemon Yellow 3 Avocado Green 4 Cherry Red 5 Banana Yellow </code></pre> <hr /> <p>Alternatively, pass the series directly to <code>merge</code> (this leaves <code>df_1</code> and <code>df_2</code> unaffected):</p> <pre><code>merged_df = df_1.merge( df_2, how='right', left_on=['Color', df_1.groupby('Color').cumcount()], right_on=['Color', df_2.groupby('Color').cumcount()] ).drop('key_1', axis=1) </code></pre> <p><code>merged_df</code>:</p> <pre><code> ID Color 0 Apple Red 1 Blueberry Blue 2 Lemon Yellow 3 Avocado Green 4 Cherry Red 5 Banana Yellow </code></pre>
python|pandas|merge|subset
1
377,540
68,150,248
How to extract overlapping patches from a 3D volume and recreate the input shape from the patches?
<p>Pytorch offers <code>torch.Tensor.unfold</code>, an operation that can be chained across arbitrarily many dimensions to extract overlapping patches. How can we reverse the patch extraction so that the patches are combined back into the input shape?</p> <p>The focus is 3D volumetric images with 1 channel (biomedical). Extracting is possible with <code>unfold</code>; how can we combine the patches if they overlap?</p>
<p>To extract (overlapping) patches and to reconstruct the input shape we can use <code>torch.nn.functional.unfold</code> and the inverse operation <code>torch.nn.functional.fold</code>. These methods only process 4D tensors or 2D images; however, you can use these methods to process one dimension at a time.</p>
<p>A few notes:</p>
<ol>
<li><p>This way requires the fold/unfold methods from <strong>pytorch</strong>; unfortunately I have yet to find a similar method in the TF api.</p>
</li>
<li><p>We start with 2D, then 3D, then 4D, to show the incremental differences; you can extend to arbitrarily many dimensions (probably write a loop instead of hardcoding each dimension like I did).</p>
</li>
<li><p>We can extract patches in 2 ways, and their output is the same. The methods are called <code>extract_patches_Xd</code> and <code>extract_patches_Xds</code>, where X is the number of dimensions. The latter uses torch.Tensor.unfold() and has fewer lines of code. (The output is the same, except it cannot use dilation.)</p>
</li>
<li><p>The methods <code>extract_patches_Xd</code> and <code>combine_patches_Xd</code> are <strong>inverse</strong> methods and the combiner reverses the steps of the extractor step by step.</p>
</li>
<li><p>The lines of code are followed by a comment stating the dimensionality, such as (B, C, T, D, H, W). The following are used:</p>
<ol>
<li><code>B</code>: Batch size</li>
<li><code>C</code>: Channels</li>
<li><code>T</code>: Time Dimension</li>
<li><code>D</code>: Depth Dimension</li>
<li><code>H</code>: Height Dimension</li>
<li><code>W</code>: Width Dimension</li>
<li><code>x_dim_in</code>: In the extraction method, this is the number of input pixels in dimension <code>x</code>. In the combining method, this is the number of sliding windows in dimension <code>x</code>.</li>
<li><code>x_dim_out</code>: In the extraction method, this is the number of sliding windows in dimension <code>x</code>. In the combining method, this is the number of output pixels in dimension <code>x</code>.</li>
</ol>
</li>
<li><p>I have a <a href="https://colab.research.google.com/drive/1Xmvau7PXjFGrM31reoc6UNc27ewV-wTn?usp=sharing" rel="nofollow noreferrer">public notebook to try out the code</a></p>
</li>
<li><p>I have tried out basic 2D, 3D and 4D tensors as shown below. However, my code is not infallible and I appreciate feedback when tested on other inputs.</p>
</li>
<li><p>The <code>get_dim_blocks()</code> method is the function given on the <a href="https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html?highlight=convolution" rel="nofollow noreferrer">pytorch docs website</a> to compute the output shape of a convolutional layer.</p>
</li>
<li><p>Note that if you have overlapping patches and you combine them, the overlapping elements will be summed. If you would like to get the initial input back, there is a way (a code sketch of this appears at the end of the answer):</p>
<ol>
<li>Create a similar-sized tensor of ones as the patches with <code>torch.ones_like(patches_tensor)</code>.</li>
<li>Combine the patches into a full image with the same output shape.
(This creates a counter for overlapping elements.)</li>
<li>Divide the combined image by the combined ones; this should reverse any double summation of elements.</li>
</ol>
</li>
</ol>
<p>First (2D):</p>
<p>The <code>torch.nn.functional.fold</code> and <code>torch.nn.functional.unfold</code> methods can be used directly.</p>
<pre class="lang-py prettyprint-override"><code>import torch
</code></pre>
<pre class="lang-py prettyprint-override"><code>def extract_patches_2ds(x, kernel_size, padding=0, stride=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride)

    channels = x.shape[1]

    x = torch.nn.functional.pad(x, padding)
    # (B, C, H, W)
    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1])
    # (B, C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1])
    # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1])
    return x

def extract_patches_2d(x, kernel_size, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    h_dim_in = x.shape[2]
    w_dim_in = x.shape[3]
    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])
    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])

    # (B, C, H, W)
    x = torch.nn.functional.unfold(x, kernel_size, padding=padding, stride=stride, dilation=dilation)
    # (B, C * kernel_size[0] * kernel_size[1], h_dim_out * w_dim_out)
    x = x.view(-1, channels, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out)
    # (B, C, kernel_size[0], kernel_size[1], h_dim_out, w_dim_out)
    x = x.permute(0,1,4,5,2,3)
    # (B, C, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1])
    # (B * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1])
    return x

def combine_patches_2d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    h_dim_out, w_dim_out = output_shape[2:]
    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])
    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])

    # (B * h_dim_in * w_dim_in, C, kernel_size[0], kernel_size[1])
    x = x.view(-1, channels, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1])
    # (B, C, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1])
    x = x.permute(0,1,4,5,2,3)
    # (B, C, kernel_size[0], kernel_size[1], h_dim_in, w_dim_in)
    x = x.contiguous().view(-1, channels * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in)
    # (B, C * kernel_size[0] * kernel_size[1], h_dim_in * w_dim_in)
    x = torch.nn.functional.fold(x, (h_dim_out, w_dim_out), kernel_size=(kernel_size[0], kernel_size[1]), padding=padding, stride=stride, dilation=dilation)
    # (B, C, H, W)
    return x

a = torch.arange(1, 65, dtype=torch.float).view(2,2,4,4)
print(a.shape)
print(a)
b = extract_patches_2d(a, 2, padding=1, stride=2, dilation=1)
# b = extract_patches_2ds(a, 2, padding=1, stride=2)
print(b.shape)
print(b)
c = combine_patches_2d(b, 2, (2,2,4,4), padding=1, stride=2, dilation=1)
print(c.shape)
print(c)
print(torch.all(a==c))
</code></pre>
<p>Output (2D)</p>
<pre class="lang-py prettyprint-override"><code>torch.Size([2, 2, 4, 4])
tensor([[[[ 1.,  2.,  3.,  4.], [ 5.,  6.,  7.,  8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.]],
         [[17., 18., 19., 20.], [21., 22., 23., 24.], [25., 26., 27., 28.], [29., 30., 31., 32.]]],
        [[[33., 34., 35., 36.], [37., 38., 39., 40.], [41., 42., 43., 44.], [45., 46., 47., 48.]],
         [[49., 50., 51., 52.], [53., 54., 55., 56.], [57., 58., 59., 60.], [61., 62., 63., 64.]]]])
torch.Size([18, 2, 2, 2])
tensor([[[[ 0.,  0.], [ 0.,  1.]], [[ 0.,  0.], [ 2.,  3.]]],
        [[[ 0.,  0.], [ 4.,  0.]], [[ 0.,  5.], [ 0.,  9.]]],
        [[[ 6.,  7.], [10., 11.]], [[ 8.,  0.], [12.,  0.]]],
        [[[ 0., 13.], [ 0.,  0.]], [[14., 15.], [ 0.,  0.]]],
        [[[16.,  0.], [ 0.,  0.]], [[ 0.,  0.], [ 0., 17.]]],
        [[[ 0.,  0.], [18., 19.]], [[ 0.,  0.], [20.,  0.]]],
        [[[ 0., 21.], [ 0., 25.]], [[22., 23.], [26., 27.]]],
        [[[24.,  0.], [28.,  0.]], [[ 0., 29.], [ 0.,  0.]]],
        [[[30., 31.], [ 0.,  0.]], [[32.,  0.], [ 0.,  0.]]],
        [[[ 0.,  0.], [ 0., 33.]], [[ 0.,  0.], [34., 35.]]],
        [[[ 0.,  0.], [36.,  0.]], [[ 0., 37.], [ 0., 41.]]],
        [[[38., 39.], [42., 43.]], [[40.,  0.], [44.,  0.]]],
        [[[ 0., 45.], [ 0.,  0.]], [[46., 47.], [ 0.,  0.]]],
        [[[48.,  0.], [ 0.,  0.]], [[ 0.,  0.], [ 0., 49.]]],
        [[[ 0.,  0.], [50., 51.]], [[ 0.,  0.], [52.,  0.]]],
        [[[ 0., 53.], [ 0., 57.]], [[54., 55.], [58., 59.]]],
        [[[56.,  0.], [60.,  0.]], [[ 0., 61.], [ 0.,  0.]]],
        [[[62., 63.], [ 0.,  0.]], [[64.,  0.], [ 0.,  0.]]]])
torch.Size([2, 2, 4, 4])
tensor([[[[ 1.,  2.,  3.,  4.], [ 5.,  6.,  7.,  8.], [ 9., 10., 11., 12.], [13., 14., 15., 16.]],
         [[17., 18., 19., 20.], [21., 22., 23., 24.], [25., 26., 27., 28.], [29., 30., 31., 32.]]],
        [[[33., 34., 35., 36.], [37., 38., 39., 40.], [41., 42., 43., 44.], [45., 46., 47., 48.]],
         [[49., 50., 51., 52.], [53., 54., 55., 56.], [57., 58., 59., 60.], [61., 62., 63., 64.]]]])
tensor(True)
</code></pre>
<p>Second (3D):</p>
<p>Now it becomes interesting: we need to use 2 <code>fold</code> and <code>unfold</code> calls, where we first apply the <code>fold</code> to the <code>D</code> dimension and leave <code>W</code> and <code>H</code> untouched by setting kernel to 1, padding to 0, stride to 1 and dilation to 1. After that, we view the tensor and fold over the <code>H</code> and <code>W</code> dimensions.
The unfolding happens in reverse, starting with <code>H</code> and <code>W</code>, then <code>D</code>.</p>
<pre class="lang-py prettyprint-override"><code>def extract_patches_3ds(x, kernel_size, padding=0, stride=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding, padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride)

    channels = x.shape[1]

    x = torch.nn.functional.pad(x, padding)
    # (B, C, D, H, W)
    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2])
    # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])
    # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])
    return x

def extract_patches_3d(x, kernel_size, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    d_dim_in = x.shape[2]
    h_dim_in = x.shape[3]
    w_dim_in = x.shape[4]
    d_dim_out = get_dim_blocks(d_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])
    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])
    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[2], padding[2], stride[2], dilation[2])
    # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)

    # (B, C, D, H, W)
    x = x.view(-1, channels, d_dim_in, h_dim_in * w_dim_in)
    # (B, C, D, H * W)
    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
    # (B, C * kernel_size[0], d_dim_out * H * W)
    x = x.view(-1, channels * kernel_size[0] * d_dim_out, h_dim_in, w_dim_in)
    # (B, C * kernel_size[0] * d_dim_out, H, W)
    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))
    # (B, C * kernel_size[0] * d_dim_out * kernel_size[1] * kernel_size[2], h_dim_out, w_dim_out)
    x = x.view(-1, channels, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)
    # (B, C, kernel_size[0], d_dim_out, kernel_size[1], kernel_size[2], h_dim_out, w_dim_out)
    x = x.permute(0, 1, 3, 6, 7, 2, 4, 5)
    # (B, C, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2])
    # (B * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2])
    return x

def combine_patches_3d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    d_dim_out, h_dim_out, w_dim_out = output_shape[2:]
    d_dim_in = get_dim_blocks(d_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])
    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])
    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[2], padding[2], stride[2], dilation[2])
    # print(d_dim_in, h_dim_in, w_dim_in, d_dim_out, h_dim_out, w_dim_out)

    x = x.view(-1, channels, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])
    # (B, C, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2])
    x = x.permute(0, 1, 5, 2, 6, 7, 3, 4)
    # (B, C, kernel_size[0], d_dim_in, kernel_size[1], kernel_size[2], h_dim_in, w_dim_in)
    x = x.contiguous().view(-1, channels * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)
    # (B, C * kernel_size[0] * d_dim_in * kernel_size[1] * kernel_size[2], h_dim_in * w_dim_in)
    x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[1], kernel_size[2]), padding=(padding[1], padding[2]), stride=(stride[1], stride[2]), dilation=(dilation[1], dilation[2]))
    # (B, C * kernel_size[0] * d_dim_in, H, W)
    x = x.view(-1, channels * kernel_size[0], d_dim_in * h_dim_out * w_dim_out)
    # (B, C * kernel_size[0], d_dim_in * H * W)
    x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
    # (B, C, D, H * W)
    x = x.view(-1, channels, d_dim_out, h_dim_out, w_dim_out)
    # (B, C, D, H, W)
    return x

a = torch.arange(1, 129, dtype=torch.float).view(2,2,2,4,4)
print(a.shape)
print(a)
# b = extract_patches_3d(a, 2, padding=1, stride=2)
b = extract_patches_3ds(a, 2, padding=1, stride=2)
print(b.shape)
print(b)
c = combine_patches_3d(b, 2, (2,2,2,4,4), padding=1, stride=2)
print(c.shape)
print(c)
print(torch.all(a==c))
</code></pre>
<p>Output (3D)</p>
<p>(I had to limit the characters; please look at the <a href="https://colab.research.google.com/drive/1Xmvau7PXjFGrM31reoc6UNc27ewV-wTn?usp=sharing" rel="nofollow noreferrer">notebook</a>)</p>
<p>Third (4D)</p>
<p>We add a time dimension to the 3D volume. We start the folding with just the <code>T</code> dimension, leaving <code>D</code>, <code>H</code> and <code>W</code> alone, similarly to the 3D version. Then we fold over <code>D</code>, leaving <code>H</code> and <code>W</code>. Finally we do <code>H</code> and <code>W</code>. Hopefully by now you notice the pattern and can add arbitrarily many dimensions, folding them one by one.
The unfolding happens in reverse again.</p>
<pre class="lang-py prettyprint-override"><code>def extract_patches_4ds(x, kernel_size, padding=0, stride=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding, padding, padding, padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride, stride)

    channels = x.shape[1]

    x = torch.nn.functional.pad(x, padding)
    # (B, C, T, D, H, W)
    x = x.unfold(2, kernel_size[0], stride[0]).unfold(3, kernel_size[1], stride[1]).unfold(4, kernel_size[2], stride[2]).unfold(5, kernel_size[3], stride[3])
    # (B, C, t_dim_out, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    # (B * t_dim_out * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    return x

def extract_patches_4d(x, kernel_size, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation, dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    t_dim_in = x.shape[2]
    d_dim_in = x.shape[3]
    h_dim_in = x.shape[4]
    w_dim_in = x.shape[5]
    t_dim_out = get_dim_blocks(t_dim_in, kernel_size[0], padding[0], stride[0], dilation[0])
    d_dim_out = get_dim_blocks(d_dim_in, kernel_size[1], padding[1], stride[1], dilation[1])
    h_dim_out = get_dim_blocks(h_dim_in, kernel_size[2], padding[2], stride[2], dilation[2])
    w_dim_out = get_dim_blocks(w_dim_in, kernel_size[3], padding[3], stride[3], dilation[3])
    # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out)

    # (B, C, T, D, H, W)
    x = x.view(-1, channels, t_dim_in, d_dim_in * h_dim_in * w_dim_in)
    # (B, C, T, D * H * W)
    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
    # (B, C * kernel_size[0], t_dim_out * D * H * W)
    x = x.view(-1, channels * kernel_size[0] * t_dim_out, d_dim_in, h_dim_in * w_dim_in)
    # (B, C * kernel_size[0] * t_dim_out, D, H * W)
    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1))
    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1], d_dim_out * H * W)
    x = x.view(-1, channels * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, h_dim_in, w_dim_in)
    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out, H, W)
    x = torch.nn.functional.unfold(x, kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3]))
    # (B, C * kernel_size[0] * t_dim_out * kernel_size[1] * d_dim_out * kernel_size[2] * kernel_size[3], h_dim_out * w_dim_out)
    x = x.view(-1, channels, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out)
    # (B, C, kernel_size[0], t_dim_out, kernel_size[1], d_dim_out, kernel_size[2], kernel_size[3], h_dim_out, w_dim_out)
    x = x.permute(0, 1, 3, 5, 8, 9, 2, 4, 6, 7)
    # (B, C, t_dim_out, d_dim_out, h_dim_out, w_dim_out, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    x = x.contiguous().view(-1, channels, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    # (B * t_dim_out * d_dim_out * h_dim_out * w_dim_out, C, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    return x

def combine_patches_4d(x, kernel_size, output_shape, padding=0, stride=1, dilation=1):
    if isinstance(kernel_size, int):
        kernel_size = (kernel_size, kernel_size, kernel_size, kernel_size)
    if isinstance(padding, int):
        padding = (padding, padding, padding, padding)
    if isinstance(stride, int):
        stride = (stride, stride, stride, stride)
    if isinstance(dilation, int):
        dilation = (dilation, dilation, dilation, dilation)

    def get_dim_blocks(dim_in, dim_kernel_size, dim_padding = 0, dim_stride = 1, dim_dilation = 1):
        dim_out = (dim_in + 2 * dim_padding - dim_dilation * (dim_kernel_size - 1) - 1) // dim_stride + 1
        return dim_out

    channels = x.shape[1]
    t_dim_out, d_dim_out, h_dim_out, w_dim_out = output_shape[2:]
    t_dim_in = get_dim_blocks(t_dim_out, kernel_size[0], padding[0], stride[0], dilation[0])
    d_dim_in = get_dim_blocks(d_dim_out, kernel_size[1], padding[1], stride[1], dilation[1])
    h_dim_in = get_dim_blocks(h_dim_out, kernel_size[2], padding[2], stride[2], dilation[2])
    w_dim_in = get_dim_blocks(w_dim_out, kernel_size[3], padding[3], stride[3], dilation[3])
    # print(t_dim_in, d_dim_in, h_dim_in, w_dim_in, t_dim_out, d_dim_out, h_dim_out, w_dim_out)

    x = x.view(-1, channels, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    # (B, C, t_dim_in, d_dim_in, h_dim_in, w_dim_in, kernel_size[0], kernel_size[1], kernel_size[2], kernel_size[3])
    x = x.permute(0, 1, 6, 2, 7, 3, 8, 9, 4, 5)
    # (B, C, kernel_size[0], t_dim_in, kernel_size[1], d_dim_in, kernel_size[2], kernel_size[3], h_dim_in, w_dim_in)
    x = x.contiguous().view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in * w_dim_in)
    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in * kernel_size[2] * kernel_size[3], h_dim_in * w_dim_in)
    x = torch.nn.functional.fold(x, output_size=(h_dim_out, w_dim_out), kernel_size=(kernel_size[2], kernel_size[3]), padding=(padding[2], padding[3]), stride=(stride[2], stride[3]), dilation=(dilation[2], dilation[3]))
    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1] * d_dim_in, H, W)
    x = x.view(-1, channels * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * h_dim_out * w_dim_out)
    # (B, C * kernel_size[0] * t_dim_in * kernel_size[1], d_dim_in * H * W)
    x = torch.nn.functional.fold(x, output_size=(d_dim_out, h_dim_out * w_dim_out), kernel_size=(kernel_size[1], 1), padding=(padding[1], 0), stride=(stride[1], 1), dilation=(dilation[1], 1))
    # (B, C * kernel_size[0] * t_dim_in, D, H * W)
    x = x.view(-1, channels * kernel_size[0], t_dim_in * d_dim_out * h_dim_out * w_dim_out)
    # (B, C * kernel_size[0], t_dim_in * D * H * W)
    x = torch.nn.functional.fold(x, output_size=(t_dim_out, d_dim_out * h_dim_out * w_dim_out), kernel_size=(kernel_size[0], 1), padding=(padding[0], 0), stride=(stride[0], 1), dilation=(dilation[0], 1))
    # (B, C, T, D * H * W)
    x = x.view(-1, channels, t_dim_out, d_dim_out, h_dim_out, w_dim_out)
    # (B, C, T, D, H, W)
    return x

a = torch.arange(1, 129, dtype=torch.float).view(2,2,2,2,4,2)
print(a.shape)
print(a)
# b = extract_patches_4d(a, 2, padding=1, stride=2)
b = extract_patches_4ds(a, 2, padding=1, stride=2)
print(b.shape)
print(b)
c = combine_patches_4d(b, 2, (2,2,2,2,4,2), padding=1, stride=2)
print(c.shape)
print(c)
print(torch.all(a==c))
</code></pre>
<p>Output (4D)</p>
<p>(I had to limit the characters; please look at the <a href="https://colab.research.google.com/drive/1Xmvau7PXjFGrM31reoc6UNc27ewV-wTn?usp=sharing" rel="nofollow noreferrer">notebook</a>)</p>
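<p>As promised in note 9, here is a minimal sketch of recovering the original values when patches overlap (it reuses <code>combine_patches_3d</code> from above; <code>patches</code> and <code>out_shape</code> are placeholders, and the kernel/stride arguments must match the ones used for extraction):</p>
<pre class="lang-py prettyprint-override"><code># overlapping elements are summed by fold, so fold a tensor of ones the
# same way to count how many patches covered each voxel, then divide
ones = torch.ones_like(patches)
summed = combine_patches_3d(patches, 2, out_shape, stride=1)
counts = combine_patches_3d(ones, 2, out_shape, stride=1)
recovered = summed / counts  # elementwise average over overlapping patches
</code></pre>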
machine-learning|3d|computer-vision|pytorch
1
377,541
68,324,172
Retrieving intermediate features from pytorch torch.hub.load
<p>I have a Net object instantiated in pytorch via torch.hub.load:</p> <pre><code>model = torch.hub.load('facebookresearch/pytorchvideo', 'slowfast_r50', pretrained=True)
</code></pre> <p>The final layer is a projection to a 400-dim vector. Is there a way to get the penultimate layer instead during a forward pass?</p>
<p>Yes, the easiest way is to switch the layer with <a href="https://pytorch.org/docs/stable/generated/torch.nn.Identity.html" rel="nofollow noreferrer"><code>torch.nn.Identity</code></a> (which simply returns its inputs unchanged):</p> <p>The line below changes this submodule:</p> <pre><code>(6): ResNetBasicHead(
  (dropout): Dropout(p=0.5, inplace=False)
  (proj): Linear(in_features=2304, out_features=400, bias=True)
  (output_pool): AdaptiveAvgPool3d(output_size=1)
)
</code></pre> <p>to <code>Identity</code>:</p> <pre><code>model.blocks[6] = torch.nn.Identity()
</code></pre> <p>as you probably don't want to keep the <code>Dropout</code> anyway (you might only change <code>proj</code> or any other part of the network as needed).</p>
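<p>If you'd rather leave the model untouched, a forward hook is a common alternative; a rough sketch (the module path comes from the printout above; <code>clip</code> stands in for your input batch):</p> <pre><code>features = {}

def hook(module, inputs, output):
    # the input of the projection layer is the penultimate representation
    features['penultimate'] = inputs[0].detach()

handle = model.blocks[6].proj.register_forward_hook(hook)
_ = model(clip)  # normal forward pass; the 400-dim output is unchanged
penultimate = features['penultimate']
handle.remove()
</code></pre>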
pytorch
1
377,542
68,430,896
How to optimize PySpark when using multiple data frames?
<p>Looking to optimize the following code for speed. Extensive background in python and pandas, but new to pyspark. Any suggestions you may have will be greatly appreciated.</p> <p>For clarity, the code has been broken up into parts 0 through 5. Feel free to address a single part or make a suggestion that efficiently ties all parts together.</p> <p><strong>There must be a PySpark &quot;trick&quot; that circumvents the necessity of having to visit each column multiple times, right?</strong></p> <pre><code>from pyspark.sql.functions import countDistinct, col, isnan, when, count

df = spark.sql(&quot;&quot;&quot;SELECT {} FROM db.tbl WHERE tbl.year = '2019'&quot;&quot;&quot;.format(sql_string))
print(df.count(), len(df.columns))

# 0. Find Number of Distinct Values for each Column
distinct_value_dict = {}
for i, x in enumerate(df.columns):
    df0 = df.agg(countDistinct(x).alias('c')).collect()
    distinct_value_dict[x] = int(str(df0)[7:-2])
    print(i, ',', end=&quot;&quot;)
df0 = pd.DataFrame.from_dict(distinct_value_dict, orient='index', columns=['Distinct Values'])
df0.to_csv('db_2019_0.csv')
print('df0: Distinct Values... COMPLETE!')

# 1. Find Data Type of each Column
df1 = pd.DataFrame(df.dtypes).rename(columns={0:'Column Name', 1:'Data Type'}).set_index('Column Name')
df1.to_csv('db_2019_1.csv')
print('df1: Data Type... COMPLETE!')

# 2. Count the 'Missing Values' for each Column: Count None, Null, empty, Nan with string literals.
df2 = df.select([count(when(
                    col(c).contains('None') | \
                    col(c).contains('NULL') | \
                    (col(c) == '' ) | \
                    col(c).isNull() | \
                    isnan(c), c
                    )).alias(c) for c in df.columns]
               ).toPandas().transpose().rename(columns={0:'Missing Values'})
df2.to_csv('db_2019_2.csv')
print('df2: Missing Values... COMPLETE!')

# 3. Generate Descriptive Statistics: Row Count, Mean, Std Dev, Min, Max
df3 = df.describe().toPandas().set_index('summary').transpose().rename(
    columns={'summary':'Column Name', 'count':'Row Count', 'mean':'Mean',
             'stddev':'Std Dev', 'min':'Min', 'max':'Max'})
df3.to_csv('db_2019_3.csv')
print('df3: Describe... COMPLETE!')

# 4. Determine the Mode for each Column
df4 = pd.DataFrame([[i, df_tops.groupby(i).count().orderBy('count', ascending=False).first()[0]]
                    for i in df.columns]).rename(columns={0:'Column Name', 1:'Mode'}).set_index('Column Name')
df4.to_csv('db_2019_4.csv')
print('df4: Mode... COMPLETE!')

# 5. Concat descriptive values for each of the columns
df_summary = pd.concat([df1, df0, df2, df3, df4], axis=1).sort_values(by='Missing Values')
df_summary
</code></pre>
<p>Using a list comprehension did not improve performance for #0. I found it more efficient to extract the distinct value count from the collected string and add it to a dictionary:</p> <pre><code># 0. Find Number of Distinct Values for each Column
distinct_value_dict = {}
for i, x in enumerate(df.columns):
    df0 = df.agg(countDistinct(x).alias('c')).collect()
    distinct_value_dict[x] = int(str(df0)[7:-2])
    print(i, ',', end=&quot;&quot;)
df0 = pd.DataFrame.from_dict(distinct_value_dict, orient='index', columns=['Distinct Values'])
df0.to_csv('db_2019_0.csv')
print('df0: Distinct Values... COMPLETE!')
</code></pre>
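<p>As a sketch of a single-pass alternative (names match the code above): all distinct counts can be requested in one <code>agg</code>, so Spark scans the data once instead of once per column; if exact counts aren't required, <code>F.approx_count_distinct</code> is usually much faster still:</p> <pre><code>from pyspark.sql import functions as F

exprs = [F.countDistinct(F.col(c)).alias(c) for c in df.columns]
row = df.agg(*exprs).collect()[0]

df0 = pd.DataFrame.from_dict(row.asDict(), orient='index',
                             columns=['Distinct Values'])
</code></pre>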
python|pandas|dataframe|apache-spark|pyspark
0
377,543
68,038,132
timedelta64 column with apache superset
<p>I have a dataframe with a <code>timedelta64</code> column that I want to use in my analytics. I upload the data to superset via &quot;Upload CSV&quot; and there doesn't seem to be a way to tell superset that a particular column is a timedelta during the upload process (similar to how you can tell it to parse specific columns as dates). So the column is imported as text and there is no way to change the type of column after the upload.</p> <p>Is there any way around this?</p>
<p>The column types in Superset mirror the database column types. So if you upload a CSV to a Postgres database, you need to make sure that timedelta is an actual column type (as far as I know, it isn't a column type!).</p> <p>In fact, the suggested workaround is to save the value / column as a string! <a href="https://stackoverflow.com/questions/55516374/how-to-store-a-pandas-dataframe-with-datetime-timedelta-type-data-objects-into-a">How to store a pandas dataframe with datetime.timedelta type data objects into a postgresql d/b using sqlalchemy?</a></p> <p>You can always tap into the semantic layer in Superset to continue transforming the data in SQL Lab: <a href="https://docs.preset.io/docs/semantic-layer" rel="nofollow noreferrer">https://docs.preset.io/docs/semantic-layer</a></p>
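<p>A small sketch of that pre-upload conversion (the column name is a placeholder): storing the timedelta as total seconds gives the database a numeric column that Superset can aggregate directly:</p> <pre><code>df['duration_seconds'] = df['duration'].dt.total_seconds()
df = df.drop(columns=['duration'])
</code></pre>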
python|pandas|apache-superset
0
377,544
68,140,388
An error 'Cache may be out of date, try `force_reload=True`.' comes up even though I have included `force_reload=True` in the code block?
<p>My Heroku app gives an Internal Server Error (500) when I try to get an inference from the model. With the command <code>heroku logs --tail</code>, the following error comes up (this is part of the output received):</p> <pre><code>2021-06-25T13:13:01.052585+00:00 heroku[web.1]: State changed from up to starting
2021-06-25T13:13:02.131624+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2021-06-25T13:13:02.333580+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [8] [INFO] Worker exiting (pid: 8)
2021-06-25T13:13:02.333668+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [4] [INFO] Handling signal: term
2021-06-25T13:13:02.333748+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [7] [INFO] Worker exiting (pid: 7)
2021-06-25T13:13:02.734851+00:00 app[web.1]: [2021-06-25 13:13:02 +0000] [4] [INFO] Shutting down: Master
2021-06-25T13:13:02.814358+00:00 heroku[web.1]: Process exited with status 0
2021-06-25T13:13:32.771092+00:00 heroku[web.1]: Starting process with command `gunicorn app:app`
2021-06-25T13:13:37.235228+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Starting gunicorn 20.1.0
2021-06-25T13:13:37.235870+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Listening at: http://0.0.0.0:17520 (4)
2021-06-25T13:13:37.236182+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [4] [INFO] Using worker: sync
2021-06-25T13:13:37.248134+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [7] [INFO] Booting worker with pid: 7
2021-06-25T13:13:37.304799+00:00 app[web.1]: [2021-06-25 13:13:37 +0000] [8] [INFO] Booting worker with pid: 8
2021-06-25T13:13:37.739229+00:00 heroku[web.1]: State changed from starting to up
2021-06-25T13:14:08.000000+00:00 app[api]: Build succeeded
2021-06-25T13:48:08.838837+00:00 heroku[web.1]: Idling
2021-06-25T13:48:08.857160+00:00 heroku[web.1]: State changed from up to down
2021-06-25T13:48:10.249612+00:00 heroku[web.1]: Stopping all processes with SIGTERM
2021-06-25T13:48:10.325914+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [4] [INFO] Handling signal: term
2021-06-25T13:48:10.336944+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [7] [INFO] Worker exiting (pid: 7)
2021-06-25T13:48:10.337795+00:00 app[web.1]: [2021-06-25 13:48:10 +0000] [8] [INFO] Worker exiting (pid: 8)
2021-06-25T13:48:11.241652+00:00 app[web.1]: [2021-06-25 13:48:11 +0000] [4] [INFO] Shutting down: Master
2021-06-25T13:48:11.421533+00:00 heroku[web.1]: Process exited with status 0
2021-06-26T07:29:48.172006+00:00 heroku[web.1]: Unidling
2021-06-26T07:29:48.174345+00:00 heroku[web.1]: State changed from down to starting
2021-06-26T07:30:11.641635+00:00 heroku[web.1]: Starting process with command `gunicorn app:app`
2021-06-26T07:30:13.993835+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Starting gunicorn 20.1.0
2021-06-26T07:30:13.994244+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Listening at: http://0.0.0.0:34971 (4)
2021-06-26T07:30:13.994334+00:00 app[web.1]: [2021-06-26 07:30:13 +0000] [4] [INFO] Using worker: sync
2021-06-26T07:30:14.001212+00:00 app[web.1]: [2021-06-26 07:30:14 +0000] [7] [INFO] Booting worker with pid: 7
2021-06-26T07:30:14.034313+00:00 app[web.1]: [2021-06-26 07:30:14 +0000] [8] [INFO] Booting worker with pid: 8
2021-06-26T07:30:15.444809+00:00 heroku[web.1]: State changed from starting to up
2021-06-26T07:30:16.251487+00:00 heroku[router]: at=info method=GET path=&quot;/&quot; host=safety-helmet-object-detection.herokuapp.com request_id=1a73ef92-3214-4814-a6e6-0a078b49091e fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=10ms status=200 bytes=2567 protocol=https
2021-06-26T07:30:16.253872+00:00 app[web.1]: 10.43.233.218 - - [26/Jun/2021:07:30:16 +0000] &quot;GET / HTTP/1.1&quot; 200 2412 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:30:16.635010+00:00 heroku[router]: at=info method=GET path=&quot;/&quot; host=safety-helmet-object-detection.herokuapp.com request_id=8f5913f4-2fc5-4875-a490-18c10c068f80 fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=10ms status=200 bytes=2567 protocol=http
2021-06-26T07:30:16.635538+00:00 app[web.1]: 10.47.181.239 - - [26/Jun/2021:07:30:16 +0000] &quot;GET / HTTP/1.1&quot; 200 2412 &quot;-&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:30:16.913529+00:00 heroku[router]: at=info method=GET path=&quot;/static/style.css&quot; host=safety-helmet-object-detection.herokuapp.com request_id=46585aa5-e4b8-465a-b088-6026bec294e2 fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=8ms status=200 bytes=696 protocol=http
2021-06-26T07:30:16.913943+00:00 app[web.1]: 10.47.181.239 - - [26/Jun/2021:07:30:16 +0000] &quot;GET /static/style.css HTTP/1.1&quot; 200 0 &quot;http://safety-helmet-object-detection.herokuapp.com/&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:30:17.013092+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:30:17 +0000] &quot;GET /static/pytorch.png HTTP/1.1&quot; 200 0 &quot;http://safety-helmet-object-detection.herokuapp.com/&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:30:17.013496+00:00 heroku[router]: at=info method=GET path=&quot;/static/pytorch.png&quot; host=safety-helmet-object-detection.herokuapp.com request_id=e334b2ec-032c-4392-99f5-7ff57c368de6 fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=16ms status=200 bytes=11679 protocol=http
2021-06-26T07:30:17.371585+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:30:17 +0000] &quot;GET /favicon.ico HTTP/1.1&quot; 404 232 &quot;http://safety-helmet-object-detection.herokuapp.com/&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:30:17.372076+00:00 heroku[router]: at=info method=GET path=&quot;/favicon.ico&quot; host=safety-helmet-object-detection.herokuapp.com request_id=5a0b6062-de2b-465c-80f9-ccc556342ef2 fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=2ms status=404 bytes=393 protocol=http
2021-06-26T07:31:30.049426+00:00 app[web.1]: Downloading: &quot;https://github.com/ultralytics/yolov5/archive/master.zip&quot; to /app/.cache/torch/hub/master.zip
2021-06-26T07:31:37.289772+00:00 app[web.1]: Exception on / [POST]
2021-06-26T07:31:37.289821+00:00 app[web.1]: Traceback (most recent call last):
2021-06-26T07:31:37.289821+00:00 app[web.1]:   File &quot;/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py&quot;, line 40, in _create
2021-06-26T07:31:37.289822+00:00 app[web.1]:     model = attempt_load(fname, map_location=torch.device('cpu'))  # download/load FP32 model
2021-06-26T07:31:37.289837+00:00 app[web.1]:   File &quot;/app/.cache/torch/hub/ultralytics_yolov5_master/models/experimental.py&quot;, line 119, in attempt_load
2021-06-26T07:31:37.289838+00:00 app[web.1]:     ckpt = torch.load(attempt_download(w), map_location=map_location)  # load
2021-06-26T07:31:37.289838+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py&quot;, line 579, in load
2021-06-26T07:31:37.289839+00:00 app[web.1]:     with _open_file_like(f, 'rb') as opened_file:
2021-06-26T07:31:37.289839+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py&quot;, line 230, in _open_file_like
2021-06-26T07:31:37.289839+00:00 app[web.1]:     return _open_file(name_or_buffer, mode)
2021-06-26T07:31:37.289840+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/torch/serialization.py&quot;, line 211, in __init__
2021-06-26T07:31:37.289840+00:00 app[web.1]:     super(_open_file, self).__init__(open(name, mode))
2021-06-26T07:31:37.289840+00:00 app[web.1]: FileNotFoundError: [Errno 2] No such file or directory: 'best.pt'
2021-06-26T07:31:37.289840+00:00 app[web.1]:
2021-06-26T07:31:37.289841+00:00 app[web.1]: The above exception was the direct cause of the following exception:
2021-06-26T07:31:37.289841+00:00 app[web.1]:
2021-06-26T07:31:37.289841+00:00 app[web.1]: Traceback (most recent call last):
2021-06-26T07:31:37.289842+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/flask/app.py&quot;, line 2070, in wsgi_app
2021-06-26T07:31:37.289842+00:00 app[web.1]:     response = self.full_dispatch_request()
2021-06-26T07:31:37.289842+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/flask/app.py&quot;, line 1515, in full_dispatch_request
2021-06-26T07:31:37.289842+00:00 app[web.1]:     rv = self.handle_user_exception(e)
2021-06-26T07:31:37.289843+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/flask/app.py&quot;, line 1513, in full_dispatch_request
2021-06-26T07:31:37.289843+00:00 app[web.1]:     rv = self.dispatch_request()
2021-06-26T07:31:37.289843+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/flask/app.py&quot;, line 1499, in dispatch_request
2021-06-26T07:31:37.289843+00:00 app[web.1]:     return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
2021-06-26T07:31:37.289843+00:00 app[web.1]:   File &quot;/app/app.py&quot;, line 23, in predict
2021-06-26T07:31:37.289844+00:00 app[web.1]:     model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt', force_reload=True).autoshape()  # force_reload = recache latest code
2021-06-26T07:31:37.289844+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/torch/hub.py&quot;, line 339, in load
2021-06-26T07:31:37.289844+00:00 app[web.1]:     model = _load_local(repo_or_dir, model, *args, **kwargs)
2021-06-26T07:31:37.289845+00:00 app[web.1]:   File &quot;/app/.heroku/python/lib/python3.9/site-packages/torch/hub.py&quot;, line 368, in _load_local
2021-06-26T07:31:37.289845+00:00 app[web.1]:     model = entry(*args, **kwargs)
2021-06-26T07:31:37.289845+00:00 app[web.1]:   File &quot;/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py&quot;, line 65, in custom
2021-06-26T07:31:37.289845+00:00 app[web.1]:     return _create(path, autoshape=autoshape, verbose=verbose, device=device)
2021-06-26T07:31:37.289846+00:00 app[web.1]:   File &quot;/app/.cache/torch/hub/ultralytics_yolov5_master/hubconf.py&quot;, line 60, in _create
2021-06-26T07:31:37.289846+00:00 app[web.1]:     raise Exception(s) from e
2021-06-26T07:31:37.289851+00:00 app[web.1]: Exception: Cache may be out of date, try `force_reload=True`. See https://github.com/ultralytics/yolov5/issues/36 for help.
2021-06-26T07:31:37.290810+00:00 app[web.1]: 10.63.152.20 - - [26/Jun/2021:07:31:37 +0000] &quot;POST / HTTP/1.1&quot; 500 290 &quot;http://safety-helmet-object-detection.herokuapp.com/&quot; &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36&quot;
2021-06-26T07:31:37.291579+00:00 heroku[router]: at=info method=POST path=&quot;/&quot; host=safety-helmet-object-detection.herokuapp.com request_id=15ab238a-5ef8-40e5-86e9-159f6b4de20f fwd=&quot;122.200.18.98&quot; dyno=web.1 connect=1ms service=8230ms status=500 bytes=463 protocol=http
</code></pre> <p><strong>The main errors are:</strong></p> <p>FileNotFoundError: [Errno 2] No such file or directory: 'best.pt'</p> <p>The above exception was the direct cause of the following exception:</p> <p>Exception: Cache may be out of date, try <code>force_reload=True</code>. See <a href="https://github.com/ultralytics/yolov5/issues/36" rel="nofollow noreferrer">https://github.com/ultralytics/yolov5/issues/36</a> for help.</p> <p>(app.py, the code I am working with, is in the same directory as best.pt)</p> <p>The code I wrote is as follows:</p> <pre><code>&quot;&quot;&quot;
Web App to perform inference on a YOLOv5s custom model
&quot;&quot;&quot;
import io
from PIL import Image
from pathlib import Path
from flask import Flask, render_template, request, redirect
import torch

app = Flask(__name__)

@app.route(&quot;/&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;])
def predict():
    if request.method == &quot;POST&quot;:
        if &quot;file&quot; not in request.files:
            return redirect(request.url)

        model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt',
                               force_reload=True).autoshape()  # force_reload=True re-caches the latest code (but it doesn't work here)
        model.eval()

        file = request.files[&quot;file&quot;]
        if not file:
            return

        img_bytes = file.read()
        img = Image.open(io.BytesIO(img_bytes))
        results = model(img, size=640)
        results.display(save=True, save_dir=Path('static'))
        return redirect(&quot;static/image0.jpg&quot;)

    return render_template(&quot;index.html&quot;)

if __name__ == &quot;__main__&quot;:
    app.run()
</code></pre>
<p>I fixed this issue. The FileNotFoundError: [Errno 2] No such file or directory: 'best.pt' was the main error: Heroku couldn't figure that path out, so I put the custom model in a folder called static. My new code block is:</p> <p><code>model = torch.hub.load('ultralytics/yolov5', 'custom', path='static/best.pt', force_reload=True).autoshape()</code></p> <p>where app.py and static are in the same directory.</p>
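<p>A further sketch, not part of the original fix: loading the model once at import time, instead of inside the route, avoids re-downloading the hub repo on every request (<code>force_reload=True</code> is only needed to refresh a stale cache):</p> <pre><code>model = torch.hub.load('ultralytics/yolov5', 'custom', path='static/best.pt')
model.eval()

@app.route(&quot;/&quot;, methods=[&quot;GET&quot;, &quot;POST&quot;])
def predict():
    # ... reuse the module-level `model` here ...
</code></pre>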
python|heroku|pytorch|internal-server-error|yolov5
3
377,545
68,299,412
Cumulative sum of rows in Python Pandas
<p>I'm working on a dataframe in which I get a value for each year and state:</p> <pre><code>0    State  1965  1966  1967  1968
1  Alabama  20.2    40  60.3    80
2   Alaska    10    15    18    20
3  Arizona     5     5    10    12
</code></pre> <p>I need each value to be the sum of the previous value and the current one:</p> <pre><code>0    State  1965  1966   1967   1968
1  Alabama  20.2  60.2  120.5  200.5
2   Alaska    10    25     43     63
3  Arizona     5    10     20     32
</code></pre> <p>I tried <code>df['sum'] = df.sum(axis=1)</code> and <code>.cumsum</code>, but I don't know how to apply it to my problem, as I don't need a new column with the total sum.</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>DataFrame.cumsum</code></a> with <code>axis=1</code> and convert the non numeric column <code>State</code> to the <code>index</code>:</p> <pre><code>df = df.set_index('State').cumsum(axis=1)
print (df)
         1965  1966   1967   1968
State
Alabama  20.2  60.2  120.5  200.5
Alaska   10.0  25.0   43.0   63.0
Arizona   5.0  10.0   20.0   32.0
</code></pre> <p>Or select all columns without the first one and assign back:</p> <pre><code>df.iloc[:, 1:] = df.iloc[:, 1:].cumsum(axis=1)
print (df)
     State  1965  1966   1967   1968
0
1  Alabama  20.2  60.2  120.5  200.5
2   Alaska  10.0  25.0   43.0   63.0
3  Arizona   5.0  10.0   20.0   32.0
</code></pre>
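<p>If you want <code>State</code> back as a regular column afterwards, the first variant can be chained with <code>reset_index</code>:</p> <pre><code>df = df.set_index('State').cumsum(axis=1).reset_index()
</code></pre>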
python-3.x|pandas|dataframe
0
377,546
68,296,386
Tensorflow.js returns "NaN" Value when running Linear Regression Model
<p>I'm trying to run this linear regression model, which would essentially give me an output based on <code>const prediction = model.predict((tf.tensor2d([20], [1,1])));</code> However, I'm unfortunately getting a NaN value every time I run the code to receive a prediction.</p> <p>What's the best way to approach a solution? Are there other methods?</p> <p>Thank you!</p> <p>Below is the code:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-js lang-js prettyprint-override"><code>async function learnLinear() {
  const fontSize = document.getElementById("count").innerHTML;
  const parsed = parseInt(fontSize);

  const model = tf.sequential();
  model.add(tf.layers.dense({ units: 1, inputShape: [1] }));

  const learningRate = 0.0001;
  const optimizer = tf.train.sgd(learningRate);

  model.compile({
    loss: "meanSquaredError",
    optimizer: "sgd",
  });

  const xs = tf.tensor2d(
    [54, 20, 22, 34, 18, 47, 28, 54, 36, 51, 44, 31, 39, 19, 45, 48, 32,
     27, 25, 54, 27, 38, 25, 38, 57, 49, 28, 19, 59, 28, 27, 55, 60, 49,
     40, 45, 35, 45, 39, 25, 50, 58, 28, 59, 21, 37, 47, 31, 46, 18],
    [50, 1]
  );

  const ys = tf.tensor2d(
    [14, 15, 15, 15, 16, 17, 15, 16, 15, 17, 17, 15, 16, 15, 15, 16, 17,
     17, 17, 14, 16, 15, 15, 16, 17, 15, 16, 14, 15, 16, 14, 17, 15, 14,
     14, 17, 15, 14, 14, 16, 16, 14, 14, 17, 17, 14, 17, 14, 14, 17],
    [50, 1]
  );

  await model.fit(xs, ys, { epochs: 500 });

  const prediction = model.predict(tf.tensor2d([20], [1, 1]));
  const value = prediction.dataSync()[0];
  console.log("Prediction", value);
}</code></pre> </div> </div> </p>
<p>You forgot to specify what metric the model is supposed to track. Also note that in TensorFlow.js the batch size and epoch count go into the third argument of <code>fit</code> as a config object, not as separate positional arguments:</p> <pre><code>const batchSize = 32;
const epochs = 500;

model.compile({
  loss: &quot;meanSquaredError&quot;,
  optimizer: &quot;sgd&quot;,
  metrics: [&quot;mse&quot;],
});

await model.fit(xs, ys, { batchSize, epochs });

const prediction = model.predict(tf.tensor2d([20], [1, 1]));
</code></pre>
javascript|tensorflow|linear-regression|tensorflow.js
0
377,547
68,339,861
How to remove space "before" column name when importing csv data in Pandas
<p>I use the default Pandas csv reading to import some data as follows:</p> <pre><code>df = pd.read_csv('data_file.csv')
</code></pre> <p>The data frame I got is as below:</p> <pre><code>    Force [N]   Stress [MPa]
0    0.000000   2.230649e-13
1    0.014117   1.071518e-01
2    0.135255   3.365490e+00
</code></pre> <p>The data frame columns I got here have spaces before the actual name (illustrated below), so I could not use the actual column name string to access the columns afterwards.</p> <pre><code>&gt;&gt;&gt; df.columns
Index([' Force [N]', ' Stress [MPa] '])
</code></pre> <p>I would like to remove the space <strong>before</strong> the string, but keep the space within the column name string. I tried the following, but it removes all spaces, so the column names get changed as well.</p> <pre><code>df.columns=df.columns.str.replace(' ','');
</code></pre> <p>Is there any way to strip the white space to the left of a column name when importing the csv? I would like the column names to be like:</p> <pre><code>'Force [N]','Stress [MPa]'
</code></pre>
<p>To avoid post-processing the columns, set <code>skipinitialspace=True</code> in <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pd.read_csv</code></a>:</p> <pre><code>df = pd.read_csv('data_file.csv', skipinitialspace=True)
</code></pre> <p><code>df</code>:</p> <pre><code>   Force [N]  Stress [MPa]
0   0.000000  2.230649e-13
1   0.014117  1.071518e-01
2   0.135255  3.365490e+00
</code></pre> <p><code>df.columns</code>:</p> <pre><code>Index(['Force [N]', 'Stress [MPa]'], dtype='object')
</code></pre> <p><code>data_file.csv</code>:</p> <pre><code> Force [N], Stress [MPa]
 0.000000, 2.230649e-13
 0.014117, 1.071518e-01
 0.135255, 3.365490e+00
</code></pre>
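<p>If the frame is already loaded, stripping only the leading/trailing whitespace from the headers also works:</p> <pre><code>df.columns = df.columns.str.strip()
</code></pre>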
python|pandas
4
377,548
68,082,460
Pandas un-explode Series based off of Index
<p>I have a pd.dataframe of sentences such as:</p> <pre><code>                                                    text
0      kusebenta ngendlela lefanele kwemabhizinisi em...
1      bumetima ekuvisiseni umnotfo wemhlaba nekuntji...
2      ngaleyo ndlelake sesincume kusebentisa emandla...
3      emabhisizinisi embuso kufanele angenise imali ...
4      nanoma kulungile kutsi umbuso uwasite ngetimal...
...                                                  ...
63121  alimata nobe enta kutsi kulahleke imphahla yem...
63122  afaka engotini imphilo yakhe nobe yalabanye ng...
63123  akhinyabeta kuphatfwa kucondziswa kwetigwegwe ...
63124  asebentisa kabi sikhundla sakhe emisebentini y...
63125         antjontja afumbatsisa nobe enta inkohliso
</code></pre> <p>I have a pd.series of sentences I got after exploding the pd.dataframe containing sentences like so:</p> <pre class="lang-py prettyprint-override"><code>series1 = df1['field_name'].str.split().explode()
</code></pre> <p>This gives the pd.series:</p> <pre><code>0               kusebenta
0               ngendlela
0                lefanele
0          kwemabhizinisi
0                  embuso
0                    tate
0                   owned
0             enterprises
0           kungumgogodla
0             wentfutfuko
0                  yelive
1                bumetima
1             ekuvisiseni
1                 umnotfo
1                wemhlaba
1    nekuntjintjantjintja
1                   kwawo
1                 emandla
1              etimakethe
1                   kanye
Name: text, dtype: object
</code></pre> <p>I want to now un-explode and recombine the words from the pd.series to make the full sentences again after I do some processing on the series.</p> <p>I have been looking at using 'groupby', but have not had much success when trying to group by the index.</p> <p>I have also tried to convert it into a pd.dataframe, but got errors about having the same index.</p> <p>P.S. would it be possible to create a new DF with indexes up to the length of the pd.series and then concat the pd.series data somehow as full sentences, or use groupby that way?</p> <p>EDIT 1: Running <code>vocab.head().to_dict()</code> as suggested in a comment produces the following output: <code>{0: 'embuso'}</code>, which is the 5th element, as head gives 5 results by default.</p> <p>Running <code>vocab.head(20).to_dict()</code> produces: <code>{0: 'yelive', 1: 'kanye'}</code>, the last element with index 0 being 'yelive', and 'kanye' being an element from index 1.</p>
<p>Group on the index generated by <code>explode()</code> and apply <code>' '.join</code>:</p> <pre><code>&gt;&gt;&gt; series1.groupby(level=0).apply(' '.join)
0    kusebenta ngendlela lefanele kwemabhizinisi em...
1    bumetima ekuvisiseni umnotfo wemhlaba nekuntji...
Name: text, dtype: object
</code></pre>
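<p><code>agg</code> accepts the same callable and may be a touch faster for a plain reduction like this:</p> <pre><code>series1.groupby(level=0).agg(' '.join)
</code></pre>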
python|pandas|dataframe|nlp|series
1
377,549
68,427,239
How to calculate pct_change in pandas with reference to just the first column
<p>I have a dataframe as:</p> <pre><code>df = pd.DataFrame({
    &quot;A&quot;: [1, 5, 2, 5, 6],
    &quot;B&quot;: [-12, 23, 5, 22, 35],
    &quot;C&quot;: [-32, 12, -10, 3, 2],
    &quot;D&quot;: [2, 13, 6, 2, 8]
})
</code></pre> <p>Now, I want to calculate the percentage change on <code>axis=1</code> but with reference to just <code>&quot;A&quot;</code> for all the columns, like the percentage change for <code>&quot;B&quot;</code> w.r.t <code>&quot;A&quot;</code>, <code>&quot;C&quot;</code> w.r.t <code>&quot;A&quot;</code> and so on.</p> <p>The <code>pct_change</code> function does a similar job, but it calculates the percent change for successive rows or columns, which I don't want.</p> <p>Right now I'm thinking of achieving this with a for loop and adding on the percentages, OR splitting the dataframe like <code>[&quot;A&quot;, &quot;B&quot;]</code>, <code>[&quot;A&quot;, &quot;C&quot;]</code>, and so forth, and then applying <code>pct_change</code> to each separately.</p> <p>The latter approach is, I think, better, but the question is,</p> <p><strong>Is there an even better approach which will do the same job?</strong></p>
<p>You can use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.divide.html" rel="nofollow noreferrer">divide</a> function in pandas, dividing all columns by column <code>A</code>:</p> <pre class="lang-py prettyprint-override"><code>pct = df.divide(df[&quot;A&quot;], axis=&quot;index&quot;) - 1
pct.head()
</code></pre> <p>Results:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>A</th> <th>B</th> <th>C</th> <th>D</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0.0</td> <td>-13.000000</td> <td>-33.000000</td> <td>1.000000</td> </tr> <tr> <td>1</td> <td>0.0</td> <td>3.600000</td> <td>1.400000</td> <td>1.600000</td> </tr> <tr> <td>2</td> <td>0.0</td> <td>1.500000</td> <td>-6.000000</td> <td>2.000000</td> </tr> <tr> <td>3</td> <td>0.0</td> <td>3.400000</td> <td>-0.400000</td> <td>-0.600000</td> </tr> <tr> <td>4</td> <td>0.0</td> <td>4.833333</td> <td>-0.666667</td> <td>0.333333</td> </tr> </tbody> </table> </div>
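<p>The same result can be written as an explicit <code>(x - ref) / ref</code>, which reads closer to the usual percent-change formula:</p> <pre class="lang-py prettyprint-override"><code>pct = df.sub(df[&quot;A&quot;], axis=0).div(df[&quot;A&quot;], axis=0)
</code></pre>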
python|pandas
3
377,550
68,300,237
How to merge headers of multiple columns of a pandas dataframe in a new column if values of the multiple columns meet certain criteria
<p>I have a data frame of size 122400*92, out of which 8 columns represent flow, which may occur in different combinations. I want to merge the headers of all columns whose flow is &gt; 20 into a new column.</p> <p>For example:</p> <pre><code>A Flow: 52
B Flow: 46
C Flow: 0
D Flow: 54
E Flow: 34
F Flow: 0
G Flow: 12
H Flow: 0
</code></pre> <p>The new column will then give: <code>'A,B,D,E'</code>. I have used the code below, which seems to work for a small dataset but fails to work on a large dataset.</p> <pre><code>reqcol=['A FLOW','B FLOW','C FLOW','D FLOW','E FLOW','F FLOW','G FLOW','H FLOW']

arr=[]
for i in range(len(df)):
    arr1=[]
    for j in reqcol:
        if(df[j][i]&gt;20):
            arr1.append(j[0])
    arr.append(arr1)
df['Combination'] = arr
</code></pre> <p>Request your help</p>
<p>Let's define some test data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np

reqcol = ['A FLOW', 'B FLOW']
df = pd.DataFrame({'NAME': ['N1', 'N2', 'N3', 'N4'],
                   'A FLOW': [5, 80, 50, 40],
                   'B FLOW': [10, 0, 40, 10]})
</code></pre> <p>This gives you the following data frame:</p> <pre><code>&gt;&gt;&gt; df
  NAME  A FLOW  B FLOW
0   N1       5      10
1   N2      80       0
2   N3      50      40
3   N4      40      10
</code></pre> <p>First let's only look at the relevant columns:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df[reqcol]
   A FLOW  B FLOW
0       5      10
1      80       0
2      50      40
3      40      10
</code></pre> <p>Now check where the flow is bigger than 20 (only in the relevant columns):</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df[reqcol].where(lambda x: x &gt; 20)
   A FLOW  B FLOW
0     NaN     NaN
1    80.0     NaN
2    50.0    40.0
3    40.0     NaN
</code></pre> <p>We can now apply a function onto every row. The function will get a <code>Pandas.Series</code> object. First we eliminate all <code>NaN</code>'s from the <code>Series</code> by calling <code>.dropna()</code> and then we get all remaining keys by using <code>.keys()</code> (the original column names like <code>'A FLOW'</code>).</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; dfBigger20 = df[reqcol].where(lambda x: x &gt; 20)
&gt;&gt;&gt; dfBigger20.apply(lambda row: row.dropna().keys(), axis = 1)
0                      Index([], dtype='object')
1              Index(['A FLOW'], dtype='object')
2    Index(['A FLOW', 'B FLOW'], dtype='object')
3              Index(['A FLOW'], dtype='object')
dtype: object
</code></pre> <p>To make this more beautiful we can convert the <code>Index</code> object returned by <code>.keys()</code> to a regular list.</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; dfBigger20.apply(lambda row: list(row.dropna().keys()), axis = 1)
0                  []
1            [A FLOW]
2    [A FLOW, B FLOW]
3            [A FLOW]
dtype: object
</code></pre> <p>Now all in one line and saved into the original <code>df</code>:</p> <pre><code>&gt;&gt;&gt; df['Combinations'] = df[reqcol].where(lambda x: x&gt;20).apply(lambda row: list(row.dropna().keys()), axis=1)
&gt;&gt;&gt; df
  NAME  A FLOW  B FLOW      Combinations
0   N1       5      10                []
1   N2      80       0          [A FLOW]
2   N3      50      40  [A FLOW, B FLOW]
3   N4      40      10          [A FLOW]
</code></pre>
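<p>For 122400 rows, a fully vectorized variant may be worth trying (a common pandas idiom, sketched here with the names from above): dotting the boolean mask with the column names concatenates the matching headers per row without any Python-level loop:</p> <pre class="lang-py prettyprint-override"><code>mask = df[reqcol] &gt; 20
df['Combinations'] = mask.dot(mask.columns + ',').str.rstrip(',')
</code></pre>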
pandas
0
377,551
68,136,608
How to pad a list of NumPy arrays in a vectorized way?
<p>I am trying to find a vectorized way (or at least, better than using a loop) to create a three-dimensional NumPy array from a list of 2D NumPy arrays. Right now, I have a list L that looks something like:</p> <pre><code>L = [ np.array([[1,2,3], [4,5,6]]), np.array([[8,9,10]]), ...] </code></pre> <p>Each NumPy array has the same size for the second dimension (in the above case, it is 3). But the first dimension has different sizes.</p> <p>My goal is to create a 3D NumPy array M that incorporates the above data. I've been trying to use the <a href="https://numpy.org/devdocs/reference/generated/numpy.pad.html" rel="nofollow noreferrer">np.pad()</a> function, since I have a maximum size for the first dimension of each of my arrays, but it looks like it would only operate on the individual elements of my list. I could then do what I wanted using the function and looping over every array. However, I'd like to do this without a loop if possible, using a vectorized approach. Are there any techniques to do this?</p> <p>This question is related to <a href="https://stackoverflow.com/questions/16827060/how-to-make-a-list-of-numpy-arrays-all-have-the-same-shape">this one</a>, though I'm hoping to do this over my whole list at once.</p>
<p>First let's look at the common task of padding 1d arrays to a common size.</p> <pre><code>In [441]: alist = [np.ones((2,),int),np.zeros((1,),int)+2, np.zeros((3,),int)+3] In [442]: alist Out[442]: [array([1, 1]), array([2]), array([3, 3, 3])] </code></pre> <p>The obvious iterative approach:</p> <pre><code>In [443]: [np.hstack((arr, np.zeros((3-arr.shape[0]),int))) for arr in alist] Out[443]: [array([1, 1, 0]), array([2, 0, 0]), array([3, 3, 3])] In [444]: np.stack(_) Out[444]: array([[1, 1, 0], [2, 0, 0], [3, 3, 3]]) </code></pre> <p>A clever alternative. It still requires an iteration to determine sizes, but the rest is whole-array &quot;vectorization&quot;:</p> <pre><code>In [445]: sizes = [arr.shape[0] for arr in alist] In [446]: sizes Out[446]: [2, 1, 3] </code></pre> <p>Make the output array with the pad values:</p> <pre><code>In [448]: res = np.zeros((3,3),int) </code></pre> <p>Make a clever mask (@Divakar first proposed this)</p> <pre><code>In [449]: np.array(sizes)[:,None]&gt;np.arange(3) Out[449]: array([[ True, True, False], [ True, False, False], [ True, True, True]]) </code></pre> <p>then map the 'flattened' inputs to res:</p> <pre><code>In [450]: res[_]=np.hstack(alist) In [451]: res Out[451]: array([[1, 1, 0], [2, 0, 0], [3, 3, 3]]) </code></pre> <p>I think this process can be extended to your 2d=&gt;3d case. But it will take a bit of work. I tried doing it directly and found I was getting lost in applying the mask. That's why I decided to first lay out the 1d=&gt;2d case. There's enough thinking-outside-the-box that I have to work out the details fresh each time.</p> <h2>2d=&gt;3d</h2> <pre><code>In [457]: a2list = [np.ones((2,3),int),np.zeros((1,3),int)+2, np.zeros((3,3),int)+3] In [458]: [np.vstack((arr, np.zeros((3-arr.shape[0],arr.shape[1]),int))) for arr in a2list] Out[458]: [array([[1, 1, 1], [1, 1, 1], [0, 0, 0]]), array([[2, 2, 2], [0, 0, 0], [0, 0, 0]]), array([[3, 3, 3], [3, 3, 3], [3, 3, 3]])] In [459]: np.stack(_) Out[459]: array([[[1, 1, 1], [1, 1, 1], [0, 0, 0]], [[2, 2, 2], [0, 0, 0], [0, 0, 0]], [[3, 3, 3], [3, 3, 3], [3, 3, 3]]]) </code></pre> <p>Now for the 'vectorized' approach:</p> <pre><code>In [460]: sizes = [arr.shape[0] for arr in a2list] In [461]: sizes Out[461]: [2, 1, 3] In [462]: np.array(sizes)[:,None]&gt;np.arange(3) Out[462]: array([[ True, True, False], [ True, False, False], [ True, True, True]]) In [463]: res = np.zeros((3,3,3),int) </code></pre> <p>and the corresponding indices from the mask:</p> <pre><code>In [464]: I,J=np.nonzero(Out[462]) In [465]: I Out[465]: array([0, 0, 1, 2, 2, 2]) In [466]: J Out[466]: array([0, 1, 0, 0, 1, 2]) In [467]: res[I,J,:] = np.vstack(a2list) In [468]: res Out[468]: array([[[1, 1, 1], [1, 1, 1], [0, 0, 0]], [[2, 2, 2], [0, 0, 0], [0, 0, 0]], [[3, 3, 3], [3, 3, 3], [3, 3, 3]]]) </code></pre>
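<p>Consolidating the steps above into one helper (my own sketch, not part of the original answer; it assumes all arrays share the trailing dimensions and differ only in the first):</p> <pre><code>import numpy as np

def pad_stack(arrs, fill=0):
    # pad a list of arrays along axis 0 and stack them into one array
    n = max(a.shape[0] for a in arrs)
    out = np.full((len(arrs), n) + arrs[0].shape[1:], fill, dtype=arrs[0].dtype)
    mask = np.array([a.shape[0] for a in arrs])[:, None] &gt; np.arange(n)
    out[mask] = np.concatenate(arrs, axis=0)   # fill only the valid slots
    return out
</code></pre>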
numpy|vectorization
1
377,552
68,315,961
Bad performance of numpy slicing in function
<p>I have example code like</p> <pre><code>import numpy as np nbins = 1301 init = np.ones(nbins*2+1)*(-1) init[0] = 0 init[1::2] = 0 z0 = np.array(init, dtype=np.complex128, ndmin=1) initn = z0.view(np.float64) deltas = np.linspace(-10, 10, nbins) gsf = np.linspace(1, 2, nbins) def jacobian_mbes_test(Y, t): leny = len(Y) J = np.zeros((leny, leny)) J[0,0] = -10 J[0,1] = 0 J[0, 2::4] = gsf J[1,0] = 0 J[1,1] = -10 J[1,3::4] = gsf J[2::4, 0] = gsf*Y[4::4] J[3::4, 0] = gsf*Y[5::4] J[4::4, 0] = -4*gsf*Y[2::4] J[2::4, 1] = -gsf*Y[5::4] J[3::4, 1] = gsf*Y[4::4] J[4::4, 1] = -4*gsf*Y[3::4] J[(range(2, leny, 4), range(2, leny, 4))] = -0.8 J[(range(3, leny, 4), range(3, leny, 4))] = -0.8 J[(range(4, leny, 4), range(4, leny, 4))] = -0.001 J[(range(5, leny, 4), range(5, leny, 4))] = -0.001 J[(range(2, leny, 4), range(3, leny, 4))] = deltas J[(range(3, leny, 4), range(2, leny, 4))] = -deltas J[(range(2, leny, 4), range(4, leny, 4))] = gsf*Y[0] J[(range(2, leny, 4), range(5, leny, 4))] = -gsf*Y[1] J[(range(3, leny, 4), range(4, leny, 4))] = gsf*Y[1] J[(range(3, leny, 4), range(5, leny, 4))] = gsf*Y[0] J[(range(4, leny, 4), range(2, leny, 4))] = -4*gsf*Y[0] J[(range(4, leny, 4), range(3, leny, 4))] = -4*gsf*Y[1] return J </code></pre> <p>which computes a Jacobian for a set of differential equations using scipy.integrate.odeint.</p> <p>Unfortunately the performance of the function <code>jacobian_mbes_test</code> is not very good. <code>%timeit jacobian_mbes_test(initn, 0)</code> gives 12.2ms even though I tried to do everything efficiently with slicing(?).</p> <p>Changing the function to (since I guess the more complicated index assignment takes most time)</p> <pre><code>def jacobian_mbes_test2(Y, t): leny = len(Y) J = np.zeros((leny, leny)) J[0,0] = -10 J[0,1] = 0 J[0, 2::4] = gsf J[1,0] = 0 J[1,1] = -10 J[1,3::4] = gsf J[2::4, 0] = gsf*Y[4::4] J[3::4, 0] = gsf*Y[5::4] J[4::4, 0] = -4*gsf*Y[2::4] J[2::4, 1] = -gsf*Y[5::4] J[3::4, 1] = gsf*Y[4::4] J[4::4, 1] = -4*gsf*Y[3::4] return J </code></pre> <p>reduces this time to 4.6ms, which is still not great. The funny thing is that if I time these assignments outside of a function like</p> <pre><code>J = np.zeros((len(initn), len(initn))) %timeit J[4::4, 1] = -4*gsf*initn[3::4] </code></pre> <p>I get at most 10µs, so I would expect at most (!) something like 100µs for the second function. Is there some copying around in memory going on in this function I am not aware of?</p> <p>Is there some more efficient way to assign values to certain indices?</p>
<p>The problem comes from the <strong>matrix size</strong> (206 MiB). Indeed, the matrix is pretty big and (virtually) filled with zeros every time the function is called. Assuming all the values were physically written to memory in 12.2 ms, the throughput would be <code>5206**2*8/12.2e-3/1024**3 = 16.5 GiB/s</code>, which is pretty good for a <em>sequential</em> workload (especially on an average personal computer, which often barely reaches more than 25 GiB/s sequentially). In practice, the memory is not set to zero and most of the computation time comes from the management of <strong>virtual memory</strong>.</p> <p><code>np.zeros((leny, leny))</code> internally requests your operating system (OS) to reserve some space and initialize it with zero. Your OS will virtually allocate many <em>pages</em> (chunks of typically 4 KiB) and mark them to be set to zero later (delayed initialization). As a result, this process is very cheap. However, when <code>J[0,0] = -10</code> is executed later, it writes data into a <em>first-touched page</em> that needs to be initialized. This is called a <em>page fault</em>. This process is expensive (especially in sequential code) as it stops the execution of your process, finds some place in physical memory, fills a whole page, and then resumes your process, for <em>each</em> page! In your case, the many array cells to be initialized cause whole pages to be filled, which is much more costly than just filling the cells themselves.</p> <p><em>CPU caches</em> also matter a lot in this code, as page filling will be much slower when the computation is done in RAM. This happens when the amount of data is too big to fit in the last-level cache (typically 1-64 MiB).</p> <p><em>The memory access pattern</em> strongly impacts performance on quite big data structures (i.e. ones not fitting in CPU caches). Indeed, a random access pattern or strided accesses with a huge stride are really bad for performance, since the processor cannot easily predict the location of the RAM chunks to load ahead of time (not to mention the RAM latency is quite big). Contiguous accesses are much faster.</p> <p>All of this should be taken into account when you measure the time on simpler cases.</p> <p>One simple way to see the impact of virtual memory is to use <code>np.full((leny, leny), 0.0)</code> instead of <code>np.zeros((leny, leny))</code>. This is 3 times slower on my machine. All the pages need to be filled in that case. You can also measure the time needed to fill the <code>J</code> matrix multiple times: the first fill should be significantly slower (2.2 times slower on my machine).</p> <p>One solution is to allocate and initialize the array once and fill/recycle only some values, but this is a bit tricky to do. In the worst case, most of the matrix needs to be zeroed, which will be too slow in your case since it will be bound by your memory throughput.</p> <p>One simple solution is to <strong>use a sparse matrix</strong>. Sparse matrices can be created using the module <code>scipy.sparse</code>. They are often slower to compute with, but faster and much more compact if there are a lot of zeros (which is your case). 
Here is an example:</p> <pre><code>from scipy.sparse import lil_matrix def jacobian_mbes_test(Y, t): leny = len(Y) J = lil_matrix((leny, leny)) J[0,0] = -10 J[0,1] = 0 J[0, 2::4] = gsf J[1,0] = 0 J[1,1] = -10 J[1,3::4] = gsf J[2::4, 0] = gsf*Y[4::4] J[3::4, 0] = gsf*Y[5::4] J[4::4, 0] = -4*gsf*Y[2::4] J[2::4, 1] = -gsf*Y[5::4] J[3::4, 1] = gsf*Y[4::4] J[4::4, 1] = -4*gsf*Y[3::4] J[(range(2, leny, 4), range(2, leny, 4))] = -0.8 J[(range(3, leny, 4), range(3, leny, 4))] = -0.8 J[(range(4, leny, 4), range(4, leny, 4))] = -0.001 J[(range(5, leny, 4), range(5, leny, 4))] = -0.001 J[(range(2, leny, 4), range(3, leny, 4))] = deltas J[(range(3, leny, 4), range(2, leny, 4))] = -deltas J[(range(2, leny, 4), range(4, leny, 4))] = gsf*Y[0] J[(range(2, leny, 4), range(5, leny, 4))] = -gsf*Y[1] J[(range(3, leny, 4), range(4, leny, 4))] = gsf*Y[1] J[(range(3, leny, 4), range(5, leny, 4))] = gsf*Y[0] J[(range(4, leny, 4), range(2, leny, 4))] = -4*gsf*Y[0] J[(range(4, leny, 4), range(3, leny, 4))] = -4*gsf*Y[1] return J </code></pre> <p>This is 2 times faster on my machine. There are plenty of different types of sparse matrices, each adapted to some specific use cases; there is probably an even better type than LIL here.</p> <p>LIL sparse matrices are quite good for matrices with a dynamic structure; CSR is quite good for static ones. Thus, you can for example build a CSR matrix ahead of time, filled with NaN values at the locations where you plan to write some data, and then copy it and fill the new sparse matrix with the new values when needed. This is 10-15% faster on my machine. It is probably possible to reduce the time even more using a better access pattern.</p>
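<p>For completeness, here is a minimal sketch of the &quot;allocate once&quot; idea mentioned above (my own addition, under the assumption that every <code>Y</code>-dependent slot is rewritten on each call, so no stale values survive between calls):</p> <pre><code>J_buf = np.zeros((len(initn), len(initn)))   # pages are faulted in only once
J_buf[0, 0] = -10
J_buf[1, 1] = -10
J_buf[0, 2::4] = gsf
J_buf[1, 3::4] = gsf
# ... fill the remaining constant entries (diagonals, deltas) once here ...

def jacobian_recycled(Y, t):
    # only the Y-dependent entries are rewritten on each call
    # (the range-indexed Y-dependent assignments from the question go here too)
    J_buf[2::4, 0] = gsf * Y[4::4]
    J_buf[3::4, 0] = gsf * Y[5::4]
    J_buf[4::4, 0] = -4 * gsf * Y[2::4]
    J_buf[2::4, 1] = -gsf * Y[5::4]
    J_buf[3::4, 1] = gsf * Y[4::4]
    J_buf[4::4, 1] = -4 * gsf * Y[3::4]
    return J_buf
</code></pre>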
python|performance|numpy|slice|numpy-slicing
1
377,553
68,328,267
Why is tf.io.read_file not able to read from pathlib.Path object?
<p>I have an image path:</p> <pre><code>file_path = '/dir/sub_dir/my_image.jpg' </code></pre> <p>These both work fine:</p> <pre><code>open(file_path, 'r') tensorflow.io.read_file(file_path) </code></pre> <p>I instantiate a pathlib.Path object:</p> <pre><code>p = pathlib.Path(file_path) </code></pre> <p>These both work fine as well:</p> <pre><code>open(p, 'r') tensorflow.io.read_file(str(p)) </code></pre> <p>But this does not work:</p> <pre><code>tensorflow.io.read_file(p) </code></pre> <p>It returns:</p> <pre><code>ValueError: Attempt to convert a value (PosixPath('/dir/sub_dir/my_image.jpg')) with an unsupported type (&lt;class 'pathlib.PosixPath'&gt;) to a Tensor. </code></pre> <p>Why is <code>tensorflow.io.read_file</code> not able to read a <code>pathlib.Path</code> object?</p> <p>I understand why <code>tensorflow.io.read_file(str(p))</code> works fine. What I do not understand is how <code>open</code> can work with a path-like object while <code>tensorflow.io.read_file</code> cannot.</p> <p><a href="https://docs.python.org/3/glossary.html#term-path-like-object" rel="nofollow noreferrer">https://docs.python.org/3/glossary.html#term-path-like-object</a></p> <p><a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer">https://docs.python.org/3/library/functions.html#open</a></p>
<p>That's because tensorflow.io.read_file expects a string as the file path, not a PosixPath object, which is what you're passing it from pathlib.</p> <p>When you use str(path), you're converting the PosixPath to a string, and that's why it works fine in that case.</p> <p>You can find more details in the documentation.</p> <p>TensorFlow: <a href="https://www.tensorflow.org/api_docs/python/tf/io/read_file" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/io/read_file</a></p> <p>PathLib: <a href="https://docs.python.org/3/library/pathlib.html" rel="nofollow noreferrer">https://docs.python.org/3/library/pathlib.html</a></p>
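<p>To add some context beyond the original answer: <code>open()</code> accepts any object implementing the <code>os.PathLike</code> protocol (it calls <code>__fspath__</code> internally), while TensorFlow's op wrapper only converts strings and tensors. A small sketch of a workaround that accepts arbitrary path-like objects:</p> <pre><code>import os
import pathlib
import tensorflow as tf

p = pathlib.Path('/dir/sub_dir/my_image.jpg')
contents = tf.io.read_file(os.fspath(p))   # os.fspath turns any path-like object into a str
</code></pre>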
tensorflow|pathlib
0
377,554
68,058,307
Problem in reading text file with negative numbers
<p><strong>Text File:</strong> I have a text file containing more than 87,000 data points. The format of the text file is as follows:</p> <ul> <li>X Coordinate ----- Y Coordinate ------- Parameter 1 ------ Parameter 2--------</li> <li>2.744596610E-02 1.247197202E+00 7.121462841E-03 2.467938066E-05</li> <li>2.732558411E-02 1.242196291E+00 1.365028508E-02 6.262368697E-05</li> <li>2.713870635E-02 1.227254209E+00 <strong>1.958976965E-03-3.179617352E-06</strong></li> </ul> <p>There is <strong>no space</strong> between the two numbers highlighted in bold because of the leading <strong>- (minus)</strong> sign, because of which the resulting csv/pandas dataframe results in something like below.</p> <p><strong>Output:</strong></p> <pre><code>| X Coordinate | Y Coordinate | Parameter 1 | Parameter 2 | | -------------- | -------------- | --------------- | ------------ | | 2.744596610E-02 | 1.247197202E+00 | 7.121462841E-03 | 2.467938066E-05 | | 2.732558411E-02 | 1.242196291E+00 | 1.365028508E-02 | 6.262368697E-05 | | 2.713870635E-02 | 1.227254209E+00 | 1.958976965E-03-3.179617352E-06| | </code></pre> <p><strong>Required:</strong></p> <pre><code>| X Coordinate | Y Coordinate | Parameter 1 | Parameter 2 | | -------------- | -------------- | --------------- | ------------ | | 2.744596610E-02 | 1.247197202E+00 | 7.121462841E-03 | 2.467938066E-05 | | 2.732558411E-02 | 1.242196291E+00 | 1.365028508E-02 | 6.262368697E-05 | | 2.713870635E-02 | 1.227254209E+00 | 1.958976965E-03 |-3.179617352E-06 | </code></pre> <p>I am comfortable with python/pandas, so any of the programming techniques would be of great help.</p>
<p>A <a href="https://regex101.com/r/8XNhiI/1" rel="nofollow noreferrer">regex</a> can put spaces in there:</p> <pre><code>import re with open(&quot;current.txt&quot;) as fh, open(&quot;new.txt&quot;, &quot;w&quot;) as gh: # skip the first line fh.readline() # for other lines.. for line in fh: gh.write(re.sub(r&quot;(E[+-]\d+)(\S)(\d|\.)&quot;, r&quot;\1 -\3&quot;, line)) </code></pre> <p>Then</p> <pre><code># you can include the header, I didn't paste df = pd.read_csv(&quot;new.txt&quot;, sep=&quot; &quot;, header=None) </code></pre> <p>gives me</p> <pre><code>&gt;&gt;&gt; df 0 1 2 3 0 0.027446 1.247197 0.007121 0.000025 1 0.027326 1.242196 -0.013650 0.000063 2 0.027139 -1.227254 0.001959 -0.000003 </code></pre>
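<p>An alternative worth trying (my addition, not part of the original answer): output like this is typically produced by a fixed-width writer, so pandas' fixed-width reader may handle it directly without any preprocessing. The column widths below are my guess from the sample and would need checking against the real file:</p> <pre><code>import pandas as pd

df = pd.read_fwf('current.txt', widths=[16, 16, 16, 16], skiprows=1,
                 names=['X Coordinate', 'Y Coordinate', 'Parameter 1', 'Parameter 2'])
</code></pre>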
python|pandas|dataframe|csv
1
377,555
68,141,498
Pandas dropping duplicates doesn't drop last duplicate
<p>Setting keep=False should remove all duplicates, but if I run my function it still returns a duplicate of the previous row.</p> <pre><code>def date_to_csv(): import pandas as pd from random import randint df = pd.read_csv(&quot;test.csv&quot;) df = df.append({'Date': datetime.date.today(), 'Price': randint(1,100)}, ignore_index=True) result_df = df.drop_duplicates(keep=False) result_df.to_csv('test.csv', mode='a', index=False, header=None) </code></pre> <p>If my csv file is empty with only the column headers 'Date' and 'Price' and I run my function 3 times it returns this in csv:</p> <pre><code>Date,Price 2021-06-26,74 2021-06-26,74 2021-06-26,51 2021-06-26,51 2021-06-26,13 </code></pre> <p>When I expect it to return something like this:</p> <pre><code>Date,Price 2021-06-26,74 2021-06-26,51 2021-06-26,13 </code></pre>
<p>Because of <code>mode='a'</code> you can't remove previous duplicates after several executions of your function. Here is code for your expected behaviour:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from datetime import datetime from random import randint def date_to_csv(): df = pd.read_csv('test.csv') df = df.append({'Date': str(datetime.now().date()), 'Price': randint(1, 100)}, ignore_index=True) df.to_csv('test.csv', index=False) </code></pre>
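<p>Worth noting (my addition, not in the original answer): <code>DataFrame.append</code> has since been deprecated in pandas, so a forward-compatible sketch of the same fix would use <code>pd.concat</code>:</p> <pre><code>new_row = pd.DataFrame([{'Date': str(datetime.now().date()),
                         'Price': randint(1, 100)}])
df = pd.concat([df, new_row], ignore_index=True)
df.to_csv('test.csv', index=False)
</code></pre>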
python|pandas|dataframe|csv|duplicates
2
377,556
68,108,997
Start function from a specific row
<p>I want to know how I can run a method, a function, or a loop over just the <code>non-NaN</code> rows. I don't want to dropna the dataframe and reset the index. For now, I am using the AvgHigh function, but it considers the <code>NaN</code> rows too. Also, can the suggested method be used with both Series and arrays? If not, please suggest one for each. Thanks in advance.</p> <h1>Edit</h1> <pre><code>def AvgHigh(src, val) : dat_list = [] last_src = np.nan # init variable that keeps the prev iteration value for a in range(len(src)) : if src[a] &gt; val : dat_list.append(src[a]) # yield src[a] last_src = src[a] # update prev iteration value (for next iteration) elif (src[a] &lt;= val) and (a == 0) : dat_list.append(np.nan) # yield np.nan elif (src[a] &lt;= val) and (a != 0) : dat_list.append(last_src) # yield src[a-1] return dat_list df1['high_r'] = AvgHigh(df1['Values'], 14020) </code></pre> <p><a href="https://i.stack.imgur.com/MLD6A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MLD6A.png" alt="Image" /></a></p>
<p>This should get the high_r column you are looking for. I added in a check on the np.nan values.</p> <pre><code>import numpy as np import pandas as pd df1 = pd.DataFrame({'values': [np.nan, np.nan,np.nan, np.nan,np.nan, 14018,14022,14023,14021,14020,14014]}) def AvgHigh(src, val) : dat_list = [] last_src = np.nan # init variable that keeps the prev iteration value for a in range(len(src)): if not np.isnan(src[a]): ##&lt;&lt;&lt;&lt;&lt;&lt;New if statement if src[a] &gt; val : dat_list.append(src[a]) # yield src[a] last_src = src[a] # update prev iteration value (for next iteration) elif (src[a] &lt;= val) and (a == 0) : dat_list.append(np.nan) # yield np.nan elif (src[a] &lt;= val) and (a != 0) : dat_list.append(last_src) # yield src[a-1] else: ##&lt;&lt;&lt;&lt;&lt;&lt;New else statement dat_list.append(np.nan) return dat_list df1['high_r'] = AvgHigh(df1['values'], 14020) df1 values high_r 0 NaN NaN 1 NaN NaN 2 NaN NaN 3 NaN NaN 4 NaN NaN 5 14018.0 NaN 6 14022.0 14022.0 7 14023.0 14023.0 8 14021.0 14021.0 9 14020.0 14021.0 10 14014.0 14021.0 </code></pre>
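<p>A vectorized sketch that reproduces the same <code>high_r</code> column without the Python loop (my addition; it relies on <code>ffill</code> carrying the last above-threshold value forward, then re-masking the rows that were <code>NaN</code> in the input):</p> <pre><code>s = df1['values'].where(df1['values'] &gt; 14020)      # keep only values above the threshold
df1['high_r'] = s.ffill().where(df1['values'].notna())
</code></pre>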
python|pandas|dataframe|numpy|nan
1
377,557
68,385,980
Using a Data Frame Column as start & another as end for range to calculate mean for another dataframe column
<p>I have 2 dataframes, 1 that contains start and end row ids and another that contains the dataframe where I want to calculate the mean for all rows between those coordinates.</p> <p>First dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>Exon region start (bp)</th> <th>Exon region end (bp)</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>577</td> <td>647</td> </tr> <tr> <td>1</td> <td>648</td> <td>1601</td> </tr> <tr> <td>2</td> <td>1602</td> <td>1670</td> </tr> <tr> <td>3</td> <td>1671</td> <td>3229</td> </tr> <tr> <td>4</td> <td>3230</td> <td>3304</td> </tr> </tbody> </table> </div> <p>Second Dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>chrom</th> <th>pos</th> <th>mean</th> <th>median</th> <th>over_1</th> <th>over_5</th> <th>over_10</th> <th>over_15</th> <th>over_20</th> <th>over_25</th> <th>over_30</th> <th>over_50</th> <th>over_100</th> <th>average_exon_coverage</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>1</td> <td>12141</td> <td>0.029005</td> <td>0</td> <td>0.021939</td> <td>0.000105</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> <tr> <td>1</td> <td>1</td> <td>12142</td> <td>0.029216</td> <td>0</td> <td>0.021622</td> <td>0.000105</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> <td>0.0</td> </tr> </tbody> </table> </div> <p>I have managed to create a column in the new dataframe 'average_exon_coverage' and tried to calculate the mean for the start and end positions but I am not sure what I am doing wrong, my code is below:</p> <pre><code>meanList = [] for x in range(exon['Exon region start (bp)'].astype(int), exon['Exon region end (bp)'].astype(int)): meanList.append(exomes_avg_mean['mean']) exomes_avg_mean['average exon coverage'] = numpy.mean(meanList) meanList=[] </code></pre> <p>I want to take the first column as start and the second column as end and keep calculating the mean all coordinates between them and put them in the column I have created.</p> <p>Thanks.</p>
<p>Consider the first dataframe (the ranges) as <code>dfRange</code> and the second dataframe (the data) as <code>dfData</code>.</p> <p>Step 1 - Find the shape of <code>dfRange</code>; from the shape you can get the maximum number of rows.</p> <p>Step 2 - Using a for loop</p> <pre><code>for rowNumber in range(maxRows): </code></pre> <p>you can get each row of <code>dfRange</code> and its corresponding start and end values: for any row, <code>dfRange[rowNumber][0]</code> gives the <strong>Exon region start</strong> and <code>dfRange[rowNumber][1]</code> gives the <strong>Exon region end</strong>.</p> <p>Step 3 - Slice <code>tempDf = dfData[start:end+1]</code>.<br> Step 4 - Sum up and take the mean of <code>tempDf</code> along whatever axis you want.<br> Step 5 - Store the result wherever you want.<br> Step 6 - Loop back for the other rows.</p> <p>Otherwise, instead of using the shape, you can iterate directly with</p> <pre><code>for index, row in dfRange.iterrows(): </code></pre>
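<p>Putting those steps together as a runnable sketch (my addition; column names are taken from the question, and I assume the start/end values are positional row indices into the second frame — if they refer to the <code>pos</code> column instead, filter on that column):</p> <pre><code>means = []
for _, row in dfRange.iterrows():
    start = int(row['Exon region start (bp)'])
    end = int(row['Exon region end (bp)'])
    tempDf = dfData.iloc[start:end + 1]          # rows between start and end
    means.append(tempDf['mean'].mean())
dfRange['average_exon_coverage'] = means
</code></pre>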
python|python-3.x|pandas|numpy
1
377,558
68,088,768
Sample and group by, and pivot time series data
<p>I'm struggling to handle a complex (imho) operation on time series data.</p> <p>I have a time series data set and would like to break it into non-overlapping, pivoted chunks, grouped by customer. It is organized by customer, year, and value. For the purposes of this toy example, I am trying to break it out into a simple forecast of the next 3 months.</p> <pre><code>df = pd.DataFrame({'Customer': {0: 'a', 1: 'a', 2: 'a', 3: 'a', 4: 'a', 5: 'a', 6: 'a', 7: 'b', 8: 'b', 9: 'b'}, 'Year': {0: 2020, 1: 2020, 2: 2020, 3: 2020, 4: 2020, 5: 2021, 6: 2021, 7: 2020, 8: 2020, 9: 2020}, 'Month': {0: 8, 1: 9, 2: 10, 3: 11, 4: 12, 5: 1, 6: 2, 7: 1, 8: 2, 9: 3}, 'Value': {0: 5.2, 1: 2.2, 2: 1.7, 3: 9.0, 4: 5.5, 5: 2.5, 6: 1.9, 7: 4.5, 8: 2.9, 9: 3.1}}) </code></pre> <p>My goal is to create a dataframe where each row contains non-overlapping data in 3-month pivoted increments, so each row has the 3 &quot;value&quot; data points upcoming from that point in time. I'd also like this data to include the most recent data, so if there is an odd amount of data, the leftover oldest data is dropped (see customer a).</p> <pre><code>| Customer | Year | Month | Month1 | Month2 | Month3 | | b | 2020 | 1 | 4.5 | 2.9 | 3.1 | | a | 2020 | 9 | 2.2 | 1.7 | 9.0 | | a | 2020 | 12 | 5.5 | 2.5 | 1.9 | </code></pre> <p>Much appreciated.</p>
<p>There might be a better way to do this, but this will give you the output you want:</p> <p>First we add a <code>Customer_chunk</code> column to give an ID to rows that are members of the same chunk, and we remove the extra rows.</p> <pre><code>df[&quot;Customer_chunk&quot;] = (df[::-1].groupby(&quot;Customer&quot;).cumcount()) // 3 df = df.groupby([&quot;Customer&quot;, &quot;Customer_chunk&quot;]).filter(lambda group: len(group) == 3) </code></pre> <p>Then we group by Customer and Customer_chunk to generate each column of the desired output.</p> <pre><code>df_grouped = df.groupby([&quot;Customer&quot;, &quot;Customer_chunk&quot;]) colYear = df_grouped[&quot;Year&quot;].first() colMonth = df_grouped[&quot;Month&quot;].first() colMonth1 = df_grouped[&quot;Value&quot;].first() colMonth2 = df_grouped[&quot;Value&quot;].nth(1) colMonth3 = df_grouped[&quot;Value&quot;].last() </code></pre> <p>And finally we create the output by merging every column.</p> <pre><code>df_output = pd.concat([colYear, colMonth, colMonth1, colMonth2, colMonth3], axis=1).reset_index().drop(&quot;Customer_chunk&quot;, axis=1) </code></pre> <p>Some steps feel a bit clunky, there's probably room for improvement in this code but it shouldn't impact performance too much.</p>
pandas|time-series
1
377,559
68,324,849
UNIX conversion to datetime either mismatch or returns 1970-01-01
<p>I'm trying to convert a Pandas DataFrame column from UNIX to Datetime, but I either get a mismatch error or the new dates are all 1970-01-01.</p> <p>Here is tail sample of the list:</p> <blockquote> <p>ds y</p> <p>86 1625616000000 34149.989815</p> <p>87 1625702400000 33932.254638</p> <p>88 1625788800000 32933.578199</p> <p>89 1625875200000 33971.297750</p> <p>90 1625884385000 33868.766626</p> </blockquote> <p>When I look at how my UNIX looks like in datetime:</p> <pre><code>mydatetime = datetime.fromtimestamp(1618185600000 // 1000, tz=tzutc()) print('mydatetime',mydatetime) </code></pre> <p>I get:</p> <blockquote> <p>mydatetime 2021-04-12 00:00:00+00:00</p> </blockquote> <p>So when I use the conversion function:</p> <pre><code>df2 = pd.to_datetime(df1['ds'].astype(str), format='%Y-%m-%d %H:%M:%S+%f:%Z') </code></pre> <p>I get:</p> <blockquote> <p>ValueError: time data '1618185600000' does not match format '%Y-%m-%d %H:%M:%S+%f:%Z' (match)</p> </blockquote> <p>But, when I use the lazy road:</p> <pre><code>df2 = pd.to_datetime(df1['ds'], unit='ns') </code></pre> <p>The results are:</p> <blockquote> <p>86 1970-01-01 00:27:05.616000</p> <p>87 1970-01-01 00:27:05.702400</p> <p>88 1970-01-01 00:27:05.788800</p> <p>89 1970-01-01 00:27:05.875200</p> <p>90 1970-01-01 00:27:05.886197</p> <p>Name: ds, type: datetime64[ns]</p> </blockquote>
<p>Use <code>pd.Timestamp</code> to convert to datetime:</p> <pre><code>&gt;&gt;&gt; df['ds'].mul(1e6).apply(pd.Timestamp) 0 2021-07-07 00:00:00 1 2021-07-08 00:00:00 2 2021-07-09 00:00:00 3 2021-07-10 00:00:00 4 2021-07-10 02:33:05 Name: ds, dtype: datetime64[ns] </code></pre> <p>Or suggested by @HenryEcker:</p> <pre><code>&gt;&gt;&gt; pd.to_datetime(df['ds'], unit='ms') 0 2021-07-07 00:00:00 1 2021-07-08 00:00:00 2 2021-07-09 00:00:00 3 2021-07-10 00:00:00 4 2021-07-10 02:33:05 Name: ds, dtype: datetime64[ns] </code></pre>
python|pandas|dataframe
2
377,560
68,210,907
Calculate the cumulative sum of multiplying each element of one array by all the elements of a second array
<p>I need to efficiently calculate the running sum of multiplying the elements of one array by all the elements of a second array. It is probably easiest to explain what I am trying to do with code:</p> <pre><code>import time import numpy as np arr1 = np.random.uniform(size=1000000) arr2 = np.array([0.1, 0.2, 0.3, 0.35, 0.5, 0.4, 0.38, 0.2, 0.15]) result = np.zeros(len(arr1)) start_time = time.time() for i in range(0, len(arr1) - len(arr2)): for j in range (0, len(arr2)): result[i+j] += arr1[i] * arr2[j] print(&quot;--- %s seconds ---&quot; % (time.time() - start_time)) </code></pre> <p>My <code>arr1</code> array will typically be big, so I would like this to run as fast as possible, but right now it is too slow (~5s on my computer). Is there a more efficient way of doing this in Python?</p>
<p>In general, a running sum of products over a sliding window is called a <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow noreferrer">convolution</a>, which is implemented in numpy. Your definition differs subtly at the end, but this can be fixed.</p> <pre><code>result = np.convolve(arr1, arr2)[:len(arr1)] diff = len(arr1) - len(arr2) for k in range(diff, len(arr1)): # i = k - j # 0 &lt;= i &lt; diff gives k - diff &lt; j &lt;= k # 0 &lt;= j &lt; len(arr2) lo = max(0, 1 + k - diff) hi = min(1 + k, len(arr2)) result[k] = np.dot(arr1[k-hi+1:k-lo+1][::-1], arr2[lo:hi]) </code></pre>
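<p>A quick sanity check of this hybrid approach against the reference double loop (my addition, on a small input):</p> <pre><code>import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(size=100)
b = rng.uniform(size=9)

ref = np.zeros(len(a))
for i in range(0, len(a) - len(b)):
    for j in range(0, len(b)):
        ref[i + j] += a[i] * b[j]

res = np.convolve(a, b)[:len(a)]
diff = len(a) - len(b)
for k in range(diff, len(a)):
    lo = max(0, 1 + k - diff)
    hi = min(1 + k, len(b))
    res[k] = np.dot(a[k - hi + 1:k - lo + 1][::-1], b[lo:hi])

assert np.allclose(ref, res)
</code></pre>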
python|arrays|numpy
1
377,561
68,041,243
How to filter text data containing key words from an unnamed column excel with python pandas and print to txt file
<p>I'm pretty new to this so please bear with me.</p> <p>I have an excel sheet that contains certain text strings I would like to extract and copy to a text file - I have been doing this manually for a long time and I'm sick of it.</p> <p>So my plan was to write a script that would extract this data from the excel sheet and create a txt file.</p> <p>This is how far I have gotten:</p> <pre><code>#EXTRACT CLIPID FROM XCEL SHEET import pandas as pd from tkinter import Tk # from tkinter import Tk for Python 3.x from tkinter.filedialog import askopenfilename Tk().withdraw() filename = askopenfilename() data = pd.read_excel (filename) df = pd.DataFrame(data) print (df) </code></pre> <p>The data I want is located in column A1, but is not always in the same row. There are 3 separate keywords I want to look for:</p> <ol> <li>&quot;POP&quot;</li> <li>&quot;TVS&quot;</li> <li>&quot;PLANET&quot;</li> </ol> <p>The strings look something like this:</p> <p>Channel2021_1_DRU_POP_15s_16062021 Channel2021_2_FANT_POP_15s_16062021 Channel2021_3_ITA_POP_15s_16062021</p> <p>Channel2021_1_DRU_TVS_15s_16062021 Channel2021_2_FANT_TVS_15s_16062021 Channel2021_3_ITA_TVS_15s_16062021</p> <p>Channel2021_1_DRU_PLANET_15s_16062021 Channel2021_2_FANT_PLANET_15s_16062021 Channel2021_3_ITA_PLANET_15s_16062021</p> <p>This is the form of the extracted data I would like to write to a txt file.</p> <p>So in essence I want to search column A1 for strings containing POP and print, then strings containing TVS and print, and lastly strings containing PLANET and print.</p> <p>Any help would be greatly appreciated!</p> <p>Thank you!</p> <p>Dusan</p> <p>PS: Here is the output of <code>df</code>:</p> <pre><code> Unnamed: 0 ... Unnamed: 16 0 NaN ... NaN 1 NaN ... NaN 2 Spot 1 15 s ... NaN 3 NaN ... Indicazioni 4 106290.01 ... dire tutto + grafica ITALIA 5 138575.01 ... NaN 6 142956.01 ... NaN 7 85146.01 ... NaN 8 Eurospin2021_16bis_1_POP_ITA_15s_24_06_2021 ... NaN 9 Eurospin2021_16bis_1_TVS_ITA_15s_24_06_2021 ... NaN 10 Eurospin2021_16bis_1_PLANET_ITA_15s_24_06_2021 ... NaN 11 NaN ... NaN 12 NaN ... NaN 13 Spot 2 15 s ... NaN 14 NaN ... Indicazioni 15 164171.01 ... dire tutto + grafica ITALIA 16 9003309.01 ... NaN 17 88310.01 ... NaN 18 Eurospin2021_16bis_2_POP_ITA_15s_24_06_2021 ... NaN 19 Eurospin2021_16bis_2_TVS_ITA_15s_24_06_2021 ... NaN 20 Eurospin2021_16bis_2_PLANET_ITA_15s_24_06_2021 ... NaN 21 NaN ... NaN 22 NaN ... NaN 23 Spot 3 15 s ... NaN 24 NaN ... Istruzione 25 800214.01 ... dire tutto + dire al kg dopo il prezzo per la ... 26 9001392.01 ... NaN 27 9002306.01 ... NaN 28 147804.01 ... NaN 29 Eurospin2021_16bis_3_POP_DRUZ_15s_24_06_2021 ... NaN 30 Eurospin2021_16bis_3_TVS_DRUZ_15s_24_06_2021 ... NaN 31 Eurospin2021_16bis_3_PLANET_DRUZ_15s_24_06_2021 ... NaN [32 rows x 17 columns] </code></pre>
<p>Here's a proposal if you're still looking for a solution:</p> <p>With the sample frame</p> <pre><code>df = pd.DataFrame({ 0: [ 'Channel2021_1_DRU_POP_15s_16062021', 'Channel2021_2_FANT_POP_15s_16062021', 'Channel2021_3_ITA_POP_15s_16062021', 1., 2., 'Channel2021_1_DRU_TVS_15s_16062021', 'Channel2021_2_FANT_TVS_15s_16062021', 'Channel2021_3_ITA_TVS_15s_16062021', 3., 4., 'Channel2021_1_DRU_PLANET_15s_16062021', 'Channel2021_2_FANT_PLANET_15s_16062021', 'Channel2021_3_ITA_PLANET_15s_16062021', 5. ], 1: '...', }) </code></pre> <pre><code> 0 1 0 Channel2021_1_DRU_POP_15s_16062021 ... 1 Channel2021_2_FANT_POP_15s_16062021 ... 2 Channel2021_3_ITA_POP_15s_16062021 ... 3 1 ... 4 2 ... 5 Channel2021_1_DRU_TVS_15s_16062021 ... 6 Channel2021_2_FANT_TVS_15s_16062021 ... 7 Channel2021_3_ITA_TVS_15s_16062021 ... 8 3 ... 9 4 ... 10 Channel2021_1_DRU_PLANET_15s_16062021 ... 11 Channel2021_2_FANT_PLANET_15s_16062021 ... 12 Channel2021_3_ITA_PLANET_15s_16062021 ... 13 5 ... </code></pre> <p>this</p> <pre><code>selection = df.iloc[:, 0].str.contains(r'POP|TVS|PLANET', na=False) print(df.iloc[:, 0][selection]) df.iloc[:, 0][selection].to_csv('items.txt', index=False, header=False) </code></pre> <p>prints you the desired entries</p> <pre><code>0 Channel2021_1_DRU_POP_15s_16062021 1 Channel2021_2_FANT_POP_15s_16062021 2 Channel2021_3_ITA_POP_15s_16062021 5 Channel2021_1_DRU_TVS_15s_16062021 6 Channel2021_2_FANT_TVS_15s_16062021 7 Channel2021_3_ITA_TVS_15s_16062021 10 Channel2021_1_DRU_PLANET_15s_16062021 11 Channel2021_2_FANT_PLANET_15s_16062021 12 Channel2021_3_ITA_PLANET_15s_16062021 </code></pre> <p>and writes them into a file <code>items.txt</code></p> <pre><code>Channel2021_1_DRU_POP_15s_16062021 Channel2021_2_FANT_POP_15s_16062021 Channel2021_3_ITA_POP_15s_16062021 Channel2021_1_DRU_TVS_15s_16062021 Channel2021_2_FANT_TVS_15s_16062021 Channel2021_3_ITA_TVS_15s_16062021 Channel2021_1_DRU_PLANET_15s_16062021 Channel2021_2_FANT_PLANET_15s_16062021 Channel2021_3_ITA_PLANET_15s_16062021 </code></pre> <p>Since I'm unsure about the column names I have only used index-based selection (<code>.iloc</code>).</p> <p>If you want the results in the order you've given then this</p> <pre><code>df = pd.concat([ df.iloc[:, 0][df.iloc[:, 0].str.contains(tag, na=False)] for tag in ('POP', 'TVS', 'PLANET') ]) </code></pre> <p>should work (just print <code>df</code> afterwards or write it to a file).</p> <p>Btw.: This is too complicated</p> <pre><code>data = pd.read_excel (filename) df = pd.DataFrame(data) </code></pre> <p>You only need <code>pd.read_excel</code>:</p> <pre><code>df = pd.read_excel(filename) </code></pre> <p><strong>EDIT</strong>: Regarding the comments:</p> <pre><code>with open('items.txt', 'wt') as file: file.write('The following has been sent:') for tag in ('POP', 'TVS', 'PLANET'): file.write(f'\n{tag}:\n') items = df.iloc[:, 0][df.iloc[:, 0].str.contains(tag, na=False)].to_list() file.write('\n'.join(items)) </code></pre>
python|excel|pandas|txt
0
377,562
68,134,962
to_csv function with delimiter as pipe giving issues
<p>I am running this</p> <pre><code>data = [['tom', 10], ['nick,paul', 15], ['juli', 14]] df = pd.DataFrame(data, columns=['Name', 'Age']) delimiter='|' df.to_csv('C:\\Users\\mpaul\\workspace\\out\\tre\\test.csv', index=False, date_format='%m-%d-%Y', sep=delimiter) </code></pre> <p>Output</p> <p><a href="https://i.stack.imgur.com/EQGfs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EQGfs.png" alt="enter image description here" /></a></p> <p>I want</p> <p><a href="https://i.stack.imgur.com/fWK6Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fWK6Z.png" alt="enter image description here" /></a></p> <p>What am I doing wrong?</p>
<p>The problem here is that csv files are interpreted with commas as the separator (as the name suggests: <em>comma-separated values</em>). The <code>to_csv()</code> method just writes down your data with the separator specified. If you open the file with a plain text editor you may find the result you want. I've tested your code and got the same result, but when I open the output with Notepad I see this</p> <p><a href="https://i.stack.imgur.com/X6s2r.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X6s2r.png" alt="Output with the Notepad" /></a></p>
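<p>A quick round-trip check (my addition): reading the file back with the same separator restores the frame, which confirms the pipe really is in the file and only the spreadsheet preview is misreading it:</p> <pre><code>df2 = pd.read_csv('C:\\Users\\mpaul\\workspace\\out\\tre\\test.csv', sep='|')
print(df2)
</code></pre>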
python|pandas
0
377,563
68,396,903
how can I convert a position (2nd, 3rd or 4th etc.) to an index ((0,0) for 1st position, (0,1) for 2nd etc.) in a 2D array in python numpy?
<pre><code> import numpy as np entries = [] # take user input row = int(input(&quot;Enter number of rows: &quot;)) column = int(input(&quot;Enter number of columns: &quot;)) print(&quot;Enter Values&quot;) # get values for i in range(row): a = [] for j in range(column): a.append(int(input())) entries.append(a) # convert values into matrix (2-dimensional array) matrix = np.array(entries).reshape(row, column) print(&quot;Matrix is :&quot;) print(matrix) </code></pre> <p>I want to know the logic: if the user says they want to change the element at the nth position, how do I convert that position to an index to perform the update operation?</p> <p>Let's say with the above code the user creates this array</p> <pre><code> [[10 24 32] [45 56 62]] </code></pre> <p>and after that they want to change the 4th-position element just by entering the position... how do I convert that position into an index?</p>
<p>Try with simple maths (subtract 1 first, since positions count from 1 but indices from 0, then divide and take the remainder by the number of columns):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np x = [[10, 24, 32], [45, 56, 62]] x = np.array(x) pos, val = 4, -1 idx = pos - 1 # positions count from 1, indices from 0 x[idx // x.shape[1], idx % x.shape[1]] = val print(x) </code></pre> <p>Outputs:</p> <pre class="lang-py prettyprint-override"><code>[[10 24 32] [-1 56 62]] </code></pre>
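<p>Alternatively (my addition, not in the original answer), NumPy ships a helper that does exactly this conversion and also generalizes to higher dimensions:</p> <pre><code>row, col = np.unravel_index(pos - 1, x.shape)   # pos counts from 1
x[row, col] = val
</code></pre>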
python|numpy
0
377,564
68,149,153
How to calculate the average hours between two different times in Python
<p>Let's say I have the following times.</p> <pre><code>Time_1 Time_2 10:08:00 10:08:00 11:00:00 12:00:00 12:30:00 14:30:00 </code></pre> <p>I would like to calculate the average time between them and would like to get the following result.</p> <pre><code>Time_1 Time_2 Average_time 10:08:00 10:08:00 10:08:00 11:00:00 12:00:00 11:30:00 12:30:00 14:30:00 13:15:00 </code></pre> <p>Can anyone help with this?</p>
<p>You can use timedelta; this code calculates the average of two times.</p> <pre><code>import pandas as pd time1=&quot;11:00:00&quot; time2=&quot;12:00:00&quot; t1 = pd.Timedelta(time1) t2 = pd.Timedelta(time2) avg=(t1+t2)/2 d = {'time1':[t1] , 'time2':[t2], 'average':[avg]} df = pd.DataFrame(d) print(df.to_string()) </code></pre> <p>output:</p> <pre><code> time1 time2 average 0 11:00:00 12:00:00 11:30:00 </code></pre>
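<p>Applied column-wise to the frame from the question in one go (my addition; this assumes the times are stored as strings, which <code>pd.to_timedelta</code> parses directly):</p> <pre><code>df = pd.DataFrame({'Time_1': ['10:08:00', '11:00:00', '12:30:00'],
                   'Time_2': ['10:08:00', '12:00:00', '14:30:00']})
df['Average_time'] = (pd.to_timedelta(df['Time_1']) + pd.to_timedelta(df['Time_2'])) / 2
</code></pre>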
python|pandas|datetime
3
377,565
68,384,568
Possible to retain line breaks in pandas read_html when making data frame from html table?
<p>I'm trying to convert a scraped HTML table into a dataframe in python using pandas <code>read_html</code>. The problem is that <code>read_html</code> brings in a column of my data without breaks, which makes the content of those cells hard to parse. In the original HTML, each &quot;word&quot; in the column is separated by a break. Is there a way to keep this formatting or otherwise keep the &quot;words&quot; separated when converting to a data frame?</p> <pre><code>import requests from bs4 import BeautifulSoup import pandas as pd url=&quot;https://www.who.int/en/activities/tracking-SARS-CoV-2-variants/&quot; html_content = requests.get(url).text # Parse the html content soup = BeautifulSoup(html_content, &quot;lxml&quot;) voc_html = soup.find(&quot;table&quot;) #convert to dataframe voc_df = pd.read_html(str(voc_html))[0] #retain list of variants voc_list=voc_df['Pango lineages'] </code></pre> <p>example from <code>voc_list</code> where separate items are smushed together: <code>voc_list[1]</code></p> <pre><code>`B.1.351\xa0B.1.351.2B.1.351.3` </code></pre> <p>what I would like it to look like: <code>B.1.3510 B.1.351.2 B.1.351.3</code> (or have each item on its own row)</p> <p>excerpt from original html version which includes breaks:</p> <p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false"> <div class="snippet-code"> <pre class="snippet-code-html lang-html prettyprint-override"><code>&lt;td style="width:13%;background-color:#69d4ef;text-align:left;vertical-align:middle;"&gt;Beta &lt;br/&gt;&lt;/td&gt;&lt;td style="width:12.9865%;background-color:#69d4ef;text-align:left;"&gt;&lt;p&gt;B.1.351 &lt;br/&gt;B.1.351.2&lt;br/&gt;B.1.351.3&lt;/p&gt;&lt;/td&gt;</code></pre> </div> </div> </p> <p>Thanks for any guidance!</p>
<p>Maybe...</p> <pre><code>import pandas as pd import requests url = r'https://www.who.int/en/activities/tracking-SARS-CoV-2-variants/' page = requests.get(url) table = pd.read_html(page.text.replace('&lt;br /&gt;',' ')) df = table[0] </code></pre> <p>Outputs:</p> <pre><code> WHO label Pango lineages GISAID clade Nextstrain clade \ 0 Alpha B.1.1.7 GRY 20I (V1) 1 Beta B.1.351 B.1.351.2 B.1.351.3 GH/501Y.V2 20H (V2) 2 Gamma P.1 P.1.1 P.1.2 GR/501Y.V3 20J (V3) 3 Delta B.1.617.2 AY.1 AY.2 G/478K.V1 21A Additional amino acid changes monitored* Earliest documented samples \ 0 +S:484K +S:452R United Kingdom, Sep-2020 1 +S:L18F South Africa, May-2020 2 +S:681H Brazil, Nov-2020 3 +S:417N India, Oct-2020 Date of designation 0 18-Dec-2020 1 18-Dec-2020 2 11-Jan-2021 3 VOI: 4-Apr-2021 VOC: 11-May-2021 print(df) </code></pre> <p>Equally you could replace the <code>&lt;br /&gt;</code> with <code>\n</code>.</p>
python|html|pandas|beautifulsoup
5
377,566
68,306,855
Pandas shift() column down, but replace NaN entry with previous value?
<p>Alright, this should be straightforward. I'm shifting a column down, and just need to fill the resulting NaN with the previous value instead. How can I do this?</p> <pre><code>&gt;&gt;&gt; df1 = pd.DataFrame({ 'time_id': [5,5,5,5,5,5,5,5,11,11,11,11,11,11,11,11], ... 'A': [1,2,4,5,7,9,11,12,2,3,4,5,8,12,13,14], ... 'B': [randint(1, 99)*10 for x in range(16)], ... 'C': [randint(1, 99)*100 for x in range(16)]}) &gt;&gt;&gt; df1 time_id A B C 0 5 1 610 9400 1 5 2 250 4600 2 5 4 350 9200 3 5 5 100 6700 4 5 7 110 6400 5 5 9 220 7100 6 5 11 200 800 7 5 12 580 7200 8 11 2 700 1100 9 11 3 770 4700 10 11 4 170 3700 11 11 5 900 2500 12 11 8 730 8800 13 11 12 940 2600 14 11 13 740 2700 15 11 14 790 4800 &gt;&gt;&gt; df1['C_prev'] = df1.groupby(['time_id'])['C'].shift(1) &gt;&gt;&gt; df1 time_id A B C C_prev 0 5 1 610 9400 NaN 1 5 2 250 4600 9400.0 2 5 4 350 9200 4600.0 3 5 5 100 6700 9200.0 4 5 7 110 6400 6700.0 5 5 9 220 7100 6400.0 6 5 11 200 800 7100.0 7 5 12 580 7200 800.0 8 11 2 700 1100 NaN 9 11 3 770 4700 1100.0 10 11 4 170 3700 4700.0 11 11 5 900 2500 3700.0 12 11 8 730 8800 2500.0 13 11 12 940 2600 8800.0 14 11 13 740 2700 2600.0 15 11 14 790 4800 2700.0 </code></pre> <p>I.e. the first NaN should just repeat the first value, 9400, and similarly the next column (since they're grouped by time_id) should fill its NaN with 1100.</p> <p>Help appreciated!</p>
<p>You can use <code>interpolate()</code>:</p> <pre><code>df1['C_prev'] = df1['C_prev'].interpolate(method='backfill', limit_direction='backward') </code></pre>
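<p>An alternative sketch (my addition) that stays inside the groupby, filling each group's leading <code>NaN</code> with that row's own <code>C</code> value — which is exactly what the expected output shows — and cannot leak values across <code>time_id</code> boundaries:</p> <pre><code>df1['C_prev'] = df1.groupby('time_id')['C'].shift(1).fillna(df1['C'])
</code></pre>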
python|pandas|dataframe
1
377,567
68,377,425
Applying a function to every cell in a dataframe based on row and column
<p>I have a dataframe whose rows and columns are numbers. Is there a way to apply a function to every cell, based on whatever row and column the cell belongs to? To illustrate, I want:</p> <pre><code> | 2022 | 2023 | 2024 | 0 | f(0, 2022) | f(0, 2023) | f(0, 2024) | 1 | f(1, 2022) | f(1, 2023) | f(1, 2024) | </code></pre>
<p>Try using <code>pd.DataFrame.apply(func, axis)</code><br> You might find the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">documentation</a> helpful</p>
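<p>A concrete sketch of that suggestion (my addition; <code>f</code> is a stand-in for whatever per-cell function you have). Applying over columns means each call sees one column as a Series whose <code>.name</code> is the year and whose index holds the row labels:</p> <pre><code>import pandas as pd

def f(row_label, year):            # hypothetical per-cell function
    return f'f({row_label}, {year})'

df = pd.DataFrame(index=[0, 1], columns=[2022, 2023, 2024])
result = df.apply(lambda col: pd.Series([f(i, col.name) for i in col.index],
                                        index=col.index))
</code></pre>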
python|pandas|dataframe
-1
377,568
68,170,884
How to apply Target Encoding in test dataset?
<p>I am working on a project, where I had to apply target encoding for 3 categorical variables:</p> <pre><code>merged_data['SpeciesEncoded'] = merged_data.groupby('Species')['WnvPresent'].transform(np.mean) merged_data['BlockEncoded'] = merged_data.groupby('Block')['WnvPresent'].transform(np.mean) merged_data['TrapEncoded'] = merged_data.groupby('Trap')['WnvPresent'].transform(np.mean) </code></pre> <p>I received the results and ran the model. Now the problem is that I have to apply the same model to test data that has columns Block, Trap, and Species, but doesn't have the values of the target variable WnvPresent (which has to be predicted).</p> <p>How can I transfer my encoding from training sample to the test? I would greatly appreciate any help.</p> <p>P.S. I hope it makes sense.</p>
<p>You need to save the mapping between each feature and its mean values if you want to apply it to the test dataset.</p> <p>Here is a possible solution:</p> <pre><code>species_encoding = df.groupby(['Species'])['WnvPresent'].mean().to_dict() block_encoding = df.groupby(['Block'])['WnvPresent'].mean().to_dict() trap_encoding = df.groupby(['Trap'])['WnvPresent'].mean().to_dict() merged_data['SpeciesEncoded'] = df['Species'].map(species_encoding) merged_data['BlockEncoded'] = df['Block'].map(block_encoding) merged_data['TrapEncoded'] = df['Trap'].map(trap_encoding) test_data['SpeciesEncoded'] = test_data['Species'].map(species_encoding) test_data['BlockEncoded'] = test_data['Block'].map(block_encoding) test_data['TrapEncoded'] = test_data['Trap'].map(trap_encoding) </code></pre> <p>This would answer your question, but I want to add that this approach can be improved. Directly using mean values of targets could make the models overfit on the data.</p> <p>There are many approaches to improve target encoding; one of them is smoothing. Here is a link to an example: <a href="https://maxhalford.github.io/blog/target-encoding/" rel="nofollow noreferrer">https://maxhalford.github.io/blog/target-encoding/</a></p> <p>Here is an example:</p> <pre><code>m = 10 mean = df['WnvPresent'].mean() # Compute the number of values and the mean of each group agg = df.groupby('Species')['WnvPresent'].agg(['count', 'mean']) counts = agg['count'] means = agg['mean'] # Compute the &quot;smoothed&quot; means species_encoding = ((counts * means + m * mean) / (counts + m)).to_dict() </code></pre>
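<p>One more practical detail (my addition, not in the original answer): categories that appear only in the test set map to <code>NaN</code>; a common fallback is the global target mean:</p> <pre><code>global_mean = merged_data['WnvPresent'].mean()
test_data['SpeciesEncoded'] = test_data['Species'].map(species_encoding).fillna(global_mean)
</code></pre>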
python|pandas
0
377,569
68,029,770
Onnx to trt - [8] Assertion failed: creator && "Plugin not found
<p>I am using <strong>TensorRT</strong> in order to convert a model from onnx to trt -format. The model is originally a <a href="/questions/tagged/tensorflow" class="post-tag" title="show questions tagged &#39;tensorflow&#39;" rel="tag">tensorflow</a> model from Tensorflow Model Zoo (SSD ResNet50). When I try to convert it I get the error:</p> <pre><code>[E] [TRT] /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/parsers/onnx/ModelImporter.cpp:708: ERROR: /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/parsers/onnx/builtin_op_importers.cpp:4298 In function importFallbackPluginImporter: [8] Assertion failed: creator &amp;&amp; &quot;Plugin not found, are the plugin name, version, and namespace correct?&quot; [E] Engine set up failed &amp;&amp;&amp;&amp; FAILED TensorRT.trtexec # trtexec --onnx=../model.onnx --fp16=enable --workspace=5500 --batch=1 --saveEngine=model_op11.trt --verbose </code></pre> <p>As far as I can tell it is looking for a plugin for the <code>NonMaxSuppresion</code> operation. Does anyone know how to convert a model from Tensorflow Model Zoo to TensorRT?</p>
<p>Got this fixed by using TensorRT 8.</p>
tensorflow|onnx|nvidia-jetson|tensorrt
0
377,570
68,136,779
logits and labels must be broadcastable: data augmentation layers make logits and labels mismatch
<p>I'm trying to <strong>move all my data augmentation preprocessing over to inside my model</strong>, hence, i have created a preprocessing model and merged it into my Resnet50.</p> <p>The problem is, my <code>tf.data</code> pipeline inputs <code>batch_size</code> images to the model, that when fed into the preprocessing pipeline generates: <code>batch_size * 54</code> images (54 samples per image), hence, <strong>the label information is not associated to the generated images</strong> and i get the error (batch_size = 16):</p> <pre><code>InvalidArgumentError: logits and labels must be broadcastable: logits_size=[864,516] labels_size=[16,516] [[node categorical_crossentropy_1/softmax_cross_entropy_with_logits (defined at &lt;ipython-input-26-8e524a3a5e0b&gt;:31) ]] [Op:__inference_train_function_118686] </code></pre> <p>Any guesses on what should i do to keep run data augmentation on the GPU and associate the labels to the corresponding generated images?</p> <h3>Auxiliary Code:</h3> <pre class="lang-py prettyprint-override"><code>''' Data augmentation pipeline: (yields 54 images by sample) Extract 5 random crops + 1 central crop, Rotate +-45 deg, Translate in two random directions, then mirror (vertically) ''' def preprocessing_model(): input = keras.Input(shape=(224, 224, 3), name=&quot;input&quot;) rescaling = tf.keras.layers.experimental.preprocessing.Rescaling(1./255)(input) central_crop = tf.keras.layers.experimental.preprocessing.CenterCrop(height=112,width=112)(rescaling) resized_single_crop = tf.keras.layers.experimental.preprocessing.Resizing(224,224)(central_crop) random_crop = keras.Sequential([tf.keras.layers.experimental.preprocessing.RandomCrop(height=56,width=74)]) random_crop0 = random_crop(rescaling,training=True) random_crop1 = random_crop(rescaling,training=True) random_crop2 = random_crop(rescaling,training=True) random_crop3 = random_crop(rescaling,training=True) random_crop4 = random_crop(rescaling,training=True) crops = tf.keras.layers.concatenate([random_crop0,random_crop1,random_crop2,random_crop3,random_crop4],axis=0) resized_crops = tf.keras.layers.experimental.preprocessing.Resizing(224,224)(crops) rotate_1 = keras.Sequential([tf.keras.layers.experimental.preprocessing.RandomRotation(factor=[0.125,0.125])]) rotate_2 = keras.Sequential([tf.keras.layers.experimental.preprocessing.RandomRotation(factor=[-0.125,-0.125])]) rotated_a = rotate_1(rescaling,training=True) rotated_b = rotate_2(rescaling,training=True) augmented_images = tf.keras.layers.concatenate([rescaling,resized_crops,resized_single_crop,rotated_a,rotated_b],axis=0) translate_1 = keras.Sequential([keras.layers.experimental.preprocessing.RandomTranslation(height_factor=(-0.25,0.25),width_factor=(0.25,0.25))]) translate_2 = keras.Sequential([keras.layers.experimental.preprocessing.RandomTranslation(height_factor=(-0.25,0.25),width_factor=(-0.25,-0.25))]) translated_a = translate_1(augmented_images,training=True) translated_b = translate_2(augmented_images,training=True) augmented_images = tf.keras.layers.concatenate([augmented_images,translated_a,translated_b],axis=0) mirrored_versions = keras.Sequential([tf.keras.layers.experimental.preprocessing.RandomFlip('vertical')]) mirrored_images = mirrored_versions(augmented_images,training=True) augmented_images = tf.keras.layers.concatenate([augmented_images,mirrored_images],axis=0) model = tf.keras.Model(inputs=input,outputs=augmented_images) return model </code></pre> <blockquote> <p>Merging the preprocessing model into the ResNet50:</p> </blockquote> <pre 
class="lang-py prettyprint-override"><code>def load_and_configure_model(optimizer, loss, metrics, path): model = ResNet50V2(include_top=True, weights='imagenet') transfer_layer = model.get_layer('avg_pool') resnet_submodel = Model(inputs=model.input,outputs=transfer_layer.output) augmentation_pipeline = preprocessing_model() augmentation_model_cfg = augmentation_pipeline.get_config() # Get layer configuration dictionary. model_config = resnet_submodel.get_config() submodel = model_config['layers'] submodel.remove(submodel[0]) # Remove the previous input layer prepr_model_layers = augmentation_model_cfg['layers'] prepr_model_layers.extend(submodel) # Join both models # Replace the previous input layer with the output from the preprocessing model # (Connect the preprocessing model to the resnet) output_name = prepr_model_layers[len(augmentation_pipeline.get_config()['layers'])-1]['name'] prepr_model_layers[len(augmentation_pipeline.get_config()['layers'])]['inbound_nodes'] = [[[output_name, 0, 0, {}]]] new_model = augmentation_pipeline.__class__.from_config(augmentation_model_cfg, custom_objects={}) # change custom objects if necessary # Set back pre-trained weights on new model weights = [layer.get_weights() for layer in resnet_submodel.layers[1:]] for layer, weight in zip(new_model.layers[15:], weights): layer.set_weights(weight) for layer in new_model.layers[15:]: layer.trainable = False for layer in new_model.layers[15:]: trainable = ('conv5_block3' in layer.name) layer.trainable = trainable transfer_layer = new_model.get_layer('avg_pool') class1 = Dense(1000, activation='softmax',name='class_1')(transfer_layer.output) class2 = Dense(516, activation='softmax',name='class_2')(transfer_layer.output) class3 = Dense(124,activation='softmax', name='class_3')(transfer_layer.output) model = keras.Model( inputs=[new_model.inputs], outputs=[class1,class2,class3], ) if not path == None : model.load_weights(path) model.compile(optimizer=optimizer, loss=loss, metrics=metrics) print(model.summary()) return model </code></pre> <blockquote> <p>tf.data pipeline</p> </blockquote> <pre class="lang-py prettyprint-override"><code>def train_model(train_path, validation_path, buffer_size, epochs, steps_per_epoch, model): train_filenames = get_filenames(train_path) random.shuffle(train_filenames) validation_filenames = get_filenames(validation_path) random.shuffle(validation_filenames) dataset_length = 91758 train_size = dataset_length * 0.7 validation_size = dataset_length - train_size batch_size = 16 AUTO = tf.data.AUTOTUNE train_dataset = tf.data.TFRecordDataset(buffer_size=int(1e+8),num_parallel_reads=AUTO,filenames=train_filenames).cache('/cache/train_cache').map(parsing_fn,num_parallel_calls=AUTO) train_dataset = train_dataset.batch(batch_size) train_dataset = train_dataset.repeat() train_dataset = train_dataset.prefetch(AUTO) # Create a validation dataset validation_dataset = tf.data.TFRecordDataset(num_parallel_reads=AUTO,filenames=validation_filenames).map(parsing_fn,num_parallel_calls=AUTO) validation_dataset = validation_dataset.batch(batch_size) validation_dataset = validation_dataset.prefetch(AUTO) validation_dataset = validation_dataset.repeat(1) validation_steps = validation_size / batch_size # &quot;This ensures that the same validation samples are used every time&quot; history = model.fit(x=train_dataset, epochs=epochs, steps_per_epoch=steps_per_epoch, validation_data=validation_dataset, validation_steps=validation_steps) return history </code></pre>
<p>During training, the model's output batch size has to match the labels' batch size, so a layer that multiplies the number of samples per batch breaks the image/label pairing — it's not workable as a Keras layer inside the model.</p> <p>If you really want to generate multiple images per sample, you would have to do this in the ingest pipeline, where the labels can be duplicated alongside the images.</p> <p>That said, the more common approach is to randomly select one out of the multiple augmentations during each epoch. If you do that, you can keep it as a layer within the model.</p>
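<p>A minimal sketch of the ingest-pipeline route (my addition, with two assumptions: the augmentation model stacks whole batches along axis 0, as the question's concatenations do, so tiling the labels 54 times keeps the pairs aligned; and the labels are a single one-hot tensor — with the question's three outputs, each label tensor would need the same tiling):</p> <pre><code>aug = preprocessing_model()          # the pipeline defined in the question

def augment_batch(images, labels):
    augmented = aug(images, training=True)     # (batch * 54, 224, 224, 3)
    tiled = tf.tile(labels, [54, 1])           # (batch * 54, n_classes)
    return augmented, tiled

train_dataset = train_dataset.map(augment_batch, num_parallel_calls=tf.data.AUTOTUNE)
</code></pre>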
python|tensorflow|keras|deep-learning|data-preprocessing
0
377,571
68,043,922
How to keep date format the same in pandas?
<pre><code>import pandas as pd import sys df = pd.read_csv(sys.stdin, sep='\t', parse_dates=['Date'], index_col=0) df.to_csv(sys.stdout, sep='\t') </code></pre> <pre><code>Date Open 2020/06/15 182.809924 2021/06/14 257.899994 </code></pre> <p>I got the following output with the input shown above.</p> <pre><code>Date Open 2020-06-15 182.809924 2021-06-14 257.899994 </code></pre> <p>The date format is changed. Is there a way to maintain the date format automatically? (For example, if the input is in YYYY/MM/DD format, the output should be in YYYY-MM-DD. If the input is in YYYY-MM-DD, the output should in YYYY-MM-DD, etc.)</p> <p>I prefer a way that I don't have to manually test the data format. It is best if there is an automatical way to maintain the date format, no matter what the particular date format is.</p>
<p>You can specify the <code>date_format</code> argument in <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>to_csv</code></a>:</p> <pre><code>df.to_csv(sys.stdout, sep='\t', date_format=&quot;%Y/%m/%d&quot;) </code></pre>
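<p>If you want to preserve whatever separator the input happened to use without hard-coding it, here is a sketch (my addition — it assumes the dates are always in year/month/day order and only the separator character varies):</p> <pre><code>import re
import sys
import pandas as pd

raw = pd.read_csv(sys.stdin, sep='\t', dtype=str)
sep = re.search(r'\D', raw['Date'].iloc[0]).group()     # first non-digit character
raw['Date'] = pd.to_datetime(raw['Date'])
raw.set_index('Date').to_csv(sys.stdout, sep='\t',
                             date_format=f'%Y{sep}%m{sep}%d')
</code></pre>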
python|pandas
3
377,572
68,292,862
PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance
<p>I got following warning</p> <blockquote> <p>PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling <code>frame.insert</code> many times, which has poor performance. Consider using pd.concat instead. To get a de-fragmented frame, use <code>newframe = frame.copy()</code></p> </blockquote> <p>when I tried to append multiple dataframes like</p> <pre><code>df1 = pd.DataFrame() for file in files: df = pd.read(file) df['id'] = file df1 = df1.append(df, ignore_index =True) </code></pre> <p>where</p> <pre><code> df['id'] = file </code></pre> <p>seems to cause the warning. I wonder if anyone can explain how copy() can avoid or reduce the fragment problem or suggest other different solutions to avoid the issues.</p> <p>Thanks,</p> <hr /> <p>I tried to create a testing code to duplicate the problem but I don't see Performance Warning with a testing dataset (random integers). The same code would continue to produce warning when reading in the real dataset. It looks like something triggered the issues in the real dataset.</p> <pre><code>import pandas as pd import numpy as np import os import glob rows = 35000 cols = 1900 def gen_data(rows, cols, num_files): if not os.path.isdir('./data'): os.mkdir('./data') files = [] for i in range(num_files): file = f'./data/{i}.pkl' pd.DataFrame( np.random.randint(1, 1_000, (rows, cols)) ).to_pickle(file) files.append(file) return files # Comment the first line to run real dataset, comment the second line will run the testing dataset files = gen_data(rows, cols, 10) # testing dataset, runs okay files = glob.glob('../pickles3/my_data_*.pickle') # real dataset, get performance warning dfs = [] for file in files: df = pd.read_pickle(file) df['id'] = file dfs.append(df) dfs = pd.concat(dfs, ignore_index = True) </code></pre>
<p><code>append</code> is not an efficient method for this operation. <code>concat</code> is more appropriate in this situation.</p> <p>Replace</p> <pre><code>df1 = df1.append(df, ignore_index =True) </code></pre> <p>with</p> <pre><code>df1 = pd.concat((df1, df), axis=0, ignore_index=True) </code></pre> <p>Details about the differences are in this question: <a href="https://stackoverflow.com/questions/15819050/pandas-dataframe-concat-vs-append">Pandas DataFrame concat vs append</a></p>
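<p>Even better, and what the warning itself suggests, is to collect the frames in a list and concatenate once after the loop, since concatenating inside the loop still copies the accumulated frame on every iteration. A minimal sketch, mirroring the second snippet in the question (it assumes the same <code>files</code> list of pickles):</p>
<pre><code>import pandas as pd

dfs = []
for file in files:
    df = pd.read_pickle(file)
    df['id'] = file
    dfs.append(df)            # cheap list append, no DataFrame copy

df1 = pd.concat(dfs, ignore_index=True)  # single concat at the end
</code></pre>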
python|pandas
7
377,573
68,267,296
Correlation heatmap turned values into nan in Python
<p>I want to draw a heatmap of my table <code>df</code>, which looks normal at the beginning:</p> <pre><code> Total Paid Post Engaged Negative like 1 2178 0 0 66 0 1207 2 1042 0 0 60 0 921 3 2096 0 0 112 0 1744 4 1832 0 0 109 0 1718 5 1341 0 0 38 0 889 6 1933 0 0 123 0 1501 ... </code></pre> <p>but after I applied:</p> <pre><code>df= full_Data.iloc[1:,4:10] df= pd.DataFrame(df,columns=['A','B','C', 'D', 'E', 'F']) corrMatrix = df.corr() sn.heatmap(corrMatrix, annot=True) plt.show() </code></pre> <p>it returned an empty graph:</p> <pre><code>C:\Users\User\Anaconda3\lib\site-packages\seaborn\matrix.py:204: RuntimeWarning: All-NaN slice encountered vmin = np.nanmin(calc_data) C:\Users\User\Anaconda3\lib\site-packages\seaborn\matrix.py:209: RuntimeWarning: All-NaN slice encountered vmax = np.nanmax(calc_data) </code></pre> <p><a href="https://i.stack.imgur.com/nCpLv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nCpLv.png" alt="enter image description here" /></a></p> <p>and <code>df</code> returned:</p> <pre><code> A B C D E F 1 nan nan nan nan nan nan 2 nan nan nan nan nan nan 3 nan nan nan nan nan nan 4 nan nan nan nan nan nan 5 nan nan nan nan nan nan ... </code></pre> <p>Why are all the values turned into <code>nan</code>?</p> <hr /> <p>Update:</p> <p>I tried to convert <code>df</code> without naming the columns in the old way:</p> <pre><code>df.columns = ['A','B','C', 'D', 'E', 'F'] </code></pre> <p>and</p> <pre><code>df= pd.DataFrame(df.to_numpy(),columns=['A','B','C', 'D', 'E', 'F']) </code></pre> <p>and both raised this error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-43-3a27f095066b&gt; in &lt;module&gt; 12 13 corrMatrix = df.corr() ---&gt; 14 sn.heatmap(corrMatrix, annot=True) 15 plt.show() 16 ~\Anaconda3\lib\site-packages\seaborn\_decorators.py in inner_f(*args, **kwargs) 44 ) 45 kwargs.update({k: arg for k, arg in zip(sig.parameters, args)}) ---&gt; 46 return f(**kwargs) 47 return inner_f 48 ~\Anaconda3\lib\site-packages\seaborn\matrix.py in heatmap(data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, linewidths, linecolor, cbar, cbar_kws, cbar_ax, square, xticklabels, yticklabels, mask, ax, **kwargs) 545 plotter = _HeatMapper(data, vmin, vmax, cmap, center, robust, annot, fmt, 546 annot_kws, cbar, cbar_kws, xticklabels, --&gt; 547 yticklabels, mask) 548 549 # Add the pcolormesh kwargs here ~\Anaconda3\lib\site-packages\seaborn\matrix.py in __init__(self, data, vmin, vmax, cmap, center, robust, annot, fmt, annot_kws, cbar, cbar_kws, xticklabels, yticklabels, mask) 164 # Determine good default values for the colormapping 165 self._determine_cmap_params(plot_data, vmin, vmax, --&gt; 166 cmap, center, robust) 167 168 # Sort out the annotations ~\Anaconda3\lib\site-packages\seaborn\matrix.py in _determine_cmap_params(self, plot_data, vmin, vmax, cmap, center, robust) 202 vmin = np.nanpercentile(calc_data, 2) 203 else: --&gt; 204 vmin = np.nanmin(calc_data) 205 if vmax is None: 206 if robust: &lt;__array_function__ internals&gt; in nanmin(*args, **kwargs) ~\Anaconda3\lib\site-packages\numpy\lib\nanfunctions.py in nanmin(a, axis, out, keepdims) 317 # Fast, but not safe for subclasses of ndarray, or object arrays, 318 # which do not implement isnan (gh-9009), or fmin correctly (gh-8975) --&gt; 319 res = np.fmin.reduce(a, axis=axis, out=out, **kwargs) 320 if np.isnan(res).any(): 321 warnings.warn(&quot;All-NaN slice encountered&quot;, RuntimeWarning, 
ValueError: zero-size array to reduction operation fmin which has no identity </code></pre>
<p>I think the problem is that a <code>DataFrame</code> object is passed to the <code>pd.DataFrame</code> constructor, so the original column names differ from the new column names in the list, and only <code>NaN</code>s are created.</p> <p>One solution is to convert it to a numpy array:</p> <pre><code>df= pd.DataFrame(df.to_numpy(),columns=['A','B','C', 'D', 'E', 'F']) </code></pre> <p>Or set the new column names in a next step, without the <code>DataFrame</code> constructor:</p> <pre><code>df = full_Data.iloc[1:,4:10] df.columns = ['A','B','C', 'D', 'E', 'F'] </code></pre> <p>Another solution is to create a <code>dict</code> from the existing columns only and rename:</p> <pre><code>old = df.columns new = ['A','B','C', 'D', 'E', 'F'] df = df.rename(columns=dict(zip(old, new))) print (df) A B C D E F 1 2178 0 0 66 0 1207 2 1042 0 0 60 0 921 3 2096 0 0 112 0 1744 4 1832 0 0 109 0 1718 5 1341 0 0 38 0 889 6 1933 0 0 123 0 1501 </code></pre> <hr /> <pre><code>print (df.corr()) A B C D E F A 1.000000 NaN NaN 0.606808 NaN 0.727034 B NaN NaN NaN NaN NaN NaN C NaN NaN NaN NaN NaN NaN D 0.606808 NaN NaN 1.000000 NaN 0.916325 E NaN NaN NaN NaN NaN NaN F 0.727034 NaN NaN 0.916325 NaN 1.000000 </code></pre> <p>EDIT:</p> <p>The problem was that the columns were not numeric.</p> <pre><code>df = df.astype(int) </code></pre> <p>Or:</p> <pre><code>df = df.apply(pd.to_numeric, errors='coerce') </code></pre>
python|pandas|dataframe|seaborn|nan
3
377,574
469,931
deleting rows of a numpy array based on uniqueness of a value
<p>Let's say I have a two-dimensional array like this:</p> <pre><code>numpy.array( [[0,1,1.2,3], [1,5,3.2,4], [3,4,2.8,4], [2,6,2.3,5]]) </code></pre> <p>I want to form an array by eliminating whole rows based on the uniqueness of the values in the last column, selecting which row to keep based on the value in the third column. E.g. in this case I would like to keep only one of the rows with 4 in the last column, choosing the one with the smaller value in the third column, so that the result is something like this:</p> <pre><code>array([0,1,1.2,3], [3,4,2.8,4], [2,6,2.3,5]) </code></pre> <p>thus eliminating the row [1,5,3.2,4].</p> <p>What would be the best way to do it?</p>
<p>My numpy is way out of practice, but this should work:</p> <pre><code>#keepers is a dictionary of type int: (int, int) #the key is the row's final value, and the tuple is (row index, row[2]) keepers = {} deletions = [] for i, row in enumerate(n): key = row[3] if key not in keepers: keepers[key] = (i, row[2]) else: if row[2] &gt; keepers[key][1]: deletions.append(i) else: deletions.append(keepers[key][0]) keepers[key] = (i, row[2]) o = numpy.delete(n, deletions, axis=0) </code></pre> <p>I've greatly simplified it from my declarative solution, which was getting quite unwieldy. Hopefully this is easier to follow; all we do is maintain a dictionary of values that we want to keep and a list of indexes we want to delete.</p>
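<p>For larger arrays, a vectorized numpy sketch of the same idea (sort by the third column so the smallest value per key comes first, then keep the first occurrence of each last-column value; note that the result comes out ordered by that key):</p>
<pre><code>import numpy as np

n = np.array([[0, 1, 1.2, 3],
              [1, 5, 3.2, 4],
              [3, 4, 2.8, 4],
              [2, 6, 2.3, 5]])

srt = n[np.argsort(n[:, 2])]                        # rows ordered by third column
_, first = np.unique(srt[:, 3], return_index=True)  # first (smallest) row per key
result = srt[first]
# [[0.  1.  1.2 3. ]
#  [3.  4.  2.8 4. ]
#  [2.  6.  2.3 5. ]]
</code></pre>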
python|arrays|numpy|unique
1
377,575
59,088,097
Same observations on data frame feature appear as independent
<p>I have got the following DF:</p> <p><code>carrier_name sol_carrier aapt 702 aapt carrier 185 afrix 72 afr-ix 4 airtel 35 airtel 2 airtel dia and broadband 32 airtel mpls standard circuits 32 amt 6 anca test 1 appt 1 at tokyo 1 at&amp;t 5041 att 2 batelco 723 batelco 2 batelco (manual) 4 beeline 1702 beeline - 01 6 beeline - 02 6 </code></p> <p>I need to get a unique list of <code>carrier_name</code>, so I have done some basic housekeeping, as I only want to keep the names with no whitespace at the beginning or end of the observation, with the following code:</p> <pre><code>carrier = pd.DataFrame(data['sol_carrier'].value_counts(dropna=False)) carrier['carrier_name'] = carrier.index carrier['carrier_name'] = carrier['carrier_name'].str.strip() carrier['carrier_name'] = carrier['carrier_name'].str.replace('[^a-zA-Z]', ' ') carrier['carrier_name'] = np.where(carrier['carrier_name']==' ',np.NaN,carrier['carrier_name']) carrier['carrier_name'] = carrier['carrier_name'].str.strip() carrier = carrier.reset_index(drop=True) carrier = carrier[['carrier_name','sol_carrier']] carrier.sort_values(by='carrier_name') </code></pre> <p>What happens here is that I get a list of <code>carrier_name</code> but still get some duplicate observations, like <code>airtel</code> or <code>beeline</code> for example. I don't understand why this is happening, as both observations are the same, there are no more whitespaces at the beginning or the end of the observation, and these observations are followed by their respective <code>value_counts()</code>, so there is no reason for them to be duplicated. Here is the same DF but after the above code has been applied:</p> <p><code>carrier_name sol_carrier aapt 702 aapt carrier 185 afr ix 4 afrix 72 airtel 35 airtel 2 airtel dia and broadband 32 airtel mpls standard circuits 32 amt 6 anca test 1 appt 1 at t 5041 at tokyo 1 att 2 batelco 723 batelco 2 batelco manual 4 beeline 1702 beeline 6 beeline 6</code></p>
<p>That happens because you don't aggregate the results; you just change the values in the 'carrier_name' column.</p> <p>To aggregate the results, call</p> <pre><code>carrier.groupby('carrier_name').sol_carrier.sum() </code></pre> <p>or modify the 'data' dataframe and then call</p> <pre><code>data['sol_carrier'].value_counts() </code></pre>
python|pandas|replace
1
377,576
59,138,033
Read 5 lines from a pandas dataframe and insert them in one cell per line in another pandas dataframe
<p>I am reading data from an Excel file; the resulting dataframe is an array with a single column and several lines:</p> <pre><code> identifier 0 6051 1 771 2 6051 3 5219 4 3667 ... 6023 771 6024 6051 6025 772 [6026 rows x 1 columns] </code></pre> <p>What I need is to create a new dataframe with 1205 lines (6025/5) and one single column, where each line's single cell holds 5 values from the original dataframe. The result should be something like this:</p> <pre><code> identifier 0 6051 771 6051 5219 3667 1 2578 3697 24 7865 7852 2 635 6987 2485 3658 2587 3 219 8579 2569 1478 3698 4 567 5974 6587 8752 6848 ... 1203 981 6987 2547 369 4752 1204 5651 6987 3975 6975 3974 1205 662 6975 2354 1284 1298 [1205 rows x 1 columns] </code></pre> <p>I am reading the original dataframe just like this:</p> <pre><code>file = '01-03-2010.xlsx' require_cols = [0] df = pd.read_excel(file, sheet_name='Folha2', usecols = require_cols) df2 = pd.DataFrame(columns=['sentence']) </code></pre> <p>df2 is the resulting dataframe.</p> <p>Can anyone help? BR</p>
<p>You can try the following.</p> <pre><code>df['group'] = df.index//5 # add extra column to hold the group value new_df = df.groupby('group').identifier.apply(list).apply(pd.Series) df = df.drop('group', axis=1) # drop the extra column that was created. print(new_df.head()) </code></pre> <p>Edit:</p> <p><strong>Input</strong></p> <pre><code>df = pd.DataFrame(np.random.randint(0,1000,size=6026), columns=[&quot;identifier&quot;]) df.head() identifier 0 752 1 14 2 184 3 139 4 37 </code></pre> <p><strong>Solution</strong></p> <pre><code>df['group'] = df.index//5 df1 = df.groupby('group').identifier.apply(list).apply(pd.Series).fillna(0) df1 = df1.astype('int32') df1.head() 0 1 2 3 4 group 0 752 14 184 139 37 1 716 499 902 54 565 2 74 427 939 380 244 3 651 803 97 78 492 4 169 376 737 342 616 </code></pre> <p><strong>Solution 2:</strong> (one column with array of 5 elements)</p> <pre><code>df['group'] = df.index//5 df1 = pd.DataFrame(df.groupby('group').identifier.apply(list)) df1.head() identifier group 0 [752, 14, 184, 139, 37] 1 [716, 499, 902, 54, 565] 2 [74, 427, 939, 380, 244] 3 [651, 803, 97, 78, 492] 4 [169, 376, 737, 342, 616] </code></pre>
python|pandas|dataframe|machine-learning
2
377,577
59,273,374
How is my model working if all the base layer trainables are set to false?
<p>This is the model I made for my deep learning project, and I am getting decent accuracy out of it. My question is: if I froze the weights of the initial model (which is my base model, VGG19), how did I manage to train the whole model? Also, after adding the VGG19 base with its layers frozen, I got better results than I achieved with only a few CNN layers. Could it be because the weights of the VGG19 were initialized into my CNN layers?</p> <pre><code>img_h=224 img_w=224 initial_model = applications.vgg19.VGG19(weights='imagenet', include_top=False,input_shape = (img_h,img_w,3)) last = initial_model.output for layer in initial_model.layers: layer.trainable = False x = Conv2D(128, kernel_size=3, strides=1, activation='relu')(last) x = Conv2D(64, kernel_size=3, strides=1, activation='relu')(x) x = Flatten()(x) x = Dense(512, activation='relu')(x) x = Dense(256, activation='relu')(x) x = Dense(128, activation='relu')(x) x = (Dropout(0.1))(x) preds = Dense(2, activation='sigmoid')(x) </code></pre>
<p>"Freezing the layers" just means you don't update the weights on those layers when you backpropagate the error. Therefore, you'll just update the weights on those layers that are not frozen, which enables your neural net to learn.</p> <p>You are adding some layers after VGG. I don't know if this is a common approach, but it totally makes sense that it kind of works, assuming you are interpreting your metrics right.</p> <p>Your VGG has already been pre-trained on ImageNet, so it's a pretty good baseline for many use-cases. You are basically using VGG as your <strong>encoder</strong>. Then, on the output of this encoder (which we can call <strong>latent representation of your input</strong>), you train a neural net.</p> <p>I would also try out more <em>mainstream</em> transfer learning techniques, where you gradually unfreeze layers starting from the end, or you have gradually smaller learning rate.</p>
tensorflow|machine-learning|keras|computer-vision
0
377,578
59,320,208
How to create my own loss function in Pytorch?
<p>I'd like to create a model that predicts parameters of a circle (coordinates of center, radius).</p> <p>Input is an array of points (of arc with noise):</p> <pre><code>def generate_circle(x0, y0, r, start_angle, phi, N, sigma): theta = np.linspace(start_angle*np.pi/180, (start_angle + phi)*np.pi/180, num=N) x = np.array([np.random.normal(r*np.cos(t) + x0 , sigma, 1)[0] for t in theta]) y = np.array([np.random.normal(r*np.sin(t) + y0 , sigma, 1)[0] for t in theta]) return x, y n_x = 1000 start_angle = 0 phi = 90 N = 100 sigma = 0.005 x_full = [] for i in range(n_x): x0 = np.random.normal(0 , 10, 1)[0] y0 = np.random.normal(0 , 10, 1)[0] r = np.random.normal(0 , 10, 1)[0] x, y = generate_circle(x0, y0, r, start_angle, phi, N, sigma) x_full.append(np.array([ [x[i], y[i]] for i in range(len(x))])) X = torch.from_numpy(np.array(x_full)) print(X.size()) # torch.Size([1000, 100, 2]) </code></pre> <p>Output: [x_c, y_c, r]</p> <p>As a loss function I need to use this one: <a href="https://i.stack.imgur.com/bJuAR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bJuAR.png" alt="enter image description here"></a></p> <p>I tried to implement something like the following:</p> <pre><code>class Net(torch.nn.Module): def __init__(self, n_feature, n_hidden, n_output): super(Net, self).__init__() self.hidden = torch.nn.Linear(n_feature, n_hidden) self.predict = torch.nn.Linear(n_hidden, n_output) def forward(self, x): x = F.relu(self.hidden(x)) x = self.predict(x) return x # It doesn't work, it's just an idea def my_loss(point, params): arr = ((point[:, 0] - params[:, 0])**2 + (point[:, 1] - params[:, 1])**2 - params[:, 2]**2)**2 loss = torch.sum(arr) return loss # For N pairs (x, y) model predicts parameters of circle net = Net(n_feature=N*2, n_hidden=10, n_output=3) optimizer = torch.optim.SGD(net.parameters(), lr=1e-4) for t in range(1000): prediction = net(X.view(n_x, N*2).float()) loss = my_loss(X, prediction) print(f"loss: {loss}") optimizer.zero_grad() loss.backward() optimizer.step() </code></pre> <p>So, the question is how to correctly implement my own loss function in terms of Pytorch in this case?</p> <p>Or how to change the model's structure to get expected results?</p>
<p>You're trying to create a loss between the predicted outputs and the inputs instead of between the predicted outputs and the true outputs. To do this you need to save the true values of <code>x0</code>, <code>y0</code>, and <code>r</code> when you generate them.</p> <pre class="lang-py prettyprint-override"><code>n_x = 1000 start_angle = 0 phi = 90 N = 100 sigma = 0.005 x_full = [] targets = [] # &lt;-- Here for i in range(n_x): x0 = np.random.normal(0 , 10, 1)[0] y0 = np.random.normal(0 , 10, 1)[0] r = np.random.normal(0 , 10, 1)[0] targets.append(np.array([x0, y0, r])) # &lt;-- Here x, y = generate_circle(x0, y0, r, start_angle, phi, N, sigma) x_full.append(np.array([ [x[i], y[i]] for i in range(len(x))])) X = torch.from_numpy(np.array(x_full)) Y = torch.from_numpy(np.array(targets)) # &lt;-- Here print(X.size()) # torch.Size([1000, 100, 2]) print(Y.size()) # torch.Size([1000, 3]) </code></pre> <p>Now, when you call <code>my_loss</code> you should use:</p> <pre class="lang-py prettyprint-override"><code>loss = my_loss(Y, prediction) </code></pre> <p>You are passing in all your data points every iteration of your for loop; I would split your data into smaller sections so that your model doesn't just learn to output the same values every time. E.g. you have generated 1000 points, so pass in a random selection of 100 in each iteration using something like <code>random.sample(...)</code>.</p> <p>Your input numbers are pretty large, which means your loss will be huge, so generate inputs between 0 and 1, and then if you need the value to be between 0 and 10 you can just multiply by 10.</p>
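<p>A minimal sketch of that mini-batching (it assumes the <code>X</code>, <code>Y</code>, <code>net</code>, <code>optimizer</code>, and <code>my_loss</code> defined above):</p>
<pre><code>import random

batch_size = 100
for t in range(1000):
    idx = random.sample(range(n_x), batch_size)       # random mini-batch indices
    batch_X = X[idx].view(batch_size, N * 2).float()
    batch_Y = Y[idx].float()

    prediction = net(batch_X)
    loss = my_loss(batch_Y, prediction)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
</code></pre>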
python-3.x|machine-learning|deep-learning|pytorch|loss-function
0
377,579
59,218,230
Classifications after better cluster found - Sklearn
<p>I am using kmeans to classify data.</p> <p>I found my best number of clusters k with the elbow method, and used the silhouette score to validate the decision.</p> <p>So now, how can I classify my data and plot a distribution chart?</p> <p>Could you please help me with this?</p> <p>This is my code.</p> <pre><code>import pandas as pd import seaborn as sns from sklearn.datasets import load_iris from sklearn.cluster import KMeans from sklearn import preprocessing import matplotlib.pyplot as plt from sklearn.metrics import silhouette_score %matplotlib inline df_diabetes = pd.read_csv('diabetes.csv') # Dropping the 'Classe' column df_diabetes_noclass = df_diabetes.drop('Classe', axis=1) df_diabetes_noclass.head() nomes = df_diabetes_noclass.columns valores = df_diabetes_noclass.values escala_min_max = preprocessing.MinMaxScaler() valores_normalizados = escala_min_max.fit_transform(valores) df_diabetes_normalizado = pd.DataFrame(valores_normalizados) df_diabetes_normalizado.columns = nomes df_diabetes_normalizado.head(5) sse = {} for k in range(1, 10): kmeans = KMeans(n_clusters=k, max_iter=1000).fit(df_diabetes_normalizado) df_diabetes_normalizado["clusters"] = kmeans.labels_ sse[k] = kmeans.inertia_ plt.figure(figsize=(14,9)) plt.plot(list(sse.keys()), list(sse.values())) plt.xlabel("Number of Clusters") plt.ylabel("SSE") plt.show() X = df_diabetes_normalizado y = df_diabetes_normalizado for n_cluster in range(2, 11): kmeans = KMeans(n_clusters=n_cluster).fit(X) label = kmeans.labels_ sil_coeff = silhouette_score(X, label, metric='euclidean') print("For n_clusters={}, the silhouette coefficient is {}".format(n_cluster, sil_coeff)) </code></pre> <p>I need to classify my data now and create a plot like the image below.</p> <p><a href="https://i.stack.imgur.com/G0M0T.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G0M0T.png" alt="enter image description here"></a></p>
<p>If you want to predict which cluster your new data belongs to, you need to use the predict method:</p> <pre><code>kmeans.predict(newData) </code></pre> <p>Here is the documentation link for the predict method:</p> <p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.predict" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans.predict</a></p>
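<p>A short sketch of how that could look with the code from the question (it assumes the <code>X</code>, <code>pd</code> and <code>plt</code> defined there, and that <code>k = 2</code> was the value chosen from the elbow/silhouette analysis; swap in your own k):</p>
<pre><code>k = 2
kmeans = KMeans(n_clusters=k).fit(X)
labels = kmeans.predict(X)            # cluster assignment for every row

# distribution of cluster sizes
pd.Series(labels).value_counts().sort_index().plot(kind='bar')
plt.xlabel('Cluster')
plt.ylabel('Count')
plt.show()
</code></pre>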
python|scikit-learn|sklearn-pandas
0
377,580
59,291,053
How to select rows based on two column that must contain specific value?
<p>I have a dataset with a lot of incorrect duplicates on a certain field; in my reproducible example there are duplicates in <code>serial</code> with different <code>color</code> and <code>shape</code>. I have the actual dataframe with the correct color and shape mapped to each serial, and need to select the correct rows with it.</p> <p>Example:</p> <pre><code>import pandas as pd items = pd.DataFrame({ 'serial': ['0001', '0001', '0001', '0002', '0002', '0002'], 'color': ['Blue', 'Red', 'Green', 'Blue', 'Red', 'Green'], 'shape': ['Square', 'Circle', 'Star', 'Square', 'Circle', 'Star'], 'more_data': ['G', 'H', 'I', 'J', 'K', 'L'], 'even_more_data': ['A', 'B', 'C', 'D', 'E', 'F'] }) real = pd.DataFrame({ 'serial': ['0001', '0002'], 'color': ['Blue', 'Red'], 'shape': ['Square', 'Circle'] }) </code></pre> <p>Then,</p> <pre><code>Out[1]: items serial color shape more_data even_more_data 0 0001 Blue Square G A 1 0001 Red Circle H B 2 0001 Green Star I C 3 0002 Blue Square J D 4 0002 Red Circle K E 5 0002 Green Star L F Out[2]: real serial color shape 0 0001 Blue Square 1 0002 Red Circle </code></pre> <p>I need to use 'real' to select the correct rows in 'items' so the expected result is:</p> <pre><code>Out[3]: serial color shape more_data even_more_data 0 0001 Blue Square G A 4 0002 Red Circle K E </code></pre>
<p>You can use merge. By default, <code>merge</code> joins on all the columns the two frames share (here <code>serial</code>, <code>color</code> and <code>shape</code>), so only the matching rows survive and the extra columns of <code>items</code> come along:</p> <pre><code>real.merge(items) </code></pre> <p>Output:</p> <pre><code>Out[305]: serial color shape more_data even_more_data 0 0001 Blue Square G A 1 0002 Red Circle K E </code></pre>
python|pandas|dictionary|pandas-loc
1
377,581
59,465,357
Pandas set value in a column equal to 5% quantile if they are smaller than that
<p>Generating data:</p> <pre><code>np.random.seed(42) date_rng = pd.date_range(start='1/1/2018', end='1/08/2018', freq='H') df = pd.DataFrame(np.random.randint(0,10,size=(len(date_rng), 3)), columns=['data1', 'data2', 'data3'], index= date_rng) mask = np.random.choice([1, 0], df.shape, p=[.35, .65]).astype(bool) df[mask] = np.nan </code></pre> <p>I want to do the following operation: calculate the 5% quantile of each column, then compare the value of each cell in that column with the calculated quantile: if they are smaller, set them to the 5% quantile of the column.</p> <p>I have read these questions </p> <p><a href="https://stackoverflow.com/questions/31511997/pandas-dataframe-replace-all-values-in-a-column-based-on-condition/31512025">Pandas DataFrame: replace all values in a column, based on condition</a></p> <p><a href="https://stackoverflow.com/questions/43757977/replacing-values-greater-than-a-number-in-pandas-dataframe">Replacing values greater than a number in pandas dataframe</a></p> <p>and came up with this solution:</p> <pre><code>df[df &lt; df.quantile(q=0.05, axis=0)] = df.quantile(q=0.05, axis=0) </code></pre> <p>but it's not working, because I'm trying to replace each value with a series. How can I solve this problem? Thank you</p>
<p>You can get the quantile of all columns with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.quantile.html" rel="nofollow noreferrer"><code>DataFrame.quantile</code></a> and pass it to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.clip.html" rel="nofollow noreferrer"><code>DataFrame.clip</code></a>.</p> <pre><code>np.random.seed(42) date_rng = pd.date_range(start='1/1/2018', end='1/08/2018', freq='H') df = pd.DataFrame(np.random.randint(0,10,size=(len(date_rng), 3)), columns=['data1', 'data2', 'data3'], index= date_rng) mask = np.random.choice([1, 0], df.shape, p=[.35, .65]).astype(bool) print (df) data1 data2 data3 2018-01-01 00:00:00 6 3 7 2018-01-01 01:00:00 4 6 9 2018-01-01 02:00:00 2 6 7 2018-01-01 03:00:00 4 3 7 2018-01-01 04:00:00 7 2 5 ... ... ... 2018-01-07 20:00:00 7 6 4 2018-01-07 21:00:00 0 6 6 2018-01-07 22:00:00 8 2 8 2018-01-07 23:00:00 0 0 3 2018-01-08 00:00:00 8 5 2 </code></pre> <p>For testing, a different quantile is used:</p> <pre><code>print (df.quantile(q=0.55)) data1 6.0 data2 4.0 data3 5.0 Name: 0.55, dtype: float64 df = df.clip(lower=df.quantile(q=0.55), axis=1) print (df) data1 data2 data3 2018-01-01 00:00:00 6 4 7 2018-01-01 01:00:00 6 6 9 2018-01-01 02:00:00 6 6 7 2018-01-01 03:00:00 6 4 7 2018-01-01 04:00:00 7 4 5 ... ... ... 2018-01-07 20:00:00 7 6 5 2018-01-07 21:00:00 6 6 6 2018-01-07 22:00:00 8 4 8 2018-01-07 23:00:00 6 4 5 2018-01-08 00:00:00 8 5 5 </code></pre>
python|pandas|dataframe|quantile
2
377,582
59,294,501
Having datetime only go up to minutes in my time column
<p>I am pulling information from our server and the time column goes all the way to nanoseconds. I need to merge multiple dfs and this specificity is causing my script to return an empty df.</p> <p>I tried using:</p> <pre><code>df['Time'] = pd.to_datetime(df['Time'], format="%d-%m-%y %H:%M") </code></pre> <p>but I do not believe that what I am trying to do is done through the "format" parameter.</p> <p>I was going to convert the time column to strings and then parse to the correct number of characters but I really want to avoid converting to strings and then back to datetime.</p> <p>Can what I am looking to do be done using to_datetime() or must I convert to strings, parse and convert back to datetime?</p>
<p>You can use the round() method on Timestamps to convert the column to the resolution you wish.</p> <p>For example</p> <pre><code>import pandas as pd import datetime pd.to_datetime(datetime.datetime.now()).round('H') </code></pre> <p>Returns </p> <pre><code>Timestamp('2019-12-11 22:00:00') </code></pre> <p>as it converts the current time (to nanosecond frequency) to the nearest hour.</p>
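<p>The same works on a whole datetime column through the <code>.dt</code> accessor; for minute resolution, as asked (assuming <code>df['Time']</code> is already a datetime64 column):</p>
<pre><code>df['Time'] = df['Time'].dt.round('min')   # nearest minute
# or, to simply drop everything below the minute:
df['Time'] = df['Time'].dt.floor('min')
</code></pre>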
python|pandas|datetime
5
377,583
59,193,560
Group by all elements of a column, in pandas
<p>I have the dataframe:</p> <pre><code>elements1 | elements2 a dog b dog a cat x cat c cat m pig k pig ... </code></pre> <p>and I want to obtain a dataframe of the form:</p> <pre><code>elements1 | elements2 a, b dog a, x, c cat m, k pig ... </code></pre> <p>where we essentially group by <code>elements2</code> and join the corresponding <code>elements1</code> values, separated by commas. All items in this dataframe are strings.</p>
<p>We can use <code>groupby</code>, <code>apply</code> and a <code>lambda</code> to join all the matching elements with a comma; <code>.agg(','.join)</code> is an equivalent shortcut.</p> <pre><code>df1 = df.groupby('elements2')['elements1'].apply(lambda x : ','.join(x)).reset_index() cols = ['elements1','elements2'] # sort cols by your desired input. print(df1[cols].sort_values('elements1')) elements1 elements2 1 a,b dog 0 a,x,c cat 2 m,k pig </code></pre>
pandas|dataframe|group-by
2
377,584
59,398,262
Append to an existing excel sheet using PyExcelerate
<p>I have a dataframe containing a large number of records (more than 300,000 rows and 100 columns). I want to write this dataframe into a pre-existing Excel file (say Output.xlsx).</p> <p>I tried this using openpyxl as below:</p> <pre><code>with pd.ExcelWriter('Output.xlsx',engine='openpyxl', mode='a') as writer: df.to_excel(writer,sheet_name='mysht1', index=False ) </code></pre> <p>This is inefficient: for 1000 records it was taking around 10 seconds.</p> <p>I see that PyExcelerate's performance is much better: around 2 minutes for 300,000 records.</p> <p>However, while I was able to add a sheet to a new Excel file, how can I append it to an existing one?</p> <pre><code>values = [df.columns] + list(df.values) wbk = Workbook() ws = wbk.new_sheet('mysht1', data=values) wbk.save('out.xlsx') #wbk.save('Output.xlsx') just overrides my Output.xlsx with this new tab. </code></pre>
<p>PyExcelerate doesn't support reading Excel files, therefore it can't easily do this. Reading is also out of scope for the library, so it's unlikely to be added, unfortunately. A possible, faster workaround could be to write the sheet to be appended to a new Excel file and use another script to merge the two files.</p>
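<p>A rough sketch of that merge workaround (an illustration only; the file and sheet names come from the question, and whether it ends up faster than <code>to_excel</code> depends on the data): write the big sheet with PyExcelerate, then copy its rows into the existing workbook with openpyxl:</p>
<pre><code>from pyexcelerate import Workbook
from openpyxl import load_workbook

# 1) write the new sheet quickly with PyExcelerate
values = [list(df.columns)] + df.values.tolist()
wbk = Workbook()
wbk.new_sheet('mysht1', data=values)
wbk.save('temp.xlsx')

# 2) copy it row by row into the existing workbook
src = load_workbook('temp.xlsx')['mysht1']
dst_wb = load_workbook('Output.xlsx')
dst = dst_wb.create_sheet('mysht1')
for row in src.iter_rows(values_only=True):
    dst.append(row)
dst_wb.save('Output.xlsx')
</code></pre>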
python|excel|pandas|pyexcelerate
3
377,585
59,108,736
Replace values based on column of column names
<p>I have a large dataframe (>1000 rows) of measurements. One of the columns is Fails (type str) that contains the columns for which the measurement failed. Whether the measurement fails isn't solely based on the value so I can't just replace all negative values for example, which is why there is a Fails column </p> <pre><code>Cd Sn Sb Zn Fails -1 -2 0.0 4 Cd Sn Sb -2 0.0 -1 5 Cd Sn Sb -3 -3 -2 6 Cd Sn Sb 1 2 3 4 Zn </code></pre> <p>If the element failed I need to replace the measurement with nan. So for every row in the df, I need to set df.loc[row,col]=nan if the col is in Fails. </p> <pre><code>Cd Sn Sb Zn Fails nan nan nan 4 Cd Sn Sb nan nan nan 5 Cd Sn Sb nan nan nan 6 Cd Sn Sb 1 2 3 nan Zn </code></pre> <p>What is a efficient way of doing this? </p> <p>Edit: </p> <p>I tried to use a simple example above. There's many more columns in the df. There's actually 29 different elements. This is what the portion of interest looks like </p> <pre><code>data.iloc[:,5:34] Out[45]: Se As Ga Ni ... Tl Pb U Ir 0 19.026755 3.290577 0.0 0.0 ... 0.619604 4.674604 0.030976 0.0 1 35.682812 55.108543 0.0 0.0 ... 4.217798 25.213694 0.216073 0.0 2 93.600473 187.171588 0.0 0.0 ... 12.480773 74.187307 0.647617 0.0 3 229.575678 560.092296 0.0 0.0 ... 37.041994 261.348135 1.926765 0.0 4 56.337625 14.344270 0.0 0.0 ... 0.375804 0.926559 0.004466 0.0 .. ... ... ... ... ... ... ... ... ... 871 NaN NaN NaN NaN ... NaN NaN NaN NaN data["Fails"] Out[50]: 0 Cd Sn Sb Cu Zn 1 Cd Sn Sb Cu Zn 2 Cd Sn Sb Cu Zn 3 Cd Sn Sb Cu Zn 4 Cd Sn Sb Cu Zn 871 </code></pre> <p>When I try the solutions suggested I am getting more nans than I should </p> <pre><code> Se As Ga Ni Mn ... Tl Pb U Ir 0 NaN NaN NaN NaN 0.715142 ... NaN NaN 0.030976 NaN 1 NaN NaN NaN NaN 2.295966 ... NaN NaN 0.216073 NaN 2 NaN NaN NaN NaN 6.654716 ... NaN NaN 0.647617 NaN 3 NaN NaN NaN NaN 20.567433 ... NaN NaN 1.926765 NaN 4 NaN NaN NaN NaN 0.285542 ... NaN NaN 0.004466 NaN .. .. .. .. .. ... ... .. .. ... .. 871 NaN NaN NaN NaN NaN ... NaN NaN NaN NaN </code></pre> <p>In the first couple of rows only Cd,Sn,Sb,Cu and Zn should be set to nan and everything else should be kept as is. </p>
<p>Here's my approach: split <code>Fails</code> into one row per failed column, pivot that into a boolean row-by-column mask, and use <code>mask</code> to blank out the failing cells:</p> <pre><code>rep_cols = ['Cd','Sn','Sb','Zn'] # extend with the rest of your 29 elements s = df.Fails.str.split(expand=True).stack().reset_index(name='col') df.loc[:, rep_cols] = df.mask(s.pivot('level_0', 'col', 'level_1').notnull()) </code></pre> <p>Output:</p> <pre><code> Cd Sn Sb Zn Fails 0 NaN NaN NaN 4.0 Cd Sn Sb 1 NaN NaN NaN 5.0 Cd Sn Sb 2 NaN NaN NaN 6.0 Cd Sn Sb 3 1.0 2.0 3.0 NaN Zn </code></pre>
python|python-3.x|pandas
0
377,586
59,192,903
Keras custom loss for two connected Autoencoders
<p>I would like to train two Autoencoders jointly and connect their activation layers in the deepest layer.</p> <p>How can I add all the terms in one loss function?</p> <p>Assume:</p> <pre><code>diffLR = Lambda(lambda x: abs(x[0] - x[1]))([model1_act7, model2_act5]) model = Model(inputs=[in1, in2], outputs=[diffLR, model1_conv15, model2_conv10]) model.compile(loss=['MAE', 'mean_squared_error','mean_squared_error'], optimizer='SGD', metrics=['mae', rmse]) model.fit([x_train_n, y_train_n], [yM1, x_train_n, y_train_n], batch_size=10, epochs=350, validation_split=0.2, shuffle=True) #, callbacks=[es]) </code></pre> <p>The two networks are convolutional autoencoders mapping x-&gt;x and y-&gt;y. The Lambda layer connects the latent spaces of the two networks. The goal of diffLR is to train the networks to the point where the two feature spaces represent the same distribution. (yM1 is a zero matrix of the same size as the latent feature space.)</p> <p>Currently each is optimized separately (or I think they are optimized separately...); I would like to join them in a single loss function like this:</p> <pre><code>def my_loss(z, x, y, z_pred, x_pred, y_pred): loss = backend.sqrt(backend.mean(backend.square(x_pred-x))) + backend.sqrt(backend.mean(backend.square(y_pred-y))) + backend.sqrt(backend.mean(backend.square(z_pred-z))) return loss model.compile(loss=[my_loss], optimizer='SGD', metrics=['mae', rmse]) </code></pre> <p>I get this error:</p> <pre><code>ValueError: When passing a list as loss, it should have one entry per model outputs. The model has 3 outputs, but you passed loss=[&lt;function my_loss at 0x7fa3d17f2158&gt;] </code></pre> <p>or</p> <pre><code>model.compile(loss=my_loss, optimizer='SGD', metrics=['mae', rmse]) TypeError: my_loss() missing 4 required positional arguments: 'y', 'z_pred', 'x_pred', and 'y_pred' </code></pre> <p>Is this possible to do? How can I do this?</p>
<p>So, what you are doing is computing the <code>RootMeanSquareError</code> on each of your <code>n=3</code> outputs, followed by a weighted sum (with equal weights in your case).</p> <p>As the error message says clearly:</p> <blockquote> <p>ValueError: <strong>When passing a list as loss, it should have one entry per model outputs</strong>. The model has 3 outputs, but you passed....</p> </blockquote> <p>By passing a list of 3 loss functions (which may be the same or different) while compiling your model, you can do the same thing your custom loss function does. Additionally, you can define the weight of each individual loss by passing a <code>loss_weights</code> argument value. So you can do something like the following:</p> <pre><code>def my_loss(y_true, y_pred): return backend.sqrt(backend.mean(backend.square(y_pred - y_true))) model.compile(loss=[my_loss, my_loss, my_loss], # you can pass 3 different (custom) loss functions as well loss_weights=[1.0, 1.0, 1.0], # default value is 1 optimizer='SGD', metrics=['mae', rmse]) </code></pre>
python|tensorflow|keras
0
377,587
59,453,510
How to repeat rows in dataframe with each values in a list?
<p>I have a dataframe as follows</p> <pre><code>df = pd.DataFrame({ 'DATE' : ['2015-12-01', '2015-12-01', '2015-12-02', '2015-12-02'], 'DAY_NUMBER' : [3, 3, 4, 4], 'HOUR' : [5, 6, 5, 6], 'count' : [12,11,14,15] }) DATE DAY_NUMBER HOUR count 0 2015-12-01 3 5 12 1 2015-12-01 3 6 11 2 2015-12-02 4 5 14 3 2015-12-02 4 6 15 </code></pre> <p>And I have a list <code>extra_hours = [1,13]</code></p> <p>I would like to create new rows in which the <code>HOUR</code> column is filled from <code>extra_hours</code> and <code>count=0</code>, repeating this row creation for each unique <code>['DATE', 'DAY_NUMBER']</code> pair.</p> <p>My expected df is as follows.</p> <pre><code> DATE DAY_NUMBER HOUR count 0 2015-12-01 3 5 12.0 1 2015-12-01 3 6 11.0 2 2015-12-02 4 5 14.0 3 2015-12-02 4 6 15.0 0 2015-12-01 3 1 0.0 0 2015-12-01 3 13 0.0 2 2015-12-02 4 1 0.0 2 2015-12-02 4 13 0.0 </code></pre> <p>Now I am creating the dataframe using the code below. I searched a lot but couldn't find an easier solution. Any help to improve the code and its performance is appreciated.</p> <pre><code>extra_df = df[['DATE', 'DAY_NUMBER']].sort_values('DATE').drop_duplicates() extra_df['HOUR'] = np.array(extra_hours).reshape(1,len(extra_hours)).repeat(extra_df.shape[0], axis=0).tolist() df.append(extra_df.explode('HOUR'), sort=False).fillna(0) </code></pre>
<p>Use a <code>cross join</code>: <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a> with a helper <code>DataFrame</code> created from the <code>extra_hours</code> list, and last <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.append.html" rel="nofollow noreferrer"><code>DataFrame.append</code></a> to the original (in pandas 1.2+ you can pass <code>how='cross'</code> to <code>merge</code> directly, without the <code>tmp</code> column trick):</p> <pre><code>extra_hours = [1,13] extra_df = df[['DATE', 'DAY_NUMBER']].sort_values('DATE').drop_duplicates() extra_df1 = pd.DataFrame({'HOUR':extra_hours, 'count':0, 'tmp':1}) df1 = extra_df.assign(tmp=1).merge(extra_df1, on='tmp').drop('tmp', 1) extra_df = df.append(df1, sort=True, ignore_index=True) print (extra_df) DATE DAY_NUMBER HOUR count 0 2015-12-01 3 5 12 1 2015-12-01 3 6 11 2 2015-12-02 4 5 14 3 2015-12-02 4 6 15 4 2015-12-01 3 1 0 5 2015-12-01 3 13 0 6 2015-12-02 4 1 0 7 2015-12-02 4 13 0 </code></pre>
python-3.x|pandas
3
377,588
59,343,369
Resampling an array by duplicating or skipping items (translate numpy to js)
<p>I am trying to translate this python/numpy code I have into javascript. This method takes an array and a target size and resizes the array by duplicating or skipping every N items.</p> <p>Here is an example:</p> <pre class="lang-js prettyprint-override"><code>let original_array = [0,1,2,3,4,5,6,7,8,9]; upsample(original_array, 12); // returns [0, 0, 1, 2, 3, 4, 4, 5, 6, 7, 8, 9] </code></pre> <p>This is my working python/numpy code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np def upsample(arr, target_size): original_array = np.array(arr) x = np.linspace(0, original_array.size, num=target_size, endpoint=False) x = original_array[x.astype(int)] return x </code></pre>
<p>I resolved this using this node module that copies matlab's linspace behaviour: <a href="https://github.com/jfhbrook/node-linspace" rel="nofollow noreferrer">https://github.com/jfhbrook/node-linspace</a></p> <p>Note that, unlike numpy's <code>endpoint=False</code>, this linspace includes the endpoint, so I generate one extra point and drop the last. Here is the code:</p> <pre class="lang-js prettyprint-override"><code>const linspace = require('linspace'); function upsample(arr, targetSize) { /* one extra point, endpoint dropped, to mimic numpy's endpoint=False */ const lin = linspace(0, arr.length, targetSize + 1).slice(0, targetSize); return lin.map(x =&gt; arr[Math.floor(x)]); } </code></pre>
javascript|python|arrays|numpy|math
0
377,589
59,128,344
How to transform a dataframe with a column whose values are lists to a dataframe where each element of each list in that column becomes a new row
<p>I have a dataframe with entries in this format:</p> <pre><code>user_id,item_list 0,3569 6530 4416 5494 6404 6289 10227 5285 3601 3509 5553 14879 5951 4802 15104 5338 3604 2345 9048 8627 1,16148 8470 7671 8984 9795 6811 3851 3611 7662 5034 5301 6948 5840 345 14652 10729 8429 7295 4949 16144 ... </code></pre> <p>*Note that the user_id is not an index of the dataframe</p> <p>I want to transform the dataframe into one that looks like this:</p> <pre><code>user_id,item_id 0,3569 0,6530 0,4416 0,5494 ... 1,4949 1,16144 ... </code></pre> <p>Right now I am trying this but it is wildly inefficient:</p> <pre><code>df = pd.read_csv("20recs.csv") numberOfRows = 28107*20 df2 = pd.DataFrame(index=np.arange(0, numberOfRows),columns=('user', 'item')) iter = 0 for index, row in df.iterrows(): user = row['user_id'] itemList = row['item_list'] items = itemList.split(' ') for item in items: df2.loc[iter] = [user]+[item] iter = iter + 1 </code></pre> <p>As you can see, I even tried pre-allocating the memory for the dataframe but it doesn't seem to help much.</p> <p>So there must be a much better way to do this. Can anyone help me?</p>
<p>Use <code>split</code> to transform the space-separated strings into actual lists, then <code>explode</code> to ... well, explode the DataFrame (<code>df.item_list.str.split(' ')</code> is an equivalent way to write the split). <strong>Requires pandas &gt;= 0.25.0</strong></p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df = pd.DataFrame({'user_id': [0,1], 'item_list': ['1 2 3', '4 5 6']}) &gt;&gt;&gt; df user_id item_list 0 0 1 2 3 1 1 4 5 6 &gt;&gt;&gt; (df.assign(item_id=df.item_list.apply(lambda x: x.split(' '))) .explode('item_id')[['user_id', 'item_id']]) user_id item_id 0 0 1 0 0 2 0 0 3 1 1 4 1 1 5 1 1 6 </code></pre>
python|pandas
1
377,590
59,076,114
Batch normalization destroys validation performances
<p>I'm adding some batch normalization to my model in order to improve the training time, following some tutorials. This is my model:</p> <pre><code>model = Sequential() model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=(64,64,3))) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, kernel_size=(3, 3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(128, kernel_size=(3, 3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(256, kernel_size=(3, 3), activation='relu')) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(512, activation='relu')) model.add(Dropout(0.5)) #NB: adding more parameters increases the probability of overfitting!! Try to cut instead of adding neurons!! model.add(Dense(units=512, activation='relu')) model.add(Dropout(0.5)) model.add(Dense(units=20, activation='softmax')) </code></pre> <p>Without batch normalization, i get around 50% accuracy on my data. Adding batch normalization destroys my performance, with a validation accuracy reduced to 10%. </p> <p><a href="https://i.stack.imgur.com/J63Xf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J63Xf.png" alt="enter image description here"></a></p> <p>Why is this happening? </p>
<p>Try using fewer batch normalization layers. It is also common practice to use it at the last convolutional layer. Start with just one and add more if it improves the validation accuracy.</p>
tensorflow|keras|conv-neural-network|batch-normalization
1
377,591
59,134,499
How to modify path where Torch Hub models are downloaded
<p>When I download models through Torch Hub, models are automatically downloaded in <code>/home/me/.cache/torch</code>.</p> <p><strong>How can I modify this behavior ?</strong></p>
<p>From the <a href="https://pytorch.org/docs/stable/hub.html#where-are-my-downloaded-models-saved" rel="noreferrer">official documentation</a>, there are several ways to modify this path.<br> In priority order:</p> <ol> <li><p>Calling hub.set_dir()</p></li> <li><p>$TORCH_HOME/hub, if environment variable TORCH_HOME is set.</p></li> <li><p>$XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.</p></li> <li><p>~/.cache/torch/hub</p></li> </ol> <p>So I just had to do:</p> <p><code>export TORCH_HUB=/my/path/</code></p> <hr> <h3>Edit</h3> <p><code>TORCH_HUB</code> appears to be deprecated; use <code>TORCH_HOME</code> instead</p>
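<p>Option 1 from the list can be done in code with the documented <code>torch.hub</code> API:</p>
<pre><code>import torch

torch.hub.set_dir('/my/path/')   # downloads now go to /my/path/
</code></pre>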
path|pytorch
20
377,592
59,084,304
How to know version of MKL used by numpy in python anaconda distribution?
<p>How can I find out, from Python code, the version of MKL used by numpy in the Anaconda Python distribution?</p>
<p>Found the method <code>mkl.get_version_string()</code>:</p> <pre><code>import mkl mkl.get_version_string() </code></pre> <p>console:</p> <pre><code>'Intel(R) Math Kernel Library Version 2019.0.0 Product Build 20180829 for Intel(R) 64 architecture applications' </code></pre>
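<p>If the <code>mkl</code> module isn't available, numpy itself can report which BLAS/LAPACK it was built against:</p>
<pre><code>import numpy as np

np.show_config()   # prints the BLAS/LAPACK build info, including MKL if used
</code></pre>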
python|numpy|anaconda|intel-mkl
4
377,593
59,432,845
What does `asof` mean in Pandas?
<p>I've read the documentation for pandas. There is a useful function called <code>merge_asof</code> which appears to merge two dataframes with rows that are close together. But I don't know what <code>asof</code> means. Is it <code>as of</code>? Or is it an abbreviation for something?</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html" rel="noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge_asof.html</a></p>
<p>It means "as of". Here are two sources that reference it as such:</p> <p><a href="https://code.kx.com/v2/ref/asof/" rel="nofollow noreferrer">kdb+ and q asof</a></p> <p><a href="https://issues.apache.org/jira/browse/SPARK-22947" rel="nofollow noreferrer">SPIP: as-of join in Spark SQL</a></p>
pandas
3
377,594
59,074,105
What is the correct keyword for the Proximal AdaGrad optimizer on Tensorflow?
<p>I was experimenting with the Proximal AdaGrad optimizer for a science fair project, and I was not able to use it because Keras treats it as not existing.</p> <p>My code:</p> <pre><code>import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import numpy as np import time start_time = time.time() data = tf.keras.datasets.fashion_mnist (train_images, train_labels), (test_images, test_labels) = data.load_data() class_names = ['T-shirt', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle Boot'] train_images = train_images/255.0 test_images = test_images/255.0 model = keras.Sequential([ keras.layers.Flatten(input_shape=(28, 28)), keras.layers.Dense(100, activation=&quot;relu&quot;), keras.layers.Dense(10, activation=&quot;softmax&quot;) ]) model.compile(optimizer=&quot;Proximal AdaGrad&quot;, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;]) model.fit(train_images, train_labels, epochs=200) test_loss, test_acc = model.evaluate(test_images, test_labels) print(&quot;Test acc is:&quot;, test_acc) print(&quot;--- %s seconds ---&quot; % (time.time() - start_time)) </code></pre> <p>The Error:</p> <pre><code>ValueError Traceback (most recent call last) &lt;ipython-input-2-2d12844ae498&gt; in &lt;module&gt;() 24 ]) 25 ---&gt; 26 model.compile(optimizer=&quot;Proximal AdaGrad&quot;, loss=&quot;sparse_categorical_crossentropy&quot;, metrics=[&quot;accuracy&quot;]) 27 28 model.fit(train_images, train_labels, epochs=200) 6 frames /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py in _method_wrapper(self, *args, **kwargs) 455 self._self_setattr_tracking = False # pylint: disable=protected-access 456 try: --&gt; 457 result = method(self, *args, **kwargs) 458 finally: 459 self._self_setattr_tracking = previous_value # pylint: disable=protected-access /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, distribute, **kwargs) 250 'experimental_run_tf_function', True) 251 --&gt; 252 self._set_optimizer(optimizer) 253 is_any_optimizer_v1 = any(isinstance(opt, optimizers.Optimizer) 254 for opt in nest.flatten(self.optimizer)) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/training.py in _set_optimizer(self, optimizer) 1451 self.optimizer = [optimizers.get(opt) for opt in optimizer] 1452 else: -&gt; 1453 self.optimizer = optimizers.get(optimizer) 1454 1455 if (self._dtype_policy.loss_scale is not None and /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizers.py in get(identifier) 844 elif isinstance(identifier, six.string_types): 845 config = {'class_name': str(identifier), 'config': {}} --&gt; 846 return deserialize(config) 847 else: 848 raise ValueError('Could not interpret optimizer identifier:', identifier) /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/optimizers.py in deserialize(config, custom_objects) 813 module_objects=all_classes, 814 custom_objects=custom_objects, --&gt; 815 printable_module_name='optimizer') 816 817 /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name) 178 config = identifier 179 (cls, cls_config) = class_and_config_for_serialized_keras_object( --&gt; 180 config, module_objects, custom_objects, printable_module_name) 181 182 if hasattr(cls, 'from_config'): 
/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/utils/generic_utils.py in class_and_config_for_serialized_keras_object(config, module_objects, custom_objects, printable_module_name) 163 cls = module_objects.get(class_name) 164 if cls is None: --&gt; 165 raise ValueError('Unknown ' + printable_module_name + ': ' + class_name) 166 return (cls, config['config']) 167 ValueError: Unknown optimizer: Proximal AdaGrad </code></pre> <p>I tried naming it other things such as &quot;ProximalAdaGrad&quot; and &quot;ProximalGrad&quot;, but none of those worked. The activation functions look to have no issues, but the optimizer name seems not to be recognized. I searched for a post on GitHub but did not find anyone posting an issue about this.</p>
<p>There is an <a href="https://github.com/tensorflow/addons/issues/591" rel="nofollow noreferrer">open issue about this</a>. The TensorFlow implementation exists (even in TensorFlow 2.x, as <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/ProximalAdagradOptimizer" rel="nofollow noreferrer"><code>tf.compat.v1.train.ProximalAdagradOptimizer</code></a>), but there is no corresponding Keras implementation at the moment. However, the Keras API is able to wrap an existing TensorFlow optimizer, so you should be able to do the following:</p> <pre><code># This works both in recent 1.x and 2.0 optimizer = tf.compat.v1.train.ProximalAdagradOptimizer(0.001) model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy", metrics=["accuracy"]) </code></pre>
python|python-3.x|tensorflow|keras|deep-learning
1
377,595
59,225,070
aggregating, grouping and UN stack to many columns
<p>I am working in Python. I have a dataframe of 177 columns that contains patient values for 24 hours, as in this:</p> <pre><code>subject_id hour_measure urinecolor Respiraory 3 1.00 red 40 3 1.15 red 90 4 2.00 yellow 60 </code></pre> <p>For every hour I want to calculate some statistics like mean, max, std, skew, etc.</p> <p>As it contains text and numeric columns, I can't loop over the whole dataframe to aggregate; therefore I try to do it for every column, like in the following code</p> <pre><code> grouped= df.groupby(['Hour_measure','subject_id']).agg({&quot;Heart Rate&quot;:['sum','min','max','std', 'count','var','skew']}) grouped2= df.groupby(['Hour_measure','subject_id']).agg({&quot;Respiraory&quot;:['sum','min','max','std', 'count']}) #write aggregated values to csv file grouped.columns=[&quot;_&quot;.join(x) for x in grouped.columns.ravel()] grouped.to_csv('temp3.csv') with open('temp3.csv', 'a') as f: grouped2.to_csv(f, header=True) # unstack to convert all to rows df.set_index(['subject_id','Hour_measure']).unstack() </code></pre> <p>This code works correctly, but the idea is that I want to use a loop to aggregate every numeric column. For every text column I want to choose the most frequent value in the hour instead of the statistical functions, and also add it to the file, which will finally be stacked based on subject_id and hour_measure, to end up like this</p> <pre><code> heart rate 1 2 3.... to 24 then the next feature subject_id min max std skew min max std 1 40 110 50 60 60 290 40 </code></pre>
<p>Use:</p> <pre><code>print (df) hour subject_id hour_measure urinecolor Respiraory 0 1 3 1.00 red 40 1 1 3 1.15 red 90 2 1 4 2.00 yellow 60 </code></pre> <hr> <pre><code>df1 = (df.groupby(['hour_measure','subject_id', 'hour']) .agg(['sum','min','max','std', 'count','var','skew'])) print (df1) Respiraory sum min max std count var skew hour_measure subject_id hour 1.00 3 1 40 40 40 NaN 1 NaN NaN 1.15 3 1 90 90 90 NaN 1 NaN NaN 2.00 4 1 60 60 60 NaN 1 NaN NaN f = lambda x: next(iter(x.mode()), None) cols = df.select_dtypes(object).columns df2 = df.groupby(['hour_measure','subject_id', 'hour'])[cols].agg(f) df2.columns = pd.MultiIndex.from_product([df2.columns, ['mode']]) print (df2) urinecolor mode hour_measure subject_id hour 1.00 3 1 red 1.15 3 1 red 2.00 4 1 yellow </code></pre> <hr> <pre><code>df3 = pd.concat([df1, df2], axis=1).unstack().reorder_levels([0,2,1], axis=1) print (df3) Respiraory urinecolor hour 1 1 sum min max std count var skew mode hour_measure subject_id 1.00 3 40 40 40 NaN 1 NaN NaN red 1.15 3 90 90 90 NaN 1 NaN NaN red 2.00 4 60 60 60 NaN 1 NaN NaN yellow </code></pre>
python|python-3.x|pandas|scikit-learn|pyhook
0
377,596
59,080,993
best way of counting number of rows for each grouped by column
<p>After grouping by two columns, df.groupby([&quot;id&quot;,&quot;b&quot;]), I now want to find &quot;id&quot; values where there are more than 5 rows.</p> <p>So in the df below,</p> <p>id = 4 has 2 rows<br> id = 5 has 3 rows.</p> <pre><code> count id b 4 1568 1 4167 1 5 1100 1 1832 2 1969 5 </code></pre> <p>So far I've simply used reset_index(),</p> <p>got the id values on each row, then added them up.</p>
<p>Try this (<code>df.groupby(&quot;id&quot;).size()</code> gives the same counts more directly):</p> <pre><code>rows = df.groupby(&quot;id&quot;)[&quot;b&quot;].apply(lambda x: len(list(x))) </code></pre> <p>Output:</p> <pre><code>id 4 2 5 3 </code></pre>
python|pandas
1
377,597
59,285,977
Fill cells of a new dataframe without losing some columns of the former one
<p>I have a dataframe with a reference to a commune, a number of votes in this city and the results of a few parties.</p> <pre><code> Comm Votes LPC CPC BQ 0 comm1 1315.0 2.0 424.0 572.0 1 comm2 4682.0 117.0 2053.0 1584.0 2 comm3 2397.0 2.0 40.0 192.0 3 comm4 931.0 2.0 12.0 345.0 4 comm5 842.0 47.0 209.0 76.0 ... ... ... ... ... ... 1522 comm1523 23808.0 1588.0 4458.0 13147.0 1523 comm1524 639.0 40.0 126.0 40.0 1524 comm1525 10477.0 13.0 673.0 333.0 1525 comm1526 2674.0 1.0 55.0 194.0 1526 comm1527 1691.0 331.0 29.0 78.0 </code></pre> <p>I have code that creates random columns for all party columns except the original one, in order to pretend to have preferences rather than votes:</p> <pre><code>def fill_cells(cell): votes_max = cell['Votes'] all_dict = {} parties_temp = parties.copy() #iterate over parties for p in parties_temp: preferences = ['1','2','3'] # iterate over preferences for preference in preferences: preferences.remove(preference) # sample new data with equal choices sampled = np.random.choice(preferences, int(votes_max-cell[p])) # transform into dictionary c_sampled = dict(collections.Counter(sampled)) c_sampled.update({p:cell[p]}) c_sampled['1'] = c_sampled.pop(p) # batch update of the dictionary keys all_dict.update( dict(zip([p+'_%s' %k for k in c_sampled.keys()],c_sampled.values())) ) # but I'm losing the city references return pd.Series(all_dict) </code></pre> <p>It returns:</p> <pre><code>LPC_2 LPC_3 LPC_1 CPC_2 CPC_3 CPC_1 BQ_2 BQ_3 BQ_1 0 891.0 487.0 424.0 743.0 373.0 572.0 1313.0 683.0 2.0 1 2629.0 1342.0 2053.0 3098.0 1603.0 1584.0 4565.0 2301.0 117.0 2 2357.0 1186.0 40.0 2205.0 1047.0 192.0 2395.0 1171.0 2.0 3 919.0 451.0 12.0 586.0 288.0 345.0 929.0 455.0 2.0 4 633.0 309.0 209.0 766.0 399.0 76.0 795.0 396.0 47.0 ... ... ... ... ... ... ... ... ... ... 1520 1088.0 536.0 42.0 970.0 462.0 160.0 1117.0 540.0 13.0 1521 4742.0 2341.0 219.0 3655.0 1865.0 1306.0 4705.0 2375.0 256.0 1522 19350.0 9733.0 4458.0 10661.0 5352.0 13147.0 22220.0 11100.0 1588.0 1523 513.0 264.0 126.0 599.0 267.0 40.0 599.0 306.0 40.0 1524 9804.0 4885.0 673.0 10144.0 5012.0 333.0 10464.0 5162.0 13.0 </code></pre> <p>However with this code I lose the votes and the city reference. 
How can I keep them?</p> <p>I tried:</p> <pre><code>def fill_cells(cell): votes_max = cell['Votes'] all_dict = {} parties_temp = parties.copy() #iterate over parties for p in parties_temp: # preferences = ['1','2','3','4','5','6','7','8','9','10','11'] preferences = ['1','2','3'] # iterate over preferences for preference in preferences: preferences.remove(preference) # sample new data with equal choices sampled = np.random.choice(preferences, int(votes_max-cell[p])) # transform into dictionary c_sampled = dict(collections.Counter(sampled)) c_sampled.update({p:cell[p]}) c_sampled['1'] = c_sampled.pop(p) # batch update of the dictionary keys all_dict.update( dict(zip(cell['Votes'],cell['Comm'],[p+'_%s' %k for k in c_sampled.keys()],c_sampled.values())) ) return pd.Series(all_dict) </code></pre> <p>But it returns:</p> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-255-dd1231fd4f2d&gt; in &lt;module&gt; ----&gt; 1 df_test.apply(fill_cells, axis =1) C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\frame.py in apply(self, func, axis, broadcast, raw, reduce, result_type, args, **kwds) 6926 kwds=kwds, 6927 ) -&gt; 6928 return op.get_result() 6929 6930 def applymap(self, func): C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\apply.py in get_result(self) 184 return self.apply_raw() 185 --&gt; 186 return self.apply_standard() 187 188 def apply_empty_result(self): C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_standard(self) 290 291 # compute the result using the series generator --&gt; 292 self.apply_series_generator() 293 294 # wrap results C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\apply.py in apply_series_generator(self) 319 try: 320 for i, v in enumerate(series_gen): --&gt; 321 results[i] = self.f(v) 322 keys.append(v.name) 323 except Exception as e: &lt;ipython-input-253-d0d8d6a4b22d&gt; in fill_cells(cell) 18 # batch update of the dictionary keys 19 all_dict.update( ---&gt; 20 dict(zip(cell['Votes'],cell['Comm'],[p+'_%s' %k for k in c_sampled.keys()],c_sampled.values())) 21 ) 22 return pd.Series(all_dict) TypeError: ('zip argument #1 must support iteration', 'occurred at index 0') </code></pre>
<p>I had a major doubt about empty values in the dataframe. I suppose we are talking about this kind of dataframe, where all parties and all communes are listed, but with NULL values here and there. Consider my example:</p> <pre><code> print(df) Comm Votes LPC CPC BQ 0 comm1 1315.0 2.0 424.0 572.0 1 comm2 4682.0 117.0 2053.0 1584.0 2 comm3 2397.0 2.0 40.0 192.0 3 comm4 931.0 2.0 12.0 345.0 4 comm5 842.0 47.0 209.0 76.0 5 comm6 1000.0 NaN 309.0 NaN 6 comm7 1203.0 315.0 NaN 210.0 7 comm8 581 NaN NaN NaN </code></pre> <p>Then comes my other doubt, about situations like the one in row comm6 of my example: is Votes the sum of the votes for the parties, or the number of voting people in the commune? I guess it must be the latter, because your code seems to distribute the votes among the parties based on a random choice.</p> <p>This is my solution.</p> <pre><code>import pandas as pd import numpy as np from random import randint # integers generation import collections empties = np.where(pd.isnull(df)) #map the nulls print(empties) (array([5, 5, 6, 7, 7, 7], dtype=int32), array([2, 4, 3, 2, 3, 4], dtype=int32)) #1st array is row, 2nd is columns rows=collections.Counter(empties[0]) #Counter({7: 3, 5: 2, 6: 1}), key is row, value will be divisor distribution = {} # empty dictionary for k,v in rows.items(): all_votes = df.iloc[k,1:2] # getting num of voting people for the row voted = df.iloc[k,2:].sum(axis=0) # sum of votes in the row available = all_votes - voted l_dist = [] #is 0 votes fair? I think so, unless it is the only missing value if v == 1: l_dist = list(available) else: for i in range(v): if i != (v-1): #not the last one l_dist.append(float(randint(0,(available.any()-sum(l_dist))))) else: l_dist.append(available.any()-sum(l_dist)) distribution[k] = l_dist #example({5: [415.0, 276.0], 6: [678.0], 7: [58.0, 319.0, 204.0]}) for k,v in distribution.items(): e = np.array(empties) idx = np.where(e[0] == k) row_name = 'comm'+str(k+1) cols_pos = np.take(e[1],idx)[0] for v_pos,ind in zip(v,range(len(v))): col_name = df.columns[cols_pos[ind]] df.loc[df['Comm'] == row_name, col_name] = v[ind] print(df) Comm Votes LPC CPC BQ 0 comm1 1315.0 2.0 424.0 572.0 1 comm2 4682.0 117.0 2053.0 1584.0 2 comm3 2397.0 2.0 40.0 192.0 3 comm4 931.0 2.0 12.0 345.0 4 comm5 842.0 47.0 209.0 76.0 5 comm6 1000.0 415.0 309.0 276.0 6 comm7 1203.0 315.0 678.0 210.0 7 comm8 581.0 58.0 319.0 204.0 </code></pre>
python|python-3.x|pandas|dataframe
0
377,598
59,316,821
difference between two list and handle exception if results are null
<p>I have a dataframe with two columns. Each column holds a list of items and I am trying to subtract one column from another as below.</p> <pre><code> test['new'] = test['products'].apply(set) - test['old_products'].apply(set) </code></pre> <p>This works.</p> <p>When there are no elements, the newly created column looks like <code>test['new'] = set()</code>. How can I handle this case and set the value to <code>NA</code> if the result is empty? Thanks.</p>
<p>As usual with exceptions, try something like:</p> <pre><code>try: test['new'] = test['products'].apply(set) - test['old_products'].apply(set) except: test['new'] = 'NA' </code></pre> <p>Check the documentation here: <a href="https://docs.python.org/3/tutorial/errors.html#handling-exceptions" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/errors.html#handling-exceptions</a></p> <p><strong>EDIT.</strong> After discussion, it seems that there is no exception; the result is just an empty <code>set()</code> in some rows (and comparing a whole Series to <code>set()</code> does not work element-wise), so handle it row by row instead:</p> <pre><code>import numpy as np test['new'] = test['products'].apply(set) - test['old_products'].apply(set) test['new'] = test['new'].apply(lambda s: s if s else np.nan) </code></pre>
python-3.x|pandas
1
377,599
59,097,774
Process the data in the database and write to a new table
<p>The problem I'm having is processing tables in the database and then merging everything to write to another table.</p> <p>The table structure is like this:</p> <p><a href="https://i.stack.imgur.com/eI0S0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eI0S0.jpg" alt="enter image description here"></a></p> <p>There are 3 database tables.</p> <p>I want to merge them into one table by the entryId.</p> <p>My idea is to extract the data with pandas and process it.</p> <p>But there are many problems writing to the database.</p> <p>Below is the code I tried:</p> <pre><code># -*- coding: UTF-8 -*- import pandas as pd import pymongo def data_Process(): client = pymongo.MongoClient(host=&quot;mongodb://localhost:27017/&quot;) collection1 = client.test.declare_customs collection2 = client.test.tax collection3 = client.test.ship collection4 = client.test.jieguan data1 = pd.DataFrame(list(collection1.find())) data2 = pd.DataFrame(list(collection2.find())) data3 = pd.DataFrame(list(collection3.find())) data3 = data3.groupby(by=['entryId']).agg(';'.join) data4 = pd.merge(data1, data2, on='entryId', how='left') data5 = pd.merge(data4, data3, on='entryId', how='left') data5.to_excel('data5.xlsx') collection4.insert(data5) if __name__ == '__main__': data_Process() </code></pre> <p>Error: <a href="https://i.stack.imgur.com/wIflo.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wIflo.jpg" alt="enter image description here"></a></p> <p>Any better ideas are welcome.</p> <p>Thanks.</p>
<p>Refer to the answers below:</p> <p><a href="https://stackoverflow.com/questions/20167194/insert-a-pandas-dataframe-into-mongodb-using-pymongo">Insert a Pandas Dataframe into mongodb using PyMongo</a></p> <p><a href="https://stackoverflow.com/a/20167984/3704501">https://stackoverflow.com/a/20167984/3704501</a></p> <p>You need to convert the pandas dataframe into JSON-like records before inserting it into the mongo collection.</p>
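<p>A minimal sketch with the names from the question (<code>insert</code> is deprecated in recent pymongo; <code>insert_many</code> is the current method):</p>
<pre><code>records = data5.to_dict('records')   # one dict per row
collection4.insert_many(records)
</code></pre>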
python|pandas|mongodb
0