Dataset columns (name · dtype · observed range):
Unnamed: 0 · int64 · 0 to 378k
id · int64 · 49.9k to 73.8M
title · string · lengths 15 to 150
question · string · lengths 37 to 64.2k
answer · string · lengths 37 to 44.1k
tags · string · lengths 5 to 106
score · int64 · -10 to 5.87k
377,500
47,172,754
Tensorflow - Using tf.losses.hinge_loss causing Shapes Incompatible error
<p>My current code using <code>sparse_softmax_cross_entropy</code> works fine. </p> <pre class="lang-py prettyprint-override"><code>loss_normal = ( tf.reduce_mean(tf.losses .sparse_softmax_cross_entropy(labels=labels, logits=logits, ...
<p>The tensor <code>labels</code> has the shape <code>[1024]</code>, the tensor <code>logits</code> has <code>[1024, 2]</code> shape. This works fine for <a href="https://www.tensorflow.org/api_docs/python/tf/nn/sparse_softmax_cross_entropy_with_logits" rel="nofollow noreferrer"><code>tf.nn.sparse_softmax_cross_entropy...
machine-learning|tensorflow|cross-entropy
1
377,501
47,373,002
Fine-tuning from ./checkpoints/ssd_300_vgg.ckpt. Ignoring missing vars: False
<p>I want to train SSD for num_classes=2. The link to this code is <a href="https://github.com/balancap/SSD-Tensorflow" rel="nofollow noreferrer">here</a>. I ran this code in the command prompt:</p> <pre><code>python train_ssd_network.py \ --train_dir=./logs/ \ --dataset_dir=./tfrecords\ --dataset_name=pascalv...
<p>You should define "checkpoint_exclude_scopes" if you want to fine-tune a network trained on ImageNet. More detail is described in the SSD-Tensorflow <a href="https://github.com/balancap/SSD-Tensorflow" rel="nofollow noreferrer">README.md</a></p>
tensorflow|deep-learning|jupyter-notebook|object-detection
0
377,502
47,444,999
Check if column contains type string (object)
<p>I have a huge dataset with thousands of rows and hundreds of columns. One of these columns contains a string, because I am getting an error. I want to locate this string. All my columns are supposed to be float values, however one of these columns has a type <code>str</code> somewhere.</p> <p>How can I loop through a...
<p>Using <code>applymap</code> with <code>type</code></p> <pre><code>df = pd.DataFrame({'C1': [1,2,3,'4'], 'C2': [10, 20, '3',40]}) df.applymap(type)==str Out[73]: C1 C2 0 False False 1 False False 2 False True 3 True False </code></pre> <p>Here you can see which cells are <code>str</code>. Then we use <code>np.whe...
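A self-contained sketch of the applymap-with-type approach from the answer above (column names and values invented; on pandas >= 2.1, `df.map(type)` is the non-deprecated spelling of the same check):

```python
import numpy as np
import pandas as pd

# Frame where two cells are str instead of a number
df = pd.DataFrame({'C1': [1, 2, 3, '4'], 'C2': [10, 20, '3', 40]})

# Element-wise type check: True wherever the cell holds a str
mask = df.applymap(type) == str

# Row/column positions of the offending cells
rows, cols = np.where(mask)
print(list(zip(rows.tolist(), cols.tolist())))  # [(2, 1), (3, 0)]
```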
python|python-3.x|pandas|dataframe|dataset
5
377,503
47,273,887
pandas Date: get numeric time interval
<p>i have the following dataframe example with the Date and Interval:</p> <pre><code> Date Interval 0 2013-08-01 14:00:00 1 2013-08-01 14:15:00 2 2013-08-01 14:30:00 3 2013-08-01 14:45:00 4 2013-08-01 15:00:00 ... </code><...
<p>IIUC, I think you want this:</p> <pre><code> df['timestamp'] = pd.to_datetime(df['Date'] + ' ' + df['Interval']) df['IntervalMap'] = df['timestamp'].dt.hour.mul(4) + df['timestamp'].dt.minute.floordiv(15) + 1 </code></pre> <p>Output:</p> <pre><code> Date Interval timestamp IntervalMap 0 2013...
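A runnable version of the answer's recipe with a few invented rows; IntervalMap numbers the day's 15-minute slots starting at 1:

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['2013-08-01', '2013-08-01', '2013-08-01'],
    'Interval': ['14:00:00', '14:15:00', '14:30:00'],
})

# Join the two string columns and parse them as one timestamp
df['timestamp'] = pd.to_datetime(df['Date'] + ' ' + df['Interval'])

# 4 slots per hour, plus the slot within the hour, 1-based
df['IntervalMap'] = (df['timestamp'].dt.hour.mul(4)
                     + df['timestamp'].dt.minute.floordiv(15) + 1)
print(df['IntervalMap'].tolist())  # [57, 58, 59]
```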
python|pandas|dataframe
0
377,504
47,087,741
Use TQDM Progress Bar with Pandas
<p>Is it possible to use TQDM progress bar when importing and indexing large datasets using Pandas?</p> <p>Here is an example of of some 5-minute data I am importing, indexing, and using to_datetime. It takes a while and it would be nice to see a progress bar.</p> <pre><code>#Import csv files into a Pandas dataframe...
<p>Find length by getting shape</p> <pre><code>for index, row in tqdm(df.iterrows(), total=df.shape[0]): print("index",index) print("row",row) </code></pre>
python|pandas|tqdm
179
377,505
47,426,764
Custom Keras Loss Function throws 'ValueError None'
<h2>This custom Keras Loss function:</h2> <pre><code>def thresholdLoss (actual, predicted): rel = predicted / safeActual relAbove = tf.greater(rel, 1.50) relBelow = tf.less(rel, .50) isOutsideThresh = tf.logical_or(relAbove, relBelow, name='outsideRange') errCounts = tf.to_float(tf.count_nonzero(isOutsideThr...
<p>This is a result of the loss function not being continuous. It returns a count, rather than a smooth/continuously-varying, real-valued error.</p> <p>Keras computes a gradient from the loss function. When that function is not continuous, Keras can return None values as the gradient vector. And this is the result of ...
tensorflow|keras
1
377,506
47,260,715
How to get the value of a feature in a layer that matches a key in the state dict in PyTorch?
<p>I have a CNN, and I want to fetch the value of some intermediate layer corresponding to some key from the state dict. How could this be done? Thanks.</p>
<p>I think you need to create a new class that redefines the forward pass through a given model. However, <strong>most probably you will need to create the code regarding the architecture of your model</strong>. You can find here an example:</p> <pre><code>class extract_layers(): def __init__(self, model, target_...
python|pytorch
2
377,507
47,124,007
How to sum appearances in columns with pandas without lambda
<p>I have the following dataframe</p> <pre><code> a b c d e 0 0 0 -1 1 -1 1 0 1 -1 1 -1 2 -1 0 -1 1 1 3 -1 1 1 -1 1 4 1 0 1 -1 1 5 1 0 0 0 1 6 1 1 0 0 -1 7 1 1 -1 0 0 </code></pre> <p>For each numb...
<p>Using <code>get_dummies</code></p> <pre><code>df=df.astype(str) pd.get_dummies(df.stack()).sum(level=0) Out[667]: -1 0 1 0 2 2 1 1 2 1 2 2 2 1 2 3 2 0 3 4 1 1 3 5 0 3 2 6 1 2 2 7 1 2 2 </code></pre> <p>More info </p> <pre><code>pd.concat([df,pd.get_dummies(df.stack()).sum(lev...
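The get_dummies idea from the answer, in runnable form with toy values. Note that `.sum(level=0)` as written in the answer was removed in pandas 2.0; `.groupby(level=0).sum()` is the equivalent:

```python
import pandas as pd

df = pd.DataFrame({'a': [0, -1], 'b': [0, 1], 'c': [-1, 1]})

# Stack to one long column, one-hot encode the values, then
# collapse the dummies back to per-row counts of -1 / 0 / 1.
counts = pd.get_dummies(df.astype(str).stack()).groupby(level=0).sum()
print(counts)  # row 0 -> -1:1, 0:2, 1:0 ; row 1 -> -1:1, 0:0, 1:2
```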
python|pandas|numpy|dataframe|lambda
2
377,508
47,183,159
How to set weights in Keras with a numpy array?
<p>I am having trouble with the Keras backend functions for setting values. I am trying to convert a model from PyTorch to Keras and am trying to set the weights of the Keras model, but the weights do not appear to be getting set. Note: I am not actually setting with np.ones just using that for an example.</p> <p>I ...
<p>What is <code>keras_layer</code> in your code?</p> <p>You can set weights these ways:</p> <pre><code>model.layers[i].set_weights(listOfNumpyArrays) model.get_layer(layerName).set_weights(...) model.set_weights(listOfNumpyArrays) </code></pre> <p>Where <code>model</code> is an instance of an existing model. Y...
python|tensorflow|machine-learning|keras|deep-learning
52
377,509
47,521,393
Best way in Pytorch to upsample a Tensor and transform it to rgb?
<p>For a nice output in Tensorboard I want to show a batch of input images, corresponding target masks and output masks in a grid. Input images have different size then the masks. Furthermore the images are obviously RGB. From a batch of e.g. 32 or 64 I only want to show the first 4 images.</p> <p>After some fiddling ...
<p>Maybe you can just convert your tensors to numpy arrays (<code>.data.cpu().numpy()</code>) and use OpenCV to do the upsampling? The OpenCV implementation should be quite fast.</p>
python|python-3.x|deep-learning|tensorboard|pytorch
2
377,510
47,384,508
KeyError: "The name 'predictions/accuracy/Mean:0' refers to a Tensor which does not exist...."
<p>I am using tensorflow1.4cpu version on my ubuntu16.04. I have previously trained a convnet model on mnist dataset.</p> <p>Now i want to access the model again and predict accuracy on mnist.test.images. I successfully load the model:</p> <pre><code>import tensorflow as tf from tensorflow.examples.tutorials.mnist im...
<p>The method tf.train.Saver().restore works. But the other method tf.train.import_meta_graph('tradConv_mnistModel/tradConvMnist-10000.meta') does not work!</p>
tensorflow|mnist
-1
377,511
47,355,141
DatabaseError: ('HY000', '[HY000] [Microsoft][ODBC SQL Server Driver]Connection is busy with results for another hstmt (0) (SQLExecDirectW)')
<p>I am trying to read data from SQL server into pandas data frame. Below is the code.</p> <pre><code>def get_data(size): con = pyodbc.connect(r'driver={SQL Server}; server=SPROD_RPT01; database=Reporting') cur = con.cursor() db_cmd = &quot;select distinct top %s * from dbo.KrishAnalyticsAllCalls&quot; %siz...
<p>I was facing the same issue. This was fixed when I used <code>fetchall()</code> function. The following the code that I used.</p> <pre class="lang-py prettyprint-override"><code>import pypyodbc as pyodbc def connect(self, query): con = pyodbc.connect(self.CONNECTION_STRING) cursor = con.cursor() print(...
python|sql-server|pandas|pyodbc
9
377,512
47,519,802
does tensorflow convert_to_tensor do memory copy?
<p>I am creating a tensorflow constant based on a ndarray list object. My understanding was that the tensor itself wouldn't do a memory copy of the underlying data, but create a python object using the same underlying ndarray data. However, after running a little test, it seems like it does copy the data </p> <pre><code>def m...
<p><code>tf.convert_to_tensor</code> uses a <code>_tensor_conversion_func_registry</code> where conversion functions are registered for different input types. For Python list, tuple and numpy nd.arrays this happens in <a href="https://github.com/tensorflow/tensorflow/blob/5a810ea2b3ff056a8d7dc5eef19c8193ffc0f8f6/tensor...
python|numpy|tensorflow
0
377,513
47,329,860
Pandas, are there any faster ways to update values?
<p>Currently, my table has over 10000000 records, and there is a column named <code>ID</code>, and I want to update column named '3rd_col' with a new value if the <code>ID</code> is in the given list.</p> <p>I use <code>.loc</code> and here is my code</p> <pre><code>for _id in given_ids: df.loc[df.ID == _id, '3rd...
<p>You can create <code>dictionary</code> first and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.replace.html" rel="nofollow noreferrer"><code>replace</code></a>:</p> <pre><code>#sample function def return_new_val(x): return x * 3 given_ids = list('abc') d = {_id: return_new_...
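Besides replace, a boolean mask plus map avoids the per-id loop entirely; a minimal sketch with invented ids and values:

```python
import pandas as pd

df = pd.DataFrame({'ID': ['a', 'b', 'c', 'd'],
                   'third_col': [1, 2, 3, 4]})

# Hypothetical new values for the ids that need updating
new_vals = {'a': 10, 'c': 30}

# One vectorised assignment instead of a loop over given_ids
mask = df['ID'].isin(new_vals.keys())
df.loc[mask, 'third_col'] = df.loc[mask, 'ID'].map(new_vals)
print(df['third_col'].tolist())  # [10, 2, 30, 4]
```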
python|pandas
7
377,514
47,433,199
How can I feed a batch into LSTM without reordering them by length in PyTorch?
<p>I am new to Pytorch and I have got into some trouble. I want to build a rank-model to judge the similarity of question and its answers (including right answers and wrong answers). And I use LSTM as an encoder. </p> <p>There are two LSTMs in my model and they share the weights. So my model’s input is two sequences (...
<p>Maybe a starting point could be to use a similar RNN wrapper as here <a href="https://github.com/facebookresearch/DrQA/blob/master/drqa/reader/layers.py#L20" rel="nofollow noreferrer">https://github.com/facebookresearch/DrQA/blob/master/drqa/reader/layers.py#L20</a> You can encode question and answer separately (this...
pytorch
1
377,515
47,535,406
How does numpy allocate memory for nested array?
<p>The following program creates a large array from a nested list of arrays:</p> <pre><code>import numpy as np a = np.arange(6).reshape(2, 3) nested_list = [[a, a + 1], [a + 2, a + 3]] b = np.array(nested_list) </code></pre> <p>Does np.array pre-allocate memory for only once for the result before copying data into th...
<p><code>nested_list = [[a, a + 1], [a + 2, a + 3]]</code> produces 3 new arrays (the sums) plus a list of pointers to those arrays. That's just basic Python interpreter action.</p> <p><code>b = np.array(nested_list)</code>: <code>np.array</code> is a complex compiled function, so without some serious digging it is h...
python|numpy
1
377,516
47,373,706
EnumDescriptorProto error while importing tensorflow
<p>Im trying to run the following code</p> <pre><code>import tensorflow as tf print("Hello TensorFlow version", tf.__Version__) </code></pre> <p>It is firing the following error</p> <blockquote> <p>Users/anaconda/envs/cnn/bin/python /Users/Downloads/rude-carnie/version.py Traceback (most recent call last): Fi...
<p>Today I came across the same problem. Uninstall all protobuf packages on your machine (both from anaconda and pip)</p> <pre><code>pip uninstall protobuf conda remove protobuf </code></pre> <p>Then install protobuf using pip:</p> <pre><code>pip install protobuf==3.5.0.post1 </code></pre> <p>Source : <a href="https://github.co...
python|tensorflow
0
377,517
47,516,286
Append arrays of different dimensions to get a single array
<p>I have three vectors (numpy arrays), <code>vector_1, vector_2, vector_3</code> as follows:</p> <p>Dimension(vector1)=(200,2048)</p> <p>Dimension(vector2)=(200,8192)</p> <p>Dimension(vector3)=(200,32768)</p> <p>I would like to append these vectors to get <strong>vector_4</strong>:</p> <p><strong>Dimension(vec...
<p>I believe you are looking for <code>numpy.hstack</code>.</p> <pre><code>&gt;&gt;&gt; import numpy as np &gt;&gt;&gt; a = np.arange(4).reshape(2,2) &gt;&gt;&gt; b = np.arange(6).reshape(2,3) &gt;&gt;&gt; c = np.arange(8).reshape(2,4) &gt;&gt;&gt; a array([[0, 1], [2, 3]]) &gt;&gt;&gt; b array([[0, 1, 2], ...
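Applied to the shapes in the question, the hstack suggestion gives one (200, 43008) array (zeros used as stand-in data):

```python
import numpy as np

a = np.zeros((200, 2048))
b = np.zeros((200, 8192))
c = np.zeros((200, 32768))

# Concatenate along axis 1; the first dimension must match
vector_4 = np.hstack([a, b, c])
print(vector_4.shape)  # (200, 43008)
```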
python|arrays|numpy|append
1
377,518
47,286,547
Is there a way in pd.read_csv to replace NaN value with other character?
<p>I have some data in a csv file. Because it is collected from a machine, all lines should be numbers, but some NaN values exist in some lines. And the machine can auto-replace these NaN values with a string '-'.</p> <p>My question is how to set params of <strong><em>pd.read_csv()</em></strong> to auto-replace '-' values wit...
<p>while reading the <code>csv</code> file you can use the parameter <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="noreferrer">na_values</a>:</p> <pre><code>df = pd.read_csv('file.csv',na_values='-') </code></pre> <p>Edit: you can then convert nan to 0 by:</p> <pre><code>...
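A quick end-to-end check of the na_values suggestion, reading an in-memory CSV where '-' marks the machine's missing readings:

```python
import io
import pandas as pd

csv_text = "a,b\n1,-\n2,3\n"

# '-' cells become NaN at parse time
df = pd.read_csv(io.StringIO(csv_text), na_values='-')
print(df['b'].isna().tolist())     # [True, False]
print(df['b'].fillna(0).tolist())  # [0.0, 3.0]
```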
python|pandas|csv
13
377,519
47,335,625
Reshape array using numpy - ValueError: cannot reshape array
<p>I have an array of floats <code>vec</code> which I want to reshape</p> <pre><code>vec.shape &gt;&gt;&gt; (3,) len(vec[0]) # all 3 rows of vec have 150 columns &gt;&gt;&gt; 150 np.reshape(vec, (3,150)) &gt;&gt;&gt; ValueError: cannot reshape array of size 3 into shape (3,150) </code></pre> <p>but I get the error ab...
<p>The <code>vec.shape</code> means that the array has 3 items. But they are dtype object, that is, pointers to items else where in memory.</p> <p>Apparently the items are arrays themselves. One of the <code>concatenate</code> or <code>stack</code> functions can join them into one array, provided the dimensions matc...
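A reconstruction of the situation the answer describes, and the stack fix (150 columns as in the question, data invented):

```python
import numpy as np

# Build a (3,) object array whose elements are (150,) float arrays,
# like the `vec` in the question
rows = [np.arange(150.0) for _ in range(3)]
vec = np.empty(3, dtype=object)
for i, r in enumerate(rows):
    vec[i] = r

print(vec.shape)  # (3,) -- only 3 items, so reshaping to (3, 150) fails

# Join the element arrays into a real 2-D array instead of reshaping
mat = np.stack(list(vec))
print(mat.shape)  # (3, 150)
```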
python|arrays|numpy|reshape
3
377,520
47,085,662
Merge histograms with different ranges
<p>Is it any fast way to merge two numpy histograms with different bin ranges and bin number?</p> <p>For example:</p> <pre><code>x = [1,2,2,3] y = [4,5,5,6] a = np.histogram(x, bins=10) # a[0] = [1, 0, 0, 0, 0, 2, 0, 0, 0, 1] # a[1] = [ 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2, 2.4, 2.6, 2.8, 3. ] b = np.his...
<p>I'd actually have added a comment to dangom's answer, but I lack the reputation required. I'm a little confused by your example. You're plotting the histogram of the histogram bins if I'm not mistaken. It should rather be this, right?</p> <pre><code>plt.figure() plt.plot(a[1][:-1], a[0], marker='.', label='a') plt....
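One straightforward way to merge the two histograms, assuming (as in the question) the raw samples are still available: recompute both on shared bin edges and add the counts:

```python
import numpy as np

x = [1, 2, 2, 3]
y = [4, 5, 5, 6]

# Common edges spanning both data ranges
edges = np.linspace(1, 6, 11)

hx, _ = np.histogram(x, bins=edges)
hy, _ = np.histogram(y, bins=edges)
merged = hx + hy
print(merged.sum())  # 8 -- each of the 8 samples counted exactly once
```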
python|numpy|merge|histogram
3
377,521
47,482,009
Pandas rolling window to return an array
<p>Here is a sample code. </p> <pre><code>df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB')) df['C'] = df.B.rolling(window=3) </code></pre> <p>Output:</p> <pre><code> A B C 0 -0.108897 1.877987 Rolling [window=3,center=False,axis=0] 1 -1.276055 ...
<p>Since pandas <code>1.1</code> rolling objects are iterable.</p> <p>For a list of lists:</p> <pre><code>df['C'] = [window.to_list() for window in df.B.rolling(window=3)] </code></pre> <p>For a Series of Series's do:</p> <pre><code>df['C'] = pd.Series(df.B.rolling(window=3)) </code></pre> <p>Also checkout the <a href=...
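The iterable-rolling approach from the answer in runnable form (pandas >= 1.1); note the leading windows are shorter until the full window size is reached:

```python
import pandas as pd

df = pd.DataFrame({'B': [1.0, 2.0, 3.0, 4.0]})

df['C'] = [window.to_list() for window in df['B'].rolling(window=3)]
print(df['C'].tolist())
# [[1.0], [1.0, 2.0], [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]]
```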
python|pandas|numpy|dataframe
23
377,522
11,113,649
How to run a .py module?
<p>I've got zero experience with Python. I have looked around some tutorial materials, but it seems difficult to understand a advanced code. So I came here for a more specific answer. For me the mission is to redo the code in my computer. </p> <p>Here is the scenario:</p> <p>I'm a graduate student studying tensor fa...
<p>To answer Q2: you need to transform the RDF and save it before you can load it from the file 'us-presidents.pickle'. The author of that code probably did that once because the Python native pickle format loads faster. As the pickle format includes the datatype of the data, it is possible that <code>X</code> is some ...
python|numpy|scipy|rdf|factorization
1
377,523
11,254,248
Efficiently accumulating a collection of sparse scipy matrices
<p>I've got a collection of O(N) NxN <code>scipy.sparse.csr_matrix</code>, and each sparse matrix has on the order of N elements set. I want to add all these matrices together to get a regular NxN numpy array. (N is on the order of 1000). The arrangement of non-zero elements within the matrices is such that the resu...
<p>I think I've found a way to speed it up by a factor of ~10 if your matrices are very sparse.</p> <pre><code>In [1]: from scipy.sparse import csr_matrix In [2]: def sum_sparse(m): ...: x = np.zeros(m[0].shape) ...: for a in m: ...: ri = np.repeat(np.arange(a.shape[0]),np.diff(a.indptr)) ...
python|optimization|numpy|scipy|sparse-matrix
4
377,524
11,297,030
Matplotlib - Stepped histogram with already binned data
<p>I am trying to get a histogram with already binned data. I have been trying to use <code>bar()</code> for this, but I can't seem to figure out how to make it a stepped histogram <a href="http://matplotlib.org/mpl_examples/pylab_examples/histogram_demo_extended_02.png" rel="noreferrer">like this one from the examples...
<p>You could cheat, by offsetting your data and using <code>plot</code> instead:</p> <pre><code>from matplotlib import pyplot import numpy as np #sample data: x = np.arange(30) y = np.cumsum(np.arange(30)) #offset the x for horizontal, repeat the y for vertical: x = np.ravel(zip(x,x+1)) y = np.ravel(zip(y,y)) pyplot...
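On Python 3 the np.ravel(zip(...)) trick from the answer fails because zip returns an iterator; np.repeat builds the same stepped outline (counts and edges invented here):

```python
import numpy as np

counts = np.array([1, 3, 2])     # already-binned data
edges = np.array([0, 1, 2, 3])   # bin edges, len(counts) + 1

# Duplicate edges and counts to trace the outline of a stepped histogram
xs = np.repeat(edges, 2)[1:-1]
ys = np.repeat(counts, 2)
print(xs.tolist())  # [0, 1, 1, 2, 2, 3]
print(ys.tolist())  # [1, 1, 3, 3, 2, 2]
# plt.plot(xs, ys) draws the steps; newer matplotlib also offers
# plt.stairs(counts, edges) for exactly this case.
```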
python|numpy|matplotlib|scipy
8
377,525
11,373,192
Generating Discrete random variables with specified weights using SciPy or NumPy
<p>I am looking for a simple function that can generate an array of specified random values based on their corresponding (also specified) probabilities. I only need it to generate float values, but I don't see why it shouldn't be able to generate any scalar. I can think of many ways of building this from existing funct...
<p>Drawing from a discrete distribution is directly built into numpy. The function is called <a href="https://numpy.org/doc/stable/reference/random/generated/numpy.random.choice.html" rel="noreferrer">random.choice</a> (difficult to find without any reference to discrete distributions in the numpy docs).</p> <pre><code...
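A small demonstration of the weighted draw, using the newer Generator API (np.random.choice behaves the same; values and probabilities invented):

```python
import numpy as np

values = [1.1, 2.2, 3.3]
probabilities = [0.2, 0.5, 0.3]

rng = np.random.default_rng(0)
samples = rng.choice(values, size=10_000, p=probabilities)

# The empirical frequency of 2.2 should sit near its probability, 0.5
print(round(float(np.mean(samples == 2.2)), 2))
```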
python|random|numpy|scipy
82
377,526
68,188,278
"torch.relu_(input) unknown parameter type" from pytorch
<p>I am trying to run this <a href="https://github.com/ultravideo/Stereo-3D-Pose-Estimation" rel="nofollow noreferrer">3D pose estimation repo</a> in Google Colab on a GPU, but after doing all of the steps and putting in my own left/right cam vids, I get this error in Colab:</p> <pre><code>infering thread started 1 1 :...
<p>Since the traceback happens in the pytorch library, I checked the code there on the pytorch github.</p> <p>What the error means is that you are calling the in-place activation function torch.relu_ on some object called input. However, what is happening is that the type of input is not recognized by the torch backen...
python-3.x|pytorch|google-colaboratory|pose-estimation
1
377,527
68,295,285
Using values from a column for another column
<div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">County</th> <th style="text-align: center;">State</th> <th style="text-align: center;">County State</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">Davis County</td> <td style="text-align: center;">NE</td> ...
<p>After the <code>County</code> column is cleaned, seems like just string concatonating the two columns would work:</p> <pre><code>def county(df): df['County'].replace([r'\s*?County.*'], '', regex=True, inplace=True) df['County State'] = df['County'] + ', ' + df['State'] return df </code></pre> <p><code>co...
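A runnable sketch of the cleanup-then-concatenate step from the answer (two invented rows):

```python
import pandas as pd

df = pd.DataFrame({'County': ['Davis County', 'Salt Lake County'],
                   'State': ['NE', 'UT']})

# Drop the trailing " County" suffix, then join the two columns
df['County'] = df['County'].str.replace(r'\s*County.*', '', regex=True)
df['County State'] = df['County'] + ', ' + df['State']
print(df['County State'].tolist())  # ['Davis, NE', 'Salt Lake, UT']
```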
python|pandas
0
377,528
68,060,861
How do I modify an item in a pandas dataframe using python?
<p>I would like to be able to access an excel spreadsheet and modify the values in it which say AHU 01-01 to AHU-01-01. I would like to take the &quot;AHU &quot; and change it to &quot;AHU-&quot; without losing the numbers which follow right after.</p> <p>Here is my current code:</p> <pre><code>import pandas as pd df1=...
<pre><code>mydict = {'row_num': [1,2,3,4,5], 'col_to_change': ['AHU 03-01', 'AHU 03-02', 'AHU 03-03', 'AHU 04-01', 'AHU 01-01']} df = pd.DataFrame(mydict) df['changed_column'] = df[['col_to_change']].apply(lambda x : &quot;-&quot;.join(x[0].split()) , axis = 1) <...
python|pandas|dataframe|indexing
0
377,529
68,246,198
How do I sum up when complying to two conditions and then put the summed data in a new data frame?
<p>I have a data frame with dates, categories, and time durations. I want to sum the time durations if the entries have the same date and the same category.</p> <p>Input:</p> <pre><code>Date Duration Category 01/01/2021 0.1 Entertainment 01/01/2021 1.4 Working 01/01/2021 2.1 Entertainme...
<p>you can use <code>pivot_table()</code>:</p> <pre><code>out=(df.pivot_table('Duration','Date','Category',fill_value=0,aggfunc='sum') .rename_axis(columns=None) .reset_index()) </code></pre> <p><strong>OR</strong></p> <p>you can use <code>pd.crosstab()</code>:</p> <pre><code>out=(pd.crosstab(df['Date'],d...
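The pivot_table variant from the answer, run on a few rows shaped like the question's input:

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['01/01/2021', '01/01/2021', '01/01/2021', '01/02/2021'],
    'Duration': [0.1, 1.4, 2.1, 0.5],
    'Category': ['Entertainment', 'Working', 'Entertainment', 'Working'],
})

# Sum Duration per (Date, Category), one column per category
out = (df.pivot_table('Duration', 'Date', 'Category',
                      fill_value=0, aggfunc='sum')
         .rename_axis(columns=None)
         .reset_index())
print(out)
```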
python|pandas|dataframe|sum
1
377,530
68,171,716
What is the input shape of the InputLayer in keras Tensorflow?
<p>I have this data</p> <pre><code>X_regression = tf.range(0, 1000, 5) y_regression = X + 100 X_reg_train, X_reg_test = X_regression[:150], X_regression[150:] y_reg_train, y_reg_test = y_regression[:150], y_regression[150:] </code></pre> <p>I inspect the data input data</p> <pre><code>X_reg_train[0], X_reg_train[0].sh...
<p><code>InputLayer</code> is actually just the same as specifying the parameter <code>input_shape</code> in a <code>Dense</code> layer. Keras actually uses <code>InputLayer</code> when you use <code>method 2</code> in the background.</p> <pre><code># Method 1 model_reg.add(tf.keras.layers.InputLayer(input_shape=(1,)))...
python|tensorflow|deep-learning|tf.keras
0
377,531
68,340,223
How to batch an object detection dataset?
<p>I am working on implementing a face detection model on the wider face dataset. I learned it was built into <a href="https://www.tensorflow.org/datasets/catalog/wider_face" rel="nofollow noreferrer">Tensorflow datasets</a> and I am using it. However, I am facing an issue while batching the data. Since, an Image can h...
<p>It might be a bit late but I thought I should post this anyway. The padded_batch feature ought to do the trick here. It gets around the issue by matching dimensions via zero padding</p> <pre><code>ds,info = tfds.load('wider_face', split='train', shuffle_files=True, with_info= True) ds1 = ds.padded_batch(12)...
tensorflow|deep-learning|computer-vision|conv-neural-network|object-detection
1
377,532
68,046,110
simple quest about LD_LIBRARY_PATH and new entries
<p>In Ubuntu 16, when setting environment variable like <a href="https://stackoverflow.com/questions/13428910/how-to-set-the-environmental-variable-ld-library-path-in-linux">this</a>, what we had in LD_LIBRARY_PATH before will be erased? I have already ROS and want to add the <a href="https://github.com/GarrickLin/nump...
<p>Read about how ${LD_LIBRARY_PATH} is used like this:</p> <pre><code>$ man ld.so </code></pre> <p>You already know that ${PATH} is a list of directories that are polled when just an application name is given:</p> <pre><code>$ foo </code></pre> <p>and ${LD_LIBRARY_PATH} works exactly the same for the ld.so(1) dynamic ...
linux|python-2.7|numpy|environment-variables
1
377,533
68,128,827
How to know if a record has updated its date
<p>I want to know if a record has updated its date in a pandas Dataframe. The dataframe is made up of several columns in which, for each value of A, we have several values of B with start dates and end dates. Thanks to the timestamp we can know if there is a new record or a previous one has been modified.</p> <p>What I ...
<p>I'm not sure what you mean exactly with a 'close' date range, so this answer won't exactly match the output you listed in the question.</p> <p>For demo purposes I've made a csv file called <code>data.csv</code> with the data in your question</p> <pre><code>A,B,Start,End,Timestamp A1,B1,2021-05-10 00:00:00,2021-05-27...
python|pandas|dataframe|date|datetime
2
377,534
68,403,233
Adjust figure yellow bricks model - python
<p>I am trying to adjust the axes limits on a yellow bricks figure. However, I can't seem to adjust it. I can change axes labels and titles but not the limits. It works if I don't render the figure with <code>visualizer.show()</code> but then I lose labels, titles, legend etc.</p> <pre><code>from sklearn.linear_model i...
<p>Instead of calling the <code>visualizer.show()</code> method, you can try calling the <code>visualizer.finalize()</code> method and then accessing the underlying matplotlib axes to change the limits. You are also overwriting <code>ax</code> which wasn't doing you any favours either.</p> <p>Here is the full code exam...
python|pandas|yellowbrick
1
377,535
68,285,099
How to avoid overfitting in train data?
<p>I have train data 700 image for each gesture (5 gesture),<a href="https://i.stack.imgur.com/l4pWq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l4pWq.png" alt="train image" /></a> validation test data 200 image <a href="https://i.stack.imgur.com/5G60l.jpg" rel="nofollow noreferrer"><img src="ht...
<p>This situation may be occurring because you are setting the input_shape parameter in all convolutional layers. You should define it just in the first one.</p> <p>Your code should look like:</p> <pre><code>def get_model(): &quot;&quot;&quot; Returns a compiled convolutional neural network model. Assume that the `inpu...
python|conv-neural-network|tensorflow2.0
-1
377,536
68,222,820
Local Gradient Aggregation for Horovod using Tensorflow 1.X
<p>I am trying to use Horovod for distributing training GPU on different servers. Following the advice <a href="https://www.determined.ai/blog/optimizing-horovod" rel="nofollow noreferrer">Here</a>.</p> <p>I wanted to implement local gradient aggregation. In the explanation the modification looks easy <code>optimizer =...
<p>I have solved the problem. As indicated in the error message, the <code>aggregation_counter</code> variable is not initialized. I was using <code>sess.run(tf.global_variables_initializer())</code>. To solve the problem I added <code>sess.run(tf.local_variables_initializer())</code>. Doing this did the trick. I am not ...
python|tensorflow|tensorflow1.15|horovod
0
377,537
68,279,224
Why Running torchscript model inference on IOS results in threading error?
<p>I have been trying to integrate pytorch model developed on python into IOS. The example I have looked at is from this <a href="https://github.com/pytorch/ios-demo-app/blob/master/D2Go" rel="nofollow noreferrer">github repo</a>.</p> <p>I used the same d2go model in my own application. One thing I noticed is that if t...
<p>You should probably declare &quot;pixelBuffer&quot; in the same scope (inside the dispatch block)</p>
ios|multithreading|pytorch|dispatch-queue|torchscript
0
377,538
68,229,187
Keras model concat: Attribute and Value error
<p>This is a keras model I have made based on the paper Liu, Gibson, et al 2017 (<a href="https://arxiv.org/abs/1708.09022" rel="nofollow noreferrer">https://arxiv.org/abs/1708.09022</a>). It can be seen in fig1.</p> <p>I have 3 questions-</p> <ol> <li>I am not sure if I am correctly using concatenate as per the paper....
<p>You can use the <code>Functional()</code> API in order to solve your problem (I haven't read the paper, but here is how you can combine models and get a final output).</p> <p>I used 'relu' activation for simplicity purposes (ensure you use <code>keras</code> inside <code>tensorflow</code>)</p> <p>Here is the code th...
python|tensorflow|keras|deep-learning|concatenation
2
377,539
68,090,137
How can I subset a data frame for unique rows using repeating values from a column in another data frame in python?
<p>I have 2 data frames. I want to subset df_1 based on df_2 so that the rows in the resulting data frame correspond to the rows in df_2. Here are two example data frames:</p> <pre><code>df_1 = pd.DataFrame({ &quot;ID&quot;: [&quot;Lemon&quot;,&quot;Banana&quot;,&quot;Apple&quot;,&quot;Cherry&quot;,&quot;Tomato&quo...
<p>Try adding an indicator column to both <code>df_1</code> and <code>df_2</code> with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>groupby cumcount</code></a> to get position as well:</p> <pre><code>df_1['i'] = df_1.group...
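A minimal sketch of the cumcount trick: numbering the repeats of each ID lets duplicate rows match one-to-one in the merge (toy frames; the val column is invented):

```python
import pandas as pd

df_1 = pd.DataFrame({'ID': ['Lemon', 'Banana', 'Apple', 'Lemon'],
                     'val': [1, 2, 3, 4]})
df_2 = pd.DataFrame({'ID': ['Lemon', 'Lemon']})

# i = 0 for the first occurrence of an ID, 1 for the second, ...
df_1['i'] = df_1.groupby('ID').cumcount()
df_2['i'] = df_2.groupby('ID').cumcount()

# Each row of df_2 now matches exactly one row of df_1
out = df_2.merge(df_1, on=['ID', 'i']).drop(columns='i')
print(out['val'].tolist())  # [1, 4]
```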
python|pandas|merge|subset
1
377,540
68,150,248
How to extract overlapping patches from a 3D volume and recreate the input shape from the patches?
<p>Pytorch offers <code>torch.Tensor.unfold</code> operation which can be chained to arbitrarily many dimensions to extract overlapping patches. How can we reverse the patch extraction operation such that the patches are combined to the input shape.</p> <p>The focus is 3D volumetric images with 1 channel (biomedical). ...
<p>To extract (overlapping-) patches and to reconstruct the input shape we can use the <code>torch.nn.functional.unfold</code> and the inverse operation <code>torch.nn.functional.fold</code>. These methods only process 4D tensors or 2D images, however you can use these methods to process one dimension at a time.</p> <p...
machine-learning|3d|computer-vision|pytorch
1
377,541
68,324,172
Retrieving intermediate features from pytorch torch.hub.load
<p>I have a Net object instantiated in pytorch via torch.hub.load:</p> <pre><code>model = torch.hub.load('facebookresearch/pytorchvideo', 'slowfast_r50', pretrained=True) </code></pre> <p>The final layer is a projection to a 400-dim vector. Is there a way to get the pentultimate layer instead during a forward pass?</p>
<p>Yes, easiest way is to switch the layer with <a href="https://pytorch.org/docs/stable/generated/torch.nn.Identity.html" rel="nofollow noreferrer"><code>torch.nn.Identity</code></a> (which simply returns it's inputs unchanged):</p> <p>Line below changes this submodule:</p> <pre><code>(6): ResNetBasicHead( (drop...
pytorch
1
377,542
68,430,896
How to optimize PySpark when using multiple data frames?
<p>Looking to optimize the following code for speed. Extensive background in python and pandas, but new to pyspark. Any suggestions you may have will be greatly appreciated.</p> <p>For clarity, the code has been broken up into parts 0 through 5. Feel free to address a single part or make a suggestion that efficiently t...
<p>Using a list comprehension did not improve performance for #0. I found it to be more efficient to extract the distinct value count from the string and add it to a dictionary.</p> <pre><code># 0. Find Number of Distinct Values for each Column distinct_value_dict = {} for i, x in enumerate(df.columns): df0 = df.ag...
python|pandas|dataframe|apache-spark|pyspark
0
377,543
68,038,132
timedelta64 column with apache superset
<p>I have a dataframe with a <code>timedelta64</code> column that I want to use in my analytics. I upload the data to superset via &quot;Upload CSV&quot; and there doesn't seem to be a way to tell superset that a particular column is a timedelta during the upload process (similar to how you can tell it to parse specifi...
<p>The column types in Superset mirror the database column types. So if you upload a CSV to a Postgres database, you need to make sure that timedelta is an actual column type (as far as I know, it isn't a column type!).</p> <p>In fact, the suggested workaround is to save the value / column as a string! <a href="https://...
python|pandas|apache-superset
0
377,544
68,140,388
An error 'Cache may be out of date, try `force_reload=True`.' comes up even though I have included `force_reload=True` in the code block?
<p>My Heroku App gives an Internal Server Error (500) when I try to get an inference for a model. With the command <code>heroku logs --tail</code> The following error comes up ( This is part of the error received )</p> <pre><code>2021-06-25T13:13:01.052585+00:00 heroku[web.1]: State changed from up to starting 2021-06-...
<p>I fixed this issue. The <code>FileNotFoundError: [Errno 2] No such file or directory: 'best.pt'</code> was the main error: Heroku couldn't resolve that path, so I moved the custom model into a folder called static. My new code block is:</p> <p><code>model = torch.hub.load('ultralytics/yolov5', 'custom', path='static/best.pt...
python|heroku|pytorch|internal-server-error|yolov5
3
377,545
68,299,412
Cumulative sum of rows in Python Pandas
<p>I'm working on a dataframe which I get a value for each year and state :</p> <pre><code> 0 State 1965 1966 1967 1968 1 Alabama 20.2 40 60.3 80 2 Alaska 10 15 18 20 3 Arizona 5 5 10 12 </code></pre> <p>I need each value sum the last with the current one...
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>DataFrame.cumsum</code></a> with <code>axis=1</code> and convert the non-numeric column <code>State</code> to the <code>index</code>:</p> <pre><code>df = df.set_index('State').cumsum(axis=1) p...
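A minimal, self-contained sketch of the same approach, using made-up numbers shaped like the question's table:

```python
import pandas as pd

# Toy frame shaped like the question's data (values are illustrative).
df = pd.DataFrame({
    'State': ['Alabama', 'Alaska', 'Arizona'],
    '1965': [20.2, 10.0, 5.0],
    '1966': [40.0, 15.0, 5.0],
    '1967': [60.3, 18.0, 10.0],
    '1968': [80.0, 20.0, 12.0],
})

# Park the non-numeric column in the index, then cumulate across columns.
out = df.set_index('State').cumsum(axis=1)
print(out)
```

Calling `reset_index()` afterwards restores `State` as a regular column if needed.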
python-3.x|pandas|dataframe
0
377,546
68,296,386
Tensorflow.js returns "NaN" Value when running Linear Regression Model
<p>I'm trying to run this linear regression model which would essentially give me an output based on <code>const prediction = model.predict((tf.tensor2d([20], [1,1])));</code> I'm however unfortunately getting NaN Value everytime I run the code to receive a prediction.</p> <p>What's the best way to approach a solution?...
<p>You forgot to specify what metric the model is supposed to track.</p> <pre><code>const batchSize = 32; const epochs = 500; model.compile({ loss: &quot;meanSquaredError&quot;, optimizer: &quot;sgd&quot;, metrics: [&quot;mse&quot;], }); await model.fit(xs, ys, batchSize, epochs); const prediction = model.pred...
javascript|tensorflow|linear-regression|tensorflow.js
0
377,547
68,339,861
How to remove space "before" column name when importing csv data in Pandas
<p>I use the default Pandas csv reading to import some data as followed:</p> <pre><code>df = pd.read_csv('data_file.csv') </code></pre> <p>The data frame I got is as below:</p> <pre><code> Force [N] Stress [MPa] 0 0.000000 2.230649e-13 1 0.014117 1.071518e-01 2 ...
<p>To avoid post-processing the column data set <code>skipinitialspace=True</code> to <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_csv.html" rel="nofollow noreferrer"><code>pd.read_csv</code></a>:</p> <pre><code>df = pd.read_csv('data_file.csv', skipinitialspace=True) </code></pre> <p><code>df</cod...
python|pandas
4
377,548
68,082,460
Pandas un-exponde Series based off of Index
<p>I have a pd.dataframe of sentences such as:</p> <pre><code> text 0 kusebenta ngendlela lefanele kwemabhizinisi em... 1 bumetima ekuvisiseni umnotfo wemhlaba nekuntji... 2 ngaleyo ndlelake sesincume kusebentisa emandla... 3 emabhisizinisi embuso kufanele angenise imali ... 4 nanoma kulungile kutsi umbuso...
<p>Apply <code>str.join</code> on index generated by <code>explode()</code>:</p> <pre><code>&gt;&gt;&gt; series1.groupby(level=0).apply(' '.join) 0 kusebenta ngendlela lefanele kwemabhizinisi em... 1 bumetima ekuvisiseni umnotfo wemhlaba nekuntji... Name: text, dtype: object </code></pre>
python|pandas|dataframe|nlp|series
1
377,549
68,427,239
How to calculate pct_change in pandas with reference to just the first column
<p>I have a dataframe as:</p> <pre><code>df = pd.DataFrame({ &quot;A&quot;: [1, 5, 2, 5, 6], &quot;B&quot;: [-12, 23, 5, 22, 35], &quot;C&quot;: [-32, 12, -10, 3, 2], &quot;D&quot;: [2, 13, 6, 2, 8] }) </code></pre> <p>Now, I want to calculate the percentage change on <code>axis=1</code> but with refere...
<p>You can use the <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.divide.html" rel="nofollow noreferrer">divide</a> function in pandas, dividing all columns by column <code>A</code>:</p> <pre class="lang-py prettyprint-override"><code>pct = df.divide(df[&quot;A&quot;], axis=&quot;index&quot;) - 1 pct...
python|pandas
3
377,550
68,300,237
How to merge headers of multiple columns of a pandas dataframe in a new column if values of the multiple columns meet certain criteria
<p>I have a data frame of size 122400*92, out of which 8 columns represent flow, which maybe in different combinations. I want to merge all the columns headers in a new column if the flow in each column is &gt; 20.</p> <p>For an example: A Flow: 52 B Flow: 46 C Flow: 0 D Flow: 54 E Flow: 34 F Flow: 0 G Flow: 12 H Flow:...
<p>Let's define some test data:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np reqcol = ['A FLOW', 'B FLOW'] df = pd.DataFrame({'NAME': ['N1', 'N2', 'N3', 'N4'], 'A FLOW': [5, 80,50, 40], 'B FLOW' : [10, 0, 40, 10]}) </code></pre> <p>This gives you the following data frame:</...
pandas
0
377,551
68,136,608
How to pad a list of NumPy arrays in a vectorized way?
<p>I am trying to find a vectorized way (or at least, better than using a loop) to create a three-dimensional NumPy array from a list of 2D NumPy arrays. Right now, I have a list L that looks something like:</p> <pre><code>L = [ np.array([[1,2,3], [4,5,6]]), np.array([[8,9,10]]), ...] </code></pre> <p>Each NumPy array ...
<p>First, let's look at the common task of padding 1d arrays to a common size.</p> <pre><code>In [441]: alist = [np.ones((2,),int),np.zeros((1,),int)+2, np.zeros((3,),int)+3] In [442]: alist Out[442]: [array([1, 1]), array([2]), array([3, 3, 3])] </code></pre> <p>The obvious iterative approach:</p> <pre><code>In [443]: [...
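For the original question's list of 2-D arrays, the same idea extends to stacking into a pre-allocated 3-D block; this is a sketch under the assumption that all arrays share the same column count:

```python
import numpy as np

# Hypothetical list of 2-D arrays: same column count, varying row counts.
L = [np.array([[1, 2, 3], [4, 5, 6]]),
     np.array([[8, 9, 10]])]

n_rows = max(a.shape[0] for a in L)             # longest array sets the height
out = np.zeros((len(L), n_rows, L[0].shape[1]))
for i, a in enumerate(L):                       # copy each array; rest stays 0
    out[i, :a.shape[0], :] = a
print(out.shape)  # (2, 2, 3)
```

The per-array copy is still a loop, but each copy is a vectorized block assignment, so the Python-level work is proportional to the number of arrays, not the number of elements.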
numpy|vectorization
1
377,552
68,315,961
Bad performance of numpy slicing in function
<p>I have example code like</p> <pre><code>import numpy as np nbins = 1301 init = np.ones(nbins*2+1)*(-1) init[0] = 0 init[1::2] = 0 z0 = np.array(init, dtype=np.complex128, ndmin=1) initn = z0.view(np.float64) deltasf = np.linspace(-10, 10, nbins) gsf = np.linspace(1, 2, nbins) def jacobian_mbes_test(Y, t): len...
<p>The problem comes from the <strong>matrix size</strong> (206 MiB). Indeed, the matrix is pretty big and (virtually) filled with zeros every time the function is called. Assuming the all values would be physically written to memory in 12.2 ms, the throughput would be <code>5206**2*8/12.2e-3/1024**3 = 16.5 GiB/s</code...
python|performance|numpy|slice|numpy-slicing
1
377,553
68,328,267
Why is tf.io.read_file not able to read from pathlib.Path object?
<p>I have an image path:</p> <pre><code>file_path = '/dir/sub_dir/my_image.jpg' </code></pre> <p>These both work fine:</p> <pre><code>open(file_path, 'r') tensorflow.io.read_file(file_path) </code></pre> <p>I instantiate a pathlib.Path object</p> <pre><code>p = pathlib.Path(file_path) </code></pre> <p>These both work ...
<p>That's because tensorflow.io.read_file expects a string as the file path, not a PosixPath object, which is what you're passing it from pathlib.</p> <p>When you use str(path), you're converting the PosixPath to a string, and that's why it works fine in that case.</p> <p>You can find more details on the documentat...
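To see the distinction without needing TensorFlow installed, here is a minimal illustration of the conversion (the path is made up):

```python
from pathlib import Path

p = Path('/dir/sub_dir/my_image.jpg')

# tf.io.read_file expects a plain string path (or a string tensor),
# so convert the Path object explicitly before passing it in.
path_str = str(p)
print(type(p).__name__, '->', type(path_str).__name__)
```

In the original code that would be `tensorflow.io.read_file(str(p))`.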
tensorflow|pathlib
0
377,554
68,058,307
Problem in reading text file with negative numbers
<p><strong>Text File:</strong> I have a text file containing more than 87,000 data points. The format of the text file is as follows:</p> <ul> <li>X Coordinate ----- Y Coordinate ------- Parameter 1 ------ Parameter 2--------</li> <li>2.744596610E-02 1.247197202E+00 7.121462841E-03 2.467938066E-05</li> <li>2.7325584...
<p>A <a href="https://regex101.com/r/8XNhiI/1" rel="nofollow noreferrer">regex</a> can put spaces in there:</p> <pre><code>import re with open(&quot;current.txt&quot;) as fh, open(&quot;new.txt&quot;, &quot;w&quot;) as gh: # skip the first line fh.readline() # for other lines.. for line in fh: ...
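As a self-contained sketch of the same idea (the sample line and the exact pattern are assumptions about the file's layout):

```python
import re

# A line where a negative number is glued to the previous field's exponent.
line = "2.744596610E-02-1.247197202E+00 7.121462841E-03-2.467938066E-05"

# Insert a space before any '-' sandwiched between two digits; the '-' inside
# an exponent like E-02 is preceded by 'E', so it is left alone.
fixed = re.sub(r'(?<=\d)-(?=\d)', ' -', line)
print(fixed.split())
```

After this fix, `pd.read_csv(..., sep=r'\s+')` reads the file cleanly.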
python|pandas|dataframe|csv
1
377,555
68,141,498
Pandas dropping duplicates doesn't drop last duplicate
<p>Setting keep=False should remove all duplicates but if I run my function is still returns a duplicate of the previous row</p> <pre><code>def date_to_csv(): import pandas as pd from random import randint df = pd.read_csv(&quot;test.csv&quot;) df = df.append({'Date': datetime.date.today(), 'Price': randint...
<p>Because of <code>mode='a'</code> you can't remove previous duplicates after several executions of your function. Here is code for the expected behaviour:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from datetime import datetime def date_to_csv(): df = pd.read_csv('test.csv') ...
python|pandas|dataframe|csv|duplicates
2
377,556
68,108,997
Start function from a specific row
<p>I want to know, how can i run a method or a function or a loop for just the <code>not-NaN</code> rows. I don't want to dropna the dataframe and reset the index. For now, I am using the AvgHigh function but it's considering the <code>NaN</code> rows too. Also, if the suggested method can be used with series and array...
<p>This should get the high_r column you are looking for. I added a check for the <code>np.nan</code> values.</p> <pre><code>df1 = pd.DataFrame({'values': [np.nan, np.nan,np.nan, np.nan,np.nan, 14018,14022,14023,14021,14020,14014]}) def AvgHigh(src, val) : dat_list = [] last_src = np.nan # init variable that k...
python|pandas|dataframe|numpy|nan
1
377,557
68,385,980
Using a Data Frame Column as start & another as end for range to calculate mean for another dataframe column
<p>I have 2 dataframes, 1 that contains start and end row ids and another that contains the dataframe where I want to calculate the mean for all rows between those coordinates.</p> <p>First dataframe:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>id</th> <th>Exon region start (bp)</th> <t...
<p>Consider the first dataframe (the ranges) as dfRange and the second dataframe as dfData.</p> <p>Step 1: find the shape of dfRange; the shape gives you the maximum number of rows.</p> <p>Step 2: use a for loop:</p> <pre><code>for rowNumber in range(maxRows): </code></pre> <p>you can get each row of dfRange and their corres...
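A minimal sketch of that loop, with made-up frame and column names and inclusive `.loc` windows (the real column names will differ):

```python
import pandas as pd

# Hypothetical ranges frame: start/end row positions into the data frame.
df_range = pd.DataFrame({'id': ['a', 'b'], 'start': [0, 2], 'end': [1, 3]})
df_data = pd.DataFrame({'value': [10.0, 20.0, 30.0, 40.0]})

# One mean per range; .loc slicing is inclusive of the end label.
df_range['mean'] = [
    df_data.loc[row.start:row.end, 'value'].mean()
    for row in df_range.itertuples()
]
print(df_range)
```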
python|python-3.x|pandas|numpy
1
377,558
68,088,768
Sample and group by, and pivot time series data
<p>I'm struggling to handle a complex (imho) operation on time series data.</p> <p>I have a time series data set and would like to break it into nonoverlapping pivoted grouped by chunks. It is organized by customer, year, and value. For the purposes of this toy example, I am trying to break it out into a simple forecas...
<p>There might be a better way to do this, but this will give you the output you want:</p> <p>First we add a <code>Customer_chunk</code> column to give an ID to rows that belong to the same chunk, and we remove the extra rows.</p> <pre><code>df[&quot;Customer_chunk&quot;] = (df[::-1].groupby(&quot;Customer&quot;).cumcount()) ...
pandas|time-series
1
377,559
68,324,849
UNIX conversion to datetime either mismatch or returns 1970-01-01
<p>I'm trying to convert a Pandas DataFrame column from UNIX to Datetime, but I either get a mismatch error or the new dates are all 1970-01-01.</p> <p>Here is tail sample of the list:</p> <blockquote> <p>ds y</p> <p>86 1625616000000 34149.989815</p> <p>87 1625702400000 33932.254638</p> <p>88 162578880...
<p>Use <code>pd.Timestamp</code> to convert to datetime:</p> <pre><code>&gt;&gt;&gt; df['ds'].mul(1e6).apply(pd.Timestamp) 0 2021-07-07 00:00:00 1 2021-07-08 00:00:00 2 2021-07-09 00:00:00 3 2021-07-10 00:00:00 4 2021-07-10 02:33:05 Name: ds, dtype: datetime64[ns] </code></pre> <p>Or suggested by @HenryEcker...
python|pandas|dataframe
2
377,560
68,210,907
Calculate the cumulative sum of multiplying each element of one array by all the elements of a second array
<p>I need to efficiently calculate the running sum of multiplying the elements of one array by all the elements of a second array. It is probably easiest to explain what I am trying to do with code:</p> <pre><code>import time import numpy as np arr1 = np.random.uniform(size=1000000) arr2 = np.array([0.1, 0.2, 0.3, 0.3...
<p>In general, multiplying two sliding windows is called a <a href="https://en.wikipedia.org/wiki/Convolution" rel="nofollow noreferrer">convolution</a>, which is implemented in numpy. Your definition is subtly different at the end; however, this can be fixed.</p> <pre><code>result = np.convolve(arr1, arr2)[:len(arr1)] diff = len...
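A tiny worked example of the trim, with numbers chosen by hand so the result is easy to check:

```python
import numpy as np

arr1 = np.array([1.0, 2.0, 3.0, 4.0])
arr2 = np.array([0.5, 0.25])

# The full convolution has len(arr1) + len(arr2) - 1 terms; keeping only
# the first len(arr1) of them reproduces the running windowed sum.
result = np.convolve(arr1, arr2)[:len(arr1)]
print(result)  # the first four terms: 0.5, 1.25, 2.0, 2.75
```

For very long arrays, `scipy.signal.fftconvolve` computes the same result faster.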
python|arrays|numpy
1
377,561
68,041,243
How to filter text data containing key words from an unnamed column excel with python pandas and print to txt file
<p>Im pretty new to this so please bear with me.</p> <p>I have an excel sheet that contains certain text strings i would like to extract and copy to a text file - i have been doing this manually for a long time and im sick of it.</p> <p>So my plan was to write a script that would extract this data from the excel sheet ...
<p>Here's a proposal if you're still looking for a solution:</p> <p>Withe sample frame</p> <pre><code>df = pd.DataFrame({ 0: [ 'Channel2021_1_DRU_POP_15s_16062021', 'Channel2021_2_FANT_POP_15s_16062021', 'Channel2021_3_ITA_POP_15s_16062021', 1., 2., 'Channel2021_1_DRU...
python|excel|pandas|txt
0
377,562
68,134,962
to_csv function with delimeter as pipe giving issues
<p>I am running this</p> <pre><code>data = [['tom', 10], ['nick,paul', 15], ['juli', 14]] df = pd.DataFrame(data, columns=['Name', 'Age']) delimiter='|' df.to_csv('C:\\Users\\mpaul\\workspace\\out\\tre\\test.csv', index=False, date_format='%m-%d-%Y', sep=delimiter) </code></pre> <p>Output</p> <p><a href="https://i.stac...
<p>The problem here is that csv files are interpreted with commas as the separator (as its name suggests: <em>comma separated values</em>). The <code>to_csv()</code> method just writes down your data with the separator specified. If you open the file with a plain text editor you may find the result you want. I've teste...
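Round-tripping makes the point: reading the pipe-separated text back with the same `sep` keeps the embedded comma intact (the file contents are reproduced here as an in-memory string):

```python
import pandas as pd
from io import StringIO

# What to_csv(sep='|') writes for the question's frame.
text = "Name|Age\ntom|10\nnick,paul|15\njuli|14\n"

# Read it back with the same delimiter; 'nick,paul' stays one field.
df = pd.read_csv(StringIO(text), sep='|')
print(df)
```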
python|pandas
0
377,563
68,396,903
how can I convert position (2nd, 3rd or 4th etc ) to index(00 for 1st position, 01 for 2nd etc) in 2D array in python numpy?
<pre><code> import numpy as np; entries = []; # take user input row = int(input(&quot;Enter number of rows: &quot;)); column = int(input(&quot;Enter number of columns: &quot;)); print(&quot;Enter Values&quot;); # get values for i in range(row): a = []; for j in range(column): a.append(int(input()));...
<p>Try simple maths (treating <code>pos</code> as a 1-based position, so the 1st element has index 00):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np x = [[10 ,24 ,32], [45 ,56 ,62]] x = np.array(x) pos, val = 4, -1 idx = pos - 1 x[idx // x.shape[1], idx % x.shape[1]] = val </code></pre> <p>Outputs:</p> <pre class="lang-py prettyprint-override"><code>[[10 24 32] [-1 56 62]] </code></pr...
python|numpy
0
377,564
68,149,153
How to calculate the average hours between two different times in Python
<p>Let's say, I have the following times.</p> <pre><code>Time_1 Time_2 10:08:00 10:08:00 11:00:00 12:00:00 12:30:00 14:30:00 </code></pre> <p>I would like to calculate the average time between them and would like to get the following result.</p> <pre><code>Time_1 ...
<p>You can use timedelta; this code calculates the average of the times.</p> <pre><code>import pandas as pd time1=&quot;11:00:00&quot; time2=&quot;12:00:00&quot; t1 = pd.Timedelta(time1) t2 = pd.Timedelta(time2) avg=(t1+t2)/2 d = {'time1':[t1] , 'time2':[t2], 'average':[avg]} df = pd.DataFrame(d) print(df.to_...
python|pandas|datetime
3
377,565
68,384,568
Possible to retain line breaks in pandas read_html when making data frame from html table?
<p>I'm trying to convert a scraped HTML table into a dataframe in python using pandas <code>read_html</code>. The problem is that <code>read_html</code> brings in a column of my data without breaks, which makes the content of those cells hard to parse. In the original HTML, each &quot;word&quot; in the column is separa...
<p>Maybe...</p> <pre><code>import pandas as pd import requests url = r'https://www.who.int/en/activities/tracking-SARS-CoV-2-variants/' page = requests.get(url) table = pd.read_html(page.text.replace('&lt;br /&gt;',' ')) df = table[0] </code></pre> <p>Outputs:</p> <pre><code> WHO label Pango lineages G...
python|html|pandas|beautifulsoup
5
377,566
68,306,855
Pandas shift() column down, but replace NaN entry with previous value?
<p>Alright, this should be straightforward. I'm shifting a column down, and just need to fill the resulting NaN with the previous value instead. How can I do this?</p> <pre><code>&gt;&gt;&gt; df1 = pd.DataFrame({ 'time_id': [5,5,5,5,5,5,5,5,11,11,11,11,11,11,11,11], ... 'A': [1,2,4,5,7,9,11,12,2,...
<p>You can use <code>interpolate()</code>:</p> <pre><code>df.C_prev.interpolate(method='backfill', limit_direction='backward') </code></pre>
python|pandas|dataframe
1
377,567
68,377,425
Applying a function to every cell in a dataframe based of row and column
<p>I have a dateframe whose rows and columns are numbers. Is there a way to apply a function to every cell, based of whatever row and column the cell belongs to. To illustrate, I want:</p> <pre><code> | 2022 | 2023 | 2024 | 0 | f(0, 2022) | f(0, 2023) | f(0, 2024) | 1 | f(1, 2022...
<p>Try using <code>pd.DataFrame.apply(func, axis)</code><br> You might find the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html" rel="nofollow noreferrer">documentation</a> helpful</p>
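A small sketch of that pattern; `f` here is a hypothetical placeholder for whatever per-cell function you need:

```python
import pandas as pd

def f(i, year):
    # Placeholder cell function of (row label, column label).
    return i * 1000 + year

df = pd.DataFrame(index=[0, 1, 2], columns=[2022, 2023, 2024])

# apply walks the columns; col.name is the column label, col.index the rows.
out = df.apply(lambda col: pd.Series([f(i, col.name) for i in col.index],
                                     index=col.index))
print(out)
```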
python|pandas|dataframe
-1
377,568
68,170,884
How to apply Target Encoding in test dataset?
<p>I am working on a project, where I had to apply target encoding for 3 categorical variables:</p> <pre><code>merged_data['SpeciesEncoded'] = merged_data.groupby('Species')['WnvPresent'].transform(np.mean) merged_data['BlockEncoded'] = merged_data.groupby('Block')['WnvPresent'].transform(np.mean) merged_data['TrapEnco...
<p>You need to save the mapping between the feature and the mean value if you want to apply it to the test dataset.</p> <p>Here is a possible solution:</p> <pre><code>species_encoding = df.groupby(['Species'])['WnvPresent'].mean().to_dict() block_encoding = df.groupby(['Block'])['WnvPresent'].mean().to_dict() trap_enc...
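The key step on the test side is `Series.map` plus a fallback for unseen categories; the toy data below is made up:

```python
import pandas as pd

train = pd.DataFrame({'Species': ['a', 'a', 'b'], 'WnvPresent': [1, 0, 1]})
test = pd.DataFrame({'Species': ['a', 'b', 'c']})

# Learn the encoding on train only, then apply it to test via map.
species_encoding = train.groupby('Species')['WnvPresent'].mean().to_dict()
test['SpeciesEncoded'] = test['Species'].map(species_encoding)

# Categories never seen in train become NaN; fall back to the global mean.
test['SpeciesEncoded'] = test['SpeciesEncoded'].fillna(train['WnvPresent'].mean())
print(test)
```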
python|pandas
0
377,569
68,029,770
Onnx to trt - [8] Assertion failed: creator && "Plugin not found
<p>I am using <strong>TensorRT</strong> in order to convert a model from onnx to trt -format. The model is originally a <a href="/questions/tagged/tensorflow" class="post-tag" title="show questions tagged &#39;tensorflow&#39;" rel="tag">tensorflow</a> model from Tensorflow Model Zoo (SSD ResNet50). When I try to conver...
<p>Got this fixed by using TensorRT 8.</p>
tensorflow|onnx|nvidia-jetson|tensorrt
0
377,570
68,136,779
logits and labels must be broadcastable: data augmentation layers makes logits and labels mismatch
<p>I'm trying to <strong>move all my data augmentation preprocessing over to inside my model</strong>, hence, i have created a preprocessing model and merged it into my Resnet50.</p> <p>The problem is, my <code>tf.data</code> pipeline inputs <code>batch_size</code> images to the model, that when fed into the preprocess...
<p>Any Keras layer will have to keep the batch size the same as the input, so it's not possible as a Keras layer.</p> <p>If you really want to generate multiple images, you would have to do this in the ingest pipeline.</p> <p>That said, the more common approach is to randomly select one out of the multiple images durin...
python|tensorflow|keras|deep-learning|data-preprocessing
0
377,571
68,043,922
How to keep date format the same in pandas?
<pre><code>import pandas as pd import sys df = pd.read_csv(sys.stdin, sep='\t', parse_dates=['Date'], index_col=0) df.to_csv(sys.stdout, sep='\t') </code></pre> <pre><code>Date Open 2020/06/15 182.809924 2021/06/14 257.899994 </code></pre> <p>I got the following output with the input shown above.</p> <pre><code>Da...
<p>You can specify the <code>date_format</code> argument in <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_csv.html" rel="nofollow noreferrer"><code>to_csv</code></a>:</p> <pre><code>df.to_csv(sys.stdout, sep='\t', date_format=&quot;%Y/%m/%d&quot;) </code></pre>
python|pandas
3
377,572
68,292,862
PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling `frame.insert` many times, which has poor performance
<p>I got following warning</p> <blockquote> <p>PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling <code>frame.insert</code> many times, which has poor performance. Consider using pd.concat instead. To get a de-fragmented frame, use <code>newframe = frame.copy()</code></p> </blo...
<p><code>append</code> is not an efficient method for this operation. <code>concat</code> is more appropriate in this situation.</p> <p>Replace</p> <pre><code>df1 = df1.append(df, ignore_index =True) </code></pre> <p>with</p> <pre><code>df1 = pd.concat([df1, df], ignore_index=True) </code></pre> <p>Details about the differences are in t...
python|pandas
7
377,573
68,267,296
Correlation heatmap turned values into nan in Python
<p>I want to conduct a heatmap on my table <code>df</code>, which looks normal at the beginning:</p> <pre><code> Total Paid Post Engaged Negative like 1 2178 0 0 66 0 1207 2 1042 0 0 60 0 921 3 2096 0 0 112 0 1744 4 1832 ...
<p>I think the problem is that a <code>DataFrame</code> object is passed to the <code>pd.DataFrame</code> constructor, so the original column names differ from the new column names in the list and only <code>NaN</code>s are created.</p> <p>The solution is to convert it to a numpy array first:</p> <pre><code>df= pd.DataFrame(df.to_numpy(),columns=[...
python|pandas|dataframe|seaborn|nan
3
377,574
469,931
deleting rows of a numpy array based on uniqueness of a value
<p>let's say I have a bi-dimensional array like that</p> <pre><code>numpy.array( [[0,1,1.2,3], [1,5,3.2,4], [3,4,2.8,4], [2,6,2.3,5]]) </code></pre> <p>I want to have an array formed eliminating whole rows based on uniqueness of values of last column, selecting the row to keep based on value of third...
<p>My numpy is way out of practice, but this should work:</p> <pre><code>#keepers is a dictionary of type int: (int, int) #the key is the row's final value, and the tuple is (row index, row[2]) keepers = {} deletions = [] for i, row in enumerate(n): key = row[3] if key not in keepers: keepers[key] = (i...
python|arrays|numpy|unique
1
377,575
59,088,097
Same observations on data frame feature appear as independent
<p>I have got the following DF:</p> <p><code>carrier_name sol_carrier aapt 702 aapt carrier 185 afrix 72 afr-ix 4 airtel 35 airtel 2 airtel dia and broadband 32 airtel mpls standard circuits 32 amt 6 anca test 1 appt 1 at tokyo ...
<p>That happens because you don't aggregate the results; you just change the values in the 'carrier_name' column.</p> <p>To aggregate the results, call</p> <pre><code>carrier.groupby('carrier_name').sol_carrier.sum() </code></pre> <p>or modify the 'data' dataframe and then call</p> <pre><code>data['sol_carrier'].value_...
python|pandas|replace
1
377,576
59,138,033
Read 5 lines from a panda dataframe and insert it in one cell per line in another panda dataframe
<p>I am reading data from an excel file: the dataframe resulting is an array with a single column and several lines:</p> <pre><code> identifier 0 6051 1 771 2 6051 3 5219 4 3667 ... 6023 771 6024 6051 6025 772 [6026 rows x 1 columns] </code></pre> <p>What I n...
<p>You can try the following.</p> <pre><code>df['group'] = df.index//5 # add extra column to hold the group value new_df = df.groupby('group').identifier.apply(list).apply(pd.Series) df = df.drop('group', axis=1) # drop the extra column that was created print(new_df.head()) </code></pre> <p>Edit:</p> <p><strong>Input</...
python|pandas|dataframe|machine-learning
2
377,577
59,273,374
How is my model working if all the base layer trainables are set to false?
<p>This is the model I make for my deep learning project and I am getting decent accuracy out of it. My question is, if I froze the weights of the initial model(which is my base model of VGG19) how did I manage to train the whole model? And also after adding the VGG19 layer with the layers frozen I got better results t...
<p>"Freezing the layers" just means you don't update the weights on those layers when you backpropagate the error. Therefore, you'll just update the weights on those layers that are not frozen, which enables your neural net to learn.</p> <p>You are adding some layers after VGG. I don't know if this is a common approac...
tensorflow|machine-learning|keras|computer-vision
0
377,578
59,320,208
How to create my own loss function in Pytorch?
<p>I'd like to create a model that predicts parameters of a circle (coordinates of center, radius).</p> <p>Input is an array of points (of arc with noise):</p> <pre><code>def generate_circle(x0, y0, r, start_angle, phi, N, sigma): theta = np.linspace(start_angle*np.pi/180, (start_angle + phi)*np.pi/180, num=N) ...
<p>You're trying to create a loss between the predicted outputs and the inputs instead of between the predicted outputs and the true outputs. To do this you need to save the true values of <code>x0</code>, <code>y0</code>, and <code>r</code> when you generate them.</p> <pre class="lang-py prettyprint-override"><code>n...
python-3.x|machine-learning|deep-learning|pytorch|loss-function
0
377,579
59,218,230
Classifications after better cluster found - Sklearn
<p>I using kmeans to classificate data.</p> <p>And I found my better k cluster with Elbow method and silhouette to validate decision.</p> <p>So now how can i classificate my data and plot dist chart?</p> <p>Could you please help me with this?</p> <p>This is my code.</p> <pre><code>import pandas as pd import seabor...
<p>If you want to predict which cluster your new data belongs to, you need to use the predict method:</p> <pre><code>kmeans.predict(newData) </code></pre> <p>Here is the documentation link for the predict method:</p> <p><a href="https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cl...
python|scikit-learn|sklearn-pandas
0
377,580
59,291,053
How to select rows based on two column that must contain specific value?
<p>I have a dataset with a lot of incorrect duplicates on a certain field, in my reproducible example there are duplicates in serial with different color and shape. I have the actual dataframe with the correct color and shape to serial mapped, and need to select the correct rows with that.</p> <p>Example:</p> <pre><c...
<p>You can use merge:</p> <pre><code>real.merge(items) </code></pre> <p>output</p> <pre><code>Out[305]: serial color shape more_data even_more_data ...
python|pandas|dictionary|pandas-loc
1
377,581
59,465,357
Pandas set value in a column equal to 5% quantile if they are smaller than that
<p>Generating data</p> <pre><code>random.seed(42) date_rng = pd.date_range(start='1/1/2018', end='1/08/2018', freq='H') df = pd.DataFrame(np.random.randint(0,10,size=(len(date_rng), 3)), columns=['data1', 'data2', 'data3'], index= date_rng) mask = np.random.choice([1, 0], df.shape, p...
<p>You can get the quantile of all columns with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.quantile.html" rel="nofollow noreferrer"><code>DataFrame.quantile</code></a> and pass it to <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.clip.html" rel="no...
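A self-contained sketch of the pattern on random data (column names follow the question's generator; the seed and sizes are arbitrary):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
df = pd.DataFrame(rng.integers(0, 10, size=(200, 3)).astype(float),
                  columns=['data1', 'data2', 'data3'])

# Per-column 5% quantile, then used as a per-column floor via clip.
low = df.quantile(0.05)
clipped = df.clip(lower=low, axis=1)
print((clipped < low).sum())  # per-column count of values below the floor
```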
python|pandas|dataframe|quantile
2
377,582
59,294,501
Having datetime only go up to minutes in my time column
<p>I am pulling information from our server and the time column goes all the way to nanoseconds. I need to merge multiple dfs and this specificity is causing my script to return an empty df.</p> <p>I tried using:</p> <pre><code>df['Time'] = pd.to_datetime(df['Time'], format="%d-%m-%y %H:%M") </code></pre> <p>but I d...
<p>You can use the round() method on Timestamps to convert the column to the resolution you wish.</p> <p>For example</p> <pre><code>import pandas as pd import datetime pd.to_datetime(datetime.datetime.now()).round('H') </code></pre> <p>Returns </p> <pre><code>Timestamp('2019-12-11 22:00:00') </code></pre> <p>as it...
python|pandas|datetime
5
377,583
59,193,560
Group by all elements of a column, in pandas
<p>I have the dataframe: </p> <pre><code>elements1 | elements2 a dog b dog a cat x cat c cat m pig k pig ... </code></pre> <p>and I want to obtain a dataframe of the form: </p> <...
<p>We can use <code>groupby</code>, <code>apply</code> and a <code>lambda</code> to join all the matching elements with a comma.</p> <pre><code>df1 = df.groupby('elements2')['elements1'].apply(lambda x : ','.join(x)).reset_index() cols = ['elements1','elements2'] # sort cols by your desired input. print(df1...
pandas|dataframe|group-by
2
377,584
59,398,262
Append in exisitng excel sheet using PyExcelerate
<p>I have a dataframe containing large number of records (more than 300,000 rows and 100 columns) . I want to write this dataframe into an pre exsiting excel file (say Output.xlsx).</p> <p>I tried this using openpyexcel as below-</p> <pre><code>with pd.ExcelWriter('Output.xlsx',engine='openpyxl', mode='a') as writer:...
<p>PyExcelerate doesn't support reading Excel files therefore it can't easily do this. Reading is also out of scope for the library so it's unlikely to be added unfortunately. A possible, faster workaround could be to write the sheets to be appended to a new Excel file and use another script to merge the two files.</p>
python|excel|pandas|pyexcelerate
3
377,585
59,108,736
Replace values based on column of column names
<p>I have a large dataframe (>1000 rows) of measurements. One of the columns is Fails (type str) that contains the columns for which the measurement failed. Whether the measurement fails isn't solely based on the value so I can't just replace all negative values for example, which is why there is a Fails column </p> <...
<p>Here's my approach:</p> <pre><code>rep_cols = ['Cd','Sn','Sb','Cu','Zn'] s = df.Fails.str.split(expand=True).stack().reset_index(name='col') df.loc[:, rep_cols] = df.mask(s.pivot('level_0', 'col', 'level_1').notnull()) </code></pre> <p>Output:</p> <pre><code> Cd Sn Sb Zn Fails 0 NaN NaN NaN 4.0 ...
python|python-3.x|pandas
0
377,586
59,192,903
Keras costume loss for two connected Autoencoders
<p>I would like to train two Autoencoders jointly and connect their activation layer in the deepest layer. </p> <p>How can I add all the terms in one loss function? </p> <p>Assume: </p> <pre><code>diffLR = Lambda(lambda x: abs(x[0] - x[1]))([model1_act7, model2_act5]) model = Model(inputs=[in1, in2], outputs=[diffLR...
<p>So, what you are doing is performing <code>RootMeanSquareError</code> on each of your <code>n=3</code> output followed by a weighted sum (same weight in your case).</p> <p>As the Error message says clearly:</p> <blockquote> <p>ValueError: <strong>When passing a list as loss, it should have one entry per model ...
python|tensorflow|keras
0
377,587
59,453,510
How to repeat rows in dataframe with each values in a list?
<p>I have a dataframe as follows</p> <pre><code>df = pd.DataFrame({ 'DATE' : ['2015-12-01', '2015-12-01', '2015-12-02', '2015-12-02'], 'DAY_NUMBER' : [3, 3, 4, 4], 'HOUR' : [5, 6, 5, 6], 'count' : [12,11,14,15] }) DATE DAY_NUMBER HOUR count 0 2015-12-01 3 5 12 1 2015-12...
<p>Use <code>cross join</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>DataFrame.merge</code></a> with helper <code>DataFrame</code> created by <code>extra_hours</code> list, last <a href="http://pandas.pydata.org/pandas-docs/s...
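Before pandas 1.2 the usual cross-join trick was a shared dummy key; here is a minimal sketch with invented `extra_hours`:

```python
import pandas as pd

df = pd.DataFrame({'DATE': ['2015-12-01', '2015-12-02'],
                   'HOUR': [5, 5],
                   'count': [12, 14]})
extra_hours = [7, 8]

# Cross join via a shared dummy key: every row pairs with every extra hour.
helper = pd.DataFrame({'HOUR_NEW': extra_hours, 'key': 1})
out = df.assign(key=1).merge(helper, on='key').drop(columns='key')
print(out)
```

On pandas >= 1.2 the dummy key can be replaced by `df.merge(helper, how='cross')`.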
python-3.x|pandas
3
377,588
59,343,369
Resampling an array by duplicating or skipping items (translate numpy to js)
<p>I am trying to translate this python/numpy code I have into javascript. This method takes an array and a target size and resizes the array by duplicating or skipping every N items.</p> <p>Here is an example:</p> <pre class="lang-js prettyprint-override"><code>let original_array = [0,1,2,3,4,5,6,7,8,9]; upsample(or...
<p>I resolved this using this node module that copies matlab's linspace behaviour: <a href="https://github.com/jfhbrook/node-linspace" rel="nofollow noreferrer">https://github.com/jfhbrook/node-linspace</a></p> <p>Here is the code:</p> <pre class="lang-js prettyprint-override"><code>const linspace = require('linspace...
javascript|python|arrays|numpy|math
0
377,589
59,128,344
How to transform a dataframe with a column whose values are lists to a dataframe where each element of each list in that column becomes a new row
<p>I have a dataframe with entries in this format:</p> <pre><code>user_id,item_list 0,3569 6530 4416 5494 6404 6289 10227 5285 3601 3509 5553 14879 5951 4802 15104 5338 3604 2345 9048 8627 1,16148 8470 7671 8984 9795 6811 3851 3611 7662 5034 5301 6948 5840 345 14652 10729 8429 7295 4949 16144 ... </code></pre> <p>*No...
<p>Use <code>split</code> to transform the lists to actual lists, then <code>explode</code> to ... well, explode the DataFrame. <strong>Requires pandas >= 0.25.0</strong></p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; df = pd.DataFrame({'user_id': [0,1], 'item_list': ['1 2 3', '4 5 6']}) &gt;&gt;&gt;...
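Applied to the question's format, a self-contained version (requires pandas >= 0.25 for `explode`):

```python
import pandas as pd

df = pd.DataFrame({
    "user_id": [0, 1],
    "item_list": ["3569 6530 4416", "16148 8470"],
})
df["item_list"] = df["item_list"].str.split()   # strings -> actual lists
out = df.explode("item_list").reset_index(drop=True)
```

The exploded items are still strings; cast with `out["item_list"].astype(int)` if numeric ids are needed.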
python|pandas
1
377,590
59,076,114
Batch normalization destroys validation performances
<p>I'm adding some batch normalization to my model in order to improve the training time, following some tutorials. This is my model:</p> <pre><code>model = Sequential() model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=(64,64,3))) model.add(BatchNormalization()) model.add(MaxPooling2D(pool_siz...
<p>Try using fewer batch normalization layers. It is common practice to use it at the last convolution layer. Start with just one and add more only if it improves the validation accuracy.</p>
tensorflow|keras|conv-neural-network|batch-normalization
1
377,591
59,134,499
How to modify path where Torch Hub models are downloaded
<p>When I download models through Torch Hub, models are automatically downloaded in <code>/home/me/.cache/torch</code>.</p> <p><strong>How can I modify this behavior ?</strong></p>
<p>From <a href="https://pytorch.org/docs/stable/hub.html#where-are-my-downloaded-models-saved" rel="noreferrer">official documentation</a>, there is several ways to modify this path.<br> In priority order :</p> <ol> <li><p>Calling hub.set_dir()</p></li> <li><p>$TORCH_HOME/hub, if environment variable TORCH_HOME is se...
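The lookup order can be illustrated without PyTorch installed; this pure-Python sketch mirrors the documented priority (the real logic lives in `torch.hub.get_dir`, and `resolve_hub_dir` is a hypothetical name for illustration):

```python
import os

def resolve_hub_dir(set_dir=None):
    # 1. an explicit hub.set_dir() value wins
    if set_dir:
        return set_dir
    # 2. $TORCH_HOME/hub if the TORCH_HOME environment variable is set
    torch_home = os.environ.get("TORCH_HOME")
    if torch_home:
        return os.path.join(torch_home, "hub")
    # 3. default: $XDG_CACHE_HOME/torch/hub, falling back to ~/.cache/torch/hub
    cache = os.environ.get("XDG_CACHE_HOME", os.path.expanduser("~/.cache"))
    return os.path.join(cache, "torch", "hub")
```

In practice the one-liner is `torch.hub.set_dir('/some/path')`, or `export TORCH_HOME=/some/path` before launching Python.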
path|pytorch
20
377,592
59,084,304
How to know the version of MKL used by numpy in the python anaconda distribution?
<p>How to know the version of MKL used by numpy in the python anaconda distribution, from python code?</p>
<p>Found method mkl.get_version_string():</p> <pre><code>import mkl mkl.get_version_string() </code></pre> <p>console:</p> <pre><code>'Intel(R) Math Kernel Library Version 2019.0.0 Product Build 20180829 for Intel(R) 64 architecture applications' </code></pre>
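If the `mkl` module isn't importable, NumPy itself can report what it was linked against; a hedged alternative:

```python
import numpy as np

# Prints the BLAS/LAPACK build configuration; when NumPy is linked
# against MKL, the MKL libraries and paths appear in this output.
np.show_config()
```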
python|numpy|anaconda|intel-mkl
4
377,593
59,432,845
What does `asof` mean in Pandas?
<p>I've read the documentation for pandas. There is a useful function called <code>merge_asof</code> which appears to merge two dataframes with rows that are close together. But I don't know what <code>asof</code> means. Is it <code>as of</code>? Or is it an abbreviation for something?</p> <p><a href="https://pandas.p...
<p>It means "as of". Here are two sources that reference it as such:</p> <p><a href="https://code.kx.com/v2/ref/asof/" rel="nofollow noreferrer">kdb+ and q asof</a></p> <p><a href="https://issues.apache.org/jira/browse/SPARK-22947" rel="nofollow noreferrer">SPIP: as-of join in Spark SQL</a></p>
pandas
3
377,594
59,074,105
What is the correct keyword for the Proximal AdaGrad optimizer on Tensorflow?
<p>I was experimenting with the Proximal AdaGrad optimizer for a science fair project, but I was not able to use it because TensorFlow treats it as non-existent. </p> <p>My code:</p> <pre><code>import tensorflow as tf from tensorflow import keras import matplotlib.pyplot as plt import numpy as np import time start_time = time.time() data =...
<p>There is an <a href="https://github.com/tensorflow/addons/issues/591" rel="nofollow noreferrer">open issue about this</a>. The TensorFlow implementation exists (even in TensorFlow 2.x, as <a href="https://www.tensorflow.org/api_docs/python/tf/compat/v1/train/ProximalAdagradOptimizer" rel="nofollow noreferrer"><code>...
python|python-3.x|tensorflow|keras|deep-learning
1
377,595
59,225,070
aggregating, grouping and UN stack to many columns
<p>I am working in Python. I have a dataframe of 177 columns that contains patient values for 24 hours, as in this: </p> <pre><code>subject_id hour_measure urinecolor Respiraory 3 1.00 red 40 3 1.15 red 90 4 2.00 y...
<p>Use:</p> <pre><code>print (df) hour subject_id hour_measure urinecolor Respiraory 0 1 3 1.00 red 40 1 1 3 1.15 red 90 2 1 4 2.00 yellow 60 </code></pre> <hr> <pre><code>df1 = (df.groupby(['hour_...
python|python-3.x|pandas|scikit-learn|pyhook
0
377,596
59,080,993
best way of counting number of rows for each grouped by column
<p>After grouping by two columns with df.groupby(["id","b"]), I now want to find the "id" values that have more than 5 rows. </p> <p>So in the df below, </p> <p>id = 4 has 2 rows<br> id = 5 has 3 rows.</p> <pre><code> count id b 4 1568 1 4167 1 5 1100 ...
<p>try this</p> <pre><code>rows = df.groupby("id")["b"].size() </code></pre> <p>output</p> <pre><code>id 4 2 5 3 </code></pre>
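The "more than 5 rows" filter from the question is then one more line; a sketch with toy data (threshold lowered to 2 so the tiny frame has a hit):

```python
import pandas as pd

df = pd.DataFrame({
    "id":    [4, 4, 5, 5, 5],
    "count": [1568, 4167, 1100, 2000, 3000],
})
rows = df.groupby("id").size()           # id 4 -> 2, id 5 -> 3
big_ids = rows[rows > 2].index.tolist()  # question uses rows > 5
```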
python|pandas
1
377,597
59,285,977
Fill cells of a new dataframe without losing some columns of the former one
<p>I have a dataframe with a reference to a commune, a number of vote in this city and the results of a few parties. </p> <pre><code> Comm Votes LPC CPC BQ 0 comm1 1315.0 2.0 424.0 572.0 1 comm2 4682.0 117.0 2053.0 1584.0 2 comm3 2397.0 2.0 40.0 192.0 3 comm4 931.0 ...
<p>I had a major doubt about empty values in the dataframe. I suppose we are talking about this kind of dataframe, where all parties and all communes are listed, but with NULL values here and there. Consider my example:</p> <pre><code> print(df) Comm Votes LPC CPC BQ 0 comm1 1315.0 ...
python|python-3.x|pandas|dataframe
0
377,598
59,316,821
difference between two lists and handle exception if results are null
<p>I have a dataframe with two columns. Each column has a list of items, and I am trying to subtract one column from another as below.</p> <pre><code> test['new'] = test['products'].apply(set) - test['old_products'].apply(set) </code></pre> <p>This works. </p> <p>When there are no elements for the newly created column look...
<p>Handle it with a try/except, as usual. Try something like:</p> <pre><code>try: test['new'] = test['products'].apply(set) - test['old_products'].apply(set) except: test['new'] = 'NA' </code></pre> <p>Check the documentation here: <a href="https://docs.python.org/3/tutorial/errors.html#handling-exceptions" rel="nofollow nor...
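A whole-column try/except marks every row 'NA' as soon as a single row fails; a row-wise variant (assuming `TypeError` is what the missing values raise, e.g. `set(None)`) keeps the good rows:

```python
import pandas as pd

test = pd.DataFrame({
    "products":     [["a", "b"], ["c"], None],
    "old_products": [["a"],      ["c"], ["d"]],
})

def diff(row):
    # hypothetical per-row helper: 'NA' only for the rows that fail
    try:
        return set(row["products"]) - set(row["old_products"])
    except TypeError:  # e.g. set(None) on a missing value
        return "NA"

test["new"] = test.apply(diff, axis=1)
```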
python-3.x|pandas
1
377,599
59,097,774
Process the data in the database and write to a new table
<p>The problem I'm having is processing a table in the database and then merging all to write to another table.</p> <p>The table structure is like this:</p> <p><a href="https://i.stack.imgur.com/eI0S0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eI0S0.jpg" alt="enter image description here"></a>...
<p>Refer the below answer, </p> <p><a href="https://stackoverflow.com/questions/20167194/insert-a-pandas-dataframe-into-mongodb-using-pymongo">Insert a Pandas Dataframe into mongodb using PyMongo</a></p> <p><a href="https://stackoverflow.com/a/20167984/3704501">https://stackoverflow.com/a/20167984/3704501</a></p> <p...
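The core of the linked answers is converting the DataFrame to a list of dicts; that part is runnable without MongoDB (the pymongo call is left commented, and the database/collection names are placeholders):

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "b"], "value": [1, 2]})
records = df.to_dict(orient="records")  # one dict per row

# with pymongo (not run here):
# from pymongo import MongoClient
# MongoClient()["mydb"]["mytable"].insert_many(records)
```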
python|pandas|mongodb
0