| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, −10–5.87k) |
|---|---|---|---|---|---|---|
700 | 49,622,836 | Search for a keyword in a user's tweets | <p>Whenever I run the code below, it gives me the most recent 10 tweets from <code>@Spongebob</code>, instead of giving me the tweets that include "<code>Bikini Bottom</code>" from the last 10 tweets. How do I make it conditional to the keyword?</p>
<pre><code>user = api.get_user('SpongeBob')
public_tweets = api.user_... | <p>You need to use the <a href="https://developer.twitter.com/en/docs/tweets/search/api-reference/get-search-tweets" rel="nofollow noreferrer">Twitter Search API</a>, with the correct <a href="https://developer.twitter.com/en/docs/tweets/search/guides/standard-operators" rel="nofollow noreferrer">search operators</a>.<... | pandas|api|twitter|tweepy | 1 |
701 | 49,776,841 | How to split a dataframe column into two other columns in pandas? | <p>I am trying to split the column named "variable" into two other columns, "Type" and "Parameter"</p>
<pre><code> BatchNumber PhaseNumber SiteID variable Values
0 4552694035 0020B 2 min_tempC 27.0
1 4552694035 OverAll 2 max_tempF 24.0
</code></pre>
<p>I tried to use below code</p>... | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pop.html" rel="nofollow noreferrer"><code>DataFrame.pop</code></a> for extract column with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>split</code></a> ... | python|pandas|dataframe | 2 |
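A runnable sketch of the linked `DataFrame.pop` + `str.split` approach, rebuilt from the question's sample data (splitting on the first underscore is my assumption about the intended rule):

```python
import pandas as pd

df = pd.DataFrame({
    "BatchNumber": [4552694035, 4552694035],
    "PhaseNumber": ["0020B", "OverAll"],
    "SiteID": [2, 2],
    "variable": ["min_tempC", "max_tempF"],
    "Values": [27.0, 24.0],
})

# pop() removes "variable" from df and returns it as a Series;
# n=1 splits only on the first underscore, expand=True gives two columns
df[["Type", "Parameter"]] = df.pop("variable").str.split("_", n=1, expand=True)
print(df)
```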
702 | 67,223,048 | From specific row in a df_a, to count its occurrences in the past a year in df_b | <p>I have two dataframe as below and I want to return how many Success (Yes) in a year (for a specific person) 1 year prior to his/her specific date, i.e. each entry in <code>to check</code> to define the range in <code>history</code>.</p>
<p>For example, in <code>to_check</code>, Mike 20200602, I want to know how many... | <p>Use <code>merge</code> and <code>query</code>, but I would suggest leaving the dates as numbers for easy offsetting:</p>
<pre><code># both `Check` and `History` are numbers, not dates
(df_check.merge(df_history, on='Name', how='left')
.query('History<=Check<History+10000')
.groupby('Name').agg({'History':'first... | python|pandas|dataframe | 3 |
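The answer's merge-and-query idea, fleshed out with hypothetical data (the history rows and the `Success` counting are my reconstruction; dates stay as `yyyymmdd` integers, so subtracting 10000 approximates one year):

```python
import pandas as pd

df_check = pd.DataFrame({"Name": ["Mike"], "Check": [20200602]})
df_history = pd.DataFrame({
    "Name": ["Mike", "Mike", "Mike"],
    "History": [20190701, 20200101, 20180101],
    "Success": ["Yes", "Yes", "Yes"],
})

# keep only history rows within the year before each check date,
# then count successes per person
out = (df_check.merge(df_history, on="Name", how="left")
       .query("History <= Check < History + 10000")
       .groupby("Name")["Success"].count())
print(out)
```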
703 | 67,327,268 | Split a Pandas column with lists of tuples into separate columns | <p>I have data in a pandas dataframe and I'm trying to separate and extract data out of a specific column <code>col</code>. The values in <code>col</code> are all lists of various sizes that store 4-value tuples (previous 4 key-value dictionaries). These values are always in the same relative order for the tuple.</p>
<... | <p>Here is another solution, you can try out using <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html#pandas-dataframe-explode" rel="nofollow noreferrer"><code>explode</code></a> + <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html#pandas.concat" rel="nofollow nore... | python|pandas|dataframe|numpy | 3 |
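The `explode` + `concat` combination the answer links to might look like this (data and column names are invented for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "id": [1, 2],
    "col": [[("a", 1, 2, 3), ("b", 4, 5, 6)], [("c", 7, 8, 9)]],
})

# one row per tuple, then expand each 4-value tuple into its own columns
exploded = df.explode("col", ignore_index=True)
expanded = pd.concat(
    [exploded.drop(columns="col"),
     pd.DataFrame(exploded["col"].tolist(), columns=["k", "v1", "v2", "v3"])],
    axis=1,
)
print(expanded)
```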
704 | 67,588,650 | Pandas Grouper "Cumulative" sum() | <p>I'm trying to calculate the cumulative total for the next 4 weeks.</p>
<p>Here is an example of my data frame</p>
<pre><code>d = {'account': [10, 10, 10, 10, 10, 10, 10, 10],
'volume': [25, 60, 40, 100, 50, 100, 40, 50]}
df = pd.DataFrame(d)
df['week_starting'] = pd.date_range('05/02/2021',
... | <p>This should work:</p>
<pre><code>df['volume_next_4_weeks'] = [sum(df['volume'][i:i+4]) for i in range(len(df))]
</code></pre>
<p>For the other column showing the addition as <code>string</code>, I have stored the values in a list using the same logic above but not applying sum and then joining the list elements as ... | python|pandas|numpy|pandas-groupby|cumsum | 0 |
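Putting the question's frame and the answer's list comprehension together (the `freq='W'` date range is an assumption about the truncated setup):

```python
import pandas as pd

d = {"account": [10, 10, 10, 10, 10, 10, 10, 10],
     "volume": [25, 60, 40, 100, 50, 100, 40, 50]}
df = pd.DataFrame(d)
df["week_starting"] = pd.date_range("05/02/2021", periods=8, freq="W")

# sum of the current week plus the following three (fewer near the end)
df["volume_next_4_weeks"] = [sum(df["volume"][i:i+4]) for i in range(len(df))]
print(df)
```

A vectorized alternative with the same result is `df["volume"][::-1].rolling(4, min_periods=1).sum()[::-1]`.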
705 | 59,927,915 | Pandas: get unique elements then merge | <p>I think this should be simple, but I'm having difficulty searching for solutions to this problem, perhaps because I don't know the best vocabulary. But to illustrate, say I have three data frames:</p>
<p><code>df1 = df({'id1':['1','2','3'], 'val1':['a','b','c']})</code></p>
<p><code>df2 = df({'id2':['1','2','4'], ... | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>DataFrame.set_index</code></a> for all... | python|pandas|dataframe|merge | 4 |
706 | 60,210,750 | How can I solve this problem nested renamer is not supported | <pre><code>SpecificationError Traceback (most recent call last)
<ipython-input-42-d850d85f8342> in <module>
----> 1 train_label=extract_feature(train,train_label)
<ipython-input-33-23ab8dbf7d96> in extract_feature(df, train)
1 def extract_feature(df,train):
----> 2 ... | <p>The term 'std' appears twice in</p>
<p>t=groupy_feature(df,'ship','x',['max','min','mean','std','median','std','skew','sum'])</p> | python|pandas | 0 |
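The point is that the aggregation list passed to `agg` contains `'std'` twice, which is what triggers the nested-renamer/`SpecificationError` complaint; deduplicating the list fixes it. A small demonstration (the `ship`/`x` frame is invented):

```python
import pandas as pd

df = pd.DataFrame({"ship": ["A", "A", "B"], "x": [1.0, 2.0, 3.0]})

# duplicate function names in the agg list raise an error
try:
    df.groupby("ship")["x"].agg(
        ["max", "min", "mean", "std", "median", "std", "skew", "sum"])
except Exception as e:
    print("failed:", e)

# with the second 'std' removed, the aggregation works
ok = df.groupby("ship")["x"].agg(
    ["max", "min", "mean", "std", "median", "skew", "sum"])
print(ok)
```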
707 | 65,163,818 | Transpose part of dataframe Python | <p>I have a dataset as below:</p>
<pre class="lang-py prettyprint-override"><code>>>>df = pd.DataFrame(
[
["site1", "2020-12-05T15:50:00", "0", "0"],
["site1", "2020-12-05T15:55:00", "0.5", "0"],
["... | <p>use melt:</p>
<pre><code>df.melt(id_vars=['code','site_time']).rename(columns={'variable':'trace'}).sort_values(by=['code','trace',])
</code></pre>
<p>desired result:</p>
<pre><code> code site_time trace value
0 site1 2020-12-05T15:50:00 r1 0
1 site1 2020-12-05T15:55:00 r1 0.5
4 site1 202... | python|pandas|dataframe|transpose | 2 |
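The answer's `melt` chain, made runnable with the question's sample rows (I've assumed the value columns are named `r1` and `r2`, matching the `trace` values in the shown output):

```python
import pandas as pd

df = pd.DataFrame(
    [["site1", "2020-12-05T15:50:00", "0", "0"],
     ["site1", "2020-12-05T15:55:00", "0.5", "0"]],
    columns=["code", "site_time", "r1", "r2"],
)

# wide -> long: one row per (site_time, trace) pair
out = (df.melt(id_vars=["code", "site_time"])
         .rename(columns={"variable": "trace"})
         .sort_values(by=["code", "trace"]))
print(out)
```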
708 | 65,280,524 | Matplotlib: Why do interpolated points fall outside the plotted line? | <p>I have recreated a common geoscientific plot using Matplotlib. It shows the grain size distribution of a soil sample and is used for soil classification.</p>
<p>Basically, a soil sample is placed in a stack of sieves, which is then shaked for a certain amount of time, and the remaining weight of each grain fraction ... | <p>The problem is that you are using linear interpolation to find the points, while the plot has straight lines on a log scale. This can be accomplished via interpolation in log space:</p>
<pre class="lang-py prettyprint-override"><code>def interpolate(yval, df, xcol, ycol):
return np.exp(np.interp([yval], df[ycol... | python|pandas|numpy|matplotlib | 3 |
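A guess at the completed helper: the cut-off line presumably interpolates x (grain size) in log space and exponentiates back, matching straight segments on a log-x axis. The sample sieve data is invented:

```python
import numpy as np
import pandas as pd

def interpolate(yval, df, xcol, ycol):
    # y (percent passing) is linear; x is interpolated in log space
    return np.exp(np.interp([yval], df[ycol], np.log(df[xcol])))[0]

df = pd.DataFrame({"size": [0.001, 0.01, 0.1, 1.0],
                   "passing": [10, 35, 70, 100]})
print(interpolate(35, df, "size", "passing"))
```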
709 | 65,157,499 | How do i specify to a model what to take as input of a custom loss function? | <p>I'm having issues in understanding/implementing a custom loss function in my model.</p>
<p>I have a keras model which is composed by 3 sub models as you can see here in the model architecture,</p>
<p><a href="https://i.stack.imgur.com/l1gJs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/l1gJs.png... | <p>You can add loss with <code>compile()</code> only for standard loss function signature (y_true, y_pred). You can not use it because your signature is something like (y_true, (y_pred1, y_pred2)). Use <code>add_loss()</code> API instead. See here: <a href="https://keras.io/api/losses/" rel="nofollow noreferrer">https:... | python|tensorflow|keras|deep-learning|loss-function | 0 |
710 | 65,380,768 | TensorflowJS: how to reset input/output shapes for pretrained model in TFJS | <p>For the <a href="https://github.com/HasnainRaz/Fast-SRGAN/blob/master/models/generator.h5" rel="nofollow noreferrer">pre-trained model in python</a> we can reset input/output shapes:</p>
<pre><code>from tensorflow import keras
# Load the model
model = keras.models.load_model('models/generator.h5')
# Define arbitra... | <p>To define a new model from the layers of the previous model, you need to use <code>tf.model</code></p>
<pre><code>this.model = tf.model({inputs: inputs, outputs: outputs});
</code></pre> | tensorflow|tensorflow.js|tensorflowjs-converter | 1 |
711 | 65,431,015 | Object Detection Few-Shot training with TensorflowLite | <p>I am trying to create a mobile app that uses object detection to detect a specific type of object. To do this I am starting with the <a href="https://github.com/tensorflow/examples/tree/master/lite/examples/object_detection/android" rel="nofollow noreferrer">Tensorflow object detection example Android app</a>, which... | <p>The fix apparently was <a href="https://github.com/tensorflow/examples/compare/master...cachvico:darren/fix-od" rel="nofollow noreferrer">https://github.com/tensorflow/examples/compare/master...cachvico:darren/fix-od</a> - .tflite files can now be zip files including the labels, but the example app doesn't work with... | tensorflow2.0|object-detection|tensorflow-lite | 0 |
712 | 65,161,630 | Getting the index of a timestamp element in a pandas data frame | <p>I have a pandas data frame that I created as follows:</p>
<pre><code>dates = pd.date_range('12-01-2020','12-10-2020')
my_df = pd.DataFrame(dates, columns = ['Date'])
</code></pre>
<p>So this gives</p>
<pre><code> Date
0 2020-12-01
1 2020-12-02
2 2020-12-03
3 2020-12-04
4 2020-12-05
5 2020-12-06
6 2020-12-07
7... | <p>You can also use <code>query</code> without having to reset the index</p>
<pre><code>my_df.query("Date == '2020-12-05'").index.values[0]
</code></pre>
<p>or if you want to assign the value to search:</p>
<pre><code>d = pd.to_datetime('12-05-2020')
my_df.query("Date == @d").index.values[0]
</code... | python|pandas|dataframe|indexing|timestamp | 1 |
713 | 65,324,352 | Pandas df.equals() returning False on identical dataframes? | <p>Let <code>df_1</code> and <code>df_2</code> be:</p>
<pre><code>In [1]: import pandas as pd
...: df_1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
...: df_2 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
In [2]: df_1
Out[2]:
a b
0 1 4
1 2 5
2 3 6
</code></pre>
<p>We add a row <code>r</code> to ... | <p>This again is a subtle one, well done for spotting it.</p>
<pre><code>import pandas as pd
df_1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df_2 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
r = pd.DataFrame({'a': ['x'], 'b': ['y']})
df_1 = df_1.append(r, ignore_index=True)
df_1 = pd.concat([df_1, r]).drop_du... | python|pandas|dataframe|equals|dtype | 10 |
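The (truncated) answer is most likely pointing at dtypes: `equals()` requires identical dtypes, not just equal-looking values, which is how two "identical" frames can compare unequal after an append. A minimal illustration of that dtype sensitivity:

```python
import pandas as pd

df_a = pd.DataFrame({"a": [1, 2, 3]})
df_b = pd.DataFrame({"a": [1.0, 2.0, 3.0]})

# element-wise the values compare equal...
print((df_a["a"] == df_b["a"]).all())
# ...but equals() also compares dtypes (int64 vs float64 here)
print(df_a.equals(df_b))
```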
714 | 65,221,982 | How to clear a numpy array? | <p>How can I clear a one dimensional numpy array?</p>
<p>if it was a list, I would do this:</p>
<pre><code>my_list = []
</code></pre>
<p>How to do this with numpy?</p>
<p><strong>edited:</strong></p>
<p>What I mean by clearing is to remove all elements from the array.
And for list I had to use <code>my_list.clear()</... | <p>What do you mean by 'clear'? The code</p>
<pre><code>my_list = []
</code></pre>
<p>just overwrites whatever was stored as <code>my_list</code> previously, regardless whether it was a list, array or whatever. If that is what you wish to do, you use the exact same syntax if <code>my_list</code> was an array.</p>
<p>If... | python|numpy | 1 |
715 | 50,125,055 | Merging multiple CSV files and dropping duplicates by field | <p>I need to match data from multiple CSV files.
For example, if I have three CSV files.</p>
<p>input 1 csv</p>
<pre class="lang-none prettyprint-override"><code>PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E 80th St
</code></pre>
<... | <p>With <code>pandas</code>, you can use <code>pd.concat</code> followed by <code>pd.drop_duplicates</code>:</p>
<pre><code>import pandas as pd
from io import StringIO
str1 = StringIO("""PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E ... | python|python-3.x|pandas|csv | 1 |
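The `pd.concat` + `drop_duplicates` recipe, completed with the question's sample rows (the column names and the choice to de-duplicate on the first field are assumptions):

```python
import pandas as pd
from io import StringIO

str1 = StringIO("""PANYNJ LGA WEST 1,available, LGA West GarageFlushing
PANYNJ LGA WEST 4,unavailable,LGA West Garage
iPark - Tesla,unavailable,530 E 80th St""")
str2 = StringIO("""PANYNJ LGA WEST 1,unavailable, LGA West GarageFlushing
iPark - Tesla,available,530 E 80th St""")

cols = ["name", "status", "address"]
df = pd.concat(pd.read_csv(s, header=None, names=cols) for s in (str1, str2))

# keep the first occurrence of each name across all files
out = df.drop_duplicates(subset="name")
print(out)
```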
716 | 50,077,712 | Replacing 2D subarray in 3D array if condition is met | <p>I have a matrix that looks like this:</p>
<pre><code>a = np.random.rand(3, 3, 3)
[[[0.04331462, 0.30333583, 0.37462236],
[0.30225757, 0.35859228, 0.57845153],
[0.49995805, 0.3539933, 0.11172398]],
[[0.28983508, 0.31122743, 0.67818926],
[0.42720309, 0.24416101, 0.5469823 ],
[0.22894097, 0.76159389, 0.804... | <p>Here is a vectorized way to get what you want.<br>
Taking <code>a</code> from your example:</p>
<pre><code>a[(a < 0.2).any(axis=1).any(axis=1)] = 0.2
print(a)
</code></pre>
<p>gives:</p>
<pre><code>array([[[ 0.2 , 0.2 , 0.2 ],
[ 0.2 , 0.2 , 0.2 ],
[ 0.2 ... | python|numpy|vectorization | 3 |
717 | 46,964,363 | Filtering out outliers in Pandas dataframe with rolling median | <p>I am trying to filter out some outliers from a scatter plot of GPS elevation displacements with dates</p>
<p>I'm trying to use df.rolling to compute a median and standard deviation for each window and then remove the point if it is greater than 3 standard deviations.</p>
<p>However, I can't figure out a way to loo... | <p>Just filter the dataframe</p>
<pre><code>df['median']= df['b'].rolling(window).median()
df['std'] = df['b'].rolling(window).std()
#filter setup
df = df[(df.b <= df['median']+3*df['std']) & (df.b >= df['median']-3*df['std'])]
</code></pre> | pandas|median|outliers|rolling-computation | 17 |
718 | 46,640,945 | Grouping by multiple columns to find duplicate rows pandas | <p>I have a <code>df</code></p>
<pre><code>id val1 val2
1 1.1 2.2
1 1.1 2.2
2 2.1 5.5
3 8.8 6.2
4 1.1 2.2
5 8.8 6.2
</code></pre>
<p>I want to group by <code>val1 and val2</code> and get similar dataframe only with rows which has multiple occurance of sa... | <p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.duplicated.html" rel="noreferrer"><code>duplicated</code></a> with parameter <code>subset</code> for specify columns for check with <code>keep=False</code> for all duplicates for mask and filter by <a href="http://pandas.pydata.... | python|pandas|dataframe | 57 |
719 | 67,990,177 | Use pandas to determine how many of the previous rows are required to sum to a threshold | <p>I have a pandas data frame and I would like to create a new column <code>desired_col</code>, which contains the required number of rows in <code>col1</code> to sum >= a threshold value.</p>
<p>For example if I choose my threshold value to be 10 and I have</p>
<pre><code>d = {'col1': [10,1,2,6,1,6,4,3,2,3,1,1,4,5,... | <p>This code will produce your desired output:</p>
<pre><code>import pandas as pd
d = {'col1': [10,1,2,6,1,6,4,3,2,3,1,1,4,5,5]}
df = pd.DataFrame(data=d)
target = []
for i in range(len(df)):
element = df.iloc[i].col1
print(element)
s = element
threshold = 10
j=i
steps = 0
while s < thre... | python|pandas|dataframe | 0 |
720 | 61,190,366 | Creating several weight tensors for each object in Multi-Object Tracking (MOT) using TensorFlow | <p>I am using TensorFlow V1.10.0 and developing a Multi-Object Tracker based on MDNet. I need to assign a separate weight matrix for each detected object for the fully connected layers in order to get different embedding for each object during online training. I am using this tf.map_fn in order to generate a higher-ord... | <p>Here is a workaround, I was able to generate the multiple kernels outside the graph in a for loop and then giving it to the graph:</p>
<pre><code>w6 = []
for n_obj in range(pos_data.shape[0]):
w6.append(tf.get_variable("fc6/kernel-" + str(n_obj), shape=(512, 2),
initializer=tf.contrib.l... | tensorflow | 0 |
721 | 61,401,720 | Return a matrix by applying a boolean mask (a boolean matrix of same size) in python | <p>I have generated a square matrix of size 4 and a boolean matrix of same size by:</p>
<pre><code>import numpy as np
A = np.random.randn(4,4)
B = np.full((4,4), True, dtype = bool)
B[[0],:] = False
B[:,[0]] = False
</code></pre>
<p>The following code return two matrices of size 4, A has all the random numbers, and ... | <pre><code>In [214]: A = np.random.randn(4,4)
...: B = np.full((4,4), True, dtype = bool)
...: B[[0],:] = False
...: B[:,[0]] = False
In [215]: A ... | python-3.x|numpy|boolean|boolean-logic | 0 |
722 | 68,837,542 | Dividing by Rows in Pandas | <p>I have a dataframe like this:</p>
<pre><code>df = pd.DataFrame()
df['Stage'] = ['A','B','A','B']
df['Value'] = ['3','0','2','4']
Stage Value
A 3
B 0
A 2
B 4
</code></pre>
<p>I want to be able to transform it into something like this:</p>
<pre><code>df = pd.DataFrame()
df['... | <p>You can <code>unstack</code> and <code>eval</code>:</p>
<pre><code>df['Value'] = df['Value'].astype(int)
df['group'] = df.groupby('Stage').cumcount()
df.set_index(['group', 'Stage'])['Value'].unstack().eval('B/A')
</code></pre>
<p>output:</p>
<pre><code>group
0 0.0
1 2.0
Name: Ratio, dtype: float64
</code></pr... | python|pandas | 1 |
723 | 68,631,781 | pandas Series: Remove and replace NaNs with interpolated data point | <p>I have a pandas Series and I have currently just resampled it using</p>
<pre><code>signal = pd.Series(thick, index = pd.TimedeltaIndex(time_list_thick,unit = 's'))
resampled_signal = signal.resample('1S').mean()
</code></pre>
<p>However, my resampled data contains NaNs which I would like to remove:</p>
<pre><code>00... | <p>You can use <code>df.interpolate()</code> to perform this operation. Additionally, the <code>time</code> method can be used in order to take time-based index into account.</p>
<p>As follows:</p>
<pre><code>import pandas as pd
import datetime
import numpy as np
todays_date = datetime.datetime.now().date()
index = p... | python|pandas|dataframe|interpolation|series | 0 |
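A small sketch of `interpolate` with the time-aware method the answer mentions (sample data invented; `method='time'` needs a `DatetimeIndex`):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2021-01-01", periods=4, freq="s")
signal = pd.Series([1.0, np.nan, np.nan, 4.0], index=idx)

# 'time' weights the interpolation by the gaps in the index
filled = signal.interpolate(method="time")
print(filled)
```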
724 | 53,155,749 | Replace elements in numpy array avoiding loops | <p>I have a quite large 1d numpy array Xold with given values. These values shall be
replaced according to the rule specified by a 2d numpy array Y:
An example would be</p>
<pre><code>Xold=np.array([0,1,2,3,4])
Y=np.array([[0,0],[1,100],[3,300],[4,400],[2,200]])
</code></pre>
<p>Whenever a value in Xold is identica... | <p><strong>SELECTING THE FASTEST METHOD</strong></p>
<p>Answers to this question provided a nice assortment of ways to replace elements in numpy array. Let's check, which one would be the quickest. </p>
<p><em>TL;DR:</em> Numpy indexing is the winner</p>
<pre><code> def meth1(): # suggested by @Slam
for old, new... | python|numpy|for-loop|numpy-slicing | 5 |
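The "numpy indexing" winner the answer refers to can be realized with a lookup table, valid here because the old values are small non-negative integers:

```python
import numpy as np

Xold = np.array([0, 1, 2, 3, 4])
Y = np.array([[0, 0], [1, 100], [3, 300], [4, 400], [2, 200]])

# lookup[v] holds the replacement for value v
lookup = np.zeros(Xold.max() + 1, dtype=Y.dtype)
lookup[Y[:, 0]] = Y[:, 1]
Xnew = lookup[Xold]
print(Xnew)
```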
725 | 52,910,615 | Python: Using a list with TF-IDF | <p>I have the following piece of code that currently compares all the words in the 'Tokens' with each respective document in the 'df'. Is there any way I would be able to compare a predefined list of words with the documents instead of the 'Tokens'. </p>
<pre><code>from sklearn.feature_extraction.text import TfidfVect... | <p>Not sure if I understand you correctly, but if you want to make the Vectorizer consider a fixed list of words, you can use the <code>vocabulary</code> parameter.</p>
<pre><code>my_words = ["foo","bar","baz"]
# set the vocabulary parameter with your list of words
tfidf_vectorizer = TfidfVectorizer(
norm=None,
... | python|pandas|text|tf-idf|tfidfvectorizer | 1 |
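A compact version of the `vocabulary` idea (documents invented):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["foo bar baz foo", "bar qux"]
my_words = ["foo", "bar", "baz"]

# only the predefined words become features; anything else is ignored
tfidf_vectorizer = TfidfVectorizer(vocabulary=my_words)
X = tfidf_vectorizer.fit_transform(docs)
print(X.shape)                        # one column per word in my_words
print(tfidf_vectorizer.vocabulary_)
```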
726 | 65,774,348 | how to store title of two urls in an excel file | <pre><code>import bs4
from bs4 import BeautifulSoup
from pandas.core.base import DataError
from pandas.core.frame import DataFrame
import requests
import pandas as pd
from fake_useragent import UserAgent
urls = ['https://www.digikala.com/search/category-mobile', 'https://www.digikala.com/search/category-tablet-eboo... | <p>Your <code>for url in urls</code> is indeed iterating over both urls, however the <code>ex.to_excel('sasa.xlsx', index=False)</code> line will overwrite <code>'sasa.xlsx'</code> on the second loop.</p>
<p>I would recommend either:</p>
<ul>
<li>Changing the filename on the second loop, or</li>
<li>Writing the results... | python-3.x|pandas|python-requests | 1 |
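The second suggestion — accumulate inside the loop and write once after it — as a pattern (URLs and titles are placeholders; `to_csv` stands in for the question's `to_excel`, which needs an Excel writer installed):

```python
import pandas as pd

frames = []
for url in ["https://example.com/a", "https://example.com/b"]:
    titles = [f"title scraped from {url}"]   # stand-in for real scraping
    frames.append(pd.DataFrame({"title": titles}))

# one concatenated frame, one write -- nothing gets overwritten
result = pd.concat(frames, ignore_index=True)
result.to_csv("sasa.csv", index=False)
print(len(result))
```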
727 | 65,757,175 | Saving image from numpy array gives errors | <p>this function is to show image of adversarial and its probability, I only want to download the image.</p>
<pre><code>def visualize(x, x_adv, x_grad, epsilon, clean_pred, adv_pred, clean_prob, adv_prob):
x = x.squeeze(0) #remove batch dimension # B X C H X W ==> C X H X W
x = x.mul(torch.FloatTens... | <p>Try changing this:</p>
<pre><code>im = Image.fromarray(x_adv)
</code></pre>
<p>to this:</p>
<pre><code>im = Image.fromarray((x_adv * 255).astype(np.uint8))
</code></pre> | python|numpy|matplotlib|numpy-ndarray | 0 |
728 | 65,491,886 | class wise object detection product count in tensorflow 2 | <p>Can we count detected objects in <strong>Tensorflow 2</strong> object detection API class wise?</p>
<p>As I am new to this, I am having a hard time manipulating the output of the object detection model according to my use case described below</p>
<p>Say you have two classes tomato and potato in a super market shelf stock,... | <p>Check out his link: <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset" rel="nofollow noreferrer">https://www.tensorflow.org/api_docs/python/tf/data/Dataset</a></p>
<p>Here, you can find how to iterate over Dataset using ".as_numpy_iterator()", but also how to use different methods to mani... | python|tensorflow|object-detection-api|tensorflow2 | 0 |
729 | 63,382,734 | Comparing strings within two columns in pandas with SequenceMatcher | <p>I am trying to determine the similarity of two columns in a pandas dataframe:</p>
<pre><code>Text1 All
Performance results achieved by the approaches submitted to this Challenge. The six top approaches and three others outperform the s... | <ul>
<li><a href="https://docs.python.org/3/library/difflib.html#sequencematcher-objects" rel="nofollow noreferrer"><code>SequenceMatcher</code></a> isn't designed for a pandas series.</li>
<li>You could <code>.apply</code> the function.</li>
<li><a href="https://docs.python.org/3/library/difflib.html#sequencematcher-e... | python|pandas|nlp|sequencematcher | 1 |
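Applying `SequenceMatcher` row-wise with `.apply`, as the bullet points suggest (two toy rows for illustration):

```python
import pandas as pd
from difflib import SequenceMatcher

df = pd.DataFrame({"Text1": ["performance results", "quick brown fox"],
                   "All":   ["performance results achieved", "lazy dog"]})

# SequenceMatcher compares one pair of strings, so apply it per row
df["similarity"] = df.apply(
    lambda r: SequenceMatcher(None, r["Text1"], r["All"]).ratio(), axis=1)
print(df)
```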
730 | 53,497,334 | Error in groupby by timedate | <p>I have a sample dataset with 2 columns: Dates and eVal like this: </p>
<pre><code> eVal Dates
0 3.622833 2015-01-01
1 3.501333 2015-01-01
2 3.469167 2015-01-01
3 3.436333 2015-01-01
4 3.428000 2015-01-01
5 3.400667 2015-01-01
6 3.405667 2015-01-01
7 ... | <p>You can use the code line below.</p>
<blockquote>
<p>time.groupby('Dates').mean()</p>
</blockquote>
<p>I have tried this on your sample and below are sample output.</p>
<pre><code>eVal Dates
2015-01-01 3.506160
2015-01-02 3.450111
</code></pre> | python-3.x|mean|pandas-groupby | 1 |
731 | 53,580,946 | Problems with shifting a matrix | <p>this is my code and I have a few problems:</p>
<pre><code>import numpy as np
from scipy.ndimage.interpolation import shift
index_default = np.array([2, 4])
b = np.zeros((5, 5), dtype=float)
f = np.array([[0, 0, 0, 0, 1],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 1],
[0, 0, 0, 0, 0],
... | <p>Change the b+= to b=.</p>
<pre><code>b+= ... # corresponds to b = b + add(m_shift,b)
# net effect is that b = b + m_shift + b I think you intend b=b+m_shift
b+=m_shift # probably works too.
</code></pre> | python|arrays|numpy|shift | 0 |
732 | 71,996,972 | Amend a column in pandas dataframe based on the string value in another column | <p>I have a text column with chat information. I want to pass in a list of words that, if they appear in that text column, will amend the error column to 1, and 0 if the words do not appear:</p>
<pre><code>chatid text card_declined booking_error website_error
401 hi my card declined.. 0 ... | <p>You can assign converted boolean Series to integers, because some upercase values in list is added <code>case=False</code> parameter in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a>:</p>
<pre><code>ca... | python|pandas|dataframe | 1 |
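The `str.contains(..., case=False)` idea in miniature (word list and chat rows invented):

```python
import pandas as pd

df = pd.DataFrame({"chatid": [401, 402],
                   "text": ["hi my card declined..", "the website crashed"]})

words = ["card declined", "Card error"]
pattern = "|".join(words)

# case=False ignores capitalisation; the boolean mask casts to 0/1
df["card_declined"] = df["text"].str.contains(pattern, case=False).astype(int)
print(df)
```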
733 | 56,544,931 | Replace entry in specific numpy array stored in dictionary | <p>I have a dictionary containing a variable number of numpy arrays (all same length), each array is stored in its respective key. </p>
<p>For each index I want to replace the value in one of the arrays by a newly calculated value. (This is a very simplyfied version what I'm actually doing.)</p>
<p>The problem is tha... | <pre class="lang-py prettyprint-override"><code>>>> hex(id(example_dict["key1"]))
'0x26a543ea990'
>>> hex(id(example_dict["key2"]))
'0x26a543ea990'
</code></pre>
<p><code>example_dict["key1"]</code> and <code>example_dict["key2"]</code> are pointing at the same address. To fix this, you can use a dic... | python|dictionary|numpy-ndarray | 1 |
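What "pointing at the same address" means in practice, and a copy-per-key fix in the spirit of the answer (the dict comprehension with copies is my phrasing of it):

```python
import numpy as np

base = np.zeros(3)
shared = {"key1": base, "key2": base}     # both keys -> same array object
shared["key1"][0] = 99.0
print(shared["key2"][0])                  # changed too!

# give every key its own copy to decouple them
separate = {k: base.copy() for k in ("key1", "key2")}
separate["key1"][1] = 5.0
print(separate["key2"][1])                # untouched
```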
734 | 67,034,981 | Pandas dataframe values and row condition both depend on other columns | <p>I have a Pandas DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'col1': ['a','a','b','b'],
'col2': [1,2,3,4],
'col3': [11,12,13,14]})
col1 col2 col3
0 a 1 11
1 a 2 12
2 b 3 13
3 b 4 14
</code></pre>
<p>I need to replace an entry in <co... | <p>Use <code>DataFrame.iloc</code></p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'col1': ['a', 'a', 'b', 'b'], 'col2': [1, 2, 3, 4], 'col3': [11, 12, 13, 14]})
df.loc[df['col1'] == 'b', 'col2'] = df['col3'] * np.exp(df['col2'])
print(df)
</code></pre>
<p>Giving the correct</p>
<pre><code> ... | python|pandas|dataframe|numpy|slice | 1 |
735 | 68,383,897 | Why is this numba code to store and process pointcloud data slower than the pure Python version? | <p>I need to store some data structure like that:</p>
<pre><code>{'x1,y1,z1': [[p11_x,p11_y,p11_z], [p12_x,p12_y,p12_z], ..., [p1n_x,p1n_y,p1n_z]],
'x2,y2,z2': [[p21_x,p21_y,p21_z], [p22_x,p22_y,p22_z], ..., [p2n_x,p2n_y,p2n_z]],
...
'xn,yn,zn': [[pn1_x,pn1_y,pn1_z], [pn2_x,pn2_y,pn2_z], ..., [pnm_x,pnm_y,pnm_z]]}
<... | <p>The pure Python version append items in <code>O(1)</code> time thanks to the dictionary container while the Numba version use a O(n) array search (bounded by 50). Moreover, <code>np.zeros(shape=(100,100,100,50,3))</code> allocate an array of about 1 GiB which resulting in many cache misses during the computation whi... | python|numpy|performance|numba | 2 |
736 | 68,116,678 | pandas to_json , django and d3.js for visualisation | <p>I have a pandas dataframe that I converted to json in order to create graphs and visualize with d3.js so I would like to know how to send this json format obtained in django (in the view or template) in order to visualize with d3.js</p>
<pre><code>def parasol_view(request):
parasol = function_parasol()
paras... | <p>I'm not sure what the parasol.to_html is for, therefore I left that part untouched.
But this is what I would do in order to use your .json file:</p>
<p>Views.py:</p>
<pre><code>def parasol_view(request):
parasol = function_parasol()
# parasol_json = parasol.to_json(orient='records')
parasol = parasol.to_... | json|django|pandas|d3.js | 0 |
737 | 68,138,679 | GridSearchCV results heatmap | <p>I am trying to generate a heatmap for the GridSearchCV results from sklearn. The thing I like about <a href="https://sklearn-evaluation.readthedocs.io/en/stable/user_guide/grid_search.html" rel="nofollow noreferrer">sklearn-evaluation</a> is that it is really easy to generate the heatmap. However, I have hit one iss... | <p>I fiddled around with the <code>grid_search.py</code> file <code>/lib/python3.8/site-packages/sklearn_evaluation/plot/grid_search.py</code>. At line 192/193 change the lines</p>
<p>From</p>
<pre><code>row_names = sorted(set([t[0] for t in matrix_elements.keys()]),
key=itemgetter(1))
col_names = so... | python|matplotlib|scikit-learn|seaborn|sklearn-pandas | 5 |
738 | 59,391,569 | Is word embedding + other features possible for classification problem? | <p>My task was to create a classifier model for a review dataset. I have 15000 train observations, 5000 dev and 5000 test. </p>
<p>The task specified that 3 features needed to be used: I used <code>TFIDF</code> (5000 features there), <code>BOW</code> (2000 more features) and the <code>review length</code> (1 more feat... | <p>Theoretically <strong>yes</strong>.</p>
<p>Every document (lets say sentences) in your text corpus can be quantified as an array. Adding all these arrays you get a matrix. Lets say that this qunatification was using BOW, now you want to apply word2vec, only thing you need to make sure is that your arrays (quantifie... | python|machine-learning|scikit-learn|text-classification|sklearn-pandas | 0 |
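Mechanically, combining TF-IDF, BOW, and a length column into one matrix is a horizontal concatenation; with sparse text features, `scipy.sparse.hstack` keeps everything sparse (shapes here are invented):

```python
import numpy as np
from scipy.sparse import csr_matrix, hstack

rng = np.random.default_rng(0)
tfidf = csr_matrix(rng.random((4, 5)))       # e.g. 5 TF-IDF features
bow = csr_matrix(rng.random((4, 3)))         # e.g. 3 BOW features
lengths = np.array([[10], [25], [7], [40]])  # review length, one column

X = hstack([tfidf, bow, csr_matrix(lengths)])
print(X.shape)
```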
739 | 59,260,854 | What does required_grad do in PyTorch? (Not requires_grad) | <p>I have been trying to carry out transfer learning on a multiclass classification task using resnet as my backbone.</p>
<p>In many tutorials, it was stated that it would be wise to try and train only the last layer (usually a fully connected layer) again, while freezing the other layers. The freezing would be done a... | <p>Ok, this was really silly.</p>
<pre><code>for param in model.parameters():
param.required_grad = False
</code></pre>
<p>In this case, a new 'required_grad' is created due to the typo I made.
For example, even the following wouldn't raise an error:</p>
<pre><code>for param in model.parameters():
param.wha... | python|pytorch|backpropagation|resnet | 3 |
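The reason the typo slips through is plain Python attribute semantics, visible without PyTorch: assigning to a misspelled attribute silently creates a new one instead of raising.

```python
class Param:
    def __init__(self):
        self.requires_grad = True    # the real flag

p = Param()
p.required_grad = False              # typo: creates a brand-new attribute
print(p.requires_grad)               # still True -- nothing was frozen
```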
740 | 56,966,499 | Feeding Torch objects from csv to learn | <p>I am trying to give PyTorch the input to build a very simple Neural Network. Here is my problem:
I have all the data I want to use in a csv and I am using pandas to read it.
Here is my code: </p>
<pre><code>data = pd.read_csv("../myFile.csv")
input = [x for x in data]
input = np.asarray(input)
input = torch.from_nu... | <p>Have you checked the output of <code>[x for x in data]</code> ? It's just the list of column names, which are of type string. That's why, you are getting above error. Now, I will help you solve your problem using a sample <code>csv</code> file.</p>
<p>Filename: <code>data.csv</code></p>
<pre><code>custID name age ... | python|csv|pytorch|torch | 0 |
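The core of the answer, demonstrable without torch: iterating a DataFrame yields column names, while the numbers live in `.to_numpy()` (which is what `torch.from_numpy` would then consume). The sample columns are invented:

```python
import numpy as np
import pandas as pd

data = pd.DataFrame({"age": [23, 35], "salary": [100.0, 200.0]})

print([x for x in data])                # column names, not rows

arr = data.to_numpy(dtype=np.float32)   # the actual numeric values
print(arr)
# torch.from_numpy(arr) would turn this into a tensor
```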
741 | 46,143,091 | How do I add a new column and insert (calculated) data in one line? | <p>I'm pretty new to python so it's a basic question.</p>
<p>I have data that I imported from a csv file. Each row reflects a person and his data. Two attributes are Sex and Pclass. I want to add a new column (predictions) that is fully depended on those two in one line. If both attributes' values are 1 it should assi... | <p>Use:</p>
<pre><code>np.random.seed(12)
df = pd.DataFrame(np.random.randint(3,size=(10,2)), columns=['Sex','Pclass'])
df['prediction'] = ((df['Sex'] == 1) & (df['Pclass'] == 1)).astype(int)
print (df)
Sex Pclass prediction
0 2 1 0
1 1 2 0
2 0 0 0
3 ... | python|pandas | 1 |
742 | 45,991,452 | Sorting the columns of a pandas dataframe | <pre><code>Out[1015]: gp2
department MOBILE QA TA WEB MOBILE QA TA WEB
minutes minutes minutes minutes growth growth growth growth
period
2016-12-24 NaN NaN 140.0 400.0 NaN NaN 0.0 260.0
2016-12-25... | <p>Just use <code>df.sort_index</code>:</p>
<pre><code>df = df.sort_index(level=[0, 1], axis=1)
print(df)
MOBILE QA TA WEB
growth minutes growth minutes growth minutes growth minutes
period ... | python|pandas|sorting|dataframe | 6 |
743 | 50,712,246 | Pytrends anaconda install conflict with TensorFlow | <p>It seems I have a conflict when trying to install pytrends via anaconda. After submitting "pip install pytrends" the following error arises:</p>
<p>tensorflow-tensorboard 1.5.1 has requirement bleach==1.5.0, but you'll have bleach 2.0.0 which is incompatible.
tensorflow-tensorboard 1.5.1 has requirement html5lib==0... | <p>Try upgrading your version of tensorflow. I tried it with Tensorflow 1.6.0 ,tensorboard 1.5.1 and it worked fine. I was able to import pytrends.</p> | python|tensorflow|anaconda | 1 |
744 | 51,086,721 | convert pandas dataframe of strings to numpy array of int | <p>My input is a pandas dataframe with strings inside:</p>
<pre><code>>>> data
218.0
221.0
222.0
224.0 71,299,77,124
227.0 50,283,81,72
229.0
231.0 84,349
233.0
235.0
240.0 53,254
Name... | <p>You can use <code>apply</code>, this isn't vector operation though</p>
<pre><code>In [277]: df.val.fillna('').apply(
lambda x: np.array(x.split(','), dtype=int).reshape(-1, 2) if x else [])
Out[277]:
0 []
1 []
2 []
3 [[71, 299], [77, 1... | python|pandas|numpy | 3 |
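A runnable sketch of the `fillna`/`apply` split-and-reshape idea from this answer, using a small invented series in the same shape as the question's data:

```python
import numpy as np
import pandas as pd

# Sample column of comma-joined digit strings; NaN rows become empty lists.
s = pd.Series([np.nan, '71,299,77,124', '53,254'])

out = s.fillna('').apply(
    lambda x: np.array(x.split(','), dtype=int).reshape(-1, 2) if x else [])
print(out.iloc[1])  # [[ 71 299]
                    #  [ 77 124]]
```

Note this is row-by-row `apply`, not a vectorized operation, as the answer points out.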
745 | 66,707,461 | Read dataframe values by certain creteria | <p>I'm reading some data using panadas and this is dataframe:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;"></th>
<th style="text-align: center;">PARAMETER</th>
<th style="text-align: right;">VALUE</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">0<... | <p>One way would be using <code>loc</code> like:</p>
<pre><code>float(df.loc[df['PARAMETER']=='Param4']['VALUE']) # locate col PARAMETER and get VALUE
Out[81]: 30.0
# Or
df.loc[df['PARAMETER']=='Param4']['VALUE'].values
Out[94]: array([30.])
</code></pre>
<p>Another way would be to create a <code>dict</code> and acces... | python|pandas|dataframe | 1 |
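Both lookup styles from this answer in one runnable sketch (the parameter table values are made up for illustration):

```python
import pandas as pd

# Two-column parameter table as in the question.
df = pd.DataFrame({'PARAMETER': ['Param1', 'Param4'], 'VALUE': [10.0, 30.0]})

# loc-based lookup of a single value:
v = float(df.loc[df['PARAMETER'] == 'Param4', 'VALUE'].iloc[0])

# ...or build a dict once and look up many times:
lookup = dict(zip(df['PARAMETER'], df['VALUE']))
print(v, lookup['Param4'])  # 30.0 30.0
```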
746 | 57,316,557 | tf.keras.layers.pop() doesn't work, but tf.keras._layers.pop() does | <p>I want to pop the last layer of the model. So I use the <code>tf.keras.layers.pop()</code>, but it doesn't work.</p>
<pre><code>base_model.summary()
</code></pre>
<p><a href="https://i.stack.imgur.com/wRz02.png" rel="noreferrer"><img src="https://i.stack.imgur.com/wRz02.png" alt="enter image description here"></a>... | <p>I agree this is confusing. The reason is that <code>model.layers</code> returns a shallow copy of the layers list so:</p>
<p>The tldr is dont use <code>model.layers.pop()</code> to remove the last layer. Instead we should create a new model with all but the last layer. Perhaps something like this:</p>
<pre class="... | python|tensorflow|tensorflow2.0 | 9 |
747 | 57,655,992 | How to Concatenate Mnist Test and Train Images? | <p>I'm trying to train a Generative Adversarial Network. To train the network I'm using mnist dataset. I will train the network with concatenated test and train images.</p>
<pre><code>import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist=input_data.read_data_sets("... | <p>That's not how you should use <code>numpy.concatenate</code>. You can do it like this:</p>
<pre><code>images = np.concatenate([mnist.test.images, mnist.train.images], axis=0)
</code></pre>
<p>If you go through the <code>numpy.concatenate</code> <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.co... | python|numpy|tensorflow | 1 |
748 | 72,996,661 | Pivot from tabular to matrix with row and column multiindex and given order | <p>Given the following dataframe containing a matrix in tabular format:</p>
<pre><code>values = list(range(5))
var1 = ['01', '02', '03', '0', '0']
var2 = ['a', 'b', 'c', 'd', 'e']
var3 = ['01', '02', '03', '0', '0']
var4 = ['a', 'b', 'c', 'd', 'e']
var5 = ['S1', 'S1','S1', 'S3', 'S2']
var6 = ['P1', 'P1','P1', 'P3', 'P2... | <p>Yes, you can do <code>pd.Categorical</code> with <code>ordered=True</code>:</p>
<pre><code>df['var5'] = pd.Categorical(df['var5'], categories=var5_order, ordered=True)
df['var6'] = pd.Categorical(df['var6'], categories=var6_order, ordered=True)
df.pivot_table(index=['var6','var1','var2'],
columns=['... | python|pandas | 1 |
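A minimal sketch of how `pd.Categorical(..., ordered=True)` imposes a custom, non-alphabetical order (shown here with `sort_values`; the same ordering carries through `pivot_table`):

```python
import pandas as pd

# Minimal frame mirroring the question's var5 grouping column.
df = pd.DataFrame({'var5': ['S1', 'S3', 'S2'], 'val': [1, 2, 3]})
order = ['S3', 'S1', 'S2']  # desired, non-alphabetical order

df['var5'] = pd.Categorical(df['var5'], categories=order, ordered=True)
out = df.sort_values('var5')
print(out['var5'].tolist())  # ['S3', 'S1', 'S2']
```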
749 | 73,059,534 | Combine (merge/join/concat) two dataframes by mask (leave only first matches) in pandas [python] | <p>I have <code>df1</code>:</p>
<pre class="lang-py prettyprint-override"><code> match
0 a
1 a
2 b
</code></pre>
<p>And I have <code>df2</code>:</p>
<pre class="lang-py prettyprint-override"><code> match number
0 a 1
1 b 2
2 a 3
3 a 4
</code></pre>
<p>I want to com... | <p>In your case you can do with <code>groupby</code> and <code>cumcount</code> before <code>merge</code> ,Notice I do not keep two match columns since they are the same</p>
<pre><code>df1['key'] = df1.groupby('match').cumcount()
df2['key'] = df2.groupby('match').cumcount()
out = df1.merge(df2)
Out[418]:
match key ... | python|pandas|join | 1 |
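The `cumcount`-then-`merge` trick from this answer, runnable end-to-end on the question's own small frames:

```python
import pandas as pd

df1 = pd.DataFrame({'match': ['a', 'a', 'b']})
df2 = pd.DataFrame({'match': ['a', 'b', 'a', 'a'], 'number': [1, 2, 3, 4]})

# Number the occurrences of each key on both sides, then merge on (match, key),
# so each left row pairs with the first unused matching right row.
df1['key'] = df1.groupby('match').cumcount()
df2['key'] = df2.groupby('match').cumcount()
out = df1.merge(df2)
print(out['number'].tolist())  # [1, 3, 2]
```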
750 | 72,992,807 | converting the duration column into short, medium, long values | <p>how to Convert the duration column into short, medium, long values. Come up with the boundaries by splitting the duration range in 3 equal size ranges</p>
<pre class="lang-py prettyprint-override"><code>df['duration'].head()
</code></pre>
<... | <p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.cut.html" rel="nofollow noreferrer"><code>pandas.cut</code></a> on the Series of Timedelta after conversion with <a href="https://pandas.pydata.org/docs/reference/api/pandas.to_timedelta.html" rel="nofollow noreferrer"><code>pandas.to_timedelta... | python|pandas|dataframe|binning | 1 |
751 | 73,090,849 | How to transform Pandas df for stacked bargraph | <p>I have the following df resulting from:</p>
<pre><code>plt_df = df.groupby(['Aufnahme_periode','Preisgruppe','Land']).count()
INDEX Aufnahme_periode Preisgruppe Land Anzahl
1344 2021-11-01 1 NL 2
1345 2021-12-01 1 AT 8
1346 2021-12-01 1 BE ... | <p>You need a <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html" rel="nofollow noreferrer"><code>pivot_table</code></a>:</p>
<pre><code>(plt_df
.pivot_table(index='Aufnahme_periode', columns='Preisgruppe',
values='Anzahl', aggfunc='sum')
.plot.bar(stacked=True)
)
</... | python|pandas|matplotlib|bar-chart | 0 |
752 | 70,988,713 | Normalizing rows of pandas DF when there's string columns? | <p>I'm trying to normalize a Pandas DF by row and there's a column which has string values which is causing me a lot of trouble. Anyone have a neat way to make this work?</p>
<p>For example:</p>
<pre><code> system Fluency Terminology No-error Accuracy Locale convention Other
19 hyp.metricsystem2 ... | <p>Use <code>select_dtypes</code> to select numeric only columns:</p>
<pre><code>subset = bad_models.select_dtypes('number')
bad_models[subset.columns] = subset.div(subset.sum(axis=1), axis=0)
print(bad_models)
# Output
system Fluency Terminology No-error Accuracy Locale convention Other
19 h... | python|pandas|dataframe | 0 |
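A small self-contained version of the `select_dtypes` row-normalization from this answer (two numeric columns invented for illustration):

```python
import pandas as pd

# Frame with a string column plus numeric columns, as in the question.
df = pd.DataFrame({'system': ['m1', 'm2'],
                   'Fluency': [2.0, 1.0],
                   'Accuracy': [2.0, 3.0]})

subset = df.select_dtypes('number')                 # numeric columns only
df[subset.columns] = subset.div(subset.sum(axis=1), axis=0)
print(df['Fluency'].tolist())  # [0.5, 0.25]
```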
753 | 51,731,389 | Issue for scraping website data for every webpage automatically and save it in csv by using beautiful Soup, pandas and request | <p>The code can't scrape web data page by page successfully and the csv format doesn't match the web data record. I want to the code enable run all web pages automatically. Right now, it only can run first page data. How it can run second, third page by itself? Secondly, in csv format, 'hospital_name','name','license_t... | <h3>* Python 3 *</h3>
<pre><code>import csv
import requests
from bs4 import BeautifulSoup
FIELDNAMES = (
'first_name',
'last_name',
'license_type',
'location',
'reg_num'
)
def get_page(page_num):
base_url = "https://www.abvma.ca/client/roster/clientRosterView.html"
params = {
'c... | python|pandas|csv|beautifulsoup|python-requests | 0 |
754 | 51,790,793 | Pandas Resample Upsample last date / edge of data | <p>I'm trying to upsample weekly data to daily data, however, I'm having difficulty upsampling the last edge. How can I go about this?</p>
<pre><code>import pandas as pd
import datetime
df = pd.DataFrame({'wk start': ['2018-08-12', '2018-08-12', '2018-08-19'],
'car': [ 'tesla model 3', 'tesla model x', 'tesla mod... | <p>Yes, you are right, last edge data are excluded. Solution is add them to input <code>DataFrame</code> - my solution creates a helper <code>Dataframe</code> using <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop_duplicates.html" rel="nofollow noreferrer"><code>drop_duplicates</code... | python|python-3.x|pandas|datetime|reindex | 2 |
755 | 36,045,510 | Matrix multiplication with iterator dependency - NumPy | <p>Sometime back <a href="https://stackoverflow.com/questions/36042556/numpy-multiplication-anything-more-than-tensordot"><code>this question</code></a> (now deleted but 10K+ rep users can still view it) was posted. It looked interesting to me and I learnt something new there while trying to solve it and I thought that... | <p>I learnt few things along the way trying to find vectorized and faster ways to solve it.</p>
<p>1) First off, there is a dependency of iterators at <code>"for j in range(i)"</code>. From my previous experience, especially with trying to solve such problems on <code>MATLAB</code>, it appeared that such dependency co... | python|arrays|performance|numpy|multiplication | 3 |
756 | 36,015,685 | Python Pandas co-occurrence after groupby | <p>I would like to compute the co-occurrence percentages after grouping. I am unable to determine the best method for doing so. I can think of ways to brute-force the answers, but this means lots of hard-coded calculations that may break as more source data is added. There must be a more elegant method, but I don't s... | <p>I think you want crosstabs:</p>
<p><a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html" rel="nofollow">http://pandas.pydata.org/pandas-docs/stable/generated/pandas.crosstab.html</a></p>
<p>This will give you just the raw frequencies. You can then divide each cell by the total number... | python|pandas|find-occurrences|bigdata | 0 |
757 | 35,943,101 | How to delete every nth row of an if one contains a zero? | <p>I have an array containing data for three different indicators (X-Z) in five different categories (A-E).
Now I want to check every column from the dataset whether there is a 0 in it. In case there is a 0 in a row, I want to delete all indicators of this type.</p>
<p>In my minimum example it should find the zero in ... | <p>I would do it in two passes. It is a lot cleaner, and it might even be faster under some circumstances. Here's an implementation without numpy; feel free to convert it to use <code>array()</code>.</p>
<pre><code>AA =(['0','A','B','C','D','E'],
['X','2','3','3','3','4'],
['Y','3','4','9','7','3'],
['Z... | python|arrays|numpy | 4 |
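The same column-screening idea can be done in one vectorized pass with NumPy; a sketch on an invented numeric array (the answer's two-pass list version handles the mixed string data directly):

```python
import numpy as np

# Toy 2-D array; the goal, as in the question, is to drop every column
# that contains a zero anywhere.
a = np.array([[2, 0, 3],
              [4, 5, 0],
              [6, 7, 8]])

keep = ~(a == 0).any(axis=0)      # True for columns with no zero
out = a[:, keep]
print(out.tolist())  # [[2], [4], [6]]
```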
758 | 37,546,491 | How does a neural network work with correlated image data | <p>I am new to TensorFlow and deep learning. I am trying to create a fully connected neural network for image processing. I am somewhat confused.</p>
<p>We have an image, say 28x28 pixels. This will have 784 inputs to the NN. For non-correlated inputs, this is fine, but image pixels are generally correlated. For ins... | <p>Please research some tutorials on CNN (Convolutional Neural Network); <a href="http://deeplearning.net/tutorial/lenet.html" rel="nofollow">here</a> is a starting point for you. A fully connected layer of a NN surrenders <em>all</em> of the correlation information it might have had with the input. Structurally, it ... | neural-network|tensorflow|deep-learning | 0 |
759 | 41,851,044 | Python Median Filter for 1D numpy array | <p>I have a <code>numpy.array</code> with a dimension <code>dim_array</code>. I'm looking forward to obtain a median filter like <code>scipy.signal.medfilt(data, window_len)</code>. </p>
<p>This in fact doesn't work with <code>numpy.array</code> may be because the dimension is <code>(dim_array, 1)</code> and not <code... | <p>Based on <a href="https://stackoverflow.com/a/40085052/3293881"><code>this post</code></a>, we could create sliding windows to get a <code>2D</code> array of such windows being set as rows in it. These windows would merely be views into the <code>data</code> array, so no memory consumption and thus would be pretty ... | python|arrays|numpy|filtering|median | 9 |
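On NumPy >= 1.20 the sliding-window view the answer describes is available directly as `sliding_window_view`; a minimal sketch:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# View-based windows (no copy), then a vectorized median per window.
a = np.array([3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0])
win = 3
med = np.median(sliding_window_view(a, win), axis=1)
print(med.tolist())  # [3.0, 1.0, 4.0, 5.0, 5.0]
```

Note the output is "valid"-mode (length `len(a) - win + 1`), whereas `scipy.signal.medfilt` zero-pads to keep the input length.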
760 | 41,970,426 | m Smallest values from upper triangular matrix with their indices as a list of tuples | <p>I have a np.ndarray as follows: </p>
<pre><code>[[ inf 1. 3. 2. 1.]
[ inf inf 2. 3. 2.]
[ inf inf inf 5. 4.]
[ inf inf inf inf 1.]
[ inf inf inf inf inf]]
</code></pre>
<p>Is there a way to get the indices and values of the m smallest items in that nd array? So, if I wanted the 4 ... | <p>For an <code>Inf</code> filled array -</p>
<pre><code>r,c = np.unravel_index(a.ravel().argsort()[:4], a.shape)
out = zip(r,c,a[r,c])
</code></pre>
<p>For performance, consider using <code>np.argpartition</code>. So, replace <code>a.ravel().argsort()[:4]</code> with <code>np.argpartition(a.ravel(), range(4))[:4]</c... | python|arrays|pandas|numpy|min | 2 |
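A runnable sketch of the `argpartition`/`unravel_index` combination from this answer, on a smaller inf-filled array (ties among equal values may come out in either order):

```python
import numpy as np

a = np.array([[np.inf, 1., 3., 2.],
              [np.inf, np.inf, 2., 3.],
              [np.inf, np.inf, np.inf, 5.]])

m = 3
flat = np.argpartition(a.ravel(), range(m))[:m]   # indices of the m smallest
r, c = np.unravel_index(flat, a.shape)
out = list(zip(r.tolist(), c.tolist(), a[r, c].tolist()))
print(out)
```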
761 | 37,852,924 | How to imagine convolution/pooling on images with 3 color channels | <p>I am a beginner and i understood the mnist tutorials. Now i want to get something going on the SVHN dataset. In contrast to mnist, it comes with 3 color channels. I am having a hard time visualizing how convolution and pooling works with the additional dimensionality of the color channels.</p>
<p>Has anyone a good ... | <p>This is very simple, the difference only lies in the <strong>first convolution</strong>:</p>
<ul>
<li>in grey images, the input shape is <code>[batch_size, W, H, 1]</code> so your first convolution (let's say 3x3) has a filter of shape <code>[3, 3, 1, 32]</code> if you want to have 32 dimensions after.</li>
<li>in ... | tensorflow|convolution|pooling | 4 |
762 | 47,892,097 | Broadcast rotation matrices multiplication | <p>How to do the line marked with <code># <----</code> in a more direct way?</p>
<p>In the program, each row of <code>x</code> is coordinates of a point, <code>rot_mat[0]</code> and <code>rot_mat[1]</code> are two rotation matrices. The program rotates <code>x</code> by each rotation matrix.</p>
<p>Changing the or... | <p>Use <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.tensordot.html" rel="nofollow noreferrer"><code>np.tensordot</code></a> for multiplication involving such <code>tensors</code> -</p>
<pre><code>np.tensordot(rot_mats, x, axes=((2),(1))).swapaxes(1,2)
</code></pre>
<p>Here's some timings... | python|numpy | 4 |
763 | 58,764,104 | How to upload a new file in a image-recognitionmodel | <p>I am trying to make my first image-recognition model by following a tutorial on the site:
<a href="https://towardsdatascience.com/all-the-steps-to-build-your-first-image-classifier-with-code-cf244b015799" rel="nofollow noreferrer">Tutorial Towardsdatascience.com</a></p>
<p>After building the model you should be abl... | <p>I found the problem. The images X where divided by 255 and the image was not. After I divided image/255 the problem was solved.</p> | python|tensorflow|deep-learning|classification|image-recognition | 0 |
764 | 70,137,431 | How to apply one label to a NumPy dimension for a Keras Neural Network? | <p>I'm currently working on a simple neural network using Keras, and I'm running into a problem with my labels. The network is making a binary choice, and as such, my labels are all 1s and 0s. My data is composed of a 3d NumPy array, basically pixel data from a bunch of images. Its shape is (560, 560, 32086). However s... | <p>The <code>ValueError</code> occurs because the first dimension is supposed to be the number of samples and needs to be the same for <code>x</code> and <code>y</code>. In your example that is not the case. You would need <code>datax</code> to to have shape <code>(32086, 560, 560)</code> and <code>datay</code> should ... | python|numpy|tensorflow|keras|neural-network | 0 |
765 | 56,287,199 | Save the current state of program and resume again from last saved point | <p>I have a script to download images from a link. Suppose the script gets terminated due to some reason then I want to save the point till which the images have been downloaded and resume again from the point last saved</p>
<p>I have made the download script and tried saving the state of the program using pickle till... | <p>There might be multiple solutions to this problem but this comes first in mind it will help you to solve this problem.</p>
<p><strong>Approach :</strong></p>
<p>It's very clear, the script starts to download from starting because it can't remember the index till where it has downloaded the last time.</p>
<p>To so... | python|python-3.x|pandas|python-requests | 1 |
766 | 56,220,609 | How do I clip error bars in a pandas plot? | <p>I have an array of averages and an array of standard deviations in a pandas dataframe that I would like to plot. The averages correspond to timings (in seconds), and cannot be negative. How do I clip the standard errors in the plot to a minimum of zero?</p>
<pre><code>import numpy as np
import pandas as pd
avg_ti... | <p>Try using <code>plt.errorbar</code> and pass in <code>yerr=[y_low, y_high]</code>:</p>
<pre><code>y_errors = time_df[['avg_time', 'std_dev']].min(axis=1)
fig, ax = plt.subplots(figsize=(16,8))
plt.errorbar(x=time_df['x'],
y=time_df['avg_time'],
yerr = [y_errors, time_df['std_d... | python|pandas|matplotlib | 2 |
767 | 56,094,714 | How can I call a custom layer in Keras with the functional API | <p>I have written a tiny implementation of a Keras custom layer where I have literally copied the class definition from <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">https://keras.io/layers/writing-your-own-keras-layers/</a></p>
<p>Yet when I try to call this custom layer a... | <p>I recognize that this is the Keras' example to create new layers, which you can find <a href="https://keras.io/layers/writing-your-own-keras-layers/" rel="nofollow noreferrer">here</a>.</p>
<p>One very important detail is that this is a <code>keras</code> example, but you are using it with <code>tf.keras</code>. I ... | python|tensorflow|machine-learning|keras|deep-learning | 0 |
768 | 55,597,903 | Need to speed up very slow loop for image manipulation on Python | <p>I am currently completing a program in Pyhton (3.6) as per internal requirement. As part of it, I am having to loop through a colour image (3 bytes per pixel, R, G & B) and distort the image pixel by pixel.</p>
<p>I have the same code in other languages (C++, C#), and non-optimized code executes in about two s... | <p>You normally vectorize this kind of thing by making a displacement map. </p>
<p>Make a complex image where each pixel has the value of its own coordinate, apply the usual math operations to compute whatever transform you want, then apply the map to your source image.</p>
<p>For example, in <a href="https://pypi.or... | python|python-3.x|numpy | 1 |
769 | 64,666,455 | Non-OK-status: GpuLaunchKernel(...) Internal: invalid configuration argument | <p>I run my code on tensorflow 2.3.0 Anaconda with CUDA Toolkit 10.1 CUDNN 7.5.0 (Windows 10) and it returns a issue</p>
<pre><code>F .\tensorflow/core/kernels/random_op_gpu.h:246] Non-OK-status: GpuLaunchKernel(FillPhiloxRandomKernelLaunch<Distribution>, num_blocks, block_size, 0, d.stream(), key, counter, gen, ... | <p>Figured things out myself. This comes when you forget to initialize GPU.</p>
<p>Adding the following codes solve the problem</p>
<pre><code>import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in... | tensorflow | 1 |
770 | 64,636,448 | Merging two datasets on a column in common | <p>I am trying to use a dataset which includes some useful information to create a new column including those information, whether they were included.</p>
<p>df1</p>
<pre><code>Info User Year
24 user1 2012
0 user2 2012
12 user3 2010
24.5 user4 2011
24 user5 2012
</code></pre>
<p>... | <p>You should do two things: 1) Specify the minimum columns required (<code>[['Info', 'User']]</code>) and <code>how='left'</code>, so you don't merge another <code>Year</code> column in. You had the dataframes flipped around in your merge:</p>
<pre><code>pd.merge(df2, df1[['Info', 'User']], on=['User'], how='left')
O... | python|pandas | 3 |
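The answer's merge, runnable on small stand-in frames (df2's `Score` column is invented to show that only `Info` is brought in, with no duplicate `Year`):

```python
import pandas as pd

df1 = pd.DataFrame({'Info': [24, 0], 'User': ['user1', 'user2'], 'Year': [2012, 2012]})
df2 = pd.DataFrame({'User': ['user1', 'user2'], 'Score': [5, 7]})

# df2 is the left frame; take only the needed columns from df1.
out = pd.merge(df2, df1[['Info', 'User']], on=['User'], how='left')
print(out.columns.tolist())  # ['User', 'Score', 'Info']
```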
771 | 65,055,727 | Passing file from filedialogue to another function tkinter | <p>I am creating a program that let me visualise my csv file with tkinter.
However, I am not able to read the file I grab with the <code>filedialogue</code>.
I also have tried to pass filename as argument to the <code>file_loader</code> that as well does not work.
I still found myself with a <code>FileNotFoundError: [E... | <p>I see what you were trying to do, but the 'r' is only needed for filenames that you directly enter in the source code (aka "hardcoded" or "string literals"). Here you can use the file path directly from the label.</p>
<pre><code>def file_loader():
try:
csv_file= label_file["text&q... | python|pandas|dataframe|csv|tkinter | 1 |
772 | 39,836,318 | Comparing Arrays for Accuracy | <p>I've a 2 arrays:</p>
<pre><code>np.array(y_pred_list).shape
# returns (5, 47151, 10)
np.array(y_val_lst).shape
# returns (5, 47151, 10)
np.array(y_pred_list)[:, 2, :]
# returns
array([[ 0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[ 0., 0., 0., 0., 0., 0., 0., 0., 0., 1.],
[ 0., 0., ... | <p>You can find a lot of useful classification scores in <code>sklearn.metrics</code>, particularly <code>accuracy_score()</code>. See the <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html" rel="noreferrer">doc here</a>, you would use it as:</p>
<pre><code>import sklearn
acc... | python|arrays|numpy | 6 |
773 | 39,834,999 | Adding row to dataframe in pandas | <p>Suppose I am trying add rows to a dataframe with 40 columns. Ordinarily it would be something like this:</p>
<pre><code>df = pandas.DataFrame(columns = 'row1', 'row2', ... ,'row40'))
df.loc[0] = [value1, value2, ..., value40]
</code></pre>
<p>(don't take the dots literally)</p>
<p>However let's say value1 to val... | <p>I think you need:</p>
<pre><code>df.loc[0] = list1 + [value11, value12, ..., value40]
</code></pre>
<p>Sample:</p>
<pre><code>df = pd.DataFrame(columns = ['row1', 'row2','row3', 'row4','row5', 'row6','row40'])
list1 = ['a','b','c']
df.loc[0] = list1 + ['d','e','f', 'g']
print (df)
row1 row2 row3 row4 row5 row... | python|list|pandas|append|multiple-columns | 2 |
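The list-concatenation assignment from this answer as a complete, shorter sketch (five columns instead of forty):

```python
import pandas as pd

df = pd.DataFrame(columns=['row1', 'row2', 'row3', 'row4', 'row5'])
list1 = ['a', 'b', 'c']            # values already collected in a list
df.loc[0] = list1 + ['d', 'e']     # concatenate with the remaining values
print(df.loc[0].tolist())  # ['a', 'b', 'c', 'd', 'e']
```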
774 | 44,366,171 | Label Encoding of multiple columns without using pandas | <p>Is there a simple way to label encode without using pandas for multiple columns?</p>
<p>Like using only numpy and sklearn's <code>preprocessing.LabelEncoder()</code></p> | <p>One solution would be to loop through the columns, converting them to numeric values, using <code>LabelEncoder</code>:</p>
<pre><code>le = LabelEncoder()
cols_2_encode = [1,3,5]
for col in cols_2_encode:
    X[:, col] = le.fit_transform(X[:, col])
</code></pre> | python|python-3.x|numpy|machine-learning|scikit-learn | 1 |
775 | 44,170,709 | numpy.savez strips leading slash from keys | <p>I'm trying to save a bunch of numpy arrays keyed by the absolute file path that the data came from using savez. However, when I use load to retrieve that data the leading slashes have been removed from the keys.</p>
<pre><code>>>> import numpy as np
>>> data = {}
>>> data['/foo/bar'] = np... | <p>Looks like the stripping is done by the Python <code>zipfile</code> module, possibly on extract rather than on writing:</p>
<p><a href="https://docs.python.org/2/library/zipfile.html" rel="nofollow noreferrer">https://docs.python.org/2/library/zipfile.html</a></p>
<blockquote>
<p>Note If a member filename is an ... | numpy | 1 |
776 | 69,507,636 | ValueError: not enough values to unpack (expected 2, got 1) when trying to access dataset | <p>test_ds is a dataset of shape</p>
<pre><code><PrefetchDataset shapes: ((None, 256, 256, 3), (None,)), types: (tf.float32, tf.int32)>.
</code></pre>
<p>When i try to fetch data using for loop it worked.</p>
<pre><code>for image_batch,label_batch in test_ds.take(1):
</code></pre>
<p>but when i try to fetch using... | <p>A <code>tf.data.Dataset</code> is an iterator. You need to iterate over it in order to access its elements. Try:</p>
<pre><code>image_batch, label_batch = next(iter(test_ds.take(1)))
</code></pre> | python|tensorflow|deep-learning | 1 |
777 | 40,893,602 | How to install NumPy for Python 3.6 | <p>I am using Python 3.6b3 for a long running project, developing on Windows.
For this project I also need NumPy.
I've tried Python36 -m pip install numpy, but it seems that pip is not yet in the beta.
What's the best way to install NumPy for Python 3.6b3?</p>
<p>[EDIT: Added installation log, after using ensurepip]</... | <p>As long as binary packages (so-called 'wheels') for 3.6 have not been released to PyPi yet, you can resort to unofficial (but working) ones available at <a href="http://www.lfd.uci.edu/~gohlke/pythonlibs/" rel="noreferrer">http://www.lfd.uci.edu/~gohlke/pythonlibs/</a>. Download the file and install it like this:</p... | python|python-3.x|numpy|pip | 8 |
778 | 53,882,747 | Python 2.7 - merge two CSV files without headers and with two delimiters in the first file | <p>I have one csv test1.csv (I do not have headers in it!!!). I also have as you can see delimiter with pipe but also with exactly one tab after the eight column.</p>
<pre><code>ug|s|b|city|bg|1|94|ON-05-0216 9.72|28|288
ug|s|b|city|bg|1|94|ON-05-0217 9.72|28|288
</code></pre>
<p>I have second file test2.csv with o... | <p>Use:</p>
<pre><code># Reading files
df1 = pd.read_csv('file1.csv', header=None, sep='|')
df2 = pd.read_csv('file2.csv', header=None, sep='|')
# splitting file on tab and concatenating with rest
ndf = pd.concat([df1.iloc[:,:7], df1[7].str.split('\t', expand=True), df1.iloc[:,8:]], axis=1)
ndf.columns = np.arange(1... | python|pandas|python-2.7|join|inner-join | 1 |
779 | 54,120,265 | Receiving error (NoneType 'to_csv') while attempting to implement script into a GUI | <p>I'm trying to build a small program that combines csv files. I've made a GUI where a user selects directories of the location of the csv files and where they want the final combined csv file to be outputted. I'm using this script to merge csv files at the moment.</p>
<pre><code>from pathlib import Path
import panda... | <p>The problem is that you are using <code>StringVar</code> in the following statements inside <code>runscript()</code>:</p>
<pre><code>path = r'%s' % folder_path
combine = r'%s' % folder_pathcombine
</code></pre>
<p>Therefore no file will be found in the for loop below the above statements and <code>combine_csv</cod... | python|pandas|tkinter | 2 |
780 | 38,314,674 | stack all levels of a MultiIndex | <p>I have a dataframe:</p>
<pre><code>index = pd.MultiIndex.from_product([['a', 'b'], ['A', 'B'], ['One', 'Two']])
df = pd.DataFrame(np.arange(16).reshape(2, 8), columns=index)
df
</code></pre>
<p><a href="https://i.stack.imgur.com/MHPpX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MHPpX.png" al... | <p>You can first find <code>len</code> of levels, get <code>range</code> and pass it to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.stack.html" rel="noreferrer"><code>stack</code></a>:</p>
<pre><code>print (df.columns.nlevels)
3
print (list(range(df.columns.nlevels)))
[0, 1, 2]
pr... | python|pandas|multi-index | 11 |
781 | 38,445,715 | Pandas Seaborn Heatmap Error | <p>I have a DataFrame that looks like this when unstacked.</p>
<pre><code>Start Date 2016-07-11 2016-07-12 2016-07-13
Period
0 1.000000 1.000000 1.0
1 0.684211 0.738095 NaN
2 0.592105 NaN NaN
</code></pre>
<p>I'm trying to plot it in Seaborn... | <p>While exact reproducible data is not available, consider below using posted snippet data. This example runs a <code>pivot_table()</code> to achieve the structure as posted with StartDates across columns. Overall, your heatmap possibly outputs the multiple color bars and overlapping figures due to the <code>unstack()... | python|pandas|matplotlib|heatmap|seaborn | 1 |
782 | 66,133,725 | Error while converting pytorch model to Coreml. Layer has 1 inputs but expects at least 2 | <p>My goal is to convert my Pytorch model to Coreml. I have no problem with doing an inference with pytorch. However, after I trace my model and try to convert it</p>
<pre><code>trace = torch.jit.trace(traceable_model, data)
mlmodel = ct.convert(
trace,
inputs=[ct.TensorType(name="Image&quo... | <p>Often the coremltools converters will ignore parts of the model that they don't understand. This results in a conversion that is apparently successful, but actually misses portions of the model.</p> | python|pytorch|coreml|coremltools | 0 |
783 | 66,108,871 | How to create a pie chart from a text column data only using pandas dataframe directly? | <p>I have a csv file which I read using</p>
<pre><code>pd.read_csv()
</code></pre>
<p>In the file name I have a column with <em>Female</em> or <em>Male</em> and I would like to create a pie visualization <em>Female</em> or <em>Male</em>. This means, my legend will contain the color and type ( <em>Female</em> or <em>Mal... | <pre><code>s = pd.Series(['Female', 'Female', 'Female', 'Male', 'Male', 'Female'])
s.value_counts(normalize=True).plot.pie(autopct='%.1f %%', ylabel='', legend=True)
</code></pre>
<p><a href="https://i.stack.imgur.com/7bQVl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7bQVl.png" alt="enter image d... | pandas|dataframe | 3 |
784 | 66,276,156 | calculate std() in python | <p>i have a dataframe as below</p>
<pre><code>Res_id Mean_per_year
a 10.4
a 12.4
b 4.4
b 4.5
c 17
d 9
</code></pre>
<p>i would like to calculate the std() using panda on "mean_per_year" for the same res_id
i did an aggregate and got</p>
<pre><code>Res_id Mean_per_year_agg
a 10.4,... | <p>You simply want <code>data.groupby('res_id')["Mean_per_year"].std()</code> on your original dataframe (remove the whole <code>aggregate</code> business).</p> | python|pandas|std | 1 |
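The one-line `groupby(...).std()` from this answer, runnable on the question's own data:

```python
import pandas as pd

data = pd.DataFrame({'Res_id': ['a', 'a', 'b', 'b', 'c', 'd'],
                     'Mean_per_year': [10.4, 12.4, 4.4, 4.5, 17, 9]})

out = data.groupby('Res_id')['Mean_per_year'].std()
print(round(out['a'], 4))  # 1.4142  (sample std, ddof=1)
```

Single-member groups like `c` and `d` give `NaN` under the default sample std (`ddof=1`).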
785 | 66,243,395 | Converting values of old columns into new columns, and using old values of paired columns as values in the new columns | <p>I've been tasked with cleaning data from a mobile application designed by a charity</p>
<p>In one section, a users Q/A app use session is represented by a row. This section consists of repeated question answer field pairs, where a field represents the question asked and then the field next to it represents the corre... | <p>This is another, more elegant solution. But again does not deal with object columns</p>
<p>First define the number of question-answer pairs:</p>
<pre><code>num_answers = 2 #Following your 'Starting data' in the question
</code></pre>
<p>Then use the following couple of lines to obtain a dataframe as required:</p>
<p... | python|pandas|format|schema|data-cleaning | 0 |
786 | 66,244,145 | why does same element of an ndarray have different id in numpy? | <p>As the below code shows, 3-dimension ndarray b is the view of one-dimension a.<br />
Per my understanding, b[1,0,3] and a[11] should refer to same object with value 11.<br />
But from the print result, id(a[11]) and id(b[1,0,3]) are different.</p>
<p>Isn't id represent the memory address of an object?<br />
If yes, ... | <p>When you apply <code>reshape</code> it doesn't necessarily store <code>b</code> in the same memory location. Refer to the <a href="https://numpy.org/doc/stable/reference/generated/numpy.reshape.html" rel="nofollow noreferrer">documentation</a>, which says:</p>
<blockquote>
<p>Returns:
reshaped_array : ndarray</p>
</... | python|numpy|multidimensional-array | 1 |
787 | 66,186,839 | Find index of most recent DateTime in Pandas dataframe | <p>I have a dataframe that includes a column of datetimes, past and future. Is there a way to find the index of the most recent datetime?</p>
<p>I can <em>not</em> assume that each datetime is unique, nor that they are in order.</p>
<p>In the event that the most recent datetime is not unique, all the relevant indeces s... | <p><strong>unique datetimes...</strong></p>
<p>a convenient option would be to use <a href="https://pandas.pydata.org/docs/reference/api/pandas.Index.get_loc.html" rel="nofollow noreferrer"><code>get_loc</code></a> method of <a href="https://pandas.pydata.org/docs/reference/api/pandas.DatetimeIndex.html" rel="nofollow ... | python|pandas|datetime | 3 |
788 | 52,452,491 | Stop Crawling urls if class does not exist in table beautifulsoup and pandas | <p>i'm using a list of urls in a csv file to crawl and extract data from a html table. i want to stop going through the urls when 'style3' is not present in the table.
I've created a function that will return false if it's not there, but i'm confused as to how to actually implement it. </p>
<p>Any suggestions for a... | <p>You can do this with a single for-loop and break (there's no need for the <code>while more</code>):</p>
<pre><code>lst = []
with open('WV_urls.csv','r') as csvf: # Open file in read mode
urls = csv.reader(csvf)
for url in urls:
page = urlopen(url[0]).read()
df1, header = pd.read_html(page, h... | python|pandas|dataframe|beautifulsoup | 1 |
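A minimal, self-contained sketch of the stop-on-missing-class logic (the `has_style3` helper and the inline HTML pages are made up for illustration; the real code would fetch each URL and call `pd.read_html` on it):

```python
def has_style3(html):
    # Hypothetical check: is the 'style3' class present in the markup?
    return 'class="style3"' in html

pages = [
    '<table class="style3"><tr><td>1</td></tr></table>',
    '<table class="style3"><tr><td>2</td></tr></table>',
    '<table class="plain"><tr><td>3</td></tr></table>',
    '<table class="style3"><tr><td>4</td></tr></table>',
]

collected = []
for html in pages:
    if not has_style3(html):
        break          # stop crawling at the first page without 'style3'
    collected.append(html)

print(len(collected))  # 2
```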
789 | 52,730,645 | Keras: Callbacks Requiring Validation Split? | <p>I'm working on a multi-class classification problem with Keras 2.1.3 and a Tensorflow backend. I have two numpy arrays, <code>x</code> and <code>y</code> and I'm using <code>tf.data.Dataset</code> like this:</p>
<pre><code>dataset = tf.data.Dataset.from_tensor_slices(({"sequence": x}, y))
dataset = dataset.apply(tf... | <p>Using the tensorflow keras API, you can provide a <code>Dataset</code> for training and another for validation.</p>
<p>First some imports</p>
<pre><code>import tensorflow as tf
from tensorflow import keras
from tensorflow.keras.layers import Dense
import numpy as np
</code></pre>
<p>define the function which will... | python|tensorflow|keras | 2 |
790 | 46,421,521 | Passing operators as functions to use with Pandas data frames | <p>I am selecting data from a series on the basis of a threshold. </p>
<pre><code>>>> s = pd.Series(np.random.randn(5))
>>> s
0 -0.308855
1 -0.031073
2 0.872700
3 -0.547615
4 0.633501
dtype: float64
>>> cfg = {'threshold' : 0 , 'op' : 'less' }
>>> ops = {'less' : '<', 'more... | <p>I'm all about @cᴏʟᴅsᴘᴇᴇᴅ's answer and @Zero's linked Q&A...<br>
But here is an alternative with <code>numexpr</code> </p>
<pre><code>import numexpr as ne
s[ne.evaluate('s {} {}'.format(ops[cfg['op']], cfg['threshold']))]
0 -0.308855
1 -0.031073
3 -0.547615
Name: A, dtype: float64
</code></pre>
<hr>
<... | python|pandas|conditional|series|dynamic-execution | 4 |
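Besides `numexpr`, the same dynamic comparison can be done without building evaluation strings at all, by mapping the config names to functions from the standard `operator` module (series values below are made up):

```python
import operator
import pandas as pd

s = pd.Series([-0.308855, -0.031073, 0.872700, -0.547615, 0.633501])

cfg = {"threshold": 0, "op": "less"}
ops = {"less": operator.lt, "more": operator.gt}

# ops[...] is a plain function, so no eval/numexpr string building needed
mask = ops[cfg["op"]](s, cfg["threshold"])
print(s[mask].index.tolist())  # [0, 1, 3]
```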
791 | 58,546,864 | How to use dropna to drop columns on a subset of columns in Pandas | <p>I want to use Pandas' <code>dropna</code> function on <code>axis=1</code> to drop columns, but only on a subset of columns with some <code>thresh</code> set. More specifically, I want to pass an argument on which columns to ignore in the <code>dropna</code> operation. How can I do this? Below is an example of what I... | <pre><code># Desired subset of columns against which to apply `dropna`.
cols = ['building', 'date', 'rate1', 'rate2']
# Apply `dropna` and see which columns remain.
filtered_cols = df.loc[:, cols].dropna(axis=1, thresh=3).columns
# Use a conditional list comprehension to determine which columns were dropped.
dropped_... | python|pandas | 4 |
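A runnable sketch of that approach with made-up data (`thresh=3` keeps only columns having at least three non-NA values, and only the chosen subset is inspected):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "building": ["A", "B", "C", "D"],
    "date": ["2020-01", "2020-02", None, "2020-04"],
    "rate1": [1.0, np.nan, np.nan, np.nan],
    "rate2": [1.0, 2.0, 3.0, np.nan],
    "other": [np.nan] * 4,          # not in the subset, so never dropped
})

cols = ["building", "date", "rate1", "rate2"]
kept = df[cols].dropna(axis=1, thresh=3).columns
dropped = [c for c in cols if c not in kept]
print(dropped)  # ['rate1']
```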
792 | 58,331,506 | How to create a column that groups all values from column into list that fall between values of different columns in pandas | <p>I have two data frames that look like this:</p>
<pre><code>unit start stop
A 0.0 8.15
B 9.18 11.98
A 13.07 13.80
B 13.82 15.00
A 16.46 17.58
</code></pre>
<p>df_2</p>
<pre><code>time other_data
1 5
2 5
3 6
4 10
5 5
... | <p>Use the following:</p>
<pre><code>df1['other'] = df1.apply(lambda row : df2['other_data'].loc[(df2['time'] > row['start']) & (df2['time'] < row['stop'])].tolist(), axis=1)
</code></pre>
<p>Output is, using your sample datafames:</p>
<pre><code> unit start stop other
0 A 0.0... | python|pandas | 1 |
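The answer's one-liner runs as-is on a small reconstruction of the two frames:

```python
import pandas as pd

df1 = pd.DataFrame({
    "unit": ["A", "B"],
    "start": [0.0, 9.18],
    "stop": [8.15, 11.98],
})
df2 = pd.DataFrame({
    "time": [1, 2, 3, 4, 5, 10],
    "other_data": [5, 5, 6, 10, 5, 7],
})

# For each df1 row, collect the df2 values whose time falls in (start, stop)
df1["other"] = df1.apply(
    lambda row: df2.loc[
        (df2["time"] > row["start"]) & (df2["time"] < row["stop"]),
        "other_data",
    ].tolist(),
    axis=1,
)
print(df1["other"].tolist())  # [[5, 5, 6, 10, 5], [7]]
```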
793 | 58,513,256 | Calculating a daily average in Python (One day has multiple values for one variable) | <p>I have one CSV file that has several variables as a daily time series, but there are multiple values for one day. I need to calculate daily averages of temperatures from these multiple values for the entire period.</p>
<p>CSV file is stored here: <a href="https://drive.google.com/file/d/1zbojEilckwg5rzNfWtHVF-wu1f8... | <p>Here is the solution, thanks to <a href="https://stackoverflow.com/questions/24082784/pandas-dataframe-groupby-datetime-month">pandas dataframe groupby datetime month</a>.
Here I used "D" instead of "M". </p>
<pre><code>import pandas as pd
inpcsvFile = 'C:/.../daily average - one day has multiple values.csv'
df =... | python|pandas|datetime | 0 |
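The group-by-day idea can be sketched with inline data standing in for the linked CSV (column names are made up):

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime([
        "2021-01-01 06:00", "2021-01-01 18:00", "2021-01-02 12:00"
    ]),
    "tempC": [10.0, 14.0, 20.0],
})

# freq="D" groups by calendar day, analogous to "M" for months
daily = df.groupby(pd.Grouper(key="timestamp", freq="D"))["tempC"].mean()
print(daily.tolist())  # [12.0, 20.0]
```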
794 | 58,230,879 | Get the last value in multiple columns Pandas | <p>I have a dataset that I want to split columns and get only the row with the last non-empty string (from multiple columns). My initial table looks like this.</p>
<p><a href="https://i.stack.imgur.com/PXWqx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PXWqx.png" alt="Initial dataframe"></a></p>
... | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a> by default arbitrary whitespace, so no parameter and then select last value of lists by slicing <code>[-1]</code>:</p>
<pre><code>df['last'] = df['name'].s... | python|pandas|split | 1 |
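A minimal demonstration of the split-then-`[-1]` trick (sample names are made up):

```python
import pandas as pd

df = pd.DataFrame({"name": ["John Smith", "Mary Jane Watson", "Cher"]})
# split() with no arguments splits on arbitrary whitespace;
# .str[-1] takes the last element of each resulting list
df["last"] = df["name"].str.split().str[-1]
print(df["last"].tolist())  # ['Smith', 'Watson', 'Cher']
```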
795 | 44,723,716 | plot rectangular wave python | <p>I have a dataframe that looks like this:</p>
<p>df:</p>
<pre><code> Start End Value
0 0 98999 0
1 99000 101999 1
2 102000 155999 0
3 156000 161999 1
4 162000 179999 0
</code></pre>
<p>I would like to plot a rectangular wave that goes from "Start" to "End" and that... | <p>Maybe you can use <a href="https://matplotlib.org/devdocs/api/_as_gen/matplotlib.pyplot.step.html" rel="nofollow noreferrer"><code>plt.step</code></a> to plot a stepwise function:</p>
<pre><code>import pandas as pd
from matplotlib import pyplot as plt
df = pd.DataFrame({'End': [98999, 101999, 155999, 161999, 17999... | python|pandas|dataframe | 1 |
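One way to see what `plt.step` needs is to unroll the Start/End/Value rows into edge coordinates first (pure-pandas sketch; the resulting `x` and `y` would then be passed to `plt.step(x, y, where='post')`):

```python
import pandas as pd

df = pd.DataFrame({
    "Start": [0, 99000, 102000, 156000, 162000],
    "End": [98999, 101999, 155999, 161999, 179999],
    "Value": [0, 1, 0, 1, 0],
})

# One step edge per segment start, plus a final point at the last End
x = df["Start"].tolist() + [df["End"].iloc[-1]]
y = df["Value"].tolist() + [df["Value"].iloc[-1]]
print(x, y)
```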
796 | 44,637,952 | How can I convert pandas date time xticks to readable format? | <p>I am plotting a time series with a datetime index. The plot needs to be a particular size for the journal format. Consequently, the ticks are not readable since they span many years.</p>
<p>Here is a data sample</p>
<pre><code>2013-02-10 0.7714492098202259
2013-02-11 0.7709101833765016
2013-02-12 0.77049113... | <p>It seems there is some incompatibility between pandas and matplotlib formatters/locators when it comes to dates. See e.g. those questions:</p>
<ul>
<li><p><a href="https://stackoverflow.com/questions/42880333/pandas-plot-modify-major-and-minor-xticks-for-dates">Pandas plot - modify major and minor xticks for dates<... | pandas|matplotlib | 1 |
797 | 60,808,630 | How to remove specific character from a string in pyspark? | <p>I am trying to remove a specific character from a string but am not able to find a proper solution.
Could you please help me with this?</p>
<p>I am loading the data into a dataframe using pyspark. One of the columns has an extra character which I want to remove.</p>
<p>Example:</p>
<pre><code>|"\""warfarin was ... | <p>You can use some other escape character instead of '\'; you can change this to anything else. If you have the option to save the file in another format, prefer parquet (or ORC) over CSV.</p> | python|pandas|dataframe|pyspark | 0 |
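In PySpark the usual tool for this is `F.regexp_replace`; the same regex can be checked quickly with the pandas equivalent (the pattern and sample strings are assumptions based on the truncated example):

```python
import pandas as pd

df = pd.DataFrame({"text": ['"\\""warfarin was prescribed', "plain text"]})

# Strip backslashes and double quotes; in PySpark this would be
# F.regexp_replace(F.col("text"), r'[\\"]', '')
df["clean"] = df["text"].str.replace(r'[\\"]', "", regex=True)
print(df["clean"].tolist())  # ['warfarin was prescribed', 'plain text']
```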
798 | 60,996,336 | how distinguish equivalent graphs in networkx | <p>I have a question regarding graph equivalency.</p>
<p>Suppose that:</p>
<pre><code>import networkx as nx
import numpy as np
def is_isomorphic(graph1, graph2):
G1 = nx.from_numpy_matrix(graph1)
G2 = nx.from_numpy_matrix(graph2)
isomorphic = nx.is_isomorphic(G1,G2, edge_match=lambda x, y: x==y)
retu... | <p>The following works for your given examples (and hopefully does generally the thing you want):</p>
<pre class="lang-py prettyprint-override"><code>import networkx as nx
import numpy as np
def is_isomorphic(graph1, graph2):
G1 = nx.from_numpy_matrix(graph1)
G2 = nx.from_numpy_matrix(graph2)
# remove s... | python|numpy|graph|networkx | 0 |
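The helper from the question, updated for current networkx (where `from_numpy_matrix` was removed in favour of `from_numpy_array` in 3.0) and shown on a trivial pair of adjacency matrices:

```python
import networkx as nx
import numpy as np

def is_isomorphic(m1, m2):
    # from_numpy_array replaces the removed from_numpy_matrix (networkx >= 3.0)
    g1 = nx.from_numpy_array(m1)
    g2 = nx.from_numpy_array(m2)
    # edge_match compares the edge-attribute dicts (here: the weights)
    return nx.is_isomorphic(g1, g2, edge_match=lambda x, y: x == y)

a = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path 0-1-2
b = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]])  # same path, relabelled
c = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # triangle

print(is_isomorphic(a, b), is_isomorphic(a, c))  # True False
```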
799 | 71,658,818 | Import multiple CSV files into pandas and merge those based on column values | <p>I have 4 dataframes:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df_inventory_parts = pd.read_csv('inventory_parts.csv')
df_colors = pd.read_csv('colors.csv')
df_part_categories = pd.read_csv('part_categories.csv')
df_parts = pd.read_csv('parts.csv')
</code></pre>
<p>Now I have merged the... | <p>You have a pretty clear section of code that does exactly what you want. You want to do three merges so using merge() three times is adequate rather than excessive.</p>
<p>You can make your code a bit shorter by using the fact that DataFrames have a merge method, so you don't need the left argument. You can also chain t... | python|pandas|dataframe|csv | 0 |
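A compact sketch of the three chained merges, with tiny made-up frames standing in for the CSV files:

```python
import pandas as pd

inventory_parts = pd.DataFrame({"part_num": ["p1", "p2"], "color_id": [1, 2]})
colors = pd.DataFrame({"color_id": [1, 2], "color": ["red", "blue"]})
parts = pd.DataFrame({"part_num": ["p1", "p2"], "part_cat_id": [10, 10]})
part_categories = pd.DataFrame({"part_cat_id": [10], "category": ["brick"]})

# Chaining .merge avoids repeating pd.merge(left, right, ...) each time
merged = (
    inventory_parts
    .merge(colors, on="color_id")
    .merge(parts, on="part_num")
    .merge(part_categories, on="part_cat_id")
)
print(merged.shape)  # (2, 5)
```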