| Unnamed: 0 (int64, 0–378k) | id (int64, 49.9k–73.8M) | title (string, 15–150 chars) | question (string, 37–64.2k chars) | answer (string, 37–44.1k chars) | tags (string, 5–106 chars) | score (int64, −10–5.87k) |
|---|---|---|---|---|---|---|
378,200 | 62,281,292 | How to convert only one axis when constructing a dataframe from a JSON string? | <p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.read_json.html" rel="nofollow noreferrer">read_json</a> function has an argument <code>convert_axes</code>.</p>
<p>The problem is that for my data the column labels MUST NOT be converted (i.e. keep them as strings), but the index MUST be converted.</p... | <p>Judging by the <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_json.html" rel="nofollow noreferrer">documentation</a> and the <a href="https://github.com/pandas-dev/pandas/blob/89c5a5941c4819e00a04d3f3722f9f5c9cf046f0/pandas/io/json/_json.py" rel="nofollow noreferrer">source code</a>,... | python|json|pandas | 2 |
378,201 | 62,318,260 | How can I count words based on the column? | <p><img src="https://i.stack.imgur.com/ZsTRy.png" alt="enter image description here"></p>
<p>Hello. I stuck here.
Could you tell me how can I count words based on the tags on the second column?</p>
<p>I want to find mostly used words using .most_common() using the categorize: most 10 in VB(Verb), 10 in Noun. </p> | <p>To spell out what Ari Cooper-Davis suggested:</p>
<pre><code>pos.loc[pos.tag == 'VBN'].word.value_counts()
pos.loc[pos.tag == 'TO'].word.value_counts()
etc.
</code></pre> | python|pandas|dataframe|nltk|part-of-speech | 1 |
378,202 | 62,201,977 | Removing rows that does not start with/contain specific words | <p>I have the following output</p>
<pre><code>Age
'1 year old',
'14 years old',
'music store',
'7 years old ',
'16 years old ',
</code></pre>
<p>created after using this line of code</p>
<pre><code>df['Age']=df['Age'].str.split('.', expand=True,n=0)[0]
df['Age'].tolist()
</code></pre>
<p>I would like to remove r... | <p>Use, <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>Series.str.contains</code></a> and create a boolean mask to filter the dataframe:</p>
<pre><code>m = df['Age'].str.contains(r'(?i)^\d+\syears?\sold')
df1 = df[m]
</code></pre>
<... | python|regex|pandas|dataframe | 1 |
378,203 | 62,345,852 | how to calculate a running total in a pandas dataframe | <p>I have a data frame that contains precipitation data that looks like this</p>
<pre><code>Date Time, Raw Measurement, Site ID, Previous Raw Measurement, Raw - Previous
2020-05-06 14:15:00,12.56,8085,12.56,0.0
2020-05-06 14:30:00,12.56,8085,12.56,0.0
2020-05-06 14:45:00,12.56,8085,12.56,0.0
2020-05-06 15:00:00,2.48,8... | <p><code>np.where</code> will also do the job.</p>
<pre><code>import pandas as pd, numpy as np
df['Total Accumulation'] = np.where((df['Raw - Previous'] > 0), df['Raw - Previous'], 0).cumsum() + df.iloc[0,3]
df
</code></pre>
<p>Output:</p>
<pre><code> Date Time Raw Measurement Site ID Previous Raw Measuremen... | python|pandas | 1 |
378,204 | 62,225,230 | Consistent ColumnTransformer for intersecting lists of columns | <p>I want to use <code>sklearn.compose.ColumnTransformer</code> consistently (not parallel, so, the second transformer should be executed only after the first) for intersecting lists of columns in this way:</p>
<pre><code>log_transformer = p.FunctionTransformer(lambda x: np.log(x))
df = pd.DataFrame({'a': [1,2, np.NaN... | <p>The intended usage of <code>ColumnTransformer</code> is that the different transformers are applied in parallel, not sequentially. To accomplish your desired outcome, three approaches come to mind:</p>
<p><strong>First approach:</strong></p>
<pre class="lang-py prettyprint-override"><code>pipe_a = Pipeline(steps=[(... | python|pandas|scikit-learn|scipy|sklearn-pandas | 5 |
378,205 | 62,341,182 | FutureWarning: elementwise comparison failed; when dropping all rows from pandas dataframe | <p>I want to drop those rows in a dataframe that have value '0' in the column 'candidate'. Some of my dataframes only have value '0' in this column. I expected that in this case I will get an empty dataframe, but instead I get the following warning and the unchanged dataframe. How can I get an empty dataframe in this c... | <p>Thanks everyone for your suggestions!</p>
<p>Indeed, the value type is <code>int</code>, but only if 0 is the only value in the column. Where other values are present, the type is <code>object</code>.</p>
<p>So I solved the problem by using:</p>
<p><code>df = df.loc[(df["candidate"] != "0") & (df["candidate"]... | python|pandas|dataframe | 1 |
378,206 | 62,305,744 | Numpy split array without copying | <p>I have a very large array of images (multiple GBs) and want to split it using numpy. This is my code:</p>
<pre><code>images = ... # this is the very large array which contains a lot of images.
images.shape => (50000, 256, 256)
indices = ... # array containing ranges, that group the images array like [(0, 300), ... | <p>Without a running code it's difficult to understand the details. But I can try to give some ideas. If you have <code>images_train</code> and <code>images_test</code> then you will probabely use them to train and to test with a command that is something like</p>
<pre><code>.fit(images_train);
.score(images_test)
</c... | python|numpy|training-data|train-test-split | 1 |
378,207 | 62,184,437 | Reshaping a numpy vector | <p>I am really new to numpy. I have a numpy vector that when I run <code>y.shape</code> returns <code>(4000,)</code>. Is there a way, I can have it return <code>(4000, 1)</code>?</p> | <pre><code>np.reshape(y,(4000,1))
</code></pre>
<p>Reshape function can be used to do this</p> | numpy|vector | 0 |
378,208 | 62,088,979 | Easiest way to print the head of a data in python? | <p>I'm not defining my array with pandas, I'm using numpy to do it and I would like to know if there is any other way to print the first 5 rows of a data. Using pandas this is how I would do it: print(data.head()).</p>
<p>This is how i defined my data:</p>
<pre><code>with open('B0_25.txt', 'r') as simulation_data:
si... | <p>You need the transpose of mydata, otherwise x, y, z, mx, my, mz are the rows rather than the columns.</p>
<pre><code>mydata = np.array([x, y, z, mx, my, mz]).T
print(mydata[:5, :])
</code></pre> | python|pandas|head | 1 |
378,209 | 62,440,732 | print array as a matrix by having all elements in the right columns | <p>I am trying to print my dataframe as a matrix. To do so, I want to use an array. To be clear:</p>
<p>I have a dictionary, Y, which is like this:</p>
<pre><code>{(0, 0): {(0, 0): 0, (1, 0): 1, (0, 1): 1, (0, 2): 2, (0, 3): 3, (1, 3): 4, (0, 4): 10, (1, 4): 9, (0, 5): 11, (1, 1): 2, (1, 2): 5, (2, 2): 6, (2, 4): 8, ... | <p>If you want to print line-by-line and still have things aligned you can do the following:</p>
<pre><code>>>> for l in str(df.to_numpy()).split("\n"):
... print(l)
...
[[ 0 1 1 2 3 4 10 9 11 2 5 6 8 10 10 9 7 8 7 7 9 9 10 8]
[ 1 2 0 1 2 3 9 8 10 3 4 5 7 9 9 8 6 7 6 ... | python|arrays|pandas|numpy|output | 2 |
378,210 | 62,446,010 | Keras Creating CNN Model "The added layer must be an instance of class Layer" | <pre><code>from tensorflow.keras.layers import Conv2D, MaxPooling2D, BatchNormalization
from tensorflow.keras.layers import Dropout, Flatten, Input, Dense
def create_model():
def add_conv_block(model, num_filters):
model.add(Conv2D(num_filters, 3, activation='relu', padding='same'))
model.add(Bat... | <p>The solution is to use <code>InputLayer</code> instead of <code>Input</code>. <code>InputLayer</code> is meant to be used with <code>Sequential</code> models. You can also omit the <code>InputLayer</code> entirely and specify <code>input_shape</code> in the first layer of the sequential model.</p>
<p><code>Input</c... | python|tensorflow|keras|conv-neural-network | 2 |
378,211 | 51,293,345 | Unique data for each day using Python/Pandas Dataframe | <p>I'm trying to process each day's data using pandas. Below is my code, data and current output. However, function getUniqueDates() has to traverse full df to get the unique dates in the list as shown below. Is there any simple and efficient way to get each day's data which can be passed to function processDataForEach... | <p>You can access a NumPy array of unique dates with:</p>
<pre><code>>>> df.date.dt.date.unique()
array([datetime.date(2014, 5, 1), datetime.date(2014, 5, 2),
datetime.date(2014, 5, 3), datetime.date(2014, 5, 4)], dtype=object)
</code></pre>
<p><code>dt</code> is an <em>accessor method</em> of the pan... | python|pandas|dataframe | 1 |
378,212 | 51,195,017 | tf.keras.backend way of replacing a tensors value if it's less than 1 | <p>I am using Keras with the Tensorflow backend.</p>
<p>In my loss function I have a tensor where I need to replace the elements that are less than 1 with a 1.</p>
<p>I can see loads of functions available to me in the docs
<a href="https://www.tensorflow.org/api_docs/python/tf/keras/backend" rel="nofollow noreferrer... | <p>if <code>a</code> is your tensor you can do the following:</p>
<p><code>b = a*tf.cast(a>1, 'float32') + tf.cast(a<=1, 'float32')</code></p> | python|tensorflow|machine-learning|keras|tensor | 1 |
378,213 | 51,358,307 | compare the next row value and change the current row value using pandas python | <p>any way of comparing a row value with the next row value and change the current row value using pandas?</p>
<p>Basically in the the first Data frame DF1, in the value column, one of the value is '999', so the values of the next rows for that 'user-id' is less than the value '999'. so in this case i want to add '100... | <p>I have 2 methods for this:</p>
<p>This method we multiply by the max value of each user-id - it works on the sample dataset you porivded but it might not work overal.</p>
<pre><code>df.set_index('user-id', inplace=True)
df['value'] += df.groupby('user-id')['value'].apply(
lambda x:(x.shift() > x).astype(int).cu... | python|pandas|pandas-groupby | 0 |
378,214 | 51,495,927 | How to visualize a matrix of categories as an RGB image? | <p>I am using neural network to do semantic segmentation(human parsing), something like taking a photo of people as input and the neural network tells that every pixel is most likely to be head, leg, background or some other parts of human. The algorithm runs smoothly and giving a <code>numpy.ndarray</code> as output .... | <p>It's fairly easy. All you need to have is a <a href="https://en.wikipedia.org/wiki/Lookup_table" rel="nofollow noreferrer">lookup table</a> mapping the 23 labels into unique colors. The easiest way is to have a 23-by-3 numpy array with each row storing the RGB values for the corresponding label:</p>
<pre><code>impo... | python|numpy|visualization|image-segmentation|semantic-segmentation | 2 |
378,215 | 51,163,941 | In Pandas, how to make a PivotTable for counting and skip replicates? | <p>In Python3 and pandas I have a dataframe like this:</p>
<pre><code>IdComissao SiglaComissao NomeMembro
12444 CCJR Abelardo Camarinha
12444 CCJR Abelardo Camarinha
12448 CAD Abelardo Camarinha
12448 CAD Abelardo Camarinh... | <p>I think you're looking for <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.nunique.html" rel="nofollow noreferrer"><code>nunique</code></... | python|pandas|pivot-table | 2 |
378,216 | 51,236,215 | How to reset GPU on keras / tensorflow hang? | <p>Sometimes I have to kill my python application which use GPU with Keras or Tensorflow and after that I can't run them anymore. This is probably because GPU is still used by something. </p>
<p>How to free GPU by force without machine reboot?</p>
<hr>
<p>I tried the following shell script</p>
<pre><code>$ cat ~/bi... | <p>try this: </p>
<pre><code>keras.backend.clear_session()
</code></pre> | python|tensorflow|keras|gpu | 0 |
378,217 | 51,250,413 | Python List append with respective index | <p>I need help on list append. I have to export it into CSV with the respective list index.</p>
<pre><code>lst1 = ['a', 'b', 'c']
lst2 = ['w', 'f', 'g']
lst3 = ['e', 'r', 't']
ap = []
ap.append((lst1, lst2, lst3))
output: [(['a', 'b', 'c'], ['w', 'f', 'g'], ['e', 'r', 't'])]
</code></pre>
<p>Expected output:</p>
... | <p>You need a list of tuples, not a list of a tuple of lists. For your result, you can use <code>zip</code> with unpacking to extract items in an iterable of lists by index.</p>
<pre><code>df = pd.DataFrame(list(zip(*(lst1, lst2, lst3))),
columns=['col1', 'col2', 'col3'])
print(df)
col1 col2 col3... | python|python-3.x|pandas|dataframe | 3 |
378,218 | 51,135,928 | Perform operations on last iteration values using iterrows | <p>I have two datasets.</p>
<p>df</p>
<pre><code>Name Date Quantity
ZMTD 2018-06-30 1000
ZMTD 2018-05-31 975
ZMTD 2018-04-30 920
ZMTD 2018-03-30 900
ZMTD 2018-02-28 840
ZMTD 2018-01-31 820
ZMTD 2017-12-30 760
ZMTD 2017-11-31 600
ZMTD 2017-10-30 ... | <p>This will give you what you seek:</p>
<pre><code>df = df1.merge(df2, on='Name', how='left', suffixes=('', '2'))
df['Factor'] = ((df['Date'] < df['Date2']).astype(int) * df['Factor']).replace(0, 1)
df = df.groupby(['Name', 'Date']).agg({'Quantity': 'max', 'Factor': 'prod'}).reset_index()
df['Quantity'] = df['Q... | python|pandas|loops|iteration | 2 |
378,219 | 51,546,293 | Seaborn and Pandas: Make multiple x-category bar plot using multi index data in python | <p>I have a multi-index dataframe that I've melted to look something like this:</p>
<pre><code>Color Frequency variable value
Red 2-3 times a month x 22
Red A few days a week x 45
Red At least once a day x 344
Red Never x ... | <p>This is what you need to find the portion of each number for that group:</p>
<pre><code>df['proportion'] = df['value'] / df.groupby(['Color','variable'])['value'].transform('sum')
</code></pre>
<p>Output:</p>
<pre><code> variable Frequency Color value portion
0 x 2-3 times a month ... | python|pandas|dataframe|seaborn | 2 |
378,220 | 51,203,054 | Can't Import Tensor Flow in Anaconda 3.6 on Windows 10 | <p>I just installed CUDA 92 CUDANN and Tensor Flow on my Windows 10 laptop. </p>
<p>I am unable to import tensor flow in Python. I get a trace from Python that says: </p>
<blockquote>
<p>can't load a dll</p>
</blockquote>
<p>But it doesn't say which one it is. Here is a directory listing the trace I received. C... | <p>Mostly in windows its caused by MSVCP140.dll missing if you install</p>
<p><a href="https://www.microsoft.com/en-us/download/details.aspx?id=53587" rel="nofollow noreferrer">Microsoft Visual C++</a></p>
<p>if that doesn't help, following dependencies are also there for tensotorflow:</p>
<p>KERNEL32.dll</p>
<p>WS... | python|tensorflow | 0 |
378,221 | 51,217,584 | Semi-Interactive Pandas Dataframe in a GUI | <p>There are a number of excellent answers to this question <a href="https://stackoverflow.com/questions/10636024/python-pandas-gui-for-viewing-a-dataframe-or-matrix">GUIs for displaying dataframes</a>, but what I'm looking to do is a bit more advanced.</p>
<p>I'd like to display a dataframe, but have a couple of the ... | <p>I love Rstudio as my IDE as I can not only view all objects created but I can also edit data in the IDE itself. There are many other great features too.
And you can use R Studio for Python coding too (using reticulate package).</p>
<p>Spyder too gives this feature of viewing or editing the data frame.</p>
<p>However... | python|pandas|user-interface|pyqt|interactive | 1 |
378,222 | 51,500,281 | how to get a hidden layer of tensorflow hub module | <p>I want to use tensorflow hub to generate features for my images, but it seems that the 2048 features of Inception Module are not enough for my problem because my class images are very similar. so I decided to use the features of a hidden layer of this module, for example: </p>
<blockquote>
<p>"module/InceptionV3/... | <p>Please try</p>
<pre><code>module = hub.Module(...) # As before.
outputs = module(dict(images=images),
signature="image_feature_vector",
as_dict=True)
print(outputs.items())
</code></pre>
<p>Besides the <code>default</code> output with the final feature vector output, you should s... | python|tensorflow|tensorflow-hub | 1 |
378,223 | 51,291,804 | Keras: different validation AUROC during training and on epoch end | <p>I'm getting different AUROC depending on when I calculate it. My code is </p>
<pre><code> def auc_roc(y_true, y_pred):
# any tensorflow metric
value, update_op = tf.metrics.auc(y_true, y_pred)
return update_op
model.compile(loss='binary_crossentropy', optimizer=optim, metrics=['accuracy', auc_roc])... | <p>During training, the metrics are calculated "per batch".
And they keep updating for each new batch in some sort of "mean" between the current batch metrics and the previous results. </p>
<p>Now, your callback calculates on the "entire data", and only at the end. There will be normal differences between the two m... | python|tensorflow|keras | 2 |
378,224 | 51,371,835 | Replace None with NaN and ignore NoneType in Pandas | <p>I'm attempting to create a raw string variable from a pandas dataframe, which will eventually be written to a <em>.cfg</em> file, by firstly joining two columns together as shown below and avoiding <code>None</code>:</p>
<p>Section of <code>df</code>: </p>
<pre><code> command ... | <p>First of all, <code>df['value'].replace('None', np.nan, inplace=True)</code> returns <code>None</code> because you're calling the method with the <code>inplace=True</code> argument. This argument tells <code>replace</code> to not return anything but instead modify the original <code>dataframe</code> as it is. Simila... | python|pandas|numpy|dataframe | 2 |
378,225 | 51,499,376 | How to display GroupBy Count as Bokeh vbar for categorical data | <p>I have a small issue creating a Bokeh <strong>vbar</strong> in 0.13.0
from a dataframe <code>groupby</code> <code>count</code> operation. The response <a href="https://stackoverflow.com/questions/46343429/how-use-bokeh-vbar-chart-parameter-with-groupby-object">here</a> was for a multi level group by where as mine is... | <p>The fact that it is multi-level in the other question is not really relevant. When you use a Pandas <code>GroupBy</code> as a data source for Bokeh, Bokeh uses the results of <code>group.describe</code> (which includes counts for each column per group) as the contents of the data source. Here is a complete example t... | bokeh|pandas-groupby | 1 |
378,226 | 51,297,668 | Defining a default argument after with None: what if it's an array? | <p>I'm passing an argument to a function such that I want to delay giving the default parameter, in the usual way:</p>
<pre><code>def f(x = None):
if x == None:
x = ...
</code></pre>
<p>The only problem is that <code>x</code> is likely to be a numpy array. Then <code>x == None</code> returns a boolean arr... | <p>When comparing against <strong><code>None</code></strong>, it is a good practice to use <strong><code>is</code></strong> as opposed to <code>==</code>. Usually it doesn't make a difference, but since objects are free to implement equality any way they see fit, it is not always a reliable option.</p>
<p>Unfortunate... | python|numpy|parameters|arguments | 7 |
378,227 | 51,353,928 | Extract string if match the value in another list | <p>I want to get the value of the lookup list instead of a boolean. I have tried the following codes:</p>
<pre><code>val = pd.DataFrame(['An apple','a Banana','a cat','a dog'])
lookup = ['banana','dog']
# I tried the follow code:
val.iloc[:,0].str.lower().str.contains('|'.join(lookup))
# it returns:
0 False
1 T... | <p>You can use <strong><code>extract</code></strong> instead of <strong><code>contains</code></strong>, and <code>fillna</code> with <code>False</code>:</p>
<pre><code>import re
p = rf'\b({"|".join(lookup)})\b'
val[0].str.extract(p, expand=False, flags=re.I).fillna(False)
0
0 False
1 banana
2 False
3 ... | python|pandas | 10 |
378,228 | 51,148,914 | pandas multiindex set_labels | <p>I have a pandas multiindex like this one</p>
<pre><code>result.index
MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35], [1, 6, 12, 17, 18, 21, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, ... | <p>If want use <code>set_label</code> need same types, here integers (it seems bug):</p>
<pre><code>#test if working with integers
mux1 = mux.set_labels((np.array(new_label) * 100).astype(int), level=1)
print (mux1)
MultiIndex(levels=[[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, ... | pandas|label|multi-index | 2 |
378,229 | 51,246,827 | Renaming columns in a Dataframe given that column contains data in a loop | <p><strong>Scenario:</strong> I have a list of dataframes. I am trying to rename the columns and change their order, but the column names do not exactly match, for example: a column might be "iterationlist" or "iteration".</p>
<p>I tried a loop inside a loop to read all the columns and if the name contains what I need... | <p>I think need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a> for get columns names by substring and then reorder columns by subset with join both lists:</p>
<pre><code>contentdataframes = []
for f in all_files:
... | python|pandas|dataframe | 2 |
378,230 | 51,304,610 | Pandas: shifting columns depending on if NaN or not | <p>I have a dataframe like so:</p>
<pre><code>phone_number_1_clean phone_number_2_clean phone_number_3_clean
NaN NaN 8546987
8316589 8751369 NaN
4569874 NaN 26459... | <p>Use:</p>
<pre><code>#for each row remove NaNs and create new Series - rows in final df
df1 = df.apply(lambda x: pd.Series(x.dropna().values), axis=1)
#if possible different number of columns like original df is necessary reindex
df1 = df1.reindex(columns=range(len(df.columns)))
#assign original columns names
df1.c... | python|pandas | 6 |
378,231 | 51,454,967 | The truth value of a Series is ambiguous Pandas | <p>What's the problem with this code? I used many comparison lambda function on the dataframe,but this one returns <code>ValueError: ('The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().', u'occurred at index 2')</code> error.</p>
<p>I searched about it and found many questio... | <p>The Problem with your code is that you pass the whole column of the dataframe to your function:</p>
<pre><code>df.apply(lambda y:Return(close=df['Close'], pClose=df['pClose']),axis=1)
</code></pre>
<p>In the function you are calculating a new value i which is in fact a column:</p>
<pre><code>i = ((close - pClose)... | python|pandas | 3 |
378,232 | 51,292,318 | TFrecords occupy more space than original JPEG images | <p>I'm trying to convert my Jpeg image set into to TFrecords. But TFrecord file is taking almost 5x more space than the image set. After a lot of googling, I learned that when JPEG are written into TFrecords, they aren't JPEG anymore. However I haven't come across an understandable code solution to this problem. Please... | <p>Instead of converting image to array and back to bytes, we can just use inbuilt <code>open</code> function to get the bytes. That way, compressed image will be written into TFRecord. </p>
<p>Replace these two lines</p>
<pre><code>img = imread(path)
img_bytes = img.tostring()
</code></pre>
<p>with </p>
<pre><code... | tensorflow|tfrecord | 6 |
378,233 | 51,254,282 | API download data, recommendations? | <p>I am trying to decode data from an API, I just cannot think of a clean way to extract the value and time values. I been trying to do string manipulations, but ends up very complex. </p>
<pre><code>{"max_scale": "0", "min_scale": "0", "graph_label": "Light Level", "average": "1", "length_of_time": "3600", "upper_war... | <p>This is in JSON format. Use the python <a href="https://docs.python.org/3/library/json.html" rel="nofollow noreferrer">json</a> encoder/decoder to load this data. It will turn it into a dictionary, and something like</p>
<pre><code>my_json_dict['values']
</code></pre>
<p>will return you that list.</p> | python|database|string|pandas|extract | 0 |
378,234 | 51,541,386 | Pandas - min and max of a column up until each line | <p>I have a dataframe like this:</p>
<pre><code>pd.DataFrame({'group': {0: 1, 1: 1, 2: 1, 3: 1, 4: 2, 5: 2, 6: 2}, 'year': {0: 2007, 1: 2008, 2: 2009, 3: 2010, 4: 2006, 5: 2007, 6: 2008}, 'amount': {0: 2.0, 1: -4.0, 2: 5, 3: 7.0, 4: 8.0, 5: -10.0, 6: 12.0}}])
group year amount
0 1 2007 2
1 1 ... | <p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cummax.html" rel="nofollow noreferrer"><code>DataFrameGroupBy.cummax</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cummin.html" rel="nofollow n... | python|pandas | 2 |
378,235 | 51,314,650 | Pandas groupby function returns NaN values | <p>I have a list of people with fields unique_id, sex, born_at (birthday) and I’m trying to group by sex and age bins, and count the rows in each segment.</p>
<p>Can’t figure out why I keep getting NaN or 0 as the output for each segment. </p>
<p>Here’s the latest approach I've taken...</p>
<p>Data sample:</p>
<pre... | <p>I think you missed the calculation of the current age. The ranges you define for splitting the bithday years only make sense when you use them for calculating the current age (or all grouped cells will be nan or zero respectively because the lowest value in your sample is 1963 and the right-most maximum is 65). So f... | python|pandas|pandas-groupby | 1 |
378,236 | 51,419,237 | TensorFlow FailedPreconditionError: iterator has not been initialized | <p>I want to display the values of tensors.</p>
<p>Here is my code:</p>
<pre><code>#some code here
data = [data_tensor for data_tensor in data_dict.items()]
for i in data:
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print (sess.run(i[1]))
print('_'*100)
</code></... | <p>It looks like you have a dataset iterator that has not been initialized. A dataset iterator is not a variable, hence does not get initialized with <code>tf.global_variables_intializer()</code>. </p>
<p>You have to initialize it explicitly by calling <code>sess.run(iterator.initializer)</code> on whatever dataset it... | python|tensorflow | 5 |
378,237 | 51,375,255 | Bar plot from dataframe | <p>I have a data frame that looks something like this. </p>
<pre><code>print (df)
a b
0 1 5896
1 1 4000
2 1 89647
3 2 54
4 2 3568
5 2 48761
6 3 5896
7 3 2800
8 3 5894
</code></pre>
<p><a href="https://i.stack.imgur.com/sazhb.png" rel="nofollow noreferrer"><img src="https://i.stack.i... | <p>I think need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>cumcount</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</co... | python|pandas|plot | 3 |
378,238 | 51,409,861 | Perform a 'join' on two numpy arrays | <p>I have two numpy array's that look like the following:</p>
<pre><code>a = np.array([[1, 10], [2, 12], [3, 5]])
b = np.array([[1, 0.78], [3, 0.23]])
</code></pre>
<p>The first number in the list is the id parameter, and the second one is a value. I'm looking to combine them. The expected output to be equal to this:... | <p>You are using the the first element like a <code>key</code> of a dictionary or an index of a Pandas series. So I used those tools which are better suited for the combination you are looking to do. I then convert back to the array you are looking for.</p>
<pre><code>import pandas as pd
import numpy as np
a = np.a... | python|python-3.x|pandas|numpy|data-manipulation | 2 |
378,239 | 51,485,042 | set_printoptions for numpy array doesn't work for numpy ndarray? | <p>I'm trying to use <code>set_printoptions</code> from the answer to the question <a href="https://stackoverflow.com/questions/2891790/how-to-pretty-printing-a-numpy-array-without-scientific-notation-and-with-given">How to pretty-printing a numpy.array without scientific notation and with given precision?</a></p>
<p>... | <p>Try <code>numpy.array2string</code> which takes <code>ndarray</code> as input and you can set precision.</p>
<p>Scroll down in below documentation link for examples.</p>
<p><a href="https://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.array2string.html" rel="nofollow noreferrer">https://docs.scipy.org... | python|arrays|numpy | 2 |
378,240 | 51,529,463 | Debug Pytorch Optimizer | <p>When I run <code>optimizer.step</code> on my code, I get this error</p>
<p>RuntimeError: sqrt not implemented for 'torch.LongTensor'</p>
<pre><code>C:\Program Files\Anaconda3\lib\site-packages\IPython\core\magic.py in <lambda>(f, *a, **k)
186 # but it's overkill for just that one bit of state.
18... | <p>The error comes from here:</p>
<pre><code>weight = torch.tensor([10])
bias = torch.tensor([-5])
self.w = nn.Parameter(weight)
self.b = nn.Parameter(bias)
</code></pre>
<p>Had to change it to</p>
<pre><code>weight = torch.tensor([10.0])
bias = torch.tensor([-5.0])
self.w = nn.Parameter(weight)
self.b = nn.Paramete... | pytorch | 1 |
378,241 | 51,246,823 | Create a line graph per bin in Python 3 | <p>I have a dataframe called 'games':</p>
<pre><code>Game_id Goals P_value
1 2 0.4
2 3 0.321
45 0 0.64
</code></pre>
<p>I need to split the P value to 0.05 steps, bin the rows per P value and than create a line graph that shows the sum per p value.</p>
<p>What I currently have... | <p>If I understood you correctly, you want to have discrete steps in the p-value of width 0.05 and show the cumulative sum?</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# create some random example data
df = pd.DataFrame({
'goals': np.random.poisson(3, size=1000),
'p_... | python-3.x|pandas|plot | 0 |
378,242 | 51,457,942 | Regression plot is wrong (python) | <p>So my program reads MPG vs weight relationship and draws a graph of what it is suppose to look like but as you can see the graph is not looking right. </p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#read txt file
dataframe= pd.read_table('auto_data71.txt',delim_whitespace=Tr... | <p>You need to <strong>sort</strong> the values before plotting.</p>
<p>DATA: <a href="https://files.fm/u/2g5dxyb4" rel="nofollow noreferrer">https://files.fm/u/2g5dxyb4</a></p>
<p><strong>Use this</strong>:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocess... | python|pandas|scikit-learn | 0 |
378,243 | 51,403,468 | Keras delayed data augmentation | <p>I am trying to apply a custom image augmentation technique in Keras. I am using fit_generator and a generator to yield images. I would like to start applying the image augmentation only after say 20 epochs (So the first 20 epochs would not have any data augmentation). Unfortunately the generator does not have a noti... | <p>The easiest way to do this is train for 20 epochs with no realtime augmentation (use the Keras ImageDataGenerator with no args) and save your models using a ModelCheckpoint callback. Then reload the model and continue training with RA (use an ImageDataGenerator with the transforms of your choice).</p>
<p>If you wan... | tensorflow|keras | 0 |
378,244 | 51,466,808 | How can I multiply column of the int numpy array to the float digit and stays in int? | <p>I have a numpy array:</p>
<pre><code> >>> b
array([[ 2, 2],
[ 6, 4],
[10, 6]])
</code></pre>
<p>I want to multiply first column by float number, and as result I need int number, because when I doing:</p>
<pre><code>>>> b[:,0] *= 2.1
</code></pre>
<p>It says:</p>
<pre><code>... | <p>@Umang Gupta gave a solution to your problem. I was curious myself as to why this worked, so I'm posting what I found as additional context. FWIW this question has already been asked and answered <a href="https://stackoverflow.com/questions/38673531/multiply-numpy-int-and-float-arrays">here</a>, but that answer al... | python|numpy | 7 |
378,245 | 51,292,212 | Keras model params are all "NaN"s after reloading | <p>I use transfer learning with Resnet50. I create a new model out of the pretrained model provided by Keras (the 'imagenet').</p>
<p>After training my new model, I save it as following:</p>
<pre><code># Save the Siamese Network architecture
siamese_model_json = siamese_network.to_json()
with open("saved_model/siames... | <p>The solution is inspired from @Gurmeet Singh's recommendation above.</p>
<p>Seemingly, weights of trainable layers have become so big after a while during the training and all such weights are set to NaN, which made me think that I was saving and reloading my models in the wrong way but the problem was exploding gr... | python|tensorflow|machine-learning|keras|transfer-learning | 0 |