Unnamed: 0 int64 0 378k | id int64 49.9k 73.8M | title stringlengths 15 150 | question stringlengths 37 64.2k | answer stringlengths 37 44.1k | tags stringlengths 5 106 | score int64 -10 5.87k |
|---|---|---|---|---|---|---|
377,600 | 59,335,245 | How to output an image with a CNN? | <p>I'm trying to do depth estimation with CNNs (this is my ultimate goal), but a problem that I found is: I have only done image classification with CNNs, using for example "CIFAR-10", "MNIST", "Cats vs Dogs", etc. To do depth estimation I need to output a new image (the NYUv2 dataset has the labeled images). So, I'll input... | <p>I suggest you use a type of <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">UNet</a>. This kind of architecture has downsampling layers, followed by upsampling layers to get back to the original spatial dimensions.</p> | python|tensorflow|keras|deep-learning|conv-neural-network | 0 |
377,601 | 59,107,376 | finding the frequency distribution of values in a column | <p>I have df (8360 x 3 columns)</p>
<pre><code> Time A B
0 01.01.2018 00:00:00 0.019098 32.437083
1 01.01.2018 01:00:00 0.018871 32.462083
2 01.01.2018 02:00:00 0.018643 32.487083
3 01.01.2018 03:00:00 0.018416 32.512083
4 01.01.2018 04:00:00 0.018189 32.537083
5 ... | <p>To get an idea about the distribution of values, just plot a histogram. For example in a Jupyter notebook:</p>
<pre><code>%matplotlib inline
df.B.hist()
</code></pre>
<p>or compute cumulative frequency histogram with scipy</p>
<pre><code>import scipy.stats
scipy.stats.cumfreq(df.B)
</code></pre> | python-3.x|pandas|dataframe | 0 |
377,602 | 59,468,830 | string manipulation with python pandas and replacement function | <p>I'm trying to write code that checks the sentences in a csv file, searches for the words that are given in a second csv file, and replaces them. My code is as below; it doesn't return any errors, but for some reason it is not replacing any words and prints back the same sentences without any replacement.</p>
<... | <p>Try:</p>
<pre><code>text=pd.read_csv("sentences.csv")
change=pd.read_csv("replace.csv")
toupdate = dict(zip(change.word, change.replacement))
text = text['sentences'].replace(toupdate, regex=True)
print(text)
</code></pre> | python|string|pandas|replace|nltk | 3 |
377,603 | 59,149,967 | Python plotting graph from csv problems | <p><a href="https://1drv.ms/u/s!AtXcEqW2iQpCjGnWFi8RveymnQwD?e=03kIMt" rel="nofollow noreferrer">csv file</a></p>
<p>Hello, so I have this csv file that I want to convert to a graph; what I want is pretty much to graph the number of jobs in each region by city. I have the columns for both cities and countries in thi... | <p>As Abhinav Kinagi already answered, <code>pandas</code> assumes that your values are separated by commas. You can either change your csv-file or simply pass <code>sep='|'</code> in <code>pd.read_csv</code>. Your code should be</p>
<pre class="lang-py prettyprint-override"><code>%matplotlib inline
import pandas as pd
... | python|pandas|csv|matplotlib|graph | 0 |
377,604 | 59,251,168 | Different kind of Apostrophe in python string comparison | <p>I'm trying to do a string search operation using Python and it's not working because I have three different kinds of apostrophes in my text <a href="https://i.stack.imgur.com/SI84O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SI84O.png" alt="images of apostrophes"></a>. I imported my data from ... | <p>If you replace all unusual forms of the apostrophe before doing anything else you avoid running into any problems:</p>
<p><code>df = df.replace("`|’", "'", regex=True)</code></p> | python|string|pandas | 2 |
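A minimal sketch of that normalisation with plain `re` (the sample string is invented; it mixes the backtick, the typographic apostrophe, and the ASCII one):

```python
import re

# Invented sample mixing three apostrophe variants: ` (backtick), ’ (U+2019), ' (ASCII)
text = "It`s John’s, or John's?"

# Collapse every unusual apostrophe to the plain ASCII one first
normalized = re.sub(r"[`’]", "'", text)
print(normalized)
```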
377,605 | 59,136,292 | Pandas working with 2D array inside dataframe | <p>I have a Pandas DataFrame containing a 2D array as a column looking something like the following:</p>
<pre><code>Name 2DValueList
item 1 [ [ 0.0, 1.0 ], [ 0.0, 6.0 ], [ 0.0, 2.0 ] ]
item 2 [ [ 0.0, 2.0 ], [ 0.0, 1.0 ], [ 0.0, 1.0 ] ]
item 3 [ [ 0.0, 1.0 ], [ 0.0, 3.0 ], [ 0.0, 5.0 ], [ 0.0, 1.0 ] ]
item 4
it... | <p>If every cell of <code>2DValueList</code> is list of lists, the efficient way is using <code>heapq.nlargest</code> with <code>itemgetter</code> together with list comprehension</p>
<pre><code>from heapq import nlargest
from operator import itemgetter
df['new_list'] = [nlargest(2, x, key=itemgetter(1)) for x in df[... | python|arrays|pandas|data-structures | 1 |
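The truncated snippet above can be completed as a small self-contained sketch (cell values invented to match the question's shape):

```python
from heapq import nlargest
from operator import itemgetter

# One cell of the 2DValueList column: a list of [x, y] pairs
cell = [[0.0, 1.0], [0.0, 6.0], [0.0, 2.0]]

# Keep the two pairs with the largest second element, as the answer suggests
top2 = nlargest(2, cell, key=itemgetter(1))
print(top2)  # [[0.0, 6.0], [0.0, 2.0]]
```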
377,606 | 59,171,129 | What is the best way to find approximate unordered arrays in python? | <p>Can you help to detect all approximate unordered arrays? For example, I have array (a) like this:</p>
<pre><code>a = np.array([[(1.000, 2.000, 1.000), (1.000, 3.000, 2.000), (4.000, 3.000, 1.000)],
[(1.000, 3.000, 2.000), (4.000, 3.000, 1.000), (1.001, 2.000, 1.000)],
[(4.000, 2.999, 1.001), (1.000, 2.000, 1.000), ... | <p>There are two elements of your task: rounding, and taking the uniques. </p>
<p>If you want to have two numbers be considered "the same" if they differ by less than .01, that's rather difficult. Among other things, this is nontransitive; A can be "close" to B and B "close" to C, without A being "close" to C. In mat... | python|python-3.x|numpy | 1 |
377,607 | 59,161,124 | Divide each element by 2 and it should ignore "String" values | <p>Divide each element by 2 and it should ignore "String" values. End results should be in Pandas Data frame only</p>
<pre><code>df=pd.DataFrame({'a':[3,6,9], 'b':[2,4,6], 'c':[1,2,3]})
print(df)
</code></pre> | <p>You can do:</p>
<p>I added a column with string values for demonstration</p>
<pre><code>df=pd.DataFrame({'a':[3,6,9], 'b':[2,4,6], 'c':[1,2,3], 'd':['a', 'b', 'c']})
for i in list(df.keys()):
try:
df[i] = df[i]/2
    except TypeError:
        pass  # leave non-numeric columns as they are
print(df)
</code></pre>
<p>This gives:</p>
... | python|pandas | 0 |
377,608 | 59,414,122 | is there an idiomatic panda way to get indices from 2 lists that represent a start and stop signal | <p>I have lists like this:</p>
<pre><code>index A B
0 false true
1 false false
2 true false
3 false false
4 false false
5 false false
6 true false
7 false false
8 false true
9 false false
10 false false
11 ... | <pre><code>a = [False, True, False, True]
b = [True, False, False, True]
if True in a:
aNum = a.index(True)
bNum = b[aNum:].index(True) + aNum if True in b[aNum:] else None
else:
aNum = None
bNum = None
print((aNum, bNum))
</code></pre>
<h3>Output</h3>
<pre><code>(1, 3)
</code></pre> | python|pandas | 1 |
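Wrapping the answer's logic in a function makes the edge cases explicit (a sketch; `None` marks a missing start or stop signal):

```python
def first_start_stop(a, b):
    """Index of the first True in a, then the first True in b at or after it."""
    if True not in a:
        return (None, None)
    a_idx = a.index(True)
    tail = b[a_idx:]
    b_idx = tail.index(True) + a_idx if True in tail else None
    return (a_idx, b_idx)

print(first_start_stop([False, True, False, True],
                       [True, False, False, True]))  # (1, 3)
```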
377,609 | 59,182,550 | Some array indexing in numpy | <pre><code> lookup = np.array([60, 40, 50, 60, 90])
</code></pre>
<p>The values in the following arrays are equal to indices of lookup.</p>
<pre><code> a = np.array([1, 2, 0, 4, 3, 2, 4, 2, 0])
b = np.array([0, 1, 2, 3, 3, 4, 1, 2, 1])
c = np.array([4, 2, 1, 4, 4, 0, 4, 4, 2])
array 1st colum... | <p>Do you want this:</p>
<pre><code>d = np.vstack([a,b,c])
# option 1
rows = lookup[d].argmax(0)
d[rows, np.arange(d.shape[1])]
# option 2
(lookup[:,None] == lookup[d].max(0)).argmax(0)
</code></pre>
<p>Output:</p>
<pre><code>array([4, 2, 0, 4, 4, 4, 4, 4, 0])
</code></pre> | numpy | 1 |
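The answer's option 1, assembled into a runnable snippet (same arrays as the question):

```python
import numpy as np

lookup = np.array([60, 40, 50, 60, 90])
a = np.array([1, 2, 0, 4, 3, 2, 4, 2, 0])
b = np.array([0, 1, 2, 3, 3, 4, 1, 2, 1])
c = np.array([4, 2, 1, 4, 4, 0, 4, 4, 2])

d = np.vstack([a, b, c])
# For each column, find which of a/b/c points at the largest lookup value...
rows = lookup[d].argmax(0)
# ...and take that element column by column
result = d[rows, np.arange(d.shape[1])]
print(result)  # [4 2 0 4 4 4 4 4 0]
```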
377,610 | 59,376,225 | Random number- same number on every run different number for each row | <p>I want the following function to return a different number for each row in a data frame, but the same number every time the function runs.</p>
<p>thanks.</p>
<pre><code>def inc14(p):
if p==1:
return random.randint(1,2000)
elif p==2:
return random.randint(2001,3000)
elif p==3:
        return random.randint(300... | <p><strong>Defined ranges don't matter:</strong></p>
<p>Here is a running example for when the defined ranges don't matter; if they matter, see below:</p>
<pre><code>import random
import pandas as pd
random.seed(42) # Seed is here to always produce the same numbers
data = {'Name':['Tom', 'nick', 'krish', 'jack'], 'Age':[... | python|pandas | 1 |
377,611 | 59,136,086 | Tensorflow 'NoneType' object has no attribute 'shape' | <pre><code>import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import os
import cv2
from tqdm import tqdm
DATADIR ="C:/Users/Park/Project/TrainingData"
CATEGORIES = ["doll", "machine", "puzzle"]
for category in CATEGORIES:
path = os.path.join(DATADIR, category)
for img in os.listdir(pat... | <p>To surface the problem quickly, add</p>
<pre><code>if img_array is None:
print("imread failed on {}".format(img))
</code></pre> | python|tensorflow|operating-system|cv2 | 0 |
377,612 | 59,196,765 | ParseError: Error tokenizing data. C error: Expected 50 fields in line 224599, saw 51 | <p>I'm trying to <code>pd.concat</code> multiple .xlsx files into a master CSV and then combine this CSV with past CPU data which is also in CSV format. </p>
<p>The first operation is a success (op 3 out of 8), however on the second pass (history + current data in CSV format - op 7 out of 8) I'm getting the ParseError a... | <p>Check the separators in your csv file; maybe there are extra commas inside cells. <code>read_csv</code> takes <code>sep=','</code> by default.
Probably you should set a different separator when opening your csv file:
<code>pd.read_csv(sep=' ')</code></p> | pandas|concat | 1 |
377,613 | 59,118,533 | numpy multiple boolean index arrays | <p>I have an array which I want to use boolean indexing on, with multiple index arrays, each producing a different array. Example:</p>
<pre><code>w = np.array([1,2,3])
b = np.array([[False, True, True], [True, False, False]])
</code></pre>
<p>Should return something along the lines of:</p>
<pre><code>[[2,3], [1]]
</... | <p>Assuming you want a list of numpy arrays you can simply use a comprehension:</p>
<pre><code>w = np.array([1,2,3])
b = np.array([[False, True, True], [True, False, False]])
[w[mask] for mask in b]
# [array([2, 3]), array([1])]
</code></pre>
<p>If your goal is just a sum of the masked values you use:</p>
<pre><cod... | python|numpy|indexing|boolean|mask | 1 |
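Both variants from the answer in one runnable sketch (renaming the loop variable so it does not shadow the built-in `bool`):

```python
import numpy as np

w = np.array([1, 2, 3])
b = np.array([[False, True, True], [True, False, False]])

# One masked array per row of b; rows may select different numbers of elements
selected = [w[mask] for mask in b]
print(selected)  # [array([2, 3]), array([1])]

# If only the per-row sums are needed, broadcasting avoids the Python loop
sums = (w * b).sum(axis=1)
print(sums)  # [5 1]
```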
377,614 | 59,259,628 | unable to write multi-index dataframe to excel | <p>I want to write a multi-index dataframe to excel:</p>
<pre><code>col = [['info', '', 'key'], ['alert', 'date', 'price'], ['alert', 'date', 'amount']]
df = pd.DataFrame(columns = pd.MultiIndex.from_tuples(col))
df.loc[0, :] = np.random.random(3)
df.to_excel('data.xlsx', index = False)
</code></pre>
<p>However, an e... | <p>After searching the web, I used <code>pywin32</code> to solve the problem.</p>
<pre><code>import win32com.client as win32
df.to_excel('data.xlsx', index = True)
excel = win32.gencache.EnsureDispatch('Excel.Application')
excel.DisplayAlerts = False
wb = excel.Workbooks.Open('data.xlsx')
excel.Visible = True
ws = wb.... | pandas | 1 |
377,615 | 59,046,447 | Invalid Argument error while using keras model API inside an estimator model_fn | <p>The <code>model_fn</code> for custom estimator which I have built is as shown below,</p>
<pre class="lang-py prettyprint-override"><code>def _model_fn(features, labels, mode):
"""
Mask RCNN Model function
"""
self.keras_model = self.build_graph(mode, config)
outputs = self.keras_mod... | <p>This problem occurs only with this particular model (Mask-RCNN). To overcome this problem slight modifications can be made in method <code>self.build_graph(mode, config)</code> as follows:</p>
<pre class="lang-py prettyprint-override"><code>def build_graph(mode, config):
# For Input placeholder definition
a... | python|tensorflow|keras|tensorflow-estimator | 0 |
377,616 | 59,349,347 | How can I extract values from a dataframe or filter based on some criteria in Python? | <p>I have a data frame with extracted values from some files. How can I filter or extract the first two rows of data after the value u in col 1. The col 1 value will have a range of 80 that I want to capture after the value u. The value u might be two or three files after a new filex in col 0 or none at all as shown be... | <p>If same pattern of data (same groups of file columns and same ends of values columns) for pairs of columns is possible test if last value is numeric and then <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.head.html" rel="nofollow noreferrer"><code>GroupBy.head</code></... | python|pandas|dataframe|filter | 0 |
377,617 | 59,322,523 | Predict outcome based on user input with Neural Network | <p>I have wrote a code for simple <code>neural network</code> in <code>Python</code>. Neural network uses <code>Sigmoid</code> function to predict outcome (0 or 1).
My question is, how can I predict outcome based on my own input?</p>
<p>For example, I want to make prediction for these input values:</p>
<pre><code>inp... | <p>Your prediction is by the same method as during training:</p>
<pre><code>my_output = sigmoid(np.dot(my_input, weights))
</code></pre>
<p>If you try using as input the first three examples of your training you will find correct outputs :</p>
<pre><code>my_input = [0.3,-0.1,0.1]
prediction: [1.]
my_input = [0.5,.3,... | python|numpy|machine-learning|neural-network | 1 |
377,618 | 59,042,750 | Use of logical_and with list of lists | <p>Suppose I have the following two lists:</p>
<pre><code>l = [[], [1]]
m = [0, 1]
</code></pre>
<p>If I check to see if elements are in a list:</p>
<pre><code>>>> np.array(m[1]) == 1
True
>>> 1 in np.array(l)[1]
True
</code></pre>
<p>This works as expected.</p>
<p>However, if I use the numpy <co... | <p>So just analyzing your code</p>
<pre><code>np.array(m) == 1
>>> [False True]
1 in np.array(l)
>>> False
</code></pre>
<p>You're basically comparing <strong>False</strong> with <strong>[False True]</strong></p>
<p><strong>Update</strong></p>
<p>If you want the output to be [False True] then yo... | python|numpy|boolean | 0 |
377,619 | 14,271,121 | Assume zero for subsequent dimensions when slicing an array | <p>I have need to slice an array where I would like zero to be assumed for every dimension except the first.</p>
<p>Given an array:</p>
<pre><code>x = numpy.zeros((3,3,3))
</code></pre>
<p>I would like the following behavior, but without needing to know the number of dimensions before hand:</p>
<pre><code>y = a[:,0... | <p>You could create a indexing tuple, like this:</p>
<pre><code>x = arange(3*3*3).reshape(3,3,3)
s = (slice(None),) + (0,)*(x.ndim-1)
print x[s] # array([ 0, 9, 18])
print x[:,0,0] # array([ 0, 9, 18])
</code></pre>
<p>I guess you could also do:</p>
<pre><code>x.transpose().flat[:3]
</code></pre>
<p>but I pref... | python|numpy | 3 |
377,620 | 13,876,441 | Workaround for bug with displaying quiver key and patch in matplotlib 1.2.0 | <p>Hej,</p>
<p>I'm using the latest version (1.2.0) of matplotlib distributed with macports. I run into an AssertionError (I guess stemming from internal test) running this code</p>
<pre><code>#!/usr/bin/env python
import numpy as np
import matplotlib.pyplot as plt
X,Y = np.meshgrid(np.arange(0, 2*np.pi, .2), np.ar... | <p>You can save as eps or svg and convert to pdf. I found that the best way to produce small pdf files is to save as eps in matplotlib and then use epstopdf.</p>
<p>svg also works fine, you can use Inkscape to convert to pdf. A side-effect of svg is that the text is converted to paths (no embedded fonts), which might ... | python|numpy|matplotlib | 2 |
377,621 | 13,854,632 | Which scipy.optimize.minimize is least sensitive to starting location? | <p>I'm trying to minimize a function using one of the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize" rel="nofollow">scipy minimizers</a>. Unfortunately my function has plateaus of equal value so minimisers get stuck there. I was wondering which of the ... | <p>Add a linear function of the coordinates to your function to give some nonzero, but very small slope to the flat areas. If your minimum/maximum is in a flat area, you need to decide which part of the flat area to choose as your final answer, so you might as well bias the whole search. After this arrives at a minim... | optimization|numpy|scipy | 1 |
377,622 | 14,126,201 | Performance/standard using 1d vs 2d vectors in numpy | <p>Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB which represents vectors as 2d arrays.</p> | <p>In my experience, 1D is the norm in numpy for vectors. The only good reason to keep a vector of <code>n</code> elements as a 2D array of shape <code>(1, n)</code> or <code>(n, 1)</code> is in a linear algebra context, where you wanted to keep row and column vectors differentiated. As EitanT hinted on his now deleted... | python|matlab|numpy|linear-algebra | 4 |
377,623 | 44,871,420 | TensorFlow dynamic_rnn input for regression | <p>I'm stuck trying to convert an existing tensorflow sequence to sequence classifier to a regressor.</p>
<p>Currently I'm stuck in handling the input for <code>tf.nn.dynamic_rnn()</code>. According to the documentation and other answers, input should be in the shape of <code>(batch_size, sequence_length, input_size)<... | <p>If you are trying to do continuous why can't you just reshape your input placeholders to be of shape <code>[BATCH, TIME_STEPS, 1]</code> and add that one extra dimension into your input via <code>tf.expand_dims(input, 2)</code>. This way, your input would match the dimensions that <code>dynamic_rnn</code> expects (a... | python|tensorflow|regression | 1 |
377,624 | 44,872,795 | Convert to True/False values of Pandas Dataframe | <p>I have a rather big dataframe that looks a bit like this:</p>
<pre><code> | obj1 | obj2 | obj3 |
|------------------------
0 | attr1 | attr2 | attr1 |
1 | attr2 | attr3 | NaN |
2 | attr3 | attrN | NaN |
</code></pre>
<p>I'm new(ish) to pandas but I can't figure out a way to make it look like this:</p>
<... | <p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a>:</p>
<pre><code>df = ... | python|pandas|dataframe | 6 |
377,625 | 44,964,006 | apply normalisation on groupBy data in Pandas DataFrame | <p>DataFrame columns: </p>
<pre><code>['PercentSalaryHike', 'Attrition', 'EmployeeCountFraction']
</code></pre>
<p>After Grouping by first two columns:
EmployeeCount shows the <strong><em>fraction of people</em></strong> whose attrition is <strong><em>'yes'</em></strong> and rest <strong><em>'No'</em></strong> for th... | <p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> for reshape data, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>add_pr... | python|pandas|dataframe|data-analysis | 1 |
377,626 | 44,873,478 | ValueError: size of tuple must match number of fields while using genfromtxt | <pre><code>import os
import numpy as np
os.getcwd()
os.chdir('C:/Users/Gururaj/Desktop/tech/python/')
data = np.genfromtxt("simple.txt", dtype=None, delimiter=",",
names=str(False))
print(data[0])
</code></pre>
<p>=========================error as below======================</p>
<h2>this is being done in Jupyter not... | <p>The <code>genfromtxt</code> docs say:</p>
<blockquote>
<p>names : {None, True, str, sequence}</p>
</blockquote>
<p>you set it to:</p>
<pre><code>In [689]: str(False)
Out[689]: 'False'
</code></pre>
<p>In other words, you've told it to name one field 'False'. Hence the mismatch between the number of fields and... | python|numpy|ipython|ipython-notebook|genfromtxt | 1 |
377,627 | 45,039,027 | Replace Some Columns of DataFrame with Another (Based on Column Names) | <p>I have a <code>DataFrame</code> <code>df1</code>:</p>
<pre><code>| A | B | C | D |
-----------------
| 0 | 1 | 3 | 4 |
| 2 | 1 | 8 | 4 |
| 0 | 2 | 3 | 1 |
</code></pre>
<p>and a <code>DataFrame</code> <code>df2</code>:</p>
<pre><code>| A | D |
---------
| 2 | 2 |
| 3 | 2 |
| 1 | 9 |
</code></pre>
<p>I want to re... | <p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="noreferrer"><code>combine_first</code></a>:</p>
<pre><code>df2.combine_first(df1)
# A B C D
#0 2 1.0 3.0 2
#1 3 1.0 8.0 2
#2 1 2.0 3.0 9
</code></pre> | python|pandas | 6 |
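The same call, reconstructed end to end from the frames shown in the question:

```python
import pandas as pd

df1 = pd.DataFrame({'A': [0, 2, 0], 'B': [1, 1, 2], 'C': [3, 8, 3], 'D': [4, 4, 1]})
df2 = pd.DataFrame({'A': [2, 3, 1], 'D': [2, 2, 9]})

# df2's values win wherever both frames have a (row, column); df1 fills the rest
out = df2.combine_first(df1)
print(out)
```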
377,628 | 45,105,651 | Installed tensorflow, but pycharm ignores it | <p>I installed tensorflow by following this answer from Joshua:
<a href="https://stackoverflow.com/questions/43419795/how-to-install-tensorflow-on-anaconda-python-3-6">how to install tensorflow on anaconda python 3.6</a>
If I test it in cmd:</p>
<pre><code>D:\>python
Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017,... | <p>just install tensorflow from the project settings. You don't need anaconda.</p>
<p><a href="https://i.stack.imgur.com/Sghqj.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Sghqj.jpg" alt="enter image description here"></a></p> | python|tensorflow|pycharm | 8 |
377,629 | 44,917,064 | How to crosstab this table with pandas? | <p>I have this data and i want to cross-tabulate between the GDP level (above average vs. below average) vs. Level of alcohol consumption (above average vs. below average). and find the correlation.</p>
<p><a href="https://i.stack.imgur.com/0dqjF.png" rel="nofollow noreferrer">data</a></p>
<p>I'm trying this but is n... | <p>IIUC:</p>
<pre><code>df['GDP_Avg'] = np.where(df.GDP < df.GDP.mean(),'Below Average','Above Average')
df['RC_Avg'] = np.where(df.Recorded_Consupmtion < df.Recorded_Consupmtion.mean(),'Below Average','Above Average')
pd.crosstab(df['GDP_Avg'],df['RC_Avg'], margins=True)
</code></pre>
<p>Output:</p>
<pre><c... | pandas|crosstab|data-science | 1 |
377,630 | 45,128,032 | Error while computing second derivatives in tensorflow | <p>I am training a model that requires computation of second derivatives (i.e) gradients of gradients. Here is a short snippet that does that:</p>
<pre><code>mapping_loss = tf.losses.sparse_softmax_cross_entropy(
1 - adversary_label, adversary_logits)
adversary_loss = tf.losses.sparse_softmax_cross_entropy(
ad... | <p>It seems like TensorFlow does not support second derivatives of softmax cross entropy at the moment. See <a href="https://github.com/tensorflow/tensorflow/blob/c2ce4f68c744e6d328746b144ff1fcf98ac99e6c/tensorflow/python/ops/nn_grad.py#L449" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/c2ce4... | python|tensorflow | 0 |
377,631 | 44,843,956 | Fill a column in the dataframe based on similar values from another dataframe in pandas | <p>I have two dataframe:</p>
<pre><code> df1 df2
№ year № year
1 2010 373
2 2010 374
3 2010 375
4 2010 376
5 2010 ... ... | <p>I think need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> by <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_inde... | python|pandas | 2 |
377,632 | 45,019,319 | Pandas: Split a string and then create a new column? | <p><a href="https://i.stack.imgur.com/1AVDn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1AVDn.png" alt="enter image description here"></a></p>
<p>Let's say you have Col1.</p>
<p>How do you create the new column 'Col2' after you split the string values in Col1 until you see _?</p> | <p>Edit to handle strings without '_':</p>
<pre><code>df['Col2'] = (np.where(df['Col1'].str.contains('_'),
df['Col1'].str.split('_').str[1],
df['Col1']))
</code></pre>
<p>OR as COLDSPEED suggests in comments:</p>
<pre><code>df['Col1'].str.split('_').str[-1]
</code></pre>
<p>You c... | python|pandas | 22 |
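A quick check that the `.str[-1]` variant leaves strings without an underscore untouched (column values invented):

```python
import pandas as pd

df = pd.DataFrame({'Col1': ['AA_123', 'BB_456', 'CC789']})

# split('_') returns the whole string when there is no '_', so .str[-1] is safe
df['Col2'] = df['Col1'].str.split('_').str[-1]
print(df)
```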
377,633 | 45,088,560 | Filtering on index levels in a pandas.DataFrame | <p>If I have a multiindex dataframe:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]],columns=['a','b','c']).set_index(['a','b'])
</code></pre>
<p>I can simply filter the dataframe on a column, for example:</p>
<pre><code>df[df.c>4]
</code></pre>
<p>But to do the same on the level o... | <p>You can use <code>query</code> for better readability </p>
<pre><code>In [795]: df.query('b > 4')
Out[795]:
c
a b
4 5 6
7 8 9
</code></pre> | python|pandas | 13 |
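The full round trip for the answer's `query` call:

```python
import pandas as pd

df = (pd.DataFrame([[1, 2, 3], [4, 5, 6], [7, 8, 9]], columns=['a', 'b', 'c'])
        .set_index(['a', 'b']))

# query resolves 'b' against the index level, no reset_index needed
out = df.query('b > 4')
print(out)
```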
377,634 | 45,134,654 | Easily switching between feed_dict and queues for input to TensorFlow model | <p>Right now I have a model configured to take its inputs with <code>feed_dict</code>. The code looks something like this:</p>
<pre><code># model.py
class MyModel(object):
def __init__(self, hyperparams):
self.build_model(hyperparams)
def build_model(self, hps):
self.input_data = tf.placeholder(dtype=tf.f... | <p>You could separate the creation of the <code>Variables</code> and the <code>Operations</code> by:</p>
<ul>
<li>adding a <code>build_variables</code> method called at the instantiation of your <code>Model</code> class,</li>
<li>changing the interface of the <code>build_model</code> method so it accepts your <code>x<... | python|tensorflow | 3 |
377,635 | 44,862,754 | Tensorflow: using an input-pipeline (.csv) as a dictionary for training | <p>I'm trying to train a model on a .csv dataset (5008 columns, 533 rows).
I'm using a textreader to parse the data into two tensors, one holding the data to train on [example] and one holding the correct labels [label]:</p>
<pre><code>def read_my_file_format(filename_queue):
reader = tf.TextLineReader()
key, ... | <p>As the error says, you are trying to feed a tensor to <code>feed_dict</code>. You have defined a <code>input_pipeline</code> queue and you cant pass it as <code>feed_dict</code>. The proper way for the data to be passed to the model and train is shown in the code below:</p>
<pre><code> # A queue which will return b... | tensorflow|tensor | 0 |
377,636 | 44,871,405 | Pandas Iterrows Row Number & Percentage | <p>I'm iterating over a dataframe with 1000s of rows. I ideally would like to know the progress of my loops - i.e. how many rows has it completed, what percentage of total rows has it completed etc.</p>
<p>Is there a way I can print the row number or even better, the percentage of rows iterated over? </p>
<p>My code ... | <p>First of all <code>iterrows</code> gives tuples of <code>(index, row)</code>. So the proper code is</p>
<pre><code>for index, row in testDF.iterrows():
</code></pre>
<p>Index in general case is not a number of row, it is some identifier (this is a power of pandas, but it makes some confusions as it behaves not as or... | python|pandas | 8 |
377,637 | 44,953,066 | Pandas Shift Converts Ints to Float AND Rounds | <p>When shifting a column of integers, I know how to fix my column when Pandas automatically converts the integers to floats because of the presence of a NaN.
<a href="https://stackoverflow.com/questions/41870093/pandas-shift-converts-my-column-from-integer-to-float">I basically use the method described here.</a></p>
<p... | <p>Because you know what happens when <code>int</code> is casted as float due to <code>np.nan</code> <strong>and</strong> you know that you don't want the <code>np.nan</code> rows anyway, you can shift yourself with <code>numpy</code></p>
<pre><code>df[1:].assign(prior_epoch=df.epoch.values[:-1])
ep... | python|pandas|rounding | 1 |
377,638 | 45,044,675 | Unique random number sampling with Numpy | <p>I need to create a 10,000 x 50 array in which each row contains an ascending series of random numbers between 1 and 365, like so:</p>
<pre><code>[[ 4 11 14 ..., 355 360 364]
[ 2 13 15 ..., 356 361 361]
[ 4 12 18 ..., 356 361 365]
...,
[ 6 9 17 ..., 356 362 364]
[ 1 10 19 ..., 352 357 360]
[ ... | <p>The following solution does not use sort:</p>
<pre><code>l = np.array([True]*50 + [False]*315)
total = np.arange(1,366)
sample_dates = np.array([total[np.random.permutation(l)] for _ in range(10000)])
</code></pre>
<p>Hence it seems to be faster than the other suggested solutions (takes 0.44 seconds on my computer... | python|performance|numpy | 2 |
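A smaller-scale sketch of the permutation trick (100 rows instead of 10,000, seeded for reproducibility) that also shows why every row comes out sorted: boolean indexing returns elements in index order.

```python
import numpy as np

np.random.seed(0)  # seed chosen arbitrarily for this sketch
mask = np.array([True] * 50 + [False] * 315)
total = np.arange(1, 366)

# Permuting the mask picks 50 distinct days; indexing keeps them ascending
sample_dates = np.array([total[np.random.permutation(mask)] for _ in range(100)])
print(sample_dates.shape)  # (100, 50)
```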
377,639 | 45,177,318 | In pandas, is there some compact way to plot data across days of the week? | <p>I've got a simple dataframe with a set of values recorded against <code>datetime</code>s which are set to the index. Is there some compact way to get this data plotted across days of the week? I mean something like the following, with the days of the week across the horizontal axis and the data for different weeks p... | <h3>Setup:</h3>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates
np.random.seed(51723)
#Make some fake data to play with. Includes a partial week.
n = pd.DatetimeIndex(start="2-Jan-2017", end="1-Mar-2017", freq="1H")
df = pd.DataFrame(index=n, data=... | python|pandas|datetime|matplotlib|weekday | 2 |
377,640 | 45,179,829 | How to split a 3D matrix/volume into a fixed size sub-volumes and then re-merge in python? | <p>How can I split a 3D numpy array into fixed size 3D sub-arrays, do some manipulation on the sub-arrays, and finally put them back in the same order to create the original big 3D volume?</p>
<p>e.g. big volume is nxnxm</p>
<p>so, I would like to split it to sub-vlumes of k x k x k, and do some manipulation on each ... | <p>A simple solution would be to process your array with nested for-loops:</p>
<pre><code>A = np.random.rand(5, 4)
print("A:", A)
step = 2
newHeight = int(np.ceil(float(A.shape[0]) / step))  # np.zeros needs integer shapes
newWidth = int(np.ceil(float(A.shape[1]) / step))
B = np.zeros((newHeight, newWidth))
C = np.zeros(A.shape)
for i in range(B.shape[0]):
... | python|numpy|matrix|split | 1 |
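When the side lengths divide evenly by k, a reshape/transpose pair gives a loop-free split and a lossless merge. This is a sketch, not the answer's code:

```python
import numpy as np

def split_blocks(vol, k):
    """Split an n1 x n2 x n3 volume into non-overlapping k x k x k sub-volumes."""
    n1, n2, n3 = vol.shape
    assert n1 % k == 0 and n2 % k == 0 and n3 % k == 0
    return (vol.reshape(n1 // k, k, n2 // k, k, n3 // k, k)
               .transpose(0, 2, 4, 1, 3, 5)
               .reshape(-1, k, k, k))

def merge_blocks(blocks, shape, k):
    """Inverse of split_blocks: reassemble the blocks into the original volume."""
    n1, n2, n3 = shape
    return (blocks.reshape(n1 // k, n2 // k, n3 // k, k, k, k)
                  .transpose(0, 3, 1, 4, 2, 5)
                  .reshape(shape))

vol = np.arange(4 * 4 * 4).reshape(4, 4, 4)
blocks = split_blocks(vol, 2)            # manipulate each k-cube here
restored = merge_blocks(blocks, vol.shape, 2)
print(blocks.shape)  # (8, 2, 2, 2)
```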
377,641 | 45,117,849 | Creating a new variable through another script in python | <p>I've been wondering how to accomplish this.</p>
<p>I have 2 python scripts, the major one calls to the other and pass them 3 variables, then this script which is only one function returns a tuple of lenght 4. And finally I call this script through the first one.</p>
<p>My question is: is there any method to make th... | <p>Suppose both scripts, script1.py and script2.py, are in the same directory.
To call the function named <code>Femma</code> defined in <code>script2.py</code> from <code>script1.py</code>, you would use:<br>
In file script1.py<br>
<code>import script2<br>
script2.Femma(args)</code> </p> | python|function|numpy | 0 |
377,642 | 44,889,376 | How to go from printing [1, 2, 3] to printing 1, 2, 3 in a text file Python | <p>I am using the following loop to iterate over a numpy array and print into a separate text file.</p>
<pre><code>c= np.array([1, 2, 3])
nc = c.astype(np.int)
for x in nc:
print >> thing_here, x
</code></pre>
<p>yet when I open the thing_here text file it prints my array as <code>[1, 2, 3]</code> rather th... | <p>the <code>join</code> command will do this:</p>
<pre><code>c = np.array([1,2,3])
c_joined = ' '.join(map(str,c))
</code></pre>
<p>If the array is already a list of strings, you can ignore the <code>map()</code> command and just use:</p>
<pre><code>c = np.array([1,2,3])
c_joined = ' '.join(c)
</code></pre>
<p>The... | python|arrays|numpy | 2 |
377,643 | 45,064,916 | How to find the correlation between a group of values in a pandas dataframe column | <p>I have a dataframe df:</p>
<pre><code>ID Var1 Var2
1 1.2 4
1 2.1 6
1 3.0 7
2 1.3 8
2 2.1 9
2 3.2 13
</code></pre>
<p>I want to find the pearson correlation coefficient value between <code>Var1</code> and <code>Var2</code> for every <code>ID</... | <p>To get your desired output format you could use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corrwith.html" rel="nofollow noreferrer"><code>.corrwith</code></a>:</p>
<pre><code>corrs = (df[['Var1', 'ID']]
.groupby('ID')
.corrwith(df.Var2)
.rename(columns={... | python|pandas|dataframe | 15 |
377,644 | 45,037,453 | Group by multiple column and perform custom aggregation | <p>I have a dataframe example given below.</p>
<pre><code> hour minute value
0 0 10
0 5 20
0 10 30
0 15 50
0 20 10
0 25 55
1 0 55
1 5 50
1 10 10
1 15 20
1 20 30
1 25 40
1 30 50
</cod... | <p>You can use <code>df.groupby(...).transform('mean')</code>to return a series with the mean of each group:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(columns = ['hour', 'minute', 'value'], data =
[[ 0, 0, 10],
[0, 5, 20],
[0, 10, 30],
[ 0, 15, 50],
[0, 20, ... | python|pandas | 2 |
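A self-contained sketch of the transform idea above, with dummy values chosen so the per-hour means come out clean:

```python
import pandas as pd

df = pd.DataFrame({
    "hour":   [0, 0, 0, 1, 1, 1],
    "minute": [0, 5, 10, 0, 5, 10],
    "value":  [10, 20, 30, 30, 40, 50],
})
# transform returns a series aligned with df, so the per-hour mean
# is broadcast back onto every row of that hour
df["hourly_mean"] = df.groupby("hour")["value"].transform("mean")
```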
377,645 | 57,107,719 | Numpy shuffle 3-D numpy array by row | <p>Suppose I have the following 3D matrix:</p>
<p>1 1 1</p>
<p>2 2 2</p>
<p>3 3 3</p>
<p>and behind it (3rd dimension):</p>
<p>a a a</p>
<p>b b b</p>
<p>c c c</p>
<p>Defined as the following if I am correct:</p>
<pre><code>import numpy as np
x = np.array([[[1,1,1],
[2,2,2],
[3,3... | <p>Using <code>np.random.shuffle</code>:</p>
<pre><code>import numpy as np
x = np.array([[[1,1,1],
[2,2,2],
[3,3,3]],
[["a","a","a"],
["b","b","b"],
["c","c","c"]]])
ind = np.arange(x.shape[1])
np.random.shuffle(ind)
x[:, ind, :]
</code></p... | python|arrays|numpy|row|shuffle | 2 |
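The shuffle-by-index approach above can be sketched with a numeric array and a seeded generator (the seed is only for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)            # seeded for reproducibility
x = np.arange(2 * 3 * 3).reshape(2, 3, 3)
ind = rng.permutation(x.shape[1])         # one shuffled order of the row axis
shuffled = x[:, ind, :]                   # same row order in every 2-D slice
```

Applying the inverse permutation (`np.argsort(ind)`) recovers the original array, which confirms both slices were shuffled identically.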
377,646 | 56,994,738 | How to utilize 100% of GPU memory with Tensorflow? | <p>I have a 32Gb graphics card and upon start of my script I see: </p>
<pre><code>2019-07-11 01:26:19.985367: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 95.16G (102174818304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
2019-07-11 01:26:19.988090: E tensorflow/stream_executor/cuda/cuda_dri... | <p>Monitoring GPU is not as simple as monitoring CPU.
There are many parallel processes going on which could create a <code>bottleneck</code> for your GPU.</p>
<p>There could be various problems like :<br>
1. Read/Write speed for your data<br>
2. Either CPU or disk is causing a bottleneck</p>
<p>But I think it is pre... | python|tensorflow | 5 |
377,647 | 56,945,295 | How to Load and Continue Training Model Saved as .H5 in Google Drive | <p>I followed the following tutorial in Google Colab to create a text generating RNN: <a href="https://www.tensorflow.org/tutorials/sequences/text_generation" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/sequences/text_generation</a>
Then, I trained it with my own data. At the end, I also added the fo... | <p>Press <code>Ctrl+Alt+P</code> or look at the bottom left of the screen, there you can find both the upload and download snippets. Just adapt it to h5 files instead of txt files:</p>
<p><strong>Upload</strong>:</p>
<pre><code># Import PyDrive and associated libraries.
# This only needs to be done once in a notebook.
... | python|tensorflow|google-colaboratory|pydrive | 0 |
377,648 | 57,176,319 | How to fill in the rows of the hourly timestamp with the last known price until the price column change and likewise continue further | <p>How to fill in the rate with the previously known price for each hour until the price changes likewise fill in the rate until it changes in future.</p>
<pre><code> Provided Raw Dataset
Date product price
2019-01-02 02:00:00 XVZ 22.00
2019-01-02 05:00:00 XVZ ... | <p>Try with <a href="https://www.google.com/search?q=df+resample&rlz=1C1GCEU_enIN822IN823&oq=df+resample&aqs=chrome.0.35i39j0l5.2112j0j7&sourceid=chrome&ie=UTF-8" rel="nofollow noreferrer"><code>resample</code></a>:</p>
<pre><code>df.set_index('Date').resample('H').ffill().reset_index()
</code></pr... | python-3.x|pandas|numpy | 0 |
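A minimal sketch of the resample-then-ffill pattern above, using a two-row stand-in for the price data:

```python
import pandas as pd

df = pd.DataFrame({
    "Date": pd.to_datetime(["2019-01-02 02:00", "2019-01-02 05:00"]),
    "price": [22.00, 25.00],
})
# upsample to an hourly grid and forward-fill the last known price
hourly = df.set_index("Date").resample("60min").ffill().reset_index()
```

The 03:00 and 04:00 rows are created by the resample and carry the 22.00 price forward until it changes at 05:00.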
377,649 | 57,121,057 | Replace first few digits birthdate pandas | <p><strong>Background</strong></p>
<p>I have the following sample df</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Birthdate':['This person was born Date of Birth: 5/6/1950 and other',
'no Date of Birth: nothing here',
'One Date of Birth: 01/01/2001 last he... | <p>Try this:</p>
<pre><code>df['Birthdate'] = df.Birthdate.str.replace(r'[0-9]?[0-9]/[0-9]?[0-9]/', '*BDAY*')
Out[273]:
Birthdate P_ID N_ID
0 This person was born Date of Birth: *BDAY*1950... 1 A1
1 no Date of Birth: nothing here 2 A2
2 ... | regex|python-3.x|string|pandas|replace | 1 |
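As the answer's own sample output shows, the pattern leaves the year unmasked (`*BDAY*1950`). A variant that also consumes the year, assuming the whole date should be hidden:

```python
import pandas as pd

df = pd.DataFrame({"Birthdate": [
    "This person was born Date of Birth: 5/6/1950 and other",
    "no Date of Birth: nothing here",
]})
# \d{1,2}/\d{1,2}/\d{4} covers day, month, and the four-digit year
df["Birthdate"] = df["Birthdate"].str.replace(
    r"\d{1,2}/\d{1,2}/\d{4}", "*BDAY*", regex=True)
```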
377,650 | 57,011,682 | New column of unique index values for a given column | <p>I have the following data frame:</p>
<pre><code>data = dict(name=['a', 'a', 'a', 'b', 'b', 'c', 'd'])
df = pd.DataFrame(data=data, columns=data.keys())
</code></pre>
<p>How do I create a new row with values that correspond to the unique values of <code>name</code>?</p>
<p>What I would like is the following, pleas... | <p>Here are three methods: </p>
<pre><code>df.index=pd.factorize(df.name)[0]+1
df.index=df.name.astype('category').cat.codes+1
df.index=df.groupby('name').ngroup()+1
</code></pre> | python|pandas|dataframe|indexing|unique | 2 |
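The first of the three methods above, sketched end to end:

```python
import pandas as pd

df = pd.DataFrame({"name": ["a", "a", "a", "b", "b", "c", "d"]})
# factorize assigns 0-based codes in order of first appearance; +1 starts at 1
df.index = pd.factorize(df["name"])[0] + 1
```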
377,651 | 56,871,550 | Sum based on grouping in pandas dataframe? | <p>I have a pandas dataframe df which contains:</p>
<pre><code>major men women rank
Art 5 4 1
Art 3 5 3
Art 2 4 2
Engineer 7 8 3
Engineer 7 4 4
Business 5 5... | <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>melt()</code></a> then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby()</code></a>:</p>
<pre><code>df.drop... | python|pandas|dataframe|sum | 2 |
377,652 | 57,285,620 | How to fix "OverflowError: Unsupported UTF-8 sequence length when encoding string" | <p>Getting follwoing error while converting pandas dataframe to json</p>
<blockquote>
<p>OverflowError: Unsupported UTF-8 sequence length when encoding string</p>
</blockquote>
<p>this is code to </p>
<pre><code> bytes_to_write = data.to_json(orient='records').encode()
fs = s3fs.S3FileSystem(key=aws... | <p>As this <a href="https://stackoverflow.com/questions/8422243/overflowerror-unsupported-utf-8-sequence-length-when-encoding-string#comment105608580_14567504">answer suggests</a>, I converted the data-frame using the function <code>.to_json()</code> and the <code>default_handler</code> parameter, you can find the docu... | python|pandas|utf-8|to-json | 5 |
377,653 | 57,268,610 | Problem either with number of characters exceeding cell limit, or storing lists of variable length | <p>The problem:</p>
<p>I have lists of genes expressed in 53 different tissues. Originally, this data was stored in a maximal array of the genes, with 'NaN' where there was no expression. I am trying to create new lists for each tissue that just have the genes expressed, as it was very inefficient to be searching thro... | <p>IIUC you need lists of the gene names found in each tissue. This writes these lists as columns into a csv:</p>
<pre><code>import pandas as pd
df = pd.read_csv('E-MTAB-5214-query-results.tsv', skiprows = [0,1,2,3], sep='\t')
df = df.drop(columns='Gene ID').set_index('Gene Name')
res = pd.DataFrame()
for c in df.co... | python|pandas|csv|export-to-csv|large-files | 1 |
377,654 | 56,956,350 | Count non-na values by row and save total to a new variable in pandas | <p>I am new to python and I am trying to count non-na values, per row, and save the total to a new variable.</p>
<p>I have the data frame: </p>
<pre><code>data = {'x1': ["Yes", "Yes", "No"],
'x2': ["Yes",np.nan, "Yes"],
'x3': [np.nan, np.nan, "No"]}
df = pd.DataFrame(data, columns = ['x1', 'x2', 'x3'])
... | <p>You just need to use <code>count()</code> with <code>axis=1</code>:</p>
<pre><code>df['Total'] = df.count(axis=1)
</code></pre>
<p>Yields:</p>
<pre><code> x1 x2 x3 Total
0 Yes Yes NaN 2
1 Yes NaN NaN 1
2 No Yes No 3
</code></pre> | python|pandas|dataframe|count | 6 |
377,655 | 56,962,051 | Pandas dataframe to_sql with data longer than 65536 characters | <p>I have a Pandas dataframe, where some columns have values longer than 65536 characters. When I tried to export the data to MySQL using <code>df.to_sql(con=engine, name=table_name, if_exists='replace', index=False)</code>, they were truncated to 65536 characters. </p>
<p>Is there a way to automatically convert a col... | <p>This might be a workaround. The only thing is you need to have the list of columns that need to be converted to LONGTEXT. </p>
<pre><code>from sqlalchemy.dialects.mysql import LONGTEXT
dtype = {
"long_column_1": LONGTEXT,
"long_column_2": LONGTEXT
}
pdf.to_sql(con=engine, name=table_name, if_exists='replace... | python|mysql|sql|pandas|dataframe | 2 |
377,656 | 57,008,853 | Convert Pandas Timestamp with Time-Offset Column | <p>I get daily reports which include a timestamp column and a UTC Offset column. Using pandas, I can convert the int Timestamp into a datetime64 type. I unfortunately can't figure out how to use the offset.</p>
<p>Since the 'UTC Offset' column comes in as a string I have tried converting it to an int to help, but can'... | <p>I prefer to convert to datetime as it's easier to apply offsets than with native pandas time format.</p>
<p>To get the timezone offset:</p>
<pre><code>from tzlocal import get_localzone # pip install tzlocal
millis = 1288483950000
ts = millis * 1e-3
local_dt = datetime.fromtimestamp(ts, get_localzone())
utc_offset =... | python|pandas|datetime|utc|timezone-offset | 0 |
377,657 | 57,175,344 | Cython references to slots in a numpy array | <p>I have an object with a numpy array instance variable.</p>
<p>Within a function, I want to declare local references to slots within that numpy array.</p>
<p>E.g.,</p>
<pre><code>cdef double& x1 = self.array[0]
</code></pre>
<p>Reason being, I don't want to spend time instantiating new variables and copying v... | <p>C++ references aren't supported as local variables (even in Cython's C++ mode) because they need to be initialized upon creation and Cython prefers to generate code like:</p>
<pre><code># start of function
double& x_ref
# ...
x_ref = something # assign
# ...
</code></pre>
<p>This ensures that variable scope be... | numpy|pointers|reference|cython | 1 |
377,658 | 57,223,665 | How does TensorFlow generate gen_array_ops.py via array_ops.cc? | <p>TensorFlow generates code automatically. I am curious about how TF generates <code>gen_array_ops.py</code> by <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/array_ops.cc" rel="nofollow noreferrer"><code>array_ops.cc</code></a> ?</p>
<p>The generated python file is at <code>python3... | <p>The Python code generation is done at building time through Bazel. You can find the relevant definition in <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/tensorflow.bzl#L795-L919" rel="noreferrer"><code>tensorflow/tensorflow.bzl</code></a>, I will post here just the header:</p>
<pre><code... | python|tensorflow | 8 |
377,659 | 56,927,268 | Error while loading a pretrained resnet model | <p>I am trying to load the pre-trained ResNet model in the below link
<a href="https://drive.google.com/open?id=1xkVK92XLZOgYlpaRpG_-WP0Elzg4ewpw" rel="nofollow noreferrer">https://drive.google.com/open?id=1xkVK92XLZOgYlpaRpG_-WP0Elzg4ewpw</a></p>
<p>But it gives RuntimeError: The Session graph is empty. Add operatio... | <p>You must first build the model (the "rough house"), i.e. define its operations, such as tf.Variable(), tf.add(), tf.nn.softmax_cross_entropy_with_logits(); only then can you load the parameters (the "bed and furniture") into it.</p> | tensorflow|resnet|pre-trained-model | 0 |
377,660 | 57,053,378 | Query PubMed with Python - How to get all article details from query to Pandas DataFrame and export them in CSV | <p>How can I get all article details from query on <a href="https://www.ncbi.nlm.nih.gov/pubmed/" rel="nofollow noreferrer">PubMed</a> to Pandas DataFrame and export them all into CSV.</p>
<p>I need following article details:</p>
<p><strong>pubmed_id, title, keywords, journal, abstract, conclusions,methods, results, ... | <p>Here is how I did it. It's fully functional code, all you need to
do is install pymed with
<code>pip install pymed</code> .
Function is here:</p>
<pre><code>from pymed import PubMed
pubmed = PubMed(tool="PubMedSearcher", email="myemail@ccc.com")
## PUT YOUR SEARCH TERM HERE ##
search_term = "Your search term"
r... | python|pandas|dictionary|pubmed | 9 |
377,661 | 57,115,074 | tf.logging.__dict__[hparams.verbosity] / 10) KeyError: 'INFO' | <p>Trying to run GitHub codes of nonlocal recurrent networks.
I am ending up getting this error. How to debug this error? </p>
<p>Traceback (most recent call last):
File "trainer.py", line 97, in
tf.logging.__dict__[hparams.verbosity] / 10)
KeyError: 'INFO'</p>
<p>I tried editing the code, but it did not work... | <p>On TensorFlow 1.14, the code below causes the error.</p>
<pre><code>tf.logging.__dict__[hparams.verbosity]
</code></pre>
<p>So you can fix the code like below. It will be ok on tf 1.14.</p>
<pre><code>getattr(tf.logging, hparams.verbosity)
</code></pre> | tensorflow | 0 |
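The same attribute-lookup idiom can be demonstrated without TensorFlow, using the stdlib logging module (which tf.logging wrapped in 1.x):

```python
import logging

verbosity = "INFO"
# getattr(obj, name) is the supported spelling of obj.__dict__[name]
level = getattr(logging, verbosity)
```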
377,662 | 57,180,566 | Capture all csv files within all subfolders in main directory - Python 3.x | <p>The code below is used to split csv files based on a given time value. The problem is this code won't capture all the csv files. For example, inside the TT1 folder there are several subfolders, and those subfolders have folders inside them. Within those sub-sub-folders there are csv files. When I give the path as path='/r... | <p>You can automatically get all the subfolders and change the path:
If all the subfolders start with "Sub":</p>
<pre><code>import pandas as pd
import numpy as np
import glob
import os
path = '/root/Desktop/TT1/'
mystep = 0.4
#define the function
def data_splitter(df, name):
max_time = df['Time'].max() # get max... | python|python-3.x|pandas|csv|glob | 1 |
377,663 | 57,115,873 | ValueError: Error when checking target: expected dense_10 to have shape (1,) but got array with shape (19316,) | <p>I am running a CNN that check for images but does not classify. In fact, the output layer is a dense layer that have as argument the size of the images in the labels in 1d.</p>
<p>As shown below in the code, I am using model.fit_generator() instead of model.fit and when it comes to start training the model the foll... | <p>You are currently using the <code>sparse_categorical_crossentropy</code> loss, which needs integer labels and does the one-hot encoding internally, but your labels are already one-hot encoded.</p>
<p>So for this case you should revert back to the <code>categorical_crossentropy</code> loss.</p> | python|tensorflow|keras|deep-learning | 1 |
377,664 | 57,186,654 | Can't restore tensorflow variables | <p>I have a class as follows and the <code>load</code> function returns me the tensorflow saved graph. </p>
<pre><code>class StoredGraph():
.
.
.
def build_meta_saver(self, meta_file=None):
meta_file = self._get_latest_checkpoint() + '.meta' if not meta_file else meta_file
meta_s... | <p>As long as you have created all the necessary variables in your file and given them the <strong>same</strong> "name" (and of course the shape needs to be correct as well), <code>restore</code> will load all the appropriate values into the appropriate variables. <a href="https://www.tensorflow.org/guide/saved_model#r... | python|tensorflow|deep-learning | 0 |
377,665 | 56,910,427 | Alternative to nested np.where statements to retain NaN values while creating a new pandas boolean column based on two other existing columns | <p>I'm trying to figure out a more straightforward alternative for evaluating and creating a new column in a pandas dataframe based on two other columns that contain either True, False, or NaN values. I want the new column to evaluate as follows relative to the two reference columns:</p>
<ul>
<li>If either True -> Tru... | <p><code>np.select()</code> is made for this type of job:</p>
<pre><code>df['col3'] = pd.Series(np.select(
[(df.col1 == True) | (df.col2 == True), (df.col1 == False) | (df.col2 == False)],
[True, False], np.array(np.nan, object)))
</code></pre>
<p>Or, using only Pandas, but I think this way is less readable:<... | python|pandas|numpy | 1 |
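A runnable sketch of the np.select approach above, with a four-row dummy frame:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"col1": [True, False, np.nan, np.nan],
                   "col2": [np.nan, np.nan, True, np.nan]})
conditions = [(df.col1 == True) | (df.col2 == True),
              (df.col1 == False) | (df.col2 == False)]
# the object-dtype NaN default keeps True/False from being cast to 1.0/0.0
df["col3"] = pd.Series(np.select(conditions, [True, False],
                                 np.array(np.nan, dtype=object)))
```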
377,666 | 57,113,188 | While using GPU for PyTorch models, getting the CUDA error: Unknown error? | <p>I am trying to use a pre-trained model using PyTorch. While loading the model to GPU, it is giving the following error:</p>
<pre><code>Traceback (most recent call last):
File "model\vgg_model.py", line 45, in <module>
vgg_model1 = VGGFeatureExtractor(True).double().to(device)
File "C:\Users\myidi\Anac... | <p>Strangely, this worked by using CUDA Toolkit 10.1. I don't know why the latest one is not the default one on PyTorch website in the section where they provide the commands to download the libraries. </p>
<p>Used the following command to install the libraries: <code>conda install pytorch torchvision cudatoolkit=10.1... | pytorch | 3 |
377,667 | 57,277,214 | Multi-GPU training of AllenNLP coreference resolution | <p>I'm trying to replicate (or come close) to the results obtained by the <a href="https://www.aclweb.org/anthology/D17-1018" rel="nofollow noreferrer">End-to-end Neural Coreference Resolution</a> paper on the <a href="http://conll.cemantix.org/2012/introduction.html" rel="nofollow noreferrer">CoNLL-2012 shared task</a... | <p>After some digging through the code I found out that AllenNLP does this under the hood directly through its <a href="https://allenai.github.io/allennlp-docs/api/allennlp.training.trainer.html#allennlp.training.trainer.Trainer" rel="nofollow noreferrer">Trainer</a>. The <code>cuda_device</code> can either be a single... | python|pytorch|allennlp | 2 |
377,668 | 57,210,071 | How do I use the pandas.melt function to unpivot a few columns while keeping the rest intact | <p>I am working with a database with 66 columns and I wish to unpivot only 3 columns using python <code>pandas.melt</code> function. </p>
<pre><code>df = pd.melt(df,value_vars=["RFR 1","RFR 2","RFR 3"],var_name="RFR Index",value_name="RFR Mode")
</code></pre>
<p>I'm finding all the other columns are dropped unless I ... | <p>IIUC, you can use <code>pandas.Index.difference</code> to get all columns of your dataframe that are not in your specified list. </p>
<p>A bit of a nonsensical example, but:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=np.random.randn(5,10),
columns=['a','b','c','d',... | python|python-3.x|pandas | 4 |
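The Index.difference idea above, sketched with hypothetical column names:

```python
import pandas as pd

df = pd.DataFrame({"id": [1, 2],
                   "RFR 1": [0.1, 0.2],
                   "RFR 2": [0.3, 0.4],
                   "other": ["x", "y"]})
value_vars = ["RFR 1", "RFR 2"]
# every column not being unpivoted becomes an identifier column
id_vars = df.columns.difference(value_vars).tolist()
long_df = pd.melt(df, id_vars=id_vars, value_vars=value_vars,
                  var_name="RFR Index", value_name="RFR Mode")
```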
377,669 | 56,929,866 | Merge operation is occupying full RAM | <p>I have 20 CSV files having a maximum size of 1 GB. In all these files, there are only two common columns "X", "Y". I am trying to merge these files on ["X", "Y"] to get a single file with all the columns. But, while doing so, I am getting <strong>MemoryError</strong> after merging 10 files.<br>Please help me to find... | <p>have you tried to free up memory manually with garbage collector "gc.collect()"? You can do it at the end of each loop, like this:</p>
<pre><code>import gc
final_df = pd.DataFrame()
for f in file_list:
df = pd.read_csv(f)
if final_df.empty:
final_df = df
else:
final_df = final_df.merge(... | python-3.x|pandas | 0 |
377,670 | 57,225,064 | want to calculate values of new columns by picking up the formula defined in another column | <p>I have a df with n columns calculated dynamically. It has one column that defines which formula I need to apply to calculate the values of a new column. That formula needs to be applied to the existing columns of the df.</p>
<p>For example:
df1</p>
<pre><code>Col1 Col2 Col3 Col4(Formula... | <p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eval.html" rel="noreferrer"><code>df.eval()</code></a>:</p>
<pre><code>df['Col5']=np.diag(df.eval(df['Col4(Formula)']))
print(df)
</code></pre>
<hr>
<pre><code> Col1 Col2 Col3 Col4(Formula) Col5
0 2017 12 2 Co... | python|pandas | 5 |
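An alternative row-by-row sketch of the same idea, using pandas.eval per row instead of the np.diag trick (which evaluates every formula against every row):

```python
import pandas as pd

df = pd.DataFrame({"Col1": [2017, 2018],
                   "Col2": [12, 6],
                   "Col3": [2, 3],
                   "Col4(Formula)": ["Col1 + Col2", "Col2 * Col3"]})
# evaluate each row's own formula with that row's values as the namespace
df["Col5"] = df.apply(
    lambda r: pd.eval(r["Col4(Formula)"],
                      local_dict=r[["Col1", "Col2", "Col3"]].to_dict()),
    axis=1)
```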
377,671 | 56,974,974 | Pandas resample() Series giving incorrect indexes | <p>I am trying to bin a multi-year time series of dates and float values. I'm trying to aggregate each day in to 15 minute bins. So I group the data set by day and then resample in 15 minute increments on each day.</p>
<p>The results seemed odd so I took a closer look at the behaviour of the resampling. The code bel... | <p>Resample is a tricky function. The main issue with the resampling is that you need to select which value you want to keep (using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.last.html" rel="nofollow noreferrer"><code>pandas.DataFrame.last</code></a> or <a href="https://pandas.... | python-3.x|pandas|time-series | 2 |
377,672 | 57,200,908 | Remove/replace columns values based on another columns using pandas | <p>I have a data frame like this: </p>
<pre><code>df
col1 col2 col3
ab 1 prab
cd 2 cdff
ef 3 eef
</code></pre>
<p>I want to remove col1 values from the col3 values</p>
<p>the final data frame should look like<</p>
<pre><code>df
col1 col2 col3
ab ... | <p>Use <code>.apply</code> with <code>replace</code> over <code>axis=1</code>:</p>
<pre><code>df['col3'] = df.apply(lambda x: x['col3'].replace(x['col1'], ''), axis=1)
</code></pre>
<p><strong>Output</strong></p>
<pre><code> col1 col2 col3
0 ab 1 pr
1 cd 2 ff
2 ef 3 e
</code></pre> | python|pandas|dataframe | 2 |
377,673 | 57,078,594 | Filling cell based on existing cells | <p>I have data in following format:</p>
<pre><code>8A564 nan json
8A928 nan json
8A563 nan json
8A564 10616280 json
8A563 10616222 json
8A564 nan json
8B1BB 10982483 json
8A564 10616280 json
</code></pre>
<p>I would like to fill data in second column to matc... | <h3><code>groupby</code> and <code>bfill</code></h3>
<p>Keep in mind that the <code>0</code> in <code>groupby(0)</code> refers to the column named <code>0</code>. If your column has a different name, use that.</p>
<pre><code>df.groupby(0).bfill()
0 1 2
0 8A564 10616280 json
1 8A928 NaN ... | python|pandas | 5 |
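A sketch of the groupby-bfill pattern above; the column names here are illustrative stand-ins for the numeric labels in the question:

```python
import pandas as pd

df = pd.DataFrame({"key": ["8A564", "8A928", "8A564", "8B1BB"],
                   "num": [None, None, "10616280", "10982483"],
                   "fmt": ["json"] * 4})
# back-fill within each key group, so earlier NaNs take the later value
filled = df.groupby("key").bfill()
```

The lone `8A928` row has no later value in its group, so it stays NaN.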
377,674 | 57,236,261 | Why casting input and model to float16 doesn't work? | <p>I'm trying to change inputs and a deep learning model to float16, since I'm using a T4 GPU and they work much faster with fp16.
Here's part of the code: I first have my model and then made a dummy data point to get the data casting figured out first (I ran it with the whole batch and got the same... | <p>Check out your implementation of <code>CRNN</code>. My guess is that you have a "hidden" state tensor stored in the model, not as a "buffer" but just as a regular tensor. Therefore, when casting the model to float16 the hidden state remains float32 and causes this error. </p>
<p>Try to store the hidden state... | python|pytorch | 2 |
377,675 | 56,900,687 | Reverse Rolling mean for DataFrame | <p>I am trying to create a fixture difficulty grid using a DataFrame. I want the mean for the next 5 fixtures for each team.</p>
<p>I’m currently using df.rolling(5, min_periods=1).mean().shift(-4). This is working for the start but is pulling NANs at the end. I understand why NANs are returned – there is no DF to shi... | <p>IIUC you need inverse values by indexing, use rolling and inverse back:</p>
<pre><code>df1 = df.iloc[::-1].rolling(5, min_periods=1).mean().iloc[::-1]
print (df1)
ARS AVL BHA BOU
0 3.4 2.4 2.80 2.60
1 3.5 2.0 2.75 2.75
2 4.0 2.0 3.00 3.00
3 3.5 2.0 3.50 2.50
4 3.0 2.0 2.00 2.00
</code></... | pandas|dataframe|nan|shift|rolling-computation | 2 |
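The double-reversal trick above, on a small series where each value's "next up-to-3" mean is easy to verify by hand:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])
# reverse, roll forward over up to 3 values, then reverse back:
# each entry becomes the mean of itself and the next two values,
# with shorter windows (not NaN) at the end thanks to min_periods=1
nxt = s.iloc[::-1].rolling(3, min_periods=1).mean().iloc[::-1]
```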
377,676 | 57,253,539 | Parse phone number and string into new columns in pandas dataframe | <p>I've got a list of addresses in a single column <code>address</code>, how would I go about parsing the phone number and restaurant category into new columns? My dataframe looks like this </p>
<pre><code> address
0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses ... | <p>Try using Regex with <code>str.extract</code>. </p>
<p><strong>Ex:</strong></p>
<pre><code>df = pd.DataFrame({'address':["Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses",
"Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis",
... | python|pandas | 3 |
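A sketch of the extraction with named groups, assuming the phone number always has the ddd-ddd-dddd shape and the category is whatever trails it:

```python
import pandas as pd

df = pd.DataFrame({"address": [
    "Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis",
]})
# phone: ddd-ddd-dddd; category: the text following it to end of string
parts = df["address"].str.extract(
    r"(?P<phone>\d{3}-\d{3}-\d{4})\s+(?P<category>.+)$")
```

`str.extract` returns one column per named group, ready to join back onto the original frame.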
377,677 | 57,153,016 | subsetting a data frame by partially matching another data frame in r (open to python/pandas solution) | <p><strong>basic problem description :</strong></p>
<p>Let <code>df</code> be a data frame and <code>df_match</code> a one row data frame.</p>
<p>I want to subset <code>df</code> such that only the rows remain whose non NA-Values are contained in the non-NA values of <code>df_match</code>.</p>
<p><strong>A minimal e... | <p>You can <code>Map</code> over the columns of <code>df</code> and <code>df_match</code>, and for each column-pair return a vector whose elements are <code>TRUE</code> if the corresponding element of <code>df</code> is <code>NA</code> or equals the element of <code>df_match</code>. Then select the rows where the numbe... | python|r|pandas|dataframe|subset | 1 |
377,678 | 57,294,070 | What is the difference between a numpy array of size (100, 1) and (100,)? | <p>I have two variables coming from diffrent functions and the first one <code>a</code> is:</p>
<pre><code><class 'numpy.ndarray'>
(100,)
</code></pre>
<p>while the other one <code>b</code> is:</p>
<pre><code><class 'numpy.ndarray'>
(100, 1)
</code></pre>
<p>If I try to correlate them via:</p>
<pre><co... | <p><code>(100, 1)</code> is a 2-D array of 100 rows of length 1, like <code>[[1],[2],[3],[4]]</code>, while <code>(100,)</code> is a 1-D array, like <code>[1, 2, 3, 4]</code>.</p>
<pre><code>a1 = np.array([[1],[2],[3],[4]])
a2 = np.array([1, 2, 3, 4 ])
</code></pre> | python|numpy | 4 |
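The practical consequence of the two shapes can be seen directly: broadcasting a `(100,)` array against a `(100, 1)` array yields a `(100, 100)` result, which is why the correlation misbehaves until the extra axis is removed:

```python
import numpy as np

a = np.zeros(100)         # shape (100,): a single axis
b = np.zeros((100, 1))    # shape (100, 1): 100 rows of length 1
# ravel (or squeeze) drops the length-1 axis so the two align elementwise
aligned = a + b.ravel()
```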
377,679 | 56,956,253 | Pandas reindex an index of a multi index in decreasing order of the series values | <p>I have a pandas series with a multi index like:</p>
<pre><code>A 385 0.463120
278 0.269023
190 0.244348
818 0.232505
64 0.199640
B 1889 0.381681
1568 0.284957
1543 0.259003
... | <p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>pandas.core.groupby.GroupBy.cumcount</code></a>:</p>
<pre class="lang-py prettyprint-override"><code># create example data
df = pd.DataFrame({'a':list(pd.util.te... | python|pandas | 2 |
377,680 | 57,286,016 | Edit concatenated csv with comment lines in the top - Python | <p>I have the following csv file myFile.csv that comes from a pandas dataframe exported:</p>
<pre><code># Comment line with information related to the business
customer_id column_1 column_2 column_3
123 A XX AG
456 B YY TT
# Comment line ... | <p>You can convert one CSV to another with following script:</p>
<pre><code>comments = []
header = ''
data = []
with open('myFile.csv', 'r') as f:
lines = f.readlines()
for i in range(len(lines)):
if not lines[i].startswith('#') and not lines[i-1].startswith('#'):
data.append(lines[i])
elif lines[... | python|pandas|csv|header | 0 |
377,681 | 56,956,454 | Apply preprocessing to the dataset | <p>I am implementing a paper on image segmentation in pytorch. I am required to do some preprocessing steps but as I am trying it the first time so I am unable to incorporate them in the traditional pipeline.
Following are the preprocessing steps-<br></p>
<p>1) N(w, h) = I(w, h) − G(w, h), (1)
where N is the normali... | <p>This is how I did it-</p>
<p>The solution to the first part is to first define the required function and then call it in the transforms, using generic transforms in the following way:</p>
<pre><code>def gaussian_blur(img):
image = np.array(img)
image_blur = cv2.GaussianBlur(image,(65,65),10)
new_image =... | python-3.x|deep-learning|pytorch|image-preprocessing | 0 |
377,682 | 57,144,586 | Tensorflow GradientTape "Gradients does not exist for variables" intermittently | <p>When training my network I am occasionally met with the warning: </p>
<p><code>W0722 11:47:35.101842 140641577297728 optimizer_v2.py:928] Gradients does not exist for variables ['model/conv1d_x/Variable:0'] when minimizing the loss.
</code></p>
<p>This happens sporadically at infrequent intervals (maybe once in ev... | <p>I had an issue that seems similar - may be helpful or not sure depending on what your network actually looks like, but basically, I had a multi-output network and I realised that as I was applying gradients that corresponded to the outputs separately, so for each separate loss there was a branch of the network for w... | python|tensorflow|keras | 16 |
377,683 | 57,241,212 | Integer data type conversion from string | <p>I was working on one data frame series's column whose Data type was 'object' (str).
its format was like '301,694'. </p>
<p>I want data type of that column from panda series to be int or float.
Received errors when I tried below code.</p>
<p>please share knowledge.</p>
<p>1) </p>
<pre><code>df2['Total Ballots Cou... | <p>Hope this helps:
<code>df['colname'] = df['colname'].str.replace(',', '').astype(int)</code></p>
<p>Another thing to do is :</p>
<p><code>int(''.join([c for c in str(number) if c != ',']))</code> for each number in the column.</p>
377,684 | 57,187,901 | tensorflow has been ineffective training, just a few hundred steps, loss quickly dropped to 0, accuracy reached 100%, has been bothering me for months | <p>I am trying to use tensorflow for image classification. There are 5 categories in total, and there are about 300 images in each category.</p>
<p>But in the training process, after just a few hundred steps, I encountered some problems:
1. The loss drops to 0 and accuracy reaches 100%.
2. After adding the verification set (I ... | <p>Mentioning the Solution here for the benefit of the community.</p>
<p>Problem is resolved if we <code>Normalize the Data</code>, i.e., by dividing each Pixel Value by <code>255</code>. </p>
<p>By dividing by <code>255</code>, the <code>0-255</code> range can be described with a <code>0.0-1.0</code> range where <co... | python|tensorflow|deep-learning|conv-neural-network | 0 |
377,685 | 57,063,629 | Simplify List of coordinates | <p>I have supplied a template image and a test image to the function <code>cv.matchTemplate</code>.</p>
<p>After returning, I filter anything out under 95% match. The results are good and I am producing the desired result. The result is a list of tuples each tuple represented by <code>(x,y)</code> The problem is after ... | <p>Fixed this answer due to OP comment about all of the tuples being in a list.
The first if condition is something you can change if you find that you want to be more/less strict about differences between points (e.g. if you want it to be within 5 pixels, you can do <= 5 rather than == 1). </p>
<pre><code>masterTest... | python|numpy|opencv | 1 |
377,686 | 57,065,024 | How to sort .iterrows() values? | <p>I can't seem to find an answer on this topic. I am trying to sort the values in my queryset. Right now, it's automatically sorted by TICKER_id:</p>
<pre><code>TICKER_id
DXJ -0.5
EWA 1.0
EWC 0.0
EWG -1.0
EWI -0.5
EWP -0.5
EWQ 0.5
EWU 0.0
EWW -0.5
EWY -1.0
EWZ 0.5
EZA 0.5
FEZ... | <p>As far as I can tell, Pandas uses the index to control <code>iterrows</code> and will therefore go back to the normal order, even if you've resorted the dataframe, because the index goes with the row.</p>
<p>I've been able to iterate over the df in the intended order by <a href="https://pandas.pydata.org/pandas-doc... | python|pandas|numpy | 2 |
377,687 | 57,235,451 | 'numpy.ndarray' object has no attribute 'iterrows' while predicting value using lstm in python | <p>I have a dataset with three inputs and trying to predict next value of X1 with the combination of previous inputs values. </p>
<p>My three inputs are X1, X2, X3, X4.</p>
<p>So here I am trying to predict next future value of X1. To predict the next X1 these four inputs combination affect with:</p>
<pre><code>X1 +... | <p>Apparently, the <em>data</em> argument of your function is a <em>NumPy</em> array, not a <em>DataFrame</em>.
<em>Data</em>, as an <em>np.ndarray</em>, also has no named columns.</p>
<p>One possible solution, keeping the argument as an <em>np.ndarray</em>, is:</p>
<ul>
<li>iterate over rows of this array using <em>np.app... | python-3.x|pandas|machine-learning|lstm | 0 |
377,688 | 56,977,881 | Replace null values in a column corresponding to specific value in another column pandas | <p>I have a dataframe as below :</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'Country': ['USA','USA','MEX','IND','UK','UK','UK'],
'Region': ['Americas','NaN','NaN','Asia','Europe','NaN','NaN'],
'Flower': ['Rose','Lily','Lily','Orchid','Petunia','Lotus','Dandelion']})
</c... | <p>You can try the code below if <code>ffill</code> is not an option:</p>
<pre><code>df['Region'] = np.select((df.Country.isin(['USA', 'MEX']), df.Country == 'UK'),
('Americas', 'Europe'), df.Region)
</code></pre> | python|pandas | 1 |
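A self-contained version of that answer, with real NaN values in place of the question's 'NaN' strings so the fill is visible:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Country': ['USA', 'USA', 'MEX', 'IND', 'UK', 'UK', 'UK'],
                   'Region': ['Americas', np.nan, np.nan, 'Asia', 'Europe', np.nan, np.nan]})

# Conditions are evaluated in order; rows matching none keep their current Region.
df['Region'] = np.select([df.Country.isin(['USA', 'MEX']), df.Country == 'UK'],
                         ['Americas', 'Europe'], default=df.Region)
print(df.Region.tolist())
```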
377,689 | 57,031,859 | Fast combination of non-unique rows in numpy array, mapped to columns (i.e. fast pivot table problem, without Pandas) | <p>I wonder if anyone can offer ideas or advice on the following coding problem; I'm particularly interested in a fast Python implementation (i.e. avoiding Pandas).</p>
<p>I have a (dummy example) set of data like:</p>
<pre><code>| User | Day | Place | Foo | Bar |
1 ... | <p><strong>Approach #1</strong></p>
<p>Here's one based on dimensionality-reduction for memory-efficiency and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow noreferrer"><code>np.searchsorted</code></a> for tracing back and looking for matching ones between the two ... | python|arrays|pandas|numpy|vectorization | 4 |
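The full approach is truncated above; as a generic illustration of the same idea — mapping non-unique row/column keys to dense integer positions, here with `np.unique` rather than the answer's `searchsorted` machinery:

```python
import numpy as np

# Toy long-format data: (user, day, value) triples with repeated users/days.
users = np.array([3, 3, 7, 7, 7])
days = np.array([1, 2, 1, 2, 3])
vals = np.array([10.0, 11.0, 20.0, 21.0, 22.0])

# Dimensionality reduction: map each key to a dense integer position.
u_ids, u_pos = np.unique(users, return_inverse=True)
d_ids, d_pos = np.unique(days, return_inverse=True)

# Scatter values into a (user x day) table; missing cells stay NaN.
table = np.full((u_ids.size, d_ids.size), np.nan)
table[u_pos, d_pos] = vals
print(table)
```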
377,690 | 45,871,314 | Split DataFrame into a dictionary of groups from multiple columns | <p>I have a dataframe like this:</p>
<pre><code> df = pd.DataFrame({
'Client':['A','B','C','D','E'],
'Revenue':[100,120,50,40,30],
'FYoQ':['FY','Q','Q','Q','FY'],
'Quarter':[np.nan,1,3,4,np.nan],
'Year':[2017,2016,2015,2017,2016]
... | <pre><code>from collections import defaultdict
d = defaultdict(dict)
[d[y].setdefault(q, g) for (y, q), g in df.groupby(['Year', 'Quarter'])];
d = dict(d)
for y, v in d.items():
print(y)
for q, s in v.items():
print(' ' + str(q))
p = s.__repr__()
p = '\n'.join([' ' + l for l ... | python|pandas|dictionary|dataframe|group-by | 3 |
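The comprehension in that answer is used only for its side effect; a plain loop over the same groupby (data reconstructed from the question, with the FYoQ column omitted for brevity) builds the identical nested dict. Note that groupby drops the rows whose Quarter is NaN:

```python
import numpy as np
import pandas as pd
from collections import defaultdict

df = pd.DataFrame({'Client': ['A', 'B', 'C', 'D', 'E'],
                   'Revenue': [100, 120, 50, 40, 30],
                   'Quarter': [np.nan, 1, 3, 4, np.nan],
                   'Year': [2017, 2016, 2015, 2017, 2016]})

d = defaultdict(dict)
for (year, quarter), group in df.groupby(['Year', 'Quarter']):
    d[year][quarter] = group  # e.g. d[2016][1.0] holds client B's rows
print(sorted(d))
```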
377,691 | 45,941,742 | Apply functions to pandas groupby and indexing | <p>I am trying to understand Pandas groupby, but I'm currently seeing some behavior I can't explain. Basically, I have a dataset that looks like this (only the head is shown):</p>
<pre><code> userId movieId rating timestamp parsed_time
0 1 2 3.5 1112486027 2005-04-02 23:53:47
1 1 29 3.5 11124... | <p>The problem happens not during the computation of the standard deviation, but when assigning the result to the new column <code>StdDev</code>. The is because pandas does assignment by index, implicitly.</p>
<p>The code below should work because the result of both <code>groupby</code> operations is indexed on <code>... | python|pandas|dataframe|pandas-groupby | 2 |
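A compact way to sidestep the alignment issue entirely is groupby + `transform`, which returns a result already indexed like the original frame (toy ratings assumed for illustration):

```python
import pandas as pd

df = pd.DataFrame({'userId': [1, 1, 1, 2, 2],
                   'rating': [3.5, 3.5, 4.0, 1.0, 3.0]})

# transform broadcasts each group's std back onto that group's rows,
# so the assignment aligns by index with no extra work.
df['StdDev'] = df.groupby('userId')['rating'].transform('std')
print(df)
```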
377,692 | 45,842,507 | Merging multiple dataframe lines into aggregate lines | <p>For the following dataframe:</p>
<pre><code>df = pd.DataFrame({'Name': {0: "A", 1: "A", 2:"A", 3: "B"},
'Spec1': {0: '1', 1: '3', 2:'5',
3: '1'},
'Spec2': {0: '2a', 1: np.nan, 2:np.nan,
3: np.nan}
}, columns=['Name', 'Spec1', 'Spec2'])
Name Spec... | <p>Another way</p>
<pre><code>In [966]: (df.set_index('Name').unstack()
.dropna().reset_index()
.groupby('Name')[0].apply(','.join))
Out[966]:
Name
A 1,3,5,2a
B 1
Name: 0, dtype: object
</code></pre> | python|pandas | 0 |
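Stripped of the REPL prompts and made self-contained with the question's frame (assuming a reasonably recent pandas; column 0 is the name `reset_index` gives the unnamed stacked Series):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Name': ['A', 'A', 'A', 'B'],
                   'Spec1': ['1', '3', '5', '1'],
                   'Spec2': ['2a', np.nan, np.nan, np.nan]})

# unstack() on a flat index flattens the frame into one Series
# (outer level = original column), so dropna + join-per-Name works on the strings.
out = (df.set_index('Name').unstack()
         .dropna().reset_index()
         .groupby('Name')[0].apply(','.join))
print(out.to_dict())
```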
377,693 | 45,930,222 | validation_batch_size is equal to train_batch_size in training CNN? | <p>I want to save the model with the highest accuracy, so I need to take a batch of validation data to validate after each training step. The training data set will be reused because of epochs, but <strong>if</strong> <code>train_batch_size</code> <strong>equals</strong> <code>validation_batch_size</code>... | <p>It is possible to use a different batch size during evaluation.</p>
<p>That being said, you should use the same validation set every time you evaluate the model. Otherwise, the results can increase/decrease because the examples you evaluated on were inherently easier/more difficult compared to the previous evaluat... | tensorflow|deep-learning|conv-neural-network | 0 |
377,694 | 45,831,743 | How to write a dictionary list to an excel file using python? | <p>I have script that create the array at runtime and it is as below </p>
<pre><code>[{'Currency': 'Euro', 'Age of Bike': 12, 'Build Month': '08', 'Metric': '16694 km', 'Build Year': '2005', 'Website Link': u'https://www.autoscout24.nl/aanbod/motorhispania-benzine-geel-2c73a018-35a0-4e00-a1ed-1a3375ef4c4d', 'Country':... | <pre><code>data = ... # your data
df = pd.DataFrame.from_dict(data)
df = df[['Currency', 'Age of Bike', 'Build Month', 'Metric']]
print(df)
Currency Age of Bike Build Month Metric
0 Euro 12 08 16694 km
1 Euro 20 12 75000 km
2 Euro 30 03 ... | python|excel|pandas|dataframe | 3 |
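Made runnable with two of the records shown above (the rest of the list is elided); the final write is commented out because it needs an Excel engine such as openpyxl or xlsxwriter installed:

```python
import pandas as pd

rows = [{'Currency': 'Euro', 'Age of Bike': 12, 'Build Month': '08', 'Metric': '16694 km'},
        {'Currency': 'Euro', 'Age of Bike': 20, 'Build Month': '12', 'Metric': '75000 km'}]

# The DataFrame constructor (and from_dict) accepts a list of dicts directly.
df = pd.DataFrame(rows)[['Currency', 'Age of Bike', 'Build Month', 'Metric']]
print(df)
# df.to_excel('bikes.xlsx', index=False)  # requires openpyxl or xlsxwriter
```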
377,695 | 45,847,006 | ValueError: operands could not be broadcast together with shapes - inverse_transform- Python | <p>I know <code>ValueError</code> question has been asked many <a href="https://stackoverflow.com/questions/24560298/python-numpy-valueerror-operands-could-not-be-broadcast-together-with-shapes">times</a>. I am still struggling to find an answer because I am using <code>inverse_transform</code> in my code.</p>
<p>Say ... | <p><strike>Although you didn't specify, I'm assuming you are using <code>inverse_transform()</code> from scikit learn's <code>StandardScaler</code></strike>. You need to fit the data first.</p>
<pre><code>import numpy as np
from sklearn.preprocessing import MinMaxScaler
In [1]: arr_a = np.random.randn(5*3).reshape(... | python|arrays|numpy|scikit-learn|broadcast | 7 |
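Completing the cut-off snippet (shapes are assumptions, since the answer's arrays are truncated above): fit the scaler first, and keep the column count identical between `transform` and `inverse_transform`:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

rng = np.random.RandomState(0)
arr_a = rng.randn(5, 3)

scaler = MinMaxScaler().fit(arr_a)           # fit learns per-column min/max
scaled = scaler.transform(arr_a)             # shape (5, 3)
restored = scaler.inverse_transform(scaled)  # must also have 3 columns

print(np.allclose(restored, arr_a))
```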
377,696 | 45,995,040 | Parse ics file in Python. Icalendar package doesn't return start/end date and other properties | <p>I'm trying to parse a Google calendar file (<code>cal.ics</code>) using the icalendar package, running this script:</p>
<pre><code>from icalendar import Calendar, Event
from datetime import datetime
g = open('cal.ics','rb')
gcal = Calendar.from_ical(g.read())
for component in gcal.walk():
print component.get('summa... | <p>I had this problem years ago; it was a real source of frustration. The solution is to do</p>
<pre><code>component.get('dtstart').dt
</code></pre>
<p>In general, when you're having a problem like this, try calling <code>dir()</code> on it.</p> | python|pandas|icalendar|google-calendar-api | 0 |
377,697 | 45,779,307 | Difference between tf.assign and assignment operator (=) | <p>I'm trying to understand the difference between tf.assign and the assignment operator (=). I have three sets of code.</p>
<p>First, using simple tf.assign</p>
<pre><code>import tensorflow as tf
with tf.Graph().as_default():
a = tf.Variable(1, name="a")
assign_op = tf.assign(a, tf.add(a,1))
with tf.Session() a... | <p>The main confusion here is that doing <code>a = a + 1</code> will reassign the Python variable <code>a</code> to the resulting tensor of the addition operation <code>a + 1</code>. <code>tf.assign</code>, on the other hand, is an operation for setting the value of a TensorFlow variable.</p>
<pre><code>a = tf.Variabl... | tensorflow | 5 |
377,698 | 45,986,970 | Is it possible for Tensorflow graph to run outside of session | <p>Could someone please explain the following situation:</p>
<p>I've created a simple convolutional neural network using Tensorflow. I'm using a class and I've created my graph in the constructor. I then train the network using a train method I've written. I'm also using queues and the feed-in mechanism. This is an ex... | <p>Just saying <code>conv_result = self.pred</code> won't do anything -- you need, indeed, to do <code>sess.run(self.pred)</code> to get it to execute. The errors you're getting are something else about your model. As they say, your <code>InputProducer</code> has an empty queue. With the information you've given it can... | python|tensorflow|convolution | 1 |
377,699 | 46,003,482 | pandas groupby count and proportion | <p>I'm trying to do something that I know must be basic pandas, but am racking my brain to figure it out. I want proportions and counts of each group to be available for an arbitrary level of group-bys:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [1, 0, 1, 0, 1, 0, 0, 0], 'B': ['A'] * 4 + ['B'] * 4})
... | <p>You can do produce these values in one expression like so:</p>
<pre><code>df.groupby(['A', 'B']).size().agg(
{'count': lambda x: x, 'prop':lambda x: x / x.sum(level=0)}
).unstack(level=0).reset_index()
# A B count prop
# 0 0 A 2.0 0.400000
# 1 0 B 3.0 0.600000
# 2 1 A 2.0 0.666667
# ... | python|pandas | 4 |
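The `x.sum(level=0)` spelling inside that `agg` was deprecated and later removed in pandas 2.0; a version in current idioms that yields the same numbers:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 0, 1, 0, 1, 0, 0, 0], 'B': ['A'] * 4 + ['B'] * 4})

counts = df.groupby(['A', 'B']).size()
prop = counts / counts.groupby(level=0).sum()  # share within each A group
out = pd.DataFrame({'count': counts, 'prop': prop}).reset_index()
print(out)
```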