Unnamed: 0 int64 [0, 378k] | id int64 [49.9k, 73.8M] | title stringlengths [15, 150] | question stringlengths [37, 64.2k] | answer stringlengths [37, 44.1k] | tags stringlengths [5, 106] | score int64 [-10, 5.87k] |
|---|---|---|---|---|---|---|
2,200
| 62,464,882
|
Is there a way to divide multi index dataframe with a singled indexed dataframe?
|
<p>I have a multi-index series that looks like:</p>
<pre><code> Value1 Value2
Month Group Type
02 A Blue 2 3
Red 5 4
B Blue 4 7
Red 8 12
03 A Blue 9 22
Red 44 5
B Blue 45 34
Red 22 14
</code></pre>
<p>I would like to divide this dataframe by this other one:</p>
<pre><code> Value
Month
02 2
03 10
</code></pre>
<p>The division should be based on the month index. The result should be like this:</p>
<pre><code> Value1 Value2
Month Group Type
02 A Blue 1 1.5
Red 2.5 2
B Blue 2 3.5
Red 4 6
03 A Blue 0.9 2.2
Red 4.4 0.5
B Blue 4.5 3.4
Red 2.2 1.4
</code></pre>
<p>I have tried <code>df.div(df2, level = 0)</code>, but I get a dataframe with NaNs.</p>
|
<p>You need to select the column <code>Value</code> so that you divide by a <code>Series</code>, and pass <code>axis=0</code> to align on the index. Dividing by the whole DataFrame <code>df2</code> makes pandas align on the column labels (<code>Value</code> vs <code>Value1</code>/<code>Value2</code>), which is why you got NaNs:</p>
<pre><code>df = df.div(df2['Value'], level=0, axis=0)
print (df)
Value1 Value2
Month Group Type
2 A Blue 1.0 1.5
Red 2.5 2.0
B Blue 2.0 3.5
Red 4.0 6.0
3 A Blue 0.9 2.2
Red 4.4 0.5
B Blue 4.5 3.4
Red 2.2 1.4
</code></pre>
|
python|pandas|dataframe|division|multi-index
| 2
|
2,201
| 62,356,994
|
How to split an array into unequal parts according to a condition in python?
|
<p>I am trying to divide an array of numbers into smaller chunks, starting a new chunk whenever the next element in the array is smaller than the current one. Basically, I have the following array:</p>
<p><code>a = [97, 122, 98, 111, 98, 111, 98, 101, 101, 103, 103, 104, 97, 107, 107, 108]</code></p>
<p>and I want to get sub arrays of:</p>
<pre><code>a1 = [97, 122]
a2 = [98, 111]
a3 = [98, 111]
a4 = [98, 101, 101, 103, 103, 104]
</code></pre>
<p>and so on....</p>
|
<p>You can iterate over the pairs <code>current, next</code>; when the next value is smaller, close the current chunk, otherwise append to it:</p>
<pre><code>a = [97, 122, 98, 111, 98, 111, 98, 101, 101, 103, 103, 104, 97, 107, 107, 108]
result = []
values = [a[0]]  # start the first chunk with the first element
for current_v, next_v in zip(a, a[1:]):
    if next_v < current_v:  # a drop in value starts a new chunk
        result.append(values)
        values = [next_v]
    else:
        values.append(next_v)
result.append(values)  # append the last open chunk
print(result)  # [[97, 122], [98, 111], [98, 111], [98, 101, 101, 103, 103, 104], [97, 107, 107, 108]]
</code></pre>
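<p>Since the question is also tagged numpy, an equivalent vectorized sketch splits at the positions where the sequence decreases:</p>
<pre><code>import numpy as np
a = np.array([97, 122, 98, 111, 98, 111, 98, 101, 101, 103, 103, 104, 97, 107, 107, 108])
breaks = np.where(np.diff(a) < 0)[0] + 1  # indices where the next element is smaller
chunks = np.split(a, breaks)
print(chunks)  # [array([ 97, 122]), array([ 98, 111]), array([ 98, 111]), ...]
</code></pre>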
|
python|arrays|numpy|split|conditional-statements
| 1
|
2,202
| 62,352,617
|
Parallelizing a Dask aggregation
|
<p>Building off of <a href="https://stackoverflow.com/questions/46080171/constructing-mode-and-corresponding-count-functions-using-custom-aggregation-fun">this post</a>, I implemented the custom mode formula, but have found issues with performance on this function. Essentially, when I enter into this aggregation, my cluster only uses one of my threads, which is not great for performance. I am doing calculations on over 150 attributes (mostly categorical data) across 16k rows, which I think I can split up into individual threads/processes and throw back together into a single dataframe later on. Note that this aggregation has to be on two columns so I might be getting worse performance for not being able to use a single column as an index. </p>
<p>Is there a way to incorporate dask futures or parallel processing into the aggregate calculation?</p>
<pre><code>import dask.dataframe as dd
from dask.distributed import Client
from pandas import DataFrame
def chunk(s):
    return s.value_counts()
def agg(s):
    s = s._selected_obj
    return s.groupby(level=list(range(s.index.nlevels))).sum()
def finalize(s):
    # s is a multi-index series of the form (group, value): count. First
    # manually group on the group part of the index. The lambda will receive a
    # sub-series with multi index. Next, drop the group part from the index.
    # Finally, determine the index with the maximum value, i.e., the mode.
    level = list(range(s.index.nlevels - 1))
    return (
        s.groupby(level=level)
        .apply(lambda s: s.reset_index(level=level, drop=True).argmax())
    )
def main() -> DataFrame:
    client = Client('scheduler:8786')
    ddf = dd.read_csv('/sample/data.csv')
    custom_mode = dd.Aggregation('custom mode', chunk, agg, finalize)
    result = ddf.groupby(['a','b']).agg(custom_mode).compute()
    return result
</code></pre>
<p>Side note, I am using Docker to spin up my scheduler and workers using the daskdev/dask (2.18.1) docker image. </p>
|
<p>In the end, I used futures to essentially parallelize the aggregation for each column. Since I had so many columns, passing each aggregation to its own worker thread saved me a bunch of time. Thanks to David for his comments as well as <a href="https://examples.dask.org/applications/embarrassingly-parallel.html" rel="nofollow noreferrer">the article on parallel workloads from the dask documentation</a>!</p>
<pre><code>import dask.dataframe as dd
import pandas as pd
from dask.distributed import Client
from pandas import DataFrame
def chunk(s):
    return s.value_counts()
def agg(s):
    s = s._selected_obj
    return s.groupby(level=list(range(s.index.nlevels))).sum()
def finalize(s):
    level = list(range(s.index.nlevels - 1))
    return (
        s.groupby(level=level)
        .apply(lambda s: s.reset_index(level=level, drop=True).idxmax())
    )
def delayed_mode(ddf, groupby, col, custom_agg):
    # computes the aggregation for a single column on a worker
    return ddf.groupby(groupby).agg({col: custom_agg}).compute()
def main() -> DataFrame:
    client = Client('scheduler:8786')
    ddf = dd.read_csv('/sample/data.csv')
    custom_mode = dd.Aggregation('custom mode', chunk, agg, finalize)
    futures = []
    for col in ddf.columns:  # one future per column
        future = client.submit(delayed_mode, ddf, ["a", "b"], col, custom_mode)
        futures.append(future)
    ddfs = client.gather(futures)
    result = pd.concat(ddfs, axis=1)
    return result
</code></pre>
|
python|pandas|dask|dask-distributed|dask-dataframe
| 1
|
2,203
| 62,341,905
|
Convert a matrix of zeroes and ones into a colored graph
|
<p>I'm trying to convert an array like this one into a two-colored image: blue where there are zeroes, white where there are ones:</p>
<p><img src="https://i.stack.imgur.com/UB0iR.png" alt="Array numbers"></p>
<p><img src="https://i.stack.imgur.com/fvzcs.png" alt="Array as image"></p>
<p>I have tried to do this with this code:</p>
<pre><code>import numpy as np
import matplotlib as plt
def plot_islands(matrix):
    array=np.matrix
    colors=np.where(array==1,plt.figure(figsize=(1,1), width=1, height=1, color='b'),plt.figure((1,1), width=1, height=1, color='b'))
    plt.show()
</code></pre>
<p>Python complains that matplotlib has no attribute <code>figure</code> or <code>color</code>, which is very strange.</p>
|
<p>Although this might not be ideal, it'll do the job:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap
import numpy as np
def plot_islands(matrix):
    cmap = LinearSegmentedColormap.from_list('my_cmap', ['darkblue', 'white'])
    plt.imshow(X=matrix, cmap=cmap)
    plt.show()
</code></pre>
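<p>A quick usage sketch with a hypothetical 0/1 matrix:</p>
<pre><code>matrix = np.random.randint(0, 2, size=(10, 10))  # placeholder data for illustration
plot_islands(matrix)
</code></pre>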
|
python|numpy|matplotlib
| 0
|
2,204
| 62,375,626
|
Pandas finding transitive relation from tuples A and B (two columns)
|
<p>Hello, what I would like is to show a hierarchy of likes. People from column 1 can like someone from column 2. Ideally there would be 4 columns A, B, C, D which show, for every person, whom they like, then whom that person likes, and so on; basically going from (a, b) tuples to (a, b), (b, c), (c, d).
I only know it must be recursive, but I have no clue how to check different columns in Pandas in a recursive manner. Multiple people can like someone, but not everyone has to like someone. But if that's the case, it can only happen over 3 people.</p>
<p>So, I have a dataframe like this:</p>
<pre><code>import pandas as pd
d = {'col1': ['Ben', 'Mike', 'Carla', 'Maggy', 'Josh', 'Kai', 'Maria', 'Sophie'], 'col2': ['Carla', 'Carla', 'Josh', 'Ben', 'Lena', 'Maggy', 'Mike', 'Chad']}
df = pd.DataFrame(data=d)
df
</code></pre>
<p>I would like an output like this:</p>
<pre><code>d = {'A': ['Ben', 'Mike', 'Carla', 'Maggy', 'Josh', 'Kai', 'Maria', 'Sophie'], 'B': ['Carla', 'Carla', 'Josh', 'Ben', 'Lena', 'Maggy', 'Mike', 'Chad'], 'C': ['Josh', 'Josh', 'Lena', 'NA', 'NA', 'Ben', 'Carla', 'NA'], 'D': ['Lena', 'Lena', 'NA', 'NA', 'NA', 'NA', 'Josh', 'NA']}
df = pd.DataFrame(data=d)
df
</code></pre>
<p><strong>I think the rules are like that:</strong></p>
<ol>
<li>Someone (column B) can be liked by someone (from column A), while that somebody (column B) doesn't like anyone (like Chad, who doesn't like anyone).</li>
<li>Someone can be liked by only one person (A -> B -> NA -> NA)</li>
<li>Someone can like somebody, that somebody likes someone else. (A -> B -> C -> NA)</li>
<li>Someone can like someone, who likes someone else. And that someone likes someone as well. (A -> B -> C-> D -> NA) </li>
</ol>
<p>How can I achieve this? Thank you </p>
|
<p>What you need is a couple of left-join (merge) operations. </p>
<p>Here's the code, broken into a couple of steps for clarity:</p>
<pre><code>step1 = pd.merge(df, df, left_on="col2", right_on="col1", how = "left")
step1 = step1[["col1_x", "col2_x", "col2_y"]]
step1.columns = ["first", "second", "third"]
step2 = pd.merge(step1, df, left_on="third", right_on= "col1", how = "left")
res = step2.drop("col1", axis=1).rename(columns={"col2": "fourth"})
print(res)
</code></pre>
<p>The result is: </p>
<pre><code> first second third fourth
0 Ben Carla Josh Lena
1 Mike Carla Josh Lena
2 Carla Josh Lena NaN
3 Maggy Ben Carla Josh
4 Josh Lena NaN NaN
5 Kai Maggy Ben Carla
6 Maria Mike Carla Josh
7 Sophie Chad NaN NaN
</code></pre>
|
python|pandas|algorithm|recursion|relation
| 0
|
2,205
| 51,164,460
|
TypeError: 'float' object is not iterable; occurs when calling class from another file
|
<p>This is my first time posting on stackoverflow so thanks for the help. I am relatively new to python and this is my first time working on a personal project involving coding. The ultimate point of the file is to create a class to plot motor curves based on inputs. My main file currently is:</p>
<pre><code>import motor
import plot
if __name__ == '__main__':
    try:
        x = motor.motor(6.5,0.275,0.0343,1047,6)
        print(x.Imax,x.Imin,x.Tmax,x.Wmax,x.V,x.Kv(),x.Kt())
        a = map(1047,0.0343,x.kt,0.275,6)
        #a.plot()
    except KeyboardInterrupt:
        print('Quitting')
</code></pre>
<p>motor.py is working as expected. When the code gets to <code>a = map(1047,0.343,x.kt,0.275,6)</code> I get the error: TypeError: 'float' object is not iterable. <code>map</code> is a class that I created in plot.py. The current code is as follows:</p>
<pre><code>class map:
    def __init__(self,Wmax,Tmax,kt,Imin,V):
        #Defines Inputs
        self.Wmax = Wmax
        self.Tmax = Tmax
        self.kt = kt
        self.Imin = Imin
        self.V = V
        #Defines motor operating range
        self.Wrange = np.arange(0,int(self.Wmax),1)
        #Defines Torque as a function of angular velocity
        #self.T_function = self.Tmax*(1-self.Wrange/self.Wmax)
        #Defines mechanical power as a function of angular velocity
        #self.Pmech_function = self.Tmax*(1-self.Wrange/self.Wmax)\
        #*self.Wrange
        #Defines electrical power as a function of Torque
        #self.Pelec_function =((self.Tmax*(1-self.Wrange/self.Wmax\
        #))/self.kt+self.Imin)*self.V
        #Defines efficiency as a function of pmech and pelec
        #self.eff_function = (self.Tmax*(1-self.Wrange/self.Wmax)*\
        #self.Wrange)/(((self.Tmax*(1-self.Wrange/self.Wmax))/self\
        #.kt+self.Imin)*self.V)
        #self.Wlabel = 'Angular Velocity (rad/sec)'
        #self.Tlabel = 'Torque (N-m)'
        #self.Pmech_label = 'Mechanical Power (Watts)'
        #self.Pelec_label = 'Electrical Power (Watts)'
        #self.eff_label ='Efficiency (%)'
    """ def plot(self):
        #Creates main window
        fig = plt.figure()
        #Creates four y-axis
        ax = fig.add_subplot(111)
        ax2 = ax.twinx()
        ax3 = ax.twinx()
        ax4 = ax.twinx()
        #Plots the above functions against Wrange
        ax.plot(self.Wrange,self.T_function,'-',color='black')
        ax1.plot(self.Wrange,self.Pmech_function,'-',color='red')
        ax2.plot(self.Wrange,self.Pelec_function,'-',color='blue')
        ax3.plot(self.Wrange,self.eff_function,'-',color='green')
        #Defines labels and grid
        ax4.grid(True)
        ax.set_xlabel(self.Wlabel)
        ax.set_ylabel(self.Tlabel)
        ax1.set_ylabel(self.Pmech_label)
        ax2.set_ylabel(self.Pelec_label)
        ax3.set_ylabel(self.eff_label)
        plt.show()
    """
test = map(1047,0.343,0.00551,0.275,6)
print(test.Wrange)
</code></pre>
<p>When I run plot.py as its own file, the two test lines at the end function as intended, with no errors. How can I resolve this error? Is there any advice for building a program like this?</p>
<p>Edit: I wasn't aware that map was a function from the standard library. I will be looking for this in the future. I will also be reducing the extra code in future posts. Thank you.</p>
|
<p>Don't call your class "map". That's a function name in the standard library, so things are going to get confused. Your call <code>a = map(1047,0.343,x.kt,0.275,6)</code> is getting the standard library one, which is why the error message is weird and confusing. If you rename the class, things will make more sense.</p>
<p>By convention, your class names should start with an uppercase letter (partially to avoid this kind of confusion).</p>
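<p>A minimal sketch of the rename (class body trimmed to the inputs for brevity):</p>
<pre><code>class MotorMap:  # was "map"; renaming stops it from shadowing the builtin
    def __init__(self, Wmax, Tmax, kt, Imin, V):
        self.Wmax = Wmax
        self.Tmax = Tmax
        self.kt = kt
        self.Imin = Imin
        self.V = V

a = MotorMap(1047, 0.343, 0.00551, 0.275, 6)  # calls the class, not the builtin map()
</code></pre>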
|
python|numpy
| 5
|
2,206
| 48,284,884
|
Improve accuracy with Tensorflow Object detection pretrained model
|
<p>I am working on building an object detection model which I would like to create with 22 new classes (most of them are not in the COCO or PETS datasets).
What I've already done is:</p>
<ul>
<li><p>Prepared images with multiple labels using LabelIMG. </p></li>
<li><p>Decreased image size by half for images that are bigger than 500k</p></li>
<li><p>Converted the XML files to a CSV file</p></li>
<li><p>Converted the CSV and images to TFRecords</p></li>
<li><p>Using the Tensorflow sample config files I've trained with several pretrained checkpoints. </p></li>
</ul>
<p><strong>Results</strong>: SSD_Mobilenet and SSD_Inception resulted in no classes found (loss ~10.0), while Faster RCNN Inception did succeed in detecting some of the objects (loss ~0.7).</p>
<p>My questions are:</p>
<ol>
<li>What is the difference between train.py from <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">Object detection</a> (which I used above), retrain.py from <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py" rel="nofollow noreferrer">image_retraining</a>, and train_image_classifier.py from <a href="https://github.com/tensorflow/models/tree/master/research/slim" rel="nofollow noreferrer">Slim</a>?</li>
<li>Which is better for my task? Or should I do it in a different way?</li>
<li>While running the train.py on FRCNN inception I found that the loss was around 0.7 and not going lower even after 100k steps. Is there any goal in terms of loss to achieve? </li>
<li>How do you suggest to change the config file to improve this?</li>
<li>I found other models, for instance Inception V4, which don't have sample config files in <a href="https://github.com/tensorflow/models/tree/master/research/slim" rel="nofollow noreferrer">TF slim</a>. Should I try them, and if so, how can I use them?</li>
</ol>
<p>I am pretty new in this field and I need some support in understanding the terms and actions. </p>
<p>BTW: I am using a GTX 1060 (GPU) for training, but eval does not run in parallel, so I can't get the mAP for validation. I tried to force eval onto the CPU but with no success.</p>
<p>Thanks.</p>
|
<p>1) What is the difference between train.py from Object detection, which I used in the above, to retrain.py from image_retraining to train_image_classifier.py from Slim</p>
<p>Ans: As far as I know, none, because train.py imports trainer.py, which imports slim.learning.train (the same function used in train_image_classifier.py) to train.</p>
<p>2) Which is better for my task? Or should I do it in a different way?</p>
<p>Ans: The answer above covers this question too.</p>
<p>3) While running the train.py on FRCNN inception I found that the loss was around 0.7 and not going lower even after 100k steps. Is there any goal in terms of loss to achieve? </p>
<p>Ans: If you use tensorboard to visualize your results, you will find that when your classification loss graph is not changing much any more (has converged), your model is trained. Regarding the loss of 0.7: that is high after training for so many steps; check your pipeline config file parameters.</p>
<p>4) How do you suggest changing the config file to improve this?</p>
<p>Ans: The learning rate value can be a good place to start.</p>
<p>5) I found other models, for instance Inception V4, which don't have sample config files in TF slim. Should I try them, and if so, how can I use them?</p>
<p>Ans: Currently I don't have an answer for this, but I will get back to you.</p>
|
python|object|tensorflow|detection|object-detection
| 0
|
2,207
| 48,373,687
|
Concat column values based on condition
|
<p>This code : </p>
<pre><code>import numpy as np
import pandas as pd
df = pd.DataFrame(['a1', 'a2', 'stop', 'a4', 'a4', 'a5', 'stop', 'a3'],
columns=['c'])
</code></pre>
<p>renders: </p>
<pre><code> c
0 a1
1 a2
2 stop
3 a4
4 a4
5 a5
6 stop
7 a3
</code></pre>
<p>I'm attempting to produce the following dataframe, where values in the column are concatenated until a 'stop' value is encountered:</p>
<pre><code>columns = ['c1' , 'c2']
data = np.array([['a1, a2','stop'] , ['a4, a4, a5','stop']])
df = pd.DataFrame(data, columns=columns)
df
c1 c2
0 a1, a2 stop
1 a4, a4, a5 stop
</code></pre>
<p>Is this a valid approach: filter the rows where the column value is 'stop':</p>
<pre><code>df[df['c'] == 'stop']
</code></pre>
<p>then access the previous rows?</p>
|
<p>First, create a boolean mask by testing the equality of <code>c</code> to "stop":</p>
<pre><code>>>> df = pd.DataFrame(['a1', 'a2', 'stop', 'a3', 'a4', 'a5', 'stop', 'a6'],
columns=['c'])
>>> mask = df['c'].eq('stop')
</code></pre>
<p>You also specified you want to ignore values after the final stop. Truncate both series with:</p>
<pre><code>>>> stop = mask[::-1].idxmax()
>>> mask = mask[:stop]
>>> c = df['c'][:stop].copy()
</code></pre>
<p>Now groupby: </p>
<pre><code>>>> c.groupby(mask.cumsum()).apply(lambda s: s[s!='stop'].tolist())
c
0 [a1, a2]
1    [a3, a4, a5]
</code></pre>
<p>With a cumulative sum, <code>True</code> maps to 1 and <code>False</code> maps to 0. This serves as the grouping.</p>
<p>A footnote - this logic should work regardless of whether the final value in the Series ends in a <code>stop</code> or not.</p>
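<p>If you want the exact two-column layout from the question, one more step joins each list and adds the literal "stop" column (a sketch building on the grouped result above):</p>
<pre><code>>>> grouped = c.groupby(mask.cumsum()).apply(lambda s: s[s != 'stop'].tolist())
>>> pd.DataFrame({'c1': grouped.str.join(', '), 'c2': 'stop'})
           c1    c2
c
0      a1, a2  stop
1  a3, a4, a5  stop
</code></pre>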
|
python|pandas
| 5
|
2,208
| 48,132,807
|
Why does tensor flow return NaN when running variables after training?
|
<p>I can't really understand why this is not working; basically I'm trying to retrieve values for m and q just to print them, but I always get [nan, nan].</p>
<pre><code>import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
m = tf.Variable(tf.random_uniform(shape=()), dtype=tf.float32)
q = tf.Variable(tf.random_uniform(shape=()), dtype=tf.float32)
X = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
linear_model = m * X + q
cost = tf.reduce_sum(tf.square(linear_model - y))
optimizer = tf.train.GradientDescentOptimizer(0.01)
gdescent = optimizer.minimize(cost)
train, test = [pd.read_csv(file) for file in ["train.csv", "test.csv"]]
if False:  # Scatter training points
    plt.scatter(train['x'], train['y'], s=1)
    plt.show()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, m, q = sess.run([gdescent, m, q], feed_dict={X: train['x'].values, y: train['y'].values})
    print(m, q)
</code></pre>
<p>train.csv and test.csv are both files with a header line and two columns of values, x and y</p>
<p>First 10 lines of train.csv</p>
<pre><code>x,y
24,21.54945196
50,47.46446305
15,17.21865634
38,36.58639803
87,87.28898389
36,32.46387493
12,10.78089683
81,80.7633986
25,24.61215147
</code></pre>
|
<p>Solved, the error was that train.csv was missing the y value at line 215</p>
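<p>A quick way to catch this kind of data problem before training is to check the CSV for missing values (a sketch):</p>
<pre><code>import pandas as pd
train = pd.read_csv("train.csv")
print(train.isnull().sum())  # count of missing values per column
train = train.dropna()       # drop incomplete rows before feeding the model
</code></pre>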
|
python|tensorflow|machine-learning|deep-learning
| 0
|
2,209
| 48,820,939
|
How to append all the arrays inside a list into a single array combined by the depth in numpy
|
<p>I have a list which contains a variable number of [n,1,2] numpy arrays. I need a way of combining all those arrays into one [n + however many, 1, 2] array.</p>
<p>I have tried to create a loop with an empty array and then using dstack to combine them. But I have to 1) predefine the size of the array in advance, which won't do as I do not know it, and 2) the array has values that I must overwrite, so dstack doesn't work too well. Is there a more elegant and easier solution?</p>
<p>I have to use an array of this particular setup as I must pass it on to a method that I did not write that requires it.</p>
|
<p>Use <code>np.concatenate</code>.</p>
<pre><code>>>> arrays = [np.zeros((3, 1, 2)) for _ in range(3)]
>>> np.concatenate(arrays, axis=0).shape
(9, 1, 2)
</code></pre>
|
python|numpy
| 1
|
2,210
| 70,840,604
|
DataFrame (pandas) plot opens window but there is no plot
|
<p>I am unable to produce any charts in Python (matplotlib 3.5.1) with the pandas DataFrame plot() method. A window opens and the axes return value is <AxesSubplot:>, as opposed to an object that prints as something like <matplotlib.axes._subplots.AxesSubplot at 0x7f3958bcf9d0>, which is what I usually see when the plot works.</p>
<p>The backend is QtAgg, and as far as I can tell from poking around forum pages that report a similar problem, this should be all right. This is also not a problem with needing to run matplotlib.pyplot.ion(), as I do see a window open, but it is black with no plot in it.</p>
<p>Any advice would be useful, thanks!</p>
|
<p>I have resolved the (mis)behaviour without really being sure that I have resolved the underlying issue responsible for it.</p>
<p>As I looked around for a way out, I found this discussion (<a href="https://stackoverflow.com/questions/3285193/how-to-change-backends-in-matplotlib-python">How to change backends in matplotlib / Python</a>), which stressed the importance of getting the backend setting right. Wondering if that would make a difference, I changed the backend. The QtAgg backend loaded by default from the matplotlibrc file, so, following the instructions in the post on how to permanently change the backend, I modified the file so that the backend is now Qt5Agg. The data now plots.</p>
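<p>For reference, the backend can also be forced per script instead of via matplotlibrc (the call must come before pyplot is imported):</p>
<pre><code>import matplotlib
matplotlib.use("Qt5Agg")
import matplotlib.pyplot as plt
</code></pre>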
|
pandas|dataframe|matplotlib|plot
| 0
|
2,211
| 70,871,978
|
Pandas concat multiple sheets appending new column for each sheet
|
<p>I have an Excel workbook with 36 sheets. Each sheet has the same columns - there are only 5 columns that I actually need (below), but there are a bunch more. Each sheet follows the naming convention YYYY-MM-DD MRR.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Subscription ID</th>
<th>Activated At</th>
<th>Activated For (days)</th>
<th>Current Status</th>
<th>MRR</th>
</tr>
</thead>
</table>
</div>
<p>What I am trying to do is create a new Excel sheet that has
<code>["Subscription ID", "Activated At", "Activated For (days)", "Current Status", "Pasue Start Date", "Pause End Date", "Pause End Status"]</code>
and then 36 columns for the MRR of each month for the last 3 years.</p>
<p>This is my first time working with Pandas, so I am running into some issues as to how to do this. Any advice would be greatly appreciated. (Note that I am still having trouble just concatenating the sheets together with only the columns I need.)</p>
<p>The Subscription ID is going to be the Unique Key. An example row of the final output will look like this.</p>
<p>This is an example of one row that is paused - I am going to calculate the pause date (the Activated At date plus the Activated For number of days) and then enter it into the cell.
The second row is an active subscription.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Subscription ID</th>
<th>Activated At</th>
<th>Activated For (days)</th>
<th>Current Status</th>
<th>Pause Start Date</th>
<th>Pause End Date</th>
<th>Pause End Status</th>
<th>Jan 2019 MRR</th>
<th>...</th>
<th>Jan 2022 MRR</th>
</tr>
</thead>
<tbody>
<tr>
<td>0001</td>
<td>2019-01-01</td>
<td>899</td>
<td>Paused</td>
<td>2021-07-08</td>
<td>NaN</td>
<td>Paused</td>
<td>$75</td>
<td>...</td>
<td>$10</td>
</tr>
<tr>
<td>0002</td>
<td>2019-01-02</td>
<td>999</td>
<td>Active</td>
<td>NaN</td>
<td>NaN</td>
<td>NaN</td>
<td>$75</td>
<td>...</td>
<td>$75</td>
</tr>
</tbody>
</table>
</div>
<pre><code>import pandas as pd
excel = 'Total_MRR_ALL.xlsx'
sheets_dict = pd.read_excel(excel, sheet_name=None, usecols="B,E,F,G,H")
column_names = ["Subscription ID", "Activated At", "Activated For (days)", "Current Status", "Pasue Start Date", "Pause End Date", "Pause End Status"]
master = pd.DataFrame(columns=column_names).set_index("Subscription ID")
for sheetname, sheet in sheets_dict.items():
    master[sheetname] = ""
    x = 0
    if(sheetname != "Master"):
        for row in sheet.itertuples(index=False):
            if( x<1 ):
                new_row_dictionary = {
                    "Subscription ID" : row[0],
                    "Activated At": row[1],
                    "Activated For (days)": row[2],
                    "Current Status" : row[3],
                    "Pasue Start Date": "",
                    "Pause End Date": "",
                    "Pause End Status": "",
                    sheetname: row[4]
                }
                print(row[0], " ",row[1], " ", row[2], " ", row[3], " " , row[4] )
                master.loc[len(master)-1] = new_row_dictionary
                x+=1
master.to_csv("output.csv")
# B E F G H
# Subscription ID Activated At Active For (days) Current Status MRR
# master.to_csv("output.csv")
# B E F G # # # H - H+36
# Subscription ID Activated At Active For (days) Current Status Pause Start Date Pause End Date Pause End Status Mothly MRR
</code></pre>
<p><strong>Update</strong>
With some help from Pankaj, here is my updated code (much easier).</p>
<pre><code>import pandas as pd
excel = 'Total_MRR_ALL_v2.xlsx'
df = (pd.concat(pd.read_excel(excel, sheet_name=None, usecols="B,E,F,G,H", index_col=2), axis=0 ).reset_index(level=1, drop=False).rename_axis('Month').reset_index())
df.to_csv("output.csv")
</code></pre>
<p>The issue I am facing now is that all of the Sheet names are in a single column as such:
<a href="https://i.stack.imgur.com/fMLzc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fMLzc.png" alt="Excel Sheets as single column" /></a></p>
<p>What I need is for the Month column to convert each unique month into its own column where the Subscription ID is the unique identifier and then the months are populated with the MRR for each respective month:
<a href="https://i.stack.imgur.com/zj7tx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zj7tx.png" alt="Monthly MRR displayed as individual columns" /></a></p>
|
<p>If I understand your question correctly, you don't need the for loop at all. Excel sheet data can be combined easily using the concat function. Additional columns can be added by just initialising them; then you can use logic to populate values for those additional columns.</p>
<p>Consider this code:</p>
<pre><code>import pandas as pd
excel = 'Total_MRR_ALL.xlsx'
sheets_dict = pd.read_excel(excel, sheet_name=None, usecols="A,B,C,D,E,F,G")
column_names = ["Subscription ID", "Activated At", "Activated For (days)", "Current Status", "Pause Start Date", "Pause End Date", "Pause End Status"]
# combine all sheets to a single dataframe
master = pd.concat(sheets_dict.values())
master = master.set_index("Subscription ID")
# new columns added with initial values. Populate calculated values
master['Jan 2019 MRR'] = 0
master['Feb 2019 MRR'] = 'a'
master['Mar 2019 MRR'] = ''
# prints the output to csv
master.to_csv("output.csv")
</code></pre>
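<p>For the follow-up in the update, where each month should become its own MRR column keyed by Subscription ID, a pivot is one way to do it (a sketch assuming the combined frame has "Subscription ID", "Month" and "MRR" columns, as produced by the updated code):</p>
<pre><code>wide = df.pivot_table(index="Subscription ID", columns="Month",
                      values="MRR", aggfunc="first")
wide.to_csv("output.csv")
</code></pre>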
|
python|excel|pandas
| 1
|
2,212
| 71,082,546
|
How to select rows with at least one categorical value in pandas DataFrame
|
<p>How can I retrieve the indexes of the rows, in a pandas DataFrame (df), that have "object" type (dtype=='O') in at least one column of the DataFrame?</p>
<p>What I would like to do in practice is to create a new dataframe (numeric_df) with only the numeric values of a source dataframe (df).</p>
|
<p>Use:</p>
<pre><code>df = pd.DataFrame({'object col':[1,'a',25]})
df[df['object col'].apply(lambda x: type(x))==int]
</code></pre>
<p>result:</p>
<p><a href="https://i.stack.imgur.com/JrV8A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JrV8A.png" alt="enter image description here" /></a></p>
<p>Based on the comment:</p>
<pre><code>df = pd.DataFrame({'object col':[1,'a',25], 'object col2':['a','a',25]})
def check(row):
    # keep the row only if none of its values is a string
    for col in row:
        if type(col)==str:
            return False
    return True
df[df.apply(check, axis = 1)]
</code></pre>
<p>result:</p>
<p><a href="https://i.stack.imgur.com/Wsyc5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wsyc5.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|google-colaboratory
| 0
|
2,213
| 70,771,650
|
df Objects to float
|
<p>I have a problem with this df:</p>
<pre><code>
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 44640 entries, 0 to 44639
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 NOx_Min_[ppm] 44640 non-null object
1 NOx_Min_[mg/m3N] 44640 non-null object
2 NOx_corr_Min_[mg/m3N] 44640 non-null object
3 NOX 44640 non-null object
dtypes: object(4)
memory usage: 1.4+ MB
NOx_Min_[ppm] NOx_Min_[mg/m3N] NOx_corr_Min_[mg/m3N] NOX
0 0 0 0 MMC
1 0 0 0 MMC
2 0 0 0 MMC
3 0 0 0 MMC
4 0 0 0 MMC
</code></pre>
<p>and I am trying to convert the objects to numbers with <strong>gd['NOX']=pd.to_numeric(gd['NOX'])</strong>, then process them through my neural network, but it generates the following error:</p>
<pre><code>
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
pandas/_libs/lib.pyx in pandas._libs.lib.maybe_convert_numeric()
ValueError: Unable to parse string "MMC"
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
<ipython-input-11-78184c2df2b2> in <module>()
----> 1 gd['NOX']=pd.to_numeric(gd['NOX'])
/usr/local/lib/python3.7/dist-packages/pandas/core/tools/numeric.py in to_numeric(arg, errors, downcast)
151 try:
152 values = lib.maybe_convert_numeric(
--> 153 values, set(), coerce_numeric=coerce_numeric
154 )
155 except (ValueError, TypeError):
pandas/_libs/lib.pyx in pandas._libs.lib.maybe_convert_numeric()
ValueError: Unable to parse string "MMC" at position 0
</code></pre>
<p>Please, I need your help.</p>
|
<p>You can try:</p>
<pre><code>gd['NOX'].apply(pd.to_numeric, args=('coerce',))
</code></pre>
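<p>Equivalently, the vectorized call is simpler and faster; non-numeric strings such as "MMC" become NaN:</p>
<pre><code>gd['NOX'] = pd.to_numeric(gd['NOX'], errors='coerce')
</code></pre>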
|
python|pandas|dataframe|type-conversion
| 0
|
2,214
| 51,714,124
|
Feeding example to tf predictor.from_saved_model() for estimator trained with tf hub module
|
<p>I am trying to export a model for text classification with <a href="https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub" rel="nofollow noreferrer">tf hub modules</a>, and then infer a prediction from it for a single string example using <a href="https://www.tensorflow.org/api_docs/python/tf/contrib/predictor/from_saved_model" rel="nofollow noreferrer">predictor.from_saved_model()</a>. I saw <a href="https://github.com/tensorflow/tensorflow/issues/13477#event-1276931071" rel="nofollow noreferrer">some examples</a> of similar ideas, but still couldn't make it work when using tf hub modules to build the features. Here is what I do:</p>
<pre><code> train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df['label_ids'], num_epochs= None, shuffle=True)
# Prediction on the whole training set.
predict_train_input_fn = tf.estimator.inputs.pandas_input_fn(
train_df, train_df['label_ids'], shuffle=False)
embedded_text_feature_column = hub.text_embedding_column(
key='sentence',
module_spec='https://tfhub.dev/google/nnlm-de-dim128/1')
#Estimator
estimator = tf.estimator.DNNClassifier(
hidden_units=[500, 100],
feature_columns=[embedded_text_feature_column],
n_classes=num_of_class,
optimizer=tf.train.AdagradOptimizer(learning_rate=0.003) )
# Training
estimator.train(input_fn=train_input_fn, steps=1000)
#prediction on training set
train_eval_result = estimator.evaluate(input_fn=predict_train_input_fn)
print('Training set accuracy: {accuracy}'.format(**train_eval_result))
feature_spec = tf.feature_column.make_parse_example_spec([embedded_text_feature_column])
serving_input_receiver_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(feature_spec)
export_dir_base = self.cfg['model_path']
servable_model_path = estimator.export_savedmodel(export_dir_base, serving_input_receiver_fn)
# Example message for inference
message = "Was ist denn los"
saved_model_predictor = predictor.from_saved_model(export_dir=servable_model_path)
content_tf_list = tf.train.BytesList(value=[str.encode(message)])
example = tf.train.Example(
features=tf.train.Features(
feature={
'sentence': tf.train.Feature(
bytes_list=content_tf_list
)
}
)
)
with tf.python_io.TFRecordWriter('the_message.tfrecords') as writer:
writer.write(example.SerializeToString())
reader = tf.TFRecordReader()
data_path = 'the_message.tfrecords'
filename_queue = tf.train.string_input_producer([data_path], num_epochs=1)
_, serialized_example = reader.read(filename_queue)
output_dict = saved_model_predictor({'inputs': [serialized_example]})
</code></pre>
<p>And the output:</p>
<pre><code>Traceback (most recent call last):
File "/Users/dimitrs/component-pythia/src/pythia.py", line 321, in _train
model = algo.generate_model(samples, generation_id)
File "/Users/dimitrs/component-pythia/src/algorithm_layer/algorithm.py", line 56, in generate_model
model = self._process_training(samples, generation)
File "/Users/dimitrs/component-pythia/src/algorithm_layer/tf_hub_classifier.py", line 91, in _process_training
output_dict = saved_model_predictor({'inputs': [serialized_example]})
File "/Users/dimitrs/anaconda3/envs/pythia/lib/python3.6/site-packages/tensorflow/contrib/predictor/predictor.py", line 77, in __call__
return self._session.run(fetches=self.fetch_tensors, feed_dict=feed_dict)
File "/Users/dimitrs/anaconda3/envs/pythia/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 900, in run
run_metadata_ptr)
File "/Users/dimitrs/anaconda3/envs/pythia/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1135, in _run
feed_dict_tensor, options, run_metadata)
File "/Users/dimitrs/anaconda3/envs/pythia/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1316, in _do_run
run_metadata)
File "/Users/dimitrs/anaconda3/envs/pythia/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1335, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InternalError: Unable to get element as bytes.
</code></pre>
<p>Isn't <code>serialized_example</code> the right input that is suggested by <code>serving_input_receiver_fn</code> ?</p>
|
<p>So, all I needed was <code>serialized_example = example.SerializeToString()</code>.
Writing the example to a file requires starting a session before reading it back; simply serializing it is enough:</p>
<pre><code> # Example message for inference
message = "Was ist denn los"
saved_model_predictor = predictor.from_saved_model(export_dir=servable_model_path)
content_tf_list = tf.train.BytesList(value=[message.encode('utf-8')])
sentence = tf.train.Feature(bytes_list=content_tf_list)
sentence_dict = {'sentence': sentence}
features = tf.train.Features(feature=sentence_dict)
example = tf.train.Example(features=features)
serialized_example = example.SerializeToString()
output_dict = saved_model_predictor({'inputs': [serialized_example]})
</code></pre>
|
tensorflow|tensorflow-hub
| 1
|
2,215
| 41,797,136
|
How to Concatenate "Jagged" Tensors
|
<p>I am trying to write an implementation of <a href="http://www.aclweb.org/anthology/P14-1062" rel="nofollow noreferrer">this</a> paper in TensorFlow and I have come across a bit of a snag. In my pooling layer, I have to concatenate everything together. This is the code I use:</p>
<pre><code> pooled_outputs = []
for i, filter_size in enumerate(filter_sizes):
with tf.name_scope("conv-maxpool-%s" % filter_size):
# Conv layer
filter_shape = [filter_size, embedding_size, 1, num_filters]
# W is the filter matrix
W = tf.Variable(tf.truncated_normal(filter_shape, stddev=0.1), name="W")
b = tf.Variable(tf.constant(0.1, shape=[num_filters]), name="b")
conv = tf.nn.conv2d(
self.embedded_chars_expanded,
W,
strides=[1, 1, 1, 1],
padding="VALID",
name="conv"
)
# Apply nonlinearity
h = tf.nn.relu(tf.nn.bias_add(conv, b), name="relu")
# Max-pooling layer over the outputs
pooled = tf.nn.max_pool(
h,
ksize=[1, sequence_lengths[i] - filter_size + 1, 1, 1],
strides=[1, 1, 1, 1],
padding="VALID",
name="pool"
)
pooled_outputs.append(pooled)
# Combine all of the pooled features
num_filters_total = num_filters * len(filter_sizes)
print(pooled_outputs)
pooled_outputs = [tf.reshape(out, ["?", 94, 1, self.max_length]) for out in pooled_outputs] # The problem line
self.h_pool = tf.concat(3, pooled_outputs)
</code></pre>
<p>When I run this code, it prints out this for <code>pooled_outputs</code>:</p>
<pre><code>[<tf.Tensor 'conv-maxpool-3/pool:0' shape=(?, 94, 1, 128) dtype=float32>, <tf.Tensor 'conv-maxpool-4/pool:0' shape=(?, 51, 1, 128) dtype=float32>, <tf.Tensor 'conv-maxpool-5/pool:0' shape=(?, 237, 1, 128) dtype=float32>]
</code></pre>
<p>I originally tried this code without the <code>pooled_outputs = [tf.reshape(out, ["?", 94, 1, self.max_length]) for out in pooled_outputs]</code> line in there and I got this error:</p>
<pre><code>ValueError: Dimension 1 in both shapes must be equal, but are 51 and 237
</code></pre>
<p>When I added in the reshape line, I got this error:</p>
<pre><code>TypeError: Expected binary or unicode string, got 94
</code></pre>
<p>The second error I know is because I passed a "?" for the new size, and the first error I think is because the tensors aren't the same size. <strong>How could I properly pad these Tensors so I can concatenate them with no problems?</strong></p>
|
<p>You can pass <code>-1</code> as one of the components of the shape to the <code>tf.reshape</code> method; it will be automatically inferred from the shape of your tensor so that the total size stays the same.</p>
<p>So, try to change the problem line to</p>
<pre><code>pooled_outputs = [tf.reshape(out, [-1, 94, 1, self.max_length]) for out in pooled_outputs]
</code></pre>
<p>See the <a href="https://www.tensorflow.org/api_docs/python/array_ops/shapes_and_shaping#reshape" rel="nofollow noreferrer">documentation</a> for details</p>
|
python|python-3.x|machine-learning|tensorflow|conv-neural-network
| 1
|
2,216
| 64,241,940
|
How can I get the percentage of missing in a column using agg function?
|
<p>I'm working with the dataset database_versao_LatLongDecimal_fonteANM_23_01_2019.csv - you can find it here <a href="https://www.kaggle.com/edumagalhaes/brazilian-dams-and-brumadinho-households" rel="nofollow noreferrer">https://www.kaggle.com/edumagalhaes/brazilian-dams-and-brumadinho-households</a> - and I was hoping to find the percentage of missing values in the column "CATEGORIA_DE_RISCO", grouped by UF.</p>
<p>This is what I've tried:</p>
<pre><code>summary = (
base_1.groupby(["UF"], sort=False)
.agg(
media=("Dano_Potencial__Alta", "count"),
minimo=("Dano_Potencial__Alta", "mean"),
Missing_Risco=(
"CATEGORIA_DE_RISCO",
lambda x: x.CATEGORIA_DE_RISCO.isnull().sum() / len(x),
)
)
.reset_index()
.round(1)
)
summary
</code></pre>
<p>But I keep getting the error:</p>
<pre><code>AttributeError: 'Series' object has no attribute 'CATEGORIA_DE_RISCO'
</code></pre>
<p>I understand the error, but I'm not sure why it's happening or how to fix it. I was sure I would find an answer here, but I only found how to get the missing values of a column and how to get the percentage of some value, which is weird, because I used similar logic to the answer of the post <a href="https://stackoverflow.com/questions/32566866/aggregate-groups-in-python-pandas-and-spit-out-percentage-from-a-certain-count">Aggregate groups in Python Pandas and spit out percentage from a certain count</a>.</p>
|
<p>Remove the column name inside the lambda (the groupby selection already passes each group as a <code>Series</code>) and, instead of dividing <code>sum</code> by the length, use <code>mean</code>:</p>
<pre><code>summary = (
base_1.groupby(["UF"], sort=False)
.agg(
media=("Dano_Potencial__Alta", "count"),
minimo=("Dano_Potencial__Alta", "mean"),
Missing_Risco=(
"CATEGORIA_DE_RISCO",
lambda x: x.isnull().mean(),
)
)
.reset_index()
.round(1)
)
</code></pre>
<p>Another idea with helper column:</p>
<pre><code>summary = (
base_1.assign(null_col = base_1['CATEGORIA_DE_RISCO'].isnull())
.groupby(["UF"], sort=False)
.agg(
media=("Dano_Potencial__Alta", "count"),
minimo=("Dano_Potencial__Alta", "mean"),
Missing_Risco=("null_col",'mean')
)
.reset_index()
.round(1)
)
</code></pre>
|
python|pandas|dataframe|aggregate-functions
| 2
|
2,217
| 64,294,336
|
How do I save bs4 values in xls or csv?
|
<p>So I have extracted data from a website and I need to save it in Excel. I'm new to Python and can't figure out how to go about it.</p>
<p>Here is the data that I've extracted; its type is bs4.BeautifulSoup:</p>
<pre><code>{"lookbook":{"img":[],"count":0},"sizeInfoDes":{"sizeInfo":[{"size":"XS","Shoulder ":" 53 cm","Bust ":" 95 cm","Length ":" 51.5 cm","Sleeve Length ":" 49.5 cm"},{"size":"S","Shoulder ":" 55 cm","Bust ":" 99 cm","Length ":" 52.5 cm","Sleeve Length ":" 50 cm"},{"size":"M","Shoulder ":" 57 cm","Bust ":" 103 cm","Length ":" 53.5 cm","Sleeve Length ":" 50.5 cm"},{"size":"L","Shoulder ":" 60 cm","Bust ":" 109 cm","Length ":" 55 cm","Sleeve Length ":" 51 cm"},{"size":"XL","Shoulder ":" 63 cm","Bust ":" 115 cm","Length ":" 56.5 cm","Sleeve Length ":" 51.5 cm"}],"sizeInfoInch":[{"size":"XS","Shoulder ":" 20.9 inch","Bust ":" 37.4 inch","Length ":" 20.3 inch","Sleeve Length ":" 19.5 inch"},{"size":"S","Shoulder ":" 21.7 inch","Bust ":" 39 inch","Length ":" 20.7 inch","Sleeve Length ":" 19.7 inch"},{"size":"M","Shoulder ":" 22.4 inch","Bust ":" 40.6 inch","Length ":" 21.1 inch","Sleeve Length ":" 19.9 inch"},{"size":"L","Shoulder ":" 23.6 inch","Bust ":" 42.9 inch","Length ":" 21.7 inch","Sleeve Length ":" 20.1 inch"},{"size":"XL","Shoulder ":" 24.8 inch","Bust ":" 45.3 inch","Length ":" 22.2 inch","Sleeve Length ":" 20.3 inch"}],"sizeUnit":0,"allcmFlag":1,"sizeInfoAttribute":[],"basicAttribute":{"image_url":"","attribute_info":[],"base_code_info":[],"base_code_info_inch":[]}},"model":{"attr":{"Height":"175 cm","Bust":"85 cm","Waist":"61 cm","Hip":"93 cm"},"size":"S","name":"Andy","attrcm":{"Height":"175 cm","Bust":"85 cm","Waist":"61 cm","Hip":"93 cm"},"attrinch":{"Height":"68.9 inch","Bust":"33.5 inch","Waist":"24 inch","Hip":"36.6 inch"},"sizeUnit":0},"getTheLookInfo":[]}
</code></pre>
<p>I need the shoulder, bust, length and sleeve length values extracted.</p>
|
<p>Convert the data that you have into a <code>dictionary</code> (it mostly looks like a JSON file, so you can easily convert it into a dictionary using the <code>json</code> module). Then just use this code to output the data to an Excel file:</p>
<pre><code>from bs4 import BeautifulSoup
import pandas as pd
dictionary = {"lookbook":{"img":[],"count":0},"sizeInfoDes":{"sizeInfo":[{"size":"XS","Shoulder ":" 53 cm","Bust ":" 95 cm","Length ":" 51.5 cm","Sleeve Length ":" 49.5 cm"},{"size":"S","Shoulder ":" 55 cm","Bust ":" 99 cm","Length ":" 52.5 cm","Sleeve Length ":" 50 cm"},{"size":"M","Shoulder ":" 57 cm","Bust ":" 103 cm","Length ":" 53.5 cm","Sleeve Length ":" 50.5 cm"},{"size":"L","Shoulder ":" 60 cm","Bust ":" 109 cm","Length ":" 55 cm","Sleeve Length ":" 51 cm"},{"size":"XL","Shoulder ":" 63 cm","Bust ":" 115 cm","Length ":" 56.5 cm","Sleeve Length ":" 51.5 cm"}],"sizeInfoInch":[{"size":"XS","Shoulder ":" 20.9 inch","Bust ":" 37.4 inch","Length ":" 20.3 inch","Sleeve Length ":" 19.5 inch"},{"size":"S","Shoulder ":" 21.7 inch","Bust ":" 39 inch","Length ":" 20.7 inch","Sleeve Length ":" 19.7 inch"},{"size":"M","Shoulder ":" 22.4 inch","Bust ":" 40.6 inch","Length ":" 21.1 inch","Sleeve Length ":" 19.9 inch"},{"size":"L","Shoulder ":" 23.6 inch","Bust ":" 42.9 inch","Length ":" 21.7 inch","Sleeve Length ":" 20.1 inch"},{"size":"XL","Shoulder ":" 24.8 inch","Bust ":" 45.3 inch","Length ":" 22.2 inch","Sleeve Length ":" 20.3 inch"}],"sizeUnit":0,"allcmFlag":1,"sizeInfoAttribute":[],"basicAttribute":{"image_url":"","attribute_info":[],"base_code_info":[],"base_code_info_inch":[]}},"model":{"attr":{"Height":"175 cm","Bust":"85 cm","Waist":"61 cm","Hip":"93 cm"},"size":"S","name":"Andy","attrcm":{"Height":"175 cm","Bust":"85 cm","Waist":"61 cm","Hip":"93 cm"},"attrinch":{"Height":"68.9 inch","Bust":"33.5 inch","Waist":"24 inch","Hip":"36.6 inch"},"sizeUnit":0},"getTheLookInfo":[]}
size_info_dict = dictionary['sizeInfoDes']['sizeInfo']
final_dict = {'Shoulder ':[],
'Bust ':[],
'Length ':[],
'Sleeve Length ':[]}
for curr_dict in size_info_dict:
    for key in curr_dict.keys():
        try:
            final_dict[key].append(curr_dict[key])
        except KeyError:
            # skip keys we don't collect (e.g. "size")
            pass
df = pd.DataFrame(final_dict)
df.to_excel("D:\\Output.xlsx",index = False)
</code></pre>
<p>Output Screenshot:</p>
<p><a href="https://i.stack.imgur.com/p3ZYS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/p3ZYS.png" alt="enter image description here" /></a></p>
<p>Hope that this helps!</p>
|
python|python-3.x|pandas|beautifulsoup|python-beautifultable
| 0
|
2,218
| 64,520,774
|
How to upload a .txt file in python dataframe
|
<p>I am trying to load a txt file which contains data as below. I have around 1M records in the file.
The data consists of different fields (which are to become columns), between which I have manually added a comma as a delimiter.
The challenge is that the records do not all have the same set of fields.
The columns should be "Time", "ENTER", "TRANSID", "SUPERCODE", "ID", "MRP", "VOLUME", "VALUE", "PRODUCTtype", "BUILDING", "TAXNUM", "TAGFIELDS".</p>
<blockquote>
<p>00:00:00.000:, ENTER, transId=1, Supercode=BD3G, id=1, MRP=0.12s9,
volume=110333, value=20942463.27, productype=se IA CF, building=11430,
taxnumber=110F1, tagFields={B=C C=NZd3/1 D="20170514 07:41:53.616"
F=:00000017PouM H=LMT O=6521B841:00023662-A-15.1sd01.200.0.50dsd03.0.0
R="Order not Added" a=A c=FIRST3eNZA j=N}</p>
<p>00:00:00.000:, ENTER,transId=2,Supercode=BYG, id=2, MRP=0.195,
volume=223000, value=43485,> productype=se IA CF, building=110,
taxnumber=110I1, tagFields={B=C> C=NZ3 D="20170514 07:41:25.161"
F=:00000017PouK H=LMT> O=6521B841:00023625-A-15.101.200.0.5003.0.0
R="Ordernot Added" a=A> c=FIRSTNZA j=N}</p>
</blockquote>
<blockquote>
<p>#For this record, there is no taxnumber , so the TAXnumber column field should be blank/Nan for this record
00:00:00.000:, ENTER, transId=3, Supercode=TBC, id=3,MRP=2.71,
volume=3750, value=10162.5, productype=It CF UeCP,> building=110,
tagFields={B=C C=4331K D="20170514 > 13:59:51.288"
H=LMT K=12345O=6521B841:0027d59B6-B-15.101.200.0.5009.0.0 R="Order
notAdded" a=P c=4sd33E> j=N}</p>
</blockquote>
<p>#For this record, there is no building number , so the building number column field should be blank/Nan for this record</p>
<blockquote>
<p>00:00:00.000:, ENTER, transId=4, Supercode=ABT, id=4, MRP=2.73,>
volume=357, value=974.61, productype=se IrA CtF,
taxnumber=110B1, tagFields={B=C C=ZBJF D="20170929 16:10:01.321" H=LT
O=6521B5841:003A98565-A-15.101.2050.0.5009.0.0 R="Order not Added" a=A
c=BNPLLCOLO j=Y}</p>
</blockquote>
<p>I have tried the below steps:</p>
<blockquote>
<p>data = pd.read_csv("path.txt",delimiter=",",header=None)</p>
</blockquote>
<p>I got this error:</p>
<blockquote>
<p>ParserError: Error tokenizing data. C error: Expected 10 fields in
line 66017, saw 11</p>
</blockquote>
|
<p>Try using <code>engine='python'</code> and <code>error_bad_lines=False</code> in your <code>pd.read_csv()</code> call, so that rows with too many fields are skipped instead of raising an error.</p>
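<p>For example (a sketch; on pandas 1.3+ <code>error_bad_lines</code> is deprecated in favour of <code>on_bad_lines='skip'</code>):</p>
<pre><code>data = pd.read_csv("path.txt", delimiter=",", header=None,
                   engine="python", error_bad_lines=False)
</code></pre>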
|
python|pandas|dataframe|text|python-import
| 0
|
2,219
| 47,908,091
|
Model seems to be overfitting with Optimizer.minimize() but not tf.contrib.layers.optimize_loss()
|
<p>When I create <code>train_op</code> like this:</p>
<pre><code>train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.contrib.framework.get_global_step(),
learning_rate=params['learning_rate'],
optimizer='Adam'
)
</code></pre>
<p>I get a working network that performs well on validation and test sets.</p>
<p>If I just use <code>minimize()</code> method like this:</p>
<pre><code>optimizer = tf.train.AdamOptimizer(learning_rate=params['learning_rate'])
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step()
)
</code></pre>
<p>I get much worse results (precision, recall, loss) even on the first validation after 1000 steps, and after a while it seems like it completely overfitted (loss on validation is more or less constant and is 100x train loss, but precision and recall crash)</p>
<p>I created a function that is cleaned-up version of contrib one, that differs from straight Optimizer.minimize() in two marked places:</p>
<pre><code>def make_train_op(loss, optimizer, global_step):
    with tf.variable_scope(None, "OptimizeLoss", [loss, global_step]):
        # ==========================================
        # this part is extra comparing to minimize()
        update_ops = set(tf.get_collection(tf.GraphKeys.UPDATE_OPS))
        if update_ops:
            with tf.control_dependencies(list(update_ops)):
                loss = tf.identity(loss)
        # ==========================================
        gradients = optimizer.compute_gradients(
            loss,
            tf.trainable_variables()
        )
        grad_updates = optimizer.apply_gradients(
            gradients,
            global_step=global_step,
            name="train")
        # ==========================================
        # so is this one
        with tf.control_dependencies([grad_updates]):
            train_op = tf.identity(loss)
        # ==========================================
        return train_op
</code></pre>
<p>And validation performs well again. Training in all cases look more or less same (and healthy). Network is relatively straightforward CNN/batchnorm/dropout/maxpool mix with cross-entropy loss.</p>
<p>The way I understand this is that there are some operations that are part of a graph that don't appear as dependencies for loss, but that are needed to calculate gradients. How is that even possible? If this is a normal situation, why aren't those two snippets part of a core? Should I have done something different while building a model to avoid the need for this dependency forcing?</p>
|
<p>The issue is with batchnorm update operations, and it's actually <a href="https://www.tensorflow.org/api_docs/python/tf/layers/batch_normalization" rel="nofollow noreferrer">documented</a>:</p>
<blockquote>
<p>Note: when training, the moving_mean and moving_variance need to be updated. By default the update ops are placed in tf.GraphKeys.UPDATE_OPS, so they need to be added as a dependency to the train_op. For example:</p>
</blockquote>
<pre><code>update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
train_op = optimizer.minimize(loss)
</code></pre>
|
tensorflow|machine-learning|neural-network|conv-neural-network
| 2
|
2,220
| 47,682,621
|
Returning a pandas DataFrame with the indexes of first next cases of >= value
|
<p><strong>Initial DataFrame</strong> </p>
<pre><code>index is_case value_a value_b
03/01/2005 True 0.598081665 0.189099313
04/01/2005 False 0.480809369 0.142255603
05/01/2005 False 0.963128886 0.422756089
06/01/2005 False 0.687675456 0.739599384
07/01/2005 True 0.513017431 0.397303797
08/01/2005 True 0.691884131 0.922642361
09/01/2005 False 0.659555415 0.222993436
10/01/2005 False 0.920539474 0.553573214
11/01/2005 False 0.360990121 0.535021421
12/01/2005 False 0.512528553 0.343931584
13/01/2005 False 0.083391071 0.277004714
14/01/2005 False 0.382696661 0.204780359
15/01/2005 False 0.838666246 0.337101306
16/01/2005 True 0.363920089 0.355211134
17/01/2005 False 0.354853214 0.691884131
18/01/2005 False 0.089324832 0.910276245
19/01/2005 False 0.611991454 0.513667459
20/01/2005 True 0.210785609 0.839849547
</code></pre>
<p><strong>Desired Output:</strong></p>
<p><em>The important thing about the output is to contain the column <code>output</code> and the <code>index</code>. Other columns may or may not be in the output.</em></p>
<pre><code>index is_case value_a value_b output
03/01/2005 True 0.598081665 0.189099313 06/01/2005
07/01/2005 True 0.513017431 0.397303797 08/01/2005
08/01/2005 True 0.691884131 0.922642361 17/01/2005
16/01/2005 True 0.363920089 0.355211134 17/01/2005
20/01/2005 True 0.210785609 0.839849547 NaN
</code></pre>
<p><strong>Transformation Logic</strong> </p>
<p>Get the <code>value_a</code> of all rows where <code>is_case</code> is <code>True</code> and search from the next row onwards for a <code>value_b</code> which is greater or equal than <code>value_a</code>, it returns the first <code>index</code> that meets this criteria in column <code>output</code>.</p>
|
<p>IIUC, and if your dataframes aren't too big, you can use a cartesian join and filters, then drop duplicates to keep the first match, like this:</p>
<pre><code>df_is_case = df[df['is_case'] == True]
df_joined = df_is_case.assign(key=1)\
.merge(df.assign(key=1),
on='key',
suffixes=('','_y'))\
.query('index < index_y and value_a <= value_b_y')
df_out = pd.concat([df_joined, df_is_case])\
.drop_duplicates(subset='index')[['index', 'is_case', 'value_a', 'value_b', 'index_y']]\
.rename(columns={'index_y':'output'})
print(df_out)
</code></pre>
<p>Output:</p>
<pre><code> index is_case value_a value_b output
3 03/01/2005 True 0.598082 0.189099 06/01/2005
23 07/01/2005 True 0.513017 0.397304 08/01/2005
50 08/01/2005 True 0.691884 0.922642 17/01/2005
68 16/01/2005 True 0.363920 0.355211 17/01/2005
17 20/01/2005 True 0.210786 0.839850 NaN
</code></pre>
|
python|pandas|dataframe
| 3
|
2,221
| 49,013,228
|
Using cleanco on dataframe column
|
<p>I am trying to create a script to clean company names using the cleanco module in Python.</p>
<p>cleanco has an example which is as follows:</p>
<pre><code>business_name = "Some Big Pharma, LLC"
x = cleanco(business_name)
x.clean_name()
</code></pre>
<p>which results in "Some Big Pharma".</p>
<p>I am trying to do the same for a column in a pandas data frame.</p>
<p>So far my code is:</p>
<pre><code>#Importing Packages
import pandas as pd
from cleanco import cleanco
#Create a data frame for testing purposes
columns = ['emp'] #Define column names
new_col = ['emp2'] #Define column names for second dataframe
df=pd.DataFrame(columns=columns) #Create an empty data frame
df2=pd.DataFrame(columns=new_col)
df['emp'] = ['ABC, Inc.', 'XYZ LTD']#populate the data frame with dummy values
df["emp"] = [x.strip().replace('.','').replace('''''', '').replace('-', '').replace(',','') for x in df['emp'].str.lower()]
df2['emp2'] = df['emp'].apply(cleanco,1)
df['emp'].apply(cleanco.clean_name()) #This is where the error lies
</code></pre>
<p>I am having trouble calling the <em>clean_name</em> function.</p>
<p>my first data frame:</p>
<p>0 ABC, Inc.</p>
<p>1 XYZ LTD</p>
<p>I want df2 to look like:</p>
<p>0 abc</p>
<p>1 xyz</p>
|
<p>I used a lambda function to pull the "clean" name from the newly created column.</p>
<p>Try this:</p>
<pre><code>df2['emp3'] = df2['emp2'].apply(lambda x: x.clean_name())
</code></pre>
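<p>Or, skipping the intermediate column entirely (the same cleanco usage as in the question):</p>
<pre><code>df2['emp3'] = df['emp'].apply(lambda name: cleanco(name).clean_name())
</code></pre>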
|
python|pandas
| 2
|
2,222
| 48,906,313
|
Is the numpy.linalg.lstsq rcond parameter not working according to the description?
|
<p>I am trying to use the least squares solution from numpy (<a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.lstsq.html" rel="nofollow noreferrer">Description</a>). According to the documentation, for the 'rcond' parameter: "To silence the warning and use the new default, use rcond=None; to keep using the old behavior, use rcond=-1."</p>
<p>With the rcond parameter set to None:</p>
<pre><code>vector = np.linalg.lstsq(GA, FA, rcond = None)
</code></pre>
<p>It returns me an error:</p>
<pre><code>TypeError: must be real number, not NoneType
</code></pre>
<p>Which does not happen when the parameter is taken away or set to -1. </p>
<p>I did some checking, and according to this <a href="https://stackoverflow.com/questions/29372559/what-is-the-difference-between-numpy-linalg-lstsq-and-scipy-linalg-lstsq">post</a>, one of the answers had an update stating that there were some recent changes to this method.</p>
<p>I would like to ask if someone else is having the same problem, or if there is something like a typo on my line (or something else I haven't thought of).</p>
<p>Kind Regards,
Thanks for your time.</p>
|
<p>You need NumPy >= 1.14. What version are you using?</p>
|
python|python-3.x|numpy
| 2
|
2,223
| 49,140,570
|
Match string with pandas series that contain a list of strings
|
<p>I have a pandas dataframe like this:</p>
<p><a href="https://i.stack.imgur.com/bbQm8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bbQm8.jpg" alt="enter image description here"></a></p>
<p>The values are string type. I would like to find out whether each of these rows contains the string <code>'63'</code>.</p>
<p>So I first split each the strings at <code>','</code> by doing <code>df['col_name'].str.split(',')</code>, which gives me this:</p>
<p><a href="https://i.stack.imgur.com/InHZW.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/InHZW.jpg" alt="enter image description here"></a></p>
<p>So each row now contains a list of strings. I next tried to match the string by doing <code>df['col_name'].str.split(',').str.contains('63')</code> but it gives me this:</p>
<p><a href="https://i.stack.imgur.com/iVepY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iVepY.jpg" alt="enter image description here"></a></p>
<p>Why? :( I'd like it to say False for all rows especially for the rows that contain the value <code>263</code>.</p>
|
<p>You can use a list comprehension.</p>
<p>Here is a minimal example.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'A': [[196], [504], [63, 100], [35, 1], [63]]})
df2 = df[[63 in x for x in df['A']]]
# A
# 2 [63, 100]
# 4 [63]
</code></pre>
<p>This works because the list comprehension produces a Boolean list. This can, of course, be assigned to a series in <code>df</code>:</p>
<pre><code>df['Test'] = [63 in x for x in df['A']]
# A Test
# 0 [196] False
# 1 [504] False
# 2 [63, 100] True
# 3 [35, 1] False
# 4 [63] True
</code></pre>
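<p>For the string data in your question, the same idea works after the split — stripping whitespace, since values like <code>'263, 63'</code> split into parts with leading spaces (the column name is taken from your own code):</p>
<pre><code>df['has_63'] = df['col_name'].str.split(',').apply(lambda parts: '63' in [p.strip() for p in parts])
</code></pre>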
|
python|pandas|split|string-matching|series
| 1
|
2,224
| 58,622,807
|
python 3.8.0 - print self-documenting expression with value of variable on a new line
|
<p>Python 3.8.0 allows for self-documenting expressions and debugging using <code>=</code>, e.g.: <code>print(f'{myvar=}')</code>.</p>
<p>Is it possible to print the output on a new line? this would be useful for variables with multi-line outputs like dataframes.</p>
<p>e.g. </p>
<pre><code>>>> df = pd.DataFrame({'animal':['alligator', 'bee', 'falcon', 'lion',
'monkey', 'parrot', 'shark', 'whale', 'zebra']})
>>> print(f'{df.head()=}')
df.head() =
animal
0 alligator
1 bee
2 falcon
3 lion
4 monkey
</code></pre>
|
<p>If you make your f-string triple-quoted, you can include a literal newline after the <code>=</code> (a literal newline is required because backslash escapes like <code>\n</code> are not allowed inside f-string replacement fields in Python 3.8):</p>
<pre><code>df = pd.DataFrame({'animal':['alligator', 'bee', 'falcon', 'lion',
'monkey', 'parrot', 'shark', 'whale', 'zebra']})
print(f'''{df=
}''')
</code></pre>
|
python|python-3.x|pandas|f-string|python-3.8
| 2
|
2,225
| 58,623,653
|
Pandas groupby get filtered sum over total sum
|
<p>I have the following dataframe:</p>
<pre><code>df = pd.DataFrame([[1, 2, True], [1, 4, False], [2, 6, False], [2, 8, True]], columns=["Group", "Value", "C"])
Group Value C
0 1 2 True
1 1 4 False
2 2 6 False
3 2 8 True
</code></pre>
<p>And I would like for each group to know the sum of values where C equals true over the total sum of values. So for example for group 1 we have 2 / (2+4)</p>
<p>I have managed through some extensive searching to reach the following stage:</p>
<pre><code>df.groupby('Group').agg(lambda x: x.loc[x.C == True, 'Value'].sum() / x.Value.sum())
Value C
Group
1 0.333333 0.333333
2 0.571429 0.571429
</code></pre>
<p>But (as expected) I get two columns and I would like to get only the one. My ideal result would be: </p>
<pre><code> Ratio
Group
1 0.333333
2 0.571429
</code></pre>
<p>I can surely do some modification after the groupby and get what I want, but as I am new to Python I was wondering if I am missing something basic here.</p>
|
<p>I believe you can divide by a <code>groupby.transform('sum')</code> and assign using <code>.assign()</code> after filtering, so as to align on the index:</p>
<pre><code>df[df['C']].assign(Ratio=df['Value']/df.groupby('Group')['Value'].transform('sum'))
</code></pre>
<p>If more than 1 True per group, use:</p>
<pre><code>m=(df.groupby(['Group','C'],as_index=False,sort=False)['Value'].sum()
.query('C==True').assign(Sum=df.groupby(['Group'])['Value'].transform('sum')))
m[['Group']].assign(Ratio=m['Value']/m['Sum'])
</code></pre>
<hr>
<pre><code> Group Ratio
0 1 0.333333
3 2 0.571429
</code></pre>
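<p>Alternatively, a sketch of a shorter route: zero out the rows where <code>C</code> is False, then divide the group sums — this also handles multiple True rows per group:</p>
<pre><code>ratio = (df['Value'].where(df['C'], 0).groupby(df['Group']).sum()
         / df.groupby('Group')['Value'].sum()).to_frame('Ratio')
</code></pre>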
|
python|pandas|pandas-groupby
| 2
|
2,226
| 58,664,141
|
How to write data frame to Postgres table without using SQLAlchemy engine?
|
<p>I have a data frame that I want to write to a <strong>Postgres</strong> database. This functionality needs to be part of a <strong>Flask</strong> app.</p>
<p>For now, I'm running this insertion part as a separate script by creating an <strong>SQLAlchemy engine</strong> and passing it to the <code>df.to_sql()</code> to write the data frame to a database table.</p>
<p>But when I integrate this functionality into a Flask app, I already have existing connections to the <strong>Postgres</strong> database which were created using <strong>Psycopg2 connection pool</strong>.</p>
<p>When looked at <code>df.to_sql()</code> documentation, it is mentioned that it uses the <strong>SQLAlchemy engine</strong>. I don't see any other connection mechanism. <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html#pandas-dataframe-to-sql" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html#pandas-dataframe-to-sql</a></p>
<p>My question is why do I need this SQLAlchemy engine to be created when I have the existing connections. Why can't I use them? </p>
|
<p>You can use those connections and avoid SQLAlchemy. This is going to sound rather unintuitive, but it will be much faster than regular inserts (even if you were to drop the ORM and make a general query e.g. with <code>executemany</code>). Inserts are slow, even with raw queries, but you'll see that <code>COPY</code> is mentioned several times in <a href="https://stackoverflow.com/questions/12206600/how-to-speed-up-insertion-performance-in-postgresql">How to speed up insertion performance in PostgreSQL</a>. In this instance, my motivations for the approach below are:</p>
<ol>
<li>Use <code>COPY</code> instead of <code>INSERT</code></li>
<li>Don't trust Pandas to generate the correct SQL for this operation (although, as noted by Ilja Everilä, this approach actually got <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-sql-method" rel="nofollow noreferrer">added to Pandas in V0.24</a>)</li>
<li>Don't write the data to disk to make an actual file object; keep it all in memory</li>
</ol>
<p>Suggested approach using <a href="http://initd.org/psycopg/docs/cursor.html#cursor.copy_from" rel="nofollow noreferrer"><code>cursor.copy_from()</code></a>:</p>
<pre><code>import csv
import io
import psycopg2
from flask import current_app  # needed for the config lookups below (assumes a Flask app context)
df = "<your_df_here>"
# drop all the columns you don't want in the insert data here
# First take the headers
headers = df.columns
# Now get a nested list of values
data = df.values.tolist()
# Create an in-memory CSV file
string_buffer = io.StringIO()
csv_writer = csv.writer(string_buffer)
csv_writer.writerows(data)
# Reset the buffer back to the first line
string_buffer.seek(0)
# Open a connection to the db (which I think you already have available)
with psycopg2.connect(dbname=current_app.config['POSTGRES_DB'],
user=current_app.config['POSTGRES_USER'],
password=current_app.config['POSTGRES_PW'],
host=current_app.config['POSTGRES_URL']) as conn:
c = conn.cursor()
# Now upload the data as though it was a file
c.copy_from(string_buffer, 'the_table_name', sep=',', columns=headers)
conn.commit()
</code></pre>
<p>This should be orders of magnitude faster than actually doing inserts.</p>
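<p>One caveat: <code>copy_from</code> treats <code>sep</code> literally and does not understand CSV quoting, so text values containing commas will break it. If that can happen in your data, a sketch replacing the <code>copy_from</code> call with <code>copy_expert</code> in PostgreSQL's CSV mode (table and column names as above) is safer:</p>
<pre><code>c.copy_expert(
    "COPY the_table_name ({}) FROM STDIN WITH (FORMAT CSV)".format(', '.join(headers)),
    string_buffer
)
</code></pre>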
|
python|pandas|postgresql|sqlalchemy
| 3
|
2,227
| 58,989,975
|
Move for loop into numpy single expression when calling polyfit
|
<p><em>Fairly new to numpy/python here, trying to figure out some less c-like, more numpy-like coding styles.</em></p>
<h2>Background</h2>
<p>I've got some code done that takes a fixed set of x values and multiple sets of corresponding y value sets and tries to find which set of the y values are the "most linear".</p>
<p>It does this by going through each set of y values in a loop, calculating and storing the residual from a straight line fit of those y's against the x's, then once the loop has finished finding the index of the minimum residual value.</p>
<p>...sorry this might make a bit more sense with the code below.</p>
<pre><code>import numpy as np
import numpy.polynomial.polynomial as poly
# set of x values
xs = [1,22,33,54]
# multiple sets of y values for each of the x values in 'xs'
ys = np.array([[1, 22, 3, 4],
[2, 3, 1, 5],
[3, 2, 1, 1],
[34,23, 5, 4],
[23,24,29,33],
[5,19, 12, 3]])
# array to store the residual from a linear fit of each of the y's against x
residuals = np.empty(ys.shape[0])
# loop through the xs's and calculate the residual of a linear fit for each
for i in range(ys.shape[0]):
_, stats = poly.polyfit(xs, ys[i], 1, full=True)
residuals[i] = stats[0][0]
# the 'most linear' of the ys's is at np.argmin:
print('most linear at', np.argmin(residuals))
</code></pre>
<h2>Question</h2>
<p>I'd like to know if it's possible to "numpy'ize" that into a single expression, something like</p>
<p>residuals = get_residuals(xs, ys)</p>
<h2>...I've tried:</h2>
<p>I've tried the following, but no luck (it always passes the full arrays in, not row by row):</p>
<pre><code># ------ ok try to do it without a loop --------
def wrap(x, y):
_, stats = poly.polyfit(x, y, 1, full=True)
return stats[0][0]
res = wrap(xs, ys) # <- fails as passes ys as full 2D array
res = wrap(np.broadcast_to(xs, ys.shape), ys) # <- fails as passes both as 2D arrays
</code></pre>
<p><strong>Could anyone give any tips on how to numpy'ize that?</strong> </p>
|
<p>From the <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.polyfit.html#numpy.polynomial.polynomial.polyfit" rel="nofollow noreferrer"><code>numpy.polynomial.polynomial.polyfit</code> docs</a> (not to be confused with <code>numpy.polyfit</code>, which is not interchangeable):</p>
<blockquote>
<p>x : array_like, shape (M,)</p>
<p>y : array_like, shape (M,) or (M, K)</p>
</blockquote>
<p>Your <code>ys</code> needs to be transposed to have <code>ys.shape[0]</code> equal to <code>xs.shape</code></p>
<pre><code>def wrap(x, y):
_, stats = poly.polyfit(x, y.T, 1, full=True)
return stats[0]
res = wrap(xs, ys)
res
Out[]: array([284.57337884, 5.54709898, 0.41399317, 91.44641638,
6.34982935, 153.03515358])
</code></pre>
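<p>And to recover the "most linear" index, exactly as in your loop version:</p>
<pre><code>print('most linear at', np.argmin(res))
</code></pre>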
|
numpy
| 0
|
2,228
| 58,732,274
|
Pandas Merge Vlookup, KeyError: "['Value'] not in index"
|
<p>I am trying to perform a vlookup/merge between two dataframes. I get the error
<code>KeyError: "['Player'] not in index"</code></p>
<p>I've tried to reindex the columns but it doesn't seem to work:
<code>df1= df1.reindex(columns = ['Player','Category'])</code></p>
<p>My current code is like so <code>missingnames = pd.merge(df1,df2[['Player','Player Name']],on='Player',how = 'left')</code></p>
<p>My dataframes are like below:</p>
<p>df1:</p>
<p><a href="https://i.stack.imgur.com/J1KQs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J1KQs.png" alt="enter image description here"></a></p>
<p>df2:</p>
<p><a href="https://i.stack.imgur.com/Tit15.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tit15.png" alt="enter image description here"></a></p>
<p>expected output</p>
<p><a href="https://i.stack.imgur.com/hrqD7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hrqD7.png" alt="enter image description here"></a></p>
<p>can anyone help with this?
Thanks.</p>
|
<p>You can do it like this:</p>
<pre><code>df1['Exists'] = df1['Player'].str.lower().isin(df2['Player Name'].str.lower())
</code></pre>
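<p>The <code>KeyError</code> itself comes from <code>df2[['Player','Player Name']]</code>: <code>df2</code> has a <code>'Player Name'</code> column but no <code>'Player'</code> column. If you do want the actual merge, a sketch that joins on a normalized key (column names taken from your code):</p>
<pre><code>df1['key'] = df1['Player'].str.lower().str.strip()
df2['key'] = df2['Player Name'].str.lower().str.strip()
missingnames = df1.merge(df2[['key', 'Player Name']], on='key', how='left').drop(columns='key')
</code></pre>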
|
python|pandas
| 1
|
2,229
| 58,977,672
|
How can I pass a pandas data frame into a request parameter
|
<p>I am trying to do something like this:</p>
<pre><code>df = pd.read_csv(r'Desktop/test.csv')
url = 'http://localhost:5000/run_model'
response = requests.post(url, data=df)
</code></pre>
<p>But I am having this error:</p>
<pre><code>ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
|
<p>Depending on your endpoint you can convert your DataFrame to CSV or json like</p>
<pre><code>df.to_json()
</code></pre>
<p>or</p>
<pre><code>df.to_csv()
</code></pre>
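<p>A minimal sketch of the request itself, assuming your Flask endpoint reads JSON from the request body:</p>
<pre><code>import pandas as pd
import requests

df = pd.read_csv(r'Desktop/test.csv')
url = 'http://localhost:5000/run_model'
# one row per record; the endpoint can rebuild the frame with pd.DataFrame(request.get_json())
response = requests.post(url, json=df.to_dict(orient='records'))
</code></pre>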
|
python|pandas|python-requests
| 2
|
2,230
| 70,167,070
|
How to run a tf.lite model on a Raspberry Pi instead of a saved Keras model
|
<p>I am trying to classify traffic signs using a Raspberry Pi. For this I trained and saved a Keras model as an .h5 file, but it consumes too much CPU, so I converted it to a .tflite model and tried to run that instead. However, it gives the error <code>OSError: SavedModel file does not exist at: yourmodel.tflite/{saved_model.pbtxt|saved_model.pb} </code> — I checked the path; here is my code.
Also, I just changed this line: <code>model = tensorflow.keras.models.load_model("my_model.h5")</code> to <code>model = tensorflow.keras.models.load_model("yourmodel.tflite")</code></p>
<pre><code>import numpy as np
import cv2
import tensorflow
from tensorflow import keras
from tensorflow.keras.preprocessing import image
#############################################
frameWidth= 600 # CAMERA RESOLUTION
frameHeight = 480
brightness = 180
threshold = 0.75 # PROBABLITY THRESHOLD
font = cv2.FONT_HERSHEY_SIMPLEX
##############################################
# SETUP THE VIDEO CAMERA
cap = cv2.VideoCapture(0)
cap.set(3, frameWidth)
cap.set(4, frameHeight)
cap.set(10, brightness)
cap.set(cv2.CAP_PROP_FPS, 3)
# IMPORT THE TRAINED MODEL
model = tensorflow.keras.models.load_model("yourmodel.tflite")
#model = load_model('best_model.h5')
def equalize(img):
img = cv2.equalizeHist(img)
return img
def grayscale(img):
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
return img
def preprocessing(img):
img = grayscale(img)
img = equalize(img)
img = img/255
return img
def getCalssName(classNo):
if classNo == 0: return 'Speed Limit 20 km/h'
elif classNo == 9: return 'No passing'
elif classNo == 12: return 'Priority road'
elif classNo == 13: return 'Yield'
elif classNo == 14: return 'Stop'
elif classNo == 38: return 'Keep right'
elif classNo == 39: return 'Keep left'
while True:
success, imgOrignal = cap.read()
img = np.asarray(imgOrignal)
#img = cv2.resize(img, (32, 32))
img = preprocessing(img)
cv2.imshow("Processed Image", img)
img = img.reshape(1, 32, 32, 1)
cv2.putText(imgOrignal, "CLASS: " , (20, 35), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
cv2.putText(imgOrignal, "PROBABILITY: ", (20, 75), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
# PREDICT IMAGE
predictions = model.predict(img)
classIndex = model.predict_classes(img)
probabilityValue =np.amax(predictions)
if probabilityValue > threshold:
print(getCalssName(classIndex))
#cv2.rectangle(image, coordinate[0],coordinate[1], (0, 255, 0), 1)
cv2.putText(imgOrignal,str(classIndex)+" "+str(getCalssName(classIndex)), (120, 35), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
cv2.putText(imgOrignal, str(round(probabilityValue*100,2) )+"%", (180, 75), font, 0.75, (0, 0, 255), 2, cv2.LINE_AA)
cv2.imshow("Result", imgOrignal)
if cv2.waitKey(1) and 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
|
<p>Try converting your Keras model to TFLite and saving it with this code:</p>
<pre><code># model is your keras model
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()
with open('model.tflite', 'wb') as f:
f.write(tflite_model)
</code></pre>
<p>To load and use it you will need tf.lite.Interpreter</p>
<pre><code># instead of `model = tensorflow.keras.models.load_model("yourmodel.tflite")`
# use this code to load tflite model
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
# replace `predictions = model.predict(img)` with this code
interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_details[0]['index'])
</code></pre>
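<p>Two more adjustments your script will likely need (the float32 requirement is typical for converted Keras models — check <code>input_details[0]['dtype']</code> to be sure):</p>
<pre><code>img = img.astype(np.float32)         # the tflite input tensor usually expects float32
classIndex = np.argmax(predictions)  # replaces model.predict_classes(img), assuming a softmax output
</code></pre>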
|
python|tensorflow|keras|raspberry-pi
| 0
|
2,231
| 70,309,742
|
How to resample to a coarser resolution but to samples within the original index?
|
<p>I have the following use case:</p>
<pre><code>import pandas as pd
import numpy as np
# create dataframe
df = pd.DataFrame(data=np.random.rand(10, 3),
columns=['a', 'b'],
index=pd.date_range('2021-01-01', periods=10, freq='W-FRI'))
# data is random, I'm just saving time with copy paste first row
df
> a b
> 2021-01-01 0.272628 0.974373
> 2021-01-08 0.272628 0.974373
> 2021-01-15 0.272628 0.974373
> 2021-01-22 0.272628 0.974373
> 2021-01-29 0.272628 0.974373
> 2021-02-05 0.759018 0.443803
> 2021-02-12 0.759018 0.443803
> 2021-02-19 0.759018 0.443803
> 2021-02-26 0.759018 0.443803
> 2021-03-05 0.973900 0.929002
</code></pre>
<p>I would like to get the first matching sample within my index when I resample but doing the following doesn't work, note that the dates aren't in my original index:</p>
<pre><code>df.resample('M').first()
> a b
> 2021-01-31 0.272628 0.160300
> 2021-02-28 0.759018 0.443803
> 2021-03-31 0.973900 0.929002
</code></pre>
<p>I'd like to resample to monthly but taking the first matching date sample each time, i.e., I would like the following result:</p>
<pre><code>> a b
> 2021-01-01 0.272628 0.160300
> 2021-02-05 0.759018 0.443803
> 2021-03-05 0.973900 0.929002
</code></pre>
<p>I could do a hack as follows but this is not ideal, it'd only work for this toy example:</p>
<pre><code>df.loc[list(np.diff(df.index.month.values, prepend=0) == 1)]
</code></pre>
|
<p>One way is to transform the index to period, then drop the duplicates:</p>
<pre><code>months = df.index.to_series().dt.to_period('M')
df[~months.duplicated()]
</code></pre>
<p>Another, which might actually be better, is <code>groupby().head()</code>:</p>
<pre><code>df.groupby(pd.Grouper(freq='M')).head(1)
</code></pre>
<p>Output:</p>
<pre><code> a b
2021-01-01 0.695784 0.228550
2021-02-05 0.188707 0.278871
2021-03-05 0.935635 0.785341
</code></pre>
|
python|pandas|numpy|pandas-resample
| 1
|
2,232
| 70,368,274
|
How to get the filenames of the samples from tensorflow dataset?
|
<p>For example, I have the following code, which get me the image arrays and their labels:</p>
<pre><code>import tensorflow_datasets as tfds
builder = tfds.ImageFolder('/home/ubuntu/X-dataset/')
ds_val = builder.as_dataset(split=['val'], shuffle_files=False, as_supervised=True)
ds_val = ds_val.batch(batch_size=32, drop_remainder=False)
ds_val = ds_val.map(lambda x, y: (process_test_data(x), y))
</code></pre>
<p>later on, I want to get all predictions from <code>ds_val</code> like this:</p>
<pre><code>val_preds = model_classification.predict(ds_val, steps=len(ds_val))
</code></pre>
<p>to make comparisons with my ground truth table. However, I need some clue about the input sample names or paths, to attach them correctly to the GT table.</p>
<p>I wish for something that shows me the order of the samples used during the dataset iteration, like this:</p>
<pre><code>ds_val.filenames
# ['file1.jpg', 'file2.jpg', ..., 'filen.jpg']
</code></pre>
<p>I haven't seen anything in <code>tfds</code> that allows me to do this. My doubts are: is there a tfds alternative for this, and is this a correct path to follow with tfds when measuring a set's performance?</p>
|
<pre><code>import tensorflow as tf
image_ds = tf.data.Dataset.list_files('image/*', shuffle=False)
for file in image_ds.take(3):
print(file.numpy())
</code></pre>
<p><strong>Output</strong></p>
<pre><code>b'image/flower.jpg'
b'image/flower2.jpg'
b'image/flower1.jpg'
</code></pre>
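<p>With the <code>tfds.ImageFolder</code> builder specifically, the file path may also be exposed as a feature if you skip <code>as_supervised=True</code> — the feature name below is an assumption, so verify it against <code>builder.info.features</code>:</p>
<pre><code>ds = builder.as_dataset(split='val', shuffle_files=False)  # yields dicts instead of (x, y) tuples
for example in ds.take(1):
    print(example['image/filename'])  # assumed feature name; check builder.info.features
</code></pre>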
|
tensorflow|tensorflow2.0|tensorflow-datasets
| 0
|
2,233
| 55,957,051
|
How to make any changes in a column when there is a change in the corresponding row in other column while looping through in python
|
<ol>
<li>I want to add a new column 'rate' where the first entry of the day should be 0.</li>
<li>If the time difference from the previous row is 5 minutes, the rate should be 2; otherwise it should be 0.</li>
</ol>
<p>input: </p>
<pre><code>date time
20190101 750
20190101 755
20190101 800
20190101 810
20190101 815
20190102 820
20190102 825
20190103 800
20190103 805
</code></pre>
<p>output should be:</p>
<pre><code>date time rate
20190101 750 0
20190101 755 2
20190101 800 2
20190101 810 0
20190101 815 2
20190102 820 0
20190102 825 2
20190103 800 0
20190103 805 2
</code></pre>
<p>I converted time to datetime to get the correct 5-minute difference,
and ran the loop:</p>
<pre><code>df['_time'] = pd.to_datetime(df['time'].astype(str), format='%H%M')
</code></pre>
<p>---the loop</p>
<pre><code>k = 20190101
for i in df.date:
if i == k:
df.loc[ df['_time'].diff() == '00:05:00', 'rate'] = 2
df.loc[ df['_time'].diff() != '00:05:00', 'rate'] = 0
k = i
else:
df.loc[( df['_time'].diff() != '00:05:00') & (df['date'] == i),'rate'] = 0
df.loc[ df['_time'].diff() == '00:05:00', 'rate'] = 2
</code></pre>
<p>My output now is:</p>
<pre><code>date time rate
20190101 750 0
20190101 755 2
20190101 800 2
20190101 810 0
20190101 815 2
20190102 820 2
20190102 825 2
20190103 800 0
20190103 805 2
</code></pre>
<p>I am not sure how to get 0 for 20190102 820 </p>
|
<p>If I understand correctly: to fix your problem at the change of day, you can use <code>groupby</code> on the date — that way it will not compare just the time but the dates too (if your date column is your index this will work as-is; if not, change <code>df.index</code> to <code>df.date</code> in the groupby)</p>
<pre><code>df['_time'] = pd.to_datetime(df['time'].astype(str), format='%H%M')
flag = df.groupby(df.index)['_time'].diff()
df['rate'] = 0
df.loc[flag.dt.total_seconds()/60 == 5, 'rate'] = 2
</code></pre>
<p>There must be a one-liner that does the exact same thing, but since your handle suggests you're new to Python, I took the long road to help you.</p>
<h3>output</h3>
<pre><code> time rate
date
20190101 750 0
20190101 755 2
20190101 800 2
20190101 810 0
20190101 815 2
20190102 820 0
20190102 825 2
20190103 800 0
20190103 805 2
</code></pre>
|
python-3.x|pandas|loops
| 0
|
2,234
| 55,892,254
|
Trying to get CSV ready for keras model with tensorflow dataset
|
<p>I do have a keras CNN model ready which expects [None,20,20,3] arrays as input. (20 is image size here...) On the other side I do have a CSV with 1200 (20*20*3) columns ready in my cloud storage.</p>
<p>I want to write an ETL pipeline with tensorflow to obtain a [20,20,3] shape tensor for each row in the csv.</p>
<p>My code so far:</p>
<p>I have already spent days of work on this and feel confident that this approach might work out in the end.</p>
<pre><code>import tensorflow as tf
BATCH_SIZE = 30
tf.enable_eager_execution()
X_csv_path = 'gs://my-bucket/dataX.csv'
X_dataset = tf.data.experimental.make_csv_dataset(X_csv_path, BATCH_SIZE, column_names=range(1200) , header=False)
X_dataset = X_dataset.map(lambda x: tf.stack(list(x.values())))
iterator = X_dataset.make_one_shot_iterator()
image = iterator.get_next()
</code></pre>
<p>I would expect to have a [30,1200] shape but I still get 1200 tensors of shape [30] instead. My idea is to read every line into a [1200] shaped tensor and then reshape the line to a [20,20,3] tensor to feed my model with. Thanks for your time!</p>
|
<p><code>tf.data.experimental.make_csv_dataset</code> creates an OrderedDict of column arrays. For your task I'd use <code>tf.data.TextLineDataset</code>. </p>
<pre class="lang-py prettyprint-override"><code>def parse(filename):
string = tf.strings.split([filename], sep=',').values
return string
dataset = tf.data.TextLineDataset('sample.csv').map(parse).batch(BATCH_SIZE)
for i in dataset:
print(i)
</code></pre>
<p>This will output a tensor of shape (BATCH_SIZE, row_length), where row_length is the number of fields in a row of the csv file. You can apply any additional preprocessing, depending on your task.</p>
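<p>For the [20, 20, 3] shape you're after, that extra preprocessing is just a numeric conversion and a reshape — a sketch, assuming every row really has 1200 comma-separated values:</p>
<pre><code>def parse_row(line):
    fields = tf.strings.split([line], sep=',').values       # split one CSV line into string fields
    values = tf.strings.to_number(fields, out_type=tf.float32)  # convert to floats
    return tf.reshape(values, (20, 20, 3))                  # 1200 values -> image tensor

dataset = tf.data.TextLineDataset('sample.csv').map(parse_row).batch(BATCH_SIZE)
</code></pre>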
|
csv|tensorflow|dataset|shapes|tensor
| 0
|
2,235
| 55,962,827
|
Code to find top 95 percent of column values in dataframe
|
<p>I am looking for help gathering the top 95 percent of sales in a Pandas Data frame where I need to group by a category column. I found the following (top section of code) which is close. <code>TotalDollars</code> in my df gets properly sorted in descending fashion, but the resulting number of rows includes more than top 95% of total dollars.</p>
<pre><code>Total Dollars Percent Running Percent
117388 11.09% 11.09%
81632 7.71% 18.80%
46316 4.38% 23.18%
41500 3.92% 27.10%
</code></pre>
<p>After the running total percent hits 95%, I want to eliminate the remaining rows for that category. I don't need Percent or Running Percent as df fields (they are given for illustrative purposes only).</p>
<pre><code>df1 = (df.groupby('channel',group_keys=False)
.apply(lambda x: x.nlargest(int(len(x) * a), 'score')))
</code></pre>
<p>my code:</p>
<pre><code>df_out = (df_Sales.groupby('category', group_keys=False).apply(lambda x: x.nlargest(int(len(x) * 0.95), 'TotalDollars')))
</code></pre>
|
<pre><code>import pandas as pd
import numpy as np
np.random.seed(100)
test_df = pd.DataFrame({
'group': ['A'] * 5 + ['B'] * 5,
'value': np.random.randint(1,100,10)
})
def retain_quantile(df, percentile=0.95):
percentile_val = df['value'].quantile(percentile)
return df[df['value'] <= percentile_val]
grouped_df = test_df.groupby('group').apply(retain_quantile)
grouped_df
group value
group
A 0 A 9
1 A 25
2 A 68
4 A 80
B 5 B 49
6 B 11
7 B 95
8 B 53
</code></pre>
<p>If you're planning on using this for multiple columns, it would be a lot more complicated, but the approach is very similar.</p>
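<p>If you specifically need the running-total cutoff described in the question (rather than a per-value quantile), a sketch — frame and column names taken from your own code, and assuming pandas >= 0.24 for <code>shift</code>'s <code>fill_value</code>:</p>
<pre><code>def top_95_running(group):
    g = group.sort_values('TotalDollars', ascending=False)
    running = g['TotalDollars'].cumsum() / g['TotalDollars'].sum()
    # keep rows up to and including the one whose running share first reaches 95%
    return g[running.shift(fill_value=0) < 0.95]

df_out = df_Sales.groupby('category', group_keys=False).apply(top_95_running)
</code></pre>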
|
python-3.x|pandas
| 1
|
2,236
| 64,693,529
|
ipywidgets and pandas dataframe
|
<p>How can I get the dataframe out of the @interact function for use in the next cell?
My interact function looks something like this:</p>
<pre><code>@interact(eutPlace=eutPlaces)
def selectByEut (eutPlace):
rdsTable = tabelSisse.drop(['Id', 'Serial_number', 'User_modified'], axis='columns')
rdsTable = rdsTable.loc[rdsTable['EUT_place'] == eutPlace]
print(rdsTable.shape)
return rdsTable
</code></pre>
|
<p>I found one solution: use <code>interactive</code> instead of <code>@interact</code>.</p>
<pre><code>def selectByEut (eutPlace):
rdsTable = tabelSisse.drop(['Id', 'Serial_number', 'User_modified'], axis='columns')
rdsTable = rdsTable.loc[rdsTable['EUT_place'] == eutPlace]
display(rdsTable)
return rdsTable
intrFilt = interactive(selectByEut, eutPlace=eutPlaces)
intrFilt
</code></pre>
<p>And in the next cell I can get the resulting dataframe (note that <code>.result</code> holds the return value of the most recent call, so the function must have run at least once) with:</p>
<pre><code>reducedTable = intrFilt.result
</code></pre>
|
python|pandas|jupyter-notebook|ipywidgets
| 1
|
2,237
| 65,002,526
|
Tensorflow 2: Sort a 3D tensor according to a 2D tensor
|
<p>I have a 3D tensor with batch, sequence, feature dimension (N,s,e). It is a sequence of probability distributions. Then I want to order them according to the integer corresponding to the highest predictions. So say</p>
<pre><code>x_probabs = 3D tensor (ex: [[[0.5, 0.1, 0.4], [0.3, 0.3, 0.4], [0.1,
0.8, 0.1]]]; # shape N s e
x = tf.argmax(x_probabs, axis=-1) = [[0, 2, 1]]; # shape N s
</code></pre>
<p>or another example would be</p>
<pre><code>x_probabs=[[[0.6, 0.1, 0.1, 0.1, 0.1], [0.1,0.1,0.1,0.1,0.6], [0.1,0.1,0.1,0.6,0.1]]];
x = [[0, 4, 3]];
</code></pre>
<p>If I want to order x I can do <code>ordered_x = tf.sort(x, axis=-1)</code>, and to get the ordering I can do <code>indices_sorted_x = tf.argsort(x, axis=-1)</code>. I want the same ordering applied to x_probabs and I am confused how to do that. I have tried <code>sorted_x_probabs = tf.gather(x_probabs, indices_sorted_x)</code> but it doesn't work because the indices are for a 2D tensor and not a 3D one. I'm stuck here.</p>
<p>The following is what it would look like for the first example</p>
<pre><code>sorted_x = [[0,1,2]];
sorted_x_probabs = [[[0.5, 0.1, 0.4],[0.1,
0.8, 0.1],[0.3, 0.3, 0.4]]];
</code></pre>
<p>This would be for the 2nd example</p>
<pre><code>sorted_x = [[0,3,4]];
sorted_x_probabs = [[[0.6, 0.1, 0.1, 0.1, 0.1],[0.1,0.1,0.1,0.6,0.1],[0.1,0.1,0.1,0.1,0.6]]];
</code></pre>
<p>Thank you very much in advance.</p>
|
<p>You can add the <code>batch_dims</code> argument so the gather happens per batch element. Note that the indices must be the sort order, <code>indices_sorted_x</code> — not the class values <code>x</code> themselves, which would index out of range (e.g. class 4 in a sequence of length 3):</p>
<pre><code>sorted_x_probabs = tf.gather(x_probabs, indices_sorted_x, batch_dims=1)
</code></pre>
|
python|sorting|tensorflow
| 0
|
2,238
| 44,066,571
|
Accumulate column through pandas
|
<p>I have multiple tab-delimited files, all having the same entries. I intend to read each file and choose the first column as the index. My final table will have the first column as the index, mapped against the last column from each of the files. For this, I wrote some pandas code, but it's not great. Is there an alternate way to do this?</p>
<pre><code>import pandas as pd
df1 = pd.read_csv("FB_test.tsv",sep='\t')
df1_idx = df1.set_index('target_id')
df1_idx.drop(df1_idx[['length','eff_length','est_counts']],inplace=True, axis=1)
print(df1_idx)
df2 = pd.read_csv("Myc_test.tsv",sep='\t')
df2_idx = df2.set_index('target_id')
df2_idx.drop(df2_idx[['length','eff_length','est_counts']],inplace=True, axis=1)
print(df2_idx)
frames = [df1_idx, df2_idx]
results = pd.concat(frames, axis=1)
results
</code></pre>
<p>The output it generated was, </p>
<pre><code> tpm
target_id
A 0
B 0
C 0
D 0
E 0
tpm
target_id
A 1
B 1
C 1
D 1
E 1
Out[18]:
target_id tpm tpm
A 0 1
B 0 1
C 0 1
D 0 1
E 0 1
</code></pre>
<p>How can I loop it so that I read each file and achieve this same output?</p>
<p>Thanks,
AP</p>
|
<p>To clean the code and use a looping mechanism, you can put both your file names and the columns you are dropping in two separate lists, and then use list comprehension on the file names to import each dataset. Subsequently, you concatenate the output of the list comprehension into one dataframe:</p>
<pre><code>import pandas as pd
drop_cols = ['length','eff_length','est_counts']
filenames = ["FB_test.tsv", "Myc_test.tsv"]
results = pd.concat([pd.read_csv(filename,sep='\t').set_index('target_id').drop(drop_cols, axis=1) for filename in filenames], axis=1)
</code></pre>
<p>I hope this helps.</p>
|
python-3.x|pandas
| 1
|
2,239
| 44,323,136
|
How to debug a Python program that freezes on one line?
|
<p>When I run my code, it just stays on the line <code>image_batch, label_batch = sess.run([test_images, test_labels])</code> without any error prompt. It just hangs there and can't move on.</p>
<p>Here is my code:</p>
<pre><code># coding=utf-8
from color_1 import read_and_decode, get_batch, get_test_batch
import color_inference
import cv2
import os
import time
import numpy as np
import tensorflow as tf
import color_train
import math
batch_size=128
num_examples = 10000
crop_size=56
def evaluate():
image_holder = tf.placeholder(tf.float32, [batch_size, 56, 56, 3], name='x-input')
label_holder = tf.placeholder(tf.int32, [batch_size], name='y-input')
test_image, test_label = read_and_decode('val.tfrecords')
test_images, test_labels = get_test_batch(test_image, test_label, batch_size, crop_size)
y=color_inference.inference(image_holder)
num_iter = int(math.ceil(num_examples / batch_size))
true_count = 0
total_sample_count = num_iter * batch_size
top_k_op = tf.nn.in_top_k(y, label_holder, 1)
saver = tf.train.Saver()
with tf.Session() as sess:
ckpt=tf.train.get_checkpoint_state(color_train.MODEL_SAVE_PATH)
if ckpt and ckpt.model_checkpoint_path:
ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
global_step = ckpt.model_checkpoint_path.split('/')[-1].split('-')[-1]
saver.restore(sess, os.path.join(color_train.MODEL_SAVE_PATH, ckpt_name))
print('Loading success, global_step is %s' % global_step)
image_batch, label_batch = sess.run([test_images, test_labels])
predictions = sess.run([top_k_op], feed_dict={image_holder: image_batch,
label_holder: label_batch})
true_count += np.sum(predictions)
print("Count is:%g" % true_count)
precision = true_count * 1.0 / total_sample_count
print("After %s training step,the prediction is :%g",global_step,precision)
else:
print('No checkpoint file found')
return
def main(argv=None):
evaluate()
if __name__=='__main__':
tf.app.run()
</code></pre>
<p>My last question is similar to this one, but the code is a little different; maybe you can get some context from that question.</p>
|
<p>It seems like you are not starting the queue runners / initializing the variables properly. I have seen similar behavior with my models when I forgot to do that.
When this is the case, you most likely get stuck at the line</p>
<pre><code>image_batch, label_batch = sess.run([test_images, test_labels])
</code></pre>
<p>because the threads that pull data from the tfrecords have not been started.</p>
<p>Before you create your session, set up an op for initializing the variables and a thread coordinator:</p>
<pre><code>init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
coord = tf.train.Coordinator()
</code></pre>
<p>then at the very start of your session, before pulling any data from the tfrecords you run the op and start the queue runners:</p>
<pre><code>sess.run(init_op)
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
# main loop goes here, like training and evaluating
</code></pre>
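<p>And once you're done (after the main loop), stop the runners cleanly — a standard closing sketch for this pattern:</p>
<pre><code>coord.request_stop()   # signal the queue-runner threads to stop
coord.join(threads)    # wait for them to finish
</code></pre>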
|
python|tensorflow
| 0
|
2,240
| 69,448,084
|
Efficient way to populate missing indexes from pandas group by
|
<p>I grouped a column in a pandas dataframe by the number of occurrences of an event per hour of the day like so:</p>
<pre><code>df_sep.hour.groupby(df_sep.time.dt.hour).size()
</code></pre>
<p>Which gives the following result:</p>
<pre><code>time
2 31
3 6
4 7
5 4
6 38
7 9
8 5
9 31
10 8
11 2
12 5
13 30
14 1
15 1
16 28
18 1
20 4
21 29
Name: hour, dtype: int64
</code></pre>
<p>For plotting, I would like to complete the series for each hour of the day. ie, there are no occurrences at midnight (0). So for every missing hour, I would like to create that index and add zero to the corresponding value.</p>
<p>To solve this I created two lists (x and y) using the following loop, but it feels a bit hacky... is there a better way to solve this?</p>
<pre><code>x = []
y = []
for i in range(24):
if i not in df_sep.hour.groupby(df_sep.time.dt.hour).size().index:
x.append(i)
y.append(0)
else:
x.append(i)
y.append(df_sep.hour.groupby(df_sep.time.dt.hour).size().loc[i])
</code></pre>
<p>result:</p>
<pre><code>for i, j in zip(x, y):
print(i, j)
0 0
1 0
2 31
3 6
4 7
5 4
6 38
7 9
8 5
9 31
10 8
11 2
12 5
13 30
14 1
15 1
16 28
17 0
18 1
19 0
20 4
21 29
22 0
23 0
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.reindex.html" rel="nofollow noreferrer"><code>Series.reindex</code></a> with <code>range(24)</code>:</p>
<pre><code>df_sep.hour.groupby(df_sep.time.dt.hour).size().reindex(range(24), fill_value=0)
</code></pre>
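<p>That gives you the full 0–23 series in one step, ready for the plotting you mentioned, e.g.:</p>
<pre><code>counts = df_sep.hour.groupby(df_sep.time.dt.hour).size().reindex(range(24), fill_value=0)
counts.plot(kind='bar')  # one bar per hour, zeros included
</code></pre>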
|
python-3.x|pandas|pandas-groupby
| 2
|
2,241
| 69,521,760
|
Filtering grouped dataset by index column
|
<p>I'm trying to get a Pandas exercise done and it's driving me bonkers.</p>
<p>I have a dataset containing the number of cyclists that went by a certain zone of the city every hour of each day, so something like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Year</th>
<th>Month</th>
<th>Day</th>
<th>Hour</th>
<th>Zone 1</th>
<th>Zone 2</th>
<th>Zone 3</th>
</tr>
</thead>
<tbody>
<tr>
<td>2014</td>
<td>1</td>
<td>1</td>
<td>0:00</td>
<td>2</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<td>2014</td>
<td>1</td>
<td>1</td>
<td>1:00</td>
<td>3</td>
<td>1</td>
<td>2</td>
</tr>
<tr>
<td>2014</td>
<td>1</td>
<td>1</td>
<td>2:00</td>
<td>4</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>et cetera. There are much many more rows and columns. The "zone" columns contain how many cyclists were recorded for that zone at that time.</p>
<p>The exercise asks to group this dataframe by year, month, and day, and then take the sum on the grouped dataframe. I do that like this:</p>
<pre><code>grouped = data.groupby(["Year", "Month", "Day"]).sum()
</code></pre>
<p>where 'data' is the original, ungrouped dataframe. The resulting dataframe has tuples in the index columns, as the exercise text says it should. Printing grouped.head() returns this:
<a href="https://i.stack.imgur.com/fx2p8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fx2p8.png" alt="enter image description here" /></a></p>
<p>(I dropped the "Hour" column because the exercise says so.) I verified that the index indeed contains tuples by printing grouped.index, and it looks like this: [(2014,1,1), (2014,1,2), ...]</p>
<p>This is all good, but then the exercise asks to filter this dataframe so that only records from August 2017 are shown. I know I can do that by doing</p>
<pre><code>grouped.filter(some-function-here)
</code></pre>
<p>but the problem is, I am having a hard time understanding how I can filter based on the index column (which doesn't have a name and can't be referred to as you can to others, eg grouped["Auroransilta"]), especially because I'm not sure if I'm doing tuple comparison correctly. For example, I tried this way</p>
<pre><code>grouped.filter(lambda x: x > (2014, 1, 1) for x in grouped.index)
</code></pre>
<p>and I got this:</p>
<p><a href="https://i.stack.imgur.com/KsM88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KsM88.png" alt="enter image description here" /></a></p>
<p>Variations of that approach all result in an empty dataframe. Thinking I just was doing something wrong with the tuples, I tried to filter by some other column:</p>
<pre><code>grouped.filter(lambda x: x["Baana"] > 300 for x in grouped)
</code></pre>
<p>and that too resulted in the exact same empty dataframe. (The column "Baana" isn't in the screenshot but it is in the dataframe, and yes, there are rows with a count larger than 300). If I omit the for-loop, I get a TypeError saying that I'm not passing an iterable, so I guess it needs to be there even though I don't fully understand why (I thought filter would just apply the function I pass to every group in grouped.)</p>
<p>I have no idea how to fix this as I don't understand what I'm doing wrong.</p>
|
<p>Use <a href="https://pandas.pydata.org/docs/user_guide/timeseries.html#partial-string-indexing" rel="nofollow noreferrer"><code>partial string indexing</code></a>, with <code>DatetimeIndex</code>:</p>
<pre><code>df['datetime'] = pd.to_datetime(df[["Year", "Month", "Day"]])
df = df.drop(["Year", "Month", "Day","Hour"], axis=1)
print (df)
Zone 1 Zone 2 Zone 3 datetime
0 2 0 5 2014-01-01
1 3 1 2 2014-01-01
2 4 1 1 2014-01-01
df = df.groupby(["datetime"]).sum()
print (df)
Zone 1 Zone 2 Zone 3
datetime
2014-01-01 9 2 8
df = df['2014-08']
print (df)
Empty DataFrame
Columns: [Zone 1, Zone 2, Zone 3]
Index: []
</code></pre>
<p>For filtering by values, use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> — note that <code>DataFrame.filter</code> operates on labels (index and column names), not on the contents, which is why your lambda attempts returned empty frames:</p>
<pre><code>df = df[df["Baana"] > 300]
</code></pre>
|
python|pandas|dataframe
| 1
|
2,242
| 41,181,499
|
Get Pandas Duplicate Row Count with Original Index
|
<p>I need to find duplicate rows in a Pandas Dataframe, and then add an extra column with the count. Lets say we have a dataframe:</p>
<pre><code>>>print(df)
+----+-----+-----+-----+-----+-----+-----+-----+-----+
| | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|----+-----+-----+-----+-----+-----+-----+-----+-----|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 2 | 2 | 4 | 3 | 4 | 1 | 1 | 4 | 4 |
| 3 | 4 | 3 | 4 | 0 | 0 | 0 | 0 | 0 |
| 4 | 2 | 3 | 4 | 3 | 4 | 0 | 0 | 0 |
| 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 6 | 4 | 5 | 0 | 0 | 0 | 0 | 0 | 0 |
| 7 | 1 | 1 | 4 | 0 | 0 | 0 | 0 | 0 |
| 8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 9 | 4 | 3 | 4 | 0 | 0 | 0 | 0 | 0 |
| 10 | 3 | 3 | 4 | 3 | 5 | 5 | 5 | 0 |
| 11 | 5 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
| 12 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 13 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 |
| 14 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 15 | 1 | 3 | 5 | 0 | 0 | 0 | 0 | 0 |
| 16 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 17 | 3 | 3 | 4 | 4 | 0 | 0 | 0 | 0 |
| 18 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
+----+-----+-----+-----+-----+-----+-----+-----+-----+
</code></pre>
<p>The above frame would then become the one below with an additional column with the count. You can see that we are still preserving the index column.</p>
<pre><code>+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
| | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|----+-----+-----+-----+-----+-----+-----+-----+-----|-----|
| 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| 1 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 2 |
| 2 | 2 | 4 | 3 | 4 | 1 | 1 | 4 | 4 | 1 |
| 3 | 4 | 3 | 4 | 0 | 0 | 0 | 0 | 0 | 2 |
| 4 | 2 | 3 | 4 | 3 | 4 | 0 | 0 | 0 | 1 |
| 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 3 |
| 6 | 4 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 7 | 1 | 1 | 4 | 0 | 0 | 0 | 0 | 0 | 1 |
| 10 | 3 | 3 | 4 | 3 | 5 | 5 | 5 | 0 | 1 |
| 11 | 5 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 13 | 0 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 15 | 1 | 3 | 5 | 0 | 0 | 0 | 0 | 0 | 1 |
| 16 | 4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 17 | 3 | 3 | 4 | 4 | 0 | 0 | 0 | 0 | 1 |
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+
</code></pre>
<p>I've seen other solutions to this such as:</p>
<pre><code> df.groupby(list(df.columns.values)).size()
</code></pre>
<p>But that returns a matrix with gaps and with no initial index.</p>
|
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> first to convert the <code>index</code> to a column, and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.aggregate.html" rel="nofollow noreferrer"><code>aggregate</code></a> by <code>first</code> and <code>size</code>:</p>
<p>Also, since the groupby is over all columns, it is necessary to remove the <code>index</code> column with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.difference.html" rel="nofollow noreferrer"><code>difference</code></a>:</p>
<pre><code>print (df.columns.difference(['index']))
Index(['2', '3', '4', '5', '6', '7', '8', '9'], dtype='object')
print (df.reset_index()
.groupby(df.columns.difference(['index']).tolist())['index']
.agg(['first', 'size'])
.reset_index()
.set_index(['first'])
.sort_index()
.rename_axis(None))
2 3 4 5 6 7 8 9 size
0 0 0 0 0 0 0 0 0 2
1 2 0 0 0 0 0 0 0 2
2 2 4 3 4 1 1 4 4 1
3 4 3 4 0 0 0 0 0 2
4 2 3 4 3 4 0 0 0 1
5 5 0 0 0 0 0 0 0 3
6 4 5 0 0 0 0 0 0 1
7 1 1 4 0 0 0 0 0 1
10 3 3 4 3 5 5 5 0 1
11 5 4 0 0 0 0 0 0 1
13 0 4 0 0 0 0 0 0 1
15 1 3 5 0 0 0 0 0 1
16 4 0 0 0 0 0 0 0 1
17 3 3 4 4 0 0 0 0 1
</code></pre>
<p>If you need the count as the next column, <code>10</code>, use <code>rename</code>:</p>
<pre><code>#if necessary convert to str
last_col = str(df.columns.astype(int).max() + 1)
print (last_col)
10
print (df.reset_index()
.groupby(df.columns.difference(['index']).tolist())['index']
.agg(['first', 'size'])
.reset_index()
.set_index(['first'])
.sort_index()
.rename_axis(None)
.rename(columns={'size':last_col}))
2 3 4 5 6 7 8 9 10
0 0 0 0 0 0 0 0 0 2
1 2 0 0 0 0 0 0 0 2
2 2 4 3 4 1 1 4 4 1
3 4 3 4 0 0 0 0 0 2
4 2 3 4 3 4 0 0 0 1
5 5 0 0 0 0 0 0 0 3
6 4 5 0 0 0 0 0 0 1
7 1 1 4 0 0 0 0 0 1
10 3 3 4 3 5 5 5 0 1
11 5 4 0 0 0 0 0 0 1
13 0 4 0 0 0 0 0 0 1
15 1 3 5 0 0 0 0 0 1
16 4 0 0 0 0 0 0 0 1
17 3 3 4 4 0 0 0 0 1
</code></pre>
|
python|pandas|group-by|aggregate|multiple-columns
| 4
|
2,243
| 54,070,780
|
How to convert multiple list columns in a data frame into the given one?
|
<p>I have a DataFrame like this:</p>
<pre><code> Number String Aut
0 [12, 13] [hi are, ho to] ppppp
1 34 How qqqqq
2 35 are wwwwwww
</code></pre>
<p>I want to convert this into this:</p>
<pre><code> Number String Aut
0 12 hi are ppppp
1 13 ho to ppppp
2 34 How qqqqq
3 35 are wwwwwww
</code></pre>
<p>I tried this (based on <a href="https://stackoverflow.com/questions/27263805/pandas-when-cell-contents-are-lists-create-a-row-for-each-element-in-the-list">this reference</a>) but it's not working:</p>
<pre><code>res = df.set_index(['Aut'])['Number', 'String'].apply(pd.Series).stack()
</code></pre>
<p>Help will be appreciated.</p>
|
<p>The columns mix lists with scalars, so some preprocessing is needed first; then create the DataFrame with <code>chain</code> and <code>repeat</code>:</p>
<pre><code>n = [x if isinstance(x, list) else [x] for x in df['Number']]
s = [x if isinstance(x, list) else [x] for x in df['String']]
lens = [len(x) for x in n]
from itertools import chain
df = pd.DataFrame({
'Number' : list(chain.from_iterable(n)),
'String' : list(chain.from_iterable(s)),
'Aut' : df['Aut'].values.repeat(lens)
})
print (df)
Number String Aut
0 12 hi are ppppp
1 13 ho to ppppp
2 34 How qqqqq
3 35 are wwwwwww
</code></pre>
|
python|pandas|dataframe
| 0
|
2,244
| 66,148,933
|
Update specific column values based upon group by from different column in Pandas
|
<p>I have the below pandas data frame in Python.</p>
<p><a href="https://i.stack.imgur.com/6wkJr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6wkJr.png" alt="enter image description here" /></a></p>
<p>Looking for below output:</p>
<p><a href="https://i.stack.imgur.com/GOw6x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GOw6x.png" alt="enter image description here" /></a></p>
<p>Based upon the group in the last column, I need to pick the first value of the middle column and repeat it for the whole group.</p>
|
<p>Try:</p>
<pre><code>df['col1'] = df['col1'].mask(df['col2'].duplicated()).ffill()
</code></pre>
<p>Or:</p>
<pre><code>df['col1'] = df.groupby('col2')['col1'].transform('first')
</code></pre>
|
python|pandas|dataframe
| 2
|
2,245
| 66,099,929
|
Unexpected result sklearn StandardScaler
|
<p>I'm trying to test some scalers with the following code.
I expect a result like the blue distributions, but scaled.
What I get is the orange ones.
Can anybody help me?</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
x1=np.random.normal(loc=21,scale=0.2,size=(100,1))
x2=np.random.normal(loc=1000,scale=550,size=(100,1))
data=np.concatenate((x1,x2),axis=1)
df=pd.DataFrame(data,columns=['x1','x2'])
fig1, axs=plt.subplots(nrows=1, ncols=2)
axs[0].hist(df['x1'])
axs[1].hist(df['x2'])
scaler = StandardScaler()
scaler.fit(df)
df_trans=scaler.transform(df)
fig2, axs=plt.subplots(nrows=1,ncols=2)
axs[0].hist(df_trans[0],color='orange')
axs[1].hist(df_trans[1],color='orange')
</code></pre>
<p><a href="https://i.stack.imgur.com/tR5bA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tR5bA.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/U1FbC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/U1FbC.png" alt="enter image description here" /></a></p>
|
<p><code>scaler.transform</code> returns a NumPy array, and <code>df_trans[0]</code> selects its first <em>row</em>, not the first column. You should change the lines to:</p>
<pre><code>axs[0].hist(df_trans[:,0],color='orange') # all rows, first column
axs[1].hist(df_trans[:,1],color='orange') # all rows, second column
</code></pre>
<p>That will produce as follows:</p>
<p><a href="https://i.stack.imgur.com/1H4ld.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1H4ld.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|scikit-learn
| 1
|
2,246
| 66,201,579
|
Row-wise comparison against a list-type column
|
<p>Let's assume I have the following code:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'List type':[[1, 2, 3], [4, 5, 6], [7, 8, 9]], 'Integer type':[5, 4, 1]})
</code></pre>
<p>and resulting Pandas dataframe:</p>
<pre><code>| List-type | Integer-type |
| -------- | -------------|
| [1, 2, 3] | 5 |
| [4, 5, 6] | 4 |
| [7, 8, 9] | 1 |
</code></pre>
<p>Is there a way to compare the integer-type values against the respective list in the <strong>same row</strong> without using a for loop, or the <code>itertools</code> package? Basically what I want is a mask to filter for the rows where the integer is contained in the list. I could not get methods like <code>isin</code> (requires the list already as argument, which requires row-wise indexing) or general comparisons to work (compares the list against the integer) so far. Help is appreciated!</p>
|
<p>One way using <code>pandas.DataFrame.apply</code>:</p>
<pre><code>df["mask"] = df.apply(lambda x: x["Int-type"] in x["List-type"], axis=1)
print(df)
</code></pre>
<p>Output:</p>
<pre><code> List-type Int-type mask
0 [1, 2, 3] 5 False
1 [4, 5, 6] 4 True
2 [7, 8, 9] 1 False
</code></pre>
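<p>If you'd rather avoid the Python-level <code>apply</code>, a sketch using <code>explode</code> (available in pandas >= 0.25): expand the lists to one row per element, compare against the aligned integers, and collapse back per original row:</p>
<pre><code>exploded = df.explode("List-type")                     # one row per list element, index repeated
df["mask"] = (exploded["List-type"] == exploded["Int-type"]).groupby(level=0).any()
</code></pre>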
|
python|pandas
| 2
|
2,247
| 65,941,834
|
Pytorch: How to unflatten/get back the network from flattened network?
|
<p>I am using the following function to flatten the network:</p>
<pre><code>#############################################################################
# Flattening the NET
#############################################################################
def flattenNetwork(net):
flatNet = []
shapes = []
for param in net.parameters():
#if its WEIGHTS
curr_shape = param.cpu().data.numpy().shape
shapes.append(curr_shape)
if len(curr_shape) == 2:
param = param.cpu().data.numpy().reshape(curr_shape[0]*curr_shape[1])
flatNet.append(param)
elif len(curr_shape) == 4:
param = param.cpu().data.numpy().reshape(curr_shape[0]*curr_shape[1]*curr_shape[2]*curr_shape[3])
flatNet.append(param)
else:
param = param.cpu().data.numpy().reshape(curr_shape[0])
flatNet.append(param)
finalNet = []
for obj in flatNet:
for x in obj:
finalNet.append(x)
finalNet = np.array(finalNet)
return finalNet,shapes
</code></pre>
<p>The above function returns all the weights as a <code>numpy</code> column vector <code>finalNet</code> and <code>shapes</code> (list) of the network. I want to see the effect of weight modifications on the prediction accuracy. So, I change the weights. How can I copy this modified weight vector back to the original network? Please help. Thank you.</p>
|
<p>There is a difference between model definition (its <code>forward</code> function), and the parameter configuration (what's called model state, and is easily accessible as a dictionary using <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.state_dict" rel="nofollow noreferrer"><code>state_dict</code></a>).</p>
<p>You can get a model's state, as you did with your implementation of <code>flattenNetwork</code>. However, reverting this operation to recover the model definition (<em>i.e.</em> if you only have the weights and layer shapes) is, for pretty much all models, not possible.</p>
<p>Now, assuming you do - still - have access to <code>net</code>. My advice is that work with <code>net.state_dict()</code> directly, modify it, then load the dictionary of weights back with <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.load_state_dict" rel="nofollow noreferrer"><code>load_state_dict</code></a>. This way, you will avoid having to deal with serializing the model's parameters yourself.</p>
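<p>That said, if you do want to invert your <code>flattenNetwork</code> output directly, a minimal sketch — it assumes <code>flat_weights</code> and <code>shapes</code> come from your function above, in the same order as <code>net.parameters()</code>:</p>
<pre><code>import numpy as np
import torch

def load_flat_weights(net, flat_weights, shapes):
    offset = 0
    with torch.no_grad():  # we are overwriting parameters, not training
        for param, shape in zip(net.parameters(), shapes):
            n = int(np.prod(shape))
            chunk = flat_weights[offset:offset + n].reshape(shape)
            param.copy_(torch.from_numpy(chunk).to(param.device, param.dtype))
            offset += n
    return net
</code></pre>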
|
python|pytorch|flatten
| 0
|
2,248
| 66,124,374
|
Pandas: add elements to index at even intervals
|
<p>I have a dataframe that looks like this:</p>
<pre><code> B
A
0.00 5.7096
7.33 8.0280
25.82 15.7212
43.63 19.5156
55.24 20.1888
</code></pre>
<p>and I want to add rows with the index at regular intervals (say by 10), so that I can then interpolate the column B with method = 'index'. My desired output is this:</p>
<pre><code> B
A
0.00 5.7096
7.33 8.0280
10.00 NaN
20.00 NaN
25.82 15.7212
30.00 NaN
40.00 NaN
43.63 19.5156
50.00 NaN
55.24 20.1888
60.00 NaN
</code></pre>
<p>I haven't found any reindex option that <em>adds</em> index elements instead of <em>changing</em> them. My best solution is to create a new index, append it to the original dataframe, sort, and remove duplicates (if any), but I'm pretty sure there is a better solution.</p>
<pre><code>step = 10
idx = pd.DataFrame(index = df.index).reindex([round(i, 0) for i in np.arange(df.index[0], df.index[-1] + step, step)])
df = df.append(idx)
df.sort_index(inplace = True)
df = df[~df.index.duplicated()]
</code></pre>
<p>Any suggestions? Thanks</p>
|
<p>Effectively do a union by doing an outer join.</p>
<pre><code>df = pd.read_csv(io.StringIO("""A B
0.00 5.7096
7.33 8.0280
25.82 15.7212
43.63 19.5156
55.24 20.1888"""), sep="\s+").set_index("A")
df = df.join(pd.DataFrame(index=pd.RangeIndex(0,60, 10)), how="outer")
</code></pre>
<p>(If you also want the trailing 60 row from your desired output, extend the stop of the range, e.g. <code>pd.RangeIndex(0, 70, 10)</code>.)</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: right;"></th>
<th style="text-align: right;">B</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: right;">0</td>
<td style="text-align: right;">5.7096</td>
</tr>
<tr>
<td style="text-align: right;">7.33</td>
<td style="text-align: right;">8.028</td>
</tr>
<tr>
<td style="text-align: right;">10</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">20</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">25.82</td>
<td style="text-align: right;">15.7212</td>
</tr>
<tr>
<td style="text-align: right;">30</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">40</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">43.63</td>
<td style="text-align: right;">19.5156</td>
</tr>
<tr>
<td style="text-align: right;">50</td>
<td style="text-align: right;">nan</td>
</tr>
<tr>
<td style="text-align: right;">55.24</td>
<td style="text-align: right;">20.1888</td>
</tr>
</tbody>
</table>
</div>
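<p>From here, the interpolation step you mentioned is just:</p>
<pre><code>df['B'] = df['B'].interpolate(method='index')
</code></pre>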
|
python|pandas|reindex
| 2
|
2,249
| 58,227,451
|
Need to convert time to h:m:s
|
<p>I have time values with missing hours like 36:21, or an incomplete hour format like 1:23:30, and I would like to convert them to the standard format like 00:00:00, but I don't know why my code did not work.</p>
<p>I need to convert to the H:M:S format like this --> 00:00:00;
len(x) == 6 i.e. 36:21; len(x) == 7 i.e. 1:23:30.</p>
<p>I want to get 00:36:21 and 01:23:30:</p>
<pre><code>for x in df7[' Chip Time']:
if len(x) == 6:
x = '00:'+ x
elif len(x) == 7:
x = '0'+ x
print (df7[' Chip Time'])
</code></pre>
<p>Support needed. Thank you!</p>
|
<p>You should use the <code>apply</code> method; it is much faster than iterating. (Your loop also never changes the DataFrame, because reassigning the loop variable <code>x</code> does not write the new string back into the column.)</p>
<p>I also recommend reading <a href="https://thispointer.com/python-how-to-pad-strings-with-zero-space-or-some-other-character/" rel="nofollow noreferrer">https://thispointer.com/python-how-to-pad-strings-with-zero-space-or-some-other-character/</a></p>
<p>This should work:</p>
<p><code>df7[' Chip Time'] = df7[' Chip Time'].apply(lambda x : x.rjust(5, '0').rjust(6, ':').rjust(8, '0'))</code></p>
<p>Step by step on <code>'36:21'</code>: <code>rjust(6, ':')</code> left-pads it to <code>':36:21'</code>, then <code>rjust(8, '0')</code> left-pads to <code>'00:36:21'</code>; a seven-character value like <code>'1:23:30'</code> only gets the final zero pad, giving <code>'01:23:30'</code>.</p>
|
python|pandas
| 0
|
2,250
| 69,010,072
|
Plotting matplotlib subplots
|
<p>I'm beginning with matplotlib and subplots. Could you tell me how to assign the 2 plots generated by this code to 2 columns:</p>
<pre><code># Bar Plot for Firm Performance
fig = plt.figure(figsize = (6, 4))
title = fig.suptitle("Firm performance", fontsize=14)
fig.subplots_adjust(top=0.85, wspace=0.3)
fig, axs = plt.subplots(1, 2)
dfSPSSactive['Q7_12_1'].value_counts().sort_index().plot(kind='bar')
dfSPSSactive['Q7_12_2'].value_counts().sort_index().plot(kind='bar', color='red')
</code></pre>
<p><a href="https://i.stack.imgur.com/Xu0JG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xu0JG.png" alt="enter image description here" /></a></p>
|
<p>You just need to pass <code>ax</code> parameter to <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.plot.html" rel="nofollow noreferrer"><strong><code>pandas.DataFrame.plot</code></strong></a>:</p>
<pre><code>fig, axs = plt.subplots(1, 2, figsize = (6, 4))
title = fig.suptitle("Firm performance", fontsize=14)
fig.subplots_adjust(top=0.85, wspace=0.3)
dfSPSSactive['Q7_12_1'].value_counts().sort_index().plot(ax=axs[0], kind='bar')
dfSPSSactive['Q7_12_2'].value_counts().sort_index().plot(ax=axs[1], kind='bar', color='red')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/rK0Nl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rK0Nl.png" alt="enter image description here" /></a></p>
|
python|pandas|dataframe|matplotlib|seaborn
| 0
|
2,251
| 69,278,507
|
Unfreeze model Layer by Layer in PyTorch
|
<p>I'm working with a PyTorch model from <a href="https://github.com/yitu-opensource/T2T-ViT" rel="nofollow noreferrer">here</a> (T2T_ViT_7).</p>
<p>I'm trying to freeze all layers except the last (head) layer and train it on my dataset. I want to evaluate its performance, and then unfreeze layers one by one and train it each time and see how it performs.</p>
<p>To initially freeze all the layers and and just unfreeze the head layer, I used:</p>
<pre><code>for param in model.parameters():
param.requires_grad_(False)
model.head.requires_grad_(True)
</code></pre>
<p>Now I want to start from the bottom, and start unfreezing layers one by one. How can I do this? Do I use model.modules() or maybe model.children()?</p>
<p>Thank you!</p>
|
<p>If by <em>layers</em> you mean each block inside <code>model.blocks</code>, then you can use <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=children#torch.nn.Module.children" rel="nofollow noreferrer"><code>nn.Module.children</code></a> (or <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=named_children#torch.nn.Module.named_children" rel="nofollow noreferrer"><code>nn.Module.named_children</code></a>). This returns all direct submodules, while <a href="https://pytorch.org/docs/stable/generated/torch.nn.Module.html?highlight=modules#torch.nn.Module.modules" rel="nofollow noreferrer"><code>nn.Module.modules</code></a> returns <em>all</em> submodules recursively.</p>
<p>Since <code>model.blocks</code> is a <code>nn.ModuleList</code>, you can slice the blocks to select only the last <code>n</code> layers. Something like this:</p>
<pre><code>model.blocks[-n:].requires_grad_(False)
</code></pre>
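<p>For the layer-by-layer unfreezing itself, a minimal sketch of the loop (<code>train_and_evaluate</code> is a hypothetical placeholder for your own training and evaluation step):</p>
<pre><code># unfreeze one more block each round, starting from the last block
for n in range(1, len(model.blocks) + 1):
    model.blocks[-n:].requires_grad_(True)  # unfreeze the last n blocks
    train_and_evaluate(model)               # hypothetical: your training + eval
</code></pre>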
|
machine-learning|pytorch|layer
| 2
|
2,252
| 44,563,707
|
How to create pandas DataFrame with index from the list of tuples
|
<p>What would be the best way to create a pandas DataFrame with an index from records?
Here is my sample:</p>
<pre><code>sales = [('Jones LLC', 150, 200, 50),
('Alpha Co', 200, 210, 90),
('Blue Inc', 140, 215, 95)]
labels = ['account', 'Jan', 'Feb', 'Mar']
df = pd.DataFrame.from_records(sales, columns=labels)
</code></pre>
<p>I need 'account' to be the index here (not a column).
Thanks</p>
|
<p>Simplest is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="noreferrer"><code>set_index</code></a>:</p>
<pre><code>df = pd.DataFrame.from_records(sales, columns=labels).set_index('account')
print (df)
Jan Feb Mar
account
Jones LLC 150 200 50
Alpha Co 200 210 90
Blue Inc 140 215 95
</code></pre>
<p>Or select by list comprehensions:</p>
<pre><code>labels = [ 'Jan', 'Feb', 'Mar']
idx = [x[0] for x in sales]
data = [x[1:] for x in sales]
df = pd.DataFrame.from_records(data, columns=labels, index=idx)
print (df)
Jan Feb Mar
Jones LLC 150 200 50
Alpha Co 200 210 90
Blue Inc 140 215 95
</code></pre>
|
python-2.7|pandas|dataframe
| 6
|
2,253
| 44,653,239
|
Tensorflow 1.2 assigning variables
|
<p>As the title says I'm using tensorflow version 1.2 built from source for my machine. I don't believe that affects my question though.</p>
<p>What is the difference between these two chunks of code?
With the top one I never get values assigned while training, but the bottom one works. I am copying all my epoch data over to the GPU and then grabbing the data for each batch as I need it, so this code runs at the beginning of every batch inside the same session.</p>
<p>The code is in python and all of this is defined inside my model class.
All of the self.data objects are 3D float32 tensors. </p>
<pre><code> ## the index i.e the current step in the epoch
index = tf.to_int32(self.step, name="step_to_int")
## code that doesn't work
tf.assign(self.input_data, self.all_input_data[index])
tf.assign(self.targets, self.all_target_data[index])
## code that works
self.input_data = self.all_input_data[index]
self.targets = self.all_target_data[index]
</code></pre>
|
<p>Remember that pretty much everything is an operation in TensorFlow. I believe the issue in your code is that you never run the assignment operation (you just evaluate the <code>input_data</code> tensor as it has been initialised).</p>
<p>You then need to assign the return of the assignment method to a variable:</p>
<pre><code>self.input_data = tf.assign(self.input_data, self.all_input_data[index])
</code></pre>
<p>This variable will hold both the new value and the reassignment operation, so whenever you evaluate it, it will update its value.</p>
<p>Quoting the doc string:</p>
<blockquote>
<p>Returns:
A <code>Tensor</code> that will hold the new value of 'ref' after
the assignment has completed.</p>
</blockquote>
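<p>Alternatively, you can keep the assignment ops and run them explicitly in the session. A minimal sketch, reusing the attribute names from the question:</p>
<pre><code>assign_inputs = tf.assign(self.input_data, self.all_input_data[index])
assign_targets = tf.assign(self.targets, self.all_target_data[index])
# the assignment only happens when the ops are actually run
sess.run([assign_inputs, assign_targets])
</code></pre>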
|
python|tensor|tensorflow
| 0
|
2,254
| 60,830,646
|
Indexing Dataframe
|
<p>I am getting SettingWithCopyWarning:</p>
<pre><code>A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
</code></pre>
<p>Here is my dataframe(Capacity):</p>
<pre><code> A B C D E
2020-01-01 00:00:00 4.0 66 15 3.8 3.2
2020-01-01 01:00:00 4.0 66 15 3.8 3.2
2020-01-01 02:00:00 4.0 66 15 3.8 3.2
.
.
.
2020-03-23 22:00:00 4.0 66 15 3.8 3.2
2020-03-23 23:00:00 4.0 66 15 3.8 3.2
</code></pre>
<p>I want to change specific values in column A based on the date. I mean, if the index's month is 3 or greater, change the value to 20.40.</p>
<pre><code>Capacity['A'][(Capacity.index.month >= 3)] = 20.40
</code></pre>
<p>How can I write this line to avoid getting a warning?</p>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a> for set values of <code>DataFrame</code>, not <code>Series</code>:</p>
<pre><code>Capacity.loc[(Capacity.index.month >= 3), 'A'] = 20.40
</code></pre>
<p>More information in <a href="https://pandas.pydata.org/docs/user_guide/indexing.html#evaluation-order-matters" rel="nofollow noreferrer"><code>evaluation order matters</code></a>.</p>
|
pandas|indexing
| 0
|
2,255
| 60,802,823
|
Multi-column input to ML.PREDICT for a TensorFlow model in BigQueryML
|
<p>We have trained a model in Google Cloud AutoML (a tool that we like a lot) and successfully exported it to GCS, and then created the model in BigQuery using the below command:</p>
<pre><code>create or replace model my_dataset.my_bq_ml_model
options(model_type='tensorflow',
model_path='my gcs path to exported tensorflow model'))
</code></pre>
<p>However, when we use BigQueryML to try and run some predictions using the model, we are unsure how to format the multiple features that our model uses into the single "inputs" string the exported Tensorflow model accepts in BigQuery.</p>
<pre><code>select *
from ml.predict(model my_project.my_dataset.my_bq_ml_model,
(
select 'How do we format this?' as inputs
from my_rows_to_predict
))
</code></pre>
<p>Has anyone done this yet?</p>
<p>This is similar to this question, which remains open:
<a href="https://stackoverflow.com/questions/60570155/multi-column-input-to-ml-predict-for-a-tensorflow-model-in-bigquery-ml">Multi-column input to ML.PREDICT for a TensorFlow model in BigQuery ML</a></p>
<p>Thank you all.</p>
|
<p>After you load the model into BigQuery ML, click on the model in the BigQuery UI and switch over to the "Schema" tab. This should tell you what columns the model wants.</p>
<p>Alternatively, run the <code>saved_model_cli</code> program on the model (a Python utility that ships with TensorFlow) to see what the supported signature is:</p>
<pre><code>saved_model_cli show --dir $export_path --all
</code></pre>
|
tensorflow|machine-learning|google-cloud-platform|google-bigquery|google-cloud-automl
| 0
|
2,256
| 60,939,280
|
Convert Multi-Index Pandas Dataframe to JSON
|
<p>Consider a Pandas <code>DataFrame</code> with <code>MultiIndex</code>: </p>
<pre><code> virtual_device_135 virtual_device_136
tag_5764 tag_5764
timestamp
31/03/2020 02:10:30 -0.97 NaN
31/03/2020 02:10:35 NaN 0.98
31/03/2020 02:10:40 -0.97 NaN
31/03/2020 02:10:45 NaN -0.98
31/03/2020 02:10:50 -0.97 NaN
</code></pre>
<p>The above <code>DataFrame</code> needs to be converted into a <code>json</code> which looks like this:</p>
<pre><code>"bodyContent": [
{
"time": "31/03/2020 02:17:01",
"tag_5764_virtual_device_135": -0.97
},
{
"time": "31/03/2020 02:17:12",
"tag_5764_virtual_device_135": -0.97
},
{
"time": "31/03/2020 02:17:22",
"tag_5764_virtual_device_135": -0.97
},
{
"time": "31/03/2020 02:18:37",
"tag_5764_virtual_device_136": -0.98
},
{
"time": "31/03/2020 02:18:47",
"tag_5764_virtual_device_136": -0.98
},
{
"time": "31/03/2020 02:18:57",
"tag_5764_virtual_device_136": -0.98
}
]
</code></pre>
<p>Currently, I am splitting the DataFrame, renaming the columns, merging, and then converting to JSON. </p>
<p>Is there a better way to do this in Pandas? </p>
<p>Any help is appreciated!</p>
|
<p>I found it can be done as follows:</p>
<p>If the <code>DataFrame</code> is <code>df</code>:</p>
<pre class="lang-py prettyprint-override"><code>import json  # needed for json.loads

df.columns = ['_'.join(col) for col in df.columns]
df.reset_index(inplace=True)
df_list = json.loads(df.to_json(orient='records'))
body_content_list = []  # initialise the output list
for each in df_list:
    body_content_list.append(each)
</code></pre>
<p>Hope this is useful for someone. </p>
|
python|pandas|pandas-groupby|multi-index
| 1
|
2,257
| 60,927,247
|
Tensorflow LSTM: How to use different weights for each batch?
|
<p>I'm talking about the <code>tf.keras.layers.LSTM</code> implementation, as I want to use <code>cuDNN</code> for my batched LSTM.</p>
<p>Right now, I use a "hand made" LSTM implementation, because I want to have different weights/biases for each batch. Do you know a way to use TensorFlow's LSTM implementation with a unique set of weights/biases for each batch?</p>
|
<p>Maybe you can use something like this. It is an example of a per-sample fully-connected layer from a CNN:</p>
<pre><code>def dense_fc4(n_objects):
    # Xavier-initialised kernel: one 1024x512 weight matrix per object
    initializer = lambda: tf.contrib.layers.xavier_initializer()(shape=(1024, 512))
    return tf.Variable(initial_value=initializer, name='fc4/kernel',
                       shape=(n_objects.shape[0], 1024, 512))

# build one weight tensor per sample in the batch
W4 = tf.map_fn(dense_fc4, samples_flat)

b4 = tf.get_variable('fc4/bias', shape=512, initializer=tf.zeros_initializer())
fc4 = tf.add(tf.matmul(samples_flat, W4), b4)
fc4 = tf.nn.relu(fc4)
</code></pre>
|
python|tensorflow|lstm|tensorflow2.0
| 0
|
2,258
| 71,639,870
|
How to transpose and merge two dataframes with pandas - obtaining a key error
|
<p>I have a data frame (file1.txt) like this:</p>
<pre><code>identifer 1 2 3
Fact1 494 43 3
Fact2 383 32 5
Fact3 384 23 5
Fact4 382 21 7
</code></pre>
<p>And another data frame (file2.txt) like this:</p>
<pre><code>Sample Char1 Char2 Char3
1 4 5 5
2 5 2 4
3 5 6 2
4 2 4 4
</code></pre>
<p>the output should look like this:</p>
<pre><code>Sample Fact1 Fact2 Fact3 Char1 Char2 Char3
1 494 383 384 4 5 5
2 43 32 5 5 2 4
3 384 23 5 5 6 2
</code></pre>
<p>I wrote:</p>
<pre><code>#to transpose Table1
df = pd.read_csv('file1.txt', sep='\t',header=0)
df2 = df.T
#To read in table 2
df3 = pd.read_csv('file2.txt', sep='\t',header=0)
df4 = df2.merge(df3,left_on = 'identifier',right_on='Sample',how='inner')
print(df4)
</code></pre>
<p>And I'm getting the error: <code>'KeyError: identifier'</code></p>
<p>When I print the columns of df2, i.e. the transposed data set, I can see that the columns are just the first row of data, and not the header, and the identifier row is the last row listed in the transposed matrix. Could someone explain to me how to transpose and merge these data frames? I was trying to follow a SO answer that said to <code>.set_index()</code> and then transpose, but when I do <code>df2 = df.set_index('identifier').T</code> I'm getting the same error. Following another SO suggestion I was trying <a href="https://stackoverflow.com/questions/20375561/joining-pandas-dataframes-by-column-names">here</a>, I changed from merge to join so I did <code>df2.join(df3.set_index['Sample'],on='identifier)</code> but then I'm getting other errors (in this error 'method object is not subscriptable') so I'm just stuck and would appreciate insight.</p>
|
<p>You need to set the index, convert the transposed column labels to numeric so they align with <code>df2['Sample']</code>, and then join:</p>
<pre><code>df1 = df1.set_index('identifer')
df1.columns = df1.columns.astype(float)
out = df1.T.join(df2.set_index('Sample'))#.reset_index()
Out[82]:
Fact1 Fact2 Fact3 Fact4 Char1 Char2 Char3
1.0 494 383 384 382 4 5 5
2.0 43 32 23 21 5 2 4
3.0 3 5 5 7 5 6 2
</code></pre>
|
python|pandas
| 3
|
2,259
| 69,898,397
|
Adding a zero as the first element of a numpy array, or a way to create an array keeping the first position as zero
|
<p>I wrote this function to look for some specific characters in a string; if one is found, it adds a value at that position in the array, e.g.:</p>
<pre><code>def skew_array(text):
import numpy as np
skew = np.zeros(len(text))
for i in range(0, len(text)):
if text[i] == 'G':
skew[i] += 1
elif text[i] == 'C':
skew[i] -= 1
else:
skew[i] = 0
return np.insert(skew, 0, 0)
>> skew_array('gacaattagcaa'.upper())
array([ 0., 1., 0., -1., 0., 0., 0., 0., 0., 1., -1., 0., 0.])
</code></pre>
<p>I am not sure if this is the right way to do it (inserting the zero afterwards) or if there is a way to just keep the first position as zero when creating the array.</p>
<p>I appreciate any tip!
Thank you for your time.</p>
<p>PS - I was using a list, but it is very bad for memory in bigger strings!</p>
|
<p>I think functions that change the flat size of arrays (like <code>np.insert</code>) are pretty slow, so I'd create <code>skew</code> one element larger from the start. You can also use a dict to avoid an <code>elif</code> soup:</p>
<pre><code>import numpy as np
def skew_array(text, dict_):
skew = np.zeros(len(text)+1)
for i, char in enumerate(text, start=1):
skew[i] = dict_.get(char, 0)
return skew
skew_dict={"G":1, "C":-1}
code='gacaattagcaa'.upper()
print(skew_array(code, skew_dict))
</code></pre>
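<p>If you want to avoid the Python loop entirely, here is a fully vectorized sketch (assuming the text is plain ASCII):</p>
<pre><code>import numpy as np

def skew_array_vectorized(text):
    # view the string as an array of single-byte characters
    chars = np.frombuffer(text.encode('ascii'), dtype='S1')
    skew = np.zeros(len(text) + 1)
    skew[1:] = (chars == b'G').astype(float) - (chars == b'C').astype(float)
    return skew

print(skew_array_vectorized('gacaattagcaa'.upper()))
</code></pre>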
|
arrays|numpy
| 1
|
2,260
| 43,331,335
|
Subsetting DataFrame based on column names of another DataFrame
|
<p>I have two DataFrames and I want to subset <code>df2</code> based on the column names that intersect with the column names of <code>df1</code>. In <code>R</code> this is easy.</p>
<p><code>R</code> code:</p>
<pre><code>df1 <- data.frame(a=rnorm(5), b=rnorm(5))
df2 <- data.frame(a=rnorm(5), b=rnorm(5), c=rnorm(5))
df2[names(df2) %in% names(df1)]
a b
1 -0.8173361 0.6450052
2 -0.8046676 0.6441492
3 -0.3545996 -1.6545289
4 1.3364769 -0.4340254
5 -0.6013046 1.6118360
</code></pre>
<p>However, I'm not sure how to do this in <code>pandas</code>. </p>
<p><code>pandas</code> attempt: </p>
<pre><code>df1 = pd.DataFrame({'a': np.random.standard_normal((5,)), 'b': np.random.standard_normal((5,))})
df2 = pd.DataFrame({'a': np.random.standard_normal((5,)), 'b': np.random.standard_normal((5,)), 'c': np.random.standard_normal((5,))})
df2[df2.columns in df1.columns]
</code></pre>
<p>This results in <code>TypeError: unhashable type: 'Index'</code>. What's the right way to do this?</p>
|
<p>If you need a true intersection, since <code>.columns</code> yields an Index object which supports basic set operations, you can use <code>&</code>, e.g.</p>
<pre><code>df2[df1.columns & df2.columns]
</code></pre>
<p>or equivalently with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.intersection.html" rel="nofollow noreferrer"><code>Index.intersection</code></a></p>
<pre><code>df2[df1.columns.intersection(df2.columns)]
</code></pre>
<p>However if you are guaranteed that <code>df1</code> is just a column subset of <code>df2</code> you can directly use</p>
<pre><code>df2[df1.columns]
</code></pre>
<p>or if assigning,</p>
<pre><code>df2.loc[:, df1.columns]
</code></pre>
<p><strong>Demo</strong></p>
<pre><code>>>> df2[df1.columns & df2.columns]
a b
0 1.952230 -0.641574
1 0.804606 -1.509773
2 -0.360106 0.939992
3 0.471858 -0.025248
4 -0.663493 2.031343
>>> df2.loc[:, df1.columns]
a b
0 1.952230 -0.641574
1 0.804606 -1.509773
2 -0.360106 0.939992
3 0.471858 -0.025248
4 -0.663493 2.031343
</code></pre>
|
python|pandas|dataframe
| 2
|
2,261
| 43,444,291
|
Strange thing happens when i restore my model in Tensorflow
|
<p>I just want to load my previously saved model and train it further. My code works just fine until the restoring step; things become strange when I use ‘sess.run’: the program ends immediately without executing it.</p>
<p>But when I removed my AdamOptimizer op, ‘sess.run’ came back to work.</p>
<p>Why?</p>
<p>Here is the code:</p>
<pre><code>ckpt_state = tf.train.get_checkpoint_state(last_checkpoint_path)
if not ckpt_state or not ckpt_state.model_checkpoint_path:
print('No check point files are found!')
return
ckpt_files = ckpt_state.all_model_checkpoint_paths
num_ckpt = len(ckpt_files)
if num_ckpt < 1:
print('No check point files are found!')
return
low_res_holder = tf.placeholder(tf.float32, shape=[BATCH_SIZE, INPUT_SIZE, INPUT_SIZE, NUM_CHENNELS])
high_res_holder = tf.placeholder(tf.float32, shape=[BATCH_SIZE, LABEL_SIZE, LABEL_SIZE, NUM_CHENNELS])
keep_prob = tf.placeholder(tf.float32)
is_training = tf.placeholder("bool", shape=[])
global_step = tf.Variable(0, trainable=False, name='global_step')
inferences = models.creat_Dense_Modelpatches(low_res_holder, 13, is_training, keep_prob)
training_loss = models.loss(inferences, high_res_holder, name='training_loss')
low_res_batches, high_res_batches = batch_queue_for_testing(TESTING_DATA_PATH)
learning_rate = tf.train.inverse_time_decay(0.001, global_step, 10000, 2)
train_step = tf.train.AdamOptimizer(learning_rate).minimize(training_loss, global_step=global_step)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
sess.run(tf.global_variables_initializer())
tf.train.start_queue_runners(sess=sess)
saver = tf.train.Saver(tf.global_variables())
ckpt_file = ckpt_files[-1]
saver.restore(sess, ckpt_file)
low_res_images, high_res_images = sess.run([low_res_batches, high_res_batches])
print("thie code has ran this line...")
</code></pre>
<p>When I ran this code with</p>
<pre><code>train_step = tf.train.AdamOptimizer(learning_rate).minimize(training_loss, global_step=global_step)
</code></pre>
<p>The output would be</p>
<pre><code>I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:05:00.0)
mt@sj408:~/JP/DR/DR$
</code></pre>
<p>But when the train_step op is removed, the output is like this:</p>
<pre><code>I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:05:00.0)
thie code has ran this line...
mt@sj408:~/JP/DR/DR$
</code></pre>
|
<p>You may need to join all the threads you used for your asynchronous execution. Here's an example snippet from (<a href="https://www.tensorflow.org/programmers_guide/reading_data" rel="nofollow noreferrer">https://www.tensorflow.org/programmers_guide/reading_data</a>)</p>
<pre><code>with tf.Session() as sess:
# Start populating the filename queue.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(coord=coord)
for i in range(1200):
# Retrieve a single instance:
example, label = sess.run([features, col5])
coord.request_stop() # <==== You are missing this
coord.join(threads) # <==== And this
</code></pre>
<p>If this doesn't fix your issue, it would be helpful to provide a minimal working example that I can debug locally.</p>
|
python|tensorflow|deep-learning
| 0
|
2,262
| 72,208,712
|
Drop all rows id pandas df except ones mentioned in another df
|
<p>I have two dataframes, one containing a lot of columns</p>
<pre><code>df1:
id age topic date text
1 23 Student 1.1. Lorem
2 19 Student 1.2. Cupcake
20 19 Student 1.2. Lorem Ipsum
190 21 Student 11.1. Cupcake Ipsum
</code></pre>
<p>And one with two columns</p>
<pre><code>df2:
id count
1 105
20 4843
31 361
</code></pre>
<p>What I'm trying to accomplish here is to drop all the rows from df1 that are not mentioned in df2 (their id is not there).</p>
<pre><code>result_df:
id age topic date text
1 23 Student 1.1. Lorem
20 19 Student 1.2. Lorem Ipsum
</code></pre>
<p>I've tried this but it's not working:</p>
<pre><code>result_df = df1.drop(df1[df1['id'] != df2['id']].index)
</code></pre>
<p>Could you please help me out?</p>
|
<pre><code>result_df = df1.merge(df2['id'])
</code></pre>
<p>Given:</p>
<pre><code>df1:
id age topic
0 1 23 Student
1 2 19 Student
2 20 19 Student
3 190 21 Student
df2:
id count
0 1 105
1 20 4843
2 31 361
</code></pre>
<p>Doing:</p>
<pre><code>result_df = df1.merge(df2['id'])
print(result_df)
</code></pre>
<p>Output:</p>
<pre><code> id age topic
0 1 23 Student
1 20 19 Student
</code></pre>
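<p>An equivalent filter with <code>isin</code>, which also preserves <code>df1</code>'s row order:</p>
<pre><code>result_df = df1[df1['id'].isin(df2['id'])]
</code></pre>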
|
python|pandas|dataframe|drop
| 1
|
2,263
| 72,318,923
|
Pandas/Openpyxl - Save Current Date into xlsx Filename
|
<p>Trying to save an xlsx file and include the current date in the file name during the process. Currently, I'm using the below code but I receive the error <code>invalid format string</code> - uncertain what format I can use to accomplish this.</p>
<p>I saw this method recommended in another thread but it doesn't work for me. I've tried several other solutions as well but nothing seems to work. Any guidance would be appreciated.</p>
<pre><code>from openpyxl import load_workbook
from datetime import datetime, date
import os
from glob import glob
import pandas as pd
file = glob(
'C:\\Users\\all*.xlsx')[0]
wb1 = load_workbook(file)
ws1 = wb1.worksheets[0]
for row in ws1['A2':'D5']:
for cell in row:
cell.value = None
wb1.save('file1'+now.strftime("%Y%m%d%")+'.xlsx')
</code></pre>
|
<p>The error is in the <code>save</code> command, where you have an extra <code>%</code> at the end. Also, just <code>now</code> is not sufficient; it needs to be called with <code>()</code>. For the code above, it also needs the <code>datetime.</code> prefix. So, change the last line from....</p>
<pre><code>wb1.save('file1'+now.strftime("%Y%m%d%")+'.xlsx')
</code></pre>
<p>to</p>
<pre><code>wb1.save('file1'+datetime.now().strftime("%Y%m%d")+'.xlsx')
</code></pre>
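<p>Equivalently, with an f-string:</p>
<pre><code>wb1.save(f"file1{datetime.now():%Y%m%d}.xlsx")
</code></pre>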
|
pandas|openpyxl|python-3.10
| 0
|
2,264
| 50,602,820
|
How to make a new pandas DF column conditionally based on two other columns
|
<p>I have a dataframe which looks like this:</p>
<pre><code>col 1 | col2 | col3 | col4 | col5
'abc' | 1 | 20 | 10 | 15
'abc' | 2 | 25 | 5 | 30
'def' | 1 | 340 | 12 | 22
'def' | 2 | 185 | 16 | 120
...
</code></pre>
<p>I'd like to create another column <code>col6</code> which is based on the conditional: if <code>col2</code> ==1, then <code>col3</code> * <code>col5</code>, otherwise 0; if <code>col2</code> == 2 then <code>col4</code> * <code>col5</code>, otherwise 0. So the resulting df should look like:</p>
<pre><code>col 1 | col2 | col3 | col4 | col5 | col6
'abc' | 1 | 20 | 10 | 15 | 300
'abc' | 2 | 25 | 5 | 30 | 150
'def' | 1 | 340 | 12 | 22 | 7480
'def' | 2 | 185 | 16 | 120 | 1920
...
</code></pre>
<p>The reason it should return 0 if <code>col2</code> is neither 1 nor 2 is just in case <code>col2</code> doesn't have a 1 or a 2.</p>
|
<p>Try this,</p>
<pre><code>df.loc[df['col2'] ==1,'col6']=df['col3']*df['col5']
df.loc[df['col2'] ==2,'col6']=df['col4']*df['col5']
df['col6']=df['col6'].fillna(0)
</code></pre>
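<p>A vectorized alternative with <code>np.select</code>, which also handles the "otherwise 0" case directly:</p>
<pre><code>import numpy as np

df['col6'] = np.select(
    [df['col2'].eq(1), df['col2'].eq(2)],   # conditions, checked in order
    [df['col3'] * df['col5'], df['col4'] * df['col5']],
    default=0,                              # value when col2 is neither 1 nor 2
)
</code></pre>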
|
python|pandas
| 2
|
2,265
| 50,518,309
|
Tensorflow gradient through while_loop
|
<p>I've got a tensorflow model where the output of a layer is a 2d tensor, say <code>t = [[1,2], [3,4]]</code>.</p>
<p>The next layer expects an input which consists of every row combination of this tensor. That is, I need to turn it into <code>t_new = [[1,2,1,2], [1,2,3,4], [3,4,1,2], [3,4,3,4]]</code>.</p>
<p>So far I have tried:</p>
<p>1) <code>tf.unstack(t, axis=0)</code> loop over it's rows and append each combination to a buffer, then <code>t_new = tf.stack(buffer, axis=0)</code>. This works <em>except</em> when the shape is unspecified, ie. None so...</p>
<p>2) I have used a tf.while_loop to generate indices <code>idx=[[0,0], [0,1], [1,0], [1,1]]</code>, then <code>t_new = tf.gather(t, idx)</code>.
My question here is: should I set <code>back_prop</code> to <code>True</code> or <code>False</code> in this <code>tf.while_loop</code>? I'm only generating indices inside the loop. Not sure what <code>back_prop</code> would even mean.</p>
<p>Also, do you know of a better way to achieve what I need?</p>
<p>Here is the while_loop:</p>
<pre><code>i = tf.constant(0)
j = tf.constant(0)
idx = tf.Variable([], dtype=tf.int32)
def body(i, j, idx):
c = tf.concat([idx, [i, j]], axis=0)
i, j = tf.cond(tf.equal(j, sentence_len - 1),
lambda: (i + 1, 0),
lambda: (i, j + 1))
return i, j, c
_, _, indices = tf.while_loop(lambda i, j, _: tf.less(i, sentence_len),
body,
[i, j, idx],
shape_invariants=[i.get_shape(),
j.get_shape(),
tf.TensorShape([None])])
</code></pre>
<p>Now I can do <code>t_new = tf.gather(t, indices)</code>.</p>
<p>But I am very confused about the meaning of <code>tf.while_loop</code>'s <code>back_prop</code> - in general and especially here.</p>
|
<p>In this case you are fine to have <code>back_prop</code> as false. It doesn't need to back propagate through the computation of the indices because that computation doesn't depend on any learned variables.</p>
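<p>As an aside, the row-combination tensor can also be built without a <code>tf.while_loop</code> at all. A minimal sketch using <code>tf.tile</code>, which works even when the leading dimension is <code>None</code>:</p>
<pre><code>n = tf.shape(t)[0]
d = tf.shape(t)[1]
# rows of t repeated elementwise: t0,t0,...,t1,t1,...
left = tf.reshape(tf.tile(tf.expand_dims(t, 1), [1, n, 1]), [-1, d])
# rows of t tiled as a block: t0,t1,...,t0,t1,...
right = tf.tile(t, [n, 1])
t_new = tf.concat([left, right], axis=1)  # shape [n*n, 2*d]
</code></pre>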
|
python|tensorflow|machine-learning
| 0
|
2,266
| 62,631,051
|
Insert rows into a dataframe by group, where each entry comes from another dataframe (complex match)
|
<p>I wish to insert some entries into a dataframe called 'df_recorded' for each group, where each entry is looked up from another dataframe called "df_missed".</p>
<pre><code>import pandas as pd
df_recorded = pd.DataFrame({
'id': ['2008 11', '2008 11', '2008 11', '2008 07', '2008 07', '2008 12', '2008 12', '2008 12'],
'info': ['recorded', 'recorded', 'recorded', 'recorded', 'recorded', 'recorded', 'recorded', 'recorded', ],
'score': [98, 68, 79, 75, 66, 62, 60, 60],
'date' : ['2010-12-10', '2010-10-01', '2010-09-12', '2010-12-10', '2010-11-01', '2010-12-07', '2010-11-10', '2010-09-12']
})
df_missed = pd.DataFrame({
'id': ['2008 11', '2008 07', '2008 12'],
'missed_score': [62, 72, 80],
'missed_date': ['2010-08-01', '2010-10-20', '2010-07-23']
})
id info score date
0 2008 11 recorded 98 2010-12-10
1 2008 11 recorded 68 2010-10-01
2 2008 11 recorded 79 2010-09-12
3 2008 07 recorded 75 2010-12-10
4 2008 07 recorded 66 2010-11-01
5 2008 12 recorded 62 2010-12-07
6 2008 12 recorded 60 2010-11-10
7 2008 12 recorded 60 2010-09-12
df_missed
id missed_score missed_date
0 2008 11 62 2010-08-01
1 2008 07 72 2010-10-20
2 2008 12 80 2010-07-23
</code></pre>
<p>I'd like to add a row at the end of each group in 'df_recorded', e.g. adding the same id '2008 11' and a new entry called 'missed' in the 'info' column, then adding the score and date by looking them up in the df_missed table, so the result should look like this:</p>
<pre><code>Target result:
id info score date
0 2008 11 recorded 98 2010-12-10
1 2008 11 recorded 68 2010-10-01
2 2008 11 recorded 79 2010-09-12
3 2008 11 missed 62 2010-08-01 # new record
4 2008 07 recorded 75 2010-12-10
5 2008 07 recorded 66 2010-11-01
6 2008 07 missed 72 2010-10-20 # new record
7 2008 12 recorded 62 2010-12-07
8 2008 12 recorded 60 2010-11-10
9 2008 12 recorded 60 2010-09-12
10 2008 12 missed 80 2010-07-23 # new record
</code></pre>
<p>I tried coding it with loops but it is very slow and inefficient. Please help if you have any ideas to make it better. Many thanks.</p>
|
<p>IIUC you can simply rename the columns in the missing df and <code>concat</code>:</p>
<pre><code>df_missed.columns = ["id", "score", "date"]
df = pd.concat([df_recorded, df_missed], ignore_index=True, sort=False).sort_values("id", ascending=False)
df.loc[df["info"].isnull(), "info"] = "missed"
print (df)
         id      info  score        date
5   2008 12  recorded     62  2010-12-07
6   2008 12  recorded     60  2010-11-10
7   2008 12  recorded     60  2010-09-12
10  2008 12    missed     80  2010-07-23
0   2008 11  recorded     98  2010-12-10
1   2008 11  recorded     68  2010-10-01
2   2008 11  recorded     79  2010-09-12
8   2008 11    missed     62  2010-08-01
3   2008 07  recorded     75  2010-12-10
4   2008 07  recorded     66  2010-11-01
9   2008 07    missed     72  2010-10-20
</code></pre>
|
python|pandas|dataframe|insert|match
| 4
|
2,267
| 62,637,195
|
Is There Any Way to Feed a Batch of Images to a TFLite Model on the Edge TPU?
|
<p>I am having trouble with some small detections on the Coral Board, so I have decided to use a sliding window to cut the image into small sample images. But how can I feed them to the Edge TPU model, which only allows one image to pass through at a time?</p>
<p>Is there any way to change the batch input when converting to a TFLite model?</p>
<p>I have trained my model by <strong>Object Detection API</strong></p>
|
<p>After looking at the code file <a href="https://github.com/tensorflow/models/blob/master/research/object_detection/export_tflite_ssd_graph.py" rel="nofollow noreferrer">export_tflite</a>,
I changed the batch size from 1 to my desired number and everything is fine now.
The batch size is edited in two files:
<strong>export_tflite_ssd_graph_lib</strong> and <strong>export_tflite_ssd_graph_lib_tf1_test</strong></p>
<p>However, the Edge TPU compiler could not handle the batched model. Therefore, in order to run a batch of images, I run multiple single-image inferences instead, and then it runs on the Edge TPU.
<a href="https://i.stack.imgur.com/3Poj4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Poj4.png" alt="enter image description here" /></a></p>
|
tensorflow-lite|google-coral
| 2
|
2,268
| 62,705,066
|
Euclidean distance between two Python matrices without a double for-loop?
|
<p>I am working with two numpy matrices, <strong>U</strong> (<em>dimensions Nu x 3</em>) and <strong>M</strong> (<em>dimensions 3 x Nm</em>)</p>
<p><strong>U</strong> contains Nu users and 3 features</p>
<p><strong>M</strong> contains Nm movies (and the same 3 features)</p>
<p>For each user of <strong>U</strong>, I would like to calculate its Euclidean distance to every movie in <strong>M</strong> (so I need to compute Nu*Nm Euclidean distances).</p>
<p>Is this possible without an explicit double for-loop? I am working with large matrices and the double for-loop will probably take too much time.</p>
<p>Thanks in advance.</p>
|
<p>Check out <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.distance.cdist.html#scipy-spatial-distance-cdist" rel="nofollow noreferrer">scipy.spatial.distance.cdist</a>. Something like this will do:</p>
<pre><code>from scipy.spatial.distance import cdist
dist = cdist(U, M.T)
</code></pre>
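<p>If you prefer to stay in plain numpy, broadcasting gives the same result:</p>
<pre><code>import numpy as np

diff = U[:, None, :] - M.T[None, :, :]    # shape (Nu, Nm, 3)
dist = np.sqrt((diff ** 2).sum(axis=-1))  # shape (Nu, Nm)
</code></pre>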
|
python|arrays|numpy|scipy|distance
| 1
|
2,269
| 62,860,380
|
How to convert a tflite model from unquantized to quantized / realtime image classifier in react-native
|
<p>I have copied a react native code from <a href="https://medium.com/@namar/high-performance-image-classification-with-react-native-336db0a96cd" rel="nofollow noreferrer">here</a></p>
<p>Then I created a deep learning model using Keras and converted it to TFLite (the original code had used a quantized MobileNet model), replaced the code's model with mine, and replaced the output.JSON with mine. The app launches successfully but the predictions are stuck. I am new to React Native and want to use my Keras or TFLite models in it. Can anyone help me with this, as there is no example code for realtime image classification in React Native with custom models? Please help.</p>
|
<p>When I created a tflite model with the same dataset using Google's Teachable Machine, it saved my model as mask_unquant.tflite. I think the issue has something to do with quantized vs. unquantized models.</p>
|
reactjs|react-native|keras|deep-learning|tensorflow-lite
| 0
|
2,270
| 54,620,614
|
Average of each consecutive segment in a list
|
<p>I have a list:</p>
<pre><code>sample_list = array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16])
</code></pre>
<p>I want to calculate the average of every, say 4 elements. But not 4 elements separately, rather the first 4:</p>
<pre><code>1,2,3,4
</code></pre>
<p>followed by:</p>
<pre><code>2,3,4,5
</code></pre>
<p>followed by:</p>
<pre><code>3,4,5,6
</code></pre>
<p>and so on.</p>
<p>The result will be an array or list of average between every 4 elements in the first list.</p>
<p>Output:</p>
<pre><code>array([2.5, 3.5, 4.5, ...])
</code></pre>
<p>My attempt:</p>
<pre><code>sample_list = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
splits = 4
def average_splits(data):
datasum = 0
count = 0
for num in data:
datasum += num
count += 1
if count == splits:
yield datasum / splits
datasum = count = 0
if count:
yield datasum / count
print(list(average_splits(sample_list)))
[1.5, 3.5, 5.5, 7.5, 9.5, 11.0]
</code></pre>
<p>This is not the output I need, as this calculates the average of every 4 elements before moving on to a new set of 4 elements. I want to move only one element forward in the list, calculate the average of those 4, and so on.</p>
|
<p>If <code>numpy</code> is an option a simple way to achieve this is to use <a href="https://docs.scipy.org/doc/numpy-1.14.1/reference/generated/numpy.convolve.html" rel="nofollow noreferrer"><code>np.convolve</code></a>, which can be used to compute a <i>rolling mean</i> when convolving with an array of <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.ones.html" rel="nofollow noreferrer"><code>np.ones</code></a>:</p>
<pre><code>import numpy as np
sample_list = np.array([1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16], dtype=float)
w = 4
np.convolve(sample_list, np.ones(w), 'valid') / w
</code></pre>
<p><b> Output </b></p>
<pre><code>array([ 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.5,
13.5, 14.5])
</code></pre>
<hr>
<p><b> Details </b></p>
<p><a href="https://docs.scipy.org/doc/numpy-1.14.1/reference/generated/numpy.convolve.html" rel="nofollow noreferrer"><code>np.convolve</code></a> is performing a <a href="https://en.wikipedia.org/wiki/Convolution#Discrete_convolution" rel="nofollow noreferrer">discrete convolution</a> between the two input arrays. In this case the inputs are <code>np.ones(w)</code>, an array with as many ones as the specified window length (4 in this case), i.e. <code>array([1., 1., 1., 1.])</code>, and <code>sample_list</code>.</p>
<p>The following list comprehension aims to replicate the way <code>np.convolve</code> is computing the output values:</p>
<pre><code>w = 4
np.array([sum(np.ones(w) * sample_list[m:m+w]) for m in range(len(sample_list)-(w-1))]) / w
array([ 2.5, 3.5, 4.5, 5.5, 6.5, 7.5, 8.5, 9.5, 10.5, 11.5, 12.5,
13.5, 14.5])
</code></pre>
<p>So at each iteration it will take the inner product between the array of ones and the current <i> window</i> of <code>sample_list</code>.</p>
<p>Below is an example of how the first outputs are computed, to make it a little clearer. Note that in this case the mode specified for the convolution is <code>valid</code>, which means that the overlap is always complete:</p>
<pre><code>[1,1,1,1]
[1,2,3,4,5,6,7,8...]
= (1*1 + 1*2 + 1*3 + 1*4) / 4 = 2.5
</code></pre>
<p>And the following as:</p>
<pre><code> [1,1,1,1]
[1,2,3,4,5,6,7,8...]
= (1*2 + 1*3 + 1*4 + 1*5) / 4 = 3.5
</code></pre>
<p>And so on, yielding as mentioned earlier a <i> moving average </i> of <code>sample_list</code>.</p>
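<p>If <code>pandas</code> is also an option, a rolling mean gives the same values:</p>
<pre><code>import pandas as pd

pd.Series(sample_list).rolling(w).mean().dropna().to_numpy()
</code></pre>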
|
python|list|numpy|average
| 8
|
2,271
| 73,595,548
|
Struggling to display the right (formatted) values for matplotlib labels
|
<p><strong>Guide</strong>:
<a href="https://theoehrly.github.io/Fast-F1/examples_gallery/plot_qualifying_results.html#sphx-glr-examples-gallery-plot-qualifying-results-py" rel="nofollow noreferrer">https://theoehrly.github.io/Fast-F1/examples_gallery/plot_qualifying_results.html#sphx-glr-examples-gallery-plot-qualifying-results-py</a></p>
<p>I am having trouble displaying the correct value or formatted form as a matplotlib label.</p>
<p><strong>Issue:</strong> The bar graph labels display unwanted or badly formatted values.</p>
<p><a href="https://i.stack.imgur.com/zqjnD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zqjnD.png" alt="enter image description here" /></a></p>
<p>(Is this the TimeDelta[ns] as an integer in scientific notation? The dtype is timedelta64[ns].)</p>
<p><strong>Expected Values:</strong> The amount of time each driver is from the leader (s.ms) (HAM=0.038). Note: order is the same</p>
<p><a href="https://i.stack.imgur.com/Jvo8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Jvo8t.png" alt="enter image description here" /></a></p>
<p>print(times)</p>
<p><strong>Code:</strong></p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3-64
#required packages
#pip3 install fastf1
#pip3 install pandas
#pip3 install matplotlib
#pip3 install numpy
import matplotlib.pyplot as plt
import matplotlib.patches as pat
import fastf1 as ff1
import fastf1.plotting as ff1p
ff1p.setup_mpl(mpl_timedelta_support=True, color_scheme=None, misc_mpl_mods=False)
from fastf1.core import Laps
import pandas as pd
import numpy as np
from timple.timedelta import strftimedelta as td
import os
l=str.lower
def data_cache():
cache='/ff1_temp' #temp cache
while(True):
warn=input(l(f'!WARNING! A data cache will be made at {cache}\n'
f'Formula 1 Race Data will be downloaded to {cache}\n'
f'Would you like to continue? [y/n]\n'))
if(warn=='n'):
print('Quitting!\n')
exit(0)
elif(warn=='y'):
print(f'cache location: {cache}\n')
if not os.path.exists(cache): # os.path.exists(cache)
os.mkdir(cache) # os.mkdir(cache)
ff1.Cache.enable_cache(cache) # Fast F1 Cache API
break
else:
print('Plese Enter [y/n]\n')
continue
def data_load():
data=ff1.get_session(2021,'Netherlands','Q') #Y,L,S = Year, Location, Session
data.load(laps=True,telemetry=False,weather=False,messages=False)
return(data)
def data_graph():
data=data_load()
drivers=pd.unique(data.laps['DriverNumber'])
fll=list()
for row in drivers: #get fastest laps for session from each driver
fld=data.laps.pick_driver(row).pick_fastest()
fll.append(fld)
fl=Laps(fll).sort_values(by='LapTime').reset_index(drop=True)
flf=fl.pick_fastest()
fl['LapTimeDelta']=fl['LapTime']-flf['LapTime'] #determine the TimeDelta from leader
tc=list()
for index, lap in fl.iterlaps(): #team colours
color=ff1p.team_color(lap['Team'])
tc.append(color)
return(fl,tc,flf)
def data_plot():
fl,tc,flf=data_graph()
fig,ax=plt.subplots()
times=fl['LapTimeDelta']
fli=fl.index
# y x
bars=ax.barh(fli,times, color=tc,edgecolor='grey')
print(times) #expected values
ax.set_yticks(fl.index)
ax.set_yticklabels(fl['Driver'])
ax.set_xlabel('Time Difference (ms)')
#should be x axis?
ax.bar_label(bars) #(times)
ax.invert_yaxis()
lt=td(flf['LapTime'], '%m:%s.%ms')
plt.suptitle(f'2021 Dutch GP Qualifying\n'
f"Fastest at {lt} ({flf['Driver']})")
plt.show()
if(__name__=="__main__"):
data_cache()
data_plot()
exit(0)
</code></pre>
<p>results of print(bars)</p>
<p><a href="https://i.stack.imgur.com/A7XeA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A7XeA.png" alt="print(bars)" /></a></p>
<p>results of print(type(times)) and print(type(bars))</p>
<p><a href="https://i.stack.imgur.com/kSqnz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kSqnz.png" alt="print(type(times)) && print(type(bars))" /></a></p>
<p><strong>What has been Attempted:</strong></p>
<pre class="lang-py prettyprint-override"><code>def data_plot():
ax.bar_label(times)
Traceback (most recent call last):
File "\python\datacollection\fp1.ff1.graph.py", line 144, in <module>
data_plot()
File "\python\datacollection\fp1.ff1.graph.py", line 132, in data_plot
ax.bar_label(times)
File "\Python\Python310\lib\site-packages\matplotlib\axes\_axes.py", line 2609, in bar_label
bars = container.patches
File "\Python\Python310\lib\site-packages\pandas\core\generic.py", line 5575, in __getattr__
return object.__getattribute__(self, name)
AttributeError: 'Lap' object has no attribute 'patches'
---
def data_plot_label(fli,times):
for i in range(len(fli)):
plt.text(i,times[i],times[i],ha='center',bbox=dict(alpha=0.8))
def data_plot():
data_plot_label(fli,times)
</code></pre>
<p><a href="https://i.stack.imgur.com/PXzcy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PXzcy.png" alt="failed attempt" /></a></p>
<p><strong>Close:</strong></p>
<p>I'm still pretty green with this stuff,</p>
<ol>
<li>Am I going about this correctly?</li>
<li>What are my options regarding labelling and matplotlib?</li>
<li>How do I set the correct formatted value for this label?</li>
</ol>
<p>I find the graph is harder to understand without the actual values on it. It has less depth.</p>
<p>Relevant Docs:</p>
<ul>
<li><a href="https://theoehrly.github.io/Fast-F1/" rel="nofollow noreferrer">https://theoehrly.github.io/Fast-F1/</a></li>
<li><a href="https://pandas.pydata.org/docs/reference/index.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/index.html</a></li>
<li><a href="https://matplotlib.org/stable/api/index" rel="nofollow noreferrer">https://matplotlib.org/stable/api/index</a></li>
</ul>
|
<p>I overlooked something in the docs: I was not specifying the labels, only the container.</p>
<p><strong>Reference:</strong>
<a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.bar_label.html#matplotlib.axes.Axes.bar_label" rel="nofollow noreferrer">https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.bar_label.html#matplotlib.axes.Axes.bar_label</a></p>
<pre class="lang-py prettyprint-override"><code>Axes.bar_label(container, labels=None, *, fmt='%g', label_type='edge', padding=0, **kwargs)
</code></pre>
<p><strong>Solution:</strong></p>
<pre class="lang-py prettyprint-override"><code>prfx='0 days 00:00:0'
sufx='000'
remov=''
def data_plot():
    # removes the leading '0 days 00:00:0' prefix and trailing zeros
times=times.astype(str).str.replace(prfx,remov).str.replace(sufx,remov)
#before: 0 days 00:00:0x.xxx000
#after: x.xxx
#over looked label, label_type=position-on-bar
ax.bar_label(bars, times, label_type='edge')
</code></pre>
<p><a href="https://i.stack.imgur.com/SVtqb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SVtqb.png" alt="enter image description here" /></a></p>
<p>Just a little more formatting and it should look great!</p>
|
python-3.x|pandas|matplotlib|data-science
| 0
|
2,272
| 73,545,526
|
Plotting dataframe with NAs with linearly joined points
|
<p>I have a dataframe where each column has many missing values. How can I make a plot where the datapoints in each column are joined with lines, i.e. NAs are ignored, instead of having a choppy plot?</p>
<pre><code>import numpy as np
import pandas as pd
pd.options.plotting.backend = "plotly"
d = pd.DataFrame(data = np.random.choice([np.nan] + list(range(7)), size=(10,3)))
d.plot(markers=True)
</code></pre>
<p>One way is to use this for each column:</p>
<pre><code>fig = go.Figure()
fig.add_trace(go.Scatter(x=x, y=y, name="linear",
line_shape='linear'))
</code></pre>
<p>Are there any better ways to accomplish this?</p>
|
<p>You can use <strong>pandas</strong> <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.interpolate.html" rel="nofollow noreferrer">interpolate</a>. I have demonstrated this using <strong>plotly express</strong>, chaining the calls so the underlying data is not changed.</p>
<p>Following the comments, the answer has been amended so that markers are not shown for interpolated points.</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
d = pd.DataFrame(data=np.random.choice([np.nan] + list(range(7)), size=(10, 3)))
px.line(d).update_traces(mode="lines+markers").add_traces(
px.line(d.interpolate(limit_direction="both")).update_traces(showlegend=False).data
)
</code></pre>
<p><a href="https://i.stack.imgur.com/2o1qg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2o1qg.png" alt="enter image description here" /></a></p>
|
pandas|dataframe|plotly
| 0
|
2,273
| 73,730,231
|
Convert JSON load to readable Pandas Dataframe
|
<p>I am trying to load some data from ArcGIS. I am used to working with pandas DataFrames and am not entirely sure how to read from APIs and get the result into a nice table. I tried data['results'] and got a feature dataset, but I am not sure how to organize the data by unique Object ID.
Does anyone have an idea how to read this into a pandas DataFrame?</p>
<pre><code>import urllib.request as urlopen
import urllib.parse as urlencode
import urllib.request as request
import json
import pandas as pd
inPts = {"geometryType" : "esriGeometryPoint",
"spatialReference" : {"wkid" : 54003},
'features':[{'geometry': {'x': -13308192.1956127, 'y': 4221903.58555983}}]}
dist = {'distance':8.5,'units':'esriMiles'}
data = {'Input_Observation_Point': inPts,
'Viewshed_Distance': dist,
'f': 'pjson'}
URL = 'http://sampleserver6.arcgisonline.com/ArcGIS/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed/execute'
req = request.Request(URL, urlencode.urlencode(data).encode('UTF-8'))
response = urlopen.urlopen(req)
response_bytes = response.read()
data = json.loads(response_bytes.decode('UTF-8'))
</code></pre>
|
<p>If I understood correctly, you want to create a DataFrame with all the features and the geometry.
The following code creates the DataFrame shown below:</p>
<pre><code>import urllib.request as urlopen
import urllib.parse as urlencode
import urllib.request as request
import json
import pandas as pd
import numpy as np
inPts = {"geometryType" : "esriGeometryPoint",
"spatialReference" : {"wkid" : 54003},
'features':[{'geometry': {'x': -13308192.1956127, 'y': 4221903.58555983}}]}
dist = {'distance':8.5,'units':'esriMiles'}
data = {'Input_Observation_Point': inPts,
'Viewshed_Distance': dist,
'f': 'pjson'}
URL = 'http://sampleserver6.arcgisonline.com/ArcGIS/rest/services/Elevation/ESRI_Elevation_World/GPServer/Viewshed/execute'
req = request.Request(URL, urlencode.urlencode(data).encode('UTF-8'))
response = urlopen.urlopen(req)
response_bytes = response.read()
data = json.loads(response_bytes.decode('UTF-8'))#
arcgis_data = data["results"][0]["value"]
columns = ["geometry"]
attributes = []
for field in arcgis_data["fields"]:
columns.append(field["name"])
attributes.append(field["name"])
feature_values = []
for feature in arcgis_data["features"]:
feature_line = [feature["geometry"]]
for feature_name in attributes:
feature_line.append(feature["attributes"][feature_name])
feature_values.append(feature_line)
feature_df = pd.DataFrame(np.array(feature_values), columns=columns)
</code></pre>
<p><a href="https://i.stack.imgur.com/5pD9h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5pD9h.png" alt="Dataframe resulting" /></a></p>
<p>I tried to keep the code as dynamic as possible so it can handle more attribute names if the response's "fields" entry gains more fields. The only two attributes that are always required are "geometry" and "attributes".</p>
|
python|pandas|api|rest|arcgis
| 1
|
2,274
| 71,273,332
|
tff.simulation.datasets.ClientData to build federated learning model from CSV files
|
<p>I am building a federated learning model using my own dataset.
I aim to build a multi-class classification model.
The data are split across 8 separate CSV files.</p>
<p>I followed the instructions in this <a href="https://stackoverflow.com/questions/60265798/tff-how-define-tff-simulation-clientdata-from-clients-and-fn-function">post</a>, as shown in the code below.</p>
<pre><code>dataset_paths = {
'client_0': '/content/ds1.csv',
'client_1': '/content/ds2.csv',
'client_2': '/content/ds3.csv',
'client_3': '/content/ds4.csv',
'client_4': '/content/ds5.csv',
}
def create_tf_dataset_for_client_fn(id):
path = dataset_paths.get(id)
if path is None:
raise ValueError(f'No dataset for client {id}')
return tf.data.Dataset.TextLineDataset(path)
source = tff.simulation.datasets.ClientData.from_clients_and_fn(
dataset_paths.keys(), create_tf_dataset_for_client_fn)
</code></pre>
<p>but it gave me this error</p>
<pre><code>AttributeError: type object 'ClientData' has no attribute 'from_clients_and_fn'
</code></pre>
<p>I was reading this <a href="https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/ClientData#datasets" rel="nofollow noreferrer">documentation</a> and found that the <code>.datasets</code> method might work, so I replaced <code>.from_clients_and_fn</code> with it and the error disappeared, but I don't know if that is right or what comes next.</p>
<p>My questions are:</p>
<ol>
<li>Is this the right method to load the data for the clients?</li>
<li>If it is not possible to load the CSV files separately, can I combine all of the data into one CSV file, treat it as non-IID data, and train accordingly?
I need some guidance here.</li>
</ol>
<p>and thanks in advance</p>
|
<p>In this setup it may be useful to consider <a href="https://www.tensorflow.org/federated/api_docs/python/tff/simulation/datasets/FilePerUserClientData" rel="nofollow noreferrer"><code>tff.simulation.datasets.FilePerUserClientData</code></a> and <a href="https://www.tensorflow.org/api_docs/python/tf/data/experimental/CsvDataset" rel="nofollow noreferrer"><code>tf.data.experimental.CsvDataset</code></a>.</p>
<p>This might look like the following (it makes some test CSV data for the sake of the example; the dataset you're working with likely has other shapes):</p>
<pre class="lang-py prettyprint-override"><code>dataset_paths = {
'client_0': '/content/ds1.csv',
'client_1': '/content/ds2.csv',
'client_2': '/content/ds3.csv',
'client_3': '/content/ds4.csv',
'client_4': '/content/ds5.csv',
}
# Create some test data for the sake of the example,
# normally we wouldn't do this.
for i, (id, path) in enumerate(dataset_paths.items()):
with open(path, 'w') as f:
for _ in range(i):
f.write(f'test,0.0,{i}\n')
# Values that will fill in any CSV cell if its missing,
# must match the dtypes above.
record_defaults = ['', 0.0, 0]
@tf.function
def create_tf_dataset_for_client_fn(dataset_path):
return tf.data.experimental.CsvDataset(
dataset_path, record_defaults=record_defaults )
source = tff.simulation.datasets.FilePerUserClientData(
dataset_paths, create_tf_dataset_for_client_fn)
print(source.client_ids)
>>> ['client_0', 'client_1', 'client_2', 'client_3', 'client_4']
for x in source.create_tf_dataset_for_client('client_3'):
print(x)
>>> (<tf.Tensor: shape=(), dtype=string, numpy=b'test'>, <tf.Tensor: shape=(), dtype=float32, numpy=0.0>, <tf.Tensor: shape=(), dtype=int32, numpy=3>)
>>> (<tf.Tensor: shape=(), dtype=string, numpy=b'test'>, <tf.Tensor: shape=(), dtype=float32, numpy=0.0>, <tf.Tensor: shape=(), dtype=int32, numpy=3>)
>>> (<tf.Tensor: shape=(), dtype=string, numpy=b'test'>, <tf.Tensor: shape=(), dtype=float32, numpy=0.0>, <tf.Tensor: shape=(), dtype=int32, numpy=3>)
</code></pre>
<p>It may be possible to concatenate all the data into a single CSV, but each record would still need some identifier indicating which row belongs to which client. Mixing all the rows together without any kind of per-client mapping would be akin to standard centralized training, not federated learning.</p>
<p>Once a CSV has all the rows, and perhaps a column with a <code>client_id</code> value, one could presumably use <a href="https://www.tensorflow.org/api_docs/python/tf/data/Dataset#filter" rel="nofollow noreferrer"><code>tf.data.Dataset.filter()</code></a> to only yield the rows belonging to a particular client. This probably won't be particularly efficient though, as it would iterate over the entire global dataset for each client, rather than only that client's examples.</p>
|
python|tensorflow|tensorflow-federated
| 1
|
2,275
| 71,167,751
|
How to quickly add a large list of values to the corresponding python pandas dataframe
|
<p>I have a large csv file with the following format (example), the report_date is currently empty:</p>
<pre><code>| ids | disease_code | report_date |
| --- | ------------ | ----------- |
| 10 | I202 | |
| 11 | I232 | |
| 11 | I242 | |
</code></pre>
<p>I generated a list of tuples from a data source like the following:</p>
<pre><code>[(10, ['I202'], 2021-10-22), (11, ['I232', 'I242'], 2021-11-22), (11, ['I232', 'I242'], 2021-11-12),.....]
</code></pre>
<p>The tuple order is patient_id, disease_code, and reported_date (the dates are in order corresponding to the diseases). For a patient who has more than one disease, the reported dates were unfortunately separated into two tuples. Now I want to fill the report_date column by matching the first two values of each tuple with the current csv, like this:</p>
<pre><code>| ids | disease_code | report_date |
| --- | ------------ | ----------- |
| 10 | I202 | 2021-10-22 |
| 11 | I232 | 2021-11-22 |
| 11 | I242 | 2021-11-12 |
</code></pre>
<p>I tried to use a nested loop but it seems like it will take 480 hours to complete. I believe there is a simpler answer but I could not figure it out. Any hint would be appreciated.</p>
|
<p>First, you can create a dataframe with your data. You'll see that the column <code>"disease_code"</code> contains a list of values, just as you mentioned:</p>
<pre><code>>> df = pd.DataFrame(
[(10, ['I202'], "2021-10-22"), (11, ['I232', 'I242'], "2021-11-22"), (11, ['I232', 'I242'], "2021-11-12")],
columns=["ids", "disease_code", "report_date"],
)
>> df["report_date"] = pd.to_datetime(df["report_date"])
>> df
ids disease_code report_date
0 10 [I202] 2021-10-22
1 11 [I232, I242] 2021-11-22
2 11 [I232, I242] 2021-11-12
</code></pre>
<p>Now you need to separate the values in the <code>"disease_code"</code> column by repeating the values in the other columns... <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.explode.html#pandas-dataframe-explode" rel="nofollow noreferrer"><code>pd.DataFrame.explode</code></a> does exactly that. This method transforms values in a list-like column to multiple rows:</p>
<pre><code>>> df.explode(["disease_code"]) # Explode the "disease_code" column
ids disease_code report_date
0 10 I202 2021-10-22
1 11 I232 2021-11-22
1 11 I242 2021-11-22
2 11 I232 2021-11-12
2 11 I242 2021-11-12
</code></pre>
|
python|pandas|csv
| 1
|
2,276
| 71,299,148
|
Filtering dataframe based on other dataframe column on Python
|
<p>I have two DataFrames. One contains multiple columns with sample names and rows containing values. The second DataFrame contains one column called "Sample Name" which holds the names of the samples that pass quality control.
df1</p>
<pre><code>| mz | Sample 001| Sample 002...
|:---- |:---------:| ---------:|
| 234 | 3434 | 34545 |
|:---- |:---------:| ---------:|
| 4542 | 5656563 | 4545 |
</code></pre>
<p>df2</p>
<pre><code>| Sample Name | RT |
| ----------- | ---|
| Sample001 | 8 |
| Sample002 | 8 |...
</code></pre>
<p>df1 contains more than 2000 rows and 200 columns; df2 contains 180 rows. I want to filter df1 to remove the columns that are NOT present in the df2 column "Sample Name".
The resulting DataFrame should be a filtered version of df1 with the 180 columns present in the df2 list.</p>
|
<p>See if this works:</p>
<pre><code>for col in df1.columns:
if col not in df2['Sample Name'].unique():
df1.drop(columns=[col], inplace=True)
</code></pre>
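<p>A vectorized alternative with <code>Index.isin</code> (note: this would also drop key columns such as <code>mz</code>, so add them back to the mask if you need them):</p>
<pre><code>keep = df1.columns.isin(df2['Sample Name'])
df1_filtered = df1.loc[:, keep]
</code></pre>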
|
python|pandas|dataframe
| 0
|
2,277
| 52,310,755
|
Changing the categories in a column pandas?
|
<p>I was experimenting with the <code>iter</code> function on pandas.</p>
<p>1- I made a list from a pandas column.</p>
<blockquote>
<p>in1:</p>
</blockquote>
<pre><code>df_area_code_iter = iter(df["Area Code"])
df_area_code_iter_list = list(df_area_code_iter)
df_area_code_iter_list
</code></pre>
<blockquote>
<p>out_1:</p>
</blockquote>
<pre><code>['Area 1',
'Area 2',
'Area 1',
...
'Area 0']
</code></pre>
<p>2- Then I wanted to iterate through the elements of the list to replace Area 0 with PASS THIS.</p>
<blockquote>
<p>in_2:</p>
</blockquote>
<pre><code>new_column = []
for i in df_area_code_iter_list:
if i == "Area 0":
i == "PASS THIS"
new_column.append(i)
new_column
</code></pre>
<blockquote>
<p>out_2:</p>
</blockquote>
<pre><code>['Area 1',
'Area 2',
'Area 1',
...
'Area 0']
</code></pre>
<p>I know there are other methods to replace the values in a column. However, I want to figure it out by converting the dataframe to a list and then iterating over the elements.</p>
<blockquote>
<p>in_3:</p>
</blockquote>
<pre><code>df["Area Code"] = df["Area Code"].replace(to_replace ="Area 0", value =
"PASS THIS")
df["Area Code"]
</code></pre>
<blockquote>
<p>out_3:</p>
</blockquote>
<pre><code>0 Area 1
1 Area 2
2 Area 1
3 Area 1
4 PASS THIS
5 PASS THIS
6 PASS THIS
... ....
</code></pre>
<p>As I said, I am experimenting and I cannot see any reason why the for loop at in_2 is not working.</p>
|
<p>The main problem is assignment requires <code>=</code>, not the equality operator <code>==</code>.</p>
<p>You are forced to append to a new list since the variable <code>i</code> inside your loop is a scalar, not a reference pointing to an element in your original list. Instead, you can use <code>enumerate</code> and modify your existing list:</p>
<pre><code>for idx, val in enumerate(df_area_code_iter_list):
if val == "Area 0":
df_area_code_iter_list[idx] = "PASS THIS"
</code></pre>
<p>Or, more Pythonic, use a list comprehension:</p>
<pre><code>new_list = [x if x != 'Area 0' else 'PASS THIS' for x in df_area_code_iter_list]
</code></pre>
|
python|pandas|for-loop|iterator|iteration
| 1
|
2,278
| 60,652,215
|
How to subtract a column value from every value in another column (pandas)
|
<p>I have two columns A and B. I want to subtract each value in column B from every value in column A and create a new column for each, without using a for-loop.</p>
<p>Below is my Dataframe</p>
<pre><code> A B
0 5 3
1 3 2
2 8 1
</code></pre>
<p>Desired output </p>
<pre><code> A B C D E
0 5 3 2 3 4
1 3 2 0 1 2
2 8 1 5 6 7
C = A - B[0]
D = A - B[1]
E = A - B[2]
</code></pre>
|
<p>Using numpy's array <a href="https://docs.scipy.org/doc/numpy/user/theory.broadcasting.html#array-broadcasting-in-numpy" rel="nofollow noreferrer">broadcasting</a>:</p>
<pre><code>df = pd.DataFrame({'A':[5, 3, 8],
'B':[3, 2, 1]})
df2 = pd.DataFrame(df['A'].values[:, None] - df['B'].values, columns=['C', 'D', 'E'])
df = df.join(df2)
</code></pre>
<p>Result:</p>
<pre><code> A B C D E
0 5 3 2 3 4
1 3 2 0 1 2
2 8 1 5 6 7
</code></pre>
<p><strong>Explanation</strong>:</p>
<pre><code>>>> df['A'].values[:, None]
array([[5],
[3],
[8]])
>>> df['B'].values
array([3, 2, 1])
</code></pre>
<p>When subtracting them, numpy "stretches" <code>df['A'].values[:, None]</code> to:</p>
<pre><code>array([[5, 5, 5],
[3, 3, 3],
[8, 8, 8]])
</code></pre>
<p>and <code>df['B'].values</code> to:</p>
<pre><code>array([[3, 2, 1],
[3, 2, 1],
[3, 2, 1]])
</code></pre>
<p>and the result of subtraction is:</p>
<pre><code>array([[2, 3, 4],
[0, 1, 2],
[5, 6, 7]])
</code></pre>
|
python|pandas|numpy
| 4
|
2,279
| 60,452,450
|
Get only those strings where a specific ratio condition is met using difflib SequenceMatcher
|
<p>Is there an example of how to get the strings in a column of a dataframe when a ratio condition is met?</p>
<p>Example - while comparing one string with a column of a dataframe, it should return only the rows where SequenceMatcher.ratio() > 0.8.</p>
|
<p>IIUC use <a href="http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#boolean-indexing" rel="nofollow noreferrer"><code>boolean indexing</code></a> with filter by lambda function in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html" rel="nofollow noreferrer"><code>Series.apply</code></a>:</p>
<pre><code>from difflib import SequenceMatcher

text = 'my text'
df1 = df[df['col'].apply(lambda x: SequenceMatcher(None, x, text).ratio()) > 0.8]
</code></pre>
|
python|pandas
| 1
|
2,280
| 60,403,628
|
pandas compute difference using two column filters
|
<p>I have a pandas dataframe like: </p>
<pre><code>| country | year | people
| US | 1990 | 20
| US | 1991 | 34
| .. | .. | ..
| US | 2020 | 456
| UK | 1990 | 5
| UK | 1991 | 7
| .. | .. | ..
| UK | 2020 | 300
</code></pre>
<p>I would like to compute the difference between 2020 and 1990 for each of the countries, expected output:</p>
<pre><code>|country | difference
|US | 436
|UK | 295
</code></pre>
|
<p>Since the years of interest are 2020 and 1990, we filter for just those years, sort the people column in descending order, groupby country, and use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.subtract.html" rel="nofollow noreferrer">numpy subtract</a> and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.ufunc.reduce.html" rel="nofollow noreferrer">numpy reduce</a> to get the difference:</p>
<pre><code>(df.query('year==[2020,1990]')
.sort_values('people',ascending=False)
.groupby('country',sort=False)
.agg(difference=('people',np.subtract.reduce))
)
difference
country
US 436
UK 295
</code></pre>
<p>Note that the groupby is not sorted - this ensures the sorted order is preserved (we need each group to have its highest value at the top, so that the subtract-and-reduce in the aggregation yields positive values).</p>
<p>For division:</p>
<pre><code>(df.query('year==[2020,1990]')
.sort_values('people',ascending=False)
.groupby('country',sort=False)
.agg(fst=('people','first'), lst=('people','last'))
.assign(division=lambda x: x.fst.div(x.lst))
)
</code></pre>
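<p>A hedged alternative sketch, assuming each country/year pair occurs exactly once (a requirement of <code>pivot</code>): spread the two years into columns and subtract directly:</p>
<pre><code>wide = (df.query('year==[2020,1990]')
          .pivot(index='country', columns='year', values='people'))
wide['difference'] = wide[2020] - wide[1990]
</code></pre>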
|
python|pandas|dataframe
| 1
|
2,281
| 72,794,356
|
torchinfo summary for the generator of a GAN equivalent to the discriminator's, or a summary for the whole GAN
|
<p>Is it possible to generate a summary for the generator network of a GAN, equivalent to the summary for the discriminator network, using torchinfo (containing inputs and outputs)? Or is there even a standard summary for the whole GAN including both networks?</p>
<p>For the Discriminator I used the following:</p>
<pre><code>model = Discriminator()
batch_size = 32
summary(model, input_size=(batch_size, 3, 28, 28))
</code></pre>
<p>and received the following summary, which I would also like for the generator (see below summary):</p>
<pre><code>==========================================================================================
Layer (type:depth-idx) Output Shape Param #
==========================================================================================
Discriminator [32, 1] --
├─Sequential: 1-1 [32, 1] --
│ └─Linear: 2-1 [32, 2048] 4,818,944
│ └─ReLU: 2-2 [32, 2048] --
│ └─Dropout: 2-3 [32, 2048] --
│ └─Linear: 2-4 [32, 1024] 2,098,176
│ └─ReLU: 2-5 [32, 1024] --
│ └─Dropout: 2-6 [32, 1024] --
│ └─Linear: 2-7 [32, 512] 524,800
│ └─ReLU: 2-8 [32, 512] --
│ └─Dropout: 2-9 [32, 512] --
│ └─Linear: 2-10 [32, 256] 131,328
│ └─ReLU: 2-11 [32, 256] --
│ └─Dropout: 2-12 [32, 256] --
│ └─Linear: 2-13 [32, 1] 257
│ └─Sigmoid: 2-14 [32, 1] --
==========================================================================================
Total params: 7,573,505
Trainable params: 7,573,505
Non-trainable params: 0
Total mult-adds (M): 242.35
==========================================================================================
Input size (MB): 0.30
Forward/backward pass size (MB): 0.98
Params size (MB): 30.29
Estimated Total Size (MB): 31.58
==========================================================================================
</code></pre>
<p>For the generator I used the following to create a summary, but unfortunately I wasn't able to include the output-shape column or anything from the input row down (as above):</p>
<pre><code>model = Generator()
batch_size = 32
summary(model, output_size=(batch_size, 3, 28, 28))
</code></pre>
<p>and received the following shorter summary:</p>
<pre><code>=================================================================
Layer (type:depth-idx) Param #
=================================================================
Generator --
├─Sequential: 1-1 --
│ └─Linear: 2-1 25,856
│ └─ReLU: 2-2 --
│ └─Linear: 2-3 131,584
│ └─ReLU: 2-4 --
│ └─Linear: 2-5 525,312
│ └─ReLU: 2-6 --
│ └─Linear: 2-7 2,099,200
│ └─ReLU: 2-8 --
│ └─Linear: 2-9 4,819,248
│ └─Tanh: 2-10 --
=================================================================
Total params: 7,601,200
Trainable params: 7,601,200
Non-trainable params: 0
=================================================================
</code></pre>
|
<p>This package is not meant for debugging your code; you should therefore always make sure your model runs on random data before inspecting the summary.</p>
<p>In the second group of commands, you are using <code>output_size</code> instead of <code>input_size</code> (cf. <a href="https://github.com/TylerYep/torchinfo/blob/main/torchinfo/torchinfo.py#L54" rel="nofollow noreferrer">src</a>). Looking at your code for <code>Generator</code>, the input shape should be <code>(batch_size, 100)</code>. Additionally, your final linear layer should output a total of <code>3*28*28</code> values in order for you to reshape to an image of shape <code>(3, 28, 28)</code>.</p>
<pre><code>class Generator(nn.Module):
def __init__(self):
super().__init__()
self.model = nn.Sequential(
nn.Linear(100, 256),
nn.ReLU(),
nn.Linear(256, 512),
nn.ReLU(),
nn.Linear(512, 1024),
nn.ReLU(),
nn.Linear(1024, 2048),
nn.ReLU(),
nn.Linear(2048, 28*28*3),
nn.Tanh(),
)
def forward(self, x):
output = self.model(x)
output = output.view(x.size(0), 3, 28, 28)
return output
</code></pre>
<p>Which you can summarize with:</p>
<pre><code>>>> summary(model, input_size=(10,100))
========================================================================================
Layer (type:depth-idx) Output Shape Param #
========================================================================================
Generator [10, 3, 28, 28] --
├─Sequential: 1-1 [10, 2352] --
│ └─Linear: 2-1 [10, 256] 25,856
│ └─ReLU: 2-2 [10, 256] --
│ └─Linear: 2-3 [10, 512] 131,584
│ └─ReLU: 2-4 [10, 512] --
│ └─Linear: 2-5 [10, 1024] 525,312
│ └─ReLU: 2-6 [10, 1024] --
│ └─Linear: 2-7 [10, 2048] 2,099,200
│ └─ReLU: 2-8 [10, 2048] --
│ └─Linear: 2-9 [10, 2352] 4,819,248
│ └─Tanh: 2-10 [10, 2352] --
========================================================================================
Total params: 7,601,200
Trainable params: 7,601,200
Non-trainable params: 0
Total mult-adds (M): 76.01
========================================================================================
Input size (MB): 0.00
Forward/backward pass size (MB): 0.50
Params size (MB): 30.40
Estimated Total Size (MB): 30.90
========================================================================================
</code></pre>
|
pytorch|summary|generative-adversarial-network
| 0
|
2,282
| 72,559,630
|
Preserving the Custom Number Format when using read_excel() then converting to CSV with to_csv() with Pandas
|
<p>I've created a simple script that converts Excel files to CSV using Pandas. Here's the gist of my code:</p>
<pre><code>read_file = pd.read_excel(excel_file)
read_file.to_csv(csv_file, index=None, header=True, float_format='%.0f')
</code></pre>
<p>However, my issue is that the Excel file has several columns with dates, and the CSV output contains the cell's literal value in mm/dd/yyyy format. In the Excel file, a Custom Number Format has been applied to display the dates in mmm yyyy format (e.g. 01/01/2001 becomes Jan 2001).</p>
<p>I want to convert the Excel file to CSV and have the date values keep their Custom Number Format rather than the literal cell value. Is this possible?</p>
<p>P.S. I know about adding a <code>date_format</code> in <code>to_csv()</code>, but I'd prefer keeping the Custom Number Format as this tool is going to be used in a number of different Excel files that may or may not contain their own Custom Number Formats. That's why I'm having a hard time.</p>
|
<p>Rendering a cell value according to its number format is Excel functionality. I think pandas and openpyxl only know the raw information of the table, such as the value and the number format string, but do not know how to render the value according to that format.</p>
<p>If we wanted to, we could render the value as a string based on the number format ourselves, just like Excel does. We can get the number format of a cell through openpyxl:</p>
<pre><code>import openpyxl
workbook = openpyxl.load_workbook(excel_file)
sheet = workbook[workbook.sheetnames[0]]
print(sheet.cell(2,1).number_format)
</code></pre>
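<p>Building on that, a minimal sketch of rendering the values ourselves before writing the CSV. The format-to-strftime mapping below is an assumption covering two formats; extend it for whatever custom formats your files actually use:</p>
<pre><code>import openpyxl
import pandas as pd

# Assumed mapping from Excel custom formats to strftime codes
EXCEL_TO_STRFTIME = {'mmm yyyy': '%b %Y', 'mm/dd/yyyy': '%m/%d/%Y'}

workbook = openpyxl.load_workbook(excel_file)
sheet = workbook.active

header = [cell.value for cell in sheet[1]]  # assumes a single header row
rows = []
for row in sheet.iter_rows(min_row=2):
    rendered = []
    for cell in row:
        fmt = EXCEL_TO_STRFTIME.get(cell.number_format)
        if fmt is not None and hasattr(cell.value, 'strftime'):
            rendered.append(cell.value.strftime(fmt))  # render like Excel does
        else:
            rendered.append(cell.value)
    rows.append(rendered)

pd.DataFrame(rows, columns=header).to_csv(csv_file, index=False)
</code></pre>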
|
python|pandas
| 0
|
2,283
| 72,763,875
|
Google Colab TensorFlow model.fit() error
|
<p>Hi, I'm new to Stack Overflow. I'm having trouble with my code: I was at model.fit(), and when I entered a value for epochs and ran the code I got an error. Here is the code for the model.fit:</p>
<pre><code>model.fit(
    train_ds,
    validation_data = valid_ds,
    epochs = 10
)
</code></pre>
<p>and below here is the error I got:</p>
<pre><code>ValueError                     Traceback (most recent call last)
in ()
      2 train_ds,
      3 validation_data = valid_ds,
----> 4 epochs = 10
      5 )

1 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs)
   1145   except Exception as e:  # pylint:disable=broad-except
   1146     if hasattr(e, "ag_error_metadata"):
-> 1147       raise e.ag_error_metadata.to_exception(e)
   1148     else:
   1149       raise
</code></pre>
<p>ValueError: in user code:</p>
<pre><code>File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1021, in train_function *
return step_function(self, iterator)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1010, in step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 1000, in run_step **
outputs = model.train_step(data)
File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 859, in train_step
y_pred = self(x, training=True)
File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler
raise e.with_traceback(filtered_tb) from None
File "/usr/local/lib/python3.7/dist-packages/keras/layers/convolutional.py", line 305, in compute_output_shape
f'One of the dimensions in the output is <= 0 '
ValueError: Exception encountered when calling layer "sequential_3" (type Sequential).
One of the dimensions in the output is <= 0 due to downsampling in conv2d_280. Consider increasing the input size. Received input shape [None, 32, 32, 3] which would produce output shape with a zero or negative value in a dimension.
Call arguments received:
• inputs=tf.Tensor(shape=(None, 32, 32, 3), dtype=float32)
• training=True
• mask=None
</code></pre>
<p>Can someone assist me with this? Thanks in advance</p>
|
<p>You probably need to upscale the images you are feeding into the CNN, or remove some of the downsampling layers. As the error message says, the 32x32 input is reduced to a zero-or-negative dimension by the repeated downsampling in the conv stack, so the model cannot be built. Something like 360x360x3 is a standard input size.</p>
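<p>A hedged sketch of the upscaling fix, assuming <code>train_ds</code> and <code>valid_ds</code> yield <code>(image, label)</code> pairs; the target size of 224 is an arbitrary choice large enough to survive the downsampling:</p>
<pre><code>import tensorflow as tf

TARGET = 224  # assumed target size; pick any size your conv stack can handle

def upscale(image, label):
    return tf.image.resize(image, (TARGET, TARGET)), label

train_ds = train_ds.map(upscale)
valid_ds = valid_ds.map(upscale)
</code></pre>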
|
python|tensorflow
| 0
|
2,284
| 61,860,819
|
How to convert string entries in pandas dataframe to integers?
|
<p>The pandas data frame that I was working with has a column with string entries and I wish to convert it to integers.</p>
<p>The column is called <code>diagnosis</code> and each row has a value of either <code>M</code> or <code>B</code>. I wanted to convert all <code>M</code> to 1 and all <code>B</code> to 0 with the following code</p>
<pre><code>for row in data['diagnosis']:
if row == 'M':
row = 1
else:
row = 0
</code></pre>
<p>But it does not change anything?</p>
|
<p>The loop does not change anything because iterating over the column yields plain values; reassigning <code>row</code> never writes back to the dataframe. You can use <code>map</code> to do the replacement in one vectorized step:</p>
<pre><code>data['diagnosis'] = data['diagnosis'].map({'M':1,'B':0})
</code></pre>
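<p>Note that <code>map</code> returns NaN for any value not present in the mapping. An equivalent hedged sketch with NumPy, which instead treats everything that is not 'M' as 0:</p>
<pre><code>import numpy as np

# 1 where the diagnosis is 'M', 0 everywhere else (including 'B')
data['diagnosis'] = np.where(data['diagnosis'] == 'M', 1, 0)
</code></pre>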
|
python|pandas|dataframe
| 2
|
2,285
| 61,823,299
|
Extract instances of a patterned text sequence from a very long string using Python
|
<p>I am working with this <a href="https://www.senate.gov/artandhistory/history/resources/pdf/chronlist.pdf" rel="nofollow noreferrer">PDF document</a> of about 80 pages. It lists all 1,984 US senators from US history in chronological order. I have extracted the text of the document using PyPDF2. The text is now assigned to a variable as a single, long string. Here is a segment:</p>
<hr>
<pre><code>Silsbee, Nathaniel (Adams/AJ-MA) March 3, 1835 281 November 8 Rodney, Daniel (Adams-DE) January 12, 1827 282 November 9 Bateman, Ephraim (Adams-NJ) January 12, 1829 283 November 27 McKinley, John (J-AL) March 3, 1831 284 (Served again 1837) November 29 Smith, William (R-SC) March 3, 1831 (First served 1816-1823) * * * 1827 * * * January 12 Ridgely, Henry M. (J-DE) March 3, 1829 285 TWENTIETH CONGRESS March 4, 1827, TO MARCH 3, 1829 March 4 Barnard, Isaac D. (J-PA) December 6, 1831 286 Ellis, Powhatan (J-MS) July 16, 1832 (First served 1825-1826) Foot, Samuel A. (Adams/AJ-CT) March 3, 1833 287 McLane, Louis (J-DE) April 16, 1829 288 Parris, Albion K. (J-ME) August 26, 1828 289 Tyler, John (J/AJ-VA) February 29, 1836 290 December 17 Webster, Daniel (Adams/AJ/W-MA) February 22, 1841 291 (Served again 1845) * * * 1828 * * * November 7 Prince, Oliver H. (J-GA) March 3, 1829 292 Start of Initial Senate Service Name/Party End of Service Rank 15 December 10 Burnet, Jacob (Adams/AJ-OH) March 3, 1831 293 December 15 Iredell, James (J-NC) March 3, 1831 294 * * * 1829 * * * January 15 Dudley, Charles E. (J-NY) March 3, 1833 295 Holmes, John (Adams/AJ-ME) March 3, 1833 (First served 1820-1827) January 30 Dickerson, Mahlon (R/CR/J-NJ) March 3, 1833 (First served 1817-1829)
</code></pre>
<hr>
<p>Notice that the name, party affiliation, state, end of service date, and rank of each senator normally appear in a patterned segment. Here are some examples:</p>
<hr>
<pre><code>Rodney, Daniel (Adams-DE) January 12, 1827 282
Bateman, Ephraim (Adams-NJ) January 12, 1829 283
Burnet, Jacob (Adams/AJ-OH) March 3, 1831 293
</code></pre>
<hr>
<p>But there are also some exceptions, such as these:</p>
<hr>
<pre><code>Smith, William (R-SC) March 3, 1831 (First served 1816-1823)
Holmes, John (Adams/AJ-ME) March 3, 1833 (First served 1820-1827)
</code></pre>
<hr>
<p>In these cases the rank is given when the senator is first listed.</p>
<p>My question is, how can I extract the basic information on each senator (name, party, state, end of service, rank)? I believe I need to loop through the string, finding all instances of a regular expression that captures the patterns, and assign each instance to a list within a list. The end result would be a list of lists that I could transform into a dataframe in pandas.</p>
|
<p>You can try the following approach:</p>
<pre><code>d = re.split(r"([a-zA-Z]+\,\s+[a-zA-Z]+)", d)[1:]
d = [", ".join(d[i:i+2]) for i in range(0, len(d), 2)]
d = [re.findall(r'^(.*?)\s+\((.*?)\)\s+(.*?\d{4})(.*?)$', _)[0] for _ in d]
df = pd.DataFrame(d, columns=["name", "party/state", "date_to_convert", "rank_to_clean"])
df["name"] = df["name"].str.replace(r'\,$','')
</code></pre>
<p><strong>Workflow</strong>:</p>
<ol>
<li><p>Split the input on <code>,</code> surrounded by two <em>Names</em>:</p>
<ol>
<li>Use the regex <code>[a-zA-Z]+\,\s+[a-zA-Z]+</code></li>
<li>Surround the regex by parenthesis because the split key (e.g. the names) need to be kept</li>
<li>Apply regex using <a href="https://docs.python.org/fr/3/library/re.html#re.split" rel="nofollow noreferrer"><code>re.split</code></a></li>
<li>Remove first element that is empty space</li>
</ol></li>
<li><p>Here we have all the lines, but each is split into two elements, so we need to aggregate consecutive pairs. The topic <a href="https://stackoverflow.com/questions/14681609/create-a-2d-list-out-of-1d-list"><em>Create a 2D list out of 1D list</em></a> answers this step. </p></li>
<li><p>Now the content can be extracted from each row. Here, we use <a href="https://docs.python.org/fr/3/library/re.html#re.findall" rel="nofollow noreferrer"><code>re.findall</code></a> with regex <code>(.*?)\s+\((.*?)\)\s+(.*?\d{4})(.*?)$</code>. There are 4 groups:</p>
<ul>
<li>Group 1 selects everything till a parenthesis: <code>(.*?)\s+\(</code></li>
<li>Group 2 selects everything till the closing parenthesis: <code>(.*?)\)</code></li>
<li>Group 3 selects everything till a year (e.g. 4 numbers): <code>(.*?\d{4})</code></li>
<li>Group 4 selects everything till the end: <code>(.*?)$</code></li>
</ul></li>
</ol>
<p>For a better understanding of the regex, I advise you to use an online regex tester such as <a href="https://regex101.com/" rel="nofollow noreferrer">regex101.com</a> to visualize the results.</p>
<ol start="4">
<li>Create the dataframe</li>
</ol>
<p>Next steps, apply more specific cleanings and separation on dataset such as removing comma on name with:</p>
<pre><code>df["name"] = df["name"].str.replace(r'\,$','')
</code></pre>
<hr>
<p><strong>Code + illustration</strong></p>
<pre><code># import module
import pandas as pd
import re
d = "Silsbee, Nathaniel (Adams/AJ-MA) March 3, 1835 281 November 8 Rodney, Daniel (Adams-DE) January 12, 1827 282 November 9 Bateman, Ephraim (Adams-NJ) January 12, 1829 283 November 27 McKinley, John (J-AL) March 3, 1831 284 (Served again 1837) November 29 Smith, William (R-SC) March 3, 1831 (First served 1816-1823) * * * 1827 * * * January 12 Ridgely, Henry M. (J-DE) March 3, 1829 285 TWENTIETH CONGRESS March 4, 1827, TO MARCH 3, 1829 March 4 Barnard, Isaac D. (J-PA) December 6, 1831 286 Ellis, Powhatan (J-MS) July 16, 1832 (First served 1825-1826) Foot, Samuel A. (Adams/AJ-CT) March 3, 1833 287 McLane, Louis (J-DE) April 16, 1829 288 Parris, Albion K. (J-ME) August 26, 1828 289 Tyler, John (J/AJ-VA) February 29, 1836 290 December 17 Webster, Daniel (Adams/AJ/W-MA) February 22, 1841 291 (Served again 1845) * * * 1828 * * * November 7 Prince, Oliver H. (J-GA) March 3, 1829 292 Start of Initial Senate Service Name/Party End of Service Rank 15 December 10 Burnet, Jacob (Adams/AJ-OH) March 3, 1831 293 December 15 Iredell, James (J-NC) March 3, 1831 294 * * * 1829 * * * January 15 Dudley, Charles E. (J-NY) March 3, 1833 295 Holmes, John (Adams/AJ-ME) March 3, 1833 (First served 1820-1827) January 30 Dickerson, Mahlon (R/CR/J-NJ) March 3, 1833 (First served 1817-1829)"
# Step 1
d = re.split(r"([a-zA-Z]+\,\s+[a-zA-Z]+)", d)[1:]
print(d)
# ['Silsbee, Nathaniel', ' (Adams/AJ-MA) March 3, 1835 281 November 8 ',
# 'Rodney, Daniel', ' (Adams-DE) January 12, 1827 282 November 9 ',
# 'Bateman, Ephraim', ' (Adams-NJ) January 12, 1829 283 November 27 ',
# 'McKinley, John', ' (J-AL) March 3, 1831 284 (Served again 1837) November 29 ',
# 'Smith, William', ' (R-SC) March 3, 1831 (First served 1816-1823) * * * 1827 * * * January 12 ',
# 'Ridgely, Henry', ' M. (J-DE) March 3, 1829 285 TWENTIETH CONGRESS March 4, 1827, TO MARCH 3, 1829 March 4 ',
# 'Barnard, Isaac', ' D. (J-PA) December 6, 1831 286 ',
# 'Ellis, Powhatan', ' (J-MS) July 16, 1832 (First served 1825-1826) ',
# 'Foot, Samuel', ' A. (Adams/AJ-CT) March 3, 1833 287 ',
# 'McLane, Louis', ' (J-DE) April 16, 1829 288 ',
# 'Parris, Albion', ' K. (J-ME) August 26, 1828 289 ',
# 'Tyler, John', ' (J/AJ-VA) February 29, 1836 290 December 17 ',
# 'Webster, Daniel', ' (Adams/AJ/W-MA) February 22, 1841 291 (Served again 1845) * * * 1828 * * * November 7 ',
# 'Prince, Oliver', ' H. (J-GA) March 3, 1829 292 Start of Initial Senate Service Name/Party End of Service Rank 15 December 10 ',
# 'Burnet, Jacob', ' (Adams/AJ-OH) March 3, 1831 293 December 15 ',
# 'Iredell, James', ' (J-NC) March 3, 1831 294 * * * 1829 * * * January 15 ',
# 'Dudley, Charles', ' E. (J-NY) March 3, 1833 295 ',
# 'Holmes, John', ' (Adams/AJ-ME) March 3, 1833 (First served 1820-1827) January 30 ',
# 'Dickerson, Mahlon', ' (R/CR/J-NJ) March 3, 1833 (First served 1817-1829)']
# Step 2
d = [", ".join(d[i:i+2]) for i in range(0, len(d), 2)]
print(d)
# ['Silsbee, Nathaniel, (Adams/AJ-MA) March 3, 1835 281 November 8 ',
# 'Rodney, Daniel, (Adams-DE) January 12, 1827 282 November 9 ',
# 'Bateman, Ephraim, (Adams-NJ) January 12, 1829 283 November 27 ',
# 'McKinley, John, (J-AL) March 3, 1831 284 (Served again 1837) November 29 ',
# 'Smith, William, (R-SC) March 3, 1831 (First served 1816-1823) * * * 1827 * * * January 12 ',
# 'Ridgely, Henry, M. (J-DE) March 3, 1829 285 TWENTIETH CONGRESS March 4, 1827, TO MARCH 3, 1829 March 4 ',
# 'Barnard, Isaac, D. (J-PA) December 6, 1831 286 ',
# 'Ellis, Powhatan, (J-MS) July 16, 1832 (First served 1825-1826) ',
# 'Foot, Samuel, A. (Adams/AJ-CT) March 3, 1833 287 ', 'McLane, Louis, (J-DE) April 16, 1829 288 ',
# 'Parris, Albion, K. (J-ME) August 26, 1828 289 ',
# 'Tyler, John, (J/AJ-VA) February 29, 1836 290 December 17 ',
# 'Webster, Daniel, (Adams/AJ/W-MA) February 22, 1841 291 (Served again 1845) * * * 1828 * * * November 7 ',
# 'Prince, Oliver, H. (J-GA) March 3, 1829 292 Start of Initial Senate Service Name/Party End of Service Rank 15 December 10 ',
# 'Burnet, Jacob, (Adams/AJ-OH) March 3, 1831 293 December 15 ',
# 'Iredell, James, (J-NC) March 3, 1831 294 * * * 1829 * * * January 15 ',
# 'Dudley, Charles, E. (J-NY) March 3, 1833 295 ',
# 'Holmes, John, (Adams/AJ-ME) March 3, 1833 (First served 1820-1827) January 30 ',
# 'Dickerson, Mahlon, (R/CR/J-NJ) March 3, 1833 (First served 1817-1829)']
# Step 3
d = [re.findall(r'^(.*?)\s+\((.*?)\)\s+(.*?\d{4})(.*?)$', _)[0] for _ in d]
[print(_) for _ in d]
# ('Silsbee, Nathaniel,', 'Adams/AJ-MA', 'March 3, 1835', ' 281 November 8 ')
# ('Rodney, Daniel,', 'Adams-DE', 'January 12, 1827', ' 282 November 9 ')
# ('Bateman, Ephraim,', 'Adams-NJ', 'January 12, 1829', ' 283 November 27 ')
# ('McKinley, John,', 'J-AL', 'March 3, 1831', ' 284 (Served again 1837) November 29 ')
# ('Smith, William,', 'R-SC', 'March 3, 1831', ' (First served 1816-1823) * * * 1827 * * * January 12 ')
# ('Ridgely, Henry, M.', 'J-DE', 'March 3, 1829', ' 285 TWENTIETH CONGRESS March 4, 1827, TO MARCH 3, 1829 March 4 ')
# ('Barnard, Isaac, D.', 'J-PA', 'December 6, 1831', ' 286 ')
# ('Ellis, Powhatan,', 'J-MS', 'July 16, 1832', ' (First served 1825-1826) ')
# ('Foot, Samuel, A.', 'Adams/AJ-CT', 'March 3, 1833', ' 287 ')
# ('McLane, Louis,', 'J-DE', 'April 16, 1829', ' 288 ')
# ('Parris, Albion, K.', 'J-ME', 'August 26, 1828', ' 289 ')
# ('Tyler, John,', 'J/AJ-VA', 'February 29, 1836', ' 290 December 17 ')
# ('Webster, Daniel,', 'Adams/AJ/W-MA', 'February 22, 1841', ' 291 (Served again 1845) * * * 1828 * * * November 7 ')
# ('Prince, Oliver, H.', 'J-GA', 'March 3, 1829', ' 292 Start of Initial Senate Service Name/Party End of Service Rank 15 December 10 ')
# ('Burnet, Jacob,', 'Adams/AJ-OH', 'March 3, 1831', ' 293 December 15 ')
# ('Iredell, James,', 'J-NC', 'March 3, 1831', ' 294 * * * 1829 * * * January 15 ')
# ('Dudley, Charles, E.', 'J-NY', 'March 3, 1833', ' 295 ')
# ('Holmes, John,', 'Adams/AJ-ME', 'March 3, 1833', ' (First served 1820-1827) January 30 ')
# ('Dickerson, Mahlon,', 'R/CR/J-NJ', 'March 3, 1833', ' (First served 1817-1829)')
# Step 4
df = pd.DataFrame(d, columns=["name", "party/state", "date_to_convert", "rank_to_clean"])
df["name"] = df["name"].str.replace(r'\,$','')
print(df)
# name party/state date_to_convert rank_to_clean
# 0 Silsbee, Nathaniel Adams/AJ-MA March 3, 1835 281 November 8
# 1 Rodney, Daniel Adams-DE January 12, 1827 282 November 9
# 2 Bateman, Ephraim Adams-NJ January 12, 1829 283 November 27
# 3 McKinley, John J-AL March 3, 1831 284 (Served again 1837) November 29
# 4 Smith, William R-SC March 3, 1831 (First served 1816-1823) * * * 1827 * * * ...
# 5 Ridgely, Henry, M. J-DE March 3, 1829 285 TWENTIETH CONGRESS March 4, 1827, TO M...
# 6 Barnard, Isaac, D. J-PA December 6, 1831 286
# 7 Ellis, Powhatan J-MS July 16, 1832 (First served 1825-1826)
# 8 Foot, Samuel, A. Adams/AJ-CT March 3, 1833 287
# 9 McLane, Louis J-DE April 16, 1829 288
# 10 Parris, Albion, K. J-ME August 26, 1828 289
# 11 Tyler, John J/AJ-VA February 29, 1836 290 December 17
# 12 Webster, Daniel Adams/AJ/W-MA February 22, 1841 291 (Served again 1845) * * * 1828 * * * ...
# 13 Prince, Oliver, H. J-GA March 3, 1829 292 Start of Initial Senate Service Name/...
# 14 Burnet, Jacob Adams/AJ-OH March 3, 1831 293 December 15
# 15 Iredell, James J-NC March 3, 1831 294 * * * 1829 * * * January 15
# 16 Dudley, Charles, E. J-NY March 3, 1833 295
# 17 Holmes, John Adams/AJ-ME March 3, 1833 (First served 1820-1827) January 30
# 18 Dickerson, Mahlon R/CR/J-NJ March 3, 1833 (First served 1817-1829)
</code></pre>
|
python|regex|pandas|pdf
| 2
|
2,286
| 61,795,444
|
Slice multi-index pandas dataframe by date
|
<p>Say I have the following multi-index dataframe:</p>
<pre><code>arrays = [np.array(['bar', 'bar', 'bar', 'bar', 'foo', 'foo', 'foo', 'foo']),
pd.to_datetime(['2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04', '2020-01-01', '2020-01-02', '2020-01-03', '2020-01-04'])]
df = pd.DataFrame(np.zeros((8, 4)), index=arrays)
0 1 2 3
bar 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 0.0 0.0 0.0 0.0
2020-01-04 0.0 0.0 0.0 0.0
foo 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 0.0 0.0 0.0 0.0
2020-01-04 0.0 0.0 0.0 0.0
</code></pre>
<p>How do I select only the part of this dataframe where the first index <code>level = 'bar'</code>, and <code>date > 2020.01.02</code>, such that I can add 1 to this part?</p>
<p>To be clearer, the expected output would be:</p>
<pre><code> 0 1 2 3
bar 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 1.0 1.0 1.0 1.0
2020-01-04 1.0 1.0 1.0 1.0
foo 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 0.0 0.0 0.0 0.0
2020-01-04 0.0 0.0 0.0 0.0
</code></pre>
<p>I managed slicing it according to the first index:</p>
<pre><code>df.loc['bar']
</code></pre>
<p>But then I am not able to apply the condition on the date.</p>
|
<p>It is possible to compare each index level separately and then set <code>1</code>; the <code>:</code> selects all columns in <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>DataFrame.loc</code></a>:</p>
<pre><code>m1 = df.index.get_level_values(0) =='bar'
m2 = df.index.get_level_values(1) > '2020-01-02'
df.loc[m1 & m2, :] = 1
print (df)
0 1 2 3
bar 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 1.0 1.0 1.0 1.0
2020-01-04 1.0 1.0 1.0 1.0
foo 2020-01-01 0.0 0.0 0.0 0.0
2020-01-02 0.0 0.0 0.0 0.0
2020-01-03 0.0 0.0 0.0 0.0
2020-01-04 0.0 0.0 0.0 0.0
</code></pre>
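<p>An alternative sketch with <code>pd.IndexSlice</code>, assuming the index is sorted (note that at daily resolution, <code>> 2020-01-02</code> is the same as <code>>= 2020-01-03</code>):</p>
<pre><code>df = df.sort_index()  # slicing a MultiIndex requires a lexsorted index
idx = pd.IndexSlice
df.loc[idx['bar', '2020-01-03':], :] += 1
</code></pre>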
|
python|pandas|dataframe|slice|multi-index
| 2
|
2,287
| 61,690,819
|
Accessing classes/models with Flask Sqlalchemy which I created through pandas.to_sql
|
<p>I am trying to develop a web app with Flask. What I am trying to do is:</p>
<ol>
<li>Get data from client in the form of a table</li>
<li>Use pandas to perform calculations on each row of the table and create multiple dfs and then append them to a single df</li>
<li>Then use to_sql to upload the consolidated df to a table in postgres database using Flask SQLalchemy
(I am fine uptil here)</li>
<li>Then I want to use Flask sqlalchemy to query and edit the table in the database.</li>
</ol>
<p>The problem is step 4. Since I am using to_sql function to create the table within the database, the table does not have any primary key.</p>
<p>When I use the following code to reflect classes:</p>
<pre><code>db.Model.metadata.reflect(db.engine)
class LeaseInfo(db.Model):
__table__ = db.Model.metadata.tables['lease_info']
def __repr__(self):
return self.DISTRICT
</code></pre>
<p>I get the following error:
<strong>sqlalchemy.exc.ArgumentError: Mapper mapped class LeaseInfo->lease_info could not assemble any primary key columns for mapped table 'lease_info'</strong></p>
<p>When I use the following code to access the classes:</p>
<pre><code>Base = automap_base()
Base.prepare(db.engine, reflect = True)
LeaseInfo = Base.classes.lease_info
</code></pre>
<p>I get the following error: <strong>line 212, in __getattr__
raise AttributeError(key)
AttributeError: lease_info</strong></p>
<p>I have seen suggestions which say that I define the classes first with all columns and then append with to_sql but I want to be able to do it more dynamically. </p>
|
<p>I ended up defining the classes (db.Model) within the app and then appending data to the tables with pd.to_sql, but would still like to know a shorter way if possible. </p>
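<p>For reference, a minimal sketch of that workaround; the columns beyond <code>DISTRICT</code> are hypothetical and should mirror the dataframe:</p>
<pre><code>class LeaseInfo(db.Model):
    __tablename__ = 'lease_info'
    id = db.Column(db.Integer, primary_key=True)  # gives the mapper its primary key
    DISTRICT = db.Column(db.String)
    # ... one db.Column per dataframe column

    def __repr__(self):
        return self.DISTRICT

db.create_all()
df.to_sql('lease_info', db.engine, if_exists='append', index=False)
</code></pre>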
|
python|pandas|flask-sqlalchemy
| 1
|
2,288
| 61,764,107
|
Detect sign changes in Pandas Dataframe
|
<p>I have a pandas dataframe that is datetime indexed and it looks like this:</p>
<pre>
Datetime
2020-05-11 14:00:00-03:00 0.097538
2020-05-11 14:30:00-03:00 -0.083788
2020-05-11 15:00:00-03:00 -0.074128
2020-05-11 15:30:00-03:00 0.059725
2020-05-11 16:00:00-03:00 0.041369
2020-05-11 16:30:00-03:00 0.034388
2020-05-12 10:00:00-03:00 0.006814
2020-05-12 10:30:00-03:00 -0.005308
2020-05-12 11:00:00-03:00 -0.036952
2020-05-12 11:30:00-03:00 -0.070307
2020-05-12 12:00:00-03:00 0.102004
2020-05-12 12:30:00-03:00 -0.139317
2020-05-12 13:00:00-03:00 -0.167589
2020-05-12 13:30:00-03:00 -0.179942
2020-05-12 14:00:00-03:00 0.182351
2020-05-12 14:30:00-03:00 -0.160736
2020-05-12 15:00:00-03:00 -0.150033
2020-05-12 15:30:00-03:00 -0.141862
2020-05-12 16:00:00-03:00 -0.121372
2020-05-12 16:30:00-03:00 -0.095990
Name: result_col, dtype: float64
</pre>
<p>My need is to mark the rows where it changes sign, from negative to positive and vice-versa. Any thoughts on how to achieve it?</p>
<p><strong>Edit</strong>:
I need +1 on the cross up and -1 on the cross down.</p>
|
<p>Let us try </p>
<pre><code>import numpy as np
np.sign(data).diff().ne(0)
</code></pre>
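<p>Per the edit (+1 on the cross up, -1 on the cross down), a small follow-up sketch: the diff of the signs is ±2 at a crossing, so halving it gives the marks (assumes no exact zeros in the data):</p>
<pre><code>import numpy as np

s = np.sign(data)
marks = s.diff().fillna(0).div(2)  # +1 cross up, -1 cross down, 0 otherwise
</code></pre>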
|
python|pandas
| 14
|
2,289
| 57,920,584
|
Gather a slice given an idex array
|
<p>I have an input tensor <code>params</code> of size <code>(BxHxWx200)</code> and an index array <code>idx</code> of size <code>(Bx1000x2)</code>. <code>idx</code> holds 1000 different locations in the <code>params</code> tensor, i.e. the last axis (the one of size 2) is an <code>(H, W)</code> location in <code>params</code>. I want to get the slice of <code>params</code> of size <code>(Bx1000x200)</code> that corresponds to the locations in <code>idx</code>. </p>
<p>In NumPy, it would be as simple as <code>params[idx]</code>. I am not sure how to do it in TensorFlow.</p>
<p>If I try <code>output = tf.gather_nd(params, idx)</code> I am getting a tensor of size <code>(Bx1000xWx200)</code>. Why is that and how can I fix it?</p>
|
<p>With a last dimension of size 2, <code>tf.gather_nd</code> indexes only the first two axes of <code>params</code> (batch and H), and the remaining axes (W and 200) are appended to the result - hence the <code>(Bx1000xWx200)</code> shape you observed. You need another index in the last dimension of <code>idx</code> to index the first axis of <code>params</code> explicitly:</p>
<pre><code>import tensorflow as tf
# Input data (can be dynamic shapes)
B, H, W = 10, 20, 30
params = tf.placeholder(tf.float32, [B, H, W, 200])
idx = tf.placeholder(tf.int32, [B, 1000, 2])
# Dimensions of idx
idx_s = tf.shape(idx, out_type=idx.dtype)
# Index for first dimension of params
r = tf.range(idx_s[0])
# Tile r to match the shape of idx
idx_b = tf.tile(r[:, tf.newaxis, tf.newaxis], (1, idx_s[1], 1))
# Make the complete index
idx_full = tf.concat([idx_b, idx], axis=-1)
# Gather the result
out = tf.gather_nd(params, idx_full)
print(out.shape)
# (10, 1000, 200)
</code></pre>
|
python|tensorflow
| 1
|
2,290
| 57,741,059
|
Write value (datetime) in pandas dataframe under diff condition
|
<p>I want to write the datetime of the first non-zero occurrence of grouped_measurement into the "Start_time" column, and the datetime of its last occurrence into the "End_time" column. If grouped_measurement is 0, "Start_time" and "End_time" should equal 0.</p>
<p>I have tried various diff() and fillna() options, but with no success. Here is my code:</p>
<pre><code>import pandas as pd
import numpy as np
import datetime
current_time=datetime.datetime.now()
L=[]
for i in range(22):
L.append(current_time+datetime.timedelta(milliseconds=(i*500)))
# Define input dataframe
df = {'value': [1,1,1,0,0,0,1,1,1,1,1,0,0,0,0,1,1,1,1,1,1,0],
'time': L}
df = pd.DataFrame(df,columns= ['value','time'])
# print("Dataframe is:\n",df)
print("Grouping data according to servo positions, please wait...")
df['grouped_measurement'] = df['value'].diff().fillna(df['value']).eq(1).cumsum().mask(df['value'] == 0, 0)
df['Start_time'] = df['grouped_measurement'].diff().fillna(df['time'])
df['End_time'] = df['grouped_measurement'].diff().fillna(df['time'])
print("Dataframe is:\n",df)
</code></pre>
<p>The actual result is:</p>
<pre><code> value time grouped_measurement Start_time End_time
0 1 2019-08-31 19:14:42.259304 1 1.567279e+18 1.567279e+18
1 1 2019-08-31 19:14:42.759304 1 0.000000e+00 0.000000e+00
2 1 2019-08-31 19:14:43.259304 1 0.000000e+00 0.000000e+00
3 0 2019-08-31 19:14:43.759304 0 -1.000000e+00 -1.000000e+00
4 0 2019-08-31 19:14:44.259304 0 0.000000e+00 0.000000e+00
5 0 2019-08-31 19:14:44.759304 0 0.000000e+00 0.000000e+00
6 1 2019-08-31 19:14:45.259304 2 2.000000e+00 2.000000e+00
7 1 2019-08-31 19:14:45.759304 2 0.000000e+00 0.000000e+00
8 1 2019-08-31 19:14:46.259304 2 0.000000e+00 0.000000e+00
9 1 2019-08-31 19:14:46.759304 2 0.000000e+00 0.000000e+00
10 1 2019-08-31 19:14:47.259304 2 0.000000e+00 0.000000e+00
11 0 2019-08-31 19:14:47.759304 0 -2.000000e+00 -2.000000e+00
12 0 2019-08-31 19:14:48.259304 0 0.000000e+00 0.000000e+00
13 0 2019-08-31 19:14:48.759304 0 0.000000e+00 0.000000e+00
14 0 2019-08-31 19:14:49.259304 0 0.000000e+00 0.000000e+00
15 1 2019-08-31 19:14:49.759304 3 3.000000e+00 3.000000e+00
16 1 2019-08-31 19:14:50.259304 3 0.000000e+00 0.000000e+00
17 1 2019-08-31 19:14:50.759304 3 0.000000e+00 0.000000e+00
18 1 2019-08-31 19:14:51.259304 3 0.000000e+00 0.000000e+00
19 1 2019-08-31 19:14:51.759304 3 0.000000e+00 0.000000e+00
20 1 2019-08-31 19:14:52.259304 3 0.000000e+00 0.000000e+00
21 0 2019-08-31 19:14:52.759304 0 -3.000000e+00 -3.000000e+00
</code></pre>
<p>while the expected output is the following:</p>
<pre><code> value time grouped_measurement Start_time End_time
0 1 2019-08-31 19:14:42.259304 1 2019-08-31 19:14:42.259304 2019-08-31 19:14:43.259304
1 1 2019-08-31 19:14:42.759304 1 2019-08-31 19:14:42.259304 2019-08-31 19:14:43.259304
2 1 2019-08-31 19:14:43.259304 1 2019-08-31 19:14:42.259304 2019-08-31 19:14:43.259304
3 0 2019-08-31 19:14:43.759304 0 0 0
4 0 2019-08-31 19:14:44.259304 0 0 0
5 0 2019-08-31 19:14:44.759304 0 0 0
6 1 2019-08-31 19:14:45.259304 2 2019-08-31 19:14:45.259304 2019-08-31 19:14:47.259304
7 1 2019-08-31 19:14:45.759304 2 2019-08-31 19:14:45.259304 2019-08-31 19:14:47.259304
8 1 2019-08-31 19:14:46.259304 2 2019-08-31 19:14:45.259304 2019-08-31 19:14:47.259304
9 1 2019-08-31 19:14:46.759304 2 2019-08-31 19:14:45.259304 2019-08-31 19:14:47.259304
10 1 2019-08-31 19:14:47.259304 2 2019-08-31 19:14:45.259304 2019-08-31 19:14:47.259304
11 0 2019-08-31 19:14:47.759304 0 0 0
12 0 2019-08-31 19:14:48.259304 0 0 0
13 0 2019-08-31 19:14:48.759304 0 0 0
14 0 2019-08-31 19:14:49.259304 0 0 0
15 1 2019-08-31 19:14:49.759304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
16 1 2019-08-31 19:14:50.259304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
17 1 2019-08-31 19:14:50.759304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
18 1 2019-08-31 19:14:51.259304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
19 1 2019-08-31 19:14:51.759304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
20 1 2019-08-31 19:14:52.259304 3 2019-08-31 19:14:49.759304 2019-08-31 19:14:52.259304
21 0 2019-08-31 19:14:52.759304 0 0 0
</code></pre>
|
<p>You are pretty close! Use groupby on the 'grouped_measurement' column you created.</p>
<pre><code>df['grouped_measurement'] = df['value'].diff().fillna(1).eq(1).cumsum().where(df['value'].ne(0))
result = (df.join(df.groupby('grouped_measurement')['time']
.agg([('Start_time','min'),('End_time','max')])
, on='grouped_measurement')
.fillna(0,downcast='infer'))
</code></pre>
<p>You might need <code>pandas 0.25</code> to use <code>.agg([('Start_time','min'),('End_time','max')]</code>.</p>
<p><strong>Edit</strong></p>
<p>Assuming time column is sorted, then the following method would not rely on groupby,</p>
<pre><code>label_start_end = df['value'].diff().fillna(1, downcast='infer')
df['Start_time'] = df['time'].where(label_start_end.eq(1)).ffill().where(df['value'].eq(1),0)
df['End_time'] = df['time'].where(label_start_end.eq(-1)).bfill().where(df['value'].eq(1),0)
</code></pre>
<p><strong>Edit 2 (No 0 at datetime column)</strong></p>
<pre><code>label_start_end = df['value'].diff().fillna(1, downcast='infer')
mask = df['value'].eq(1)
df['Start_time'] = df['time'].where(label_start_end.eq(1)).ffill().where(mask)
df['End_time'] = df['time'].where(label_start_end.eq(-1)).bfill().where(mask)
</code></pre>
|
python|pandas|datetime
| 2
|
2,291
| 54,839,768
|
Pandas Dataframe. Add multiple columns based on nonnull values of other columns
|
<p>My dataframe example.</p>
<pre><code>np.random.seed(66)
df = pd.DataFrame(
np.random.rand(5, 3),
columns=list('ABC'),
index=['R{}'.format(i) for i in range(5)]
)
df[df < .5] = None
df.head()
A B C
R0 NaN NaN NaN
R1 0.67 NaN NaN
R2 0.75 0.55 0.51
R3 NaN NaN 0.82
R4 NaN NaN 0.67
</code></pre>
<p>Solution for one column</p>
<pre><code>df['A_percent'] = (df.loc[df['A'].notnull(),['A']] * 100).astype(np.int32)
df.head()
A B C A_percent
R0 NaN NaN NaN NaN
R1 0.67 NaN NaN 67.0
R2 0.75 0.55 0.51 75.0
R3 NaN NaN 0.82 NaN
R4 NaN NaN 0.67 NaN
</code></pre>
<p>Everything breaks when I try the same for multiple columns</p>
<pre><code>df['A_percent', 'B_percent'] = (df.loc[df['A', 'B'].notnull(),['A', 'B']] * 100).astype(np.int32)
</code></pre>
<p>Can it be done at all in one step?</p>
|
<p>You can use <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.floor.html" rel="nofollow noreferrer"><code>numpy.floor</code></a>; the boolean mask can then be removed, because NaN values simply propagate through the arithmetic:</p>
<pre><code>df[['A_percent', 'B_percent']] = np.floor(df[['A', 'B']] * 100)
print (df)
A B C A_percent B_percent
R0 NaN NaN NaN NaN NaN
R1 0.679109 NaN NaN 67.0 NaN
R2 0.758416 0.557619 0.514803 75.0 55.0
R3 NaN NaN 0.829095 NaN NaN
R4 NaN NaN 0.678006 NaN NaN
</code></pre>
<p>Your solution can work if you first replace the missing values with some numeric value, e.g. <code>0</code>, to make the conversion to <code>integer</code> possible, and then restore the NaNs in the new columns with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.where.html" rel="nofollow noreferrer"><code>DataFrame.where</code></a>:</p>
<pre><code>mask = df[['A','B']].notnull()
df1 = (df[['A','B']].fillna(0)*100).astype(np.int32)
df[['A_percent', 'B_percent']] = df1.where(mask)
print (df)
A B C A_percent B_percent
R0 NaN NaN NaN NaN NaN
R1 0.679109 NaN NaN 67.0 NaN
R2 0.758416 0.557619 0.514803 75.0 55.0
R3 NaN NaN 0.829095 NaN NaN
R4 NaN NaN 0.678006 NaN NaN
</code></pre>
|
python|pandas|dataframe
| 2
|
2,292
| 55,060,553
|
How to replace all column values
|
<p>I want to replace the File_attribute and Region_attribute column values with '{}' (<a href="https://i.stack.imgur.com/bp1wj.jpg" rel="nofollow noreferrer">screenshot</a>).</p>
|
<p>Assuming the data is in a Pandas DataFrame object <code>df</code>,</p>
<pre><code>df['File_attribute'] = '{}'
df['Region_attribute'] = '{}'
</code></pre>
|
python-3.x|pandas
| 1
|
2,293
| 49,371,486
|
create training validation split using sklearn
|
<p>I have a training set consisting of X and Y; X is of shape (4000,32,1) and Y is of shape (4000,1).</p>
<p>I would like to create a training/validation set based on split. Here is what I have been trying to do</p>
<pre><code>from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit(test_size=0.1, random_state=23)
for train_index, valid_index in sss.split(X, Y):
X_train, X_valid = X[train_index], X[valid_index]
y_train, y_valid = Y[train_index], Y[valid_index]
</code></pre>
<p>Running the program gives the following error message related to the above code segment</p>
<pre><code>for train_index, valid_index in sss.split(X, Y):
ValueError: The least populated class in y has only 1 member, which is too few. The minimum number of groups for any class cannot be less than 2.
</code></pre>
<p>I am not very clear about the above error message, what's the right way to create a training/validation split for the training set as above?</p>
|
<p>It's a little bit weird, because I copy/pasted your code with sklearn's breast cancer dataset as follows:</p>
<pre><code>from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X, Y = cancer.data, cancer.target
from sklearn.model_selection import StratifiedShuffleSplit
sss = StratifiedShuffleSplit(test_size=0.1, random_state=23)
for train_index, valid_index in sss.split(X, Y):
X_train, X_valid = X[train_index], X[valid_index]
y_train, y_valid = Y[train_index], Y[valid_index]
</code></pre>
<p>Here <code>X.shape = (569, 30)</code> and <code>Y.shape = (569,)</code> and I had no error; for example <code>y_valid.shape = (57,)</code>, or one tenth of 569.</p>
<p>I suggest you reshape X into (4000, 32) (and likewise Y into (4000,)), because the trailing singleton dimension may confuse the splitter (I am using Python 2.7, by the way).</p>
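<p>Before that, it may also help to flatten Y and inspect the class counts directly, since the error says at least one class occurs only once, and such a class cannot appear in both splits (a hedged diagnostic sketch):</p>
<pre><code>import numpy as np

Y = Y.ravel()  # (4000, 1) -> (4000,)
labels, counts = np.unique(Y, return_counts=True)
print(labels[counts < 2])  # classes that cannot be stratified
</code></pre>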
<p>To answer your question, you can alternatively use train_test_split </p>
<pre><code>from sklearn.model_selection import train_test_split
</code></pre>
<p>which according to the help </p>
<blockquote>
<p>Split arrays or matrices into random train and test subsets. Quick utility that wraps input validation and <code>next(ShuffleSplit().split(X, y))</code>.</p>
</blockquote>
<p>Basically a wrapper of what you wanted to do. You can then specify the training and the test sizes, the random_state, if you want to stratify your data or to shuffle it etc.</p>
<p>It's easy to use for example: </p>
<pre><code>X_train, X_valid, y_train, y_valid = train_test_split(X,Y, test_size = 0.1, random_state=0)
</code></pre>
|
python|numpy|machine-learning|scikit-learn|sklearn-pandas
| 0
|
2,294
| 73,400,270
|
Got an Input to reshape is a tensor with 3368 values, but the requested shape has 2048 error while fine-tuning Roberta
|
<p>I have a csv file that has two input columns and one label column with multiple classes, which means I'm trying to do multi-class classification using a fine-tuned RoBERTa model. This is the structure of my csv file (<code>df</code>):</p>
<pre><code>text text2 label
murray returns scotland fold euan murray named People generally approve of dogs 3
concerns school diploma plan appeal I'll have you know I've written 4
</code></pre>
<p>I followed this <a href="https://huggingface.co/course/chapter3/2?fw=tf" rel="nofollow noreferrer">HuggingFace</a> tutorial and saw that they use <code>DatasetDict</code> so I transformed my csv file into a <code>DatasetDict</code> structure by</p>
<pre><code>train, test = train_test_split(df, test_size=0.2)
train_dataset = datasets.Dataset.from_dict(train)
test_dataset = datasets.Dataset.from_dict(test)
my_dataset_dict = datasets.DatasetDict({"train":train_dataset,"test":test_dataset})
"""
Received:
DatasetDict({
train: Dataset({
features: ['text', 'text2', 'label'],
num_rows: 1780
})
test: Dataset({
features: ['text', 'text2', 'label'],
num_rows: 445
})
})
"""
</code></pre>
<p>After this, I proceeded with tokenizing the data by doing</p>
<pre><code>MODEL_NAME = "roberta-base"
tokenizer = RobertaTokenizer.from_pretrained(MODEL_NAME)
def tokenize_function(dataset_x):
return tokenizer(dataset_x["text"], dataset_x["text2"], truncation=True)
tokenized_datasets = my_dataset_dict.map(tokenize_function)
</code></pre>
<p>I kept following the tutorial and proceeded further like</p>
<pre><code>data_collator = DataCollatorWithPadding(tokenizer=tokenizer, return_tensors="tf")
tf_train_dataset = tokenized_datasets["train"].to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids"],
label_cols=["labels"],
shuffle=True,
collate_fn=data_collator,
batch_size=8
)
tf_validation_dataset = tokenized_datasets["test"].to_tf_dataset(
columns=["attention_mask", "input_ids", "token_type_ids"],
label_cols=["labels"],
shuffle=False,
collate_fn=data_collator,
batch_size=8
)
</code></pre>
<p>I then initialize the model and compile it so I can fit later on. I have 5 classes, so I set <code>num_labels=5</code>:</p>
<pre><code>roberta_model = TFRobertaModel.from_pretrained(MODEL_NAME, num_labels=5)
roberta_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5), loss='sparse_categorical_crossentropy', metrics=["accuracy"])
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
</code></pre>
<p>However the last line throws me an error that says</p>
<blockquote>
<p>Node: 'model_4/tf_roberta_model_7/roberta/Reshape'
Input to reshape is a tensor with 3368 values, but the requested shape has 2048
[[{{node model_4/tf_roberta_model_7/roberta/Reshape}}]] [Op:__inference_train_function_113985]</p>
</blockquote>
<p>I'm still learning this, so I have no idea where this comes from. I followed the same steps as the tutorial I linked, except that they use BERT and two classes, while the only things I changed are the <code>MODEL_NAME</code> and having five classes. Do you know how I can fix this, what I should pay more attention to, and how I can avoid errors like this in the future?</p>
|
<p>I'm not really an expert, so maybe I'm wrong. However, shouldn't you use a model appropriate for your task (something like <code>TFRobertaForTokenClassification</code> or <code>TFRobertaForSequenceClassification</code>, etc.)? See the <a href="https://huggingface.co/transformers/v3.0.2/model_doc/roberta.html#tfrobertafortokenclassification" rel="nofollow noreferrer">documentation</a> to see all of them.</p>
<p>As far as I know, the bare <code>TFRobertaModel</code> has no head on top. Maybe that's the problem.</p>
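<p>A hedged sketch of that suggestion; <code>from_logits=True</code> because the classification head returns raw logits:</p>
<pre><code>import tensorflow as tf
from transformers import TFRobertaForSequenceClassification

model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=5)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
model.fit(tf_train_dataset, validation_data=tf_validation_dataset, epochs=3)
</code></pre>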
|
python|tensorflow|keras|huggingface-transformers|roberta
| 1
|
2,295
| 73,336,937
|
What is the difference between a list and a 1-D array? And what is the difference between a Series and a dictionary?
|
<p>What is the difference between a list and a 1-D array? And what is the difference between a Series and a dictionary?</p>
|
<p>Welcome to Stack Overflow :)
Lists and dictionaries are built-in Python data types for holding data. A 1-D NumPy array is a data structure similar to a list, except that an array is homogeneous (a single dtype) and supports fast vectorized operations, whereas a list can hold objects of mixed types.
Both lists and NumPy arrays can be one- or multidimensional.
1-D means that every element in a list or in an array can be accessed by one index,
e.g.:</p>
<pre class="lang-py prettyprint-override"><code>a = [1,2, 3]
# now if I want to access number 2 in the list I do it as
a[1]
# where as in 2-D, you need to pass to indexes in order to access an element:
b = [[1, 2, 3], [4, 5, 6]]
# here if I want to access number 2 I need to pass two positional index:
b[0][1]
</code></pre>
<p>A pandas Series is also a 1-D array that can hold any type of data, but it is labeled: each element carries an index label, like a single column of a table. For example:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
var = [1, 2, 3]
sr = pd.Series(var)
print(sr)
---output---
0    1
1    2
2    3
dtype: int64
</code></pre>
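<p>To contrast a Series with a dictionary (a hedged illustration): a Series maps labels to values like a dict, but it keeps order, supports vectorized arithmetic, and even allows duplicate labels:</p>
<pre class="lang-py prettyprint-override"><code>sr = pd.Series({'books': 20, 'pages': 200})
print(sr['books'])  # dict-like access by label -> 20
print(sr * 2)       # vectorized arithmetic, which a plain dict cannot do
</code></pre>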
<p>2-D tables are referred to as dataframes. You can use dictionaries to make dataframes as follows:</p>
<pre class="lang-py prettyprint-override"><code>data = {
"books": [20, 30, 40],
"pages": [200, 300, 400]
}
# Convert the data into a 2-D table aka dataframe
df = pd.DataFrame(data)
print(df)
----output----
books pages
0 20 200
1 30 300
2 40 400
</code></pre>
|
python-3.x|pandas|series
| 0
|
2,296
| 67,339,906
|
Convert a list to dictionary
|
<p>Hello, is there a way to split this dataset into a dictionary where the word is the key and the value is a list of numbers separated by commas?</p>
<pre><code>content =
['the 0.125 0.8542 1.253 \n',
'of 0.678 0.568 0.184 \n',
'that 0.565 0.897 0.267 \n']
</code></pre>
|
<pre><code>content = [
"the 0.125 0.8542 1.253 \n",
"of 0.678 0.568 0.184 \n",
"that 0.565 0.897 0.267 \n",
]
out = {c.split()[0]: [float(x) for x in c.split()[1:]] for c in content}
print(out)
</code></pre>
<p>Prints:</p>
<pre><code>{'the': [0.125, 0.8542, 1.253], 'of': [0.678, 0.568, 0.184], 'that': [0.565, 0.897, 0.267]}
</code></pre>
<hr />
<p>If you want strings as values:</p>
<pre><code>out = {c.split()[0]: ", ".join(c.split()[1:]) for c in content}
print(out)
</code></pre>
<p>Prints:</p>
<pre class="lang-py prettyprint-override"><code>{'the': '0.125, 0.8542, 1.253', 'of': '0.678, 0.568, 0.184', 'that': '0.565, 0.897, 0.267'}
</code></pre>
|
pandas|dataframe|txt
| 0
|
2,297
| 67,276,943
|
Editing a column to equal its first value
|
<p>I have some inconsistencies with some data that I would like to fix</p>
<pre><code>import pandas as pd
import numpy
import io
datastring = """
ride_id,start_time,end_time,driver_id,region,is_completed
4,5,2021-01-15 21:02:58,NaN,2,Cape Town,1
26,27,2021-03-31 21:51:00,NaN,2,San Francisco,1
0,1,2021-04-07 10:31:41,NaN,4,San Francisco,1
23,24,2021-02-20 06:37:07,NaN,4,San Francisco,1
7,8,2021-02-14 00:39:40,NaN,6,San Francisco,0
10,11,2021-02-15 11:23:45,NaN,6,San Francisco,1
3,4,2021-04-22 01:22:34,NaN,7,Paris,0
24,25,2021-02-19 13:01:37,NaN,7,Busan,1
19,20,2021-01-16 03:24:06,NaN,7,San Deigo,1
22,23,2021-03-02 04:07:27,NaN,13,San Francisco,1
28,29,2021-03-21 08:33:48,NaN,13,Los Angeles,1
8,9,2021-03-25 22:14:04,NaN,13,San Francisco,1
6,7,2021-02-22 15:31:42,NaN,13,Boston,1
"""
data = io.StringIO(datastring)
dddd = pd.read_csv(data, sep=",").reset_index()
dddd
</code></pre>
<p>I would like for the driver's region to be set to the first observation, so for example driver 2's second observation would mark his region as Cape Town, not San Francisco.</p>
<p>How would I go about doing that?</p>
|
<p><code>groupby</code> groups the observations by <code>driver_id</code>, and <code>transform('first')</code> broadcasts the first row's value to every row in its group.</p>
<pre><code>dddd['region'] = dddd.groupby('driver_id')['region'].transform('first')
</code></pre>
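<p>One caveat (a hedged sketch): <code>transform('first')</code> takes the first row in the current order, so if the observations might not be chronological, sort before grouping:</p>
<pre><code># Assumes start_time values compare chronologically (ISO-formatted strings here)
dddd = dddd.sort_values('start_time')
dddd['region'] = dddd.groupby('driver_id')['region'].transform('first')
</code></pre>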
|
python|pandas
| 1
|
2,298
| 67,205,527
|
Append dataframe columns in a loop to yield a single dataframe
|
<p>I wrote code to extract data from a csv and put them into a dataframe and sort them after. The code looks as such:</p>
<pre><code>def highest_value_sorter(value):
sorted_df = df_result[value].astype('float64').sort_values(ascending=False)
sorted_df = sorted_df.head(10).to_frame().reset_index()
return sorted_df
sorted_df = pd.DataFrame(data=[values])
for value in values:
sorted_tmp_df = highest_value_sorter(value)
sorted_tmp_df = sorted_tmp_df.drop(columns=['index'])
</code></pre>
<p>sorted_tmp_df in my code yields the following result in a loop:</p>
<pre><code> apples
0 922640.524589
1 862396.590682
2 848624.249550
oranges
0 2.394991e+11
1 1.875155e+11
2 6.409508e+10
bananas
0 1.852440e+08
1 6.143871e+07
2 5.757801e+07
</code></pre>
<p>my goal is to get all of these into one dataframe as such:</p>
<pre><code> apples oranges
0 922640.524589 862396.590682
1 862396.590682 5.757801e+07
2 5.757801e+07 922640.524589
</code></pre>
<p>So far I've tried .join and .append as such: sorted_df = sorted_df.append(sorted_tmp_df)/sorted_df = sorted_df.join(sorted_tmp_df) and neither seem to work. Any tips would help, thanks!</p>
|
<p>You can use <a href="https://pandas.pydata.org/docs/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>pandas.concat()</code></a> to concatenate a list of dataframes along the columns by setting <code>axis</code> to 1.</p>
<pre class="lang-py prettyprint-override"><code>dfs = []
for value in values:
sorted_tmp_df = highest_value_sorter(value)
sorted_tmp_df = sorted_tmp_df.drop(columns=['index'])
dfs.append(sorted_tmp_df)
df_ = pd.concat(dfs, axis=1)
</code></pre>
|
python|pandas
| 2
|
2,299
| 60,045,913
|
Installing numpy before using numpy.distutils.core.setup
|
<p>I am using <code>numpy.distutils</code> to set up a package (mypackage) that has a Fortran module. The problem is that if I do <code>pip install mypackage</code> on an environment that does not have numpy, I get the following error:</p>
<blockquote>
<p>ModuleNotFoundError: No module named 'numpy'</p>
</blockquote>
<p>The easy solution is to ask users (if I manage to have any) to <code>pip install numpy</code> before they install my package, but I do not think this is a very <em>elegant</em> solution.</p>
<p>I came up with the idea of calling <code>setuptools.setup</code> with only <code>setup_requires=['numpy']</code> before I import numpy and it seems to work well. This is my <code>setup.py</code>:</p>
<pre><code>import setuptools
setuptools.setup(
setup_requires=[
'numpy'
],)
from numpy.distutils.core import setup, Extension
mod = Extension(name='mypackage.amodule', sources=['source/a.f90'])
setup(name='mypackage',
packages=['mypackage'],
ext_modules=[mod],)
</code></pre>
<p>I honestly don't fully understand what it implies to call an empty <code>setup()</code> (no name, no package). <strong>Is this a good solution? Is this somehow a bad practice?</strong> </p>
|
<p>It is a common issue. How to install a <em>build-time</em> dependency? You might want to use a <code>pyproject.toml</code> file and take advantage of the <code>build-system</code> feature. See <a href="https://www.python.org/dev/peps/pep-0517/" rel="nofollow noreferrer">PEP517</a>. And an example here:</p>
<pre><code>[build-system]
build-backend = "setuptools.build_meta"
requires = ["setuptools", "numpy"]
</code></pre>
<p>Use the <a href="https://pypi.org/project/pep517/" rel="nofollow noreferrer"><code>pep517</code> tool</a> to build the distributions (<em>sdist</em> and <em>wheel</em>).</p>
|
python|numpy|installation|setuptools|f2py
| 2
|