| Unnamed: 0 (int64, 0 to 378k) | id (int64, 49.9k to 73.8M) | title (string, 15 to 150 chars) | question (string, 37 to 64.2k chars) | answer (string, 37 to 44.1k chars) | tags (string, 5 to 106 chars) | score (int64, -10 to 5.87k) |
|---|---|---|---|---|---|---|
377,000
| 36,519,546
|
Advice needed: convert objects of dataframe in pandas
|
<p>I have the following dataframe whose data types I am trying to convert.</p>
<pre><code>In [5]:
df = pd.io.json.json_normalize(data)
df.head()
Out[5]:
a b c d e f g
2014-09-10 5.38 5.45 5.35 1769 10000002 34 6651569991
2014-09-11 5.44 5.48 5.38 1863 10000002 34 8147338425
2014-09-12 5.35 5.45 5.32 1792 10000002 34 10549259297
2014-09-13 5.41 5.48 5.3099 2136 10000002 34 9408246021
2014-09-14 5.43 5.47 5.39 2174 10000002 34 9385610951
In [6]:
df.dtypes
Out[6]:
a object
b object
c object
d object
e object
f object
g object
dtype: object
</code></pre>
<p>Before the pandas 0.17.0 update I was using the solution
<code>df = df.convert_objects(convert_numeric=True)</code>, which worked great but is now deprecated. Now I am trying to use <code>pd.to_numeric(df, errors='ignore')</code> but get the error <code>arg must be a list, tuple, 1-d array, or Series</code>.</p>
<p>After playing around I managed to come up with the following solution, where I use a pandas Series and iterate over each column to get a one-dimensional array suitable for conversion:</p>
<p><code>for col in df:
    df[col] = pd.to_numeric(pd.Series(df[col]), errors='ignore')</code></p>
<p>Is this the best way to do the conversion, or is there a more elegant solution? I am asking as I'm still getting my head around pandas, and perhaps it will be useful for someone else, as I was not able to find a clear answer.</p>
|
<p>Use the <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="noreferrer">apply</a> function.</p>
<pre><code>In [34]: df = df.apply(lambda x: pd.to_numeric(x), axis=0)
In [36]: df.dtypes
Out[36]:
a float64
b float64
c float64
d int64
e int64
f int64
g int64
dtype: object
</code></pre>
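<p>Equivalently (a small aside, not part of the original answer), <code>apply</code> can take the function and its keyword arguments directly, so <code>df = df.apply(pd.to_numeric, errors='ignore')</code> converts what it can and leaves non-numeric columns untouched.</p>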
|
python-3.x|pandas|multidimensional-array|dataframe|type-conversion
| 5
|
377,001
| 36,440,158
|
Creating a Pandas Panel with Non-Unique Index Objects
|
<p>I have the following data file <code>LawSchoolSample.csv</code>:</p>
<pre><code>LSAT,GPA
622,3.23
542,2.83
579,3.24
653,3.12
606,3.09
</code></pre>
<p>I'd like to create a pandas dataframe, and then resample from this dataframe <code>B</code> times to form a pandas panel. Here's my attempt (critiques welcomed):</p>
<pre><code>import pandas as pd
df = pd.read_csv("LawSchoolSample.csv")
B = 3
resamples = {}
for i in range(0, B):
    name = "Resample {}".format(i)
    resamples[name] = df.sample(5, replace=True)
print(resamples)
resamples_panel = pd.Panel(resamples)
</code></pre>
<p>All is well except the last line: <code>resamples_panel = pd.Panel(resamples)</code>. The error is:</p>
<pre><code>pandas.core.index.InvalidIndexError: Reindexing only valid with uniquely valued Index objects
</code></pre>
<p>I have two questions, then:</p>
<ol>
<li>Is using a <code>panel</code> worth it for this? Or is whatever data structure <code>resamples</code> is good enough?</li>
<li>What's the preferred method of adding <code>dataframes</code> to a panel?</li>
</ol>
|
<p>The long term plan is to deprecate <code>Panel</code>, see pandas documentation:</p>
<blockquote>
<p>In a future version of pandas, we will be deprecating Panel and other
>2 ndim objects. In order to provide for continuity, all NDFrame objects have gained the .to_xarray() method in order to convert to
xarray objects, which has a pandas-like interface for > 2 ndim.
(GH11972)</p>
</blockquote>
<p><a href="http://pandas.pydata.org/pandas-docs/version/0.18.0/whatsnew.html#to-xarray" rel="nofollow">http://pandas.pydata.org/pandas-docs/version/0.18.0/whatsnew.html#to-xarray</a></p>
|
python|pandas
| 0
|
377,002
| 36,636,461
|
How to sum column values with cross-column matching?
|
<p>Python Pandas question:</p>
<p>I have a table with 3 columns: From_City, To_City, Trip_Count and 4 rows:</p>
<p><code>+-----------+---------+------------+
| From_City | To_City | Trip_Count |
+-----------+---------+------------+
| Berlin | London | 2 |
| London | Berlin | 3 |
| Sydney | Tokyo | 4 |
| Tokyo | Sydney | 6 |
+-----------+---------+------------+</code></p>
<p>I want to sum trips between cities into a new table that has 2 rows only:</p>
<p><code>+--------+------------+------------+
| City | Other_City | Trip_Count |
+--------+------------+------------+
| Berlin | London | 5 |
| Sydney | Tokyo | 10 |
+--------+------------+------------+</code></p>
<p>I couldn't figure out how to achieve this in Python (preferably in Pandas). Any suggestions? Thanks!</p>
<p>Note: Order between two cities doesn't matter. either Berlin-London or London-Berlin is fine.</p>
|
<pre><code>import pandas as pd

df = pd.DataFrame({'From_City': ['Berlin', 'London', 'Sydney', 'Tokyo'],
                   'To_City': ['London', 'Berlin', 'Tokyo', 'Sydney'],
                   'Trip_Count': [2, 3, 4, 6]})
print(df.apply(lambda x: sorted(x[:2].tolist()) + [x[2]], axis=1)
        .groupby(['From_City', 'To_City'])
        .sum())
</code></pre>
<p>result</p>
<pre><code> Trip_Count
From_City To_City
Berlin London 5
Sydney Tokyo 10
</code></pre>
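<p>A vectorized alternative (a sketch using the same <code>df</code>): sort the two city columns row-wise with numpy first, then group and sum.</p>
<pre><code>import numpy as np

# Make each city pair order-independent by sorting the two cities in every row.
df[['From_City', 'To_City']] = np.sort(df[['From_City', 'To_City']].values, axis=1)
print(df.groupby(['From_City', 'To_City'], as_index=False)['Trip_Count'].sum())
</code></pre>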
|
python|pandas
| 1
|
377,003
| 36,393,247
|
Pandas dataframe reshape
|
<p>I am playing with some data that looks like the below.</p>
<pre><code> song_id action_type ds
0 effb071415be51f11e845884e67c0f8c 1 14 days
1 f87ff481d85d2f95335ab602f38a7655 1 13 days
2 8a27d9a6c59628c991c154e8d93f412e 2 12 days
3 ecea5fe33e6817d09c395f2910479728 1 11 days
4 31a3d0420d89c9b121bb55dbdbbeda6b 1 13 days
5 096e604f7b152fad0246ae731ed8ca73 1 15 days
6 5a7d9d75b898cd1b19ef6941cc1ddccf 1 12 days
7 8a103bd3a3295fbf9b3c3bf7972db299 2 12 days
8 17e90c7b3b7ebbe4b47344fcfab2fa7a 1 11 days
9 8a27d9a6c59628c991c154e8d93f412e 2 13 days
</code></pre>
<p>I wonder how I can reshape it to get something like this:</p>
<pre><code>song_id day1_type1 day1_type2 day1_type3 day2_type1 ......... dayn_typen
(songid) (count of type1 on day1) (Nan if no count)...... (count of typen on dayn)
</code></pre>
<p>Now I use </p>
<pre><code>action.groupby(['song_id','ds','action_type']).action_type.sum()
</code></pre>
<p>and get something similar:</p>
<pre><code>song_id                           ds       action_type
00088cb1e6d740fcd42bc8ff2673c805  3 days   1              1
                                  4 days   1              1
                                  13 days  2              2
                                  27 days  1              1
                                  41 days  1              1
                                           2              2
                                  42 days  1              1
                                  67 days  2              2
                                  68 days  1              1
                                  75 days  2              2
0008de587f84d8c9491502c5a5c8b466  0 days   1              4
                                           2              4
                                  1 days   1              17
                                  4 days   1              6
                                  7 days   1              10
                                  8 days   1              5
</code></pre>
<p>How can I rebuild or reshape what I get into what I want?</p>
<p>Thanks in advance.</p>
|
<pre><code>>>> (df.groupby(['song_id', 'ds', 'action_type'])
       .action_type
       .sum()
       .unstack(['action_type', 'ds'])
       .fillna(0)
       .sortlevel(level=[0, 1], axis=1))
action_type                            1                                       2
ds                               11 days 12 days 13 days 14 days 15 days 12 days 13 days
song_id
096e604f7b152fad0246ae731ed8ca73       0       0       0       0       1       0       0
17e90c7b3b7ebbe4b47344fcfab2fa7a       1       0       0       0       0       0       0
31a3d0420d89c9b121bb55dbdbbeda6b       0       0       1       0       0       0       0
5a7d9d75b898cd1b19ef6941cc1ddccf       0       1       0       0       0       0       0
8a103bd3a3295fbf9b3c3bf7972db299       0       0       0       0       0       2       0
8a27d9a6c59628c991c154e8d93f412e       0       0       0       0       0       2       2
ecea5fe33e6817d09c395f2910479728       1       0       0       0       0       0       0
effb071415be51f11e845884e67c0f8c       0       0       0       1       0       0       0
</code></pre>
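<p>A small note for readers on recent pandas (an addition, not from the original answer): <code>sortlevel</code> has since been deprecated and removed; the equivalent final step is <code>.sort_index(level=[0, 1], axis=1)</code>.</p>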
|
python|pandas|data-processing
| 2
|
377,004
| 36,589,521
|
How to surface plot/3d plot from dataframe?
|
<p>I am new to <code>pandas</code> and <code>matplotlib</code>. I couldn't find an exact reference for plotting my <code>DataFrame</code>, whose schema is as follows:</p>
<pre><code>schema = StructType([
    StructField("x", IntegerType(), True),
    StructField("y", IntegerType(), True),
    StructField("z", IntegerType(), True)])
</code></pre>
<p>I'd like to plot a 3D graph w.r.t. x, y and z.</p>
<p>Here is the sample code I used:</p>
<pre><code>import matplotlib.pyplot as pltt
from mpl_toolkits.mplot3d import Axes3D  # needed for the 3d projection

dfSpark = sqlContext.createDataFrame(tupleRangeRDD, schema)  # reading as spark df
df = dfSpark.toPandas()
fig = pltt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(df['x'], df['y'], df['z'])
</code></pre>
<p>I am getting an empty graph plot. I'm definitely missing something. Any pointers?</p>
<p>-Thx</p>
<p>Request-1: Print df</p>
<pre><code>def print_full(x):
    pd.set_option('display.max_rows', len(x))
    print(x)
    pd.reset_option('display.max_rows')

print_full(df)
</code></pre>
<p>Result of the top rows:</p>
<pre><code> x y z
0 301 301 10
1 300 301 16
2 300 300 6
3 299 301 30
4 299 300 20
5 299 299 14
6 298 301 40
7 298 300 30
8 298 299 24
9 298 298 10
10 297 301 48
</code></pre>
|
<p><code>.plot_surface()</code> takes <code>2D</code> <code>arrays</code> as inputs, not <code>1D</code> <code>DataFrame</code> columns. This has been explained quite well <a href="https://stackoverflow.com/questions/9170838/surface-plots-in-matplotlib">here</a>, along with the below code that illustrates how one could arrive at the required format using <code>DataFrame</code> input. Reproduced below with minor modifications like additional comments.</p>
<p>Alternatively, however, there is <a href="http://matplotlib.org/mpl_toolkits/mplot3d/tutorial.html#tri-surface-plots" rel="noreferrer"><code>.plot_trisurf()</code></a> which uses <code>1D</code> inputs. I've added an example in the middle of the code.</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
## Matplotlib Sample Code using 2D arrays via meshgrid
X = np.arange(-5, 5, 0.25)
Y = np.arange(-5, 5, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X ** 2 + Y ** 2)
Z = np.sin(R)
fig = plt.figure()
ax = Axes3D(fig)
surf = ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title('Original Code')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/3yDEE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/3yDEE.png" alt="Original Matlab example"></a></p>
<pre><code>## DataFrame from 2D-arrays
x = X.reshape(1600)
y = Y.reshape(1600)
z = Z.reshape(1600)
df = pd.DataFrame({'x': x, 'y': y, 'z': z}, index=range(len(x)))

# Plot using `.plot_trisurf()` (needs a fresh 3D axis):
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_trisurf(df.x, df.y, df.z, cmap=cm.jet, linewidth=0.2)
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/ofFEb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ofFEb.png" alt="Using trisurf with only 1D input"></a></p>
<pre><code># 2D-arrays from DataFrame
from scipy.interpolate import griddata

x1 = np.linspace(df['x'].min(), df['x'].max(), len(df['x'].unique()))
y1 = np.linspace(df['y'].min(), df['y'].max(), len(df['y'].unique()))

"""
x, y via meshgrid for vectorized evaluation of
2 scalar/vector fields over 2-D grids, given
one-dimensional coordinate arrays x1, x2,..., xn.
"""
x2, y2 = np.meshgrid(x1, y1)

# Interpolate unstructured D-dimensional data.
z2 = griddata((df['x'], df['y']), df['z'], (x2, y2), method='cubic')
# Ready to plot
fig = plt.figure()
ax = fig.gca(projection='3d')
surf = ax.plot_surface(x2, y2, z2, rstride=1, cstride=1, cmap=cm.coolwarm,
linewidth=0, antialiased=False)
ax.set_zlim(-1.01, 1.01)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
fig.colorbar(surf, shrink=0.5, aspect=5)
plt.title('Meshgrid Created from 3 1D Arrays')
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/lEcU3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lEcU3.png" alt="Modified example using <code>DataFrame</code> input"></a></p>
|
python|numpy|pandas|matplotlib|dataframe
| 45
|
377,005
| 36,627,268
|
Python-control - step system
|
<p>When I create a system using the python-control package:</p>
<pre><code>import control
H = control.tf([1], [1])
</code></pre>
<p>And then want to <em>iteratively</em> simulate that system, how do I do it?</p>
<p>I know I can do this:</p>
<pre><code>T = np.arange(0, 10, 0.01)
u = np.sin(T)
y, t, x = control.lsim(H, u, T)
</code></pre>
<p>But what I want to do is this:</p>
<pre><code>Tstart = get_current_time()  # returns a scalar
T = get_current_time()
x = None
while T - Tstart < 100:
    u = get_next_input()  # returns a scalar
    T = get_current_time()
    y, x = control.step_system(H, u, T, x)
    do_something_with_output(y)
</code></pre>
<p>Is there some way I can do this? How else are you supposed to use a system developed with the control package to, you know, control something?</p>
|
<p>This is a great question. I am interested in this myself and asked a <a href="https://www.mathworks.com/matlabcentral/answers/513684-how-to-step-through-a-discrete-model-simulation-one-step-at-a-time?s_tid=srchtitle" rel="nofollow noreferrer">similar question</a> on the Mathworks forum a while ago and it's not currently possible in MATLAB.</p>
<p>The good news is, you can now do it in Python Control using the <a href="https://python-control.readthedocs.io/en/0.8.3/iosys.html" rel="nofollow noreferrer">iosys</a> module and the <a href="https://python-control.readthedocs.io/en/0.8.3/generated/control.input_output_response.html#control.input_output_response" rel="nofollow noreferrer"><code>input_output_response</code></a> function.</p>
<p>For a linear system as in your example, you use the <a href="https://python-control.readthedocs.io/en/0.8.3/generated/control.iosys.LinearIOSystem.html#control.iosys.LinearIOSystem" rel="nofollow noreferrer"><code>LinearIOSystem</code></a> class.</p>
<p>Here is my simulation example:</p>
<pre><code>import time
import numpy as np
import matplotlib.pyplot as plt
import control
from control import input_output_response
from control.iosys import LinearIOSystem
# Define system
# Continuous-time transfer function
G = control.tf([1], [2, 1])
# Convert to state-space representation
Gss = control.ss(G)
# Construct IO system
sys = LinearIOSystem(Gss, inputs='u', outputs='y')
def get_next_input(u, avg_time=0.5):
    """Function to simulate data acquisition"""
    t0 = time.time()
    wait_time = avg_time*(0.5 + np.random.rand())
    while time.time() - t0 < wait_time:
        pass
    if np.random.rand() > 0.8:
        u = u + np.random.randn()
    return u
# Simulate system in response to irregular inputs
t0 = time.time()
t = 0
y0 = 0
u = 0
x = np.zeros(sys.nstates)
np.random.seed(1)
sim_results = [[0, u, y0]]
print(sim_results[-1])
while t < 10:
    u_new, t_new = get_next_input(u), time.time() - t0
    # Simulation of system up to current time
    T_sim = [t, t_new]
    T_sim, Y_sim, X_sim = input_output_response(sys, T_sim, u, X0=x,
                                                return_x=True)
    sim_results.append([T_sim[-1], u_new, Y_sim[-1]])
    print(sim_results[-1])
    # Set current state and outputs to end of simulation period
    x = X_sim[0, -1]
    u = u_new
    t = t_new
sim_results = np.array(sim_results)
t = sim_results[:, 0]
u = sim_results[:, 1]
y = sim_results[:, 2]
# Plot inputs and outputs
plt.subplot(2, 1, 1)
plt.plot(t, y, 'o-')
plt.xlabel('t')
plt.ylabel('y(t)')
plt.grid()
plt.subplot(2, 1, 2)
plt.step(t, u, where='post')
plt.xlabel('t')
plt.ylabel('u(t)')
plt.grid()
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/aA51C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aA51C.png" alt="input-output time response of simulated system" /></a></p>
<p>In answer to your final question:</p>
<blockquote>
<p>How else are you supposed to use a system developed with the control package to, you know, control something?</p>
</blockquote>
<p>I think tools like the MATLAB control module and python-control are intended to be used for the analysis, design and simulation of control systems, not necessarily for their implementation. Depending on your application, usually the final control system implementation is done on specialized hardware and/or software or might be hand-coded in a low-level language like C for example. High-level languages like MATLAB and Python are arguably too unreliable and difficult to maintain/upgrade for them to be attractive solutions in any serious process control or real-world robotics application. But for hobbyists and laboratory experiments they are ideal and so I agree this kind of functionality is useful.</p>
|
python|numpy|scipy|python-control
| 1
|
377,006
| 36,548,736
|
TensorFlow: Unpooling
|
<p>Is there a TensorFlow-native function that does unpooling for deconvolutional networks?</p>
<p>I have written this in normal Python, but it gets complicated when I want to translate it to TensorFlow, as its objects do not even support item assignment at the moment, and I think this is a great inconvenience with TF.</p>
|
<p>I don't think there is an official unpooling layer yet, which is frustrating because you have to use image resize (bilinear interpolation or nearest neighbor), which is like an average unpooling operation and it's really slow. Look at the tf API in the 'image' section and you will find it.</p>
<p>TensorFlow has a <code>max_pool_with_argmax</code> operation where you get your max-pooled output as well as the activation map, which is nice as you could use it in an unpooling layer to preserve the 'lost' spatial information, but it seems there isn't such an unpooling operation that does it. I guess that they are planning to add it ... soon.</p>
<p>Edit: I found someone on a Google discussion a week ago who seems to have implemented something like this, but I personally haven't tried it yet.
<a href="https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66">https://github.com/ppwwyyxx/tensorpack/blob/master/tensorpack/models/pool.py#L66</a></p>
|
tensorflow|conv-neural-network|deconvolution
| 16
|
377,007
| 36,428,059
|
Why does the efficiency of numpy not scale
|
<p>I have been comparing the relative efficiency of numpy versus Python list comprehensions in multiplying together arrays of random numbers (Python 3.4/Spyder, Windows and Ubuntu).</p>
<p>As one would expect, for all but the smallest arrays numpy rapidly outperforms a list comprehension, and for increasing array length you get the expected sigmoid curve for performance. But the sigmoid is far from smooth, which I am puzzled by.</p>
<p>Obviously there is a certain amount of quantization noise for shorter array lengths, but I am getting unexpectedly noisy results, particularly under Windows. The figures are the mean of 100 runs of the various array lengths, so any transient effects should be smoothed out (or so I would have thought).</p>
<p><a href="https://i.stack.imgur.com/uquMS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/uquMS.png" alt="Performance characteristic"></a></p>
<pre><code>Numpy and Python list performance comparison
</code></pre>
<p>The figures below show the ratio of multiplying arrays of differing lengths using numpy against list comprehension.</p>
<pre><code>Array Length Windows Ubuntu
1 0.2 0.4
2 2.0 0.6
5 1.0 0.5
10 3.0 1.0
20 0.3 0.8
50 3.5 1.9
100 3.5 1.9
200 10.0 3.0
500 4.6 6.0
1,000 13.6 6.9
2,000 9.2 8.2
5,000 14.6 10.4
10,000 12.1 11.1
20,000 12.9 11.6
50,000 13.4 11.4
100,000 13.4 12.0
200,000 12.8 12.4
500,000 13.0 12.3
1,000,000 13.3 12.4
2,000,000 13.6 12.0
5,000,000 13.6 11.9
</code></pre>
<p>So I guess my question is: can anyone explain why the results, particularly under Windows, are so noisy? I have run the tests multiple times but the results always seem to be exactly the same.</p>
<p>UPDATE. At Reblochon Masque's suggestion I have disabled garbage collection, which smooths the Windows performance out somewhat, but the curves are still lumpy.</p>
<p><a href="https://i.stack.imgur.com/B0k4H.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B0k4H.png" alt="Updated performance characteristic without garbage collection"></a></p>
<pre><code>Numpy and Python list performance comparison
(Updated to remove garbage collection)
Array Length Windows Ubuntu
1 0.1 0.3
2 0.6 0.4
5 0.3 0.4
10 0.5 0.5
20 0.6 0.5
50 0.8 0.7
100 1.6 1.1
200 1.3 1.7
500 3.7 3.2
1,000 3.9 4.8
2,000 6.5 6.6
5,000 11.5 9.2
10,000 10.8 10.7
20,000 12.1 11.4
50,000 13.3 12.4
100,000 13.5 12.6
200,000 12.8 12.6
500,000 12.9 12.3
1,000,000 13.3 12.3
2,000,000 13.6 12.0
5,000,000 13.6 11.8
</code></pre>
<p>UPDATE </p>
<p>At @Sid's suggestion, I've restricted it to running on a single core on each machine. The curves are slightly smoother (particularly the Linux one), but still with the inflexions and some noise, particularly under Windows.</p>
<p>(It was actually the inflexions that I was originally going to post about, as they appear consistently in the same places.)</p>
<pre><code>Numpy and Python list performance comparison
(Garbage collection disabled and running on 1 CPU)
Array Length Windows Ubuntu
1 0.3 0.3
2 0.0 0.4
5 0.5 0.4
10 0.6 0.5
20 0.3 0.5
50 0.9 0.7
100 1.0 1.1
200 2.8 1.7
500 3.7 3.3
1,000 3.3 4.7
2,000 6.5 6.7
5,000 11.0 9.6
10,000 11.0 11.1
20,000 12.7 11.8
50,000 12.9 12.8
100,000 14.3 13.0
200,000 12.6 13.1
500,000 12.6 12.6
1,000,000 13.0 12.6
2,000,000 13.4 12.4
5,000,000 13.6 12.2
</code></pre>
<p><a href="https://i.stack.imgur.com/JqXEY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JqXEY.png" alt="enter image description here"></a></p>
|
<p>The garbage collector explains the bulk of it. The rest could be fluctuation based on other programs running on your machine.
How about turning most things off, running the bare minimum, and testing that? Since you are using datetime (which is the actual wall-clock time passed), the measurement must be picking up any processor context switches as well.</p>
<p>You could also try running this while having it pinned to a processor using a unix call; that might help smooth it out further. On Ubuntu it can be done like this: <a href="https://askubuntu.com/a/483827">https://askubuntu.com/a/483827</a></p>
<p>For Windows, processor affinity can be set as described here: <a href="http://www.addictivetips.com/windows-tips/how-to-set-processor-affinity-to-an-application-in-windows/" rel="nofollow noreferrer">http://www.addictivetips.com/windows-tips/how-to-set-processor-affinity-to-an-application-in-windows/</a></p>
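<p>A minimal sketch of a measurement loop with the collector disabled (my own illustration, not the OP's benchmark code; the array size and repeat count are arbitrary):</p>
<pre><code>import gc
import time

import numpy as np

a = np.random.rand(100000)
b = np.random.rand(100000)

gc.disable()  # keep collector pauses out of the measurement
t0 = time.perf_counter()
for _ in range(100):
    a * b
elapsed = time.perf_counter() - t0
gc.enable()

print(elapsed / 100)
</code></pre>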
|
python|arrays|numpy
| 3
|
377,008
| 5,490,723
|
What alternatives are there to numpy on Google App Engine?
|
<p>What alternatives can you recommend to numpy to use on the Google App Engine?</p>
<p>Specifically I'm interested in matrix manipulations on large matrices. </p>
|
<p>The Python 2.7 runtime <a href="https://developers.google.com/appengine/docs/python/tools/libraries27" rel="nofollow noreferrer">includes NumPy 1.6.1</a>.</p>
|
python|google-app-engine|numpy
| 12
|
377,009
| 53,238,019
|
How do I create a debug build of a recent Tensorflow version with CUDA Support?
|
<p>I have tried repeatedly to create a debug build for a recent version of TensorFlow, using the official docker images (latest-cuda-devel-py3 -> r1.12.0), but nothing seems to work. Has someone recently created a successful debug build for TensorFlow (>= r1.11.0) who can share their approach?</p>
<p>This is what I tried so far.</p>
<p>I basically tried to follow the instructions at <a href="https://www.tensorflow.org/install/source" rel="nofollow noreferrer">https://www.tensorflow.org/install/source</a>, but tried to modify them to generate a debug build. Nothing I tried resulted in a successful build.</p>
<p>The host system is a Linux x86-64 machine with lots of RAM (e.g. 512 GB of RAM -> DGX-1). The CUDA version within the docker image is CUDA 9.0. The "latest" TensorFlow version inside the docker image is r1.12.0.</p>
<p>In order to get any cuda-build working, I needed to use "nvidia-docker", otherwise I get a linker error with "libcuda.so.1".</p>
<p>I started like this:</p>
<pre><code>nvidia-docker pull tensorflow/tensorflow:latest-devel-gpu-py3
nvidia-docker run --runtime=nvidia -it -w /tensorflow -v $PWD:/mnt -e HOST_PERMS="$(id -u):$(id -g)" \
tensorflow/tensorflow:latest-devel-gpu-py3 bash
</code></pre>
<p>Then I tried to configure the project using</p>
<pre><code>cd /tensorflow
./configure
</code></pre>
<p>I tried various configs. I tried keeping all values at their defaults. I tried enabling only the parts which I need. I tried not running ./configure at all. I pointed it to my own cuda-9.0 and tensorrt installation. But not running ./configure at all (in the docker image) seems to produce the best results (e.g. I can do optimized builds successfully with the least effort).</p>
<p>If I build it using the exact official build instructions, i.e. creating an <strong>optimized/non-debug</strong> build, everything works as expected. So running the following seems to succeed.</p>
<pre><code>bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>The same is true if I run the following, which includes debug info but does not turn off optimization (i.e. I cannot really use this for debugging purposes).</p>
<pre><code>bazel build --config cuda --strip=never -c opt --copt="-ggdb" //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>But nothing that disables optimizations seems to work. If I run the following (with or without the --strip=never flag)</p>
<pre><code>bazel build --config cuda --strip=never -c dbg //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>I arrive at the following error:</p>
<blockquote>
<p>INFO: From Compiling
tensorflow/contrib/framework/kernels/zero_initializer_op_gpu.cu.cc:
external/com_google_absl/absl/strings/string_view.h(496): error:
constexpr function return is non-constant</p>
</blockquote>
<p>Which can be resolved by defining -DNDEBUG (see <a href="https://stackoverflow.com/questions/52665441/nvcc-error-string-view-h-constexpr-function-return-is-non-constant">nvcc error: string_view.h: constexpr function return is non-constant</a> ). </p>
<p>But If I run the following:</p>
<pre><code>bazel build --config cuda --strip=never -c dbg --copt="-DNDEBUG" //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>I get these linking errors at the final step of the build:</p>
<pre><code>ERROR: /tensorflow/python/BUILD:3865:1: Linking of rule '//tensorflow/python:_pywrap_tensorflow_internal.so' failed (Exit 1)
/usr/lib/gcc/x86_64-linux-gnu/5/../../../x86_64-linux-gnu/crti.o: In function `_init':
(.init+0x7): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against undefined symbol `__gmon_start__'
/usr/lib/gcc/x86_64-linux-gnu/5/crtbeginS.o: In function `deregister_tm_clones':
crtstuff.c:(.text+0x3): relocation truncated to fit: R_X86_64_PC32 against `.tm_clone_table'
crtstuff.c:(.text+0xa): relocation truncated to fit: R_X86_64_PC32 against symbol `__TMC_END__' defined in .nvFatBinSegment section in bazel-out/k8-dbg/bin/tensorflow/python/_pywrap_tensorflow_internal.so
crtstuff.c:(.text+0x1e): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against undefined symbol `_ITM_deregisterTMCloneTable'
/usr/lib/gcc/x86_64-linux-gnu/5/crtbeginS.o: In function `register_tm_clones':
crtstuff.c:(.text+0x43): relocation truncated to fit: R_X86_64_PC32 against `.tm_clone_table'
crtstuff.c:(.text+0x4a): relocation truncated to fit: R_X86_64_PC32 against symbol `__TMC_END__' defined in .nvFatBinSegment section in bazel-out/k8-dbg/bin/tensorflow/python/_pywrap_tensorflow_internal.so
crtstuff.c:(.text+0x6b): relocation truncated to fit: R_X86_64_REX_GOTPCRELX against undefined symbol `_ITM_registerTMCloneTable'
/usr/lib/gcc/x86_64-linux-gnu/5/crtbeginS.o: In function `__do_global_dtors_aux':
crtstuff.c:(.text+0x92): relocation truncated to fit: R_X86_64_PC32 against `.bss'
crtstuff.c:(.text+0x9c): relocation truncated to fit: R_X86_64_GOTPCREL against symbol `__cxa_finalize@@GLIBC_2.2.5' defined in .text section in /lib/x86_64-linux-gnu/libc.so.6
crtstuff.c:(.text+0xaa): relocation truncated to fit: R_X86_64_PC32 against symbol `__dso_handle' defined in .data.rel.local section in /usr/lib/gcc/x86_64-linux-gnu/5/crtbeginS.o
crtstuff.c:(.text+0xbb): additional relocation overflows omitted from the output
bazel-out/k8-dbg/bin/tensorflow/python/_pywrap_tensorflow_internal.so: PC-relative offset overflow in GOT PLT entry for `_ZNK5Eigen10TensorBaseINS_9TensorMapINS_6TensorIKjLi1ELi1EiEELi16ENS_11MakePointerEEELi0EE9unaryExprINS_8internal11scalar_leftIjjN10tensorflow7functor14right_shift_opIjEEEEEEKNS_18TensorCwiseUnaryOpIT_KS6_EERKSH_'
collect2: error: ld returned 1 exit status
Target //tensorflow/tools/pip_package:build_pip_package failed to build
</code></pre>
<p>I hoped to be able to solve that by doing a monolithic build. So I tried that, and got essentially the same error.</p>
<pre><code>bazel build --config cuda -c dbg --config=monolithic --copt="-DNDEBUG" //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>I also tried the approaches from <a href="https://stackoverflow.com/questions/40520146/tensorflow-doesnt-build-with-debug-mode">TensorFlow doesnt build with debug mode</a> and several other variants I found by extensive googling. I'm running out of options.</p>
<p>I'd take any TensorFlow version from 1.11 onwards, including (working) nightly builds. It just needs to work with CUDA 9 on x86 Linux, include debug symbols, and have optimizations disabled.</p>
<p>Thank you very much in advance.</p>
|
<p>Just in case someone else stumbles over this problem. I finally got it to compile, using the following command:</p>
<pre><code>bazel build --config cuda --strip=never --copt="-DNDEBUG" --copt="-march=native" --copt="-Og" --copt="-g3" --copt="-mcmodel=medium" --copt="-fPIC" //tensorflow/tools/pip_package:build_pip_package
</code></pre>
<p>After that, installation is a bit of a hassle, since the wheel cannot be built anymore. But the TensorFlow build can be installed anyway.</p>
<p>When building the wheel via</p>
<pre><code>./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
</code></pre>
<p>The process fails with an error which seems to be a problem with python's builtin zip compression library (i.e. it cannot compress the resulting archive, since it's too large). </p>
<p>It's important to run it anyway, since it only fails at the final step (archiving). When running build_pip_package, it prints to the console right at the start that it's building the package in a temporary directory (say, /tmp/Shjwejweu); the contents of that temp directory can be used to install the TF debug version. Simply copy it to the target machine, make sure any old tensorflow package has been removed (e.g. pip uninstall tensorflow), and run within it:</p>
<pre><code>python setup.py install
</code></pre>
<p>But be careful to actively uninstall the "tensorflow" package first, otherwise you can end up with two simultaneously installed tensorflow versions.</p>
|
c++|tensorflow|build|bazel|debug-symbols
| 7
|
377,010
| 53,225,011
|
Reshape Pandas dataframe by specific string
|
<p>I have a csv dataset that looks like this:</p>
<pre><code>###12345
LABEL text
LABEL text
###12213
LABEL text
LABEL text
</code></pre>
<p>I want to transform it to that shape</p>
<pre><code>12345 LABEL text
12345 LABEL text
12213 LABEL text
</code></pre>
<p>My first approach was to filter out the lines like this</p>
<pre><code>#df['label'].str.contains("###", na=False)
</code></pre>
<p>but I had no success rearranging it as an index.</p>
<p>Could you help me on this?
Thanks!</p>
|
<p>Use:</p>
<pre><code>print (df)
label
0 ###12345
1 LABEL text
2 LABEL text
3 ###12213
4 LABEL text
5 LABEL text
</code></pre>
<hr>
<pre><code>#boolean mask
m = df['label'].str.contains("###", na=False)
#helper column: replace non-### values with NaN, then forward-fill the last non-NaN value
df['new'] = df['label'].where(m).ffill()
#remove rows with same values
df = df[df['label'] != df['new']].copy()
#extract new column and add to original
df['label'] = df.pop('new').str.lstrip('#') + ' ' + df['label']
print (df)
label
1 12345 LABEL text
2 12345 LABEL text
4 12213 LABEL text
5 12213 LABEL text
</code></pre>
<hr>
<pre><code>print (df)
label value
0 ###12345 NaN
1 LABEL text
2 LABEL text
3 ###12213 NaN
4 LABEL text
5 LABEL text
m = df['label'].str.contains("###", na=False)
df['new'] = df['label'].where(m).ffill()
df = df[df['label'] != df['new']].copy()
df['label'] = df.pop('new').str.lstrip('#') + ' ' + df['label']
print (df)
label value
1 12345 LABEL text
2 12345 LABEL text
4 12213 LABEL text
5 12213 LABEL text
</code></pre>
|
python|pandas
| 0
|
377,011
| 53,225,258
|
Numpy ignoring frames of TIF file when converting from PIL
|
<p>I have a 3-dimensional image saved as a multi-page TIF file. I tried reading it in using PIL(low) and it detected the correct number of frames, but when I convert it to numpy it ignores the frames and only converts the single page/layer.</p>
<pre><code>from PIL import Image
import numpy as np
pil_ = Image.open(path)
pil_.size # this outputs (1024, 512)
pil_.n_frames # this outputs the correct number of frames i.e. 21
num = np.array(pil_)
num.shape # this outputs (512, 1024)
</code></pre>
<p>Shouldn't the numpy array be a 3D array? How do I convert it so that the frames are also considered?</p>
<p>Thanks</p>
|
<p>There's a thread here (<a href="https://mail.python.org/pipermail/python-list/2007-May/419217.html" rel="nofollow noreferrer">https://mail.python.org/pipermail/python-list/2007-May/419217.html</a>) which suggests that you may be able to manually seek through the frames and assign each one into the 3rd dimension of your numpy array.</p>
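<p>A minimal sketch of that manual-seek approach (assuming <code>path</code> as in the question):</p>
<pre><code>from PIL import Image
import numpy as np

pil_ = Image.open(path)
frames = []
for i in range(pil_.n_frames):
    pil_.seek(i)                  # jump to frame i of the multi-page TIF
    frames.append(np.array(pil_))
stack = np.stack(frames)          # shape: (n_frames, height, width)
</code></pre>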
|
python|numpy|python-imaging-library
| 1
|
377,012
| 53,330,074
|
Unexpected result in calculating payments
|
<p>I have a matrix of numbers that look like this:</p>
<pre><code>[[ 0. 771.98 0. ..., 771.98 0. 1543.96]
[ 1320.83 4782.33 1320.83 ..., 1954.45 0. 1954.45]
[ 2043.61 0. 4087.22 ..., 4662.3 2907.82 1549.53]
...,
[ 427.6 0. 427.6 ..., 427.6 0. 427.6 ]
[ 868.58 1737.16 0. ..., 868.58 868.58 868.58]
[ 0. 1590.07 0. ..., 787.75 0. 0. ]]
</code></pre>
<p>I also have a vector of numbers that look like this:</p>
<pre><code>0 771.98
1 1320.83
2 2043.61
3 736.03
4 948.03
5 1838.70
...
</code></pre>
<p>Now I need to take each row and divide it by the vector. In other words, take <code>row1 = [ 0. 771.98 0. ..., 771.98 0. 1543.96]</code> and divide it by the first element of the vector, <code>771.98</code>, which should yield this:</p>
<pre><code>[[ 0. 1. 0. 1. 1. 1. 0. 5. 1. 0. 2.]]
</code></pre>
<p>I tried this:</p>
<pre><code>payment = []
index = 0
for i in range(len(cpi)):
    payment = cf[:i+1] / cpi[i]
    print(payment[:1])
</code></pre>
<p>But I get this:</p>
<pre><code>[[ 0. 1.60983442 0. 1.60983442 1.60983442 1.60983442
0. 8.04917212 1.60983442 0. 3.21966885]]
</code></pre>
<p>Any idea how to fix this?</p>
<p>Following the answer provided I tried both suggestions. For the first suggestion I get this error:</p>
<pre><code>ValueError Traceback (most recent call last)
<ipython-input-15-3e5833506bde> in <module>()
3 index = 0
4 for i in range(len(cpi)):
----> 5 payment += cf[:i+1] / cpi[i]
6 print(payment)
7 # payment = np.divide(cf.T, cpi).T
ValueError: operands could not be broadcast together with shapes (0,) (1,11)
</code></pre>
<p>For the second suggestion I tried this: </p>
<pre><code>payment = np.divide(cf.T, cpi).T
print(payment)
</code></pre>
<p>and I received this error:</p>
<pre><code>Exception Traceback (most recent call last)
<ipython-input-16-f2ff3fa5409d> in <module>()
4 # payment += cf[:i+1] / cpi[i]
5 # print(payment)
----> 6 payment = np.divide(cf.T, cpi).T
7 print(payment)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py in __array_wrap__(self, result, context)
480 """
481 return self._constructor(result, index=self.index,
--> 482 copy=False).__finalize__(self)
483
484 def __array_prepare__(self, result, context=None):
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py in __init__(self, data, index, dtype, name, copy, fastpath)
246 else:
247 data = _sanitize_array(data, index, dtype, copy,
--> 248 raise_cast_failure=True)
249
250 data = SingleBlockManager(data, index, fastpath=True)
~\AppData\Local\Continuum\anaconda3\lib\site-packages\pandas\core\series.py in _sanitize_array(data, index, dtype, copy, raise_cast_failure)
3025 elif subarr.ndim > 1:
3026 if isinstance(data, np.ndarray):
-> 3027 raise Exception('Data must be 1-dimensional')
3028 else:
3029 subarr = _asarray_tuplesafe(data, dtype=dtype)
Exception: Data must be 1-dimensional
</code></pre>
|
<p>You got this result because you are re-assigning your <code>payment</code> in each loop. Try changing <code>payment = cf[:i+1] / cpi[i]</code> to <code>payment += cf[:i+1] / cpi[i]</code>.</p>
<p>As you added numpy to the tags I think an easier way would be using numpy:</p>
<pre><code>import numpy as np
a = np.arange(9).reshape(-1,3)
# [[0 1 2]
# [3 4 5]
# [6 7 8]]
b = np.arange(3) + 1
# [1 2 3]
print(np.divide(a.T, b).T)
# [[0. 1. 2. ]
# [1.5 2. 2.5 ]
# [2. 2.33333333 2.66666667]]
</code></pre>
|
python|numpy
| 1
|
377,013
| 53,345,594
|
Groupby first two earliest dates, then average time between first two dates - pandas
|
<p>I'm hoping to group by users and find the first two uploads. I've figured out how to get the first date via minimum, but I'm having trouble getting that second upload date. Then I would like to get the average time between the two upload dates across all users.</p>
<p>df:</p>
<pre><code>Date_Uploaded User_ID Display_Status
2018-10-27 abc123 Cleared
2018-10-28 abc123 Cleared
2018-10-29 abc123 Pending
2018-09-21 abc123 Pending
2018-08-24 efg123 Pending
2018-08-01 efg123 Pending
2018-07-25 efg123 Pending
</code></pre>
|
<p>Using <code>sort_values</code> + <code>head</code></p>
<pre><code>df.sort_values('Date_Uploaded').groupby('User_ID').head(2)
Out[152]:
Date_Uploaded User_ID Display_Status
6 2018-07-25 efg123 Pending
5 2018-08-01 efg123 Pending
3 2018-09-21 abc123 Pending
0 2018-10-27 abc123 Cleared
</code></pre>
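<p>To then get the average time between the first two uploads (a sketch, assuming <code>Date_Uploaded</code> is parsed as datetime):</p>
<pre><code>import pandas as pd

df['Date_Uploaded'] = pd.to_datetime(df['Date_Uploaded'])
first_two = df.sort_values('Date_Uploaded').groupby('User_ID').head(2)
# Gap between the first and second upload per user, then the overall mean.
gaps = first_two.groupby('User_ID')['Date_Uploaded'].agg(lambda s: s.iloc[-1] - s.iloc[0])
print(gaps.mean())
</code></pre>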
|
python|pandas
| 2
|
377,014
| 52,933,379
|
Calculate days between 2 datetime columns in dask dataframe
|
<p>I have a dask dataframe that contains two columns, which is string format, like this</p>
<pre><code>start_date end_date
2018-09-01 2018-10-01
2018-09-02 2018-09-22
...
</code></pre>
<p>I would like to calculate the number of days between the two columns. If it is a pandas dataframe, I can do:</p>
<pre><code>df["num_days"] = (df["end_day"]-df["start_date"]).apply(lambda s:s.total_seconds()/24/60/60)
</code></pre>
<p>But in a dask dataframe that does not seem to work. Is there any way to calculate the elapsed days between two columns in this case?</p>
<p>Thanks</p>
|
<p><a href="http://docs.dask.org/en/latest/dataframe.html" rel="nofollow noreferrer"><code>dask.dataframe</code></a> supports a useful subset of the Pandas API, including <a href="http://docs.dask.org/en/latest/dataframe-api.html#dask.dataframe.Series.dt" rel="nofollow noreferrer"><code>Series.dt</code></a> methods. Therefore, you can use this functionality directly:</p>
<pre><code>import dask.dataframe as dd
df = dd.read_csv(r'file.csv', delim_whitespace=True,
parse_dates=['start_date', 'end_date'])
df['days'] = (df['end_date'] - df['start_date']).dt.days
print(df.compute())
start_date end_date days
0 2018-09-01 2018-10-01 30
1 2018-09-02 2018-09-22 20
</code></pre>
|
python|pandas|datetime|dataframe|dask
| 3
|
377,015
| 53,189,712
|
Generating binary segmentation file for lane detection in Python/Tensorflow (Tusimple Lanenet Dataset)
|
<p>I was using the code from <a href="https://github.com/MaybeShewill-CV/lanenet-lane-detection" rel="nofollow noreferrer">https://github.com/MaybeShewill-CV/lanenet-lane-detection</a>, which uses deep learning to detect road lane lines.</p>
<p>I successfully tested the model. Now I want to retrain the model on my own data.</p>
<p>The training data consisted of three parts: the original image, the binary segmentation file and the instance segmentation file. Please refer to both <code>gt_image_binary</code> and <code>gt_image_instance</code> folders inside the <code>/data/training_data_example</code> folder from the repo.</p>
<p>The binary segmentation file uses 255 to represent the lane field and 0 for the rest. The instance file uses a different pixel value for each lane field and 0 for the rest.</p>
<p>My question is how do I generate these two labels (binary and instance segmentation files)?</p>
<p>The author said that you just need to follow the guidelines in the readme file for the Tusimple Lanenet Dataset found here: <a href="https://github.com/TuSimple/tusimple-benchmark/blob/master/doc/lane_detection/readme.md" rel="nofollow noreferrer">https://github.com/TuSimple/tusimple-benchmark/blob/master/doc/lane_detection/readme.md</a></p>
<p>And from this, it states that you can generate these files using this format in json file:</p>
<pre><code>{
'raw_file': str. 20th frame file path in a clip.
'lanes': list. A list of lanes. For each list of one lane, the elements are width values on image.
'h_samples': list. A list of height values corresponding to the 'lanes', which means len(h_samples) == len(lanes[i])
}
</code></pre>
<p>Where each json line in 'label_data_(date).json' is the label data for the frame.</p>
<p>e.g.</p>
<pre><code>{
"lanes": [
[-2, -2, -2, -2, 632, 625, 617, 609, 601, 594, 586, 578, 570, 563, 555, 547, 539, 532, 524, 516, 508, 501, 493, 485, 477, 469, 462, 454, 446, 438, 431, 423, 415, 407, 400, 392, 384, 376, 369, 361, 353, 345, 338, 330, 322, 314, 307, 299],
[-2, -2, -2, -2, 719, 734, 748, 762, 777, 791, 805, 820, 834, 848, 863, 877, 891, 906, 920, 934, 949, 963, 978, 992, 1006, 1021, 1035, 1049, 1064, 1078, 1092, 1107, 1121, 1135, 1150, 1164, 1178, 1193, 1207, 1221, 1236, 1250, 1265, -2, -2, -2, -2, -2],
[-2, -2, -2, -2, -2, 532, 503, 474, 445, 416, 387, 358, 329, 300, 271, 241, 212, 183, 154, 125, 96, 67, 38, 9, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2],
[-2, -2, -2, 781, 822, 862, 903, 944, 984, 1025, 1066, 1107, 1147, 1188, 1229, 1269, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2, -2]
],
"h_samples": [240, 250, 260, 270, 280, 290, 300, 310, 320, 330, 340, 350, 360, 370, 380, 390, 400, 410, 420, 430, 440, 450, 460, 470, 480, 490, 500, 510, 520, 530, 540, 550, 560, 570, 580, 590, 600, 610, 620, 630, 640, 650, 660, 670, 680, 690, 700, 710],
"raw_file": "path_to_clip"
}
</code></pre>
<p>Then you will just overlay the points on the original image (using cv2.polylines?).</p>
<p>How do I do this?</p>
<p>I tried to create both binary and instance segmentation files by just drawing a line (in paint, yes) on a black background. Then I got a shape error, so I converted them to grayscale. I also checked the original files which are also in grayscale so I followed this image format.</p>
<p>But again, I got error doing this.</p>
<p>What is the best way to generate the binary and instance segmentation files?</p>
|
<p>I will just drop an answer here (Don't know if it's good to answer an old question).</p>
<pre><code>import json

import cv2
import numpy as np

label = '0601'
label_filename = 'label_data_%s.json' % label
clips = [json.loads(line) for line in open(label_filename).readlines()]
for clip in clips:
    lanes = clip['lanes']
    filepath = clip['raw_file']
    ysamples = clip['h_samples']
    lanes = [[(x, y) for (x, y) in zip(lane, ysamples) if x >= 0] for lane in lanes]
    raw_image = cv2.imread(filepath)
    label_image = np.zeros(raw_image.shape[:2], dtype=np.uint8)
    for lane in lanes:
        cv2.polylines(label_image, np.int32([lane]), isClosed=False, color=(255, 255, 255), thickness=5)
    cv2.imshow("win1", raw_image)
    cv2.imshow("win2", label_image)
    cv2.waitKey(0)
    break
</code></pre>
<p>The Tusimple dataset uses something like scanlines to form the markings for lanes (<a href="https://raw.githubusercontent.com/TuSimple/tusimple-benchmark/master/doc/lane_detection/assets/examples/lane_example.jpg" rel="nofollow noreferrer">this</a>). <code>h_samples</code> are the y coordinates of the scanlines. So you need to combine <code>h_samples</code> and <code>lanes</code> (which are the x coordinates) to form a lane marking. Where there is no lane marking, the <code>x</code> coordinate is negative (-2 in the sample above), which is why only <code>x >= 0</code> points are kept. That's what this line does:</p>
<pre><code>lanes = [[(x, y) for (x, y) in zip(lane, ysamples) if x >= 0] for lane in lanes]
</code></pre>
<p>For instance segmentation, use a different color for each lane. You may refer to their data provider code to address the shape errors.</p>
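<p>A small sketch of that per-lane coloring for the instance label (my own illustration, reusing the variables from the loop above; the pixel values are arbitrary):</p>
<pre><code># Give each lane its own pixel value instead of a uniform 255.
instance_image = np.zeros(raw_image.shape[:2], dtype=np.uint8)
for idx, lane in enumerate(lanes, start=1):
    cv2.polylines(instance_image, np.int32([lane]), isClosed=False,
                  color=idx * 50, thickness=5)
</code></pre>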
|
python|opencv|tensorflow|image-processing|computer-vision
| 0
|
377,016
| 53,251,860
|
Best way to go towards an index in numpy, with wrap
|
<p>Let's say I have the 2D array below:</p>
<pre><code>[[ 0 0 0 0 0 0 ]
 [ 0 0 0 0 0 0 ]
 [ 0 0 0 0 0 0 ]
 [ 0 0 0 0 0 2 ]
 [ 0 1 0 0 0 0 ]
 [ 0 0 0 0 0 0 ]]
</code></pre>
<p>I would like to get the direction from where '1' (index 4,1) is to '2' (index 3,5). Assuming directions are only up, down, left, right. Thus no diagonal movement. </p>
<p>One way to get the directions:</p>
<pre><code>"right" if destination.x > start.x else "left" if target.x < start.x else None
"down" if destination.y > start.y else "up" if destination.y < start.y else None
</code></pre>
<p>So for this example, we can move toward '2', the destination, by going either "up" or "right". That of course is just one step; once you have moved, you can apply the same logic again to get closer to the destination.</p>
<p>The problem with this logic is that it doesn't take wrapping into account. With this logic it will take 5 steps to get to the destination. There is a shorter way, going left and up, that reaches the destination in just 3 steps because of the wrap.</p>
<p>I was thinking of generating another array where the start is at the middle of the array and performing the same logic. The problem is that if the array has even dimensions (like this 6x6 one), I need to pad it to get a middle. For example:</p>
<pre><code>[[ 0 0 0 0 0 0 0]
 [ 0 0 0 0 0 0 0]
 [ 0 2 0 0 0 0 0]
 [ 0 0 0 1 0 0 0]
 [ 0 0 0 0 0 0 0]
 [ 0 0 0 0 0 0 0]
 [ 0 0 0 0 0 0 0]]
</code></pre>
<p>Here the array is now 7x7. I believe there is a simpler way to get the answer without this extra step, but I can't think of it.</p>
|
<p>Well, there is a quite simple formula for computing the distance in the case of periodic boundary conditions. Below I consider only periodic b.c. on the x-axis:</p>
<pre><code>import numpy as np

# periodic boundary condition for the x-axis only
def steps(start, dest, L_x):
    x_start = start[1]
    y_start = start[0]
    x_dest = dest[1]
    y_dest = dest[0]
    dx = x_dest - x_start
    if np.abs(dx) <= L_x/2:
        steps_x = x_dest - x_start
    else:
        if dx > 0:
            steps_x = (x_dest - L_x) - x_start
        else:
            steps_x = (x_dest + L_x) - x_start
    steps_y = y_dest - y_start
    return steps_x, steps_y
</code></pre>
<p>Example:</p>
<pre><code>grid = np.array([[0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 2],
                 [0, 1, 0, 0, 0, 0],
                 [0, 0, 0, 0, 0, 0]])
L_x = grid.shape[1]
start = (4, 1)  # (y, x) or (i, j)
dest = (3, 5)
steps_x, steps_y = steps(start, dest, L_x)
dir_x = 'left' if steps_x < 0 else 'right'
dir_y = 'up' if steps_y < 0 else 'down'
print(abs(steps_x), dir_x, ',', abs(steps_y), dir_y)
Out: 2 left , 1 up
</code></pre>
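<p>For reference, the same wrap-aware displacement can be written with modular arithmetic, one line per axis, which also extends directly to a periodic y-axis (a sketch, not part of the original answer):</p>
<pre><code>def wrapped_steps(start, dest, shape):
    """Smallest signed displacement on a torus of the given (rows, cols) shape."""
    dy = (dest[0] - start[0] + shape[0] // 2) % shape[0] - shape[0] // 2
    dx = (dest[1] - start[1] + shape[1] // 2) % shape[1] - shape[1] // 2
    return dx, dy

print(wrapped_steps((4, 1), (3, 5), grid.shape))  # (-2, -1): 2 left, 1 up
</code></pre>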
|
python|arrays|numpy|matrix
| 0
|
377,017
| 53,286,656
|
Write python simulation output to a matrix
|
<p>I am trying to take the sum of four columns in a pandas dataframe (whose values are determined by random numbers) and simulate this process 1000 times. I want this to give me 1000 rows, each with a different result for each column.</p>
<p>I essentially want to say something like the following:</p>
<pre><code>for i in range(1000):
    np.sum(df['A']) = iterations[i, j]
</code></pre>
<p>where <code>df['A']</code> is one of the columns I want to sum for each iteration. That is: for each iteration, sum the column values and place the result in a new dataframe called <code>iterations</code>, specifying where the result should go. I understand the code doesn't make sense, but it describes what I am trying to achieve. To be clear, I do <strong>not</strong> want to write the result to a csv or txt file.</p>
<p>Thank you in advance for your advice. </p>
|
<p>To take the sum of four columns of random numbers and repeat the process 1000 times, collecting one row of results per simulation, we can write:</p>
<pre><code>import numpy as np
import pandas as pd
from tqdm import tqdm

df_output = []
for i in tqdm(range(1000)):
    sample_matrix = np.random.rand(60, 4)
    df = pd.DataFrame(sample_matrix)
    df.columns = ['V_' + str(col) for col in df.columns]
    df_output.append(np.array(df.sum()))
df_output
</code></pre>
<p><code>df_output</code> is a list of 1000 arrays, one per simulation; wrapping it in <code>pd.DataFrame(df_output)</code> gives the 1000-row matrix (one row per simulation, one column per summed column).</p>
<p><a href="https://i.stack.imgur.com/zKHzN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zKHzN.png" alt="enter image description here"></a></p>
|
pandas|numpy
| 1
|
377,018
| 53,223,103
|
Pandas dataframe columns after groupby
|
<p>Here I solved one of my earlier problems:
<a href="https://stackoverflow.com/questions/53133174/pandas-duplicates-groupby?noredirect=1#comment93160647_53133174">Pandas duplicates groupby</a></p>
<p>Now, after using the command:</p>
<pre><code>df.groupby('Names').agg({'Column1':'sum', 'Column2':'sum','Column3':'min'})
</code></pre>
<p>I have this DataFrame (I create an example):</p>
<pre><code>Names Column1 Column2 Column3
Bob 3 3 2011
John 3 3 2005
Jonh 1 2 2016
Pier 1 1 2003
</code></pre>
<p>But if I use the command df.columns, 'Names' is not displayed anymore.
What can I do to keep the column 'Names' as it was before using groupby?</p>
|
<p>Specify parameter <code>as_index=False</code> while grouping:</p>
<pre><code>df.groupby('Names', as_index=False).agg(
{'Column1':'sum', 'Column2':'sum','Column3':'min'})
Names Column1 Column2 Column3
0 Bob 3 3 2011
1 John 4 5 2005
2 Pier 1 1 2003
</code></pre>
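<p>Equivalently (a small aside), you can keep the original groupby and chain <code>.reset_index()</code> afterwards to move <code>Names</code> back from the index into a regular column:</p>
<pre><code>df.groupby('Names').agg(
    {'Column1': 'sum', 'Column2': 'sum', 'Column3': 'min'}).reset_index()
</code></pre>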
|
python|pandas|dataframe|group-by|pandas-groupby
| 2
|
377,019
| 53,264,042
|
Scipy ValueError: object too deep for desired array with optimize.leastsq
|
<p>I am trying to fit my 3D data with the linear 3D function <code>Z = a*x + b*y + c</code>. I import the data with pandas:</p>
<pre><code>dataframe = pd.read_csv('3d_data.csv',names=['x','y','z'],header=0)
print(dataframe)
x y z
0 52.830740 7.812507 0.000000
1 44.647931 61.031381 8.827942
2 38.725318 0.707952 52.857968
3 0.000000 31.026271 17.743218
4 57.137854 51.291656 61.546131
5 46.341341 3.394429 26.462564
6 3.440893 46.333864 70.440650
</code></pre>
<p>I have done some digging and found that the best way to fit 3D data it is to use optimize from scipy with the model equation and residual function:</p>
<pre><code>def model_calc(parameter, x, y):
    a, b, c = parameter
    return a*x + b*y + c

def residual(parameter, data, x, y):
    res = []
    for _x in x:
        for _y in y:
            res.append(data-model_calc(parameter,x,y))
    return res
</code></pre>
<p>I fit the data with:</p>
<pre><code>params0 = [0.1, -0.2,1.]
result = scipy.optimize.leastsq(residual,params0,(dataframe['z'],dataframe['x'],dataframe['y']))
fittedParams = result[0]
</code></pre>
<p>But the result is a ValueError: </p>
<pre><code>ValueError: object too deep for desired array [...]
minpack.error: Result from function call is not a proper array of floats.
</code></pre>
<p>I tried changing the residual function to return only a single value or a single np.array, but it didn't help. I don't know where the problem is, or whether the parameter search space is maybe too complex. I would be very grateful for some hints!</p>
|
<p>If you are fitting parameters to a function, you can use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html" rel="nofollow noreferrer">curve_fit</a>. Here's an implementation:</p>
<pre><code>from scipy.optimize import curve_fit

def model_calc(X, a, b, c):
    x, y = X
    return a*x + b*y + c

p0 = [0.1, -0.2, 1.]
# popt is the fit, pcov is the covariance matrix (see the docs)
popt, pcov = curve_fit(model_calc, (dataframe.x, dataframe.y), dataframe.z, p0)
</code></pre>
<p>Note that your syntax must be of the form f(X, a, b, c), where X can be a 2D vector (see <a href="https://stackoverflow.com/questions/28372597/python-curve-fit-with-multiple-independent-variables">this post</a>).</p>
<p><strong>(Another approach)</strong></p>
<p>If you know your fit is going to be linear, you can use <code>numpy.linalg.lstsq</code>. See <a href="https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.linalg.lstsq.html" rel="nofollow noreferrer">here</a>. Example solution:</p>
<pre><code>import numpy as np
from numpy.linalg import lstsq
A = np.vstack((dataframe.x, dataframe.y, np.ones_like(dataframe.y))).T
B = dataframe.z
a, b, c = lstsq(A, B)[0]
</code></pre>
|
python|numpy|scipy|data-fitting|function-fitting
| 1
|
377,020
| 52,906,275
|
Altair: Use of color scheme with log scales
|
<p>I'm new to Altair and I'm trying to make a heatmap with a log scale for color, and also select a non-default color scheme (the default scheme uses very little extent, and also I want light-to-dark colors). I find that I can easily get a log scale with <code>type=log</code>, but that once I do that, the <code>scheme=</code> parameter is then ignored. If I instead manually set the high and low colors with <code>range=</code>, that works fine.</p>
<p>I further found that if I explicitly set <code>type=</code> in any way, even explicitly setting <code>type='linear'</code>, which is the default, <code>scheme=</code> is then ignored. Is this a bug? If not, how can I understand the use of color schemes in a way which allows this to make sense? If I cannot directly use a scheme, how can I inspect the scheme and pull out its color values to reuse?</p>
<p>Here are some examples:</p>
<pre><code>import numpy as np
import pandas as pd
import altair as alt
# This question is about Altair - plotnine is only here for the example data
from plotnine.data import diamonds
# This works, and gives me the greenblue color scheme:
alt.Chart(diamonds).mark_rect().encode(
x=alt.X('carat',bin=True),
y=alt.Y('price',bin=True),
color=alt.Color('count()',scale=alt.Scale(scheme='greenblue'))
)
# This gives me a log scale, but now the greenblue scheme is gone:
alt.Chart(diamonds).mark_rect().encode(
x=alt.X('carat',bin=True),
y=alt.Y('price',bin=True),
color=alt.Color('count()',scale=alt.Scale(type='log',scheme='greenblue'))
)
# Direct specification of range works, but it is not exactly the same
# colors as greenblue. If this is the only way to do it, how do I open
# up the greenblue scheme and grab its colors?
alt.Chart(diamonds).mark_rect().encode(
x=alt.X('carat',bin=True),
y=alt.Y('price',bin=True),
color=alt.Color('count()',scale=alt.Scale(type='log',range=['palegreen','blue']))
)
</code></pre>
|
<p>I think this must have been a bug. I can't find the issue on GitHub where this behavior was fixed, but the code you have posted appears to be working as expected now. I am running altair version <code>'3.2.0'</code>.</p>
<pre><code>import numpy as np
import pandas as pd
import altair as alt
from plotnine.data import diamonds
# Added to alleviate the large dataset issues
alt.data_transformers.enable('json')
alt.Chart(diamonds).mark_rect().encode(
x=alt.X('carat',bin=True),
y=alt.Y('price',bin=True),
color=alt.Color('count()',scale=alt.Scale(scheme='greenblue'))
)
</code></pre>
<p><a href="https://i.stack.imgur.com/2X6Or.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2X6Or.png" alt="enter image description here"></a></p>
<pre class="lang-py prettyprint-override"><code>alt.Chart(diamonds).mark_rect().encode(
x=alt.X('carat',bin=True),
y=alt.Y('price',bin=True),
color=alt.Color('count()',scale=alt.Scale(type='log',scheme='greenblue'))
)
</code></pre>
<p><a href="https://i.stack.imgur.com/YThRr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/YThRr.png" alt="enter image description here"></a></p>
|
python|numpy|altair
| 6
|
377,021
| 53,291,987
|
Filtering Max 3 count using groupby in pandas
|
<p>I am working with a dataframe that contains 20K rows.
I created a sample dataframe as follows to replicate it.</p>
<pre><code>df = pd.DataFrame()
df['Team'] = ['A1', 'A1', 'A1', 'A2', 'A2', 'A2', 'B1', 'B1', 'B1', 'B2', 'B2', 'B2']
df['Competition'] = ['L1', 'L1', 'L1', 'L1', 'L1', 'L1', 'L2', 'L2', 'L2', 'L2', 'L2', 'L2']
df['Score_count'] = [2, 1, 3, 4, 7, 8, 1, 5, 8, 5, 7, 1]
</code></pre>
<p>I would like to keep the rows with the two maximum values of Score_count per group, using <code>groupby(['Competition','Team'])</code>.</p>
<p>I am able to keep the rows with the single maximum Score_count by using transform(max) as follows:</p>
<pre><code>idx = df.groupby(['Competition','Team'])['Score_count'].transform(max) == df['Score_count']
df = df[idx]
</code></pre>
<p>But what I want is to keep the n largest Score_count values (in this case the two largest) for the same groupby.</p>
<p>How can I do it?</p>
<p>Below is my expected output:</p>
<pre><code> Team Competition Score_count
0 A1 L1 3
1 A1 L1 2
2 A2 L1 8
3 A2 L1 7
4 B1 L2 8
5 B1 L2 5
6 B2 L2 7
7 B2 L2 5
</code></pre>
<p>You may also refer to the picture below for the expected output:</p>
<p><a href="https://i.stack.imgur.com/t3DFI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/t3DFI.png" alt="enter image description here"></a></p>
<p>Can anyone advise how to do it?
Thanks,</p>
<p>Zep</p>
|
<p><code>groupby</code> <em>Team</em> and <em>Competition</em> and then take the two largest values with <code>.nlargest</code>:</p>
<pre><code>df.groupby(['Team', 'Competition']).Score_count.nlargest(2).reset_index([0,1])
# Team Competition Score_count
#2 A1 L1 3
#0 A1 L1 2
#5 A2 L1 8
#4 A2 L1 7
#8 B1 L2 8
#7 B1 L2 5
#10 B2 L2 7
#9 B2 L2 5
</code></pre>
<p>To drop the original index:</p>
<pre><code>df.groupby(['Team', 'Competition']).Score_count.nlargest(2).reset_index([0,1]).reset_index(drop=True)
# Team Competition Score_count
#0 A1 L1 3
#1 A1 L1 2
#2 A2 L1 8
#3 A2 L1 7
#4 B1 L2 8
#5 B1 L2 5
#6 B2 L2 7
#7 B2 L2 5
</code></pre>
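<p>A sketch of an equivalent approach (not from the answer above) that keeps a flat index throughout: sort by the score first, then take the first two rows of each group with <code>GroupBy.head</code>:</p>
<pre><code>out = (df.sort_values('Score_count', ascending=False)
         .groupby(['Team', 'Competition'])
         .head(2)
         .sort_values(['Team', 'Competition'])
         .reset_index(drop=True))
</code></pre>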
|
python|pandas|filter|max
| 2
|
377,022
| 53,346,940
|
Using np.where to find element from sub-arrays
|
<p>I'm trying to find all elements in array X whose corresponding value in array Y matches, using np.where(); the condition in the where() call compares against a list (a), not a single element. Please see the following code:</p>
<pre><code>X = np.array([[0, 2], [2, 1], [1, 3], [5, 9], [6, 7], [4, 6]])
Y = np.array([1, 2, 3, 4, 4, 5])
a = [2, 3, 4]
matchedX = X[np.where(Y == a)]
</code></pre>
<p>I am expecting to get the result like this:</p>
<pre><code>array([[2, 1],
[1, 3],
[5, 9],
[6, 7]])
</code></pre>
<p>but I got different results:</p>
<pre><code>array([], shape=(0, 2), dtype=int64)
</code></pre>
<p>So I need an alternative solution that gives me the same elements when I do not know the values of a in advance. The line below gives exactly the results I want, but it requires knowing the values of a beforehand:</p>
<pre><code>matchedX = X[np.where((Y == 2) | (Y==3) | (Y==4))]
</code></pre>
|
<p>You can use the set functions of numpy:</p>
<pre><code>X[np.where(np.isin(Y, a))]
array([[2, 1],
[1, 3],
[5, 9],
[6, 7]])
</code></pre>
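<p>As a side note, the <code>np.where</code> call is not strictly necessary here; the boolean mask returned by <code>np.isin</code> can index <code>X</code> directly:</p>
<pre><code>X[np.isin(Y, a)]
</code></pre>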
|
python|numpy
| 1
|
377,023
| 52,922,690
|
Split n-dimensional array into 2D arrays along each axis
|
<p>So say I have an array of arbitrary dimensions (for now, we'll give it three dimensions).</p>
<pre><code>a=array([[[ 0, 1, 2, 3],
[ 4, 5, 6, 7]],
[[ 8, 9, 10, 11],
[12, 13, 14, 15]],
[[16, 17, 18, 19],
[20, 21, 22, 23]]])
</code></pre>
<p>What would be the simplest way to split this array into a list of 2-dimensional arrays, one per axis, i.e. </p>
<pre><code>new_array1=[[0, 1, 2, 3,], [4, 5, 6, 7]...[20, 21, 22, 23]]
new_array2=[[0, 4], [1, 5]...[19, 23]]
new_array3=[[0, 8, 16], [1, 9, 17]...[7, 15, 23]]
</code></pre>
<p>Is there an easy way to do this for an array of arbitrary dimension? </p>
|
<p>Try This:</p>
<pre><code>b = []
# flatten each sub-array along the first axis into one row per sub-array
for i in a:
    b.append([item for sublist in i for item in sublist])
</code></pre>
<p>Output:</p>
<pre><code>[[0, 1, 2, 3, 4, 5, 6, 7],
[8, 9, 10, 11, 12, 13, 14, 15],
[16, 17, 18, 19, 20, 21, 22, 23]]
</code></pre>
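<p>For the general case the question asks about (one 2D array per axis, reproducing all of <code>new_array1</code>, <code>new_array2</code> and <code>new_array3</code>), here is a sketch using <code>np.moveaxis</code> that works for any number of dimensions:</p>
<pre><code>import numpy as np

a = np.arange(24).reshape(3, 2, 4)

# For each axis, move it to the end and collapse the remaining axes,
# giving one 2D array whose rows run along that axis.
views = [np.moveaxis(a, ax, -1).reshape(-1, a.shape[ax]) for ax in range(a.ndim)]

# views[2] -> [[0, 1, 2, 3], [4, 5, 6, 7], ...]   (new_array1)
# views[1] -> [[0, 4], [1, 5], ...]               (new_array2)
# views[0] -> [[0, 8, 16], [1, 9, 17], ...]       (new_array3)
</code></pre>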
|
python|arrays|numpy
| 0
|
377,024
| 53,223,280
|
Predicting using adanet for binary classification problem
|
<p>I followed the tutorial of adanet:
<a href="https://github.com/tensorflow/adanet/tree/master/adanet/examples/tutorials" rel="nofollow noreferrer">https://github.com/tensorflow/adanet/tree/master/adanet/examples/tutorials</a>
and was able to apply adanet to my own binary classification problem.</p>
<p>But how can I predict using the trained model? I have very little knowledge of TensorFlow. Any help would be really appreciated.</p>
|
<p>You can immediately call <a href="https://www.tensorflow.org/api_docs/python/tf/estimator/Estimator?version=stable#predict" rel="nofollow noreferrer"><code>estimator.predict</code></a> on a new unlabeled example and get a prediction.</p>
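<p>As a minimal sketch (assuming the TF 1.x estimator API used in that tutorial, a single feature named <code>'x'</code>, and a hypothetical array <code>new_examples</code> of unlabeled inputs; adapt to your own input pipeline):</p>
<pre><code>import numpy as np
import tensorflow as tf

# Build an input_fn over the new, unlabeled examples.
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={'x': np.asarray(new_examples, dtype=np.float32)},  # hypothetical data
    num_epochs=1,
    shuffle=False)

# `estimator` is the adanet.Estimator you already trained.
for pred in estimator.predict(input_fn=predict_input_fn):
    print(pred)  # a dict of head outputs, e.g. probabilities / class ids
</code></pre>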
|
tensorflow|predict
| 1
|
377,025
| 52,995,453
|
How to plot training error , validation error and prediction accuracy over training progress in tensorflow?
|
<p>I am running vanilla RNN code on TensorFlow in Google Colab. I want to plot the training error, validation error and prediction accuracy over the course of training, without using TensorBoard. I am new to TensorFlow. Can anyone please guide me? Here is a part of my code: </p>
<pre><code> # Model predictions
cls_prediction = tf.argmax(output_logits, axis=1, name='predictions')
# Define the loss function, optimizer, and accuracy
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=output_logits), name='loss')
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate, name='Adam-op').minimize(loss)
correct_prediction = tf.equal(tf.argmax(output_logits, 1), tf.argmax(y, 1), name='correct_pred')
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32), name='accuracy')
init = tf.global_variables_initializer()
sess = tf.InteractiveSession()
sess.run(init)
global_step = 0
# Number of training iterations in each epoch
num_tr_iter = int(len(y_train) / batch_size)
for epoch in range(epochs):
print('Training epoch: {}'.format(epoch + 1))
x_train, y_train = randomize(x_train, y_train)
for iteration in range(num_tr_iter):
global_step += 1
start = iteration * batch_size
end = (iteration + 1) * batch_size
x_batch, y_batch = get_next_batch(x_train, y_train, start, end)
x_batch = x_batch.reshape((batch_size, timesteps, num_input))
# Run optimization op (backprop)
feed_dict_batch = {x: x_batch, y: y_batch}
sess.run(optimizer, feed_dict=feed_dict_batch)
if iteration % display_freq == 0:
# Calculate and display the batch loss and accuracy
loss_batch, acc_batch = sess.run([loss, accuracy],
feed_dict=feed_dict_batch)
print("iter {0:3d}:\t Loss={1:.2f},\tTraining Accuracy={2:.01%}".
format(iteration, loss_batch, acc_batch))
# Run validation after every epoch
feed_dict_valid = {x: x_valid[:1000].reshape((-1, timesteps, num_input)), y: y_valid[:1000]}
loss_valid, acc_valid = sess.run([loss, accuracy], feed_dict=feed_dict_valid)
print('---------------------------------------------------------')
print("Epoch: {0}, validation loss: {1:.2f}, validation accuracy: {2:.01%}".
format(epoch + 1, loss_valid, acc_valid))
print('---------------------------------------------------------')
</code></pre>
<p>What changes should I make in the code to get the plots?</p>
|
<p>One such way would be to store the values in a list, then use something like <a href="https://matplotlib.org/tutorials/introductory/pyplot.html" rel="nofollow noreferrer">matplotlib</a> to plot the values</p>
<p>Example code:</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.ylabel('some numbers')
plt.show()
</code></pre>
<p>will plot a straight line. In your case, you'd call <code>plt.plot(list_of_prediction_accuracy)</code> or whatever list you want to visualize</p>
|
python|tensorflow|matplotlib
| 1
|
377,026
| 53,292,680
|
Operating on histogram bins Python
|
<p>I am trying to find the median of values within a bin range generated by the <code>np.histogram</code> function. How would I select only the values within the bin range and operate on those specific values? Below is an example of my data and what I am trying to do:</p>
<pre><code>x = [0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
</code></pre>
<p>y values can have any sort of x value associated with them, for example:</p>
<pre><code>hist, bins = np.histogram(x)
hist = [129, 126, 94, 133, 179, 206, 142, 147, 90, 185]
bins = [0., 0.09999926, 0.19999853, 0.29999779, 0.39999706,
0.49999632, 0.59999559, 0.69999485, 0.79999412, 0.8999933,
0.99999265]
</code></pre>
<p>So, I am trying to find the median y value of the 129 values in the first bin generated, etc.</p>
|
<p>One way is with <code>pandas.cut()</code>:</p>
<pre><code>>>> import pandas as pd
>>> import numpy as np
>>> np.random.seed(444)
>>> x = np.random.randint(0, 25, size=100)
>>> _, bins = np.histogram(x)
>>> pd.Series(x).groupby(pd.cut(x, bins)).median()
(0.0, 2.4] 2.0
(2.4, 4.8] 3.0
(4.8, 7.2] 6.0
(7.2, 9.6] 8.5
(9.6, 12.0] 10.5
(12.0, 14.4] 13.0
(14.4, 16.8] 15.5
(16.8, 19.2] 18.0
(19.2, 21.6] 20.5
(21.6, 24.0] 23.0
dtype: float64
</code></pre>
<p>If you want to stay in NumPy, you might want to check out <code>np.digitize()</code>.</p>
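<p>A sketch of that NumPy-only route, computing a per-bin median of co-measured <code>y</code> values (which is what the question is ultimately after):</p>
<pre><code>import numpy as np

np.random.seed(444)
x = np.random.rand(1000)      # values that define the bins
y = np.random.randn(1000)     # values to take the median of
hist, bins = np.histogram(x)

idx = np.digitize(x, bins) - 1          # 0-based bin index for each x
idx = np.clip(idx, 0, len(bins) - 2)    # fold the right edge into the last bin
medians = [np.median(y[idx == i]) for i in range(len(bins) - 1)]
</code></pre>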
|
python|numpy|histogram|median
| 2
|
377,027
| 53,269,061
|
pandas dataframe sum date range of another DataFrame
|
<p>I have two dataframes. I want to sum an "amount" column in the 2nd, for each record in the first dataframe.</p>
<p>So for each row: </p>
<pre><code>df1.Date = sum(df2.amount WHERE df1.Date <= df2.Date AND df1.yearAgo >= df2.Date)
df1 = pd.DataFrame({'Date':['2018-10-31','2018-10-30','2018-10-29','2018-10-28'],'yearAgo':['2017-10-31','2017-10-30','2017-10-29','2017-10-28']})
df2 = pd.DataFrame({'Date':['2018-10-30','2018-7-30','2018-4-30','2018-1-30','2017-10-30'],'amount':[1.0,1.0,1.0,1.0,0.75]})
</code></pre>
<p>desired results:</p>
<pre><code>df1.Date yearToDateTotalAmount
2018-10-31 3.0
2018-10-30 4.75
2018-10-29 3.75
2018-10-28 3.75
</code></pre>
|
<p>IIUC, your expected output should have <code>4</code> in the first row.</p>
<p>You can achieve this very efficiently using <code>numpy</code>'s feature of <a href="https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.ufunc.outer.html" rel="nofollow noreferrer"><code>outer</code></a> comparison, since <code>less_equal</code> and <code>greater_equal</code> are <code>ufunc</code>s.</p>
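<p>One caveat (my addition): the sample frames store the dates as strings, and entries like <code>'2018-7-30'</code> are not zero-padded, so lexicographic comparison would misorder them. Convert to real datetimes first:</p>
<pre><code>df1['Date'] = pd.to_datetime(df1['Date'])
df1['yearAgo'] = pd.to_datetime(df1['yearAgo'])
df2['Date'] = pd.to_datetime(df2['Date'])
</code></pre>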
<p>Notice that</p>
<pre><code>>>> np.greater_equal.outer(df1.Date, df2.Date)
array([[ True, True, True, True, True],
[ True, True, True, True, True],
[False, True, True, True, True],
[False, True, True, True, True]])
</code></pre>
<p>So you can get your mask by</p>
<pre><code>mask = (np.greater_equal.outer(df1.Date, df2.Date) &
        np.less_equal.outer(df1.yearAgo, df2.Date))
</code></pre>
<p>And use <a href="https://docs.scipy.org/doc/numpy-1.10.4/reference/generated/numpy.ufunc.outer.html" rel="nofollow noreferrer"><code>outer multiplication</code></a> + summing along <code>axis=1</code></p>
<pre><code>>>> np.sum(np.multiply(mask, df2.amount.values), axis=1)
Out[49]:
array([4. , 4.75, 3.75, 3.75])
</code></pre>
<p>In the end, just assign back</p>
<pre><code>>>> df1['yearToDateTotalAmount'] = np.sum(np.multiply(mask, df2.amount.values), axis=1)
Date yearAgo yearToDateTotalAmount
0 2018-10-31 2017-10-31 4.00
1 2018-10-30 2017-10-30 4.75
2 2018-10-29 2017-10-29 3.75
3 2018-10-28 2017-10-28 3.75
</code></pre>
|
python|pandas|dataframe
| 1
|
377,028
| 52,932,209
|
Google BigQuery Sum return wrong result
|
<p>I'm running this query on public blockchain data to get the total of burned tokens, but SUM returns a result much smaller than the real one (I ran the same query without SUM and summed the results in Pandas): it gives 8306 while pandas gives 328608.</p>
<p>log.data - hex number</p>
<pre><code>SELECT
SUM(SAFE_CAST(log.data as INT64)/POW(10,18))
FROM
`bigquery-public-data.ethereum_blockchain.logs` AS log
WHERE TRUE
AND log.address = '0xf53ad2c6851052a81b42133467480961b2321c09'
AND log.block_timestamp >= '2018-01-01 00:00:01'
AND log.block_timestamp <= '2018-12-01 00:00:01'
AND SUBSTR(log.topics[SAFE_OFFSET(0)], 1, 10) IN ('0x42696c68','0xcc16f5db')
</code></pre>
<p>I don't quite understand why this happens. Any answer would be appreciated.</p>
|
<p>The problem is that some of the <code>log.data</code> values are excluded from the <code>SUM</code>, since they don't fit in the range of <code>INT64</code> and hence the <code>SAFE_CAST(log.data AS INT64)</code> returns <code>NULL</code>. As an example, <code>0x00000000000000000000000000000000000000000000000080b7978da47c78d2</code> is greater than the max <code>INT64</code> value of <code>9223372036854775807</code>, which is <code>0x7FFFFFFFFFFFFFFF</code> in hexadecimal.</p>
<p>You can instead cast the <code>log.data</code> values to the <code>FLOAT64</code> type, which produces a result closer to what you see using Pandas:</p>
<pre><code>SELECT
SUM(CAST(log.data as FLOAT64)/POW(10,18))
FROM
`bigquery-public-data.ethereum_blockchain.logs` AS log
WHERE TRUE
AND log.address = '0xf53ad2c6851052a81b42133467480961b2321c09'
AND log.block_timestamp >= '2018-01-01 00:00:01'
AND log.block_timestamp <= '2018-12-01 00:00:01'
AND SUBSTR(log.topics[SAFE_OFFSET(0)], 1, 10) IN ('0x42696c68','0xcc16f5db')
</code></pre>
<p>This returns <code>329681.7942642243</code>.</p>
|
python|pandas|google-bigquery
| 4
|
377,029
| 53,090,725
|
Streaming Grid Display in Jupyter Notebook
|
<p>I am trying to display live price updates coming from a Redis pub/sub channel in a grid in Jupyter. Every time there is a price update, the message will be added at the end of the grid. In other words, a gridview widget will be tied to a DataFrame so that every time it changes, the gridview will change. The idea is to get something like this:
<a href="https://i.stack.imgur.com/T4N7g.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/T4N7g.gif" alt="enter image description here"></a></p>
<p>I tried to do that by displaying and clearing the output. However, I am not getting a grid that updates in place, but rather a display-and-clear flicker, which is very annoying. </p>
<p>Here is the output widget in one jupyter cell</p>
<pre><code>import ipywidgets as iw
from IPython.display import display
o = iw.Output()
def output_to_widget(df, output_widget):
output_widget.clear_output()
with output_widget:
display(df)
o
</code></pre>
<p>Here is the code to subscribe to redis and get handle the message</p>
<pre><code>import redis, json, time
r = redis.StrictRedis(host = HOST, password = PASS, port = PORT, db = DB)
p = r.pubsub(ignore_subscribe_messages=True)
p.subscribe('QUOTES')
mdf = pd.DataFrame()
while True:
message = p.get_message()
if message:
json_msg = json.loads(message['data'])
df = pd.DataFrame([json_msg]).set_index('sym')
mdf = mdf.append(df)
output_to_widget(mdf, o)
time.sleep(0.001)
</code></pre>
|
<p>Try changing the first line of <code>output_to_widget</code> to <code>output_widget.clear_output(wait = True)</code>.</p>
<p><a href="https://ipython.org/ipython-doc/3/api/generated/IPython.display.html" rel="nofollow noreferrer">https://ipython.org/ipython-doc/3/api/generated/IPython.display.html</a></p>
|
python|pandas|jupyter-notebook
| 1
|
377,030
| 52,953,599
|
Plotting multiple columns from Different Dataframe
|
<p>I have been trying to make a plot in Python, but I am struggling with the syntax. I have googled but couldn't find anything suitable.</p>
<p>I have three data frames. Data in first dataframe is something like attached pic. </p>
<p><a href="https://i.stack.imgur.com/yva5D.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yva5D.jpg" alt="enter image description here"></a></p>
<p>I have two more dataframes that contain one column each. One has the max. temperature for the past few years, as shown below, and the other one has the min. Though these dataframes don't have the City Name column that DF1 has, their temperatures are for the city names mentioned in DF1 itself. </p>
<p><a href="https://i.stack.imgur.com/jCiEm.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jCiEm.jpg" alt="enter image description here"></a></p>
<p>Data frame 3. </p>
<p><a href="https://i.stack.imgur.com/xPvCg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xPvCg.jpg" alt="enter image description here"></a></p>
<p>I would like to make a line plot with the city name on the x-axis, and a y-axis that can accommodate the Temp from DF1, the Max Temp from DF2, and the Min Temp from DF3. I want all three line plots in the same graph.</p>
|
<p>You could do something like this:</p>
<pre><code>import pandas as pd
import matplotlib.pyplot as plt
# create data:
df1 = pd.DataFrame({"City Name": ["Toronto", "NYC"],
"Temp": [10, 20],
"Rainfall": [100, 232]})
df2 = pd.DataFrame({"Max Temp": [20, 25]})
df3 = pd.DataFrame({"Min Temp": [0, -10]})
# concatenate all data into one dataframe:
df = pd.concat([df1, df2, df3], axis=1)
# drop rainfall column from data (assign back, drop is not in place):
df = df.drop("Rainfall", axis=1)
# change index to city names:
df.index = df["City Name"]
# drop city name column from data:
df = df.drop("City Name", axis=1)
plt.plot(df)
plt.legend(df.columns)
</code></pre>
<p>which would give you:</p>
<p><img src="https://i.stack.imgur.com/mw3kJ.png" alt="Plot1]"></p>
|
python|pandas|python-3.6
| 0
|
377,031
| 53,147,241
|
In using keras Lambda, how do I handle "TypeError: Object arrays are not currently supported"?
|
<p>I'm using Keras, and I want to make a layer that takes <code>[a0, a1]</code>, <code>[b0, b1, b2]</code> as inputs and gives <code>[a0*b0, a0*b1, a0*b2, a1*b0, a1*b1, a1*b2]</code> as output. I tried to use <code>Lambda</code>, but I couldn't succeed. Here's my code:</p>
<pre><code>import numpy as np
from keras.models import Input
from keras.layers import Lambda
def mix(A):
reshaped = [np.reshape(A[m], (1,np.size(A[m]))) for m in range(len(A))]
mixed = reshaped[-1]
for i in range(len(A)-1):
mixed = np.matmul(np.transpose(reshaped[-i-2]), mixed)
mixed = np.reshape(mixed, (1,np.size(mixed)))
return np.reshape(mixed, np.size(mixed))
a = Input(shape=(2,))
b = Input(shape=(3,))
c = Lambda(mix)([a, b])
</code></pre>
<p>Here's the error I got:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-32-07bbf930b48b> in <module>()
1 a = Input(shape=(2,))
2 b = Input(shape=(3,))
----> 3 c = Lambda(mix)([a, b])
~\Anaconda3\envs\mind\lib\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
455 # Actually call the layer,
456 # collecting output(s), mask(s), and shape(s).
--> 457 output = self.call(inputs, **kwargs)
458 output_mask = self.compute_mask(inputs, previous_mask)
459
~\Anaconda3\envs\mind\lib\site-packages\keras\layers\core.py in call(self, inputs, mask)
685 if has_arg(self.function, 'mask'):
686 arguments['mask'] = mask
--> 687 return self.function(inputs, **arguments)
688
689 def compute_mask(self, inputs, mask=None):
<ipython-input-31-bbc21320d8af> in mix(A)
4
5 for i in range(len(A)-1):
----> 6 mixed = np.matmul(np.transpose(reshaped[-i-2]), mixed)
7 mixed = np.reshape(mixed, (1,np.size(mixed)))
8
TypeError: Object arrays are not currently supported
</code></pre>
<p>But if I put:</p>
<pre><code>a = np.array([1,2])
b = np.array([3,4,5])
print(mix([a,b]))
</code></pre>
<p>then I get:</p>
<pre><code>[ 3 4 5 6 8 10]
</code></pre>
<p>which is exactly what I intended. But I don't know how to put this in <code>Lambda</code> properly.</p>
<p>Can anyone tell me how to handle this? I'm new to Keras, so I don't know the internal structure of <code>Lambda</code>, <code>Input</code> or the other pieces.</p>
<hr>
<p>Following Abhijit's comment, I changed the code like this:</p>
<pre><code>import numpy as np
import tensorflow as tf
from keras.models import Input
from keras.layers import Lambda
def mix(A):
reshaped = [tf.reshape(A[m], (1,tf.size(A[m]))) for m in range(len(A))]
mixed = reshaped[-1]
for i in range(len(A)-1):
mixed = tf.matmul(tf.transpose(reshaped[-i-2]), mixed)
mixed = tf.reshape(mixed, (1,tf.size(mixed)))
return tf.reshape(mixed, [tf.size(mixed)])
a = Input(shape=(2,))
b = Input(shape=(3,))
c = Lambda(mix)([a, b])
</code></pre>
<p>Now I don't get any errors, but I don't think I got the right neural network. Because executing:</p>
<pre><code>model = Model(inputs=[a,b], outputs=c)
print(model.summary())
</code></pre>
<p>I get:</p>
<pre><code>__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_22 (InputLayer) (None, 2) 0
__________________________________________________________________________________________________
input_23 (InputLayer) (None, 3) 0
__________________________________________________________________________________________________
lambda_3 (Lambda) (None,) 0 input_22[0][0]
input_23[0][0]
==================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
<p>But see the layer <code>lambda_3</code>. Shouldn't the output shape be <code>(None, 6)</code>?</p>
|
<p>Apart from the fact that you need to use Keras backend functions (i.e. <code>keras.backend.*</code>) or use backend functions directly (i.e. <code>tf.*</code> or <code>th.*</code>), I think you are making the definition of <code>mix</code> unnecessarily complicated. It can be done much simpler like this:</p>
<pre><code>from keras import backend as K
def mix(ts):
t0 = K.expand_dims(ts[0], axis=-1)
t1 = K.expand_dims(ts[1], axis=1)
return K.batch_flatten(t0 * t1)
a = Input(shape=(2,))
b = Input(shape=(3,))
c = Lambda(mix)([a, b])
model = Model(inputs=[a,b], outputs=c)
</code></pre>
<p>Here is the test:</p>
<pre><code># the reshapes are necessary to make them a batch
a = np.array([1,2]).reshape(1,2)
b = np.array([3,4,5]).reshape(1,3)
print(model.predict([a, b]))
# output
[[ 3. 4. 5. 6. 8. 10.]]
</code></pre>
<p>Further, sometimes the <code>Lambda</code> layer can automatically infer the output shape. However, if you would like, you can explicitly set its output shape:</p>
<pre><code>c = Lambda(mix, output_shape=(6,))([a, b])
</code></pre>
<p>Model summary:</p>
<pre><code>Layer (type) Output Shape Param # Connected to
==================================================================================================
input_9 (InputLayer) (None, 2) 0
__________________________________________________________________________________________________
input_10 (InputLayer) (None, 3) 0
__________________________________________________________________________________________________
lambda_5 (Lambda) (None, 6) 0 input_9[0][0]
input_10[0][0]
==================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
__________________________________________________________________________________________________
</code></pre>
|
python|numpy|tensorflow|keras|keras-layer
| 1
|
377,032
| 53,300,965
|
Pytorch Exception in Thread: ValueError: signal number 32 out of range
|
<p>I'm getting this error:</p>
<pre><code>Exception in Thread: ValueError: signal number 32 out of range
</code></pre>
<p>The specific tutorial that raises an issue for me is the training a classifier (<a href="https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html" rel="noreferrer">https://pytorch.org/tutorials/beginner/blitz/cifar10_tutorial.html</a>), the specific line is: </p>
<pre><code>dataiter = iter(trainloader)
</code></pre>
<p>and the full error traceback is:</p>
<pre><code>Exception in thread Thread-5:
Traceback (most recent call last):
File "/home/chenchen/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/chenchen/anaconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/chenchen/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 139, in _serve
signal.pthread_sigmask(signal.SIG_BLOCK, range(1, signal.NSIG))
File "/home/chenchen/anaconda3/lib/python3.6/signal.py", line 60, in pthread_sigmask
sigs_set = _signal.pthread_sigmask(how, mask)
ValueError: signal number 32 out of range
</code></pre>
<p>My operation system is Ubuntu 18.10 and my python env is Anaconda3 for python 3.6. I installed pytorch from the latest source. My cuda version is 10.0.</p>
|
<p>I have faced a similar issue and it got resolved when I set:</p>
<pre><code>num_workers=0
</code></pre>
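<p>In that tutorial the workers are configured on the <code>DataLoader</code>, so the change looks like this (a sketch; <code>trainset</code> and the other arguments come from the tutorial code):</p>
<pre><code>trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=0)
</code></pre>
<p>With <code>num_workers=0</code> the batches are loaded in the main process, which sidesteps the worker-process signal handling that raises the error.</p>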
|
python|pytorch
| 11
|
377,033
| 53,161,212
|
Getting percentage for each column after groupby
|
<p>I have a pandas dataframe with two columns <code>A</code> and <code>B</code>. The column <code>B</code> contains three categories <code>X</code>, <code>Y</code> and <code>Z</code>. I need to determine what percentage each value in <code>B</code> makes up within each group in <code>A</code>. Here is how the dataframe looks:</p>
<pre><code> A B
AA X
BB Y
CC Z
AA Y
AA Y
BB Z
.. ..
</code></pre>
<p>Now I want to plot a stacked plot, but it should be a percentage-based stacked plot, not just count-based, for each category in <code>B</code> corresponding to a group in <code>A</code>. Here is what I did so far:</p>
<p><code>df.groupby(['A'])['B'].value_counts().unstack()</code> which gives me this</p>
<pre><code>B X Y Z
A
AA 65 666 5
BB 123 475 6
CC 267 1337 40
</code></pre>
<p>Now I want to divide each value by the sum of its row, e.g. for the first row <code>(65/(65+666+5), 666/(65+666+5), 5/(65+666+5))</code>, and plot the results as a stacked bar plot.
Can someone please help?</p>
|
<p>You can find the row-wise sum and divide along the axis something like this:</p>
<pre><code>freq_df = df.groupby(['A'])['B'].value_counts().unstack()
pct_df = freq_df.divide(freq_df.sum(axis=1), axis=0)
</code></pre>
<p>And then to plot that you should simply be able to use</p>
<pre><code>pct_df.plot(kind="bar", stacked=True)
</code></pre>
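<p>As an alternative (not in the answer above), <code>pd.crosstab</code> can produce the row-normalized table in one step:</p>
<pre><code>pd.crosstab(df['A'], df['B'], normalize='index').plot(kind="bar", stacked=True)
</code></pre>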
|
python|python-3.x|pandas|data-visualization
| 4
|
377,034
| 52,982,056
|
How to convert numpy datetime64 [ns] to python datetime?
|
<p>I need to convert dates from pandas frame values in a separate function:</p>
<pre><code> def my_func(lat, lon, when):
ts = (when - np.datetime64('1970-01-01T00:00:00Z','s')) / np.timedelta64(1, 's')
date = datetime.datetime.utcfromtimestamp(ts)
print("Numpy date= ", when, " Python date= ", date)
return float(90) - next_func(lat, lon, date)
</code></pre>
<p>Invokation this function:</p>
<pre><code>new_df['new_column'] = np.vectorize(my_func)(lat, lon, new_df['datetime(LT)'])
</code></pre>
<p>But it raise error:</p>
<pre><code>ufunc subtract cannot use operands with types dtype('int64') and dtype('<M8[s]')
</code></pre>
<p>How to convert numpy datetime64 [ns] to python datetime?</p>
|
<p>I wonder if you need all this conversion work. With the right time units a <code>datetime64</code> can produce a <code>datetime</code> object directly.</p>
<p>I'm not sure about your <code>when</code> variable, but let's assume it comes from <code>pandas</code>, and is something like a <code>DatetimeIndex</code>:</p>
<pre><code>In [56]: time = pandas.date_range('6/28/2013', periods=5, freq='5D')
In [57]: time
Out[57]:
DatetimeIndex(['2013-06-28', '2013-07-03', '2013-07-08', '2013-07-13',
'2013-07-18'],
dtype='datetime64[ns]', freq='5D')
</code></pre>
<p>The equivalent numpy array</p>
<pre><code>In [58]: time.values
Out[58]:
array(['2013-06-28T00:00:00.000000000', '2013-07-03T00:00:00.000000000',
'2013-07-08T00:00:00.000000000', '2013-07-13T00:00:00.000000000',
'2013-07-18T00:00:00.000000000'], dtype='datetime64[ns]')
In [59]: time.values.tolist()
Out[59]:
[1372377600000000000,
1372809600000000000,
1373241600000000000,
1373673600000000000,
1374105600000000000]
</code></pre>
<p>With <code>[ns]</code> the result is a large integer, a 'timestamp' of some sort. But if I convert the time units to something like seconds, or even microseconds (us):</p>
<pre><code>In [60]: time.values.astype('datetime64[s]')
Out[60]:
array(['2013-06-28T00:00:00', '2013-07-03T00:00:00',
'2013-07-08T00:00:00', '2013-07-13T00:00:00',
'2013-07-18T00:00:00'], dtype='datetime64[s]')
In [61]: time.values.astype('datetime64[s]').tolist()
Out[61]:
[datetime.datetime(2013, 6, 28, 0, 0),
datetime.datetime(2013, 7, 3, 0, 0),
datetime.datetime(2013, 7, 8, 0, 0),
datetime.datetime(2013, 7, 13, 0, 0),
datetime.datetime(2013, 7, 18, 0, 0)]
</code></pre>
<p>the result is a list of <code>datetime</code> objects.</p>
|
python|numpy
| 7
|
377,035
| 52,905,360
|
Summing multiple row values of various columns in Pandas
|
<p>I need to add up the row values of various columns and store the result in the same (or a new) dataframe.
E.g. the dataframe looks something like this:</p>
<pre><code>id col1 col2 col3 col4 ... col50
1 1 12 3 44 0
1 7 0 7 2 10
1 2 3 0 4 9
3 9 0 1 0 0
3 1 1 11 1 0
</code></pre>
<p>And the expected values should be:</p>
<pre><code>id col1 col2 col3 col4... col50
1   10    15    10    50     19
3 10 1 12 1 0
</code></pre>
<p>If I use <code>tmp2 = tmp2.iloc[:,1:50].sum()</code>, it changes the dimension of the dataframe. </p>
|
<p>This is a <strong>grouping aggregation</strong> by <code>id</code>. Therefore, use a <code>GroupBy</code> object:</p>
<pre><code>res = df.groupby('id', as_index=False).sum()
print(res)
id col1 col2 col3 col4 col50
0 1 10 15 10 50 19
1 3 10 1 12 1 0
</code></pre>
|
python|pandas|dataframe|pandas-groupby
| 4
|
377,036
| 52,913,191
|
How to iterate over each individual column values in multiple column dataframe?
|
<p>I have a data frame with multiple columns: <strong>['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']</strong>.</p>
<p>In the Energy Supply column, I want to convert the unit of the column to Peta from Giga. But in the process
<code>energy['Energy Supply'] *= 1000000</code>, when the value is "...." (missing values are denoted by this), it also gets multiplied, i.e. duplicated. The string values in the column get multiplied as well (e.g. original: Peta, after the operation: PetaPetaPetaPeta...).</p>
<p>To stop this from happening, I am running this:</p>
<pre><code>energy = pd.read_excel("Energy Indicators.xls",skiprows = 16, skip_footer = 38)
energy.drop(['Unnamed: 0','Unnamed: 1'],axis = 1, inplace = True)
energy.columns = ['Country', 'Energy Supply', 'Energy Supply per Capita', '% Renewable']
for i in energy['Energy Supply']:
if (isinstance(energy[i],int) == True):
energy['Energy Supply'][i]=energy['Energy Supply'][i]*1000000
return (energy)
</code></pre>
<p>But I am not getting the result i.e. to change the value of integer type variables only, and nothing is changing.</p>
<p>Where I think the problem lies: the first rows give a <strong>false</strong> condition, as they are strings, and based on that the program is not modifying the values, whereas I want to check each value individually and, if it is of integer type, multiply it by 1,000,000. </p>
<p>Input:</p>
<pre><code> Country Energy Supply Energy Supply per Capita % Renewable
0 NaN Petajoules Gigajoules %
1 Afghanistan 321 10 78.6693
2 Albania 102 35 100
3 Algeria 1959 51 0.55101
4 American Samoa ... ... 0.641026
</code></pre>
<p>Expected Output:</p>
<pre><code> Country Energy Supply Energy Supply per Capita % Renewable
0 NaN Petajoules Gigajoules %
1 Afghanistan 3210000 10 78.6693
2 Albania 1020000 35 100
3 Algeria 19590000 51 0.55101
4 American Samoa ... ... 0.641026
</code></pre>
<p>Current Output:</p>
<pre><code> Country Energy Supply Energy Supply per Capita % Renewable
0 NaN PetajoulesPeta. Gigajoules %
1 Afghanistan 3210000 10 78.6693
2 Albania 1020000 35 100
3 Algeria 19590000 51 0.55101
4 American Samoa ........ ... 0.641026
</code></pre>
|
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.Series.str.isnumeric.html" rel="nofollow noreferrer"><code>str.isnumeric</code></a> to check if a string is numeric and then multiply.</p>
<pre><code>energy['Energy Supply'] = energy['Energy Supply'].apply(lambda x: int(x) * 1000000 if str(x).isnumeric() else x)
print (energy)
Country Energy Supply Energy Supply per Capita % Renewable
0 NaN Petajoules Gigajoules %
1 Afghanistan 321000000 10 78.6693
2 Albania 102000000 35 100
3 Algeria 1959000000 51 0.55101
4 American Samoa ... .. 0.641026
</code></pre>
|
python|pandas|dataframe
| 2
|
377,037
| 53,196,156
|
Different spectrogram between audio_ops and tf.contrib.signal
|
<p>I am trying to update the feature extraction pipeline of a speech command recognition model, replacing the function <code>audio_ops.audio_spectrogram()</code> with <code>tf.contrib.signal.stft()</code>. I assumed that they were equivalent, but I am obtaining different spectrogram values for the same input audio. Could someone explain the relation between the two methods, or whether it is possible to obtain the same results using <code>tf.contrib.signal.stft()</code>?</p>
<p>My code:</p>
<p>1) <code>audio_ops</code> method:</p>
<pre><code>from tensorflow.contrib.framework.python.ops import audio_ops
import tensorflow as tf
import numpy as np
from tensorflow.python.ops import io_ops
#WAV audio loader
wav_filename_placeholder_ = tf.placeholder(tf.string, [], name='wav_filename')
wav_loader = io_ops.read_file(wav_filename_placeholder_)
sample_rate = 16000
desired_samples = 16000 #1 sec audio
wav_decoder = audio_ops.decode_wav(wav_loader, desired_channels=1, desired_samples=desired_samples)
#Computing the spectrograms
spectrogram = audio_ops.audio_spectrogram(wav_decoder.audio,
window_size=320,
stride=160,
magnitude_squared=False)
with tf.Session() as sess:
feed_dict={wav_filename_placeholder_:"/<folder_path>/audio_sample.wav"}
#Get the input audio and the spectrogram
audio_ops_wav_decoder_audio, audio_ops_spectrogram = sess.run([wav_decoder.audio, spectrogram], feed_dict)
</code></pre>
<p>2) <code>tf.contrib.signal</code> method:</p>
<pre><code>#Input WAV audio (will be initialized with the same audio signal: wav_decoder.audio )
signals = tf.placeholder(tf.float32, [None, None])
#Compute the spectrograms and get the absolute values
stfts = tf.contrib.signal.stft(signals,
frame_length=320,
frame_step=160,
fft_length=512,
window_fn=None)
magnitude_spectrograms = tf.abs(stfts)
with tf.Session() as sess:
feed_dict = {signals : audio_ops_wav_decoder_audio.reshape(1,16000)}
tf_original, tf_stfts, tf_spectrogram, = sess.run([signals, stfts, magnitude_spectrograms], feed_dict)
</code></pre>
<p>Thank you in advance</p>
|
<p>Found these helpful comments in github that discuss the differences:</p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/11339#issuecomment-345741527" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/11339#issuecomment-345741527</a></p>
<p><a href="https://github.com/tensorflow/tensorflow/issues/11339#issuecomment-443553788" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/11339#issuecomment-443553788</a></p>
<blockquote>
<p>You can think of audio_ops.audio_spectrogram and audio_ops.mfcc as
"fused" ops (like fused batch-norm or fused LSTM cells that TensorFlow
has) for the ops in tf.contrib.signal. I think the original motivation
of them was that a fused op makes it easier to provide mobile support.
I think long term it would be nice if we removed them and provided
automatic fusing via XLA, or unified the API to match
tf.contrib.signal API, and provided fused keyword arguments to
tf.contrib.signal functions, like we do for
tf.layers.batch_normalization.</p>
<p>audio_spectrogram is a C++ implementation of an STFT, while
tf.signal.stft uses TensorFlow ops to compute the STFT (and thus has
CPU, GPU and TPU support).</p>
<p>The main cause of difference between them is that audio_spectrogram
uses fft2d to compute FFTs while tf.contrib.signal.stft uses Eigen
(CPU), cuFFT (GPU), and XLA (TPU). There is another very minor
difference, which is that the default periodic Hann window used by
each is slightly different. tf.contrib.signal.stft follows
numpy/scipy's definition.</p>
</blockquote>
|
python|tensorflow|speech-recognition
| 3
|
377,038
| 53,312,259
|
Rpy2 conversion of categorical data containing nulls to R factors
|
<p>I have a pandas dataframe with a categorical column containing NaN values, e.g.:</p>
<pre><code>g = pd.Series(["A", "B", "C", np.nan], dtype="category")
g
0 A
1 B
2 C
3 NaN
dtype: category
Categories (3, object): [A, B, C]
</code></pre>
<p>In pandas NaN is not a category but you can have NaN values in your categorical data. I want to pass this dataframe through to R using %%R in a Jupyter notebook . The categorical column is successfully recognised by R as a factor, but the factor is malformed, presumably because of the Nan values:</p>
<pre><code>%%R -i g
str(g)
Factor w/ 3 levels "A","B","C": 1 2 3 0
- attr(*, "names")= chr [1:4] "0" "1" "2" "3"
print(g)
Error in as.character.factor(x) : malformed factor
</code></pre>
<p>Is there any way to make sure that the factor is not malformed - e.g. to have an
NA factor level created automatically? </p>
<p>R: 3.5.1, rpy2: 2.9.4, Python - 3</p>
|
<p>At the time of writing this is a bug with rpy2's conversion of pandas categories, that is fixed and will be included in rpy2 starting with release 2.9.5: <a href="https://bitbucket.org/rpy2/rpy2/issues/493/rpy2-conversion-of-categorical-data" rel="nofollow noreferrer">https://bitbucket.org/rpy2/rpy2/issues/493/rpy2-conversion-of-categorical-data</a></p>
<p>A workaround is rather trivial: don't use a <code>NaN</code> in a pandas Category. </p>
<pre><code>g = pd.Series(["A", "B", "C", np.nan], dtype="category")
# Prepare alternative representation to pass it to R
g_r = g.replace(np.nan, 'Missing')
</code></pre>
<p>When converting it is now looking like:</p>
<pre><code>%%R -i g_r
str(g_r)
Factor w/ 4 levels "A","B","C","Missing": 1 2 3 4
- attr(*, "names")= chr [1:4] "0" "1" "2" "3"
</code></pre>
<p>Translating back into an R NA is only a matter of dropping that added level:</p>
<pre><code>%%R -i g_r
str(droplevels(g_r, exclude = "Missing"))
Factor w/ 3 levels "A","B","C": 1 2 3 NA
- attr(*, "names")= chr [1:4] "0" "1" "2" "3"
</code></pre>
|
r|pandas|rpy2|categorical-data|factors
| 0
|
377,039
| 53,169,808
|
Exporting a pandas df to sqlite leads to duplicate datasets instead of one updated dataset
|
<p>I'm uploading a pandas dataframe from a CSV file into a SQLite database via SQLAlchemy.
The initial filling works just fine, but when I rerun the following code, the same data is exported again and the database contains two identical datasets.</p>
<p>How can I change the code, so that only new or changed data is uploaded into the database?</p>
<pre><code>import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import sessionmaker
from datetime import datetime
import pandas as pd
# Set up of the engine to connect to the database
# the urlquote is used for passing the password which might contain special characters such as "/"
engine = create_engine('sqlite:///historical_data3.db')
conn = engine.connect()
Base = declarative_base()
# Declaration of the class in order to write into the database. This structure is standard and should align with SQLAlchemy's doc.
class Timeseries_Values(Base):
__tablename__ = 'Timeseries_Values'
#id = Column(Integer)
Date = Column(DateTime, primary_key=True)
ProductID = Column(Integer, primary_key=True)
Value = Column(Numeric)
@property
def __repr__(self):
return "(Date='%s', ProductID='%s', Value='%s')" % (self.Date, self.ProductID, self.Value)
fileToRead = r'V:\PYTHON\ProjectDatabase\HistoricalDATA_V13.csv'
tableToWriteTo = 'Timeseries_Values'
# Panda to create a dataframe with ; as separator.
df = pd.read_csv(fileToRead, sep=';', decimal=',', parse_dates=['Date'], dayfirst=True)
# The orient='records' is the key of this, it allows to align with the format mentioned in the doc to insert in bulks.
listToWrite = df.to_dict(orient='records')
# Set up of the engine to connect to the database
# the urlquote is used for passing the password which might contain special characters such as "/"
metadata = sqlalchemy.schema.MetaData(bind=engine, reflect=True)
table = sqlalchemy.Table(tableToWriteTo, metadata, autoload=True)
# Open the session
Session = sessionmaker(bind=engine)
session = Session()
# Insert the dataframe into the database in one bulk
conn.execute(table.insert(), listToWrite)
# Commit the changes
session.commit()
# Close the session
session.close()
</code></pre>
|
<p>This is working now; I've switched to the <code>df.to_sql</code> call with <code>if_exists='replace'</code>:</p>
<pre><code>import sqlalchemy
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, Numeric, DateTime
from sqlalchemy.orm import sessionmaker
from datetime import datetime
import pandas as pd
# Set up of the engine to connect to the database
# the urlquote is used for passing the password which might contain special characters such as "/"
engine = create_engine('sqlite:///historical_data3.db')
conn = engine.connect()
Base = declarative_base()
# Declaration of the class in order to write into the database. This structure is standard and should align with SQLAlchemy's doc.
class Timeseries_Values(Base):
__tablename__ = 'Timeseries_Values'
#id = Column(Integer)
Date = Column(DateTime, primary_key=True)
ProductID = Column(Integer, primary_key=True)
Value = Column(Numeric)
fileToRead = r'V:\PYTHON\ProjectDatabase\HistoricalDATA_V13.csv'
tableToWriteTo = 'Timeseries_Values'
# Panda to create a dataframe with ; as separator.
df = pd.read_csv(fileToRead, sep=';', decimal=',', parse_dates=['Date'], dayfirst=True)
# Write the dataframe into the database in one bulk.
# if_exists='replace' drops and recreates the table, so rerunning
# the script no longer accumulates duplicate rows.
df.to_sql(name=tableToWriteTo, con=conn, if_exists='replace')
</code></pre>
|
python|pandas|sqlite|sqlalchemy|dataset
| 0
|
377,040
| 53,119,367
|
filtering a dataframe on values in a list
|
<p>I have the below data frame :-</p>
<p><a href="https://i.stack.imgur.com/quuGH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/quuGH.png" alt="enter image description here"></a></p>
<p>I want to filter rows wherever there is 11 in <code>claim_status</code> </p>
<p>and aa1 in <code>claim_status_reason</code>.</p>
<p>I am trying the below code, but it simply gives me all the rows: </p>
<pre><code>my_list = 'aa1'
df[df['claim_status_reason'].str.contains( "|".join(my_list), regex=True)].reset_index(drop=True)
</code></pre>
<p>Expected output:-</p>
<pre><code>1.) where there is 11 in claim_ststus
2.) where there is aa1 in the claim_status_reason
</code></pre>
|
<p>The original attempt matches everything because <code>my_list = 'aa1'</code> is a string, so <code>"|".join(my_list)</code> produces the regex <code>'a|a|1'</code>, which matches any value containing an <code>'a'</code> or a <code>'1'</code>. Instead, you can use <code>apply</code> to obtain your desired filter like:</p>
<pre><code>df[(df['claim_staus'].apply(lambda x: 11 in x)) & (df['claim_status_reason'].apply(lambda x: 'a1' in x))]
</code></pre>
|
python|python-2.7|pandas|dataframe
| 4
|
377,041
| 53,045,867
|
Extracting the hour from a time column in pandas
|
<p>Suppose I have the following dataset: </p>
<p><a href="https://i.stack.imgur.com/5GORt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5GORt.png" alt="enter image description here"></a></p>
<p>How would I create a new column, to be the hour of the time?</p>
<p>For example, the code below works for individual times, but I haven't been able to generalise it for a column in pandas.</p>
<pre><code>t = datetime.strptime('9:33:07','%H:%M:%S')
print(t.hour)
</code></pre>
|
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime</code></a> to datetimes with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.hour.html" rel="noreferrer"><code>dt.hour</code></a>:</p>
<pre><code>df = pd.DataFrame({'TIME':['9:33:07','9:41:09']})
#should be slower
#df['hour'] = pd.to_datetime(df['TIME']).dt.hour
df['hour'] = pd.to_datetime(df['TIME'], format='%H:%M:%S').dt.hour
print (df)
TIME hour
0 9:33:07 9
1 9:41:09 9
</code></pre>
<p>If want working with <code>datetime</code>s in column <code>TIME</code> is possible assign back:</p>
<pre><code>df['TIME'] = pd.to_datetime(df['TIME'], format='%H:%M:%S')
df['hour'] = df['TIME'].dt.hour
print (df)
TIME hour
0 1900-01-01 09:33:07 9
1 1900-01-01 09:41:09 9
</code></pre>
|
python|pandas|datetime
| 11
|
377,042
| 53,293,380
|
pandas dataframe copy of slice warning
|
<p>I'm fairly new to pandas, and was getting the infamous SettingWithCopyWarning in a large piece of code. I boiled it down to the following:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([[0,3],[3,3],[3,1],[1,1]], columns=list('AB'))
df
df = df.loc[(df.A>1) & (df.B>1)]
df['B'] = 10
</code></pre>
<p>When I run this I get the warning:</p>
<pre><code>__main__:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
</code></pre>
<p>The strange thing is that if I leave off the "df" line it runs without a warning. Is this intended behavior?</p>
<p>In general, if I want to filter a DataFrame by the values across various columns, do I need to do a copy() to avoid the SettingWithCopyWarning?</p>
<p>thanks very much</p>
|
<p>Assuming your DataFrame is as below (from your question), this will avoid the <code>SettingWithCopyWarning</code>.</p>
<p>There is a <a href="https://github.com/pandas-dev/pandas/issues/11984" rel="nofollow noreferrer">GitHub discussion</a> with a solution suggested by one of the pandas developers, Jeff :) </p>
<pre><code>df
A B
1 3 3
</code></pre>
<p>Best to use this way.</p>
<pre><code>df['B'] = df['B'].replace(3, 10)
df
A B
1 3 10
</code></pre>
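<p>Alternatively, and answering the question directly: yes, taking an explicit <code>copy()</code> of the slice avoids the warning, because the new frame then owns its data:</p>
<pre><code>df = df.loc[(df.A > 1) & (df.B > 1)].copy()
df['B'] = 10  # no SettingWithCopyWarning
</code></pre>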
|
python|pandas
| 0
|
377,043
| 52,906,950
|
List index is out of range - but I'm checking the length before processing
|
<p>Wonder if you can advise - I get the below error when processing a list of items. I should note that this script works for 99% of items; it is only now that I've expanded the list to 84M rows that I am getting this issue.</p>
<p>I do this for each line</p>
<pre><code>elif len(str(x)) > 3 and str(x[len(x)-2]).rstrip() in cdns:
</code></pre>
<p>So, I don't see how the index can be out of range, if I'm actively checking if it's over a certain length before processing?</p>
<pre><code>---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-2-a28be4b396bd> in <module>()
21 elif len(str(x)) > 4 and str(x[len(x)-2]).rstrip() in cdns:
22 cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+str(x[len(x)-1]))
---> 23 elif len(str(x)) > 5 and str(x[len(x)-3]).rstrip() in cdns:
24 cleandomain.append(str(x[len(x)-4])+'.'+str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
25 #if its in the TLD list, do this
IndexError: list index out of range
</code></pre>
<p>The full loop is below, so I'd expect that if the list index were out of range, it'd just fall through to the other branch and append the list value?</p>
<pre><code> for x in index:
#if it ends with a number, it's an IP
if str(x)[-1].isnumeric():
cleandomain.append(str(x[0])+'.'+str(x[1])+'.*.*')
#if its in the CDN list, take a subdomain as well
elif len(str(x)) > 3 and str(x[len(x)-2]).rstrip() in cdns:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+str(x[len(x)-1]))
elif len(str(x)) > 4 and str(x[len(x)-3]).rstrip() in cdns:
cleandomain.append(str(x[len(x)-4])+'.'+str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
#if its in the TLD list, do this
elif len(str(x)) > 3 and str(x[len(x)-2]).rstrip()+'.'+ str(x[len(x)-1]).rstrip() in tld:
cleandomain.append(str(x[len(x)-3])+'.'+str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
elif len(str(x)) > 2 and str(x[len(x)-1]) in tld:
cleandomain.append(str(x[len(x)-2])+'.'+ str(x[len(x)-1]))
#if its not in the TLD list, do this
else:
cleandomain.append(x)
</code></pre>
<p>X is generated as below:</p>
<p>X is a list of lists - the split out parts of a domain like below
[['news', 'bbc', 'co', 'uk'], ['graph', 'facebook', 'com']]</p>
<pre><code>import pandas as pd
path = "Desktop/domx.csv"
df = pd.read_csv(path, delimiter=',', header='infer', encoding = "ISO-8859-1")
df2 = df[((df['domain'] != '----'))]
df3 = df2[['domain', 'use']]
for row in df2.iterrows():
index = df3.domain.str.split('.').tolist()
</code></pre>
<p>Any help would be great</p>
|
<p>Let me expand on what Corentin Limier said in comments with a specific counterexample, since you categorically deny this could be true, without actually checking your debugger:</p>
<p>based on your original question error dump:</p>
<blockquote>
<p>---> 23 elif len(str(x)) > 5 and str(x[len(x)-3]).rstrip() in cdns:<br>
IndexError: list index out of range</p>
</blockquote>
<pre><code>x = ['counterexample']
print ('x =', x)
print ('length of x is', len(x))
print ('length of str(x) is', len(str(x)))
if len(str(x)) > 5:
print ('You think this is safe')
try:
x[len(x)-3]
except IndexError:
print ('but it is not.')
</code></pre>
<blockquote>
<p>x = ['counterexample']<br>
length of x is 1<br>
length of str(x) is 18<br>
You think this is safe<br>
but it is not.</p>
</blockquote>
<p>You need to know if the index is valid, compared to the number of items in x. You are actually looking at the length of the string representation of x, which is completely different. The string is 18 characters long, but there is only one item in the list.</p>
<p>PS: Don't feel bad, we have ALL done this. By this, I mean "get blinders when we have written code completely different from what we thought we did." This is one of the primary reasons for "code review" in professional settings.</p>
|
python|python-3.x|pandas
| 3
|
377,044
| 53,244,590
|
Histogram per hour - matplotlib
|
<p>I'm analyzing public data on transport accidents in the UK.</p>
<p>My dataframe looks like this :</p>
<pre><code>Index Time
0 02:30
1 00:37
2 01:25
3 09:15
4 07:53
5 09:29
6 08:53
7 10:05
</code></pre>
<p>I'm trying to plot a histogram showing the accident distribution by time of day;
here is my code: </p>
<pre><code> import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import matplotlib.dates as mdates
df['hour']=pd.to_datetime(df['Time'],format='%H:%M')
df.set_index('hour', drop=False, inplace=True)
df['hour'].groupby(pd.Grouper(freq='60Min')).count().plot(kind='bar', color='b')
</code></pre>
<p>This is the output:</p>
<p><a href="https://i.stack.imgur.com/Rhpy3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rhpy3.png" alt="Graph"></a></p>
<p>In this graph, I'd like to change the labels on the x-axis to the format 'hh:mm'. How would I go about doing this?</p>
|
<p>What you are missing is setting the matplotlib x-axis tick format:</p>
<pre><code>df.set_index('hour', drop=False, inplace=True)
df = df['hour'].groupby(pd.Grouper(freq='60Min')).count()
ax = df.plot(kind='bar', color='b')
ticklabels = df.index.strftime('%H:%Mh')
ax.xaxis.set_major_formatter(matplotlib.ticker.FixedFormatter(ticklabels))
plt.show()
</code></pre>
|
python|matplotlib|histogram|pandas-groupby
| 4
|
377,045
| 53,094,238
|
GPU crashes when running Keras/tensorflow-gpu, specifically when clock speed goes to idle at 0 MHz
|
<p>I'm using Jupyter Notebook to run Keras with a Tensorflow GPU backend. I've done some testing with various dummy models while simultaneously monitoring my GPU usage using MSI Afterburner, GPU-Z, nvidia-smi and Task Manager. My GPU is a GeForce GTX 960M, which has no issues running games. The temperatures are also low when running Keras.</p>
<p>What I've noticed is that the Keras runs fine (e.g. loading or training a model) in the beginning but whenever Keras is not running anything, the GPU naturally wants to idle from 1097 MHz to 0 MHz and as soon as it does that the GPU crashes. I can see that the "GPU is lost" on NVSMI. I have to then disable and re-enable my GPU in the Device Manager to get it to work.</p>
<p>Does anyone have any idea why this might be happening?</p>
<p>Edit: I can temporarily prevent this from happening for very small programs by using the "allow_growth" feature as follows:</p>
<pre><code>import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.Session(config=config)
set_session(sess)
</code></pre>
<p>However, this only works if the operation is really small such that it uses only about 0.1 GB of GPU memory such as loading a model or running a really small model. However, if the program is using memory of even 0.3 GB of memory my GPU crashes since the memory does not go to 0 GB <em>before</em> the clock speed drops to 0 MHz (lower power state).</p>
|
<p>I was finally able to figure out the issue thanks to someone from another forum. It was a driver issue. The latest drivers provided by Nvidia are causing the issue unlike the old drivers provided by my laptop manufacturer.</p>
<p>Since I was not able to run tensorflow with my old drivers and do more troubleshooting, what I did was download eDrawings Viewer and open up some random assembly drawings I found online. First I tried with the latest Nvidia drivers, and I see that when I manipulate the models, my card is at P0 state but if I don't do anything and let the software idle, my card goes to a lower power state and crashes my GPU. But when I did the same exercise with my ASUS manufacturer-certified drivers (since this software was compatible even with the older drivers unlike TF), my GPU did NOT crash.</p>
<p>What I also discovered was that eDrawings Viewer does not crash even with the latest Nvidia drivers if I go into the Nvidia Control Panel and select "Prefer Maximum Performance" under Power Management Mode. The card stays at P0 state whenever I have the software open even after idling for minutes. Unfortunately, since python.exe does not have a graphical interface, this option does not work for my case. As a workaround, I can still run tensorflow without getting it to crash by running eDrawings Viewer in the background (or really any program that uses a graphical interface), which keeps my card at the P0 State.</p>
|
python|tensorflow|keras|nvidia
| 0
|
377,046
| 65,776,284
|
After applying pd.to_numeric on multiple columns there is no change in columns Dtype
|
<p>I'm wondering what I'm doing wrong when applying pd.to_numeric to multiple columns in a dataframe.</p>
<pre><code>df_weather = pd.read_csv ('https://raw.githubusercontent.com/MichalLeh/Edinburgh-bikes-project/main/edinburgh_weather.csv')#("J:/edinburgh_weather.csv")
</code></pre>
<p>Sample of dataframe:</p>
<pre><code> time temp feels wind gust rain humidity cloud pressure vis date
0 00:00 11 °c 11 °c 9 km/h from S 19 km/h 0.0 mm 79% 13% 1020 mb Excellent 2018-09-01
</code></pre>
<p>First I get rid of unwanted characters:</p>
<pre><code>df_weather = (df_weather[['time', 'date', 'temp', 'feels', 'wind', 'gust', 'rain', 'humidity', 'cloud', 'pressure']]
.replace(to_replace ='[^0-9\:\-\.]', value = '', regex = True))
</code></pre>
<p>And then I apply to_numeric:</p>
<pre><code>df_weather[['temp', 'feels', 'wind', 'gust', 'rain', 'humidity', 'cloud', 'pressure']].apply(lambda x: pd.to_numeric(x, errors='coerce'))
df_weather.info()
</code></pre>
<p>I'm not getting any errors and yet the result looks like this:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 6336 entries, 0 to 6335
Data columns (total 11 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 time 6336 non-null object
1 temp 6336 non-null object
2 feels 6336 non-null object
3 wind 6336 non-null object
4 gust 6336 non-null object
5 rain 6336 non-null object
6 humidity 6336 non-null object
7 cloud 6336 non-null object
8 pressure 6336 non-null object
9 vis 6336 non-null object
10 date 6336 non-null object
dtypes: object(11)
memory usage: 544.6+ KB
</code></pre>
<p>BTW, <code>pd.to_numeric</code> does work when I transform the columns one by one. I'd love to be able to convert all of them at the same time. Thank you.</p>
|
<p>You need to assign back the columns converted to numeric:</p>
<pre><code>cols = ['temp', 'feels', 'wind', 'gust', 'rain', 'humidity', 'cloud', 'pressure']
df_weather[cols] = df_weather[cols].apply(lambda x: pd.to_numeric(x, errors='coerce'))
</code></pre>
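<p>As a side note, <code>apply</code> forwards keyword arguments to the function, so the lambda can be dropped:</p>
<pre><code>df_weather[cols] = df_weather[cols].apply(pd.to_numeric, errors='coerce')
</code></pre>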
|
python|pandas|dataframe
| 2
|
377,047
| 65,555,883
|
Python pandas - Index lookup based on criteria
|
<p>I have an extract of a dataframe below:</p>
<pre><code>
IndexR ATOI_INDEX pf_52w_rolling_R atoi_52w_rolling_R
date
2012-07-27 3.576907 1.384371 -0.208960 -0.038279
2012-08-03 3.563627 1.388237 -0.129268 0.039078
2012-08-10 3.598886 1.404311 -0.118584 0.032610
2012-08-17 3.731085 1.433466 -0.048716 0.072498
2012-08-24 3.727810 1.426206 -0.053756 0.043906
2012-08-31 3.588335 1.417722 -0.092489 0.026757
2012-09-07 3.613275 1.419281 -0.081376 0.040861
2012-09-14 3.746451 1.438596 0.006892 0.066220
2012-09-21 3.834664 1.445147 0.163057 0.136574
2012-09-28 3.785910 1.438789 0.138433 0.099768
2012-10-05 3.899735 1.473921 0.084421 0.086089
2012-10-12 3.896683 1.472302 0.070044 0.073716
2012-10-19 4.040653 1.499956 0.142053 0.110457
2012-10-26 3.934642 1.468487 0.063541 0.034418
2012-11-02 3.939487 1.464218 0.094581 0.048841
2012-11-09 3.953680 1.464842 0.068256 0.046199
2012-11-16 3.802839 1.424206 0.080771 0.045743
2012-11-23 3.874423 1.449342 0.178798 0.115653
2012-11-30 3.976766 1.480888 0.115446 0.058999
2012-12-07 3.974976 1.497452 0.113870 0.092148
</code></pre>
<p>I need to print the date in which the minimum value of the <code>pf_52w_rolling_R</code> occurred.</p>
<p>I've tried a number of possibilities, but I keep getting type errors <code>(TypeError: cannot do index indexing on <class 'pandas.core.indexes.datetimes.DatetimeIndex'> with these indexers [-0.208960] of <class 'float'></code></p>
<p>The latest attempt is below:</p>
<pre><code>rolling.loc[rolling['pf_52w_rolling_R'].min()]
</code></pre>
|
<p>Let us try <code>idxmin</code>, which returns the index label of the minimum value. (Your attempt passed <code>min()</code>, i.e. the value itself, to <code>.loc</code>, which is why you got the <code>TypeError</code>.)</p>
<pre><code>rolling.loc[rolling['pf_52w_rolling_R'].idxmin()]
</code></pre>
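<p>As a side note: since the frame is indexed on <em>date</em>, the <code>idxmin</code> call on its own already returns the date of the minimum, so printing just the date is simply:</p>
<pre><code>print(rolling['pf_52w_rolling_R'].idxmin())
</code></pre>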
|
python|pandas
| 2
|
377,048
| 65,614,664
|
Concatenate one table to another using a key
|
<p>I have a CSV file. <code>df</code> represents this file. There are two ids in this file: the <code>d_id</code> and the <code>i_id</code>. The program runs through a certain algorithm and gives me back the <code>probability</code> and the <code>d_id</code> in a sorted order (I can also deactivate the sorting if that makes things easier).
In any case, I would like to receive the <code>i_id</code> instead of the <code>d_id</code>. Can someone help me map the <code>d_id</code> to the <code>i_id</code> so that I only get the <code>i_id</code>, as the last output shows?</p>
<pre><code>import pandas as pd
d = {'d_id': [1, 2, 2, 3, 3, 3, 4],
'i_id': [99, 98, 98, 97, 97, 97, 96]}
df = pd.DataFrame(data=d)
print(df)
d_id i_id
0 1 99
1 2 98
2 2 98
3 3 97
4 3 97
5 3 97
6 4 96
d_new = {'d_id': [4, 2, 1, 3],
'probability': [0.8557, 0.83215, 0.2563, 0.14521]}
df_new = pd.DataFrame(data=d_new)
print(df_new)
   d_id  probability
0     4      0.85570
1     2      0.83215
2     1      0.25630
3     3      0.14521
</code></pre>
<p>What I tried</p>
<pre><code>result = df.merge(df_new, right_on='d_id')
print(result)
[OUT] TypeError: object of type 'NoneType' has no len()
</code></pre>
<p>What I want</p>
<pre><code> i_id probability
0 96 0.85570
1 98 0.83215
2 99 0.25630
3 97 0.14521
</code></pre>
<hr />
<p>What I also tried</p>
<pre><code>result = df.merge(df_new, how='left', on='d_id')
print(result)
d_id i_id probability
0 1 99 0.25630
1 2 98 0.83215
2 2 98 0.83215
3 3 97 0.14521
4 3 97 0.14521
5 3 97 0.14521
6 4 96 0.85570
</code></pre>
|
<p>I think you just need to post-process your merged data: keep only the two columns you want and drop the duplicate rows.</p>
<pre><code>import pandas as pd
d = {'d_id': [1, 2, 2, 3, 3, 3, 4],
'i_id': [99, 98, 98, 97, 97, 97, 96]}
df = pd.DataFrame(data=d)
d_new = {'d_id': [4, 2, 1, 3],
'probability': [0.8557, 0.83215, 0.2563, 0.14521]}
df_new = pd.DataFrame(data=d_new)
result = df.merge(df_new, how='left', on='d_id')[['i_id', 'probability']]
result.drop_duplicates(inplace=True)
</code></pre>
<p>result:</p>
<pre><code> i_id probability
0 99 0.25630
1 98 0.83215
3 97 0.14521
6 96 0.85570
</code></pre>
|
python|pandas|dataframe
| 1
|
377,049
| 65,873,873
|
Illustrating Normal Distribution using Numpy, Matploblib 3D from MATLAB code
|
<p>I am trying to plot a normal distribution in 3D. I have working code in MATLAB, but I have failed to rewrite it in Python.</p>
<p>The complete MATLAB code is:</p>
<pre><code>dsig = 0.25;
dx = 0.5;
mu = 0;
[X, SIGMA] = meshgrid(-10:dx:10, 1:dsig:5);
Z = exp(-(X-mu).^2./(2*SIGMA.^2))./sqrt(2*pi*SIGMA.^2);
waterfall(X,SIGMA,Z)
xlabel('x')
ylabel('\sigma')
zlabel('f(x)')
</code></pre>
<p>The code that I have tried to write in Python so far is:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
dsig = 0.25
dx = 0.5
mu = 0
X = np.linspace(-10,dx,10)
SIGMA = np.linspace(1,dsig,5)
X, SIGMA = np.meshgrid(X, SIGMA)
Z = 1/(np.sqrt(2*np.pi*SIGMA*SIGMA))*np.exp(-(x-mu)**2/(2*SIGMA*SIGMA))
</code></pre>
<p>and this code keeps giving me an error.</p>
<p>Could please someone help me out with drawing this 3d plot in Python?</p>
|
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import math
import scipy.stats as stats
mu = 0
variance = 1
sigma = math.sqrt(variance)
x = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
y = np.linspace(mu - 3*sigma, mu + 3*sigma, 100)
x, y = np.meshgrid(x, y)
r = np.sqrt(x**2 + y**2)
z = stats.norm.pdf(r, mu, sigma)
fig = plt.figure()
ax = fig.gca(projection='3d') # get current axis
surf = ax.plot_surface(x, y, z, cmap=cm.coolwarm, linewidth=0, antialiased=False)
ax.set_zlim(0, 0.3)
ax.zaxis.set_major_locator(LinearLocator(10))
ax.zaxis.set_major_formatter(FormatStrFormatter('%.02f'))
plt.show()
</code></pre>
<p><a href="https://i.stack.imgur.com/Bd7D6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Bd7D6.png" alt="normal distribution" /></a></p>
|
python|matlab|numpy|statistics|quantitative-finance
| -1
|
377,050
| 65,757,385
|
Tensorfllow: load checkpoint from changed model
|
<p>For some reason I want to test the difference in the performance of a detector and his identical version but finetuned with some 3d convolutions.<br>
The model of the detector is google EfficientDet, the weights are finetuned on custom data. <br>
I was wondering if it was possible to load my custom weight in a model in which the graph-def is not the same (there would be 3d convolutions at some layers). And what could be the way to do that.<br>
I'm new to Tensorflow and a bit sad because in Pytorch this would be so easy</p>
<p>Thanks</p>
|
<p>You can load the second model, get weights of the layer and set weights of your model:</p>
<pre><code>source_model = keras.models.load_model('path/to/location')
weight = source_model.layers[0].get_weights() # <= change index here
EfficientDetModel.layers[0].set_weights(weight) # <= change index here
</code></pre>
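<p>A possible extension (just a sketch, assuming the unchanged layers keep the same names and shapes in both models) is to copy every layer that still matches and skip the ones you replaced with 3D convolutions:</p>
<pre><code>for src_layer in source_model.layers:
    try:
        # get_layer raises ValueError if the name is missing,
        # set_weights raises ValueError if the shapes differ
        EfficientDetModel.get_layer(src_layer.name).set_weights(src_layer.get_weights())
    except ValueError:
        pass  # e.g. your new 3D convolution layers
</code></pre>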
|
python|tensorflow|deep-learning|checkpoint
| 1
|
377,051
| 65,711,006
|
Save to Excel during loop (Pandas DataFrame)
|
<p>I am running a loop that creates a large nested dictionary and in the end, saves it to an excel file using pandas.</p>
<p>How do I save to the same excel file after X iterations instead of waiting until the end? For example for every 10th iteration? Is it possible?</p>
<p>Current code (simplified):</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
d = {}
def some_function(x, y):
d[x] = {'id': x, 'status': 'ok', 'info': y}
...
print('ok')
elements = ['elem1', 'elem2', 'elem3', 'elem4', ..., 'elem1000']
count = 1
for element in elements:
try:
some_function(count, element)
except:
d[count] = {'id': count, 'status': 'error'}
count += 1
df = pd.DataFrame(d).T
df.to_excel('output.xlsx', index=False)
</code></pre>
|
<p>As far as I know, there's no good way to stream output to a single excel sheet with pandas. You can append the data as additional sheets using the pandas.ExcelWriter, but that doesn't sound like what you want.</p>
<p>You could always just run the df.to_excel every X iterations and overwrite the existing file, but that could get slow as your data gets large.</p>
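<p>A minimal sketch of that periodic overwrite, reusing the loop from the question (the interval of 10 is just an example):</p>
<pre class="lang-python prettyprint-override"><code>SAVE_EVERY = 10

for count, element in enumerate(elements, start=1):
    try:
        some_function(count, element)
    except Exception:
        d[count] = {'id': count, 'status': 'error'}
    if count % SAVE_EVERY == 0:
        # overwrite the file with everything collected so far
        pd.DataFrame(d).T.to_excel('output.xlsx', index=False)

# final save for any leftover iterations
pd.DataFrame(d).T.to_excel('output.xlsx', index=False)
</code></pre>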
|
python|excel|pandas
| 0
|
377,052
| 65,564,030
|
Pandas grouping and express as proportion
|
<pre><code>d = [{'name': 'tv', 'value': 10, 'amount': 35},
{'name': 'tv', 'value': 10, 'amount': 14},
{'name': 'tv', 'value': 15, 'amount': 23},
{'name': 'tv', 'value': 34, 'amount': 56},
{'name': 'radio', 'value': 90, 'amount': 35},
{'name': 'radio', 'value': 90, 'amount': 65},
{'name': 'radio', 'value': 100, 'amount': 50},
{'name': 'dvd', 'value': 0.5, 'amount': 35},
{'name': 'dvd', 'value': 0.2, 'amount': 40},
{'name': 'dvd', 'value': 0.5, 'amount': 15}
]
df = pd.DataFrame(d)
dff = df.groupby(['name', 'value']).agg('sum').reset_index()
dfff = dff.groupby(['name']).apply(lambda x: round((x['amount']/x['amount'].sum())*100))
print(dff)
print(dfff)
name value amount
0 dvd 0.2 40
1 dvd 0.5 50
2 radio 90.0 100
3 radio 100.0 50
4 tv 10.0 49
5 tv 15.0 23
6 tv 34.0 56
name
dvd 0 44.0
1 56.0
radio 2 67.0
3 33.0
tv 4 38.0
5 18.0
6 44.0
</code></pre>
<p>I now want to take this dataset and concatenate the rows grouped on the <code>name</code> variable. The <code>amount</code> variable should be expressed as a proportion.</p>
<p>The final dataset should look like below, where the <code>value</code> is the first term and <code>amount</code> expressed as a proportion is the second term.</p>
<pre><code> name concatenated_values
0 dvd 0.2, 44%, 0.5, 56%
1 radio 90, 67%, 100, 33%
.
.
.
</code></pre>
|
<p>Use a custom lambda function that flattens the nested lists inside <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>GroupBy.apply</code></a>:</p>
<pre><code>dff = df.groupby(['name', 'value']).agg('sum').reset_index()
dff['amount'] = ((dff['amount'] / dff.groupby(['name'])['amount'].transform('sum')*100)
.round().astype(int).astype(str) + '%')
f = lambda x: ', '.join(str(z) for y in x.to_numpy() for z in y)
d = dff.groupby('name')[['value','amount']].apply(f).reset_index(name='concatenated_values')
print(d)
name concatenated_values
0 dvd 0.2, 44%, 0.5, 56%
1 radio 90.0, 67%, 100.0, 33%
2 tv 10.0, 38%, 15.0, 18%, 34.0, 44%
</code></pre>
|
python|python-3.x|pandas|pandas-groupby
| 2
|
377,053
| 65,553,064
|
Training & Validation loss and dataset size
|
<p>I'm new to Neural Networks and I am doing a project where I have to define a NN and train it. I've defined a NN with 2 hidden layers of 17 units each; the network has 21 inputs and 3 outputs.</p>
<p>I have a dataset of 10 million samples and a matching set of 10 million labels. My first issue is about the sizes of the validation set and the training set. I'm using PyTorch with batches, and from what I've read, the batches shouldn't be too large. But I don't know approximately how large the sets should be.</p>
<p>I've tried with larger and smaller numbers, but I cannot find a correlation that tells me whether I'm right to choose a large or a small set for either of them (apart from the time required to process a very large set).</p>
<p>My second issue is about the training and validation loss, which I've read can tell me if I'm overfitting or underfitting depending on which is bigger. Ideally both should have the same value, and it also depends on the epochs. But I am not able to tune the network parameters like batch size and learning rate, or to choose how much data I should use for training and validation. If I use 80% of the set (8 million), training takes hours, and I'm afraid that if I choose a smaller dataset, it won't learn.</p>
<p>If anything is badly explained, please feel free to ask me for more information. As I said, the data is given, and I only have to define the network and train it with PyTorch.</p>
<p>Thanks!</p>
|
<p>For your first question about batch size: there is no fixed rule for what value it should have. You have to experiment and see which one works best; once your NN starts performing badly, don't go further above or below that value. There is no hard rule to follow here.</p>
<p>For your second question: first of all, having the training and validation loss be the same doesn't mean your NN is performing well. It is just an indication that performance will probably be good enough on a test set, and even that largely depends on many other things, like the distributions of your train and test sets.</p>
<p>And with NNs you need to try as many things as you can: different parameter values, different train and validation split sizes, etc. You cannot just assume that something won't work.</p>
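<p>As a starting point for the split itself, here is a minimal PyTorch sketch. The 80/20 ratio and the batch size of 256 are just assumptions to tune, and the dummy tensors stand in for your real 10-million-row data:</p>
<pre><code>import torch
from torch.utils.data import TensorDataset, DataLoader, random_split

# dummy stand-ins for your real data: samples with 21 features, 3 targets
X = torch.randn(1000, 21)   # replace with your real sample tensor
y = torch.randn(1000, 3)    # replace with your real label tensor

dataset = TensorDataset(X, y)
n_val = int(0.2 * len(dataset))              # hold out 20% for validation
train_set, val_set = random_split(dataset, [len(dataset) - n_val, n_val])

train_loader = DataLoader(train_set, batch_size=256, shuffle=True)
val_loader = DataLoader(val_set, batch_size=256)
</code></pre>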
|
python|machine-learning|neural-network|pytorch
| 0
|
377,054
| 65,871,648
|
how to use tensorflow dataset map function correct for string column
|
<p>I'm using the tensorflow datasets api,</p>
<p>and I have data with a string column that represents a binary option</p>
<p>(something like "yes" or "no").</p>
<p>I'm wondering how to convert it into 1 and 0 (integer values) respectively, while leaving the other columns unchanged.</p>
<p>My skeleton function is:</p>
<pre><code>def mapper(features,target):
#features["str_col"] TODO "MAP this when yes to 1 when no to 0"
#return features with x transformed # TODO
</code></pre>
<p>Can you assist?</p>
|
<p>You can convert a bool to int:</p>
<pre><code>y = tf.equal(features["str_col"], 'YES')
y = tf.cast(y, tf.int32)
</code></pre>
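<p>A sketch of the full mapper built on that idea (assuming the column holds the lowercase strings "yes"/"no" as in the question, and that <code>ds</code> stands for your <code>tf.data.Dataset</code> of (features, target) pairs):</p>
<pre><code>def mapper(features, target):
    features = dict(features)  # shallow copy so the original dict is not mutated
    features["str_col"] = tf.cast(tf.equal(features["str_col"], "yes"), tf.int32)
    return features, target

ds = ds.map(mapper)
</code></pre>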
|
python|tensorflow|tensorflow2.0|tensorflow-datasets
| 0
|
377,055
| 65,489,527
|
How to select the next date in a Pandas dataframe with date index and missing dates
|
<p>I have a dataframe indexed on date with some dates missing (that's ok, they are non-trading data and this is stock data).</p>
<p>How do I access the next row, when I know the previous date, e.g.</p>
<pre><code>date Open
01-01-2021 501
02-01-2021 508
04-01-2021 511
05-01-2021 518
</code></pre>
<p>I would like a function that, when I input '02-01-2021', it outputs the values for 04-01-2021 (without knowing how many days in between might be missing. I am assuming there may be some iterator, or index number I can access?</p>
|
<p>If the rows are sorted by <em>date</em>, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.searchsorted.html" rel="nofollow noreferrer">searchsorted</a>:</p>
<pre><code>idx = df['date'].searchsorted(pd.to_datetime('02-01-2021'), side='right')
print(df.loc[idx, 'date'])
</code></pre>
<p><strong>Output</strong></p>
<pre><code>2021-04-01 00:00:00
</code></pre>
<p>The time complexity of searchsorted is <em>O(logN)</em>. Otherwise, use:</p>
<pre><code>idx = df['date'].gt(pd.to_datetime('02-01-2021')).idxmax()
print(df.loc[idx, 'date'])
</code></pre>
<p><strong>Output</strong></p>
<pre><code>2021-04-01 00:00:00
</code></pre>
<p>A third alternative is to use <a href="https://docs.python.org/3/library/functions.html#next" rel="nofollow noreferrer">next</a>:</p>
<pre><code>date = pd.to_datetime('02-01-2021')
idx = next(i for i, x in zip(df.index, df['date']) if x > date)
print(df.loc[idx, 'date'])
</code></pre>
<p>Although it needs benchmarking, the last alternative could be faster for unordered data according to this <a href="https://stackoverflow.com/a/48649272/4001592">answer</a>.</p>
|
python|pandas
| 3
|
377,056
| 65,751,411
|
Plotting scatter plot of pandas dataframe with both categorical and numerical data
|
<p>I am trying to plot a scatter plot of the following type of pandas dataframe:</p>
<pre><code>df = pd.DataFrame([['RH1', 1, 3], ['RH2', 0, 3], ['RH3', 2, 0], ['RH4', 1, 2]], columns=['name', 'A', 'B'])
</code></pre>
<p>The final plot should have the "name" column as the y-axis and "A" and "B" as the x-axis, with the different numerical values shown in different colours, something like this:
<a href="https://i.stack.imgur.com/jBazO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBazO.png" alt="enter image description here" /></a></p>
<p>I tried to plot it by looping over each row of the dataframe, but I got stuck and couldn't make it work; the main problem I ran into was sizing both axes. It would be really great if anyone could help me. Thank you in advance.</p>
|
<p>You can <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer">melt</a> your dataframe and use the values as the column for color:</p>
<pre><code>from matplotlib import pyplot as plt
import pandas as pd
df = pd.DataFrame([['RH1', 1, 3], ['RH2', 0, 3], ['RH3', 2, 0], ['RH4', 1, 2]], columns=['name', 'A', 'B'])
df.melt(["name"]).plot(x="variable", y= "name", kind="scatter", c="value", cmap="plasma")
plt.show()
</code></pre>
<p>Sample output:
<a href="https://i.stack.imgur.com/aWTOI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aWTOI.png" alt="enter image description here" /></a></p>
<p>If you have a limited number of values, you can change the colormap to a <a href="https://stackoverflow.com/a/14779462/8881141">discrete colormap</a> and label each color with its value. Alternatively, use seaborn's stripplot:</p>
<pre><code>from matplotlib import pyplot as plt
import pandas as pd
import seaborn as sns
df = pd.DataFrame([['RH1', 1, 3], ['RH2', 0, 3], ['RH3', 2, 0], ['RH4', 1, 2]], columns=['name', 'A', 'B'])
sns.stripplot(data=df.melt(["name"]), x="variable", y= "name", hue="value", jitter=False)
plt.show()
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/k11yk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k11yk.png" alt="enter image description here" /></a></p>
|
python|pandas|matplotlib|seaborn|scatter-plot
| 1
|
377,057
| 65,753,057
|
Creating new variable by aggregation in python 2
|
<p>I have data on births that looks like this:</p>
<pre><code>Date Country Sex
1.1.20 USA M
1.1.20 USA M
1.1.20 Italy F
1.1.20 England M
2.1.20 Italy F
2.1.20 Italy M
3.1.20 USA F
3.1.20 USA F
</code></pre>
<p>My purpose is to get a new dataframe in which each row is a date-country combination, with the number of total births, male births and female births. It's supposed to look like this:</p>
<pre><code>Date Country Births Males Females
1.1.20 USA 2 2 0
1.1.20 Italy 1 0 1
1.1.20 England 1 1 0
2.1.20 Italy 2 1 1
3.1.20 USA 2 0 2
</code></pre>
<p>I tried using this code:</p>
<pre><code>df.groupby(by=['Date', 'Country', 'Sex']).size()
</code></pre>
<p>but it only gave me a new column of total births, with different rows for each sex in every date+country combination.</p>
<p>any help will be appreciated.</p>
<p>Thanks,
Eran</p>
|
<p>You can <code>group</code> the dataframe on columns <code>Date</code> and <code>Country</code> then aggregate column <code>Sex</code> using <code>value_counts</code> followed by <code>unstack</code> to reshape, finally <code>assign</code> the <code>Births</code> columns by summing frequency along <code>axis=1</code>:</p>
<pre><code>out = df.groupby(['Date', 'Country'], sort=False)['Sex']\
.value_counts().unstack(fill_value=0)
out.assign(Births=out.sum(1)).reset_index()\
.rename(columns={'M': 'Male', 'F': 'Female'})
</code></pre>
<p>Or you can use a very similar approach with <a href="https://pandas.pydata.org/docs/reference/api/pandas.crosstab.html" rel="nofollow noreferrer"><code>.crosstab</code></a> instead of <code>groupby</code> + <code>value_counts</code>:</p>
<pre><code>out = pd.crosstab([df['Date'], df['Country']], df['Sex'], colnames=[None])
out.assign(Births=out.sum(1)).reset_index()\
.rename(columns={'M': 'Male', 'F': 'Female'})
</code></pre>
<hr />
<pre><code> Date Country Female Male Births
0 1.1.20 USA 0 2 2
1 1.1.20 Italy 1 0 1
2 1.1.20 England 0 1 1
3 2.1.20 Italy 1 1 2
4 3.1.20 USA 2 0 2
</code></pre>
|
python|pandas|group-by|aggregate
| 0
|
377,058
| 65,623,468
|
Unable to open file libtensorflow_io.so caused by undefined symbol
|
<p>I have a tensorflow 2.2 conda environment setup with python 3.8.2 on Ubuntu.</p>
<p>I ran <code>pip install tensorflow-io==0.14.0</code>.</p>
<p>When I try to</p>
<pre><code>import tensorflow-io as tfio
</code></pre>
<p>I get the error:</p>
<pre><code>File "/home/somedir/miniconda3/envs/env_name/lib/python3.8/site-packages/tensorflow_io/core/python/ops/__init__.py", line 65, in _load_library
raise NotImplementedError(
NotImplementedError: unable to open file: libtensorflow_io.so, from paths: ['/home/somedir/miniconda3/envs/env_name/lib/python3.8/site-packages/tensorflow_io/core/python/ops/libtensorflow_io.so']
caused by: ['/home/somedir/miniconda3/envs/env_name/lib/python3.8/site-packages/tensorflow_io/core/python/ops/libtensorflow_io.so undefined symbol:
_ZN10tensorflow8OpKernel11TraceStringEPNS_15OpKernelContextEb']
</code></pre>
<p>What's the issue and how can I fix it?</p>
|
<p>As @Smedegaard mentioned, tensorflow_io is not on conda forge. The <a href="https://github.com/tensorflow/io/issues/1100" rel="noreferrer">answer of vlasenkoalexey on Github issues</a> to tackle this:</p>
<blockquote>
<p>Obvious workaround is to uninstall tensorflow and tensorflow-io and install them from pip: <br />
pip uninstall tensorflow <br />
pip uninstall tensorflow-io <br />
pip install tensorflow-gpu <br />
pip install --no-deps tensorflow-io</p>
</blockquote>
|
python|tensorflow|pip|conda|undefined-symbol
| 5
|
377,059
| 65,817,913
|
first/count applied to groupby returns empty dataframe
|
<pre><code>import pandas as pd
df = pd.DataFrame( {'A': [1,1,2,3,4,5,5,6,7,7,7,8]} )
dummy = df["A"]
print(dummy)
0 1
1 1
2 2
3 3
4 4
5 5
6 5
7 6
8 7
9 7
10 7
11 8
Name: A, dtype: int64
res = df.groupby(dummy)
print(res.first())
Empty DataFrame
Columns: []
Index: [1, 2, 3, 4, 5, 6, 7, 8]
</code></pre>
<p>Why does the last print result in an empty dataframe? I expect each group to be a slice of the original df, where each slice would contain as many rows as there are duplicates of a given value in column "A". What am I missing?</p>
|
<p>My guess is that, by default, <code>A</code> is set as the index before the groupby operator (e.g. <code>first</code>) is applied. Therefore, <code>df</code> is essentially empty (no columns are left) before the actual <code>first</code> operator runs. If you had another column <code>B</code>:</p>
<pre><code>df = pd.DataFrame( {'A': [1,1,2,3,4,5,5,6,7,7,7,8], 'B':range(12)} )
</code></pre>
<p>then you would see <code>A</code> as the index and the first values for <code>B</code> in each group with <code>df.groupby(dummy).first()</code>:</p>
<pre><code> B
A
1 0
2 2
3 3
4 4
5 5
6 7
7 8
8 11
</code></pre>
<p>On another note, if you pass <code>as_index=False</code>, <code>groupby</code> will not set <code>A</code> as the index and you get the non-empty data:</p>
<pre><code>df.groupby(dummy, as_index=False).first()
</code></pre>
<p>gives:</p>
<pre><code> A
0 1
1 2
2 3
3 4
4 5
5 6
6 7
7 8
</code></pre>
<p>Or, you can groupby on a <strong>copy</strong> of the column:</p>
<pre><code>df.groupby(dummy.copy()).first()
</code></pre>
<p>and you get:</p>
<pre><code> A
A
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
</code></pre>
|
python|python-3.x|pandas
| 2
|
377,060
| 65,538,234
|
Write JSON file with X and Y axis
|
<p>I am working on a requirement to write my JSON output as <strong>[{"x": "MaxTemp", "y": "Temp3pm"}]</strong>, but my current output looks like [MaxTemp, Temp3pm]. The logic here is that, as per the screenshot, the first word is the x-axis and the second word after the comma (,) is the y-axis. Below is my code, and I have attached a screenshot of the input data.</p>
<pre><code>x_y_data = list(selected_ri['index'])
x_y_data
ini_string = {'Imp_features_selected_x_y':x_y_data}
# printing initial json
ini_string = json.dumps(ini_string)
# converting string to json
final_dictionary = json.loads(ini_string)
</code></pre>
<p><a href="https://i.stack.imgur.com/i09wt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/i09wt.png" alt="enter image description here" /></a></p>
|
<p>You could use <code>str.split</code> to split the text on ',' and expand it into two columns, for example:</p>
<pre><code>df = df['index'].str.split(',', expand=True)
# then rename column name to x and y
df.columns = ['x', 'y']
</code></pre>
<p>Then you can convert it into a dict and finally dump it as JSON:</p>
<pre><code>data = df.to_dict('records')
ini_string = json.dumps(data)
</code></pre>
|
python|json|pandas|dataframe
| 0
|
377,061
| 65,597,718
|
If column X contains String then find position of substring in column Y - PYTHON
|
<p>I'm trying to find the starting position of a substring in a URL contained in column ['url'] whenever column ['Existe'] contains "F" or "D". I'm new to Python, trying to replicate an Excel workflow, and after an hour of trying methods with lambda, numpy.where and numpy.select, and searching the web, I had to ask for help.</p>
<p>I've tried applying the following code, but it only tells me that the value exists; it doesn't actually give me the position in the string. What I currently have is:</p>
<pre><code>df['Start']= ["/t/" in x[0] and "F" in x[1] for x in zip(df['url'],df['Existe'])]
</code></pre>
<p>Basically, the result it gives me is the following:</p>
<pre><code> order id date time URL typedCount transition Existe Start
0 0 14438 1/3/2021 14:49:37 messenger.com/t/xxxxx 0 link F True
1 1 14437 1/3/2021 14:49:18 messenger.com/t/xxxxx 0 link F True
</code></pre>
<p>What I'm trying to do is to find the starting position of "/t/" in df['url'] if "F" exists in df['Existe'] and placing the result in a new column, df['Start']. I have to use this conditional because df['Existe'] contains both "F" and "D", and it has to look for "/t/" if it's "F", and "/@me/" if it's "D".</p>
<p>The result I'm looking for is:</p>
<pre><code> order id date time URL typedCount transition Existe Start
0 0 14438 1/3/2021 14:49:37 messenger.com/t/xxxxx 0 link F 14
1 1 14437 1/3/2021 14:49:18 messenger.com/t/xxxxx 0 link F 14
</code></pre>
<p>Does anyone know a way of doing this?</p>
<p>Thanks</p>
|
<h2>Avoid Looping Over Rows</h2>
<p>When manipulating data with pandas, <a href="https://stackoverflow.com/a/55557758/5075720">it is typically best to avoid looping over rows</a>. Working with logic that only operates on certain rows, it is better to begin by explicitly identifying those rows. The subset of rows where the value of column <code>Existe</code> is equal to <code>"F"</code> is:</p>
<pre><code>has_f = df["Existe"] == "F"
</code></pre>
<p>Now you can use <code>has_f</code> to select only the rows you care about in <code>df</code>.</p>
<p>When working in pandas, try to use built-in pandas (or numpy) functions as much as possible. While you might not notice the difference when working with small DataFrames, any raw Python code you write and apply with <code>df.apply()</code> will perform poorly compared to the optimized code included in the pandas and numpy packages. Fortunately, pandas has <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/text.html#text-string-methods" rel="nofollow noreferrer">vectorized string functions</a> that can help you here. To find the location of a substring in each row of a column of strings, try the following:</p>
<pre><code>t_locations = df["URL"].str.find("/t/")
</code></pre>
<p>This produces a <code>Series</code> of integer locations of the first occurrence of the substring <code>"/t/"</code> in the column <code>URL</code>. You can do the same for <code>"/@me/"</code>.</p>
<p>Combining these two features of pandas requires using the <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html" rel="nofollow noreferrer"><code>df.loc</code> indexer</a> to select the rows and columns you care about and only applying the <code>str.find()</code> function to those values:</p>
<pre><code>df["Start"] = -1 # some default value
has_f = df["Existe"] == "F"
df.loc[has_f, "Start"] = df.loc[has_f, "URL"].str.find("/t/")
# The "~" here returns the inverse of the Boolean Series
df.loc[~has_f, "Start"] = df.loc[~has_f, "URL"].str.find("/@me/")
</code></pre>
|
python|pandas
| 1
|
377,062
| 65,718,776
|
Unpacking many columns of lists using apply get ValueError: If using all scalar values, you must pass an index
|
<p>I want to unpack multiple columns of lists into many more columns. Basically <a href="https://stackoverflow.com/questions/35491274/pandas-split-column-of-lists-into-multiple-columns">this</a> but <strong>for multiple columns</strong> of lists rather than just one, and avoiding for loops.</p>
<p>As an example I have a <code>pandas.DataFrame</code>,</p>
<pre><code>import pandas as pd
tst = pd.DataFrame({'A': [[1, 2]]* 5, 'B': [[3, 4]]* 5, 'C': [[5, 6]] * 5})
</code></pre>
<p>I can easily unpack one of the columns e.g. <code>A</code> into multiple columns,</p>
<pre><code>pd.DataFrame(tst['A'].to_list(),
columns=['1' + tst['A'].name, '2' + tst['A'].name],
index=list(range(tst['A'].shape[0]))
)
</code></pre>
<p>However when I tried expanding this to multiple columns using <code>.apply</code> to avoid a for loop,</p>
<pre><code>tst.apply(
lambda x: pd.DataFrame(x.to_list(),
columns=['1' + x.name, '2' + x.name],
index=list(range(x.shape[0]))
)
)
</code></pre>
<p>I get the below error, however I am supplying an <code>index</code>...</p>
<pre><code>ValueError: If using all scalar values, you must pass an index
</code></pre>
<p>Is there a way to fix this so that I get an output as per below? (column order doesn't matter)</p>
<pre><code> 1C 2C 1B 2B 1A 2A
0 5 6 3 4 1 2
1 5 6 3 4 1 2
2 5 6 3 4 1 2
3 5 6 3 4 1 2
4 5 6 3 4 1 2
</code></pre>
<p><code>pd.__version__ == '1.0.5'</code></p>
|
<p>If you don't mind changing <code>apply</code> to <code>explode</code>, then this is a one-line solution:</p>
<pre><code>res=pd.concat([pd.DataFrame(tst[[x]].explode(x).values.reshape(-1,2), columns=['1' + x, '2' + x]) for x in tst.columns], 1)
print(res)
</code></pre>
<p>Which returns:</p>
<pre><code> 1A 2A 1B 2B 1C 2C
0 1 2 3 4 5 6
1 1 2 3 4 5 6
2 1 2 3 4 5 6
3 1 2 3 4 5 6
4 1 2 3 4 5 6
</code></pre>
|
python|pandas|pandas-apply
| 1
|
377,063
| 65,703,930
|
Using tensordot with torch.sparse tensors
|
<p>Is it possible to use a similar method as "tensordot" with torch.sparse tensors?</p>
<p>I am trying to apply a 4 dimensional tensor onto a 2 dimensional tensor. This is possible using torch or numpy. However, I did not find the way to do it using torch.sparse without making the sparse tensor dense using ".to_dense()".</p>
<p>More precisely, here is what I want to do without using ".to_dense()":</p>
<pre><code>import torch
import torch.sparse
nb_x = 4
nb_y = 3
coordinates = torch.LongTensor([[0,1,2],[0,1,2],[0,1,2],[0,1,2]])
values = torch.FloatTensor([1,2,3])
tensor4D = torch.sparse.FloatTensor(coordinates,values,torch.Size([nb_x,nb_y,nb_x,nb_y]))
inp = torch.rand((nb_x,nb_y))
#what I want to do
out = torch.tensordot(tensor4D.to_dense(),inp,dims=([2,3],[0,1]))
print(inp)
print(out)
</code></pre>
<p>(here is the output: <a href="https://i.stack.imgur.com/xlaaA.png" rel="nofollow noreferrer">torch_code</a>)</p>
<p>Alternatively, here is a similar code using numpy:</p>
<pre><code>import numpy as np
tensor4D = np.zeros((4,3,4,3))
tensor4D[0,0,0,0] = 1
tensor4D[1,1,1,1] = 2
tensor4D[2,2,2,2] = 3
inp = np.random.rand(4,3)
out = np.tensordot(tensor4D,inp)
print(inp)
print(out)
</code></pre>
<p>(here is the output: <a href="https://i.stack.imgur.com/J782X.png" rel="nofollow noreferrer">numpy_code</a>)</p>
<p>Thanks for helping!</p>
|
<p>Your specific <code>tensordot</code> can be cast to a simple matrix multiplication by "squeezing" the first two and last two dimensions of <code>tensor4D</code>.</p>
<p>In short, what you want to do is</p>
<pre class="lang-py prettyprint-override"><code>raw = tensor4D.view(nb_x*nb_y, nb_x*nb_y) @ inp.flatten()
out = raw.view(nb_x, nb_y)
</code></pre>
<p>However, since <code>view</code> and <code>reshape</code> are not implemented for sparse tensors, you'll have to do it manually:</p>
<pre class="lang-py prettyprint-override"><code>sz = tensor4D.shape
coeff = torch.tensor([[1, sz[1], 0, 0], [0, 0, 1, sz[3]]])
reshaped = torch.sparse.FloatTensor(coeff @ idx, tensor4D._values(), torch.Size([nb_x*nb_y, nb_x*nb_y]))
# once we reshaped tensord4D it's all downhill from here
raw = torch.sparse.mm(reshaped, inp.flatten()[:, None])
out = raw.reshape(nb_x, nb_y)
print(out)
</code></pre>
<p>And the output is</p>
<blockquote>
<pre><code>tensor([[0.4180, 0.0000, 0.0000],
[0.0000, 0.6025, 0.0000],
[0.0000, 0.0000, 0.5897],
[0.0000, 0.0000, 0.0000]])
</code></pre>
</blockquote>
|
pytorch|sparse-matrix|tensor|torch|tensordot
| 1
|
377,064
| 65,745,053
|
Tensorflow softmax does not ignore masking value
|
<p>I am reviving this github <a href="https://github.com/tensorflow/tensorflow/issues/27010" rel="nofollow noreferrer">issue</a> because I believe it is valid and needs to be explained. tf.keras has a masking layer whose docs read:</p>
<blockquote>
<p>For each timestep in the input tensor (dimension #1 in the tensor), if
all values in the input tensor at that timestep are equal to
mask_value, then the timestep will be masked (skipped) in all
downstream layers (as long as they support masking).</p>
<p>If any downstream layer does not support masking yet receives such an
input mask, an exception will be raised.</p>
</blockquote>
<pre><code>import numpy as np
import tensorflow as tf

# create padded zeros and change two valid entries.
inputs = np.zeros([1,5])
inputs[0,1] = 0.5
inputs[0,2] = 0.1
inputs = tf.Variable(inputs)
masked_inputs = tf.keras.layers.Masking(mask_value=0.0)(inputs)
with_masking = tf.keras.layers.Softmax()(masked_inputs)
without_masking = tf.keras.layers.Softmax()(inputs)
</code></pre>
<p>The two results are virtually identical</p>
<pre><code>with_masking
<tf.Tensor: shape=(1, 5), dtype=float32, numpy=
array([[0.1737954 , 0.28654018, 0.19207363, 0.1737954 , 0.1737954 ]],
dtype=float32)>
without_masking
<tf.Tensor: shape=(1, 5), dtype=float64, numpy=array([[0.1737954 , 0.28654017, 0.19207362, 0.1737954 , 0.1737954 ]])>
</code></pre>
<h1>Expected behavior</h1>
<p>I expected it to just take the softmax of the valid entries, similar to</p>
<pre><code>#Assign one large value
inputs = np.zeros([1,2])
inputs[0,0] = 0.5
inputs[0,1] = 0.1
inputs = tf.Variable(inputs)
without_masking = tf.keras.layers.Softmax()(inputs)
without_masking
<tf.Tensor: shape=(1, 2), dtype=float64, numpy=array([[0.59868766, 0.40131234]])>
</code></pre>
<p>padded at the correct positions</p>
<pre><code>with_masking
<tf.Tensor: shape=(1, 5), dtype=float32, numpy=
array([[0 , 0.59868766, 0.40131234, 0, 0 ]],
dtype=float32)>
</code></pre>
<p>To ignore 0's in a softmax function, could we swap in massively negative numbers?</p>
<p>Related: <a href="https://stackoverflow.com/questions/39091432/tensorflow-softmax-ignore-negative-labels-just-like-caffe">tensorflow - softmax ignore negative labels (just like caffe)</a></p>
<pre><code>from tensorflow import __version__
__version__
'2.3.1'
</code></pre>
|
<p>I think this is already explained well in the <a href="https://github.com/tensorflow/tensorflow/issues/27010" rel="noreferrer">Github issue</a> you have linked. Underlying problem is that irrespective of whether an array is masked or not, <code>softmax()</code> still operates on <code>0.0</code> values and returns a <code>non-zero</code> value as mathematically expected (<a href="https://en.wikipedia.org/wiki/Softmax_function" rel="noreferrer">link</a>).</p>
<p>The only way to get a zero output from a <code>softmax()</code> is to pass a <strong>very small float value</strong>. If you set the masked values to the minimum possible machine limit for <code>float64</code>, <code>Softmax()</code> of this value will be zero.</p>
<p>To get machine limit on float64 you need <code>tf.float64.min</code> which is equal to <code>-1.7976931348623157e+308</code>. More info about machine limits on this <a href="https://stackoverflow.com/questions/65657086/how-can-i-express-this-custom-loss-function-in-tensorflow/65658222#65658222">post</a>.</p>
<p>Here is an implementation for your reference: first <code>tf.boolean_mask</code> alone (which drops positions), then the correct method of using <code>tf.where</code> to create the mask and pass it to <code>softmax()</code> -</p>
<pre><code>import numpy as np
import tensorflow as tf

inputs = np.zeros([1,5])
inputs[0,1] = 0.5
inputs[0,2] = 0.1
inputs = tf.Variable(inputs)
#Returns only the elements that are not masked (2,)
with_boolmask = tf.boolean_mask(inputs, inputs!=0)
with_boolmask = tf.keras.layers.Softmax()(with_boolmask)
#Correct way to do it!
masked_inp = tf.where(inputs!=0, inputs, tf.float64.min) #<----
with_where = tf.keras.layers.Softmax()(masked_inp)
print('BOOLEAN MASK (NOT EXPECTED)')
print(with_boolmask)
print('')
print('MASKED INPUT - ')
print(masked_inp)
print('')
print('SOFTMAX OUTPUT')
print(with_where)
</code></pre>
<pre><code>BOOLEAN MASK (NOT EXPECTED)
tf.Tensor([0.59868765 0.40131232], shape=(2,), dtype=float32)
MASKED INPUT -
tf.Tensor(
[[-1.79769313e+308 5.00000000e-001 1.00000000e-001 -1.79769313e+308
-1.79769313e+308]], shape=(1, 5), dtype=float64)
SOFTMAX OUTPUT
tf.Tensor([[0. 0.59868765 0.40131232 0. 0. ]], shape=(1, 5), dtype=float32)
</code></pre>
|
tensorflow|keras|deep-learning
| 6
|
377,065
| 65,503,077
|
numpy - tuple is not "(...)" in numpy?
|
<p>My understanding of a tuple is that it can be enclosed in parentheses.</p>
<ul>
<li><a href="https://docs.python.org/3/library/stdtypes.html#tuple" rel="nofollow noreferrer">class tuple([iterable])</a></li>
</ul>
<blockquote>
<p>Tuples may be constructed in a number of ways:</p>
<p>Using a pair of parentheses to denote the empty tuple: ()<br>
Using a trailing comma for a singleton tuple: a, or (a,)<br>
Separating items with commas: a, b, c or (a, b, c)<br>
Using the tuple() built-in: tuple() or tuple(iterable)</p>
</blockquote>
<p><a href="https://numpy.org/doc/stable/reference/arrays.indexing.html#basic-slicing-and-indexing" rel="nofollow noreferrer">Basic Slicing and Indexing</a> says slice indexing is a tuple of slice objects and int.</p>
<blockquote>
<p>Basic slicing occurs when obj is a slice object (constructed by
start:stop:step notation inside of brackets), an integer, or <strong>a tuple
of slice objects and integers</strong>.</p>
</blockquote>
<p>However, <code>(...)</code> cannot be used and causes an error. Is the error from numpy or Python?</p>
<p>I suppose that without parentheses, <code>0:1, 2:3, 1</code> is still a tuple, so why can't I use parentheses if the documentation specifies <code>a tuple of slice objects and integers</code>?</p>
<p>It is not so important, but after struggling with numpy indexing this makes things even more confusing, so a clarification would help.</p>
<pre><code>Z = np.arange(36).reshape(3, 3, 4)
print("Z is \n{}\n".format(Z))
a = Z[
(0:1, 2:3, 1)
]
---
File "<ipython-input-53-26b1604433cd>", line 5
(0:1, 2:3, 1)
^
SyntaxError: invalid syntax
</code></pre>
<p>This works.</p>
<pre><code>Z = np.arange(36).reshape(3, 3, 4)
print("Z is \n{}\n".format(Z))
a = Z[
0:1, 2:3, 1
]
print(a)
print(a.base is not None)
</code></pre>
<hr />
<p>As per the comment by <a href="https://stackoverflow.com/users/901925/hpaulj">hpaulj</a>, numpy <code>s_</code> internally takes "a list of slices and integers" and returns a tuple of slice objects.</p>
<pre><code>from numpy import s_
print(s_[0:1, 2:3, 1])
Z = np.arange(36).reshape(3, 3, 4)
print("Z is \n{}\n".format(Z))
print(Z[s_[0:1, 2:3, 1]])
---
(slice(0, 1, None), slice(2, 3, None), 1)
Z is
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]
[[24 25 26 27]
[28 29 30 31]
[32 33 34 35]]]
[[9]]
</code></pre>
|
<p>Slice notation like <code>1:2</code> is syntax, it does not create an object, so you cannot use it inside a list or tuple or anything; slice <em>objects</em>, on the other hand, actually refer to the thing returned by <code>slice()</code> and behave the same, and that's what Numpy is referencing with "tuple of slice objects and integers". The valid syntax to do what you were expecting would be <code>Z[(slice(0, 1), slice(2, 3), 1)]</code>. This also allows you to save slices to a variable and reuse them.</p>
<p>Here's a simple code snippet to demonstrate:</p>
<pre class="lang-py prettyprint-override"><code>>>> 1:2
File "<stdin>", line 1
SyntaxError: illegal target for annotation
>>> slice(1, 2)
slice(1, 2, None)
>>> [1, 2, 3][1:2]
[2]
>>> [1, 2, 3][slice(1, 2)]
[2]
</code></pre>
|
numpy|tuples|slice
| 4
|
377,066
| 65,649,557
|
How to transform and reshape multiple numpy arrays
|
<p>I have several lists of values; each list is named using 2 numbers, e.g. values[1][1], values[1][2] or values[2][1]... up to values[99][99]. I need to transform each list into a 1-D numpy array and then reshape each array into a 2-D array with dimensions (20, 10).
I was able to do it for one list as follows, but I need to do it for all the lists (I have 99 x 99 = 9801 lists):</p>
<pre><code>array_1_1 = np.array([values[1][1]])
array_1_1.shape
</code></pre>
<p>out : (1, 200)</p>
<pre><code>new_array_1_1 = np.reshape(array_1_1 ,(20,10))
new_array_1_1.shape
</code></pre>
<p>out : (20, 10)
Thanks</p>
|
<p>The code below should do the job, storing all the reshaped arrays in a list called <code>store</code>:</p>
<pre><code>store = []
for i in range(1,100):
for j in range(1,100):
store.append(np.reshape(np.array([values[i][j]]),(20,10)))
</code></pre>
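<p>If a single array is more convenient than a list, the reshaped arrays can then be stacked (assuming every list really has 200 elements):</p>
<pre><code>stacked = np.stack(store)  # shape (9801, 20, 10)
</code></pre>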
|
python|arrays|numpy|jupyter|reshape
| 1
|
377,067
| 65,488,924
|
Pandas: Impute a given number of missing values before/after a series of available values
|
<p>Let's say I have a time series where I usually have data available for a certain continous span of years, but missing values before and after that span, like this:</p>
<pre><code>df = pd.DataFrame({'year': ["2000","2001","2002", "2003","2004", "2005","2006", "2007"], 'cakes eaten': [np.nan, np.nan, np.nan, 3, 4, 5, np.nan, np.nan]})
print(df)
year cakes eaten
0 2000 NaN
1 2001 NaN
2 2002 NaN
3 2003 3.0
4 2004 4.0
5 2005 5.0
6 2006 NaN
7 2007 NaN
</code></pre>
<p>Is there a way to fill (a given number of) missing values based on the trend seen in the available values?</p>
<p>Let's say I want to fill a maximum of 2 values in each direction, the result would have to look like this:</p>
<pre><code> year cakes eaten
0 2000 NaN
1 2001 1.0
2 2002 2.0
3 2003 3.0
4 2004 4.0
5 2005 5.0
6 2006 6.0
7 2007 7.0
</code></pre>
<p><strong>Also:</strong> is there a way to ensure that this imputation is only performed when enough values are available? Say, for example, I only want to fill a maximum of 2 values in each direction if at least 3 values are available (or in more general terms, fill n values only if n + m are available)?</p>
|
<p>Thanks to @olv1do for showing me that <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.interpolate.html" rel="nofollow noreferrer">interpolate()</a> does what I want.</p>
<p>Using interpolate together with <code>.first_valid_index()</code> and <code>.last_valid_index()</code> allows me to implement the desired behaviour:</p>
<pre><code>#impute n values in both directions if at least m values are available
def interpolate(data, n, m):
first_valid = data['cakes eaten'].first_valid_index()
last_valid = data['cakes eaten'].last_valid_index()
if(abs(first_valid - last_valid) + 1 >= m):
data['imputed'] = data['cakes eaten'].interpolate(method='spline',order = 1, limit_direction='both', limit = n)
return data
</code></pre>
<p>For the example from the question:</p>
<pre><code>df = pd.DataFrame({'year': ["2000","2001","2002", "2003","2004", "2005","2006", "2007"], 'cakes eaten': [np.nan, np.nan, np.nan, 3, 4, 5, np.nan, np.nan]})
interpolate(df, 2,3)
year cakes eaten imputed
0 2000 NaN NaN
1 2001 NaN 1.0
2 2002 NaN 2.0
3 2003 3.0 3.0
4 2004 4.0 4.0
5 2005 5.0 5.0
6 2006 NaN 6.0
7 2007 NaN 7.0
</code></pre>
<p>It does nothing if there are fewer than m values available:</p>
<pre><code>df = pd.DataFrame({'year': ["2000","2001","2002", "2003","2004", "2005","2006", "2007"], 'cakes eaten': [np.nan, np.nan, np.nan, 3, 4, np.nan, np.nan, np.nan]})
interpolate(df, 2,3)
year cakes eaten
0 2000 NaN
1 2001 NaN
2 2002 NaN
3 2003 3.0
4 2004 4.0
5 2005 NaN
6 2006 NaN
7 2007 NaN
</code></pre>
<p>The <code>spline</code> method also works very well when the values are not as perfectly linear as in my example:</p>
<pre><code>df = pd.DataFrame({'year': ["2000","2001","2002", "2003","2004", "2005","2006", "2007"], 'cakes eaten': [np.nan, np.nan, 1, 4, 2, 3, np.nan, np.nan]})
interpolate(df, 1,4)
year cakes eaten imputed
0 2000 NaN NaN
1 2001 NaN 1.381040
2 2002 1.0 1.000000
3 2003 4.0 4.000000
4 2004 2.0 2.000000
5 2005 3.0 3.000000
6 2006 NaN 3.433167
7 2007 NaN NaN
</code></pre>
|
python|pandas|imputation
| 1
|
377,068
| 65,768,164
|
Computing row-wise maximum of off-diagonal entries in a matrix
|
<p>It's easier to motivate the question with an example: let's say I have a matrix A</p>
<pre><code>A = np.reshape (np.arange(9,dtype = np.float), (3,3))
</code></pre>
<pre><code>>>[[0. 1. 2.]
[3. 4. 5.]
[6. 7. 8.]]
</code></pre>
<p>The row-wise maximum is <strong>(2,5,8)</strong>, found for instance by <code>np.max(A, axis = 1)</code>.</p>
<p>However, I only need the maximum among the off-diagonal elements. The answer should be <strong>(2,5,7)</strong>.</p>
<p>How can I compute that <em>in a concise manner</em>?</p>
<p>An answer I thought about is</p>
<pre><code>B = np.copy (A) # copy A
B [np.diag_indices_from (B)] = -np.inf # fill up diagonals with - infty
max_off_diag_A = np.max (B,axis = 1) # compute maximum
</code></pre>
<p>but it seems a long way around (and a huge waste of memory, because the matrix I have is not 3x3 but has more than 10 thousand rows and columns).</p>
<p>Thanks in advance for any cleaner/more concise code suggestion.</p>
|
<p>If you don't need the original data after getting the answer, this should suffice; note that <code>np.fill_diagonal</code> modifies the array in place. Otherwise I'm not sure what's wrong with your approach.</p>
<pre><code>import numpy as np

A = np.reshape(np.arange(9, dtype=np.float), (3, 3))
np.fill_diagonal(A, -np.inf)  # in place: the diagonal can no longer be the max
np.max(A, axis=1)
</code></pre>
|
python|numpy|max
| 2
|
377,069
| 65,520,260
|
Convert JSON to Pandas Dataframe in Python
|
<p>There is data in json format as below:</p>
<pre><code>dict = {"a":1,"b":2,"c":[{dic 1},{dic2},...so on]}
</code></pre>
<p>where dic 1 is defined as below; the list contains many dictionaries like it:</p>
<pre><code>dic 1 = {"d":4,"e":{"f":6,"g":7},"h":{"i":9,"j":[10,11,12]},"m":13}
</code></pre>
<p>so, the whole json file looks like the below:</p>
<pre><code>dict = {"a":1,"b":2,"c":[{"d":4,"e":{"f":6,"g":7},"h":{"i":9,"j":[10,11,12]},"m":13},{dic2},...so on]}
</code></pre>
<p>Now I want to store this data as a pandas DataFrame like the table below; please give your suggestions.</p>
<p><strong>Expected Output:</strong>
<a href="https://i.stack.imgur.com/hmcV3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hmcV3.png" alt="dataframe output" /></a></p>
|
<p>The structure of your json is complex; make it simpler!</p>
<p>Your code won't run as written: it raises unhashable type 'dict'. To solve this, simply unpack any nested dict you're using into the main 'dict' (that's <code>**dic1</code>).</p>
<p>Even with that, you end up with 2 rows and 3 columns. Why? The data in key 'c' is a list of dicts, and pandas interprets list items as data for a column. Organize the json file.</p>
<p>Lastly, avoid using 'dict' to name a variable, since it shadows the built-in type.</p>
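<p>A minimal sketch of the unpacking idea above for one inner dict (the dotted column names simply follow from the nested keys; whether this matches the expected table depends on how 'j' should be handled):</p>
<pre><code>import pandas as pd

dic1 = {"d": 4, "e": {"f": 6, "g": 7}, "h": {"i": 9, "j": [10, 11, 12]}, "m": 13}
data = {"a": 1, "b": 2, **dic1}   # unpack dic1 into the top-level dict

df = pd.json_normalize(data)      # nested dicts become columns like 'e.f', 'h.i'
print(df)
</code></pre>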
|
python|json|python-3.x|pandas|dataframe
| 3
|
377,070
| 65,609,559
|
TensorFlow: No module named Pandas (I already have Pandas)
|
<pre class="lang-py prettyprint-override"><code>WORKSPACE_PATH = 'Tensorflow/workspace'
SCRIPTS_PATH = 'Tensorflow/scripts'
APIMODEL_PATH = 'Tensorflow/models'
ANNOTATION_PATH = WORKSPACE_PATH+'/annotations'
IMAGE_PATH = WORKSPACE_PATH+'/images'
MODEL_PATH = WORKSPACE_PATH+'/models'
PRETRAINED_MODEL_PATH = WORKSPACE_PATH+'/pre-trained-models'
CONFIG_PATH = MODEL_PATH+'/my_ssd_mobnet/pipeline.config'
CHECKPOINT_PATH = MODEL_PATH+'/my_ssd_mobnet/'
import numpy as np
import pandas as pd
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x {IMAGE_PATH + '/train'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/train.record'}
!python {SCRIPTS_PATH + '/generate_tfrecord.py'} -x{IMAGE_PATH + '/test'} -l {ANNOTATION_PATH + '/label_map.pbtxt'} -o {ANNOTATION_PATH + '/test.record'}
</code></pre>
<p>running which I get the error:</p>
<pre><code>Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 21, in <module>
import pandas as pd
ImportError: No module named pandas
Traceback (most recent call last):
File "Tensorflow/scripts/generate_tfrecord.py", line 21, in <module>
import pandas as pd
ImportError: No module named pandas
</code></pre>
<p>I already have installed Pandas and Tensorflow before through pip.</p>
<p>Thank you for helping me out!</p>
<p>OS: MacOS Catalina 10.13.5</p>
|
<p>Try installing pandas using conda, because conda somehow installs the package globally (into the base python env).</p>
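<p>It may also help to check which interpreter the <code>!python</code> lines actually launch; a sketch of that diagnosis in the notebook would be:</p>
<pre class="lang-py prettyprint-override"><code>import sys
print(sys.executable)  # the interpreter running the notebook kernel

# install pandas for that exact interpreter, whichever it is
!{sys.executable} -m pip install pandas
</code></pre>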
|
python|pandas|module|pip|path
| 0
|
377,071
| 65,526,429
|
Python: clear a specific range of data from a column in a dataframe
|
<p>I have the problem that the dataframe from my import (stock prices from Yahoo) is not correct for a specific time period. I want to clear the data from 2010-01-01 until 2017-10-17 for "VAR1.DE" and replace it with empty values or NaN. I have found the pandas function "drop", but that would delete the whole column.</p>
<p>How can I solve the problem?</p>
<p>Here is my code:</p>
<pre><code>from pandas_datareader import data as web
import pandas as pd
import numpy as np
from datetime import datetime
assets = ['1211.HK','BABA','BYND','CAP.DE','JKS','PLUG','QCOM','VAR1.DE']
weights = np.array([0.125,0.125,0.125,0.125,0.125,0.125,0.125,0.125])
stockStartDate='2010-01-01'
today = datetime.today().strftime('%Y-%m-%d')
df = pd.DataFrame()
for stock in assets:
df[stock]=web.DataReader(stock, data_source='yahoo', start=stockStartDate,end=today)['Adj Close']
</code></pre>
|
<p>instead of having a for loop, you can simply do:</p>
<pre><code>df = web.DataReader(name=assets, data_source='yahoo', start=stockStartDate, end=today)['Adj Close']
</code></pre>
<p>since the returned dataframe would be indexed by datetime (i.e. a <code>pd.DatetimeIndex</code>), you can simply do:</p>
<pre><code>df.loc[:'2017-10-17', 'VAR1.DE'] = np.nan
</code></pre>
<p>reassigning the values of column 'VAR1.DE' on or before '2017-10-17' to NaN.</p>
|
python|pandas|numpy
| 0
|
377,072
| 65,588,408
|
Pandas reindex and assigning Columns to a new column
|
<p>I am creating a pandas dataframe and want to create a new column via an assign-and-reindex method. I pull data which may have, say, columns 'A', 'B', 'C', 'D', 'E', and I want to create a new column, say 'XX'. (Of course there are other columns in the dataframe and it's a huge one; I just show this sample below.) The XX column is usually the OR logic, or max, over the columns A->E.</p>
<p>Like</p>
<p>INPUT:</p>
<pre><code> df
A B C D E
0 0 1 0 1
0 0 0 0 0
1 0 0 0 0
</code></pre>
<p>OUTPUT:</p>
<pre><code> df
A B C D E XX
0 0 1 0 1 1
0 0 0 0 0 1
1 0 0 0 0 1
</code></pre>
<p>So this is the way I am doing it:</p>
<pre><code> ICOLS = ["A", "B", "C", "D", "E"]
 df = (df.assign(XX=df.reindex(ICOLS, axis=1).dropna().max(axis=1)).dropna(axis=1, how='all'))
</code></pre>
<p>The script works fine, but only when I have all the columns from A to E. Often some column (say C or E) is missing from the database, but I still want the same logic, and XX should give similar output.</p>
<p>So if the database has only the A, B &amp; E columns, then:</p>
<p>INPUT:</p>
<pre><code> df
A B E
0 0 1
0 0 0
1 0 0
</code></pre>
<p>OUTPUT:</p>
<pre><code> df
A B E XX
0 0 1 1
0 0 0 1
1 0 0 1
</code></pre>
<p>I am not sure how to achieve that with the approach I am using and the list of input columns ICOLS. Any help in the direction I am trying to take will be appreciated. Thanks</p>
|
<p>Do it in one line.</p>
<p>Filter to the required columns by putting the ones you need in a list; columns missing from the frame are simply ignored by <code>filter</code>. Then take the max of each row into a new column, and take the max of that resulting column:</p>
<p>Data</p>
<pre><code> print(df)
A B C f D E
0 0 0 1 2 0 1
1 0 0 0 56 0 0
2 1 0 0 70 0 0
</code></pre>
<p>Solution:</p>
<pre><code>df['xx']=df.filter(items=['A', 'B','E','D']).max(1).max(0)
</code></pre>
<p>OR</p>
<pre><code>ICOLS = ["A", "B", "C", "D", "E"]
df['xx']=df.filter(items=ICOLS).max(1).max(0)
</code></pre>
<p>print(df)</p>
<pre><code> A B C f D E xx
0 0 0 1 2 0 1 1
1 0 0 0 56 0 0 1
2 1 0 0 70 0 0 1
</code></pre>
|
python|pandas|list|indexing
| 1
|
377,073
| 65,546,995
|
Visualize 3D numpy.meshgrid
|
<p>I have used two input arrays with an output array that I have interpolated with scipy's LinearNDInterpolator and ndimage filters. I was able to easily visualize the output using matplotlib's pcolormesh. I would like to extend this analysis to 3 input arrays using the same ndimage and interpolation functions, but am not sure how to visualize the data. My best guess at a solution would be to scatter the data in a way similar to <a href="https://stackoverflow.com/questions/14995610/how-to-make-a-4d-plot-with-matplotlib-using-arbitrary-data">How to make a 4d plot with matplotlib using arbitrary data</a>,
but more steps are needed since my output is a grid.</p>
<p>Here is the skeleton:</p>
<pre><code>import numpy as np
from scipy.interpolate import LinearNDInterpolator
dat_A = np.sin(np.arange(200))
dat_B = np.cos(np.arange(200))
dat_C = np.sinh(np.arange(200)/200)
output = dat_A + dat_B - 2*dat_C
A,B,C = np.arange(200),np.arange(200),np.linspace(0,2,200)
A_grid,B_grid,C_grid = np.meshgrid(A,B,C)
interp = LinearNDInterpolator(list(zip(dat_A,dat_B,dat_C)),output)
out_4d = interp(A_grid,B_grid,C_grid)
</code></pre>
<p>How do I visualize this 4D object? I was thinking of animating through a 3D plot.</p>
|
<p>I have found that an easy way to do this exact visualization task is to use animatplot, which has its own animated pcolormesh function.
<a href="https://pypi.org/project/animatplot/" rel="nofollow noreferrer">https://pypi.org/project/animatplot/</a></p>
|
python|numpy|matplotlib|image-processing|visualization
| 0
|
377,074
| 65,514,897
|
how to transform dataframe
|
<p>I have the following snippet from a dataframe:</p>
<pre><code> cough fever
8 0.0 0.0
9 -1.0 1.0
24 0.0 1.0
29 0.0 -1.0
30 1.0 1.0
</code></pre>
<p>I need to count the occurrences of each value in each column, like:</p>
<pre><code>df['cough'].value_counts()
Out[189]:
0.0 3
1.0 1
-1.0 1
Name: cough, dtype: int64
df['fever'].value_counts()
Out[189]:
0.0 1
1.0 3
-1.0 1
Name: fever, dtype: int64
</code></pre>
<p>However, I need this in a dataframe, which I was able to do as per:</p>
<pre><code>out=pd.DataFrame()
symptoms=['cough', 'fever']
for s in symptoms:
    frames=[out, df[s].value_counts().to_frame()]
out=pd.concat(frames)
</code></pre>
<p>But, the output has empty cells:</p>
<pre><code> cough fever
0.0 3 NaN
-1.0 1 NaN
1.0 1 NaN
0.0 NaN 1
1.0 NaN 1
-1.0 NaN 3
</code></pre>
<p>How do I get the df in the form below (that is eliminate empty cells and transpose, all in one fell swoop)?</p>
<pre><code> -1 0 1
cough 1 3 1
fever 1 1 3
</code></pre>
|
<p>Change the <code>concat</code></p>
<pre><code>out=pd.concat(frames ,axis=1)
</code></pre>
<p>Or simply do</p>
<pre><code>s = df.melt()
out = pd.crosstab(s.variable,s.value)
value -1.0 0.0 1.0
variable
cough 1 3 1
fever 1 1 3
</code></pre>
|
python|pandas|aggregate|transpose
| 2
|
377,075
| 65,629,083
|
Converting STR to INT on Dataframe doesn't work on the specific parts
|
<p>I know this is an easy question, but I checked so many sites on the internet and couldn't find the problem that I have.</p>
<p>I have a dataframe and one column of this dataframe is for brand. I wanted to give specific numbers for these brands to make brand aggregation easier.</p>
<pre><code>import pandas as pd
last = pd.read_pickle('pre_clustering.pkl')
random_number=9288
first=""
f=0
for i in last['brand']:
if(type(i)==str):
if(first == i):
last.at[f, 'brand']= random_number
print(last.loc[f, 'brand'])
f=f+1
elif(first !=i):
first=i
random_number= random_number +1
last.at[f, 'brand'] = random_number
print(last.loc[f, 'brand'])
f=f+1
else:
f=f+1
brand = last['brand']
</code></pre>
<p><a href="https://i.stack.imgur.com/TmwuB.png" rel="nofollow noreferrer">This is my code and output.</a>
I tried everything to convert them to integers, but they are still strings. I checked my if/else conditions with print() to be sure, and they are <a href="https://i.stack.imgur.com/T20Md.png" rel="nofollow noreferrer">working as you see</a></p>
<p>What is wrong with my code? Or what should I do to convert my strings to integers?</p>
|
<p>In your code, you use the running counter <code>f</code> as the row index into <code>last</code>, but <code>last</code> is sorted on <code>brand</code>, so the sequence of <code>f</code> values does not match the actual row labels. As a result, you put the random numbers in the wrong places and leave other rows untouched.</p>
<p>To correct the code, use <code>last.iterrows()</code> in the <code>for</code> loop as follows:</p>
<pre><code>for f, row in last.iterrows():
    i=row['brand']
</code></pre>
<p>where <code>f</code> will be the actual index of the row you are dealing with, so you no longer need <code>f=f+1</code>,</p>
<p>and <code>i</code> holds the <code>brand</code> of that row.</p>
<p>Finally, here is your code with the modifications, annotated with comments:</p>
<pre><code>import pandas as pd
last = pd.read_pickle('pre_clustering.pkl')
random_number=9288
first=""
# f=0 (No need)
for f, row in last.iterrows(): # for i in last['brand']: (Changed: f is the actual row index)
i=row['brand'] # (added)
if(type(i)==str):
if(first == i):
last.at[f, 'brand']= random_number
print(last.loc[f, 'brand'])
# f=f+1 (No need)
elif (first !=i):
first=i
random_number= random_number +1
last.at[f, 'brand'] = random_number
print(last.loc[f, 'brand'])
# f=f+1
#else:
# f=f+1
brand = last['brand']
</code></pre>
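<p>As a side note, separate from the fix above: pandas can assign these codes in one vectorized call. A minimal sketch, assuming every value in <code>brand</code> is a string (the <code>9289</code> offset is only there to mimic the loop's starting number):</p>
<pre><code>import pandas as pd

# pd.factorize gives each distinct brand an integer code
# in order of first appearance (0, 1, 2, ...).
codes, uniques = pd.factorize(last['brand'])
last['brand'] = codes + 9289  # hypothetical offset matching the original numbering
</code></pre>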
<p>Do your best :)</p>
|
python|pandas|dataframe|anaconda|spyder
| 1
|
377,076
| 65,762,748
|
Pandas Seaborn FacetGrid same x labels on every plot, though different y values
|
<p>I'm trying to build a FacetGrid of barplots showing yellow-card counts (y) against team names (x), split into one facet per football league (each facet should show its own league and its own team names). The data is counted correctly, but every facet displays the x labels of the first league.</p>
<p>Here's the code snippet i'm using to build the FacetGrid:</p>
<pre><code>df_alt2_teams = df_alt2.groupby(['league', 'squad'])[['cards_yellow', 'cards_red']].sum().reset_index()
df_alt2_teams = df_alt2_teams.sort_values(by=['cards_yellow', 'cards_red'], ascending=True)
g = sns.FacetGrid(df_alt2_teams, col='league', height=8, aspect=4)
g = g.map(sns.barplot, 'squad', 'cards_yellow', palette="flare", data=df_alt2_teams)
g.set(ylim=(0, 50))
g.set_xticklabels(rotation=90)
</code></pre>
<p>The data differs between facets, but the labels don't.</p>
<p>data example:</p>
<pre><code>index league squad cards_red cards_yellow
52 Ligue 1 Strasbourg 1.0 2.0
57 Premier League Brighton 1.0 3.0
</code></pre>
|
<p>If you do not need ordered bars, you can directly plot the dataframe and let seaborn do all the calculations:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
#test data generation
import numpy as np
n=30
np.random.seed(123)
df_alt2 = pd.DataFrame({"index": np.arange(n),
"league": "Ligue 1",
"squad": np.random.choice(list("ABCDXYZ"), n),
"cards_red": np.random.randint(0, 3, n),
"cards_yellow": np.random.randint(0, 5, n)})
df_alt2.loc[:, "league"][df_alt2.squad.isin(list("XYZ"))] = "Premier League"
g = sns.catplot(data=df_alt2, x="squad", y="cards_yellow", col="league",
kind="bar", estimator=sum, ci=None, sharex=False)
g.set(ylim=(0, 20))
g.set_xticklabels(rotation=90)
plt.tight_layout()
plt.show()
</code></pre>
<p>Sample output:
<a href="https://i.stack.imgur.com/Pib0A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Pib0A.png" alt="enter image description here" /></a></p>
<p>I don't know a way to convince seaborn to order the bars by value, so in that case
you might have to fall back on your precalculation:</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
#test data generation
import numpy as np
n=30
np.random.seed(123)
df_alt2 = pd.DataFrame({"index": np.arange(n),
"league": "Ligue 1",
"squad": np.random.choice(list("ABCDXYZ"), n),
"cards_red": np.random.randint(0, 3, n),
"cards_yellow": np.random.randint(0, 5, n)})
df_alt2.loc[:, "league"][df_alt2.squad.isin(list("XYZ"))] = "Premier League"
df_alt2_teams = df_alt2.groupby(['league', 'squad'])[['cards_yellow', 'cards_red']].sum().reset_index()
df_alt2_teams = df_alt2_teams.sort_values(by=['cards_yellow', 'cards_red'], ascending=True)
g = sns.catplot(data=df_alt2_teams, x="squad", y="cards_yellow", col="league", kind="bar", sharex=False)
g.set(ylim=(0, 20))
g.set_xticklabels(rotation=90)
plt.tight_layout()
plt.show()
</code></pre>
<p>Output:
<a href="https://i.stack.imgur.com/Qq7X4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Qq7X4.png" alt="enter image description here" /></a></p>
|
python|pandas|seaborn
| 1
|
377,077
| 65,907,350
|
How do we change the result of groupby into dataframe?
|
<pre><code>df = pd.DataFrame({
"A" : [1, 2, 1, 2, 1, 2, 2, 1],
"B" : [1, 1, 2, 2, 1, 1, 1, 2],
"C": [1, 1, 1, 1, 2, 2, 2, 2]})
df
</code></pre>
<p>This is my data.</p>
<p>and I used</p>
<pre><code>gbA = df.groupby("A")
</code></pre>
<p>How can we change the result of this back to a dataframe?</p>
<p>I read the other posts but I still do not get it...</p>
|
<p>If you want the original df back, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.apply.html" rel="nofollow noreferrer"><code>pandas.DataFrame.groupby.apply</code></a> with a dummy function:</p>
<pre><code>>>> gbA.apply(lambda x:x)
A B C
0 1 1 1
1 2 1 1
2 1 2 1
3 2 2 1
4 1 1 2
5 2 1 2
6 2 1 2
7 1 2 2
</code></pre>
<p>If you want the groups, use <a href="https://www.python.org/dev/peps/pep-0274/" rel="nofollow noreferrer"><code>dict comprehension</code></a>:</p>
<pre><code>>>> {k: v for k,v in gbA}
{1: A B C
0 1 1 1
2 1 2 1
4 1 1 2
7 1 2 2,
2: A B C
1 2 1 1
3 2 2 1
5 2 1 2
6 2 1 2}
</code></pre>
<p>If you want the grouped df, with <code>A</code> set as the index, use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.set_index.html?highlight=set_index#pandas.DataFrame.set_index" rel="nofollow noreferrer"><code>pandas.DataFrame.set_index</code></a> on <code>'A'</code> with <code>append=True</code> to keep the original indices intact. Then use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.swaplevel.html" rel="nofollow noreferrer"><code>pandas.DataFrame.swaplevel</code></a> to swap the multiindex levels, and finally <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_index.html?highlight=sort_index#pandas.DataFrame.sort_index" rel="nofollow noreferrer"><code>pandas.DataFrame.sort_index</code></a> along <code>level=0</code>:</p>
<pre><code>>>> df.set_index('A', append=True).swaplevel().sort_index(level=0)
B C
A
1 0 1 1
2 2 1
4 1 2
7 2 2
2 1 1 1
3 2 1
5 1 2
6 1 2
</code></pre>
|
python|pandas
| 2
|
377,078
| 65,642,697
|
pytorch runs slow when data are pre-transported to GPU
|
<p>I have a model written in PyTorch. Since my dataset is small, I can directly load all of the data onto the GPU. However, I found that the forward pass becomes slow if I do so. The following is a runnable example. Specifically, I have the model:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from time import time
import random
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
def knn(x, k):
inner = -2*torch.matmul(x.transpose(2, 1), x)
xx = torch.sum(x**2, dim=1, keepdim=True)
pairwise_distance = -xx - inner - xx.transpose(2, 1)
idx = pairwise_distance.topk(k=k, dim=-1)[1] # (batch_size, num_points, k)
return idx
def get_graph_feature(x, k=20, idx=None):
batch_size = x.size(0)
num_points = x.size(2)
x = x.view(batch_size, -1, num_points)
if idx is None:
idx = knn(x, k=k) # (batch_size, num_points, k)
idx_base = torch.arange(0, batch_size, device=x.device).view(-1, 1, 1)*num_points
idx = idx + idx_base
idx = idx.view(-1)
_, num_dims, _ = x.size()
x = x.transpose(2, 1).contiguous() # (batch_size, num_points, num_dims) -> (batch_size*num_points, num_dims) # batch_size * num_points * k + range(0, batch_size*num_points)
feature = x.view(batch_size*num_points, -1)[idx, :]
feature = feature.view(batch_size, num_points, k, num_dims)
x = x.view(batch_size, num_points, 1, num_dims).repeat(1, 1, k, 1)
feature = torch.cat((feature-x, x), dim=3).permute(0, 3, 1, 2).contiguous()
return feature
class DGCNN(nn.Module):
def __init__(self, k=25, output_channels=10):
super(DGCNN, self).__init__()
self.k = k
self.bn1 = nn.BatchNorm2d(64)
self.bn2 = nn.BatchNorm2d(64)
self.bn3 = nn.BatchNorm2d(128)
self.bn4 = nn.BatchNorm2d(256)
self.bn5 = nn.BatchNorm1d(1024)
self.conv1 = nn.Sequential(nn.Conv2d(6, 64, kernel_size=1, bias=False),
self.bn1,
nn.LeakyReLU(negative_slope=0.2))
self.conv2 = nn.Sequential(nn.Conv2d(64*2, 64, kernel_size=1, bias=False),
self.bn2,
nn.LeakyReLU(negative_slope=0.2))
self.conv3 = nn.Sequential(nn.Conv2d(64*2, 128, kernel_size=1, bias=False),
self.bn3,
nn.LeakyReLU(negative_slope=0.2))
self.conv4 = nn.Sequential(nn.Conv2d(128*2, 256, kernel_size=1, bias=False),
self.bn4,
nn.LeakyReLU(negative_slope=0.2))
self.conv5 = nn.Sequential(nn.Conv1d(512, 1024, kernel_size=1, bias=False),
self.bn5,
nn.LeakyReLU(negative_slope=0.2))
self.linear1 = nn.Linear(1024*2, 512, bias=False)
self.bn6 = nn.BatchNorm1d(512)
self.dp1 = nn.Dropout()
self.linear2 = nn.Linear(512, 256)
self.bn7 = nn.BatchNorm1d(256)
self.dp2 = nn.Dropout()
self.linear3 = nn.Linear(256, output_channels)
def forward(self, x):
x = x.transpose(2, 1)
batch_size = x.size(0)
x = get_graph_feature(x, k=self.k)
x = self.conv1(x)
x1 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x1, k=self.k)
x = self.conv2(x)
x2 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x2, k=self.k)
x = self.conv3(x)
x3 = x.max(dim=-1, keepdim=False)[0]
x = get_graph_feature(x3, k=self.k)
x = self.conv4(x)
x4 = x.max(dim=-1, keepdim=False)[0]
x = torch.cat((x1, x2, x3, x4), dim=1)
x = self.conv5(x)
x1 = F.adaptive_max_pool1d(x, 1).view(batch_size, -1)
x2 = F.adaptive_avg_pool1d(x, 1).view(batch_size, -1)
x = torch.cat((x1, x2), 1)
x = F.leaky_relu(self.bn6(self.linear1(x)), negative_slope=0.2)
x = self.dp1(x)
x = F.leaky_relu(self.bn7(self.linear2(x)), negative_slope=0.2)
x = self.dp2(x)
x = self.linear3(x)
return x
</code></pre>
<p>Here is what the dataloader and test function looks like:</p>
<pre class="lang-py prettyprint-override"><code>class my_loader(Dataset):
def __init__(self, device):
self.data = torch.rand(256, 2048, 3).to(device).float()
self.labels = torch.rand(256).to(device).long()
def __getitem__(self, ind):
return self.data[ind], self.labels[ind]
def __len__(self):
return len(self.data)
def test():
device = torch.device('cuda:2')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
for inputs, labels in test_loader:
tic = time()
pred = model(inputs)
print('time1 {}'.format(time() - tic))
print('------------------')
#---------- this one is 0.004s --------------#
for inputs, labels in test_loader:
inputs = inputs.detach().cpu().to(device)
tic = time()
pred = model(inputs)
print('time2 {}'.format(time() - tic))
print('------------------')
#---------- this one is 0.12s --------------#
for inputs, labels in test_loader:
tic = time()
inputs = inputs.detach().cpu().to(device)
pred = model(inputs)
print('time3 {}'.format(time() - tic))
print('------------------')
</code></pre>
<p>Basically, it seems that when there is no explicit GPU-to-CPU transfer either before or after the forward pass, the forward pass costs more time, as if it were implicitly doing the GPU-to-CPU transfer itself.</p>
|
<p>I played around with the code a little bit, and I think the problem is that you are measuring times for both cases in the same run. Here is my boiled-down version of your code, since your model exhausted my GPU memory:</p>
<pre><code>class DGCNN(nn.Module):
    def __init__(self, num_layers=1200):
        super(DGCNN, self).__init__()
        self.layers = nn.ModuleList([nn.Linear(256, 256) for _ in range(num_layers)])
def forward(self, x):
x = x.view(-1, 256)
for layer in self.layers:
x = layer(x)
return x
class my_loader(Dataset):
def __init__(self, device):
self.data = torch.rand(256, 2048, 3).to(device).float()
self.labels = torch.rand(256).to(device).long()
def __getitem__(self, ind):
return self.data[ind], self.labels[ind]
def __len__(self):
return len(self.data)
</code></pre>
<p>Now, here I demonstrate different versions of <code>test()</code>.</p>
<p>Version #1:</p>
<pre><code>def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
>>> # First case -> Full forward pass: 3.105103, # Second case -> Full forward pass: 2.831652
</code></pre>
<p>Now I switched the order of timing calculations for the cases. Version #2:</p>
<pre><code>def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
>>> # Second case -> Full forward pass: 3.288522, # First case -> Full forward pass: 2.583231
</code></pre>
<p>Apparently, the first timing you calculate seems to end up slower. So, I calculated these timings separately in different runs with fresh kernels. Version #3:</p>
<pre><code>def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.12s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs)
tac = time()
print(f'# First case -> Full forward pass: {tac - tic:.6f}')
>>> # First case -> Full forward pass: 3.091592
</code></pre>
<p>Version #4:</p>
<pre><code>def test():
device = torch.device('cuda:0')
test_set = my_loader(device)
test_loader = DataLoader(test_set, batch_size=16, shuffle=True, num_workers=0)
model = DGCNN().to(device)
model.eval()
#---------- this one is 0.004s --------------#
tic = time()
for inputs, labels in test_loader:
pred = model(inputs.detach().cpu().to(device))
tac = time()
print(f'# Second case -> Full forward pass: {tac - tic:.6f}')
>>> # Second case -> Full forward pass: 3.190248
</code></pre>
<p>So, by testing one at a time, it seems like <code>pred = model(inputs)</code> runs slightly faster than <code>pred = model(inputs.detach().cpu().to(device))</code>, which is the obvious expected result.</p>
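<p>One more caveat that these measurements gloss over: CUDA kernels launch asynchronously, so wall-clock timing around <code>model(inputs)</code> can measure kernel queueing rather than actual compute, and any GPU-to-CPU copy (like your <code>.cpu()</code> call) forces a synchronization, which shifts where the waiting shows up. A minimal sketch of a fairer measurement:</p>
<pre><code>import torch
from time import time

torch.cuda.synchronize()  # wait for all pending GPU work before starting the clock
tic = time()
for inputs, labels in test_loader:
    pred = model(inputs)
torch.cuda.synchronize()  # make sure the forward passes actually finished
print(f'Full forward pass: {time() - tic:.6f}')
</code></pre>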
|
pytorch
| 2
|
377,079
| 65,634,458
|
train test split is not splitting correctly
|
<p>I am still a beginner in AI and deep learning, but I wanted to test whether a neural network can learn to calculate the sum of two numbers. I generated a dataset of 5000 samples and set test size = 0.3, so the training dataset should contain 3500 samples. What is weird is that the model appears to be training on only 110 inputs instead of 3500.</p>
<p>The code used:</p>
<pre><code>import tensorflow as tf
from sklearn.model_selection import train_test_split
import numpy as np
from random import random
def generate_dataset(num_samples, test_size=0.33):
"""Generates train/test data for sum operation
:param num_samples (int): Num of total samples in dataset
    :param test_size (float): Ratio of num_samples used as test set
:return x_train (ndarray): 2d array with input data for training
:return x_test (ndarray): 2d array with input data for testing
:return y_train (ndarray): 2d array with target data for training
:return y_test (ndarray): 2d array with target data for testing
"""
# build inputs/targets for sum operation: y[0][0] = x[0][0] + x[0][1]
x = np.array([[random()/2 for _ in range(2)] for _ in range(num_samples)])
y = np.array([[i[0] + i[1]] for i in x])
# split dataset into test and training sets
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=test_size)
return x_train, x_test, y_train, y_test
if __name__ == "__main__":
    # create a dataset with 5000 samples
x_train, x_test, y_train, y_test = generate_dataset(5000, 0.3)
# build model with 3 layers: 2 -> 5 -> 1
model = tf.keras.models.Sequential([
tf.keras.layers.Dense(5, input_dim=2, activation="sigmoid"),
tf.keras.layers.Dense(1, activation="sigmoid")
])
# choose optimiser
optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)
# compile model
model.compile(optimizer=optimizer, loss='mse')
# train model
model.fit(x_train, y_train, epochs=100)
# evaluate model on test set
print("\nEvaluation on the test set:")
model.evaluate(x_test, y_test, verbose=2)
# get predictions
data = np.array([[0.1, 0.2], [0.2, 0.2]])
predictions = model.predict(data)
# print predictions
print("\nPredictions:")
for d, p in zip(data, predictions):
print("{} + {} = {}".format(d[0], d[1], p[0]))
</code></pre>
<p><a href="https://i.stack.imgur.com/AIpmI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AIpmI.png" alt="enter image description here" /></a></p>
|
<p>The <code>110/110</code> you are seeing in your image is actually the batch count, not the sample count. So 110 batches * the default batch size of 32 gives you ~3500 training samples, which matches what you'd expect as 70% of 5000.</p>
<p>You can see by backing into it the other way that the last batch would be a partial batch, since it's not evenly divisible by 32:</p>
<pre class="lang-py prettyprint-override"><code>>>> (.7 * 5000) / 110
31.818181818181817
</code></pre>
<p>In neural networks, an epoch is a full pass over the data. It trains in small batches (also called steps), and this is the way Keras logs them.</p>
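<p>You can verify the arithmetic directly, and pass an explicit <code>batch_size</code> to <code>model.fit</code> if you want a different step count:</p>
<pre class="lang-py prettyprint-override"><code>>>> import math
>>> math.ceil((0.7 * 5000) / 32)  # steps per epoch with the default batch_size of 32
110
</code></pre>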
|
python|numpy|tensorflow|deep-learning
| 2
|
377,080
| 65,901,247
|
An easy way to calculate time intervals between dates in a column in Python
|
<p>Suppose I have a Pandas DataFrame like this:</p>
<pre><code> item event date
A 1 2020-03-09
B 1 2020-03-09
A 2 2020-05-01
B 2 2020-05-01
C 2 2020-05-01
A 3 2020-06-25
C 3 2020-06-25
B 4 2020-07-18
C 4 2020-07-18
</code></pre>
<p>This dataframe contains a unique date per 'event' per 'item'. So this means that an item has several events with distinct dates.</p>
<p>Now I would like to calculate, per item, the average number of days between the dates. This will be a different value for each item, so it requires calculating the average time between consecutive event dates per item.</p>
<p>So the expected output would look like:</p>
<pre><code> item average_interval_in_days
A 54
B 65.5
C 39.5
</code></pre>
<p>Does anyone have an idea how to do this?</p>
|
<p>Very similar to @BradSolomon's answer, with two small differences:</p>
<pre class="lang-py prettyprint-override"><code>df.sort_values(['item', 'date']).groupby('item')['date'].agg(
lambda g: g.diff().mean() / pd.Timedelta(days=1))
# gives:
item
A 54.0
B 65.5
C 39.0
</code></pre>
<p>Notes:</p>
<ol>
<li>ensure that dates are sorted within each group, otherwise the mean will depend on the order; in your example, the dates happen to be sorted, so if you can guarantee it, you may skip <code>.sort_values()</code>;</li>
<li>use <code>... / pd.Timedelta(days=1)</code> to produce directly the mean difference in units of days;</li>
<li>note that for item C the mean is 39.0 days, not the 39.5 in your expected output: the two gaps are 55 days (2020-05-01 to 2020-06-25) and 23 days (2020-06-25 to 2020-07-18), and (55 + 23) / 2 = 39.</li>
</ol>
<p><strong>Alternative for speed</strong> (no sort, no lambda, but a bit more opaque)</p>
<pre class="lang-py prettyprint-override"><code>gb = df.groupby('item')['date']
(gb.max() - gb.min()) / (gb.count() - 1) / pd.Timedelta(days=1)
# gives:
item
A 54.0
B 65.5
C 39.0
</code></pre>
|
python|pandas|dataframe|date
| 2
|
377,081
| 65,600,270
|
For loop over dataframe python
|
<p>I have a dataframe called <code>df_civic</code> with the columns <code>state, rank, make/model, model year, thefts</code>. I want to calculate the <strong>AVG</strong> and <strong>STD</strong> of <code>thefts</code> for each <code>model year</code>.</p>
<p>All years in the dataframe are collected with: <code>years_civic = list(pd.unique(df_civic['Model Year']))</code></p>
<p>My loop looks like this:</p>
<pre><code>for civic_year in years_civic:
f = df_civic['Model Year'] == civic_year
civic_avg = df_civic[f]['Thefts'].mean()
civic_std = df_civic[f]['Thefts'].std()
civic_std= np.round(car_std,2)
civic_avg= np.round(car_avg,2)
print(civic_avg, civic_std, np.sum(f))
</code></pre>
<p>However, the output is not what I need; the only correct value is the one from <code>np.sum(f)</code>.</p>
<p>The output currently looks like this:</p>
<pre><code>9.0 20.51 1
9.0 20.51 1
9.0 20.51 1
9.0 20.51 1
9.0 20.51 13
9.0 20.51 15
9.0 20.51 3
9.0 20.51 2
</code></pre>
|
<p>Pandas provides many powerful functions for aggregating your data. It's usually better to first think of these functions before using <code>for</code> loops.</p>
<p>For instance, you can use:</p>
<pre><code>import pandas as pd
import numpy as np
df_civic.groupby("Model Year").agg({"theft": ["mean", np.std]})
</code></pre>
<p>More doc here: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html</a></p>
<p>Regarding your code, there is a bug: in the snippet you posted, <code>car_std</code> and <code>car_avg</code> are never defined (you probably meant <code>civic_std</code> and <code>civic_avg</code>), so every iteration rounds and prints the same stale values, which is why you see <code>9.0 20.51</code> on every line.</p>
|
python|pandas|dataframe|loops|for-loop
| 1
|
377,082
| 65,692,575
|
How to count the occurrence of a value and set that count as a new value for that value's row
|
<p>Title is probably confusing, but let me make it clearer.</p>
<p>Let's say I have a df like this:</p>
<pre><code>+----+------+---------------+
| Id | Name | reports_to_id |
+----+------+---------------+
| 0 | A | 10 |
| 1 | B | 10 |
| 2 | C | 11 |
| 3 | D | 12 |
| 4 | E | 11 |
| 10 | F | 20 |
| 11 | G | 21 |
| 12 | H | 22 |
+----+------+---------------+
</code></pre>
<p>I would want my resulting df to look like this:</p>
<pre><code>+----+------+---------------+-------+
| Id | Name | reports_to_id | Count |
+----+------+---------------+-------+
| 0 | A | 10 | 0 |
| 1 | B | 10 | 0 |
| 2 | C | 11 | 0 |
| 3 | D | 12 | 0 |
| 4 | E | 11 | 0 |
| 10 | F | 20 | 2 |
| 11 | G | 21 | 2 |
| 12 | H | 22 | 1 |
+----+------+---------------+-------+
</code></pre>
<p>But this what I currently get as a result of my code (that is wrong):</p>
<pre><code>+----+------+---------------+-------+
| Id | Name | reports_to_id | Count |
+----+------+---------------+-------+
| 0 | A | 10 | 2 |
| 1 | B | 10 | 2 |
| 2 | C | 11 | 2 |
| 3 | D | 12 | 1 |
| 4 | E | 11 | 2 |
| 10 | F | 20 | 0 |
| 11 | G | 21 | 0 |
| 12 | H | 22 | 0 |
+----+------+---------------+-------+
</code></pre>
<p>with this code:</p>
<pre><code>df['COUNT'] = df.groupby(['reports_to_id'])['id'].transform('count')
</code></pre>
<p>Any suggestions or directions on how to get the result I want? All help is appreciated! and thank you in advance!</p>
|
<p>Use <code>value_counts</code> to count the <code>reports_to_id</code> by values, then <code>map</code> that to <code>Id</code>:</p>
<pre><code>df['COUNT'] = df['Id'].map(df['reports_to_id'].value_counts()).fillna(0)
</code></pre>
<p>Output:</p>
<pre><code> Id Name reports_to_id COUNT
0 0 A 10 0.0
1 1 B 10 0.0
2 2 C 11 0.0
3 3 D 12 0.0
4 4 E 11 0.0
5 10 F 20 2.0
6 11 G 21 2.0
7 12 H 22 1.0
</code></pre>
<p>Similar idea with <code>reindex</code>:</p>
<pre><code>df['COUNT'] = df['reports_to_id'].value_counts().reindex(df['Id'], fill_value=0).values
</code></pre>
<p>which gives a better looking <code>COUNT</code>:</p>
<pre><code> Id Name reports_to_id COUNT
0 0 A 10 0
1 1 B 10 0
2 2 C 11 0
3 3 D 12 0
4 4 E 11 0
5 10 F 20 2
6 11 G 21 2
7 12 H 22 1
</code></pre>
|
python|python-3.x|pandas|dataframe
| 1
|
377,083
| 65,860,326
|
How to generate a list within a list delimited by a space
|
<p>How do I replicate the structure of the result of itertools.product?</p>
<p>As you know, itertools.product gives us an iterator object, and we need to put its contents into a list/array so we can print it, something like this, right?</p>
<pre><code>import itertools
import numpy as np
CN = np.asarray(list(itertools.product([0, 1], repeat=5)))
print(CN)
</code></pre>
<p>i want to be able to make something like that but i want the data to be from a csv file.. so i want to make something like this</p>
<pre><code>#PSEUDOCODE
import pandas as pd
df = pd.read_csv('csv here')
#a b c d are the columns that i want to get
x = list(df['a'], df['b'], df['c'], df['d'])
print(x)
</code></pre>
<p>so the result will be something like this</p>
<pre><code>[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]]
</code></pre>
<p>how can i do that?</p>
<p>EDIT:
I am trying to learn how to do recursive feature elimination, and the example code I found on Google uses the iris dataset.</p>
<pre><code>from sklearn import datasets
dataset = datasets.load_iris()
x = dataset.data
print(x)
</code></pre>
<p>and when printed it looked something like this</p>
<pre><code>[[5.1 3.5 1.4 0.2]
[4.9 3. 1.4 0.2]
[4.7 3.2 1.3 0.2]
[4.6 3.1 1.5 0.2]
[5. 3.6 1.4 0.2]
[5.4 3.9 1.7 0.4]
[4.6 3.4 1.4 0.3]
[5. 3.4 1.5 0.2]
[4.4 2.9 1.4 0.2]
[4.9 3.1 1.5 0.1]]
</code></pre>
<p>How could I make my dataset look like that, so I can use this RFE template?</p>
<pre><code># Recursive Feature Elimination
from sklearn import datasets
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
# load the iris datasets
dataset = datasets.load_iris()
# create a base classifier used to evaluate a subset of attributes
model = LogisticRegression()
# create the RFE model and select 3 attributes
rfe = RFE(model, 3)
print(rfe)
rfe = rfe.fit(dataset.data, dataset.target)
print("features:",dataset.data)
print("target:",dataset.target)
print(rfe)
# summarize the selection of the attributes
print(rfe.support_)
print(rfe.ranking_)
</code></pre>
|
<p>You don't have to. If you want to use the <code>rfe.fit</code> function, you need to feed the features and the target separately.</p>
<p>So if your df is like:</p>
<pre class="lang-py prettyprint-override"><code> a b c d target
0 5.1 3.5 1.4 0.2 1
1 4.9 3.0 1.4 0.2 1
2 4.7 3.2 1.3 0.2 0
3 4.6 3.1 1.5 0.2 0
4 5.0 3.6 1.4 0.2 1
5 5.4 3.9 1.7 0.4 1
6 4.6 3.4 1.4 0.3 0
7 5.0 3.4 1.5 0.2 0
8 4.4 2.9 1.4 0.2 1
9 4.9 3.1 1.5 0.1 1
</code></pre>
<p>you can use:</p>
<pre class="lang-py prettyprint-override"><code>...
rfe = rfe.fit(df[['a', 'b', 'c', 'd']], df['target'])
...
</code></pre>
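<p>And if you do want the plain NumPy array itself (the structure you saw printed from <code>dataset.data</code>), a simple sketch, assuming the column names from your pseudocode:</p>
<pre class="lang-py prettyprint-override"><code>x = df[['a', 'b', 'c', 'd']].to_numpy()  # or df[['a', 'b', 'c', 'd']].values
print(x)
</code></pre>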
|
python|pandas|numpy|scikit-learn|dataset
| 0
|
377,084
| 65,571,761
|
Create a folder structure based on information from a dataframe
|
<p>I have this dataframe train_info with 423 different artists and filenames corresponding to images of paintings.</p>
<pre class="lang-py prettyprint-override"><code> artist filename
0 Hiroshige 53180.jpg
1 Ivan Aivazovsky 99442.jpg
2 Hiroshige 23508.jpg
3 Hieronymus Bosch 82352.jpg
4 Hiroshige 27254.jpg
... ... ... ... ...
128069 Frans Snyders 14264images161.jpg
128070 Frans Snyders 14260images158.jpg
128071 Frans Snyders 14274images170.jpg
128072 Frans Snyders 14355images90.jpg
128073 Frans Snyders 14270images167.jpg
</code></pre>
<p>Then I have a folder, <code>Paintings</code>, containing all these images.</p>
<p>What i want to do is create another folder - train - with sub-folders for each artist and each sub-folder should contain all the images corresponding to each artist.</p>
<p>Like this:</p>
<pre class="lang-py prettyprint-override"><code>-train
-Hiroshige
-53180.jpg
-23508.jpg
-27254.jpg
...
-Ivan Aivazovsky
-99442.jpg
...
-Frans Snyders
-14264images161.jpg
-14260images158.jpg
-14274images170.jpg
-14355images90.jpg
-14270images167.jpg
...
</code></pre>
<p>Unfortunately, I have no idea how to solve this.</p>
|
<p>An easy, no-sweat way, is to use explicit looping:</p>
<pre><code>import os
import shutil
srcdir = 'Paintings'
dstdir = 'train'
for name, s in df.groupby('artist')['filename']:
artistdir = os.path.join(dstdir, name)
print(f'copying {s.shape[0]} images from {srcdir} to {artistdir}')
os.makedirs(artistdir, exist_ok=True)
for filename in s:
        shutil.copy(os.path.join(srcdir, filename), os.path.join(artistdir, filename))
</code></pre>
<p>Output:</p>
<pre><code>copying 1 images from Paintings to train/Hieronymus Bosch
copying 3 images from Paintings to train/Hiroshige
copying 1 images from Paintings to train/Ivan Aivazovsky
...
</code></pre>
<p>There are faster ways (in terms of pandas operations), but here the <code>copy</code> itself dwarfs that time.</p>
|
python|pandas|dataframe|file|data-science
| 0
|
377,085
| 65,643,248
|
One hot encoding from numpy
|
<p>I am trying to understand values output from an example python <a href="https://towardsdatascience.com/word2vec-from-scratch-with-numpy-8786ddd49e72" rel="nofollow noreferrer">tutorial</a>. The output doesn't seem to be in any order that I can understand. These particular python lines are causing me trouble:</p>
<pre><code>vocab_size = 13 #just to provide all variable values
m = 84 #just to provide all variable values
Y_one_hot = np.zeros((vocab_size, m))
Y_one_hot[Y.flatten(), np.arange(m)] = 1
</code></pre>
<p>The input Y.flatten() evaluates to the following numpy array:</p>
<pre><code> [ 8 9 7 4 9 7 8 4 8 7 8 12 4 8 9 8 12 7 8 9 7 12 7 2
9 7 8 7 2 0 7 8 12 2 0 8 8 12 7 0 8 6 12 7 2 8 6 5
7 2 0 6 5 10 2 0 8 5 10 1 0 8 6 10 1 3 8 6 5 1 3 11
6 5 10 3 11 5 10 1 11 10 1 3]
</code></pre>
<p>np.arange(m) is an array ranging from 0 to 83:</p>
<pre><code>np.arange(m)
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23
24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47
48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83]
</code></pre>
<p>OK, so what I'm having trouble understanding about the new Y_one_hot is this: I receive a numpy array with 13 rows (as expected), but I do not understand why the ones are positioned where they are, given the Y.flatten() input. For example, here is the first of the 13 rows:</p>
<pre><code>[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0]
</code></pre>
<p>Could someone please explain how that single line maps the input to this output array? The ones seem to be at random positions, and in some of the 13 rows the number of ones also looks random. Is this the intended behavior?</p>
<p>here is a full runnable example:</p>
<pre><code>import numpy as np
import sys
import re
# turn Y into one hot encoding
Y = np.array([ 8, 9, 7, 4 , 9, 7, 8, 4, 8, 7, 8, 12, 4, 8, 9, 8, 12, 7, 8, 9, 7, 12, 7, 2,
9, 7, 8, 7, 2, 0, 7, 8, 12, 2, 0, 8, 8, 12, 7, 0, 8, 6, 12, 7, 2, 8, 6, 5,
7, 2, 0, 6, 5, 10, 2, 0, 8, 5, 10, 1, 0, 8, 6, 10, 1, 3, 8, 6, 5, 1, 3, 11,
6, 5, 10, 3, 11, 5, 10, 1, 11, 10, 1, 3])
m = 84
vocab_size = 13
Y_one_hot = np.zeros((vocab_size, m))
Y_one_hot[Y.flatten(), np.arange(m)] = 1
np.set_printoptions(threshold=sys.maxsize)
print(Y_one_hot.astype(int))
</code></pre>
|
<p>The code you showed is a quick way to convert multiple label indices to one-hot-encodings.</p>
<p>Let's do it with a single index, and convert it to a one-hot-encoding vector. To keep it simple, we will stick with an encoding size of <em>10</em> (i.e. nine <code>0</code>s and one <code>1</code>):</p>
<pre><code>>>> y = 4
>>> y_ohe = np.zeros(10)
>>> y_ohe[y] = 1
array([0., 0., 0., 0., 1., 0., 0., 0., 0., 0.])
</code></pre>
<p>Now, let's try with more than one index: <em>4</em> labels at the same time. The starting array would be two-dimensional: <code>(4, 10)</code>, i.e. a one-hot-encoding vector of size <em>10</em> per label.</p>
<pre><code>>>> y = np.array([4, 2, 1, 7])
>>> y_ohe = np.zeros((4, 10))
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
</code></pre>
<p>The desired result is:</p>
<pre><code>array([[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]])
</code></pre>
<p>To do so we will index by row and by column: <code>np.arange(len(y))</code> will give us all row indices, while <code>y</code> will give us the columns where the <code>1</code>s are supposed to be. Since <code>np.arange(len(y))</code> and <code>y</code> have the same length, they are iterated over in lockstep (zipped), something like:</p>
<pre><code>>>> for i, j in zip(np.arange(len(y)), y):
>>>     print([i, j])
[0, 4]
[1, 2]
[2, 1]
[3, 7]
</code></pre>
<p>These are the <code>[i, j]</code> coordinates in the 2D tensor <code>y_ohe</code> where we want <code>1</code>s to be.</p>
<p>Assign <code>1</code>s at those indexed positions:</p>
<pre><code>>>> y_ohe[np.arange(len(y)), y] = 1
array([[0., 0., 0., 0., 1., 0., 0., 0., 0., 0.],
[0., 0., 1., 0., 0., 0., 0., 0., 0., 0.],
[0., 1., 0., 0., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 0., 1., 0., 0.]])
</code></pre>
<p>Similarly, by indexing the other way around:</p>
<pre><code>>>> y = np.array([4, 2, 1, 7])
>>> y_ohe = np.zeros((10, 4))
>>> y_ohe[y, np.arange(len(y))] = 1
array([[0., 0., 0., 0.],
[0., 0., 1., 0.],
[0., 1., 0., 0.],
[0., 0., 0., 0.],
[1., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 0., 0., 1.],
[0., 0., 0., 0.],
[0., 0., 0., 0.]])
</code></pre>
<hr />
<p>In your case <code>Y</code> has an extra dimension, something like <code>Y = np.array([[4], [2], [1], [7]])</code> to relate to the example above; flattening it gives the 1-D array used for indexing.</p>
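<p>As an aside, a common shortcut that is not in the tutorial you linked: build one-hot encodings by row-indexing an identity matrix.</p>
<pre><code>>>> y = np.array([4, 2, 1, 7])
>>> np.eye(10)[y]    # shape (4, 10), one one-hot row per label
>>> np.eye(10)[y].T  # shape (10, 4), matching the tutorial's (vocab_size, m) layout
</code></pre>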
|
python|numpy|one-hot-encoding
| 3
|
377,086
| 65,800,234
|
Passing datetime64[ns] from pandas' data frame as an argument to a function
|
<p>I'm trying to create an additional column in a data frame to show the number of network days (excluding custom holidays) between two dates. I'm using a function to which I'm trying to pass dates from <code>df</code>'s columns as arguments, but I can't make it work.</p>
<p>Below is my code (I'm using two made-up holidays in the given set):</p>
<pre><code>from networkdays import networkdays
import datetime as dt
import numpy as np
import pandas as pd
public_holidays_list = [dt.date(2021, 1, 6), dt.date(2021, 1, 7)]
public_holidays = set(public_holidays_list)
def working_days(start, end, holidays):
days = networkdays.Networkdays(start, end, holidays)
working_days = len(days.networkdays())
return working_days
</code></pre>
<p>The formula itself works fine:</p>
<pre><code>print(working_days(dt.date(2021, 1, 4), dt.date(2021, 1, 8), public_holidays))
</code></pre>
<blockquote>
<p>3</p>
</blockquote>
<p>Minimal data frame with the <code>dtypes</code> I'm working on:</p>
<pre><code>d = {'Name': ['A', 'B'], 'Start_Date': [dt.date(2021, 1, 4), dt.date(2021, 1, 11)], 'End_Date': [dt.date(2021, 1, 8), dt.date(2021, 1, 15)]}
df = pd.DataFrame(data = d)
df['Start_Date'] = pd.to_datetime(df['Start_Date'])
df['End_Date'] = pd.to_datetime(df['End_Date'])
</code></pre>
<p>When I'm trying the below way...</p>
<pre><code>df['Working_Days'] = working_days(df['Start_Date'], df['End_Date'], public_holidays)
</code></pre>
<p>...I'm getting an error:</p>
<blockquote>
<p>AttributeError: 'Series' object has no attribute 'days'</p>
</blockquote>
<p>I've also tried to use <code>numpy</code>:</p>
<pre><code>df['Working_Days'] = np.vectorize(working_days)(df['Start_Date'], df['End_Date'], public_holidays)
</code></pre>
<p>I got an error as well:</p>
<blockquote>
<p>AttributeError: 'numpy.timedelta64' object has no attribute 'days'</p>
</blockquote>
<p>Could you point me in the right direction?</p>
<p><strong>EDIT: The correct answer to my problem is @Kris's last comment.</strong></p>
<p><strong>IMPORTANT!</strong> Although the <code>lambda</code> doesn't return any errors, it takes <code>public_holidays</code> into consideration correctly in 2 scenarios:</p>
<p>A) The elements of <code>public_holidays</code> are of class <code>datetime.date</code> and <code>df</code>'s dates are of class <code>object</code> (I got this by removing <code>pd.to_datetime()</code> lines from the code).</p>
<p>B) The <code>public_holidays</code> is of type <code>list</code> (created from an Excel table via <code>public_holidays = df_ph['Date'].tolist()</code>), its elements are of class <code>Timestamp</code> and <code>pd.to_datetime()</code> lines are not removed from the code above (making dates in <code>df</code> <code>datetime64[ns]</code>).</p>
|
<p>As per my comment, use <code>.apply</code>:</p>
<pre><code>df['Working_Days'] = df.apply(lambda x: working_days(x.Start_Date, x.End_Date, public_holidays), axis=1)
</code></pre>
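<p>Alternatively, a different day-count routine (not the <code>networkdays</code> package): NumPy's <code>np.busday_count</code> is vectorized over whole columns and accepts custom holidays. Note that it counts the start date but excludes the end date, so the convention may differ from <code>Networkdays</code> by one day:</p>
<pre><code>import numpy as np

df['Working_Days'] = np.busday_count(
    df['Start_Date'].values.astype('datetime64[D]'),
    df['End_Date'].values.astype('datetime64[D]'),
    holidays=public_holidays_list,
)
</code></pre>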
|
python|pandas|function|dataframe|datetime
| 1
|
377,087
| 65,806,914
|
How to replace one column values with another column values
|
<p>I actually want to replace the prefix with the mean here.
How can I achieve it?
When replacing, the column is filled with NaN rather than with the mean values.</p>
<p><strong>This is my code:</strong></p>
<p><a href="https://i.stack.imgur.com/I0Ldj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I0Ldj.png" alt="My code" /></a></p>
|
<p>Since the dataframe has multi-index columns, selecting by the top level alone returns a dataframe rather than a single column, so you are effectively trying to assign a dataframe into one column. You have to refer to the exact (two-level) column name to do that replacement:</p>
<pre><code>z['ID']=z['teacher_number_of_previously_posted_projects']['mean']
</code></pre>
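<p>If you'd rather avoid the two-level lookups altogether, a minimal sketch (the column names are taken from your screenshot, so treat them as assumptions) is to flatten the MultiIndex once:</p>
<pre><code># join the two levels of each column name with '_'
z.columns = ['_'.join(map(str, col)).strip('_') for col in z.columns]
z['ID'] = z['teacher_number_of_previously_posted_projects_mean']
</code></pre>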
|
python-3.x|pandas|replace|group-by
| 1
|
377,088
| 65,503,800
|
Finding mean temperature grouping every N rows
|
<p>I have the following dataframe with hourly temperatures at different coordinates:</p>
<pre><code>df.head
Out[63]:
time latitude longitude t2m
2018-01-01 00:00:00 72.0 -11.0 -3.957336
2018-01-01 01:00:00 72.0 -11.0 -4.165466
2018-01-01 02:00:00 72.0 -11.0 -4.562500
2018-01-01 03:00:00 72.0 -11.0 -4.860107
2018-01-01 04:00:00 72.0 -11.0 -5.155762
... ... ...
2018-12-31 19:00:00 34.0 32.0 16.527161
2018-12-31 20:00:00 34.0 32.0 16.639832
2018-12-31 21:00:00 34.0 32.0 16.700165
2018-12-31 22:00:00 34.0 32.0 16.592102
2018-12-31 23:00:00 34.0 32.0 16.724670
</code></pre>
<p>I would like to find the daily mean temperature for each pair of coordinates. For this, I need to group every 24 rows and find the mean of the <code>t2m</code> column, while keeping only the date in the <code>time</code> column and the <code>latitude</code> and <code>longitude</code> columns. That is,</p>
<pre><code>df.head
Out[63]:
time latitude longitude t2m
2018-01-01 72.0 -11.0 -6.378744
2018-01-01 71.75 -11.0 -5.564683
... ... ...
2018-12-31 33.75 31.75 16.836736
2018-12-31 34.0 32.0 16.836736
</code></pre>
<p>I tried doing</p>
<pre><code>N=24
test=df.groupby(df.index//N).mean()
</code></pre>
<p>But I got <code>TypeError: cannot perform __floordiv__ with this index type: DatetimeIndex</code>. I tried resetting the index and repeating the operation but then it drops the <code>time</code> column while adding the rest.</p>
<p>What would be the best way to do this? Any help would be much appreciated. Thank you in advance.</p>
<p>EDIT: Using @Shubham Sharma's suggestion, I tried doing</p>
<pre><code>df.reset_index(inplace=True)
N=24
test=df.groupby([df.index//N, 'latitude', 'longitude']).mean()
</code></pre>
<p>And it finds the right mean value, but it drops entirely the <code>time</code> column.</p>
|
<p>In general, it's simpler and more generalizable to use <code>pd.Grouper(freq='D')</code>.</p>
<p>From your data snippet, it's not clear whether your dataframe has an index or not. If it has, then <code>df.head()</code> (and not <code>df.head</code>, BTW) would show:</p>
<pre><code> latitude longitude t2m
time <--- notice the new line
2018-01-01 00:00:00 72.0 -11.0 -3.957336
2018-01-01 01:00:00 72.0 -11.0 -4.165466
2018-01-01 02:00:00 72.0 -11.0 -4.562500
2018-01-01 03:00:00 72.0 -11.0 -4.860107
2018-01-01 04:00:00 72.0 -11.0 -5.155762
</code></pre>
<p>If it doesn't, then <code>df.head()</code> would show the default <code>RangeIndex</code>:</p>
<pre><code> time latitude longitude t2m
0 2018-01-01 00:00:00 72.0 -11.0 -3.957336
1 2018-01-01 01:00:00 72.0 -11.0 -4.165466
2 2018-01-01 02:00:00 72.0 -11.0 -4.562500
3 2018-01-01 03:00:00 72.0 -11.0 -4.860107
4 2018-01-01 04:00:00 72.0 -11.0 -5.155762
</code></pre>
<p>In either case:</p>
<p>If <code>time</code> is the index, then:</p>
<pre><code>out = df.groupby([pd.Grouper(freq='D'), 'latitude', 'longitude']).mean()
# out:
t2m
time latitude longitude
2018-01-01 72.0 -11.0 -4.540234
</code></pre>
<p>If <code>time</code> is just a regular column:</p>
<pre><code>out = df.groupby([pd.Grouper(key='time', freq='D'), 'latitude', 'longitude']).mean()
# out:
t2m
time latitude longitude
2018-01-01 72.0 -11.0 -4.540234
</code></pre>
<p>In both cases, you can bring the result back from having a <code>MultiIndex</code> to being a table-like dataframe:</p>
<pre><code>out = out.reset_index()
# out:
time latitude longitude t2m
0 2018-01-01 72.0 -11.0 -4.540234
</code></pre>
|
python|pandas
| 1
|
377,089
| 65,795,374
|
GCP AI Platform: Error when creating a custom predictor model version ( trained model Pytorch model + torchvision.transform)
|
<p>I am currently trying to deploy a custom model to AI Platform by following <strong><em><a href="https://cloud.google.com/ai-platform/prediction/docs/deploying-models#gcloud_1" rel="nofollow noreferrer">https://cloud.google.com/ai-platform/prediction/docs/deploying-models#gcloud_1</a></em></strong>. The model is based on a combination of a pre-trained <strong>PyTorch</strong> model and <strong>torchvision.transform</strong>. Currently, I keep getting the error below, which happens to be related to the 500MB constraint on custom prediction.</p>
<p><strong>ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to experience errors, please contact support.</strong></p>
<p><strong>Setup.py</strong></p>
<pre><code>from setuptools import setup
from pathlib import Path
base = Path(__file__).parent
REQUIRED_PACKAGES = [line.strip() for line in open(base/"requirements.txt")]
print(f"\nPackages: {REQUIRED_PACKAGES}\n\n")
# [torch==1.3.0,torchvision==0.4.1, ImageHash==4.2.0
# Pillow==6.2.1,pyvis==0.1.8.2] installs 800mb worth of files
setup(description="Extract features of a image",
author=,
name='test',
version='0.1',
install_requires=REQUIRED_PACKAGES,
project_urls={
'Documentation':'https://cloud.google.com/ai-platform/prediction/docs/custom-prediction-routines#tensorflow',
'Deploy':'https://cloud.google.com/ai-platform/prediction/docs/deploying-models#gcloud_1',
'Ai_platform troubleshooting':'https://cloud.google.com/ai-platform/training/docs/troubleshooting',
      'Say Thanks!': 'https://medium.com/searce/deploy-your-own-custom-model-on-gcps-ai-platform-7e42a5721b43',
'google Torch wheels':"http://storage.googleapis.com/cloud-ai-pytorch/readme.txt",
'Torch & torchvision wheels':"https://download.pytorch.org/whl/torch_stable.html "
},
python_requires='~=3.7',
scripts=['predictor.py', 'preproc.py'])
</code></pre>
<p><strong>Steps taken:</strong>
I tried adding 'torch' and 'torchvision' directly to the 'REQUIRED_PACKAGES' list in the setup.py file, in order to provide PyTorch + torchvision as dependencies to be installed during deployment. I am guessing that internally AI Platform downloads the PyPI package for PyTorch, which is 500+ MB, and this causes the failure of our model deployment. If I deploy the model with 'torch' only, it seems to work (of course it then throws an error for not being able to find the 'torchvision' library).</p>
<p><strong>File size</strong></p>
<ul>
<li><em><strong>pytorch</strong></em> (<strong>torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl</strong> about <strong>111MB</strong>)</li>
<li><em><strong>torchvision</strong></em> (<strong>torchvision-0.4.1+cpu-cp37-cp37m-linux_x86_64.whl</strong> about <strong>46MB</strong>) from <strong><em><a href="https://download.pytorch.org/whl/torch_stable.html" rel="nofollow noreferrer">https://download.pytorch.org/whl/torch_stable.html</a></em></strong> and stored it on GKS.</li>
<li>The zipped predictor model file (.tar.gz format) which is the output of setup.py (<strong>5kb</strong> )</li>
<li>A trained PyTorch model (size <strong>44MB</strong>)</li>
</ul>
<p>In total, the model dependencies should be less than 250MB but still, keep getting this error. Have also tried to use the torch and torchvision provided from Google mirrored packages <a href="http://storage.googleapis.com/cloud-ai-pytorch/readme.txt" rel="nofollow noreferrer">http://storage.googleapis.com/cloud-ai-pytorch/readme.txt</a>, but same memory issue persists. AI platform is quite new for us and would like some input from professional’s.</p>
<h2 id="more-info-zs60">MORE INFO:</h2>
<p><strong>GCP CLI Input:</strong></p>
<p><strong>My environment variable:</strong></p>
<pre><code>BUCKET_NAME= “something”
MODEL_DIR=<>
VERSION_NAME='v6'
MODEL_NAME="something_model"
STAGING_BUCKET=$MODEL_DIR<>
# TORCH_PACKAGE=$MODEL_DIR"package/torch-1.3.1+cpu-cp37-cp37m-linux_x86_64.whl"
# TORCHVISION_PACKAGE=$MODEL_DIR"package/torchvision-0.4.1+cpu-cp37-cp37m-linux_x86_64.whl"
TORCH_PACKAGE=<>
TORCHVISION_PACKAGE=<>
CUSTOM_CODE_PATH=$STAGING_BUCKET"imt_ai_predict-0.1.tar.gz"
PREDICTOR_CLASS="predictor.MyPredictor"
REGION=<>
MACHINE_TYPE='mls1-c4-m2'
gcloud beta ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--origin=$MODEL_DIR \
--runtime-version=2.3 \
--python-version=3.7 \
--machine-type=$MACHINE_TYPE \
--package-uris=$CUSTOM_CODE_PATH,$TORCH_PACKAGE,$TORCHVISION_PACKAGE \
--prediction-class=$PREDICTOR_CLASS \
</code></pre>
<p><strong>GCP CLI Output:</strong></p>
<pre><code> **[1] global**
[2] asia-east1
[3] asia-northeast1
[4] asia-southeast1
[5] australia-southeast1
[6] europe-west1
[7] europe-west2
[8] europe-west3
[9] europe-west4
[10] northamerica-northeast1
[11] us-central1
[12] us-east1
[13] us-east4
[14] us-west1
[15] cancel
Please enter your numeric choice: 1
To make this the default region, run `gcloud config set ai_platform/region global`.
Using endpoint [https://ml.googleapis.com/]
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: **Model requires more memory than allowed. Please try to decrease the model size and re-deploy. If you continue to experience errors, please contact support.**
</code></pre>
<p><strong>My finding:</strong>
I have found articles of people struggling in the same way with the PyTorch package, who made it work by installing torch wheels from GCS (<a href="https://medium.com/searce/deploy-your-own-custom-model-on-gcps-ai-platform-7e42a5721b43" rel="nofollow noreferrer">https://medium.com/searce/deploy-your-own-custom-model-on-gcps-ai-platform-7e42a5721b43</a>).
I have tried the same approach with torch and torchvision but no luck so far, and I am waiting for a response from cloudml-feedback@google.com. Any help on getting a custom torch+torchvision based predictor working on AI Platform would be great.</p>
|
<p>Got this fixed by a combination of a few things. I stuck to the 4GB-CPU MLS1 machine type and a custom prediction routine (<500MB).</p>
<ul>
<li>Install the libraries via the setup.py parameters, but instead of passing just the package name and its version, add the correct torch wheel URL (ideally <100 MB).</li>
</ul>
<pre><code>REQUIRED_PACKAGES = [line.strip() for line in open(base/"requirements.txt")] +\
['torchvision==0.5.0', 'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl']
</code></pre>
<ul>
<li>I reduced the number of preprocessing steps. I couldn't fit all of them in, so I JSON-serialize the data exchanged between <code>preproc.py</code> and <code>predictor.py</code>:</li>
</ul>
<pre><code>import json

json.dump(payload, fp)  # payload = the data to send to the predictor class
</code></pre>
<ul>
<li>Import only the functions you need from a library, instead of the whole package:</li>
</ul>
<pre><code>from torch import zeros, load

# ... your code ...
</code></pre>
<h3>[Important]</h3>
<ul>
<li><p>I haven't tested different ways of serializing the trained model; there could be a memory difference there as well between the options (torch.save, pickle, joblib, etc.).</p>
</li>
<li><p>I found this link: organizations that are GCP partners might be able to request more quota (am guessing from 500MB to 2GB or so). I didn't have to go in this direction, as my issue was resolved (and another one popped up, lol).
<a href="https://cloud.google.com/ai-platform/training/docs/quotas" rel="nofollow noreferrer">https://cloud.google.com/ai-platform/training/docs/quotas</a></p>
</li>
</ul>
|
python|google-cloud-platform|pytorch|torchvision|google-ai-platform
| 1
|
377,090
| 21,125,561
|
How do I automate an environment variable dependent benchmark of BLAS in python/numpy?
|
<p>I need some help in figuring out how to automate a benchmark effort in python. </p>
<p>I'm testing the effects of threading on BLAS library calls through numpy in python. In a linux environment, threading in OpenBLAS is controlled through the environment variable <code>OMP_NUM_THREADS</code>. I want to do a test where I increment <code>OMP_NUM_THREADS</code> from 1 to a max value, time a routine at each thread count, and then finally analyze the aggregate timings for all thread counts. </p>
<p>The issue is the following. Environment variables can be set in python, but they only affect subprocesses or subshells. So I can correctly run my benchmark with the following driver code:</p>
<pre><code>#!/usr/bin/env python # driver script for thread test
import os
thread_set =[1,2,4,8,16]
for thread in thread_set:
os.environ['OMP_NUM_THREADS']='{:d}'.format(thread)
os.system("echo $OMP_NUM_THREADS")
os.system("numpy_test")
</code></pre>
<p>and numpy_test script:</p>
<pre><code>#!/usr/bin/env python
#timing test for numpy dot product (using OpenBLAS)
#based on http://stackoverflow.com/questions/11443302/compiling-numpy-with-openblas-integration
import sys
import timeit
setup = "import numpy; x = numpy.random.random((1000,1000))"
count = 5
t = timeit.Timer("numpy.dot(x, x.T)", setup=setup)
dot_time = t.timeit(count)/count
print("dot: {:7.3g} sec".format(dot_time))
</code></pre>
<p>but analyzing this is a very manual process. </p>
<p>In particular, I can't return the value <code>dot_time</code> from <code>numpy_test</code> up to my outer wrapper routine, so I can't analyze the results of my test in any automated fashion. As an example, I'd like to plot <code>dot_time</code> vs number of threads, or evaluate whether <code>dot_time</code>/number of threads is constant. </p>
<p>If I try to do a similar test entirely within a python instance by defining a python test function (avoiding the <code>os.system()</code> approach above), and then running the test function within the <code>thread in thread_set</code> loop, then all instances of the test function inherit the same value for <code>OMP_NUM_THREADS</code> (that of the parent python shell). So this test fails:</p>
<pre><code>#!/usr/bin/env python
#attempt at testing threads that doesn't work
#(always uses inherited value of OMP_NUM_THREADS)
import os
import sys
import timeit
def test_numpy():
setup = "import numpy; x = numpy.random.random((1000,1000))"
count = 5
t = timeit.Timer("numpy.dot(x, x.T)", setup=setup)
dot_time = t.timeit(count)/count
print("dot: {:7.3g} sec".format(dot_time))
return dot_time
thread_set =[1,2,4,8,16]
for thread in thread_set:
os.environ['OMP_NUM_THREADS']='{:d}'.format(thread)
os.system("echo $OMP_NUM_THREADS")
time_to_run = test_numpy()
print(time_to_run)
</code></pre>
<p>This fails in that every instance of <code>thread</code> takes the same time, as <code>test_numpy()</code> always inherits the value of <code>OMP_NUM_THREADS</code> in the parent environment rather than the value set through <code>os.environ()</code>. If something like this worked however, it would be trivial to do the analysis I need to do.</p>
<p>In the real test, I'll be running over a few 1000 permutations, so automation is key. Given that, I'd appreciate an answer to any of these questions:</p>
<ol>
<li><p>How would you return a value (<code>dot_time</code>) from a subprocess like this? Is there a more elegant solution than reading/writing a file?</p></li>
<li><p>Is there a better way to structure this sort of (environment variable dependent) test?</p></li>
</ol>
<p>Thank you in advance.</p>
|
<p>You can do something like this:</p>
<pre><code>import subprocess
os.environ['OMP_NUM_THREADS'] = '{:d}'.format(thread)
proc = subprocess.Popen(["numpy_test"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = proc.communicate()
</code></pre>
<p>Then you'll have the output of the <code>numpy_test</code> script in <code>stdout</code>. In general, I believe <code>subprocess.call</code> and <code>subprocess.Popen</code> are preferred over <code>os.system</code>.</p>
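<p>To close the loop on question 1 (getting <code>dot_time</code> back without a file), here is a minimal sketch of the whole driver. It assumes <code>numpy_test</code> prints exactly one line of the form <code>dot: ... sec</code>, as in your script; the <code>env=</code> keyword sets the thread count per subprocess without mutating the parent environment:</p>
<pre><code>import os
import subprocess

results = {}
for thread in [1, 2, 4, 8, 16]:
    env = dict(os.environ, OMP_NUM_THREADS=str(thread))
    out = subprocess.check_output(["numpy_test"], env=env).decode()
    # expected output line: "dot:  0.0523 sec"
    results[thread] = float(out.split()[1])

for thread, dot_time in sorted(results.items()):
    print(thread, dot_time, dot_time * thread)
</code></pre>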
|
python|multithreading|numpy|python-3.3|blas
| 2
|
377,091
| 21,182,759
|
Pivot data and maintain original sort order
|
<p>I'd like to pivot my data which results from a django queryset while maintaining the original (non-alphabetical) sort order on the index column. The pivoted data will then be used in a google visualization line chart.</p>
<p>I've hacked together my own code to do the job but it's a bit ugly and I was wondering if it could be done using a pandas DataFrame pivot.</p>
<p>I've never used pandas before so, after reading the doco, this is what I came up with.</p>
<p>Here is my unpivoted data frame, sorted by date and tenor where tenor suffixes represent: D=Day, M=Month, Y=Year.</p>
<pre><code>df = DataFrame(data)
date tenor value
0 2014-01-01 1D 0.517125
1 2014-01-01 1M 0.5175
2 2014-01-01 2M 0.518159
3 2014-01-01 3M 0.5187
4 2014-01-01 4M 0.51912
5 2014-01-01 5M 0.51949
6 2014-01-01 6M 0.5197
7 2014-01-01 9M 0.519511
8 2014-01-01 1Y 0.5198
9 2014-01-01 18M 0.521228
10 2014-01-01 2Y 0.523097
11 2014-01-01 3Y 0.525054
12 2014-01-01 4Y 0.527055
13 2014-01-01 5Y 0.529054
14 2014-01-01 6Y 0.531099
15 2014-01-01 7Y 0.532852
16 2014-01-01 8Y 0.534207
17 2014-01-01 9Y 0.535314
18 2014-01-02 1D 0.517874
19 2014-01-02 1M 0.5181
20 2014-01-02 2M 0.518451
21 2014-01-02 3M 0.5188
22 2014-01-02 4M 0.519113
23 2014-01-02 5M 0.519418
24 2014-01-02 6M 0.5196
25 2014-01-02 9M 0.519377
26 2014-01-02 1Y 0.5197
27 2014-01-02 18M 0.521406
28 2014-01-02 2Y 0.523405
29 2014-01-02 3Y 0.525254
30 2014-01-02 4Y 0.527151
31 2014-01-02 5Y 0.529256
32 2014-01-02 6Y 0.531543
33 2014-01-02 7Y 0.533457
34 2014-01-02 8Y 0.534802
35 2014-01-02 9Y 0.535847
36 2014-01-03 1D 0.518552
37 2014-01-03 1M 0.5186
38 2014-01-03 2M 0.518536
39 2014-01-03 3M 0.5186
40 2014-01-03 4M 0.518865
41 2014-01-03 5M 0.51916
42 2014-01-03 6M 0.5193
43 2014-01-03 9M 0.519024
44 2014-01-03 1Y 0.5193
45 2014-01-03 18M 0.520882
46 2014-01-03 2Y 0.5228
47 2014-01-03 3Y 0.524647
48 2014-01-03 4Y 0.526752
49 2014-01-03 5Y 0.528957
50 2014-01-03 6Y 0.531065
51 2014-01-03 7Y 0.532856
52 2014-01-03 8Y 0.534325
53 2014-01-03 9Y 0.535558
</code></pre>
<p>Using pandas pivot produces the following results. The pivot worked but the rows are in the wrong order.</p>
<pre><code>df_pivot = df.pivot(index='tenor', columns='date', values='value')
tenor 2014-01-01 2014-01-02 2014-01-03
18M 0.521228 0.521406 0.520882
1D 0.517125 0.517874 0.518552
1M 0.5175 0.5181 0.5186
1Y 0.5198 0.5197 0.5193
2M 0.518159 0.518451 0.518536
2Y 0.523097 0.523405 0.5228
3M 0.5187 0.5188 0.5186
3Y 0.525054 0.525254 0.524647
4M 0.51912 0.519113 0.518865
4Y 0.527055 0.527151 0.526752
5M 0.51949 0.519418 0.51916
5Y 0.529054 0.529256 0.528957
6M 0.5197 0.5196 0.5193
6Y 0.531099 0.531543 0.531065
7Y 0.532852 0.533457 0.532856
8Y 0.534207 0.534802 0.534325
9M 0.519511 0.519377 0.519024
9Y 0.535314 0.535847 0.535558
</code></pre>
<p>I would like the results sorted by the tenor column:</p>
<pre><code>tenor 2014-01-01 2014-01-02 2014-01-03
1D 0.517125 0.517874 0.518552
1M 0.5175 0.5181 0.5186
2M 0.518159 0.518451 0.518536
3M 0.5187 0.5188 0.5186
4M 0.51912 0.519113 0.518865
5M 0.51949 0.519418 0.51916
6M 0.5197 0.5196 0.5193
9M 0.519511 0.519377 0.519024
1Y 0.5198 0.5197 0.5193
18M 0.521228 0.521406 0.520882
2Y 0.523097 0.523405 0.5228
3Y 0.525054 0.525254 0.524647
4Y 0.527055 0.527151 0.526752
5Y 0.529054 0.529256 0.528957
6Y 0.531099 0.531543 0.531065
7Y 0.532852 0.533457 0.532856
8Y 0.534207 0.534802 0.534325
9Y 0.535314 0.535847 0.535558
</code></pre>
<p>I've thought about writing a custom sort function that would convert the tenor values to days when comparing and then using that with pandas (not sure how yet).</p>
<p>I've investigated using <a href="https://developers.google.com/chart/interactive/docs/querylanguage#Pivot" rel="nofollow">google visualization pivot</a> but that only seems to work on a query not on an existing DataTable.</p>
<p>Any other suggestions would be greatly appreciated.</p>
|
<p>Comparing day units with month units is fuzzy: for example, which is larger, 30D or 1M? If that is not a problem, you can use the <code>reindex()</code> method to reorder the DataFrame:</p>
<pre><code>import pandas as pd
df_pivot = df.pivot(index='tenor', columns='date', values='value')
DayCounts = {"D":1, "M":365.0/12, "Y":365}
index = sorted(df_pivot.index, key=lambda v:int(v[:-1])*DayCounts[v[-1]])
df_pivot.reindex(index)
</code></pre>
<p>output:</p>
<pre><code>date 2014-01-01 2014-01-02 2014-01-03
1D 0.517125 0.517874 0.518552
1M 0.517500 0.518100 0.518600
2M 0.518159 0.518451 0.518536
3M 0.518700 0.518800 0.518600
4M 0.519120 0.519113 0.518865
5M 0.519490 0.519418 0.519160
6M 0.519700 0.519600 0.519300
9M 0.519511 0.519377 0.519024
1Y 0.519800 0.519700 0.519300
18M 0.521228 0.521406 0.520882
2Y 0.523097 0.523405 0.522800
3Y 0.525054 0.525254 0.524647
4Y 0.527055 0.527151 0.526752
5Y 0.529054 0.529256 0.528957
6Y 0.531099 0.531543 0.531065
7Y 0.532852 0.533457 0.532856
8Y 0.534207 0.534802 0.534325
9Y 0.535314 0.535847 0.535558
</code></pre>
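<p>A variation on the same idea, if you want to keep the ordering reusable, is an ordered categorical index. This is only a sketch, assuming the same <code>df_pivot</code> and day-count convention as above:</p>
<pre><code>import pandas as pd

# Approximate each tenor as a number of days so tenors can be compared
def tenor_days(tenor):
    units = {"D": 1, "M": 365.0 / 12, "Y": 365}
    return int(tenor[:-1]) * units[tenor[-1]]

ordered = sorted(df_pivot.index, key=tenor_days)
# An ordered CategoricalIndex makes sort_index respect the tenor order
df_pivot.index = pd.CategoricalIndex(df_pivot.index, categories=ordered, ordered=True)
df_pivot = df_pivot.sort_index()
</code></pre>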
|
python|pandas|google-visualization
| 2
|
377,092
| 21,319,929
|
How to determine whether a Pandas Column contains a particular value
|
<p>I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with <code>if x in df['id']</code>. I thought this was working, except when I fed it a value that I knew was not in the column, <code>43 in df['id']</code>, it still returned <code>True</code>. When I subset to a data frame only containing entries matching the missing id, <code>df[df['id'] == 43]</code>, there are, obviously, no entries in it. How do I determine whether a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this <a href="https://stackoverflow.com/a/19630449/2327821">answer</a> to a similar question).</p>
|
<p>Using <code>in</code> on a Series checks whether the value is in the index:</p>
<pre><code>In [11]: s = pd.Series(list('abc'))
In [12]: s
Out[12]:
0 a
1 b
2 c
dtype: object
In [13]: 1 in s
Out[13]: True
In [14]: 'a' in s
Out[14]: False
</code></pre>
<p>One option is to see if it's in <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unique.html">unique</a> values:</p>
<pre><code>In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)
In [22]: 'a' in s.unique()
Out[22]: True
</code></pre>
<p>or a python set:</p>
<pre><code>In [23]: set(s)
Out[23]: {'a', 'b', 'c'}
In [24]: 'a' in set(s)
Out[24]: True
</code></pre>
<p>As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to use <code>in</code> directly on the values:</p>
<pre><code>In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)
In [32]: 'a' in s.values
Out[32]: True
</code></pre>
|
python|pandas
| 313
|
377,093
| 21,399,091
|
Pure python faster than numpy on element-wise operation?
|
<p>Can someone explain these results to me? The pure Python version seems to take less time than the
NumPy expression for element-wise exponentiation.</p>
<pre><code>In [224]: ar=np.arange(1000)
%timeit ar**4
100000 loops, best of 3: 5.22 µs per loop
In [225]: ar=range(1000)
%timeit [ar[i]**4 for i in ar]
1000 loops, best of 3: 205 µs per loop
</code></pre>
<p>Numpy total time= 100000*5.22 = 522000 µs</p>
<p>Pure Python total time = 1000*205 = 205000 µs</p>
|
<p>The pure Python version was timed for fewer iterations. That doesn't mean it was faster; it means <code>timeit</code> stopped running it so it wouldn't take 60 seconds* to get results. You can see from the per-loop time that the NumPy version was about 40 times faster.</p>
<p>*200 microseconds per loop * 100,000 loops * 3 repetitions of the timing procedure</p>
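<p>To compare the two fairly outside IPython, here is a minimal sketch using the <code>timeit</code> module with a fixed loop count, so both statements run the same number of times:</p>
<pre><code>import timeit

# Run each statement exactly 10,000 times and report the total time in seconds
numpy_t = timeit.timeit("ar**4", setup="import numpy as np; ar = np.arange(1000)", number=10000)
python_t = timeit.timeit("[x**4 for x in ar]", setup="ar = range(1000)", number=10000)
print("numpy: {:.3f}s, pure python: {:.3f}s".format(numpy_t, python_t))
</code></pre>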
|
python|numpy
| 4
|
377,094
| 21,024,066
|
Annotate heatmap with value from Pandas dataframe
|
<p>I would like to annotate a heatmap with the values that I pass from a dataframe into the function below. I have looked at matplotlib.text but have not been able to place the values from my dataframe where I want them in the heatmap. Below is my function for generating a heatmap, followed by my dataframe and the output of the heatmap call. I would like to plot each value from my dataframe in the center of each cell in the heatmap.</p>
<p>Function for generating a heatmap:</p>
<pre><code>import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as mcolors
import numpy as np
def heatmap_binary(df,
edgecolors='w',
#cmap=mpl.cm.RdYlGn,
log=False):
width = len(df.columns)/7*10
height = len(df.index)/7*10
fig, ax = plt.subplots(figsize=(20,10))#(figsize=(width,height))
cmap, norm = mcolors.from_levels_and_colors([0, 0.05, 1],['Teal', 'MidnightBlue'] ) # ['MidnightBlue', Teal]['Darkgreen', 'Darkred']
heatmap = ax.pcolor(df ,
edgecolors=edgecolors, # put white lines between squares in heatmap
cmap=cmap,
norm=norm)
ax.autoscale(tight=True) # get rid of whitespace in margins of heatmap
ax.set_aspect('equal') # ensure heatmap cells are square
ax.xaxis.set_ticks_position('top') # put column labels at the top
ax.tick_params(bottom='off', top='off', left='off', right='off') # turn off ticks
plt.yticks(np.arange(len(df.index)) + 0.5, df.index, size=20)
plt.xticks(np.arange(len(df.columns)) + 0.5, df.columns, rotation=90, size= 15)
# ugliness from http://matplotlib.org/users/tight_layout_guide.html
from mpl_toolkits.axes_grid1 import make_axes_locatable
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", "3%", pad="1%")
plt.colorbar(heatmap, cax=cax)
plt.show()
</code></pre>
<p>Here is an example of my dataframe:</p>
<pre><code>dataframe :
0-5 km / h 5-40 km / h 40-80 km / h 80-120 km / h \
NORDIC 0.113955 0.191888 0.017485 -0.277528
MIDDLE EU 0.117903 0.197084 -0.001447 -0.332677
KOREA 0.314008 0.236503 -0.067174 -0.396518
CHINA 0.314008 0.236503 -0.067174 -0.396518
120-160 km / h 160-190 km / h 190 km / h
NORDIC -0.054365 0.006107 0.002458
MIDDLE EU 0.002441 0.012097 0.004599
KOREA -0.087191 0.000331 0.000040
CHINA -0.087191 0.000331 0.000040
</code></pre>
<p>Generating the heatmap:</p>
<pre><code>heatmap_binary(dataframe)
</code></pre>
<p><img src="https://i.stack.imgur.com/hULRF.png" alt="enter image description here"></p>
<p>Any ideas?</p>
<hr>
<p>Update to clarify my problem</p>
<p>I tried the proposed solution from this question, which produces the kind of result I'm looking for:
<a href="https://stackoverflow.com/questions/11917547/how-to-annotate-heatmap-with-text-in-matplotlib">how to annotate heatmap with text in matplotlib?</a>
However, I still have a problem using the matplotlib.text function to position the values in the heatmap.
Here is my code for trying this solution:</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
data = dataframe.values
heatmap_binary(dataframe)
for y in range(data.shape[0]):
for x in range(data.shape[1]):
plt.text(data[y,x] +0.05 , data[y,x] + 0.05, '%.4f' % data[y, x], #data[y,x] +0.05 , data[y,x] + 0.05
horizontalalignment='center',
verticalalignment='center',
color='w')
#plt.colorbar(heatmap)
plt.show()
</code></pre>
<p>Added plot (different coloring but same problem):
<img src="https://i.stack.imgur.com/5Lumz.png" alt="enter image description here"></p>
|
<p>This functionality is provided by the <a href="http://stanford.edu/~mwaskom/software/seaborn/examples/heatmap_annotation.html" rel="noreferrer">seaborn</a> package. It can produce maps like</p>
<p><a href="https://i.stack.imgur.com/CPmBQ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/CPmBQ.png" alt="Example annotated heatmap"></a></p>
<p>An example usage of <a href="http://stanford.edu/~mwaskom/software/seaborn/examples/heatmap_annotation.html" rel="noreferrer">seaborn</a> is</p>
<pre><code>import seaborn as sns
sns.set()
# Load the example flights dataset and pivot it to wide form
flights_long = sns.load_dataset("flights")
flights = flights_long.pivot("month", "year", "passengers")
# Draw a heatmap with the numeric values in each cell
sns.heatmap(flights, annot=True, fmt="d", linewidths=.5)
</code></pre>
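<p>If you would rather stay with plain matplotlib, note that the positioning problem in the question's update comes from passing the data <em>values</em> as text coordinates. A minimal sketch, assuming <code>data</code> and <code>plt</code> as in the question's update, places each label at the cell centre instead:</p>
<pre><code>for y in range(data.shape[0]):
    for x in range(data.shape[1]):
        # cell (x, y) spans [x, x+1] x [y, y+1] in pcolor coordinates,
        # so its centre is at (x + 0.5, y + 0.5)
        plt.text(x + 0.5, y + 0.5, '%.4f' % data[y, x],
                 horizontalalignment='center',
                 verticalalignment='center',
                 color='w')
</code></pre>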
|
python|text|matplotlib|pandas|heatmap
| 9
|
377,095
| 3,165,379
|
how to display a numpy array with pyglet?
|
<p>I have a label matrix with dimensions (100, 100), stored as a numpy array, and I would like to display the matrix with pyglet.</p>
<p>My original idea is to use this matrix to form a new pyglet image using the function pyglet.image.ImageData(). It requires a buffer of the image data as input; however, I have no idea how to get a correctly formatted buffer from the numpy array.</p>
<p>Does anyone have any idea?</p>
<p>ps. my current solution:</p>
<pre><code>import ctypes
import numpy
import pyglet

label_3d = numpy.empty([100, 100, 3])
label_3d[:,:,0] = label * 255  # value range of label is [0,1]
label_3d[:,:,1] = label * 255
label_3d[:,:,2] = label * 255
image_data = ctypes.string_at(id(label_3d.tostring()) + 20, 100*100*3)
image = pyglet.image.ImageData(100, 100, 'RGB', image_data, -100*3)
</code></pre>
<p>Is there a better way to construct a [100, 100, 3] array from three [100, 100] arrays with numpy?</p>
|
<p>I think what you are looking for is <code>np.dstack</code> (or more generally, <code>np.concatenate</code>):</p>
<pre><code>label255=label*255
label3=numpy.dstack((label255,label255,label255))
</code></pre>
<p>This shows <code>dstack</code> produces the same array (<code>label3</code>) as your construction for <code>label_3d</code>:</p>
<pre><code>import numpy as np
label=np.random.random((100,100))
label255=label*255
label3=np.dstack((label255,label255,label255))
label_3d = np.empty([100,100,3])
label_3d[:,:,0] = label * 255 # value range of label is [0,1]
label_3d[:,:,1] = label * 255
label_3d[:,:,2] = label * 255
print(np.all(label3==label_3d))
# True
</code></pre>
<p>PS. I'm not sure, but have you tried using <code>label3.data</code> instead of <code>ctypes.string_at(id(label3.tostring())+20, 100*100*3)</code> ?</p>
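<p>Putting it together, here is a minimal sketch of handing the stacked array to pyglet without the <code>ctypes</code> trick, assuming the <code>ImageData</code> signature from the question:</p>
<pre><code>import numpy as np
import pyglet

label = np.random.random((100, 100))
label3 = np.dstack((label, label, label)) * 255
# ImageData expects raw bytes: one uint8 per channel
image_data = label3.astype(np.uint8).tobytes()
image = pyglet.image.ImageData(100, 100, 'RGB', image_data, -100 * 3)
</code></pre>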
|
python|numpy|pyglet
| 5
|
377,096
| 63,700,984
|
combine multiindex dataframe with Int64Index dataframe
|
<p>I have one dataframe with a MultiIndex:</p>
<p>result =</p>
<pre><code>MultiIndex([(1, 'HK_MN001'),
(2, 'HK_MN001'),
(3, 'HK_MN002'),
(4, 'HK_MN003'),
(5, 'HK_MN004'),
(6, 'HK_MN005'),
(7, 'HK_MN005'),
(8, 'HK_MN005')],
names=['ID1', 'ID2'])
</code></pre>
<p>Another dataframe with index:</p>
<p>photo_df:</p>
<pre><code>Int64Index([1, 2, 3, 4, 5, 6, 7, 8], dtype='int64', name='ID1')
</code></pre>
<p>I want to concatenate the two dataframes, but it gives me an error.
Code:</p>
<pre><code>result = pd.concat([result,photo_df], axis = 1,sort=False)
error:
"Can only union MultiIndex with MultiIndex or Index of tuples, "
NotImplementedError: Can only union MultiIndex with MultiIndex or Index of tuples, try mi.to_flat_index().union(other) instead.
</code></pre>
<p>Result Dataframe is:</p>
<p><a href="https://i.stack.imgur.com/S3Zt3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3Zt3.png" alt="enter image description here" /></a></p>
<p>photo_df dataframe is:</p>
<pre><code> PhotoID raw_photo
0 1 HK_MN001_DSC_2160_161014Ushio.JPG
1 2 HK_MN001_DSC_2308_161014Ushio.JPG
2 3 HK_MN002_DSC_2327_161014Ushio.JPG
3 4 HK_MN003_DSC_1474_181015Ushio.jpg
4 5 HK_MN004_DSC_1491_181015Ushio.jpg
5 6 HK_MN005_DSC_1506_181015Ushio.JPG
6 7 HK_MN005_DSC_1527_181015Ushio.JPG
7 8 HK_MN005_DSC_1528_181015Ushio.jpg
</code></pre>
<p>Required output dataframe (if possible, drop the ID1 index):
<a href="https://i.stack.imgur.com/s8gI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s8gI8.png" alt="enter image description here" /></a></p>
|
<p>I think you need to create a <code>MultiIndex</code> in both <code>DataFrames</code>:</p>
<pre><code>photo_df = photo_df.set_index('PhotoID', drop=False)
photo_df.columns = pd.MultiIndex.from_product([photo_df.columns, ['']])
print (photo_df)
PhotoID raw_photo
PhotoID
1 1 HK_MN001_DSC_2160_161014Ushio.JPG
2 2 HK_MN001_DSC_2308_161014Ushio.JPG
3 3 HK_MN002_DSC_2327_161014Ushio.JPG
4 4 HK_MN003_DSC_1474_181015Ushio.jpg
5 5 HK_MN004_DSC_1491_181015Ushio.jpg
6 6 HK_MN005_DSC_1506_181015Ushio.JPG
7 7 HK_MN005_DSC_1527_181015Ushio.JPG
8 8 HK_MN005_DSC_1528_181015Ushio.jpg
# the second level ID2 becomes a column after reset_index(level=1)
result = pd.concat([result.reset_index(level=1),photo_df], axis = 1,sort=False)
</code></pre>
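<p>The question also asks to drop the ID1 index afterwards; a small follow-up sketch, assuming the <code>result</code> from above:</p>
<pre><code># ID1 is now the plain index of the concatenated frame; discard it
result = result.reset_index(drop=True)
</code></pre>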
|
python-3.x|pandas|dataframe|concat|multi-index
| 1
|
377,097
| 63,712,804
|
Combine two dataframes according to the values in one of the columns
|
<pre><code>dataframe1
data_a data_b data_c data_d data_e
61 0.30792 Rest 2.34857 True
183 0.93408 Rest 2.34550 True
305 1.56019 Rest 2.34215 True
427 2.18636 Rest 2.33955 True
549 2.81252 Rest 2.33660 True
dataframe2
data_a data_b data_c data_d data_e
122 0.62616 Discharge 2.32013 False
244 1.25233 Discharge 2.31390 False
366 1.87844 Discharge 2.31087 False
488 2.50460 Discharge 2.30819 False
610 3.13077 Discharge 2.30567 False
</code></pre>
<p>I would like the out put to be as follows:</p>
<pre><code>dataframe3
data_a data_b data_c data_d data_e
61 0.30792 Rest 2.34857 True
122 0.62616 Discharge 2.32013 False
183 0.93408 Rest 2.34550 True
244 1.25233 Discharge 2.31390 False
305 1.56019 Rest 2.34215 True
366 1.87844 Discharge 2.31087 False
427 2.18636 Rest 2.33955 True
488 2.50460 Discharge 2.30819 False
549 2.81252 Rest 2.33660 True
610 3.13077 Discharge 2.30567 False
</code></pre>
<p>As you can see, the new dataframe should be sorted according to the sequential order of the values in the data_a column.</p>
|
<p>Use <code>concat</code>, then <code>sort_values</code>:</p>
<pre><code>df3 = pd.concat([df1, df2]).sort_values('data_a')
</code></pre>
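<p>Note that <code>concat</code> keeps the original indices from both frames; if you also want a clean sequential index, a small variation (assuming the same <code>df1</code> and <code>df2</code>):</p>
<pre><code>import pandas as pd

df3 = pd.concat([df1, df2]).sort_values('data_a').reset_index(drop=True)
</code></pre>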
|
python|pandas|dataframe|data-science
| 2
|
377,098
| 63,484,261
|
Get a specific value from a cell
|
<p>Below is my df.</p>
<pre><code>import pandas as pd
df = pd.DataFrame ({
'IP':['10.140.34.210;0.0.0.0','0.0.0.0;0.0.0.0;10.0.1.87;0.0.0.0;0.0.0.0','0.0.0.0;172.31.48.174',
'10.140.67.244;0.0.0.0', '1.1.1.1','3.3.3.3'],
})
print(df)
IP
0 10.140.34.210;0.0.0.0
1 0.0.0.0;0.0.0.0;10.0.1.87;0.0.0.0;0.0.0.0
2 0.0.0.0;172.31.48.174
3 10.140.67.244;0.0.0.0
4 1.1.1.1
5 3.3.3.3
</code></pre>
<p>What I would like to achieve is to keep only the real IP address in the <strong>IP</strong> column, without any 0.0.0.0 entries. This is the expected output.</p>
<pre><code>
IP
0 10.140.34.210
1 10.0.1.87
2 172.31.48.174
3 10.140.67.244
4 1.1.1.1
5 3.3.3.3
</code></pre>
<p>I tried with split but it doesn't do the job.</p>
<pre><code>df = df['IP'].str.split(';',expand=True)
print(df)
0 1 2 3 4
0 10.140.34.210 0.0.0.0 None None None
1 0.0.0.0 0.0.0.0 10.0.1.87 0.0.0.0 0.0.0.0
2 0.0.0.0 172.31.48.174 None None None
3 10.140.67.244 0.0.0.0 None None None
4 1.1.1.1 None None None None
5 3.3.3.3 None None None None
</code></pre>
<p>Any idea? Thank you!</p>
|
<p>If that's the only exceptional value you need to get rid of, use <code>replace</code> with a regex:</p>
<pre><code>print(df["IP"].replace(";?0\.0\.0\.0;?","", regex=True))
0 10.140.34.210
1 10.0.1.87
2 172.31.48.174
3 10.140.67.244
4 1.1.1.1
5 3.3.3.3
Name: IP, dtype: object
</code></pre>
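<p>An alternative that avoids regex, in case other placeholder values show up later, is to split on <code>;</code> and filter; a sketch assuming the same <code>df</code>:</p>
<pre><code>df["IP"] = (df["IP"].str.split(";")
            .apply(lambda ips: ";".join(ip for ip in ips if ip != "0.0.0.0")))
print(df)
</code></pre>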
|
python-3.x|pandas
| 3
|
377,099
| 63,622,985
|
query reg dimension of conv2D layer in cnn
|
<p>I have a Conv2D layer with an input dimension of 256×226×3:</p>
<pre class="lang-py prettyprint-override"><code>self.conv1 = self.track_layer(tf.layers.Conv2D(
32, 9, 1, 'SAME',
activation=tf.nn.relu,
kernel_initializer=conv_init,
))
</code></pre>
<p>Can anyone tell me what the output dimension is after passing my input through this convolutional layer?
The syntax of this code seems slightly different from the common forms I see.</p>
|
<p>The output shape of this conv layer will essentially remain the same, because of the 'SAME' padding with stride 1. If you find the calculation of the output shape a bit intimidating, a simple way to measure it is to build a small model with the same input size and print its summary:</p>
<pre><code>import tensorflow as tf
main_model = tf.keras.models.Sequential()
main_model.add(tf.keras.layers.Conv2D(32,9,1,"SAME",input_shape=(256,226,3)))
main_model.build()
main_model.summary()
</code></pre>
<p>The OUTPUT:</p>
<pre><code>Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_3 (Conv2D) (None, 256, 226, 32) 7808
=================================================================
Total params: 7,808
Trainable params: 7,808
Non-trainable params: 0
</code></pre>
<p>The formula for calculating these numbers is straightforward: for 'valid' padding with stride <em>s</em>, each spatial dimension becomes floor((input - kernel) / s) + 1, while 'SAME' padding pads so that the output size is ceil(input / s). A full walkthrough is accessible <a href="https://towardsdatascience.com/understanding-and-calculating-the-number-of-parameters-in-convolution-neural-networks-cnns-fc88790d530d" rel="nofollow noreferrer">here</a>.</p>
<p>For <strong>valid padding</strong> do:</p>
<pre><code>import tensorflow as tf
main_model = tf.keras.models.Sequential()
main_model.add(tf.keras.layers.Conv2D(32,9,1,"valid",input_shape=(256,226,3)))
main_model.build()
main_model.summary()
</code></pre>
<p>The OUTPUT will be :</p>
<pre><code>Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 248, 218, 32) 7808
=================================================================
Total params: 7,808
Trainable params: 7,808
Non-trainable params: 0
</code></pre>
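<p>As a worked check against the formula above (assuming stride 1 and no dilation): with 'SAME' padding the spatial dimensions are unchanged at 256x226, while with 'valid' padding each dimension shrinks by kernel_size - 1, giving 256 - 9 + 1 = 248 and 226 - 9 + 1 = 218, matching both summaries. The parameter count is 9*9*3*32 + 32 = 7,808 in both cases, since padding does not affect the weights.</p>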
|
tensorflow|deep-learning|conv-neural-network
| 1
|