Dataset columns:
Unnamed: 0: int64, 0 to 378k
id: int64, 49.9k to 73.8M
title: string, length 15 to 150
question: string, length 37 to 64.2k
answer: string, length 37 to 44.1k
tags: string, length 5 to 106
score: int64, -10 to 5.87k
4,200
67,932,445
Least-squares optimization in Python with a single equality constraint
<p>I am looking to solve a least-squares problem in python such that I minimize <code>0.5*||np.dot(A,x) - b||^2</code> subject to the constraint <code>np.dot(C,x) = d</code>, with the bounds <code>0&lt;x&lt;np.inf</code>, where</p> <pre><code>A : shape=[m, n] C : shape=[m, n] b : shape=[m] d : shape=[m] </code></pre> <p>are all known matrices.</p> <p>Unfortunately, it looks like the scipy.optimize.lsq_linear() function only works for an upper/lower bound constraint:</p> <pre><code>minimize 0.5 * ||A x - b||**2 subject to lb &lt;= x &lt;= ub </code></pre> <p>not an equality constraint. In addition, I would like to add bounds for the solution such that it is positive ONLY. Is there an easy or clean way to do this using <code>scipy</code> or <code>NumPy</code> built-in functions?</p>
<p>If you stick to <code>scipy.optimize.lsq_linear</code>, you could add a penalty term to the objective function, i.e.</p> <pre><code>minimize 0.5 * ||A x - b||**2 + beta*||Cx-d||**2 subject to lb &lt;= x &lt;= ub </code></pre> <p>and choose the scalar <code>beta</code> sufficiently large in order to transform your constrained optimization problem into a problem with simple box-constraints. However, it is more convenient to use <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html" rel="nofollow noreferrer"><code>scipy.optimize.minimize</code></a> for constrained optimization problems:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.optimize import minimize # ... Define your np.ndarrays A, C, b, d here # constraint cons = [{'type': 'eq', 'fun': lambda x: C @ x - d}] # variable bounds bnds = [(0, None) for _ in range(A.shape[1])] # initial point x0 = np.ones(A.shape[1]) # res.x contains your solution res = minimize(lambda x: np.linalg.norm(A@x-b)**2, x0=x0, bounds=bnds, constraints=cons) </code></pre> <p>where I used the Euclidean norm. Timing this for a toy example with <code>m = 2000</code> and <code>n = 1000</code> yields 3min and 10s on my machine.</p> <p>Note that both the objective gradient and the constraint Jacobian are approximated by finite differences, which requires many evaluations of the objective and the constraint function. Consequently, the code might be too slow for large problems. In this case, it's worth providing the exact gradient, Jacobian and Hessian. Furthermore, since</p> <pre class="lang-none prettyprint-override"><code># f(x) = ||Ax-b||^2 = x'A'Ax - 2b'Ax + b'b # grad f(x) = 2A'Ax - 2A'b # hess f(x) = 2A'A </code></pre> <p>we can significantly reduce the time needed to evaluate the functions by precalculating <code>A'A</code>, <code>b'A</code> and <code>b'b</code>. Additionally, we only need to calculate <code>A'Ax</code> once:</p> <pre><code>from scipy.optimize import NonlinearConstraint # Precalculate the matrix-vector products AtA = A.T @ A btA = b.T @ A btb = b.T @ b Atb = btA.T # return the objective value and the objective gradient def obj_and_grad(x): AtAx = AtA @ x obj = x.T @ AtAx - 2*btA @ x + btb grad = 2*AtAx - 2*Atb return obj, grad #constraint d &lt;= C@x &lt;= d (including the jacobian and hessian) con = NonlinearConstraint(lambda x: C @ x, d, d, jac=lambda x: C, hess=lambda x, v: v[0] * np.zeros(x.size)) # res.x contains your solution res = minimize(obj_and_grad, jac=True, hess=lambda x: 2*AtA, x0=x0, bounds=bnds, constraints=(con,), method=&quot;trust-constr&quot;) </code></pre> <p>Here, the option <code>jac=True</code> tells <code>minimize</code> that our function <code>obj_and_grad</code> returns a tuple containing the objective function and the gradient. We further choose the 'trust-constr' method, since it is the only one available that supports Hessians. Timing this again for the same aforementioned toy example yields 7s on my machine.</p>
python|numpy|optimization|scipy|least-squares
3
4,201
67,716,457
How to find all combinations of two columns in a DataFrame when there are multiple columns?
<p>I have the following dataframe...</p> <pre><code>df1: playerA playerB PlayerC PlayerD kim lee b f jackson kim d g dan lee a d </code></pre> <p>I want to generate a new data frame with all possible combinations of two columns. For example,</p> <pre><code>df_new: Target Source kim lee kim kim kim lee kim b kim d kim a kim f kim g kim d jackson lee jackson kim jackson lee jackson b . . . . lee kim lee jackson lee dan lee b lee d . . . </code></pre> <p>Thus, I tried this code:</p> <pre><code>import itertools def comb(df1): return [df1.loc[:, list(x)].set_axis(['Target','Source'], axis=1) for x in itertools.combinations(df1.columns, 2)] </code></pre> <p>However, it only shows combinations between columns in the same row.</p> <p>Is there any way that I could generate all the possible combinations between columns? Thanks in advance!</p>
<p>Here's an approach using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.melt.html?highlight=melt#pandas.DataFrame.melt" rel="nofollow noreferrer"><code>pandas.DataFrame.melt()</code></a> and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.merge.html?highlight=merge#pandas.merge" rel="nofollow noreferrer"><code>pandas.merge()</code></a></p> <pre><code>&gt;&gt;&gt; df1 playerA playerB PlayerC PlayerD 0 kim lee b f 1 jackson kim d g 2 dan lee a d &gt;&gt;&gt; target = df1.melt(value_name='Source')[['Source']] &gt;&gt;&gt; df_new = pd.merge(target.rename(columns={'Source':'Target'}), target, how='cross') &gt;&gt;&gt; df_new Target Source 0 kim kim 1 kim jackson 2 kim dan 3 kim lee 4 kim kim .. ... ... 139 d d 140 d a 141 d f 142 d g 143 d d </code></pre> <p>This approach doesn't consider same indices of <code>Target</code> and <code>Source</code>, but you can easily drop those rows using simple math as follows:</p> <pre><code>&gt;&gt;&gt; indices_to_drop = [idx * len(target) + idx for idx in range(len(target))] &gt;&gt;&gt; indices_to_drop [0, 13, 26, 39, 52, 65, 78, 91, 104, 117, 130, 143] &gt;&gt;&gt; df_new.drop(indices_to_drop).reset_index(drop=True) Target Source 0 kim jackson 1 kim dan 2 kim lee 3 kim kim 4 kim lee .. ... ... 127 d b 128 d d 129 d a 130 d f 131 d g </code></pre>
python|pandas|itertools
2
4,202
67,680,199
getting the index of min values with Numpy Python
<p>The function below separates each value into chunks separated by indexes <code>index</code> with the values in <code>L_list</code>. So it outputs the minimum value between indexes <code>3-5</code>, which is -5, and the index of that value. Both the <code>numpy_argmin_reduceat(a, b)</code> and the <code>Drawdown</code> function do as planned; however, the index output of <code>numpy_argmin_reduceat(a, b)</code> is faulty: the minimum values of <code>Drawdown</code> do not match the indexes output by <code>numpy_argmin_reduceat(a, b)</code>. How would I be able to solve this? Arrays:</p> <pre><code>import numpy as np # indexes 0, 1, 2,3,4, 5, 6,7, 8, 9,10, 11, 12 L_list = np.array([10,20,30,0,0,-5,11,2,33, 4, 5, 68, 7]) index = np.array([3,5,7,11]) </code></pre> <p>Functions:</p> <pre><code>#getting the minimum values Drawdown = np.minimum.reduceat(L_list,index+1) #Getting the min Index def numpy_argmin_reduceat(a, b): n = a.max() + 1 # limit-offset id_arr = np.zeros(a.size,dtype=int) id_arr[b] = 1 shift = n*id_arr.cumsum() sortidx = (a+shift).argsort() grp_shifted_argmin = b idx =sortidx[grp_shifted_argmin] - b min_idx = idx +index return min_idx min_idx =numpy_argmin_reduceat(L_list,index+1) #printing function DR_val_index = np.array([np.around(Drawdown,1), min_idx]) DR_result = np.apply_along_axis(lambda x: print(f'Min Values: {x[0]} at index: {x[1]}'), 0, DR_val_index) </code></pre> <p>Output</p> <pre><code>Min Values: -5 at index: 4 Min Values: 2 at index: 6 Min Values: 4 at index: 8 Min Values: 7 at index: 11 </code></pre> <p>Expected Output:</p> <pre><code>Min Values: -5 at index: 5 Min Values: 2 at index: 7 Min Values: 4 at index: 9 Min Values: 7 at index: 12 </code></pre>
<p>If you change the line</p> <pre class="lang-python prettyprint-override"><code>id_arr[b[1:]] = 1 </code></pre> <p>to</p> <pre class="lang-python prettyprint-override"><code>id_arr[b] = 1 </code></pre> <p>I think the function will behave as you hope.</p>
arrays|python-3.x|function|numpy|indexing
0
4,203
61,358,401
How to use a function that returns numpy array within pandas apply
<p>I have a Data Frame that looks like this:</p> <pre><code>import pandas as pd df_dict = {'var1': {(1, 1.0, 'obj1'): 1.0, (1, 1.0, 'obj4'): 1.0, (1, 1.0, 'obj3'): 2.0, (1, 1.0, 'obj5'): 2.0, (1, 1.0, 'obj2'): 3.0, (1, 2.0, 'obj1'): 1.0, (1, 2.0, 'obj4'): 1.0, (1, 2.0, 'obj3'): 2.0, (1, 2.0, 'obj5'): 2.0, (1, 2.0, 'obj2'): 3.0, (1, 3.0, 'obj1'): 1.0, (1, 3.0, 'obj4'): 1.0, (1, 3.0, 'obj3'): 2.0, (1, 3.0, 'obj5'): 2.0, (1, 3.0, 'obj2'): 3.0, (1, 4.0, 'obj1'): 1.0, (1, 4.0, 'obj4'): 1.0, (1, 4.0, 'obj3'): 2.0, (1, 4.0, 'obj5'): 2.0, (1, 4.0, 'obj2'): 3.0}, 'var2': {(1, 1.0, 'obj1'): -0.9799804687499858, (1, 1.0, 'obj4'): 0.009998139880948997, (1, 1.0, 'obj3'): -1.0299944196428612, (1, 1.0, 'obj5'): 0.029994419642846992, (1, 1.0, 'obj2'): 1.9999999999999574, (1, 2.0, 'obj1'): -1.0200195312500426, (1, 2.0, 'obj4'): 0.07001023065477341, (1, 2.0, 'obj3'): -0.6900111607143344, (1, 2.0, 'obj5'): -0.03999255952379599, (1, 2.0, 'obj2'): 1.9400111607142634, (1, 3.0, 'obj1'): -1.0599888392857082, (1, 3.0, 'obj4'): 0.1399972098214164, (1, 3.0, 'obj3'): -0.36002604166661456, (1, 3.0, 'obj5'): -0.12002418154757777, (1, 3.0, 'obj2'): 1.8699776785714306, (1, 4.0, 'obj1'): -1.09000651041665, (1, 4.0, 'obj4'): 0.1900111607142918, (1, 4.0, 'obj3'): -0.029994419642918047, (1, 4.0, 'obj5'): -0.2000093005952408, (1, 4.0, 'obj2'): 1.8099888392857366}, 'var3': {(1, 1.0, 'obj1'): 0.0, (1, 1.0, 'obj4'): -1.9899974149816302, (1, 1.0, 'obj3'): -0.020033892463189318, (1, 1.0, 'obj5'): -0.03999597886028994, (1, 1.0, 'obj2'): -0.029979032628659752, (1, 2.0, 'obj1'): 0.050012925091920124, (1, 2.0, 'obj4'): -1.999978458180145, (1, 2.0, 'obj3'): 0.19003475413597926, (1, 2.0, 'obj5'): 0.18996294806989056, (1, 2.0, 'obj2'): -0.029979032628730806, (1, 3.0, 'obj1'): 0.10002585018380472, (1, 3.0, 'obj4'): -2.03001134535846, (1, 3.0, 'obj3'): 0.3900146484375, (1, 3.0, 'obj5'): 0.41001263786760944, (1, 3.0, 'obj2'): -0.040031881893369814, (1, 4.0, 'obj1'): 0.1499669692095651, (1, 4.0, 'obj4'): -2.040010340073515, (1, 4.0, 'obj3'): 0.5999755859375, (1, 4.0, 'obj5'): 0.6100284352022101, (1, 4.0, 'obj2'): -0.05999396829039938}} df = pd.DataFrame.from_dict(df_dict) var1 var2 var3 measurement_id repeat_id object 1 1.0 obj1 1.0 -0.979980 0.000000 obj4 1.0 0.009998 -1.989997 obj3 2.0 -1.029994 -0.020034 obj5 2.0 0.029994 -0.039996 obj2 3.0 2.000000 -0.029979 2.0 obj1 1.0 -1.020020 0.050013 obj4 1.0 0.070010 -1.999978 obj3 2.0 -0.690011 0.190035 obj5 2.0 -0.039993 0.189963 obj2 3.0 1.940011 -0.029979 3.0 obj1 1.0 -1.059989 0.100026 obj4 1.0 0.139997 -2.030011 obj3 2.0 -0.360026 0.390015 obj5 2.0 -0.120024 0.410013 obj2 3.0 1.869978 -0.040032 4.0 obj1 1.0 -1.090007 0.149967 obj4 1.0 0.190011 -2.040010 obj3 2.0 -0.029994 0.599976 obj5 2.0 -0.200009 0.610028 obj2 3.0 1.809989 -0.059994 </code></pre> <p>I'd like to smooth var2 with <code>scipy.signal.savgol_filter</code> but I need to do this for subsequent object. So my call looks like this:</p> <pre><code>import scipy.signal as signal df.groupby(['measurement_id', 'object'])['var2'].apply(lambda x: signal.savgol_filter(x, window_length=3, polyorder=2)) measurement_id object 1 obj1 [-0.9799804687499857, -1.0200195312500429, -1.... obj2 [1.9999999999999565, 1.9400111607142636, 1.869... obj3 [-1.0299944196428608, -0.6900111607143345, -0.... obj4 [0.009998139880949027, 0.07001023065477342, 0.... obj5 [0.02999441964284698, -0.039992559523796, -0.1... 
Name: var2, dtype: object </code></pre> <p>However, as the output of <code>savgol_filter</code> is <code>np.ndarray</code>, I'm not really sure how to properly assign the output as a new column <code>var4</code>. I have tried with pandas <code>explode</code> but I'm still lacking the order to do a proper assignment. </p>
<pre><code>import scipy.signal as signal df['var4'] = signal.savgol_filter(df.var2.values, window_length=3, polyorder=2) </code></pre> <p>gives you this output as well:</p> <pre><code>print(df) var1 var2 var3 var4 measurement_id repeat_id object 1 1.0 obj1 1.0 -0.979980 0.000000 -0.979980 obj4 1.0 0.009998 -1.989997 0.009998 obj3 2.0 -1.029994 -0.020034 -1.029994 obj5 2.0 0.029994 -0.039996 0.029994 obj2 3.0 2.000000 -0.029979 2.000000 2.0 obj1 1.0 -1.020020 0.050013 -1.020020 obj4 1.0 0.070010 -1.999978 0.070010 obj3 2.0 -0.690011 0.190035 -0.690011 obj5 2.0 -0.039993 0.189963 -0.039993 obj2 3.0 1.940011 -0.029979 1.940011 3.0 obj1 1.0 -1.059989 0.100026 -1.059989 obj4 1.0 0.139997 -2.030011 0.139997 obj3 2.0 -0.360026 0.390015 -0.360026 obj5 2.0 -0.120024 0.410013 -0.120024 obj2 3.0 1.869978 -0.040032 1.869978 4.0 obj1 1.0 -1.090007 0.149967 -1.090007 obj4 1.0 0.190011 -2.040010 0.190011 obj3 2.0 -0.029994 0.599976 -0.029994 obj5 2.0 -0.200009 0.610028 -0.200009 obj2 3.0 1.809989 -0.059994 1.809989 </code></pre> <hr> <p><em>UPDATE</em></p> <p>the <a href="https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.signal.savgol_filter.html#scipy-signal-savgol-filter" rel="nofollow noreferrer">scipy savgol filter</a> function call looks the following:</p> <pre><code>scipy.signal.savgol_filter(x, window_length, polyorder, deriv=0, delta=1.0, axis=-1, mode='interp', cval=0.0) </code></pre> <p>uses <code>axis=-1</code> by default, which means the passed numpy array <code>x</code> will be flattened (see <code>df.var2.values.reshape(-1).shape</code> -> <code>(20,)</code>). This results in the same as </p> <pre><code>signal.savgol_filter(df.var2.values, window_length=3, polyorder=2, axis=0) </code></pre> <p>due to multiple function calls in <code>scipy/signal/_savitzky_golay.py</code> (see <a href="https://github.com/scipy/scipy/blob/v0.15.1/scipy/signal/_savitzky_golay.py#L227" rel="nofollow noreferrer">github</a>) until <code>correlate1d([...])</code> in <code>ndimage/filters.py</code> is called with <code>axis=-1</code>. Then <code>_check_axis(axis, input.ndim)</code> with <code>_check_axis(axis=-1, input.ndim=1)</code> (<code>df.var2.values.ndim=1</code>) returns <code>0</code> --> <code>axis=0</code> returns the same in <code>signal.savgol_filter</code>. Thus I recommend sorting the whole array first:</p> <pre><code>df.sort_index(axis=0, level='repeat_id') df['var4'] = signal.savgol_filter(df.var2.values, window_length=3, polyorder=2) </code></pre> <p>which then returns:</p> <pre><code> var1 var2 var3 var4 measurement_id repeat_id object 1 1.0 obj1 1.0 -0.979980 0.000000 -0.979980 obj2 3.0 2.000000 -0.029979 2.000000 obj3 2.0 -1.029994 -0.020034 -1.029994 obj4 1.0 0.009998 -1.989997 0.009998 obj5 2.0 0.029994 -0.039996 0.029994 2.0 obj1 1.0 -1.020020 0.050013 -1.020020 obj2 3.0 1.940011 -0.029979 1.940011 obj3 2.0 -0.690011 0.190035 -0.690011 obj4 1.0 0.070010 -1.999978 0.070010 obj5 2.0 -0.039993 0.189963 -0.039993 3.0 obj1 1.0 -1.059989 0.100026 -1.059989 obj2 3.0 1.869978 -0.040032 1.869978 obj3 2.0 -0.360026 0.390015 -0.360026 obj4 1.0 0.139997 -2.030011 0.139997 obj5 2.0 -0.120024 0.410013 -0.120024 4.0 obj1 1.0 -1.090007 0.149967 -1.090007 obj2 3.0 1.809989 -0.059994 1.809989 obj3 2.0 -0.029994 0.599976 -0.029994 obj4 1.0 0.190011 -2.040010 0.190011 obj5 2.0 -0.200009 0.610028 -0.200009 </code></pre> <hr>
python|pandas
2
4,204
68,685,947
Proper method to make a class of other class instances
<p>I'm working on building a simple report generator that will run a query, put the data into a formatted excel sheet and email it.</p> <p>I am trying to get better at proper coding practices (cohesion, coupling, etc) and I am wondering if this is the proper way to build this or if there's a better way. It feels somewhat redundant to have to pass the arguments of the <code>Extractor</code> twice: once to the main class and then again to the subclass.</p> <p>Should I be using nested classes? <code>**kwargs</code>? Or is this correct?</p> <pre><code>from typing_extensions import ParamSpec import pandas as pd from sqlalchemy import create_engine from O365 import Account from jinjasql import JinjaSql class Emailer: pass class Extractor: '''Executes a sql query and returns the data.''' def __init__(self, query: str, database: str, connection: dict, param_query: bool = False) -&gt; None: self.query = query self.database = database # self.conn_params = connection self.engine = create_engine(connection) # self.engine = Reactor().get_engine(**self.conn_params) def parse_query(self) -&gt; None: '''If the query needs parameterization, do that here.''' pass def run_query(self) -&gt; None: '''Run a supplied query. Expects the name of the query.''' # TODO: Make this check if it's a query or a query name. with open(self.query, &quot;r&quot;) as f: query = f.read() return pd.read_sql_query(query, self.engine) class ReportGenerator: '''Main class''' def __init__(self, query: str, connection: dict, param_query: bool = False) -&gt; None: self.extractor = Extractor(query, connection, param_query) self.emailer = Emailer() def build_report(self) -&gt; None: pass </code></pre>
<p>This isn't a bad solution, but I think it can be improved.</p> <p>If you'll continue this correctly, ReportGenerator should never know about query/connection and other members of Extractor, thus it shouldn't be passed to his constructor. What you can do instead is create an Extractor and an Emailer before the creation of the ReportGenerator and pass them to the constructor.</p>
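<p>A minimal sketch of that idea, reusing the class names from the question (the constructor arguments shown for <code>Extractor</code> and the body of <code>build_report</code> are illustrative only):</p> <pre><code># build the collaborators first, then hand them to ReportGenerator
extractor = Extractor(query='daily_sales.sql', database='reporting',
                      connection=conn_params)      # conn_params defined elsewhere
emailer = Emailer()

class ReportGenerator:
    '''Main class: only knows that it has an Extractor and an Emailer.'''
    def __init__(self, extractor: Extractor, emailer: Emailer) -&gt; None:
        self.extractor = extractor
        self.emailer = emailer

    def build_report(self) -&gt; None:
        data = self.extractor.run_query()
        # format `data` and pass it to self.emailer here

report = ReportGenerator(extractor, emailer)
</code></pre> <p>This way the query/connection details are configured exactly once, on the object that actually uses them.</p>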
python|pandas
0
4,205
68,595,055
Python multiprocessing manager.list() not being passed to Matplotlib.animation correctly
<p>I have this 2 processes with a list created with manager.list() being shared between them, one is called DATA() and it is &quot;generating&quot; data and appending to the list and the other is plotting that data using Matplotlib animation FuncAnimation.</p> <p>The issue I am having, is that once I pass the list to the animate function</p> <p><em>[ani = FuncAnimation(plt.gcf(),animate,fargs= (List,), interval=1000)]</em></p> <p>the function is receiving a <strong>&lt;class 'int'&gt;</strong> instead of a <strong>&lt;class multiprocessing.managers.ListProxy' &gt;</strong>.</p> <p>Does anyone have any idea why this is happening?</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import multiprocessing as mp from multiprocessing import freeze_support,Manager import time from matplotlib.animation import FuncAnimation plt.style.use('fivethirtyeight') my_c = ['x','y'] initial = [[1,3],[2,4],[3,8],[4,6],[5,8],[6,5],[6,2],[7,7]] df = pd.DataFrame(columns=my_c) def data(List): for i in initial: #every 1 sec the list created with the manager.list() is updated List.append(i) #print(f&quot;list in loop {List}&quot;)#making sure list is not empty time.sleep(1) def animate(List,i): print(f&quot;list in animate {type(List)}&quot;)# prints &lt;class 'int'&gt; instead of &lt;class 'multiprocessing.managers.ListProxy'&gt; global df for l in List: print(f&quot; for loop: {type(l)}&quot;) df = df.append({'x':l[0],'y':l[1]}, ignore_index = True) plt.plot(df['x'],df['y'],label = &quot;Price&quot;) plt.tight_layout() def run(List): print(f&quot;run funciton {type(List)}&quot;) # prints &lt;class 'multiprocessing.managers.ListProxy'&gt; ani = FuncAnimation(plt.gcf(),animate,fargs= (List,), interval=1000) #passes List as an argument to Animate function plt.show() if __name__ == '__main__': manager = mp.Manager() List = manager.list() freeze_support() p1 = mp.Process(target = run,args =(List,)) p2 = mp.Process(target = data,args=(List,)) p2.start() p1.start() p2.join() p1.join() </code></pre>
<p>Turns out the animate function was receiving 2 arguments. This response has a better explanation: <a href="https://stackoverflow.com/questions/23944657/typeerror-method-takes-1-positional-argument-but-2-were-given">TypeError: method() takes 1 positional argument but 2 were given</a>. I had to modify my function from <code>def animate(List)</code> to <code>def animate(self, List)</code>.</p>
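<p>Concretely, <code>FuncAnimation</code> always passes the current frame number as the first positional argument and only then the values from <code>fargs</code>, so a sketch of the corrected callback (keeping the names from the question) looks like this:</p> <pre><code>def animate(i, List):
    # i is the frame counter supplied by FuncAnimation,
    # List is the manager.list() passed through fargs
    print(type(List))   # now a ListProxy instead of an int
    ...

ani = FuncAnimation(plt.gcf(), animate, fargs=(List,), interval=1000)
</code></pre>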
python|pandas|matplotlib|python-multiprocessing|multiprocessing-manager
0
4,206
68,622,776
Make dictionary keys into rows and dict values as columns with one value as column name and one as column value
<p>I have the following data:</p> <pre><code>print(tables['T10101'].keys()) dict_keys(['Y033RL', 'A007RL', 'A253RL', 'A646RL', 'A829RL', 'A008RL', 'A191RP', 'DGDSRL', 'A822RL', 'A824RL', 'A006RL', 'A825RL', 'A656RL', 'A823RL', 'Y001RL', 'DNDGRL', 'DDURRL', 'A021RL', 'A009RL', 'A020RL', 'DSERRL', 'A011RL', 'DPCERL', 'A255RL', 'A191RL']) </code></pre> <p>Each key has the following value, a list of tuples: The dates are the same but the values change.</p> <pre><code>[('-20.7', '1930'), ('-33.3', '1931'), ('-41.4', '1932'), ('2.5', '1933'), ('38.3', '1934'), ('36.3', '1935'), ('37.2', '1936'), ('16.2', '1937'), ('-30.4', '1938'), ('15.4', '1939'), ('29.6', '1940'), ('17.2', '1941'), ('-42.6', '1942'), ('-10.2', '1943'), ('33.7', '1944'), ('43.2', '1945'), ('24.7', '1946'), ('36.0', '1947'), ('-8.3', '1947Q2'), </code></pre> <p>I would like to create the following dataframe:</p> <pre><code> 1930 | 1931 | 1932 | 1933 | 1934 | Y033RL|-20.7| -33.3| -41.4| 2.5 | 38.3 | A007RL| data| data | data | data | data | </code></pre> <p>What's the best way to do this? I came up with this roundabout way of joining created dataframes but it is very inefficient as I have lots of data. I would like to have everything in a dictionary first and then convert it to one dataframe.</p> <pre><code>def dframeCreator(dataname): dframeList = list(tables[dataname].keys()) df = tables[dataname][dframeList[0]] for x in range(len(dframeList[1:])): df = df.join(tables[dataname][dframeList[x+1]]) return df </code></pre>
<p>We can use <code>dict</code> comprehension to normalize the given dictionary in a standard format which is suitable for creating a dataframe</p> <pre><code>d = tables['T10101'] df = pd.DataFrame({k: dict(map(reversed, v)) for k, v in d.items()}).T </code></pre> <hr /> <pre><code>print(df) 1930 1931 1932 1933 ... 1945 1946 1947 1947Q2 Y033RL -20.7 -33.3 -41.4 2.5 ... 43.2 24.7 36.0 -8.3 </code></pre>
python|pandas|dictionary|merge
3
4,207
68,492,761
Converting yfinance dataframe into multi-index dataframe
<p>Trying to convert this dataframe(df_long) into a multi-index dataframe (df_wide)</p> <p>Imported this data from yfinance. New to python so would really appreciate the help :)</p> <p><a href="https://i.stack.imgur.com/Ssi73.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>To get rid of the multi-index, use <code>.reset_index()</code>:</p> <pre><code>df_long.pivot_table(index=[&quot;Data&quot;], columns='tickers', values='grade').reset_index() </code></pre> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer">pandas.DataFrame.reset_index</a></p>
python-3.x|pandas|dataframe
1
4,208
68,757,454
Pandas dataframe how to make element-wise mean of list columns
<p>I have a dataframe with categorical features and values features. Value features are always lists with the exact same size (let's say 3 for the example). So the dataframe is:</p> <pre><code>Sub Model iter key l1 l2 l3 01 b 0 0 [8,8,5] [3,8,1] [6,7,8] 01 b 1 1 [4,3,6] [3,4,0] [4,0,4] 01 b 2 2 [0,1,4] [0,0,5] [8,2,3] 03 b 3 0 [1,8,2] [4,6,0] [1,3,9] 03 b 4 1 [7,3,1] [6,8,1] [6,7,9] 03 b 5 2 [1,1,0] [11,4,8] [8,5,9] </code></pre> <p>I want to group the dataframe by <code>[sub,model]</code>, such that in each row I will take the mean over the values of the columns key. So I will get:</p> <pre><code>Sub Model l1 l2 l3 01 b [4, 4, 5] [2,2,2] [6,3,5] 03 b [3, 4, 1] [7,6,3] [3,5,9] </code></pre> <p>What will be the best way to do so?</p>
<p>You can use <code>agg</code> and a custom function:</p> <pre><code>def mean_list(sr): return sr.apply(pd.Series).mean() out = df.groupby(['Sub', 'Model'])[['l1', 'l2', 'l3']].agg(mean_list) </code></pre> <pre><code>&gt;&gt;&gt; out l1 l2 l3 Sub Model 01 b [4.0, 4.0, 5.0] [2.0, 4.0, 2.0] [6.0, 3.0, 5.0] 03 b [3.0, 4.0, 1.0] [7.0, 6.0, 3.0] [5.0, 5.0, 9.0] </code></pre>
python|pandas|dataframe|pandas-groupby|data-munging
1
4,209
68,707,747
ViVIT PyTorch: RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15
<p>I am trying to run Video Vision Transformer (ViViT) code with my dataset but getting an error using <strong>CrossEntropyLoss</strong> from Pytorch as the Loss function.</p> <p>There are 6 classes I have:</p> <pre><code>['Run', 'Sit', 'Walk', 'Wave', 'Sit', 'Stand'] </code></pre> <p><strong>Optimizer</strong></p> <pre><code>optimizer = torch.optim.SGD(model.parameters(), lr=0.0001, weight_decay=1e-9, momentum=0.9) </code></pre> <p><strong>Class Weights</strong></p> <pre><code>tensor([0.0045, 0.0042, 0.0048, 0.0038, 0.0070, 0.0065]) </code></pre> <p><strong>Loss Function</strong></p> <pre><code>loss_func = nn.CrossEntropyLoss(weight=class_weights.to(device)) </code></pre> <p><strong>Code Throwning Error</strong></p> <pre><code>train_epoch(model, optimizer, train_loader, train_loss_history, loss_func) </code></pre> <p><strong>Error</strong></p> <p><a href="https://i.stack.imgur.com/CIJaD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CIJaD.png" alt="Error Stack" /></a></p> <pre><code>RuntimeError: multi-target not supported at /pytorch/aten/src/THCUNN/generic/ClassNLLCriterion.cu:15 </code></pre> <p><strong>Code Calling the transformer</strong></p> <pre><code>model = ViViT(224, 16, 100, 16).cuda() </code></pre> <p><strong>Getting Video Frames</strong></p> <pre><code>def get_frames(filename, n_frames=1): frames = [] v_cap = cv2.VideoCapture(filename) v_len = int(v_cap.get(cv2.CAP_PROP_FRAME_COUNT)) frame_list = np.linspace(0, v_len - 1, n_frames + 1, dtype=np.int16) frame_dims = np.array([224, 224, 3]) for fn in range(v_len): success, frame = v_cap.read() if success is False: continue if (fn in frame_list): frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) frame = cv2.resize(frame, (frame_dims[0], frame_dims[1])) frames.append(frame) v_cap.release() return frames, v_len </code></pre> <p><strong>Dataset Preprocessing</strong></p> <pre><code>class DatasetProcessing(data.Dataset): def __init__(self, df, root_dir): super(DatasetProcessing, self).__init__() # List of all videos path video_list = df[&quot;Video&quot;].apply(lambda x: root_dir + '/' + x) self.video_list = np.asarray(video_list) self.df = df def __getitem__(self, index): # Ensure that the raw videos are in respective folders and folder name matches the output class label video_label = self.video_list[index].split('/')[-2] video_name = self.video_list[index].split('/')[-1] video_frames, len_ = get_frames(self.video_list[index], n_frames = 15) video_frames = np.asarray(video_frames) video_frames = video_frames/255 class_list = ['Run', 'Walk', 'Wave', 'Sit', 'Turn', 'Stand'] class_id_loc = np.where(class_list == video_label) label = class_id_loc d = torch.as_tensor(np.array(video_frames).astype('float')) l = torch.as_tensor(np.array(label).astype('float')) return (d, l) def __len__(self): return self.video_list.shape[0] </code></pre> <p><strong>Training Epochs</strong></p> <pre><code>def train_epoch(model, optimizer, data_loader, loss_history, loss_func): total_samples = len(data_loader.dataset) model.train() for i, (data, target) in enumerate(data_loader): optimizer.zero_grad() x = data.cuda() data = rearrange(x, 'b p h w c -&gt; b p c h w').cuda() target = target.type(torch.LongTensor).cuda() pred = model(data.float()) output = F.log_softmax(pred, dim=1) loss = loss_func(output, target.squeeze(1)) loss.backward() optimizer.step() if i % 100 == 0: print('[' + '{:5}'.format(i * len(data)) + '/' + '{:5}'.format(total_samples) + ' (' + '{:3.0f}'.format(100 * i / len(data_loader)) + '%)] Loss: ' + 
'{:6.4f}'.format(loss.item())) loss_history.append(loss.item()) </code></pre> <p><strong>Evaluate Model</strong></p> <pre><code>def evaluate(model, data_loader, loss_history, loss_func): model.eval() total_samples = len(data_loader.dataset) correct_samples = 0 total_loss = 0 with torch.no_grad(): for data, target in data_loader: x = data.cuda() data = rearrange(x, 'b p h w c -&gt; b p c h w').cuda() target = target.type(torch.LongTensor).cuda() output = F.log_softmax(model(data.float()), dim=1) loss = loss_func(output, target) _, pred = torch.max(output, dim=1) total_loss += loss.item() correct_samples += pred.eq(target).sum() avg_loss = total_loss / total_samples loss_history.append(avg_loss) print('\nAverage test loss: ' + '{:.4f}'.format(avg_loss) + ' Accuracy:' + '{:5}'.format(correct_samples) + '/' + '{:5}'.format(total_samples) + ' (' + '{:4.2f}'.format(100.0 * correct_samples / total_samples) + '%)\n') </code></pre> <p><strong>Transformer</strong></p> <pre><code>class Transformer(nn.Module): def __init__(self, dim, depth, heads, dim_head, mlp_dim, dropout = 0.): super().__init__() self.layers = nn.ModuleList([]) self.norm = nn.LayerNorm(dim) for _ in range(depth): self.layers.append(nn.ModuleList([ PreNorm(dim, Attention(dim, heads = heads, dim_head = dim_head, dropout = dropout)), PreNorm(dim, FeedForward(dim, mlp_dim, dropout = dropout)) ])) def forward(self, x): for attn, ff in self.layers: x = attn(x) + x x = ff(x) + x return self.norm(x) </code></pre> <p><strong>ViViT Code</strong></p> <pre><code>class ViViT(nn.Module): def __init__(self, image_size, patch_size, num_classes, num_frames, dim = 192, depth = 4, heads = 3, pool = 'cls', in_channels = 3, dim_head = 64, dropout = 0., emb_dropout = 0., scale_dim = 4, ): super().__init__() assert pool in {'cls', 'mean'}, 'pool type must be either cls (cls token) or mean (mean pooling)' assert image_size % patch_size == 0, 'Image dimensions must be divisible by the patch size.' num_patches = (image_size // patch_size) ** 2 patch_dim = in_channels * patch_size ** 2 self.to_patch_embedding = nn.Sequential( Rearrange('b t c (h p1) (w p2) -&gt; b t (h w) (p1 p2 c)', p1 = patch_size, p2 = patch_size), nn.Linear(patch_dim, dim), ) self.pos_embedding = nn.Parameter(torch.randn(1, num_frames, num_patches + 1, dim)) self.space_token = nn.Parameter(torch.randn(1, 1, dim)) self.space_transformer = Transformer(dim, depth, heads, dim_head, dim*scale_dim, dropout) self.temporal_token = nn.Parameter(torch.randn(1, 1, dim)) self.temporal_transformer = Transformer(dim, depth, heads, dim_head, dim*scale_dim, dropout) self.dropout = nn.Dropout(emb_dropout) self.pool = pool self.mlp_head = nn.Sequential( nn.LayerNorm(dim), nn.Linear(dim, num_classes) ) def forward(self, x): x = self.to_patch_embedding(x) b, t, n, _ = x.shape cls_space_tokens = repeat(self.space_token, '() n d -&gt; b t n d', b = b, t=t) x = torch.cat((cls_space_tokens, x), dim=2) x += self.pos_embedding[:, :, :(n + 1)] x = self.dropout(x) x = rearrange(x, 'b t n d -&gt; (b t) n d') x = self.space_transformer(x) x = rearrange(x[:, 0], '(b t) ... -&gt; b t ...', b=b) cls_temporal_tokens = repeat(self.temporal_token, '() n d -&gt; b n d', b=b) x = torch.cat((cls_temporal_tokens, x), dim=1) x = self.temporal_transformer(x) x = x.mean(dim = 1) if self.pool == 'mean' else x[:, 0] return self.mlp_head(x) </code></pre>
<p>Multi-target appears to be a feature supported since version 1.10.0. <a href="https://discuss.pytorch.org/t/crossentropyloss-vs-per-class-probabilities-target/138331" rel="nofollow noreferrer">https://discuss.pytorch.org/t/crossentropyloss-vs-per-class-probabilities-target/138331</a></p> <p>Please check your PyTorch version.</p> <p>Please refer to the example using the UCF101 top-5 dataset, which is available on <a href="https://colab.research.google.com/drive/1wjQe9vkpJjmIEaWjW0-FJdYnWbWUczvz?usp=sharing" rel="nofollow noreferrer">my Colab</a>. The version of PyTorch there is 1.12.0+cu113, and the code you listed was able to run the training almost exactly as it was written.</p>
nlp|pytorch|computer-vision|loss-function|transformer-model
0
4,210
68,450,653
Python - efficient way to save an array with multiple labels
<p>I am currently saving some data from a process.</p> <pre><code>np.save('stochastic_data',(rho_av,rho_std)) </code></pre> <p>where rho_av and rho_std are arrays.</p> <p>However, this data depends on some parameters, say E, k, and M. For each of them, I get different data. But I am only saving the data for a given set of parameters, i.e. I fix (E, k, M), I get the data and I save it. However, from the data I have it is not possible to retrieve the set of parameters (E, k, M). Therefore, I would like to save this set with my array.</p> <p>My first approach was to simply do</p> <pre><code>np.save('stochastic_data',(rho_av,rho_std, E, k, M)) </code></pre> <p>but this doesn't work because my parameters are floats, not arrays.</p> <p>My second approach was simply to convert the set of parameters to arrays. Basically, to create an array of identical elements for each parameters, i.e. E-&gt; np.array(E,E,.....,E). However, my arrays are quite big (np.shape(rho_av)=(100000,1000)), so saving the parameters with this shape is not going to be efficient.</p> <p>Is there a more efficient way to do it?</p> <p>Thanks.</p>
<p>You are saving a tuple of arrays.</p> <p>Look what happens with a simple case of 2 arrays with same shape:</p> <pre><code>In [763]: np.save('test.npy',(np.arange(3), np.ones(3))) In [764]: np.load('test.npy') Out[764]: array([[0., 1., 2.], [1., 1., 1.]]) </code></pre> <p>I got back one array - it made an array from the tuple and saved that.</p> <p>If the arrays differ in shape, I still get an array, but it is object dtype. And I get a warning (in new enough numpy versions):</p> <pre><code>In [765]: np.save('test.npy',(np.arange(3), np.ones(4))) /usr/local/lib/python3.8/dist-packages/numpy/lib/npyio.py:528: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray. arr = np.asanyarray(arr) In [766]: np.load('test.npy') Traceback (most recent call last): File &quot;&lt;ipython-input-766-aeaca1f70e0f&gt;&quot;, line 1, in &lt;module&gt; np.load('test.npy') File &quot;/usr/local/lib/python3.8/dist-packages/numpy/lib/npyio.py&quot;, line 440, in load return format.read_array(fid, allow_pickle=allow_pickle, File &quot;/usr/local/lib/python3.8/dist-packages/numpy/lib/format.py&quot;, line 743, in read_array raise ValueError(&quot;Object arrays cannot be loaded when &quot; ValueError: Object arrays cannot be loaded when allow_pickle=False In [767]: np.load('test.npy',allow_pickle=True) Out[767]: array([array([0, 1, 2]), array([1., 1., 1., 1.])], dtype=object) </code></pre> <p><code>np.save</code> is efficient for writing a numeric multi-dimensional array. But the array has <code>objects</code>, it uses <code>pickle</code> to convert that to a string that can be saved. It's not all bad, since doing a <code>pickle</code> of an array actually uses the same core code as <code>np.save</code>.</p> <p>There is a <code>np.savez</code> that saves the arrays to separate <code>npy</code> files, and combines them in a <code>zip</code> archive.</p> <p>But for a diverse mix of items - arrays, lists, scalars, strings etc., <code>pickle</code> might well be the easiest and &quot;most efficient&quot;. But you can't save or load items piecemeal. It's not a database.</p>
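<p>For the use case in the question, a small sketch of the <code>np.savez</code> route mentioned above, which keeps the big arrays and the scalar parameters together in one archive without tiling the scalars into full-size arrays (the key names are just examples):</p> <pre><code>import numpy as np

np.savez('stochastic_data.npz',
         rho_av=rho_av, rho_std=rho_std,
         params=np.array([E, k, M]))    # the three scalars stored as a tiny array

data = np.load('stochastic_data.npz')
rho_av, rho_std = data['rho_av'], data['rho_std']
E, k, M = data['params']
</code></pre>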
python|arrays|numpy|dictionary|save
1
4,211
53,137,588
pytorch backports.functools_lru_cache conflict
<p>I'm using <em>Windows 10</em>, and my installation dir is: <code>anacoda2/python2.7/python3.6/opencv/cdua10/cudnn ....</code> </p> <p>Now I want to install pytorch with this command: <code>conda install pytorch -c pytorch</code></p> <p>But as a result I'm getting this error:</p> <pre><code>C:\Users\MM&gt;conda install pytorch -c pytorch Solving environment: failed UnsatisfiableError: The following specifications were found to be in conflict: - backports.functools_lru_cache - pytorch Use "conda info &lt;package&gt;" to see the dependencies for each package. </code></pre> <p>The version of <code>backports.functools_lru_cache</code> is <code>1.4</code>.</p> <p><strong>Does anyone know how to solve this?</strong></p>
<p>I faced a similar problem while installing pytorch. I was able to resolve it by creating another environment on anaconda which had python version 3.6. </p>
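<p>If you want to try the same workaround, a sketch of the commands (the environment name is arbitrary):</p> <pre><code>conda create -n torch_env python=3.6
conda activate torch_env
conda install pytorch -c pytorch
</code></pre>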
python-2.7|pytorch
0
4,212
65,864,234
Timestamp format not being matched in pandas to_datetime format specifier
<p>I have a pandas dataframe with some timestamp values in a column. I wish to get the sum of values grouped by every hour.</p> <pre><code> Date_and_Time Frequency 0 Jan 08 15:54:39 NaN 1 Jan 09 10:48:13 NaN 2 Jan 09 10:42:24 NaN 3 Jan 09 20:18:46 NaN 4 Jan 09 12:08:23 NaN </code></pre> <p>I started off removing the leading days in the column and then typed the following to convert the values to date_time compliant format:</p> <pre><code>dateTimeValues['Date_and_Time'] = pd.to_datetime(dateTimeValues['Date_and_Time'], format='%b %d %H:%M:%S') </code></pre> <p>After doing so, I receive the following error:</p> <pre><code>ValueError: time data 'Jan 08 12:41:' does not match format '%b %d %H:%M:%S' (match) </code></pre> <p>On checking my input CSV, I can confirm that no column containing the above data are incomplete.</p> <p>I'd like to know how to resolve this issue and successfully process my timestamps to their desired output format.</p>
<p>I suggest you create a self defined lambda function which selects the needed format string. You may have to edit the lambda function:</p> <pre><code>df = pd.DataFrame({'Date_and_Time':['Jan 08 15:54:39', 'Jan 09 10:48:']}) df &gt;&gt;&gt; Date_and_Time 0 Jan 08 15:54:39 1 Jan 09 10:48: </code></pre> <p>With one typo in line 1. Now selected the format string for every item with the lambda function.</p> <pre><code>def my_lambda(x): f = '%b %d %H:%M:%S' if x.endswith(':'): f = '%b %d %H:%M:' return pd.to_datetime(x , format=f) df['Date_and_Time'] = df['Date_and_Time'].apply(my_lambda) &gt;&gt;&gt; df Date_and_Time 0 1900-01-08 15:54:39 1 1900-01-09 10:48:00 </code></pre>
python|pandas
0
4,213
65,629,540
select a row index weighted by value
<p>weight dictionary: <code>{1:0.1, 2:0.9}</code> (there is a 10% probability of an item with the value 1 being selected, 90% with value 2)</p> <p>example value row: <code>[0, 0, 1, 0, 2, 1]</code> (there will only be 0s and values that are contained in the dictionary)</p> <p>The output should be a randomly chosen index.</p> <p>For the example row, the probabilities of each index being selected should be <code>[0, 0, 0.05, 0, 0.9, 0.05]</code> (note that since the row contains two different <code>1</code> elements, each of them should have a prob of 0.05 of being selected, since the weight counts towards an item with that value being selected)</p>
<p>You can use <code>np.select</code> here.</p> <pre><code>wt = {1:0.1, 2:0.9} a = np.array([0, 0, 1, 0, 2, 1]) choicelist = [a==i for i in wt.keys()] condlist = [v/np.count_nonzero(a==k) for k,v in wt.items()] np.select(choicelist, condlist) # array([0. , 0. , 0.05, 0. , 0.9 , 0.05]) </code></pre>
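<p>If you then want the randomly chosen index itself, the array returned by <code>np.select</code> can be fed straight into <code>numpy.random.choice</code> as the <code>p</code> argument (for this example row the probabilities already sum to 1):</p> <pre><code>p = np.select(choicelist, condlist)   # array([0. , 0. , 0.05, 0. , 0.9 , 0.05])
chosen_index = np.random.choice(len(a), p=p)
</code></pre>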
python|pandas|numpy|random|python-3.8
3
4,214
65,737,094
get the maximum element in an array using "=="
<p>I have an array <code>x</code> :</p> <pre><code>numpy.random.seed(1) #input x = numpy.arange(0.,20.,1) x = x.reshape(5,4) print(x) [[ 0. 1. 2. 3.] [ 4. 5. 6. 7.] [ 8. 9. 10. 11.] [12. 13. 14. 15.] [16. 17. 18. 19.]] </code></pre> <p>I want to access the maximum element in this array. In my assignment, the answer has this line to access the maximum one in <code>x</code> :</p> <pre><code>print(x[x==x.max()]) [19.] </code></pre> <p>I search documentation but found only one way of accessing the maximum element using <code>argmax</code>. I don't find the way of using &quot;<code>==</code>&quot; in the documentation, so I don't understand how this works. Can anyone explain why this works and show where it is in the documentation?</p>
<p>This is called <strong>Boolean array indexing</strong>.</p> <p>With <code>x == x.max()</code>, the below boolean array is generated:</p> <pre class="lang-py prettyprint-override"><code>[[ False False False False] [ False False False False] [ False False False False] [ False False False False] [ False False False True]] </code></pre> <p>Then with <code>x[x==x.max()]</code>, the above array is used as a mask to filter the elements of <code>x</code> that correspond to the <code>True</code>.</p> <h3>Reference:</h3> <p><a href="https://numpy.org/devdocs/reference/arrays.indexing.html#boolean-array-indexing" rel="nofollow noreferrer">https://numpy.org/devdocs/reference/arrays.indexing.html#boolean-array-indexing</a></p>
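<p>As a usage note: <code>x[x == x.max()]</code> returns the maximum <em>value(s)</em>, not their position. If you also want the row/column position of the maximum, <code>argmax</code> combined with <code>unravel_index</code> gives it:</p> <pre><code>flat_idx = x.argmax()                        # flat index of the max (here 19)
row, col = np.unravel_index(flat_idx, x.shape)
print(row, col, x[row, col])                 # 4 3 19.0
</code></pre>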
python|arrays|numpy
2
4,215
63,418,400
For loop into a pandas dataframe
<p>I have the following piece of code and it works and prints out data as it should. I'm trying (unsuccessfully) to put the results into a dataframe so I can export the results to a csv file. I am looping through a json file and the results are correct; I just need the two columns that print out to go into a dataframe instead of printing the results. I took out the code that was causing the error so it will run.</p> <pre><code>import json import requests import re import pandas as pd data = {} df = pd.DataFrame(columns=['subtechnique', 'name']) df RE_FOR_SUB_TECHNIQUE = r&quot;(T\d+)\.(\d+)&quot; r = requests.get('https://raw.githubusercontent.com/mitre/cti/master/enterprise-attack/enterprise-attack.json', verify=False) data = r.json() objects = data['objects'] for obj in objects: ext_ref = obj.get('external_references',[]) revoked = obj.get('revoked') or '*****' subtechnique = obj.get('x_mitre_is_subtechnique') name = obj.get('name') for ref in ext_ref: ext_id = ref.get('external_id') or '' if ext_id: re_match = re.match(RE_FOR_SUB_TECHNIQUE, ext_id) if re_match: technique = re_match.group(1) sub_technique = re_match.group(2) print('{},{}'.format(technique+'.'+sub_technique, name)) </code></pre> <p>Unless there is an easier way to put the results of each row in the loop and have that append to a csv file.</p> <p>Any help is appreciated.</p> <p>Thanks</p>
<p>In this instance, it's likely easier to just write the csv file directly, rather than go through Pandas:</p> <pre><code>with open(&quot;enterprise_attack.csv&quot;, &quot;w&quot;) as f: my_writer = csv.writer(f) for obj in objects: ext_ref = obj.get('external_references',[]) revoked = obj.get('revoked') or '*****' subtechnique = obj.get('x_mitre_is_subtechnique') name = obj.get('name') for ref in ext_ref: ext_id = ref.get('external_id') or '' if ext_id: re_match = re.match(RE_FOR_SUB_TECHNIQUE, ext_id) if re_match: technique = re_match.group(1) sub_technique = re_match.group(2) print('{},{}'.format(technique+'.'+sub_technique, name)) my_writer.writerow([technique+&quot;.&quot;+sub_technique, name]) </code></pre> <p>It should be noted that the above will overwrite the output of any previous runs. If you wish to keep the output of multiple runs, change the file mode to &quot;a&quot;:</p> <pre><code>with open(&quot;enterprise_attack.csv&quot;, &quot;a&quot;) as f: </code></pre>
python-3.x|pandas|dataframe
2
4,216
63,361,442
Is it good to use scipy sparse data structure for non-sparse matrices?
<p>Using <code>scipy.sparse</code> is very efficient both in storing and doing computations on sparse matrices. What if it is used for non-sparse matrices? More specifically, It is clear that we cannot exploit the sparsity benefits of that data structure, however, is it worse (in storage and computation complexity) than using an ordinary <code>numpy</code> array?</p>
<p>Yes, it is worse in both storage and performance, let alone cognitive load for whoever reads your code.</p>
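<p>A rough way to convince yourself of the storage part (exact numbers depend on dtype and index width): a CSR matrix keeps the <code>data</code> array <em>plus</em> the <code>indices</code> and <code>indptr</code> arrays, so for a fully dense matrix it needs roughly 1.5 to 2 times the memory of the plain ndarray, on top of the slower indirect indexing during computation.</p> <pre><code>import numpy as np
from scipy import sparse

a = np.random.rand(1000, 1000)          # fully dense data
s = sparse.csr_matrix(a)

dense_bytes = a.nbytes
sparse_bytes = s.data.nbytes + s.indices.nbytes + s.indptr.nbytes
print(dense_bytes, sparse_bytes)        # the sparse copy is noticeably larger
</code></pre>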
python|arrays|python-3.x|numpy|scipy
1
4,217
63,666,847
" value_counts()" and "agg('count')" returning different results
<p>I have a dataframe and one of its columns (named '<code>income</code>') has int values. Some fields have 0 as their value.</p> <p>When I call</p> <pre><code>print(df[df['income'] == 0].agg('count')) </code></pre> <p>It returns the exact count of 0 values in the DF column.</p> <p>However, if I call</p> <pre><code>print(df['income'].value_counts()[df['income'].value_counts() == 0]) </code></pre> <p>It returns an empty Series:</p> <pre><code>Series([], Name: income, dtype: int64) </code></pre> <p>Can someone please help me decipher pandas' sometimes illogical behaviour? What's wrong with my second code that pandas does not return the count of 0 values in the dataframe?</p>
<p>You can select <code>Series</code> after <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer"><code>Series.value_counts</code></a> by index - here <code>0</code> for count of <code>0</code> values:</p> <pre><code>df = pd.DataFrame({ 'income':[0,5,4,5,0,4,0,5,5], }) print(df['income'].value_counts()) 5 4 0 3 4 2 Name: income, dtype: int64 print(df['income'].value_counts().loc[0]) 3 </code></pre> <p>For get number of rows matching condition is possible get length of <code>DataFrame</code>:</p> <pre><code>print(len(df[df['income'] == 0])) 3 </code></pre> <p>Or count <code>True</code>s by <code>sum</code>:</p> <pre><code>print((df['income'] == 0).sum()) 3 print(df[df['income'] == 0].agg('count')) income 3 dtype: int64 </code></pre> <p>EDIT: If check by values of <code>Series</code> get all values by counts:</p> <pre><code>s = df['income'].value_counts() print (s) 5 4 0 3 4 2 Name: income, dtype: int64 #number of 3 values print (s.loc[0]) 3 #what values are 4 times? print (s[s == 4]) 5 4 Name: income, dtype: int64 #what values are 2 times? print (s[s == 2]) 4 2 Name: income, dtype: int64 #what values are 0 times? print (s[s == 0]) Series([], Name: income, dtype: int64) </code></pre>
python|pandas
1
4,218
71,905,124
Second to_replace to dataframe is not updating dataframe in Python
<p>I am replacing values of an existing dataframe in Python during a <code>for loop</code>. My original dataframe is of this format:</p> <p><a href="https://i.stack.imgur.com/qyySn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qyySn.png" alt="enter image description here" /></a></p> <p>After trying to update the items in the &quot;slice_file_name&quot; and &quot;fsID&quot; via these pandas replace commands:</p> <pre><code>df1['slice_file_name'] = df1['slice_file_name'].replace(to_replace=str(row[&quot;slice_file_name&quot;]),value=f'{actual_filename}_{directory}.wav') fsID_Name = str(row[&quot;fsID&quot;]) df1['fsID'] = df1['fsID'].replace(to_replace=str(row[&quot;fsID&quot;]), value=f'{fsID_Name}_{directory}') </code></pre> <p>Only the &quot;slice_file_name&quot; gets updated correctly:</p> <p><a href="https://i.stack.imgur.com/fc3Ww.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fc3Ww.png" alt="enter image description here" /></a></p> <p>The &quot;fsID&quot; does not get updated correctly. Can you tell me what I am doing wrong here?</p> <p>I want to update the &quot;fsID&quot; column as follows: - for example for the first data row, &quot;fsID&quot; should be equal to 102305_TimeShift-10pct. I see from my ide that <code>f'{fsID_Name}_{directory}'</code> gives the correct string, but it is not updating the fsID cell. How to update the fsID cell accordingly?</p> <p>Thanks!</p>
<p>If I understand correctly what you are looking for 'fsID', would this work (after your 'slice_file_name' is updated) ?</p> <pre><code>df['fsID']=df['fsID']+'_'+df['slice_file_name'].str.split('_',expand=True)[1].str.replace('.wav','') </code></pre>
python|pandas|dataframe
0
4,219
72,026,216
How to create a rolling unique count by group using pandas
<p>I have a dataframe like the following:</p> <pre><code>group value 1 a 1 a 1 b 1 b 1 b 1 b 1 c 2 d 2 d 2 d 2 d 2 e </code></pre> <p>I want to create a column with how many unique values there have been so far for the group. Like below:</p> <pre><code>group value group_value_id 1 a 1 1 a 1 1 b 2 1 b 2 1 b 2 1 b 2 1 c 3 2 d 1 2 d 1 2 d 1 2 d 1 2 e 2 </code></pre>
<p>It can also be solved as:</p> <pre><code>df['group_val_id'] = (df.groupby('group')['value']. apply(lambda x:x.astype('category').cat.codes + 1)) df group value group_val_id 0 1 a 1 1 1 a 1 2 1 b 2 3 1 b 2 4 1 b 2 5 1 b 2 6 1 c 3 7 2 d 1 8 2 d 1 9 2 d 1 10 2 d 1 11 2 e 2 </code></pre>
pandas
0
4,220
71,874,683
Compare column name with a row value and getting other row value
<p>I have a dataframe like this:</p> <pre><code> request_created_at sponsor_tier is_active status cash_in 2019/10 ... 2021/07 0 2019/10 2019/10 2.0 True 1 8901.00 ... 1 2019/10 2019/10 2.0 True 2 7602.00 ... </code></pre> <p>I want to compare each date column, e.g. &quot;2019/10&quot;, with the values of my first column and check if status == 1; if status == 1, I want to copy my cash_in value into that row of the &quot;2019/10&quot; column.</p>
<pre><code>def check_status (row, data): if (row.status == 1) &amp; (row.request_created_at == data): return row.total_cash_in elif (row.status == 4) &amp; (row.request_created_at == data): return row.total_cash_in elif (row.status == 7) &amp; (row.request_created_at == data): return row.total_cash_in else: return 0 </code></pre> <p>and then</p> <pre><code>for data in array_datas: df[data] = df.apply(lambda row: check_status(row,data), axis=1) </code></pre>
python|pandas|dataframe|data-wrangling
0
4,221
71,904,826
How can I calculate the Average True Range in pandas
<p>How can I calculate the Average True Range in a data frame?</p> <p>I have tried using <code>np.where()</code> and it is not working.</p> <p>I have all these values below:</p> <p>Current High - Current Low</p> <p>abs(Current High - Previous Close)</p> <p>abs(Current Low - Previous Close)</p> <p>but I don't know how to set the highest of the three values in the pandas data frame</p>
<p>It looks like you might be trying to do the following :</p> <pre><code>import pandas as pd from numpy.random import rand df = pd.DataFrame(rand(10,5),columns={'High-Low','High-close','Low-close','A','B'}) cols = ['High-Low','High-close','Low-close'] df['true_range'] = df[cols].max(axis=1) print(df) </code></pre> <p>The output will look like</p> <pre><code> High-Low Low-close B A High-close true_range 0 0.916121 0.026572 0.082619 0.672000 0.605287 0.916121 1 0.622589 0.944646 0.638486 0.905139 0.262275 0.944646 2 0.611374 0.756191 0.829803 0.828205 0.614956 0.756191 3 0.810638 0.501693 0.504800 0.069532 0.283825 0.810638 4 0.984463 0.900823 0.434061 0.905273 0.518056 0.984463 5 0.377742 0.480266 0.018676 0.383831 0.819448 0.819448 6 0.473753 0.652077 0.730400 0.305507 0.396969 0.652077 7 0.427047 0.733135 0.526076 0.542852 0.719194 0.733135 8 0.911629 0.633997 0.101848 0.020811 0.327233 0.911629 9 0.244624 0.893365 0.278941 0.354696 0.678280 0.893365 </code></pre> <p>If this isn't what you had in mind, it would be helpful to clarify your question by providing a small example where you clearly identify the columns and the index in your DataFrame and what you mean by &quot;true range&quot;.</p>
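<p>If the goal is the Average True Range specifically, the remaining step after taking that row-wise maximum is to average it over a window; a sketch assuming a 14-period ATR (Wilder's original definition uses an exponential-style smoothing rather than a plain rolling mean):</p> <pre><code># simple moving-average version
df['ATR_14'] = df['true_range'].rolling(window=14).mean()

# Wilder-style smoothed version
df['ATR_14_wilder'] = df['true_range'].ewm(alpha=1/14, adjust=False).mean()
</code></pre>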
python|arrays|pandas|dataframe|numpy
1
4,222
56,486,295
In a pandas dataframe (python 3) column, I need to standardize all instances of (for example) "PO BOX", or "P O Box" or "POBOX", etc. to "BOX"
<p>I have a list of 30 or so synonyms that could be found in an address to indicate a PO Box. I would like to be able to scan an address and if one of these synonyms are in the address, change it to simply BOX.</p> <p>First of all, I'm new to Python. I am a seasoned SAS programmer, trying to learn Python. I've tried using a dictionary with the .map() function (thinking this would work like a SAS format), but with no luck. Then I tried something like: df['address'] = df['address'].replace({'PO BOX': 'BOX', 'P BOX': 'BOX', 'POSTBOX': 'BOX', 'P O BOX': 'BOX', 'POB': 'BOX'}, inplace=True)</p> <p>The input looks like this: (sorry for the bad formatting)</p> <ul> <li><p>id address</p> <p>0 13943 PO BOX 1234</p> <p>1 14738 510 BLUE BELL RD</p> <p>5 27455 5887 CORNERS AVENUE</p> <p>6 27457 200 NEW HAVEN DR SUITE 10</p> <p>9 1595554 POBOX 908</p> <p>10 1595971 101 W 7TH STREET</p> <p>14 1597234 P O BOX 616</p></li> </ul> <p>And I want it to look like:</p> <pre><code> id address </code></pre> <p>0 13943 BOX 1234</p> <p>1 14738 510 BLUE BELL RD</p> <p>5 27455 5887 CORNERS AVENUE</p> <p>6 27457 200 NEW HAVEN DR SUITE 10</p> <p>9 1595554 BOX 908</p> <p>10 1595971 101 W 7TH STREET</p> <p>14 1597234 BOX 616</p> <p>But what I'm getting is this:</p> <pre><code> id address </code></pre> <p>0 13943 None</p> <p>1 14738 None</p> <p>5 27455 None</p> <p>6 27457 None</p> <p>9 1595554 None</p> <p>10 1595971 None</p> <p>14 1597234 None</p>
<p>I'm just going to use a pd.Series but it's the same as one dataframe columns. </p> <pre><code>rep = {'PO BOX': 'BOX', 'P BOX': 'BOX', 'POSTBOX': 'BOX', 'P O BOX': 'BOX', 'POBOX':'BOX', 'POB': 'BOX'} address = [ "13943 PO BOX 1234", "14738 510 BLUE BELL RD", "27455 5887 CORNERS AVENUE", "27457 200 NEW HAVEN DR SUITE 10", "1595554 POBOX 908", "1595971 101 W 7TH STREET", "1597234 P O BOX 616" ] </code></pre> <p>Create pandas series. </p> <pre><code>s = pd.Series(address, name='Addy') </code></pre> <p>Use replace and regex equals True.</p> <pre><code>s.replace(rep, regex=True) 0 13943 BOX 1234 1 14738 510 BLUE BELL RD 2 27455 5887 CORNERS AVENUE 3 27457 200 NEW HAVEN DR SUITE 10 4 1595554 BOX 908 5 1595971 101 W 7TH STREET 6 1597234 BOX 616 Name: Addy, dtype: object </code></pre>
python-3.x|pandas|dataframe
0
4,223
47,133,934
Pandas search in ascending index and match certain column value
<p>I have a DF with thousands of rows. Column 'col1' is repeatedly from 1 to 6. Column 'value' is with unique numbers:</p> <pre><code>diction = {'col1': [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 6], 'target': [34, 65, 23, 65, 12, 87, 36, 51, 26, 74, 34, 87]} df1 = pd.DataFrame(diction, index = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]) col1 target 0 1 34 1 2 65 2 3 23 3 4 65 4 5 12 5 6 87 6 1 36 7 2 51 8 3 26 9 4 74 10 5 34 11 6 87 </code></pre> <p>I'm trying to create a new column (let's call it previous_col) that match col1 value (let's say COL1 value 2 with TARGET column value -> 65) so next time COL1 with value 2 to refer to previous TARGET value from the same row as col1 value 1:</p> <pre><code> col1 previous_col target 0 1 0 34 1 2 0 65 2 3 0 23 3 4 0 65 4 5 0 12 5 6 0 87 6 1 34 36 7 2 65 51 8 3 23 26 9 4 65 74 10 5 12 34 11 6 87 79 </code></pre> <p>Note that first 6 rows are 0 values for previous column cuz no previous target values exist :D The tricky part here is that I need to extract previous target's by DF index ascending order or the first met COL1 value ascending. So if we have a DF with 10k rows not just to match from the top or from the middle same COL1 value and to take the TARGET value. Each value in PREVIOUS_COL should be taken ascending to index and COL1 matching values. I know I can do it with shift but sometimes COL1 is with a missing order not from 1 to 6 strictly so I need to match exactly the COL1 value.</p>
<pre><code>df1['Per_col']=df1.groupby('col1').target.shift(1).fillna(0) df1 Out[1117]: col1 target Per_col 0 1 34 0.0 1 2 65 0.0 2 3 23 0.0 3 4 65 0.0 4 5 12 0.0 5 6 87 0.0 6 1 36 34.0 7 2 51 65.0 8 3 26 23.0 9 4 74 65.0 10 5 34 12.0 11 6 87 87.0 </code></pre>
pandas
1
4,224
47,257,390
TensorFlow - Difference between tf.VariableScope and tf.variable_scope
<p>I have found two links on the TensorFlow website referring to <code>variable scope</code>: one is <a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope" rel="nofollow noreferrer">tf.variable_scope</a>, which is the one most commonly used, and the other is <a href="https://www.tensorflow.org/api_docs/python/tf/VariableScope" rel="nofollow noreferrer">tf.VariableScope</a>.</p> <p>As there's no example for the application of <code>tf.VariableScope</code>, just from reading the documentation I couldn't tell whether there's a difference between the two. I tried replacing <code>tf.variable_scope</code> with <code>tf.VariableScope</code> but got the following error (which indicates there are some differences):</p> <pre><code>Traceback (most recent call last): File "/home/NER/window_model.py", line 105, in &lt;module&gt; model = NaiveNERModel(embeddings) File "/home/NER/window_model.py", line 64, in __init__ pred = self.add_prediction_op(embed) File "/home/NER/window_model.py", line 82, in add_prediction_op with tf.VariableScope('Layer1', initializer=tf.contrib.layers.xavier_initializer()): AttributeError: __enter__ </code></pre> <p>Snippet of the original working code:</p> <pre class="lang-py prettyprint-override"><code>with tf.variable_scope('Layer1', initializer=tf.contrib.layers.xavier_initializer()): W = tf.get_variable("W", [self.dim * self.window_size, self.dim * self.window_size]) b1 = tf.get_variable("b1", [self.dim * self.window_size]) h = tf.nn.relu(tf.matmul(embed, W) + b1) </code></pre>
<p><a href="https://www.tensorflow.org/api_docs/python/tf/VariableScope" rel="nofollow noreferrer"><code>tf.VariableScope</code></a> is an actual scope class that holds the <code>name</code>, <code>initializer</code>, <code>regularizer</code>, <code>partitioner</code>, and many other properties that are propagated to the variables defined in that scope. This class is more of a collection of properties and not a context manager, so you can't use it in <code>with</code> statement (that's what the error tells you).</p> <p>Since the scope has to be first pushed on top of the stack (tensorflow internal class <code>_VariableStore</code> is responsible for this), then popped back from the stack, manual instantiation of <code>tf.VariableScope</code> is tedious and error-prone. That's where the context manager comes in.</p> <p><a href="https://www.tensorflow.org/api_docs/python/tf/variable_scope" rel="nofollow noreferrer"><code>tf.variable_scope</code></a> is a context manager that makes it easier to work with variable scopes. As the documentation describes it:</p> <blockquote> <p>A context manager for defining ops that creates variables (layers).</p> <p>This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope.</p> </blockquote> <p>The actual work with variables is delegated to the <code>tf.VariableScope</code> object that is created under the hood.</p>
python|tensorflow|contextmanager
1
4,225
47,321,133
sklearn Hierarchical Agglomerative Clustering using similarity matrix
<p>Given a distance matrix, with similarity between various professors :</p> <pre><code> prof1 prof2 prof3 prof1 0 0.8 0.9 prof2 0.8 0 0.2 prof3 0.9 0.2 0 </code></pre> <p>I need to perform hierarchical clustering on this data, where the above data is in the form of 2-d matrix</p> <pre><code> data_matrix=[[0,0.8,0.9],[0.8,0,0.2],[0.9,0.2,0]] </code></pre> <p>I tried checking if I can implement it using sklearn.cluster AgglomerativeClustering but it is considering all the 3 rows as 3 separate vectors and not as a distance matrix. Can it be done using this or scipy.cluster.hierarchy?</p>
<p>Yes, you can do it with <code>sklearn</code>. You need to set:</p> <ul> <li><code>affinity='precomputed'</code>, to use a matrix of distances</li> <li><code>linkage='complete'</code> or <code>'average'</code>, because default linkage(Ward) works only on coordinate input.</li> </ul> <p>With precomputed affinity, input matrix is interpreted as a matrix of distances between observations. The following code </p> <pre><code>from sklearn.cluster import AgglomerativeClustering data_matrix = [[0,0.8,0.9],[0.8,0,0.2],[0.9,0.2,0]] model = AgglomerativeClustering(affinity='precomputed', n_clusters=2, linkage='complete').fit(data_matrix) print(model.labels_) </code></pre> <p>will return labels <code>[1 0 0]</code>: the 1st professor goes to one cluster, and the 2nd and 3rd - to another.</p>
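<p>To also cover the <code>scipy.cluster.hierarchy</code> part of the question, here is a small sketch. It assumes, like the answer above, that the matrix holds distances; if your values are similarities you would first convert them (e.g. <code>1 - sim</code>):</p> <pre><code>import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

data_matrix = np.array([[0, 0.8, 0.9], [0.8, 0, 0.2], [0.9, 0.2, 0]])
condensed = squareform(data_matrix)        # condensed form expected by linkage
Z = linkage(condensed, method='complete')  # complete-linkage hierarchical clustering
labels = fcluster(Z, t=2, criterion='maxclust')
print(labels)                              # e.g. [2 1 1] -&gt; same grouping as the sklearn result
</code></pre>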
python|pandas|scikit-learn|hierarchical-clustering
11
4,226
68,359,857
Multiple dataframe groupby Pandas
<p>I have 2 datasets to work with:</p> <pre><code>ID Date Amount 1 2020-01-02 1000 1 2020-01-09 200 1 2020-01-08 400 </code></pre> <p>And another dataset which tells which is most frequent day of week and most frequent week of month for each ID(there are multiple such IDs)</p> <pre><code>ID Pref_Day_Of_Week_A Pref_Week_Of_Month_A 1 3 2 </code></pre> <p>For this ID ,Thursday was the most frequent day of the week for ID 1 and 2nd week of the month was the most frequent week of the month.</p> <p>I wish to find sum of all the amounts that took place on the most frequent day of week and frequent week of month, for all IDs(hence requiring groupby):</p> <pre><code>ID Amount_On_Pref_Day Amount_Pref_Week 1 1200 600 </code></pre> <p>I would really appreciate it if anyone could help me calculating this dataframe using pandas. For reference, I have used this function to find the week of month for a given date:</p> <pre><code>#https://stackoverflow.com/a/64192858/2901002 def weekinmonth(dates): &quot;&quot;&quot;Get week number in a month. Parameters: dates (pd.Series): Series of dates. Returns: pd.Series: Week number in a month. &quot;&quot;&quot; firstday_in_month = dates - pd.to_timedelta(dates.dt.day - 1, unit='d') return (dates.dt.day-1 + firstday_in_month.dt.weekday) // 7 + 1 </code></pre>
<p>Idea is filter only matched <code>dayofweek</code> and <code>week</code> and aggregate <code>sum</code>, last join together by <code>concat</code>:</p> <pre><code>#https://stackoverflow.com/a/64192858/2901002 def weekinmonth(dates): &quot;&quot;&quot;Get week number in a month. Parameters: dates (pd.Series): Series of dates. Returns: pd.Series: Week number in a month. &quot;&quot;&quot; firstday_in_month = dates - pd.to_timedelta(dates.dt.day - 1, unit='d') return (dates.dt.day-1 + firstday_in_month.dt.weekday) // 7 + 1 df.Date = pd.to_datetime(df.Date) df['dayofweek'] = df.Date.dt.dayofweek df['week'] = weekinmonth(df['Date']) f = lambda x: x.mode().iat[0] df1 = (df.groupby('ID', as_index=False).agg(Pref_Day_Of_Week_A=('dayofweek',f), Pref_Week_Of_Month_A=('week',f))) s1 = df1.rename(columns={'Pref_Day_Of_Week_A':'dayofweek'}).merge(df).groupby('ID')['Amount'].sum() s2 = df1.rename(columns={'Pref_Week_Of_Month_A':'week'}).merge(df).groupby('ID')['Amount'].sum() df2 = pd.concat([s1, s2], axis=1, keys=('Amount_On_Pref_Day','Amount_Pref_Week')) print (df2) Amount_On_Pref_Day Amount_Pref_Week ID 1 1200 600 </code></pre>
python|pandas|numpy|scipy
1
4,227
68,117,247
Applying function to pandas dataframe column after str.findall
<p>I have such a dataframe <code>df</code> with two columns:</p> <pre><code>Col1 Col2 'abc-def-ghi' 1 'abc-opq-rst' 2 </code></pre> <p>I created a new column <code>Col3</code> like this:</p> <pre><code>df['Col3'] = df['Col1'].str.findall('abc', flags=re.IGNORECASE) </code></pre> <p>And got such a dataframe afterwards:</p> <pre><code>Col1 Col2 Col3 'abc-def-ghi' 1 [abc] 'abc-opq-rst' 2 [abc] </code></pre> <p>What I want to do now is to create a new column <code>Col4</code> where I get a one if Col3 contains <code>'abc'</code> and otherwise zero.</p> <p>I tried to do this with a function:</p> <pre><code>def f(row): if row['Col3'] == '[abc]': val = 1 else: val = 0 return val </code></pre> <p>And applied this to my pandas dataframe:</p> <pre><code>df['Col4'] = df.apply(f, axis=1) </code></pre> <p>But I only get 0, also in column that contain 'abc'. I think there is something wrong with my if-statement. How can I solve this?</p>
<p><code>str.findall</code> returns a list for every row, and an empty list is falsy while a non-empty one is truthy, so you can just do</p> <pre><code>df['Col4'] = df.Col3.astype(bool).astype(int) </code></pre>
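<p>If the intermediate <code>Col3</code> is not needed at all, a sketch of a more direct route (assuming you only care whether <code>'abc'</code> occurs) would be:</p> <pre><code>df['Col4'] = df['Col1'].str.contains('abc', case=False).astype(int)
</code></pre>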
python|pandas|python-re
3
4,228
68,102,569
numpy create Mx2 matrix based on Mx1 matrix
<p>I have a np.array with 800 values. Each value is either 0 or 1.</p> <p>If the value is 0, I want to replace it with <code>mu_0</code>, which is a 1x2 array; else, I want to replace it with <code>mu_1</code>, which is also a 1x2 array.</p> <p>I tried using <code>np.where(y == 0, mu_0, mu_1)</code>, but python would only broadcast the value of <code>mu</code> to match <code>y</code>, not the other way around. In particular, the error I get is</p> <pre><code>ValueError: operands could not be broadcast together with shapes (800,) (2,) (2,) </code></pre> <p>I tried expanding <code>y</code> into <code>(800, 2)</code>, by padding <code>y_pad = np.c_[y, np.zeros(800)]</code>, but I am unsure how to condition on the first value of each row.</p> <p>If I use <code>np.where(y_pad[:, 0] == 0, ...)</code>, the array gets sliced back into <code>(800,)</code> again.</p>
<p>You can indeed expand <code>y</code> into <code>(800, 2)</code> with, for example, <code>np.repeat</code> so that 0s are 0s, 1s are 1s for each row. Then we can use <code>np.where</code>:</p> <pre><code># first casting to (800, 1) with `newaxis` then repetition y_rep = np.repeat(y[:, np.newaxis], repeats=2, axis=1) result = np.where(y_rep == 0, mu_0, mu_1) </code></pre> <hr> <p>sample run:</p> <pre><code>mu_0 = np.array([ 9, 17]) mu_1 = np.array([-3, -5]) y = np.array([0, 0, 1, 1, 0, 1, 1, 1]) </code></pre> <p>then</p> <pre><code>&gt;&gt;&gt; result array([[ 9, 17], [ 9, 17], [-3, -5], [-3, -5], [ 9, 17], [-3, -5], [-3, -5], [-3, -5]]) </code></pre> <p>where the condition became:</p> <pre><code>&gt;&gt;&gt; y_rep == 0 array([[ True, True], [ True, True], [False, False], [False, False], [ True, True], [False, False], [False, False], [False, False]]) </code></pre>
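<p>As a side note, the same result can be had without <code>np.where</code> by stacking the two rows and using <code>y</code> itself as an index; a small sketch with the same example data:</p> <pre><code>mu = np.stack([mu_0, mu_1])   # shape (2, 2)
result = mu[y]                # shape (len(y), 2): row mu_0 where y==0, mu_1 where y==1
</code></pre>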
python|arrays|numpy
1
4,229
57,194,049
Merging two csv files and extracting out useful information using python
<p>I have two .csv files that look like following: </p> <p>file_1:</p> <pre><code>id a b c 10 1 2 3 11 2 3 4 </code></pre> <p>file_2:</p> <pre><code>id d e 10 2 3 11 2 3 12 2 3 </code></pre> <p>My expected output is:</p> <pre><code>id a b c d e 10 1 2 3 2 3 11 2 3 4 2 3 </code></pre> <p>I wish to merge these two files by comparing the id number. If the id number matched, the id and the corresponding rows need to be merged and extracted. If not matched, the corresponding id number's row is ignored. My code look like this:</p> <pre><code>import pandas as pd s1=pd.read_csv("file_1.csv") s2=pd.read_csv("file_2.csv") if s1['id']==s2['id']: merged=s1.merge(s2, on="id", how="outer") else: pass merged merged.to_csv("output.csv") </code></pre> <p>After running this coding, I cannot get my expected output. Anyone can help me? Thanks. </p>
<p>Try using <code>pd.DataFrame.merge</code>:</p> <pre><code>print(file_1.merge(file_2, on='id')) </code></pre> <p>Output:</p> <pre><code> a b c id d e 0 1 2 3 10 2 3 1 2 3 4 11 2 3 </code></pre> <p>If you care about the order of the columns do:</p> <pre><code>print(file_1.merge(file_2, on='id')[['id', 'a', 'b', 'c', 'd', 'e']]) </code></pre> <p>Output:</p> <pre><code> id a b c d e 0 10 1 2 3 2 3 1 11 2 3 4 2 3 </code></pre>
python|pandas
0
4,230
57,186,587
Pandas: replace repeated values with blanks groupby like
<p>I got dataframe with columns got groups of repeated values. What i want is to keep only first item in such columns.</p> <p>I've tried <code>df = df.groupby(['author', 'key'])</code> but don't know how to correctly get all rows. With <code>df.first()</code> only first rows will be printed.</p> <pre><code>import pandas as pd lst = [ ['juli', 'JIRA-1', 'assignee'], ['juli', 'JIRA-1', 'assignee'], ['nick', 'JIRA-1', 'timespent'], ['nick', 'JIRA-3', 'status'], ['nick', 'JIRA-3', 'assignee'], ['tom', 'JIRA-1', 'comment'], ['tom', 'JIRA-1', 'assignee'], ['tom', 'JIRA-2', 'status']] df = pd.DataFrame(lst, columns =['author', 'key', 'field']) #df = df.sort_values(by=['author', 'key']) &gt;&gt;&gt; df author key field 0 juli JIRA-1 assignee 1 juli JIRA-1 assignee 2 nick JIRA-1 timespent 3 nick JIRA-3 status 4 nick JIRA-3 assignee 5 tom JIRA-1 comment 6 tom JIRA-1 assignee 7 tom JIRA-2 status </code></pre> <p>what I got:</p> <pre><code>&gt;&gt;&gt; df.groupby(['author', 'key']).first() field author key juli JIRA-1 assignee nick JIRA-1 timespent JIRA-3 status tom JIRA-1 comment JIRA-2 status </code></pre> <p>what I want:</p> <pre><code>juli JIRA-1 assignee assignee nick JIRA-1 timespent JIRA-3 status assignee tom JIRA-1 comment assignee JIRA-2 status </code></pre>
<p>Looks like you need <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.duplicated.html" rel="nofollow noreferrer"><code>df.duplicated()</code></a> to find duplicates and <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html" rel="nofollow noreferrer"><code>df.loc[]</code></a> to assign blank spaces:</p> <pre><code>df.loc[df.duplicated(['author','key']),['author','key']]='' print(df) </code></pre> <hr> <pre><code> author key field 0 juli JIRA-1 assignee 1 assignee 2 nick JIRA-1 timespent 3 nick JIRA-3 status 4 assignee 5 tom JIRA-1 comment 6 assignee 7 tom JIRA-2 status </code></pre>
python|pandas|pandas-groupby
1
4,231
46,036,607
Transforming a Cassandra OrderedMapSerializedKey to a Python dictionary
<p>I have a column in Cassandra composed of a map of lists which when queried with the Python driver it returns an OrderedMapSerializedKey structure. This structure is a map of lists. I would like to put the whole query into pandas.</p> <p>To extract data from that OrderedMapSerializedKey structure, meaning to get the key and and use it as the label for a new column and keeping only the first element of the list as the value I use the approach mentioned <a href="https://stackoverflow.com/questions/41247345/python-read-cassandra-data-into-pandas/41484806#41484806">here</a> with some complex/dirty manipulation in the factory before returning the built DataFrame. </p> <p>A similar problem was asked <a href="https://stackoverflow.com/questions/42420260/how-to-convert-cassandra-map-to-pandas-dataframe">here</a>, without really an answer.</p> <p>Is there a better way to turn such an OrderedMapSerializedKey structure into a Python dictionary that can be readily loaded into a pandas DataFrame?</p>
<p>I think a general solution is to store the <code>OrderedMapSerializedKey</code> Cassandra structure as a <code>dict</code> in your dataframe column; you can then pass this value / column on to whatever needs it. General, because you may not know the actual keys in the Cassandra rows (different rows may have different keys inserted).</p> <p>So here is the solution I've tested; you only have to improve the <a href="https://stackoverflow.com/questions/41247345/python-read-cassandra-data-into-pandas/41484806#41484806"><code>pandas_factory</code></a> function:</p> <hr> <p><strong>EDIT:</strong></p> <p>In the previous version I converted only the first (0th) row of the Cassandra result set (<code>rows</code> is a list of tuples where every tuple is a row in Cassandra); this version handles every row and value:</p> <pre><code>from cassandra.util import OrderedMapSerializedKey def pandas_factory(colnames, rows): # Convert tuple items of 'rows' into list (elements of tuples cannot be replaced) rows = [list(i) for i in rows] # Convert only 'OrderedMapSerializedKey' type list elements into dict for idx_row, i_row in enumerate(rows): for idx_value, i_value in enumerate(i_row): if type(i_value) is OrderedMapSerializedKey: rows[idx_row][idx_value] = dict(rows[idx_row][idx_value]) return pd.DataFrame(rows, columns=colnames) </code></pre> <p>You may still want to add an automatic check that there is at least one value before / after the Cassandra map field, or adjust the script above manually.</p> <p>Nice day!</p>
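<p>For completeness, a sketch of how such a factory is typically wired into the driver session, following the pattern of the linked answer (the keyspace/table names are placeholders, and <code>_current_rows</code> is a private attribute of the driver's <code>ResultSet</code>):</p> <pre><code># 'session' is an existing cassandra-driver Session object (assumed)
session.row_factory = pandas_factory
session.default_fetch_size = None   # fetch all rows in one page

rows = session.execute('SELECT * FROM my_keyspace.my_table')
df = rows._current_rows             # DataFrame built by pandas_factory
</code></pre>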
python|python-3.x|pandas|cassandra
3
4,232
50,975,649
Pandas Drop levels and attach to column title
<p>Using pivot function I have managed to obtain flatten data frame:</p> <pre><code>q_id 1 2 a_id 1 2 3 4 5 6 7 8 movie_id user_id start_rating 931 284 2.0 0 0 0 1 0 0 0 0 804 648 4.5 0 0 0 0 1 0 0 0 840 414 4.5 0 1 0 0 0 0 0 0 843 419 3.5 1 0 0 0 0 1 0 0 848 132 3.5 1 0 0 1 0 0 0 0 </code></pre> <p>My goal was to remove the indexes and attached level to the column name.</p> <pre><code>movie_id user_id start_rating 1_1 1_2 1_3 1_4 2_5 2_6 2_7 2_8 931 284 2.0 0 0 0 1 0 0 0 0 804 648 4.5 0 0 0 0 1 0 0 0 840 414 4.5 0 1 0 0 0 0 0 0 843 419 3.5 1 0 0 0 0 1 0 0 848 132 3.5 1 0 0 1 0 0 0 0 </code></pre> <p>I tried following steps: </p> <p><code>df.columns = ['_'.join(col).strip() for col in df.columns.values]</code></p> <p>but getting: </p> <pre><code> df.columns = ['_'.join(col).strip() for col in df.columns.values] TypeError: sequence item 0: expected string, int found </code></pre>
<p>The function <a href="https://docs.python.org/2/library/string.html#string.join" rel="nofollow noreferrer"><code>join</code></a> works with strings, and the elements of <code>col</code> are <code>int</code>, as the error shows. You need to convert the elements of <code>col</code> to <code>str</code>.</p> <pre><code>df.columns = ['_'.join([str(lev) for lev in col]).strip() for col in df.columns.values] </code></pre> <p>or, because here you have exactly two levels, simply do:</p> <pre><code>df.columns = ['{}_{}'.format(l1,l2) for l1, l2 in df.columns.values] </code></pre>
pandas
1
4,233
51,057,924
Validate in merge function pandas
<p>Today I was trying to go a little deeper in the <code>merge()</code> function of pandas, and I found the option <code>validate</code>, which, as reported in the documentation, can be:</p> <blockquote> <p>validate : string, default None</p> <p>If specified, checks if merge is of specified type.</p> <p>“one_to_one” or “1:1”: check if merge keys are unique in both left and right datasets. “one_to_many” or “1:m”: check if merge keys are unique in left dataset. “many_to_one” or “m:1”: check if merge keys are unique in right dataset. &quot;many_to_many” or “m:m”: allowed, but does not result in checks.</p> </blockquote> <p>I have looked around to find a working example on where and how to use this function, but I couldn't find any. Moreover when I tried to apply it to a group of <code>DataFrame</code>'s I was merging, it didn't seem to change the output. Can anyone give me a working example, to make me understand it better?</p> <p>Thanks in advance,</p> <p>Mattia</p>
<p>The new <code>validate</code> param will raise a <code>MergeError</code> if the validation fails. Example:</p> <pre><code>df1 = pd.DataFrame({'a':list('aabc'),'b':np.random.randn(4)}) df2 = pd.DataFrame({'a':list('aabc'),'b':np.random.randn(4)}) print(df1) print(df2) a b 0 a -2.557152 1 a -0.145969 2 b -1.629560 3 c -0.233517 a b 0 a -0.352038 1 a 0.490438 2 b 0.319452 3 c -0.599481 </code></pre> <p>Now if we merge on column <code>'a'</code> without <code>validate</code>:</p> <pre><code>In[39]: df1.merge(df2, on='a') Out[39]: a b_x b_y 0 a -2.557152 -0.352038 1 a -2.557152 0.490438 2 a -0.145969 -0.352038 3 a -0.145969 0.490438 4 b -1.629560 0.319452 5 c -0.233517 -0.599481 </code></pre> <p>This works, but we get multiple rows for 'a' because column 'b' differs. Now if we pass <code>validate='1:1'</code>, we get an error:</p> <pre><code>MergeError: Merge keys are not unique in either left or right dataset; not a one-to-one merge </code></pre> <p>If we pass <code>validate='1:m'</code> we get a different error:</p> <pre><code>MergeError: Merge keys are not unique in left dataset;not a one-to-many merge </code></pre> <p>Again this fails the validation. If we pass <code>'m:m'</code>:</p> <pre><code>In[42]: df1.merge(df2, on='a',validate='m:m') Out[42]: a b_x b_y 0 a -2.557152 -0.352038 1 a -2.557152 0.490438 2 a -0.145969 -0.352038 3 a -0.145969 0.490438 4 b -1.629560 0.319452 5 c -0.233517 -0.599481 </code></pre> <p>no error occurs and we get the same merged df as if we had not passed the <code>validate</code> param.</p> <p>The api docs don't give an example, but the <a href="http://pandas.pydata.org/pandas-docs/stable/merging.html#merging-validation" rel="noreferrer">what's new</a> section does; the original <a href="https://github.com/pandas-dev/pandas/issues/16270" rel="noreferrer">github enhancement</a> also gives further background information</p>
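<p>Since a failed check raises <code>MergeError</code>, you can also use <code>validate</code> defensively; a small sketch:</p> <pre><code>from pandas.errors import MergeError

try:
    merged = df1.merge(df2, on='a', validate='1:1')
except MergeError as err:
    print('merge validation failed:', err)
</code></pre>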
python|pandas|validation|merge
15
4,234
66,522,791
Create a subset by filtering on Year
<p>I have a sample dataset as shown below:</p> <pre><code>| Id | Year | Price | |----|------|-------| | 1 | 2000 | 10 | | 1 | 2001 | 12 | | 1 | 2002 | 15 | | 2 | 2000 | 16 | | 2 | 2001 | 20 | | 2 | 2002 | 22 | | 3 | 2000 | 15 | | 3 | 2001 | 19 | | 3 | 2002 | 26 | </code></pre> <p>I want to subset the dataset so that I can consider the values only for last two years. I want to create a variable 'end_year' and pass a year value to it and then use it to subset original dataframe to take into account only the last two years. Since I have new data coming, so I wanted to create the variable. I have tried the below code but I'm getting error.</p> <pre><code>end_year=&quot;2002&quot; df1=df[(df['Year'] &gt;= end_year-1)] </code></pre>
<p>Per the comments, <code>Year</code> is type <code>object</code> in the raw data. We should first cast it to <code>int</code> and then compare with numeric <code>end_year</code>:</p> <pre class="lang-py prettyprint-override"><code>df.Year=df.Year.astype(int) # cast `Year` to `int` end_year=2002 # now we can use `int` here too df1=df[(df['Year'] &gt;= end_year-1)] </code></pre> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th></th> <th>Id</th> <th>Year</th> <th>Price</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>1</td> <td>2001</td> <td>12</td> </tr> <tr> <td>2</td> <td>1</td> <td>2002</td> <td>15</td> </tr> <tr> <td>4</td> <td>2</td> <td>2001</td> <td>20</td> </tr> <tr> <td>5</td> <td>2</td> <td>2002</td> <td>22</td> </tr> <tr> <td>7</td> <td>3</td> <td>2001</td> <td>19</td> </tr> <tr> <td>8</td> <td>3</td> <td>2002</td> <td>26</td> </tr> </tbody> </table> </div>
python|pandas|subset
1
4,235
66,359,164
How to normalize and create similarity matrix in Pyspark?
<p>I have seen many stack overflow questions about similarity matrix but they deal with RDD or other cases and I could not find the direct answer to my problem and I decided to post a new question.</p> <h1>Problem</h1> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd import pyspark from pyspark.sql import functions as F, Window from pyspark import SparkConf, SparkContext, SQLContext from pyspark.ml.feature import VectorAssembler from pyspark.ml.feature import StandardScaler,Normalizer from pyspark.mllib.linalg.distributed import IndexedRow, IndexedRowMatrix spark = pyspark.sql.SparkSession.builder.appName('app').getOrCreate() sc = spark.sparkContext sqlContext = SQLContext(sc) # pandas dataframe pdf = pd.DataFrame({'user_id': ['user_0','user_1','user_2'], 'apple': [0,1,5], 'good banana': [3,0,1], 'carrot': [1,2,2]}) # spark dataframe df = sqlContext.createDataFrame(pdf) df.show() +-------+-----+-----------+------+ |user_id|apple|good banana|carrot| +-------+-----+-----------+------+ | user_0| 0| 3| 1| | user_1| 1| 0| 2| | user_2| 5| 1| 2| +-------+-----+-----------+------+ </code></pre> <h1>Normalize and create Similarity Matrix using Pandas</h1> <pre class="lang-py prettyprint-override"><code>from sklearn.preprocessing import normalize pdf = pdf.set_index('user_id') item_norm = normalize(pdf,axis=0) # normalize each items (NOT users) item_sim = item_norm.T.dot(item_norm) df_item_sim = pd.DataFrame(item_sim,index=pdf.columns,columns=pdf.columns) apple good banana carrot apple 1.000000 0.310087 0.784465 good banana 0.310087 1.000000 0.527046 carrot 0.784465 0.527046 1.000000 </code></pre> <h1>Question: how to get the similarity matrix like above using PySpark?</h1> <p>I want to run KMeans on that data.</p> <pre class="lang-py prettyprint-override"><code>from pyspark.ml.feature import VectorAssembler from pyspark.ml.clustering import KMeans # I want to do this... model = KMeans(k=2, seed=1).fit(df.select('norm_features')) df = model.transform(df) df.show() </code></pre> <p>References</p> <ul> <li><a href="https://stackoverflow.com/questions/52542903/cosine-similarity-for-two-pyspark-dataframes">Cosine Similarity for two pyspark dataframes</a></li> <li><a href="https://stackoverflow.com/questions/43921636/apache-spark-python-cosine-similarity-over-dataframes">Apache Spark Python Cosine Similarity over DataFrames</a></li> </ul>
<pre><code>import pyspark.sql.functions as F df.show() +-------+-----+-----------+------+ |user_id|apple|good banana|carrot| +-------+-----+-----------+------+ | user_0| 0| 3| 1| | user_1| 1| 0| 2| | user_2| 5| 1| 2| +-------+-----+-----------+------+ </code></pre> <p>Swap rows and columns by unpivoting and pivoting:</p> <pre><code>df2 = df.selectExpr( 'user_id', 'stack(3, ' + ', '.join([&quot;'%s', `%s`&quot; % (c, c) for c in df.columns[1:]]) + ') as (fruit, items)' ).groupBy('fruit').pivot('user_id').agg(F.first('items')) df2.show() +-----------+------+------+------+ | fruit|user_0|user_1|user_2| +-----------+------+------+------+ | apple| 0| 1| 5| |good banana| 3| 0| 1| | carrot| 1| 2| 2| +-----------+------+------+------+ </code></pre> <p>Normalize:</p> <pre><code>df3 = df2.select( 'fruit', *[ ( F.col(c) / F.sqrt( sum([F.col(cc)*F.col(cc) for cc in df2.columns[1:]]) ) ).alias(c) for c in df2.columns[1:] ] ) df3.show() +-----------+------------------+-------------------+-------------------+ | fruit| user_0| user_1| user_2| +-----------+------------------+-------------------+-------------------+ | apple| 0.0|0.19611613513818404| 0.9805806756909202| |good banana|0.9486832980505138| 0.0|0.31622776601683794| | carrot|0.3333333333333333| 0.6666666666666666| 0.6666666666666666| +-----------+------------------+-------------------+-------------------+ </code></pre> <p>Do the matrix multiplication:</p> <pre><code>df4 = (df3.alias('t1').repartition(10) .crossJoin(df3.alias('t2').repartition(10)) .groupBy('t1.fruit') .pivot('t2.fruit', df.columns[1:]) .agg(F.first(sum([F.col('t1.'+c) * F.col('t2.'+c) for c in df3.columns[1:]]))) ) df4.show() +-----------+-------------------+-------------------+------------------+ | fruit| apple| good banana| carrot| +-----------+-------------------+-------------------+------------------+ | apple| 1.0000000000000002|0.31008683647302115|0.7844645405527362| |good banana|0.31008683647302115| 0.9999999999999999|0.5270462766947298| | carrot| 0.7844645405527362| 0.5270462766947298| 1.0| +-----------+-------------------+-------------------+------------------+ </code></pre>
python|pandas|apache-spark|pyspark|apache-spark-sql
6
4,236
57,695,946
pandas change attribute values to row values of object
<p>Data aggregation as parsed from the file at the moment:</p> <pre><code>obj price1*red price1*blue price2*red price2*blue a 5 7 10 12 b 15 17 20 22 </code></pre> <p>desired outcome:</p> <pre><code>obj color price1 price2 a red 5 10 a blue 7 12 b red 15 20 b blue 17 22 </code></pre> <p>This example is simplified. The data in the real use case consists of 404 columns and on the order of 10,000 rows. The data mostly has around 99 color positions and 4 different kinds of pricelists (there are always 4 pricelists).</p> <p>I already tried a different approach from another part I programmed before in Python:</p> <pre><code>df_pricelist = pd.melt(df_pricelist, id_vars=["object_nr"], var_name='color', value_name='prices') </code></pre> <p>but that approach was originally used to pivot data from a single attribute into multiple rows; in other words, only one cell for the different pricelists instead of multiple cells.</p> <p>There I also used assign to put the different blocks of the string into different column cells.</p> <p>To get all the relevant columns into the dataframe I use str.startswith, so that I don't have to know all the different colors there could be.</p>
<p>A solution that makes use of a <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/advanced.html#" rel="nofollow noreferrer">MultiIndex</a> as an intermediate step:</p> <pre><code>import pandas as pd # Construct example dataframe col_names = ["obj", "price1*red", "price1*blue", "price2*red", "price2*blue"] data = [ ["a", 5, 7, 10, 12], ["b", 15, 17, 20, 22], ] df = pd.DataFrame(data, columns=col_names) # Convert objects column into rows index df2 = df.set_index("obj") # Convert columns index into two-level multi-index by splitting name strings color_price_pairs = [tuple(col_name.split("*")) for col_name in df2.columns] df2.columns = pd.MultiIndex.from_tuples(color_price_pairs, names=("price", "color")) # Stack colors-level of the columns index into a rows index level df2 = df2.stack() df2.columns.name = "" # Optional: convert rows index (containing objects and colors) into columns df2 = df2.reset_index() </code></pre> <p>This is a print-out that shows both the original dataframe <code>df</code> and the result dataframe <code>df2</code>:</p> <pre><code>In [1] df Out[1]: obj price1*red price1*blue price2*red price2*blue 0 a 5 7 10 12 1 b 15 17 20 22 In [2]: df2 Out[2]: obj color price1 price2 0 a blue 7 12 1 a red 5 10 2 b blue 17 22 3 b red 15 20 </code></pre>
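<p>An alternative sketch for the same reshape is <code>pandas.wide_to_long</code>, which splits the column names on the <code>*</code> separator directly. This assumes every price column follows the <code>priceN*color</code> pattern used above:</p> <pre><code>df2 = pd.wide_to_long(df, stubnames=['price1', 'price2'], i='obj', j='color',
                      sep='*', suffix=r'\w+').reset_index()
</code></pre>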
python|pandas|pivot|multiple-columns|rows
0
4,237
57,303,028
How to work out mean after specific number of months for different customers using python?
<p>I have a <code>dataframe</code> of customers and how much they spend each month as shown below:</p> <pre><code>data =[['Armin',12,5,11,24,5,4,10,5],['Benji',10,12,10,32,4,18,0,0],['Casey',0,0,30,15,25,5,0,0]] df = pd.DataFrame(data, columns = ['Name','2019-01','2019-02','2019-03','2019-04','2019-05','2019-06','2019-07','2019-08']) </code></pre> <p>I need to work out the 3 month average for each customer starting from their specified month as shown by the dataframe below:</p> <pre><code>data2 = [['Armin','2019-04'],['Benji','2019-02'],['Casey','2019-03']] df2 = pd.DataFrame(data2, columns = ['Name','Specified Month']) </code></pre> <p>So for Armin the 3-month average starting from his specified month would be <code>(24 + 5 + 4)/3 = 11</code>.</p> <p>The expected result would be something similar to below:</p> <pre><code>df['Specified Average'] = [11,18,23.3] </code></pre>
<p>First get positions by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Index.get_indexer.html" rel="nofollow noreferrer"><code>Index.get_indexer</code></a> in <code>df</code>, then select next 3 values with <code>np.add.outer</code> and get <code>mean</code>:</p> <pre><code>N = 3 a = df.columns.get_indexer(df2['Specified Month']) df2['Specified Average'] = (np.mean(df.values[np.arange(len(df)), np.add.outer(np.arange(N), a)], axis=0) .astype(float)) print (df2) Name Specified Month Specified Average 0 Armin 2019-04 11.000000 1 Benji 2019-02 18.000000 2 Casey 2019-03 23.333333 </code></pre> <p>Another pandas only solution is more general - working if data not exist between both DataFrames and also working if next 3 months not exist for any datetime:</p> <pre><code>s = (df.reset_index() .melt(id_vars=['Name','index'], var_name='Specified Month') .merge(df2, how='left', indicator=True) .assign(groups=lambda x: x['_merge'].eq('both').astype(int).groupby(x['Name']).cumsum()) .query("groups != 0") .groupby('Name') .head(N) .sort_values('index') .groupby('Name', sort=False)['value'] .mean() ) print (s) Name Armin 11.000000 Benji 18.000000 Casey 23.333333 Name: value, dtype: float64 df2['Specified Average'] = s.values print (df2) Name Specified Month Specified Average 0 Armin 2019-04 11.000000 1 Benji 2019-02 18.000000 2 Casey 2019-03 23.333333 </code></pre>
python|pandas|mean
4
4,238
57,522,916
Round Datetime Object DOWN in Python
<p>I'm trying to round a <code>datetime</code> object DOWN in Python, and am having a few problems. There is lots on here about rounding <code>datetime</code> but I can't find anything specific to my needs.</p> <p>I'm trying to get a date range of 15 minute intervals, with <code>.now()</code> being the end point. To get my <code>end=</code> I do:</p> <p><code>pd.Timestamp.now().round('15min')</code></p> <p>which returns:</p> <p><code>2019-08-16 11:15:00</code> which is exactly what I want, however, if I run this at 11:23 say, it will return me <code>2019-08-16 11:30:00</code>, and that's not actually what I want, I want it to round down to <code>2019-08-16 11:15:00</code> up until the moment we strike 11:30.</p> <p>Is there a simple way to get it to round down as I haven't had any luck finding the answer if so. </p> <p>Cheers for any help</p>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Timestamp.floor.html" rel="nofollow noreferrer"><code>Timestamp.floor</code></a>:</p> <pre><code>print (pd.Timestamp('2019-08-16 11:15:00').floor('15min')) 2019-08-16 11:15:00 print (pd.Timestamp('2019-08-16 11:23:00').floor('15min')) 2019-08-16 11:15:00 print (pd.Timestamp('2019-08-16 11:30:00').floor('15min')) 2019-08-16 11:30:00 </code></pre> <p>For testing:</p> <pre><code>df = pd.DataFrame({'dates':pd.date_range('2009-01-01', freq='T', periods=20)}) df['new'] = df['dates'].dt.floor('15min') print (df) 0 2009-01-01 00:00:00 2009-01-01 00:00:00 1 2009-01-01 00:01:00 2009-01-01 00:00:00 2 2009-01-01 00:02:00 2009-01-01 00:00:00 3 2009-01-01 00:03:00 2009-01-01 00:00:00 4 2009-01-01 00:04:00 2009-01-01 00:00:00 5 2009-01-01 00:05:00 2009-01-01 00:00:00 6 2009-01-01 00:06:00 2009-01-01 00:00:00 7 2009-01-01 00:07:00 2009-01-01 00:00:00 8 2009-01-01 00:08:00 2009-01-01 00:00:00 9 2009-01-01 00:09:00 2009-01-01 00:00:00 10 2009-01-01 00:10:00 2009-01-01 00:00:00 11 2009-01-01 00:11:00 2009-01-01 00:00:00 12 2009-01-01 00:12:00 2009-01-01 00:00:00 13 2009-01-01 00:13:00 2009-01-01 00:00:00 14 2009-01-01 00:14:00 2009-01-01 00:00:00 15 2009-01-01 00:15:00 2009-01-01 00:15:00 16 2009-01-01 00:16:00 2009-01-01 00:15:00 17 2009-01-01 00:17:00 2009-01-01 00:15:00 18 2009-01-01 00:18:00 2009-01-01 00:15:00 19 2009-01-01 00:19:00 2009-01-01 00:15:00 </code></pre>
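<p>If you are working with a plain <code>datetime.datetime</code> rather than a pandas <code>Timestamp</code>, a small sketch of the same rounding-down logic is:</p> <pre><code>from datetime import datetime, timedelta

now = datetime.now()
# drop the seconds/microseconds and the minutes past the last quarter hour
floored = now - timedelta(minutes=now.minute % 15,
                          seconds=now.second,
                          microseconds=now.microsecond)
</code></pre>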
python|pandas|datetime
3
4,239
57,350,239
DecisionTreeRegressor score not calculated
<p>I'm trying to calculate the score of a DecisionTreeRegressor with the following code:</p> <pre><code>from sklearn import preprocessing from sklearn.model_selection import train_test_split from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor from sklearn.metrics import accuracy_score from sklearn import tree # some features are better using LabelEncoder like HouseStyle but the chance that they will affect # the target LotFrontage are small so we just use HotEncoder and drop unwanted columns later encoded_df = pd.get_dummies(train_df, prefix_sep="_", columns=['MSZoning', 'Street', 'Alley', 'LotShape', 'LandContour', 'Utilities', 'LotConfig', 'LandSlope', 'Neighborhood', 'Condition1', 'Condition2', 'BldgType', 'HouseStyle']) encoded_df = encoded_df[['LotFrontage', 'LotArea', 'LotShape_IR1', 'LotShape_IR2', 'LotShape_IR3', 'LotConfig_Corner', 'LotConfig_CulDSac', 'LotConfig_FR2', 'LotConfig_FR3', 'LotConfig_Inside']] # imputate LotFrontage with the mean value (we saw low outliers ratio so we gonna use the mean value) encoded_df['LotFrontage'].fillna(encoded_df['LotFrontage'].mean(), inplace=True) X = encoded_df.drop('LotFrontage', axis=1) y = encoded_df['LotFrontage'].astype('int32') X_train, X_test, y_train, y_test = train_test_split(X, y) classifier = DecisionTreeRegressor() classifier.fit(X_train, y_train) y_pred = classifier.predict(X_test) y_test = y_test.values.reshape(-1, 1) classifier.score(y_test, y_pred) print("Accuracy is: ", accuracy_score(y_test, y_pred) * 100) </code></pre> <p>when it's gets to calculating the score of the model I get the following error:</p> <pre><code>ValueError: Number of features of the model must match the input. Model n_features is 9 and input n_features is 1 </code></pre> <p>Not sure as to why it happens because according <a href="https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.score" rel="nofollow noreferrer">sklearn docs</a> the Test Samples are to be in the shape of <code>(n_samples, n_features)</code> and <code>y_test</code> is indeed in this shape:</p> <pre><code>y_test.shape # (365, 1) </code></pre> <p>and the True labels should be in the shape of <code>(n_samples) or (n_samples, n_outputs)</code> and <code>y_pred</code> is indeed in this shape:</p> <pre><code>y_pred.shape # (365,) </code></pre> <p>The dataset: <a href="https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data" rel="nofollow noreferrer">https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data</a></p>
<p>The first argument of the score function shouldn't be the target values of the test set, it should be the input features of the test set, so you should do</p> <pre><code>classifier.score(X_test, y_test) </code></pre> <p>Note also that for a <code>DecisionTreeRegressor</code> this returns the R² coefficient of determination, not an accuracy; <code>accuracy_score</code> only makes sense for classification targets.</p>
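<p>For evaluating the regressor, a sketch with regression-appropriate metrics (using the <code>y_test</code>/<code>y_pred</code> from your code) could look like:</p> <pre><code>from sklearn.metrics import mean_squared_error, r2_score

print('R^2:', r2_score(y_test, y_pred))            # same value as classifier.score(X_test, y_test)
print('MSE:', mean_squared_error(y_test, y_pred))  # mean squared error of the predictions
</code></pre>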
python|pandas|scikit-learn|data-science|decision-tree
2
4,240
70,504,394
How to use Wildcard in Numpy array operations?
<p>For instance, define a numpy array with <code>numpy.str_</code> dtype and perform the replace operation supported by <code>numpy.char</code>:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np a = np.array(['a-b-c-d','e-f-g-h'],np.str_) print (np.char.replace(a,'*-','i-')) </code></pre> <p>The returned result is <code>['a-b-c-d', 'e-f-g-h']</code>, but <code>['i-i-i-d', 'i-i-i-h']</code> is expected.</p>
<p>Is there any reason to use numpy arrays? You can't use wildcards with <code>numpy.char.replace</code>.</p> <p>I would suggest to use python lists here and the <code>re</code> module:</p> <pre><code>l = ['a-b-c-d', 'e-f-g-h'] import re out = [re.sub('.-', 'i-', i) for i in l] </code></pre> <p>Output: <code>['i-i-i-d', 'i-i-i-h']</code></p>
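<p>If the result is still needed as a NumPy array afterwards, the list comprehension can simply be wrapped back up; a sketch using the array <code>a</code> from the question:</p> <pre><code>out_arr = np.array([re.sub('.-', 'i-', s) for s in a])
</code></pre>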
python|numpy
3
4,241
51,152,776
How to save lmplot in pandas
<p>For reg plot, this is working:</p> <pre><code>sns_reg_plot = sns.regplot(x="X", y="y", data=df) sns_reg_fig = sns_reg_plot.get_figure() sns_reg_fig.savefig(path) </code></pre> <p>But for lm plot I get an error:</p> <pre><code>sns_lm_plot = sns.lmplot(x="X", y="y", hue="hue", data=df) .. sns_lm_fig = sns_lm_plot.get_figure() AttributeError: 'FacetGrid' object has no attribute 'get_figure' </code></pre>
<p>Just remove the <code>.get_figure()</code> call; <code>lmplot</code> returns a <code>FacetGrid</code>, which has its own <code>savefig</code> method:</p> <pre><code> sns_im_plot = sns.lmplot(x="X", y="y", hue="hue", data=df) sns_im_plot.savefig(path) </code></pre>
python|pandas|matplotlib|visualization|figures
1
4,242
51,614,014
Merging Pandas dataframes on same column identifier with improper output
<p><strong>Scenario:</strong> I have a code that reads a set of excel files from a directory and gathers the contents of each to a dataframe in a list then concatenates it. The code also reads another file where it gets the data for some identifiers into another dataframe.</p> <p><strong>Example of data in the concatenated dataframe from the list:</strong> </p> <pre><code>Iteration Run Value 9154aa 3 100 9154aa 7 112 9154aa 1 120 3148nf 77 58 3148nf 7 86 9421jh 23 27 9421jh 42 736 9421jh 4 44 9421jh 9 82 </code></pre> <p><strong>Example of the other dataframe:</strong></p> <pre><code>Iteration Date 9154aa 01012011 1582he 01052013 3148nf 01092011 9421jh 01012010 </code></pre> <p>The first DF has information for multiple iterations concatenated, while the additional DF has a piece of information for all iterations.</p> <p><strong>Objective:</strong> My objective is to put the date related to a iteration into the first dataframe (in every row that corresponds to that iterations).</p> <p><strong>Output Example:</strong></p> <pre><code>Iteration Run Value Date 9154aa 3 100 01012011 9154aa 7 112 01012011 9154aa 1 120 01012011 3148nf 77 58 01092011 3148nf 7 86 01092011 9421jh 23 27 01012010 9421jh 42 736 01012010 9421jh 4 44 01012010 9421jh 9 82 01012010 </code></pre> <p><strong>Issue:</strong> Although the script runs with no crashes, for some reason my output is duplicating one (or more) of my iteration entries.</p> <p><strong>Example of flawed output:</strong></p> <pre><code>Iteration Run Value Date 9154aa 3 100 01012011 9154aa 7 112 01012011 9154aa 1 120 01012011 3148nf 77 58 01092011 3148nf 77 58 01092011 3148nf 7 86 01092011 3148nf 7 86 01092011 9421jh 23 27 01012010 9421jh 42 736 01012010 9421jh 4 44 01012010 9421jh 9 82 01012010 </code></pre> <p>I have no idea of the reason for this behavior.</p> <p><strong>Question:</strong> What am I doing wrong?</p> <p><strong>Code:</strong> </p> <pre><code>sourcefolder = "\\Network\DGMS\2018" outputfolder = "\\Network\DGMS\2018" adjustmentinputs = "//Network/DGMS/Uploader_v1.xlsm" selectmonth = input("Please enter month ('January', 'February'...):") # Get Adjustments ApplyOnDates = pd.read_excel(open(adjustmentinputs, 'rb'), sheet_name='Calendar') # Get content all_files = glob.glob(os.path.join(sourcefolder, "*.xls*")) contentdataframes = [] contentdataframes2 = [] for f in all_files: df = pd.read_excel(f) df['Iteration'] = os.path.basename(f).split('.')[0].split('_')[0] mask = df.columns.str.contains('Base|Last|Fix') c2 = df.columns[~mask].tolist() df = df[c2] contentdataframes.append(df) print (f) concatenatedfinal = pd.concat(contentdataframes) # Date Adjustment ApplyOnDates = ApplyOnDates[["IT", selectmonth]] ApplyOnDates = ApplyOnDates.rename(index=str, columns={"IT": "Iteration", selectmonth: "Date"}) Datawithfixeddates = pd.DataFrame.merge(concatenatedfinal, ApplyOnDates, left_on='Iteration', right_on='Iteration', indicator=False) </code></pre> <p><strong>OBS:</strong> In the example I used only a small amount of data, while normally it would do it for dozens of iterations.</p>
<p>You need to use a left join here. As per the <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer">documentation</a>, left join preserves all of the values in the first DataFrame, subbing in values from the second dependent on the structure of the first.</p> <p>Set the parameter <code>pd.DataFrame.merge(how='left')</code></p> <p>For your data as below:</p> <pre><code>In[13]: print(df1) Out[13]: Iteration Run Value 0 9154aa 3 100 1 9154aa 7 112 2 9154aa 1 120 3 3148nf 77 58 4 3148nf 7 86 5 9421jh 23 27 6 9421jh 42 736 7 9421jh 4 44 8 9421jh 9 82 </code></pre> <p>and</p> <pre><code>In[15]: print(df2) Out[15]: Iteration Date 0 9154aa 01012011 1 1582he 01052013 2 3148nf 01092011 3 9421jh 01012010 </code></pre> <p>The following is true</p> <pre><code>In[16]: print(df1.merge(df2,left_on='Iteration',right_on='Iteration',how='left')) Out[16]: Iteration Run Value Date 0 9154aa 3 100 01012011 1 9154aa 7 112 01012011 2 9154aa 1 120 01012011 3 3148nf 77 58 01092011 4 3148nf 7 86 01092011 5 9421jh 23 27 01012010 6 9421jh 42 736 01012010 7 9421jh 4 44 01012010 8 9421jh 9 82 01012010 </code></pre>
python|pandas|dataframe
2
4,243
37,819,082
Error reached while attempting to create a numpy array
<p>Here's my code: </p> <pre><code>import numpy as np x = np.array[[1,2]] print x </code></pre> <p>Here's the output:</p> <pre><code>Traceback (most recent call last): File "Shear_Moment_Test.py", line 2, in &lt;module&gt; x = np.array[[1,2]] TypeError: 'builtin_function_or_method' object has no attribute '__getitem__' </code></pre> <p>Any advice would be appreciated!</p>
<p>The syntax you used is wrong: <code>np.array</code> is a function, so it must be called with parentheses; writing <code>np.array[[1,2]]</code> tries to index the function object itself, which raises the <code>TypeError</code>. Use <code>x = np.array([1,2])</code> instead.</p>
python|arrays|numpy
1
4,244
31,331,358
Unicode Encode Error when writing pandas df to csv
<p>I cleaned 400 excel files and read them into python using pandas and appended all the raw data into one big df.</p> <p>Then when I try to export it to a csv:</p> <pre><code>df.to_csv("path",header=True,index=False) </code></pre> <p>I get this error:</p> <pre><code>UnicodeEncodeError: 'ascii' codec can't encode character u'\xc7' in position 20: ordinal not in range(128) </code></pre> <p>Can someone suggest a way to fix this and what it means?</p> <p>Thanks</p>
<p>You have <code>unicode</code> values in your DataFrame. Files store bytes, which means all <code>unicode</code> have to be encoded into bytes before they can be stored in a file. You have to specify an encoding, such as <code>utf-8</code>. For example, </p> <pre><code>df.to_csv('path', header=True, index=False, encoding='utf-8') </code></pre> <p>If you don't specify an encoding, then the encoding used by <code>df.to_csv</code> defaults to <code>ascii</code> in Python2, or <code>utf-8</code> in Python3.</p>
python|pandas|export-to-csv|python-unicode
69
4,245
47,912,449
Pandas 0.21 reindex() after get_dummies()
<p><strong>Background</strong></p> <p>My project is doing a Pandas upgrade from 0.19.2 to 0.21.0. In the project, I have a DataFrame with one categorical column. And I use get_dummies() to encode it, and then use reindex() to filter columns. However, if the columns arg in reindex() contain non-encoded column, the reindex() fails.</p> <p><strong>Sample Code</strong></p> <p>The code below works for 0.19.2 but fails under 0.21.0.</p> <pre><code>df = pd.DataFrame.from_items([('GDP', [1, 2]),('Nation', ['AB', 'CD'])]) df = pd.get_dummies(df, columns=['Nation'], sparse=True) # SparseDataFrame df.reindex(columns=['GDP']) # Fails :/ </code></pre> <p>The error message is</p> <pre><code>df.reindex(columns=['GDP']) .... TypeError: values must be SparseArray </code></pre> <p><strong>What I Hope to Achieve</strong></p> <p>Use reindex(columns=...) to filter selected columns contain encoded and non-encoded columns. Thanks! </p> <p><strong>Update (2018-01-17)</strong></p> <p>An issue is created at <a href="https://github.com/pandas-dev/pandas/issues/18914" rel="nofollow noreferrer">GitHub</a>.</p>
<p>This certainly seems like a bug. As of v0.21, they've reworked a lot of their <code>reindex</code> API, so it seems something could've broken somewhere.</p> <p>I don't have an answer, but I do have a workaround, hopefully it should do: You'll need to first transpose, and <em>then</em> reindex.</p> <pre><code>df.T.reindex(index=['GDP']).T GDP 0 1 1 2 </code></pre>
python|pandas|dataframe
2
4,246
58,686,226
How to add an element to a NumPy array in a Python function
<p>I am trying to create a series of NumPy arrays from a text file using a pool of workers with multiprocessing module.</p> <pre><code>def process_line(line, x,y,z,t): sl = line.split() x = np.append(x,float(sl[0].replace(',',''))) y = np.append(y,float(sl[1].replace(',',''))) z = np.append(z,float(sl[2].replace(',',''))) t = np.append(t,float(sl[3].replace(',',''))) def txt_to_HDF_converter(name, path_file): #init objects x = np.empty(0) y = np.empty(0) z = np.empty(0) t = np.empty(0) pool = mp.Pool(4) jobs = [] with open(path_file) as f: for line in f: jobs.append(pool.apply_async(process_line,(line,x,y,z,t))) #wait for all jobs to finish for job in jobs: job.get() #clean up pool.close() </code></pre> <p>The problem comes when the arrays are assigned in the <code>process_line</code> function, as if arguments where passed by value, at the end of the cycle I end up with arrays with only one element. Any idea of how get around this?</p>
<p>You are passing the values as part of a tuple in the code here:</p> <pre><code> jobs.append(pool.apply_async(process_line,(line,x,y,z,t))) </code></pre> <p>Then you unpack this tuple implicitly in the function:</p> <pre><code>def process_line(line, x,y,z,t): </code></pre> <p>Then you do not change the existing values but instead create new ones with these lines:</p> <pre><code> x = np.append(x,float(sl[0].replace(',',''))) y = np.append(y,float(sl[1].replace(',',''))) z = np.append(z,float(sl[2].replace(',',''))) t = np.append(t,float(sl[3].replace(',',''))) </code></pre> <p>Let me repeat this: You do not change the original arrays (as you appear to expect). Instead you just use the old values to <em>create</em> new values which you then assign to the local variables <code>x</code>, <code>y</code>, <code>z</code>, and <code>t</code>. Then you leave the function and forget about the new values. I would say this can never have any effect (also not for the last value) outside of the function.</p> <p>You have several options of going around this.</p> <ol> <li><p>Use global variables. This is a quick fix but bad style and in the long run you will hate me for this advice. But if you just need it to work quickly, then this might be your option.</p></li> <li><p>Return your values. After creating the new values, return them somehow and make sure that the next call gets the previously returned values again as input. This is the functional approach.</p></li> <li><p>Pass your values by reference. You can do this by instead of passing <code>x</code> create a one-element list. See the code below on how to do this. Passing references is typical C-style programming and not very Pythonic (but it works). Lots of IDEs will warn you about doing it this way and the typical Python developer will have a hard time understanding what you are doing there. A nicer variant of this is not to use simple lists but to put your data into some kind of object which will be passed by reference.</p></li> </ol> <pre><code>x_ref = [x] y_ref = [y] y_ref = [y] t_ref = [t] with open(path_file) as f: for line in f: jobs.append(pool.apply_async(process_line,(line,x_ref,y_ref,z_ref,t_ref))) </code></pre> <p>Then the <code>process_line</code> needs to be adjusted to expect references as well:</p> <pre><code>def process_line(line, x_ref,y_ref,z_ref,t_ref): sl = line.split() x_ref[0] = np.append(x_ref[0],float(sl[0].replace(',',''))) y_ref[0] = np.append(y_ref[0],float(sl[1].replace(',',''))) z_ref[0] = np.append(z_ref[0],float(sl[2].replace(',',''))) t_ref[0] = np.append(t_ref[0],float(sl[3].replace(',',''))) </code></pre>
python|pass-by-reference|numpy-ndarray
1
4,247
58,651,915
pandas split cumsum to upper limits and then continue with remainder in different column
<p>I have a <code>DataFrame</code> such that:</p> <pre><code> Amt Date 01/01/2000 10 01/02/2000 10 01/03/2000 10 01/04/2000 10 01/05/2000 10 01/06/2000 10 01/07/2000 10 </code></pre> <p>Now suppose I have two storage facilities to store the <code>Amt</code> of product that I purchase; <strong>Storage 1</strong> which has a cap of 22.5, and <strong>Storage 2</strong> which has a capacity of 30. I would like to add both of these as columns and have them each sum cumulatively at a <strong>SPLIT</strong> quantity of <code>Amt</code> (for every 10, 5 goes into each). Once <strong>Storage 1</strong> reaches capacity, I would like the remainder to go into <strong>Storage 2</strong> until it becomes full, at which point the remainder would go into a <em>third</em> column, <strong>Sell</strong>. After this, the <code>Amt</code> can continue to accumulate in the <strong>Sell</strong> column for the remainder of the <code>DataFrame</code>, such that the output would look like: </p> <pre><code> Amt | Storage 1 | Storage 2 | Sell | Date 01/01/2000 10 5 5 0 01/02/2000 10 10 10 0 01/03/2000 10 15 15 0 01/04/2000 10 20 20 0 01/05/2000 10 22.5 27.5 0 01/06/2000 10 22.5 30 7.5 01/07/2000 10 22.5 30 17.5 </code></pre> <p>I am aware of <code>cumsum</code> but I am not sure how to set conditions on it, nor do I know how to retrieve the remainder value in the case the storage fills up.</p> <p>I apologize if this is unclear. If I am missing any necessary information, please let me know. Thanks in advance.</p>
<p>Use <code>np.select</code> to get the storage amount:</p> <pre><code>s = df["Amt"].cumsum() df["Storage 1"] = np.select([s&lt;=45, s&gt;45], [s/2,22.5]) df["Storage 2"] = np.select([s&lt;=52.5, s&gt;52.5], [s-df["Storage 1"], 30]) df["Sell"] = s-df["Storage 1"]-df["Storage 2"] print (df) # Amt Storage 1 Storage 2 Sell Date 01/01/2000 10 5.0 5.0 0.0 01/02/2000 10 10.0 10.0 0.0 01/03/2000 10 15.0 15.0 0.0 01/04/2000 10 20.0 20.0 0.0 01/05/2000 10 22.5 27.5 0.0 01/06/2000 10 22.5 30.0 7.5 01/07/2000 10 22.5 30.0 17.5 </code></pre>
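<p>A slightly more generic sketch of the same waterfall logic, which avoids the hand-picked <code>45</code>/<code>52.5</code> thresholds by clipping each tank at its capacity, gives the same numbers for this data:</p> <pre><code>s = df['Amt'].cumsum()
df['Storage 1'] = (s / 2).clip(upper=22.5)              # split 50/50 until tank 1 is full
df['Storage 2'] = (s - df['Storage 1']).clip(upper=30)  # overflow goes to tank 2
df['Sell'] = s - df['Storage 1'] - df['Storage 2']      # anything left over is sold
</code></pre>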
python|pandas|cumsum
2
4,248
58,947,771
Python Pandas - convert list into series
<p>I have an excel data set looks like this:</p> <p><a href="https://i.stack.imgur.com/Gr2yU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gr2yU.png" alt="enter image description here"></a></p> <p>for copy purpose:</p> <pre><code>ID buffer LocalHub@3c183d50 [intraCity_Simulator.Parcel@55078545, intraCity_Simulator.Parcel@75b895dd, intraCity_Simulator.Parcel@44227899, intraCity_Simulator.Parcel@696b0129, intraCity_Simulator.Parcel@86ec871, intraCity_Simulator.Parcel@7a0d8542, intraCity_Simulator.Parcel@67a58fba] LocalHub@d3a0fbe [intraCity_Simulator.Parcel@61b9a28c, intraCity_Simulator.Parcel@1b5d2e8b, intraCity_Simulator.Parcel@65911201, intraCity_Simulator.Parcel@2e53ab95, intraCity_Simulator.Parcel@464b73fa, intraCity_Simulator.Parcel@640ff28a, intraCity_Simulator.Parcel@77fc8d6c, intraCity_Simulator.Parcel@609051b0, intraCity_Simulator.Parcel@25e0c299, intraCity_Simulator.Parcel@436af74b, intraCity_Simulator.Parcel@24c3fb2, intraCity_Simulator.Parcel@130592c8, intraCity_Simulator.Parcel@444d20b1, intraCity_Simulator.Parcel@6d59d5b2, intraCity_Simulator.Parcel@764a25d3, intraCity_Simulator.Parcel@4bdd2c62] </code></pre> <p>I would like to re-arrange and display the list value as a column corresponding to the ID, e.g.</p> <pre><code>ID buffer LocalHub@3c183d50 intraCity_Simulator.Parcel@55078545 LocalHub@3c183d50 intraCity_Simulator.Parcel@75b895dd ... ... </code></pre>
<p>The solution for pandas 0.25+ is to remove the <code>[]</code> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.strip.html" rel="nofollow noreferrer"><code>Series.str.strip</code></a>, split the values into lists with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.split.html" rel="nofollow noreferrer"><code>Series.str.split</code></a>, and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.explode.html" rel="nofollow noreferrer"><code>DataFrame.explode</code></a>; finally <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>DataFrame.reset_index</code></a> with <code>drop=True</code> restores a default <code>RangeIndex</code>:</p> <pre><code>df = (df.assign(buffer = df['buffer'].str.strip('[]').str.split(',')) .explode('buffer') .reset_index(drop=True)) print (df) ID buffer 0 LocalHub@3c183d50 intraCity_Simulator.Parcel@55078545 1 LocalHub@3c183d50 intraCity_Simulator.Parcel@75b895dd 2 LocalHub@3c183d50 intraCity_Simulator.Parcel@44227899 3 LocalHub@3c183d50 intraCity_Simulator.Parcel@696b0129 4 LocalHub@3c183d50 intraCity_Simulator.Parcel@86ec871 5 LocalHub@3c183d50 intraCity_Simulator.Parcel@7a0d8542 6 LocalHub@3c183d50 intraCity_Simulator.Parcel@67a58fba 7 LocalHub@d3a0fbe intraCity_Simulator.Parcel@61b9a28c 8 LocalHub@d3a0fbe intraCity_Simulator.Parcel@1b5d2e8b 9 LocalHub@d3a0fbe intraCity_Simulator.Parcel@65911201 10 LocalHub@d3a0fbe intraCity_Simulator.Parcel@2e53ab95 11 LocalHub@d3a0fbe intraCity_Simulator.Parcel@464b73fa 12 LocalHub@d3a0fbe intraCity_Simulator.Parcel@640ff28a 13 LocalHub@d3a0fbe intraCity_Simulator.Parcel@77fc8d6c 14 LocalHub@d3a0fbe intraCity_Simulator.Parcel@609051b0 15 LocalHub@d3a0fbe intraCity_Simulator.Parcel@25e0c299 16 LocalHub@d3a0fbe intraCity_Simulator.Parcel@436af74b 17 LocalHub@d3a0fbe intraCity_Simulator.Parcel@24c3fb2 18 LocalHub@d3a0fbe intraCity_Simulator.Parcel@130592c8 19 LocalHub@d3a0fbe intraCity_Simulator.Parcel@444d20b1 20 LocalHub@d3a0fbe intraCity_Simulator.Parcel@6d59d5b2 21 LocalHub@d3a0fbe intraCity_Simulator.Parcel@764a25d3 22 LocalHub@d3a0fbe intraCity_Simulator.Parcel@4bdd2c62 </code></pre> <p>The solution for older pandas versions is to <code>repeat</code> the ID values by the lengths of the lists (via <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.len.html" rel="nofollow noreferrer"><code>Series.str.len</code></a>):</p> <pre><code>from itertools import chain splitted = df['buffer'].str.strip('[]').str.split(',') df = pd.DataFrame({ 'ID' : df['ID'].values.repeat(splitted.str.len()), 'buffer' : list(chain.from_iterable(splitted.tolist())) }) print (df) ID buffer 0 LocalHub@3c183d50 intraCity_Simulator.Parcel@55078545 1 LocalHub@3c183d50 intraCity_Simulator.Parcel@75b895dd 2 LocalHub@3c183d50 intraCity_Simulator.Parcel@44227899 3 LocalHub@3c183d50 intraCity_Simulator.Parcel@696b0129 4 LocalHub@3c183d50 intraCity_Simulator.Parcel@86ec871 5 LocalHub@3c183d50 intraCity_Simulator.Parcel@7a0d8542 6 LocalHub@3c183d50 intraCity_Simulator.Parcel@67a58fba 7 LocalHub@d3a0fbe intraCity_Simulator.Parcel@61b9a28c 8 LocalHub@d3a0fbe intraCity_Simulator.Parcel@1b5d2e8b 9 LocalHub@d3a0fbe intraCity_Simulator.Parcel@65911201 10 LocalHub@d3a0fbe intraCity_Simulator.Parcel@2e53ab95 11 LocalHub@d3a0fbe intraCity_Simulator.Parcel@464b73fa 12 LocalHub@d3a0fbe intraCity_Simulator.Parcel@640ff28a 13 LocalHub@d3a0fbe intraCity_Simulator.Parcel@77fc8d6c 14 LocalHub@d3a0fbe intraCity_Simulator.Parcel@609051b0 15 LocalHub@d3a0fbe intraCity_Simulator.Parcel@25e0c299 16 LocalHub@d3a0fbe intraCity_Simulator.Parcel@436af74b 17 LocalHub@d3a0fbe intraCity_Simulator.Parcel@24c3fb2 18 LocalHub@d3a0fbe intraCity_Simulator.Parcel@130592c8 19 LocalHub@d3a0fbe intraCity_Simulator.Parcel@444d20b1 20 LocalHub@d3a0fbe intraCity_Simulator.Parcel@6d59d5b2 21 LocalHub@d3a0fbe intraCity_Simulator.Parcel@764a25d3 22 LocalHub@d3a0fbe intraCity_Simulator.Parcel@4bdd2c62 </code></pre>
python|pandas
1
4,249
58,920,176
Pandas_datareader error SymbolWarning: Failed to read symbol: 'T', replacing with NaN
<p>I have this code which gets a list of symbols from wikipedia and then gets the stock data from yahoofinance. This is a simple code which was working fine up till a couple days ago but for some reason i getting the <code>unable to read symbol</code> error on many stocks. Is yahoo doing this? What can i do to fix this error. I cannot ignore this because over 50 symbols were NaN and when i rerun the code different symbols show up in the error</p> <p>pandas_datareader version: 0.8.1 Code:</p> <pre><code>import datetime import pandas as pd import numpy as np import csv from pandas_datareader import data as web import matplotlib import matplotlib.pyplot as plt import requests import bs4 as bs from urllib.request import urlopen from bs4 import BeautifulSoup import tqdm from pandas import DataFrame import seaborn as sns resp = requests.get('http://en.wikipedia.org/wiki/List_of_S%26P_500_companies') soup = bs.BeautifulSoup(resp.text, 'lxml') table = soup.find('table', {'class': 'wikitable sortable'}) tickers = [] for row in table.findAll('tr')[1:]: ticker = row.findAll('td')[0].text.strip() tickers.append(ticker) start = datetime.date(2008,11,1) end = datetime.date.today() # df = web.get_data_yahoo(tickers, start, end) df = web.DataReader(tickers, 'yahoo', start, end) </code></pre> <p>Error:</p> <pre><code>C:\ProgramData\Anaconda3\lib\site-packages\pandas_datareader\base.py:270: SymbolWarning: Failed to read symbol: 'T', replacing with NaN. warnings.warn(msg.format(sym), SymbolWarning) C:\ProgramData\Anaconda3\lib\site-packages\pandas_datareader\base.py:270: SymbolWarning: Failed to read symbol: 'BKR', replacing with NaN. warnings.warn(msg.format(sym), SymbolWarning) C:\ProgramData\Anaconda3\lib\site-packages\pandas_datareader\base.py:270: SymbolWarning: Failed to read symbol: 'BRK.B', replacing with NaN. warnings.warn(msg.format(sym), SymbolWarning) </code></pre>
<p>Looks like the date issue could be fixed by replacing tickers with a <code>'.'</code> to <code>'-'</code> per this <a href="https://github.com/pydata/pandas-datareader/issues/617#issuecomment-497620900" rel="nofollow noreferrer">github issue</a></p> <p>Also you do not need <code>requests</code> or <code>BeautifulSoup</code> just use <code>pd.read_html</code></p> <p>I successfully created a DataFrame with no warnings or errors in <code>python 3.6.8</code> and <code>pandas 24.2</code> for all 505 tickers. See example below:</p> <pre><code>import pandas as pd from pandas_datareader import data as web import datetime # no need for requests or BeautifulSoup use read_html df = pd.read_html('https://en.wikipedia.org/wiki/List_of_S%26P_500_companies')[0] # convert symbol column to list tickers = df['Symbol'].values.tolist() # list comprehension to replace data in strings t = [x.replace('.', '-') for x in tickers] start = datetime.date(2008,11,1) end = datetime.date.today() df2 = web.DataReader(t, 'yahoo', start, end) </code></pre> <p>Here is the list of all 505 tickers in the DataFrame:</p> <pre><code>print(*df2.columns.levels[1]) A AAL AAP AAPL ABBV ABC ABMD ABT ACN ADBE ADI ADM ADP ADS ADSK AEE AEP AES AFL AGN AIG AIV AIZ AJG AKAM ALB ALGN ALK ALL ALLE ALXN AMAT AMCR AMD AME AMG AMGN AMP AMT AMZN ANET ANSS ANTM AON AOS APA APD APH APTV ARE ARNC ATO ATVI AVB AVGO AVY AWK AXP AZO BA BAC BAX BBT BBY BDX BEN BF-B BIIB BK BKNG BKR BLK BLL BMY BR BRK-B BSX BWA BXP C CAG CAH CAT CB CBOE CBRE CBS CCI CCL CDNS CDW CE CERN CF CFG CHD CHRW CHTR CI CINF CL CLX CMA CMCSA CME CMG CMI CMS CNC CNP COF COG COO COP COST COTY CPB CPRI CPRT CRM CSCO CSX CTAS CTL CTSH CTVA CTXS CVS CVX CXO D DAL DD DE DFS DG DGX DHI DHR DIS DISCA DISCK DISH DLR DLTR DOV DOW DRE DRI DTE DUK DVA DVN DXC EA EBAY ECL ED EFX EIX EL EMN EMR EOG EQIX EQR ES ESS ETFC ETN ETR EVRG EW EXC EXPD EXPE EXR F FANG FAST FB FBHS FCX FDX FE FFIV FIS FISV FITB FLIR FLS FLT FMC FOX FOXA FRC FRT FTI FTNT FTV GD GE GILD GIS GL GLW GM GOOG GOOGL GPC GPN GPS GRMN GS GWW HAL HAS HBAN HBI HCA HD HES HFC HIG HII HLT HOG HOLX HON HP HPE HPQ HRB HRL HSIC HST HSY HUM IBM ICE IDXX IEX IFF ILMN INCY INFO INTC INTU IP IPG IPGP IQV IR IRM ISRG IT ITW IVZ JBHT JCI JEC JKHY JNJ JNPR JPM JWN K KEY KEYS KHC KIM KLAC KMB KMI KMX KO KR KSS KSU L LB LDOS LEG LEN LH LHX LIN LKQ LLY LMT LNC LNT LOW LRCX LUV LVS LW LYB M MA MAA MAC MAR MAS MCD MCHP MCK MCO MDLZ MDT MET MGM MHK MKC MKTX MLM MMC MMM MNST MO MOS MPC MRK MRO MS MSCI MSFT MSI MTB MTD MU MXIM MYL NBL NCLH NDAQ NEE NEM NFLX NI NKE NLOK NLSN NOC NOV NOW NRG NSC NTAP NTRS NUE NVDA NVR NWL NWS NWSA O OKE OMC ORCL ORLY OXY PAYX PBCT PCAR PEAK PEG PEP PFE PFG PG PGR PH PHM PKG PKI PLD PM PNC PNR PNW PPG PPL PRGO PRU PSA PSX PVH PWR PXD PYPL QCOM QRVO RCL RE REG REGN RF RHI RJF RL RMD ROK ROL ROP ROST RSG RTN SBAC SBUX SCHW SEE SHW SIVB SJM SLB SLG SNA SNPS SO SPG SPGI SRE STI STT STX STZ SWK SWKS SYF SYK SYY T TAP TDG TEL TFX TGT TIF TJX TMO TMUS TPR TRIP TROW TRV TSCO TSN TTWO TWTR TXN TXT UA UAA UAL UDR UHS ULTA UNH UNM UNP UPS URI USB UTX V VAR VFC VIAB VLO VMC VNO VRSK VRSN VRTX VTR VZ WAB WAT WBA WCG WDC WEC WELL WFC WHR WLTW WM WMB WMT WRK WU WY WYNN XEC XEL XLNX XOM XRAY XRX XYL YUM ZBH ZION ZTS print(len(df2.columns.levels[1])) 505 </code></pre>
python-3.x|pandas|pandas-datareader
1
4,250
70,075,182
'list' object has no attribute 'items' for saving list of array as mat file
<p>I want to save a list of NumPy arrays as a .mat file, but it raises the error <code>'list' object has no attribute 'items'</code>. You can see my attempt below:</p> <pre><code>import numpy as np import scipy.io output=[] for i in range(10): a=np.random.randint(0,100,size=(60,60,4)) output.append(a) scipy.io.savemat('test.mat', output) </code></pre>
<p>Here is the answer:</p> <pre><code>import numpy as np import scipy.io output=[] for i in range(10): a=np.random.randint(0,100,size=(60,60,4)) output.append(a) scipy.io.savemat('test.mat', mdict={'my_list': output}) </code></pre>
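<p>If you want to sanity-check the file afterwards, you can read it straight back with <code>loadmat</code>. A small sketch (note that whether the list comes back as one stacked numeric array or as a cell-style object array depends on how <code>savemat</code> converted it, so the printed shape is only illustrative):</p> <pre><code>from scipy.io import loadmat

loaded = loadmat('test.mat')
print(loaded.keys())                           # contains 'my_list' plus MATLAB header entries
arr = loaded['my_list']
print(type(arr), getattr(arr, 'shape', None))  # inspect how the list was stored
</code></pre>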
python|list|numpy|scipy|mat-file
1
4,251
70,170,938
Convert Pandas dataframe with multiindex to a list of dictionaries
<p>I have a pandas dataframe, which looks like</p> <pre><code> df = Index1 Index2 Index3 column1 column2 i11 i12 i13 2 5 i11 i12 i23 3 8 i21 i22 i23 4 5 </code></pre> <p>How to convert this into list of dictionaries with keys as <code>Index3, column1, column2</code> and values as in the respective cells.</p> <p>So, expected output:</p> <pre><code> [[{Index3: i13, column1: 2, column2: 5}, {Index3: i23, column1: 3, column2: 8}], [{Index3: i23, column1: 4, column2: 5}]] </code></pre> <p>Please note that the same values of <code>Index1</code> and <code>Index2</code> form 1 inner list and the values won't be repeated.</p>
<pre><code>import pandas as pd d = {'Index1': [&quot;i11&quot;, &quot;i12&quot;, &quot;i13&quot;], 'Index2': [&quot;i21&quot;, &quot;i22&quot;, &quot;i23&quot;], 'Index3': [&quot;i31&quot;, &quot;i32&quot;, &quot;i33&quot;], 'column1': [2, 3, 4], 'column2': [5, 8, 5]} df = pd.DataFrame(data=d) </code></pre> <p>This should do it (note that in <code>iloc</code> the row index comes first, then the column position):</p> <pre><code>a = [] for i in range(df.shape[0]): a.append({&quot;Index3&quot;: df.iloc[i, 2], &quot;column1&quot;: df.iloc[i, 3], &quot;column2&quot;: df.iloc[i, 4]}) </code></pre> <p>Res:</p> <pre><code>[{'Index3': 'i31', 'column1': 2, 'column2': 5}, {'Index3': 'i32', 'column1': 3, 'column2': 8}, {'Index3': 'i33', 'column1': 4, 'column2': 5}] </code></pre>
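<p>If you actually need the nested grouping by <code>Index1</code>/<code>Index2</code> asked for in the question (working on the question's frame with the three-level <code>MultiIndex</code>, not the flat example above), a <code>groupby</code>-based sketch could look like this, assuming the index levels really are named <code>Index1</code>, <code>Index2</code> and <code>Index3</code>:</p> <pre><code>out = (df.reset_index()
         .groupby(['Index1', 'Index2'])
         .apply(lambda g: g[['Index3', 'column1', 'column2']].to_dict('records'))
         .tolist())
print(out)
# [[{'Index3': 'i13', 'column1': 2, 'column2': 5}, {'Index3': 'i23', 'column1': 3, 'column2': 8}],
#  [{'Index3': 'i23', 'column1': 4, 'column2': 5}]]
</code></pre>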
python-3.x|pandas|dictionary
1
4,252
70,203,246
Speeding up similarity function using numpy and parallel processing?
<p>I've got a matrix of points (real shape is generally in the neighborhood of (8000,127000)):</p> <p><code>M = [[1,10,2],[10,2,2],[8,3,4],[2,1,9]]</code></p> <p>And a target:</p> <p><code>N = [1,2,10]</code></p> <p>I'm using this function to create an array of distances from N (which I then sort by distance):</p> <pre><code>similarity_scores = M.dot(N)/ (np.linalg.norm(M, axis=1) * np.linalg.norm(N)) </code></pre> <p>Which depending on the shape of M can be very fast or take upwards of a second or two. I'm using this for live search where I am creating N on the fly.</p> <p>Is there a way I can split up M and parallel process this function to gain speed? From my experience so far, multiprocessing requires loading a lot of data just to run the processes in parallel... Not something that seems to work on an on demand type function.</p>
<p>Depending on what your exact needs are, you may want to consider an alternate data structure. If you are searching for something like the <code>k</code> nearest neighbors to a given <code>N</code>, you might consider using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.html" rel="nofollow noreferrer"><code>scipy.spatial.KDTree</code></a>:</p> <pre><code>tree = scipy.spatial.KDTree(M) </code></pre> <p>You can then obtain the <code>k</code> nearest distances to a given <code>N</code> using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.KDTree.query.html#scipy.spatial.KDTree.query" rel="nofollow noreferrer"><code>query</code></a>:</p> <pre><code>dist = tree.query(N, k=10) </code></pre> <p>For a set of 3D points, this will be much faster than doing a brute-force search like you are proposing. However, for dimensions in the thousands (really anything &gt;10 or so), it's unlikely that you will get any speedup at all from a KDTree.</p>
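<p>If the dimensionality rules a KDTree out, you can still trim the brute-force cost a little by caching the row norms of <code>M</code> (they never change between searches) and using <code>argpartition</code> to pull out only the best matches instead of sorting the whole score array. A sketch, assuming <code>M</code> stays fixed while <code>N</code> changes per query:</p> <pre><code>import numpy as np

M_norms = np.linalg.norm(M, axis=1)            # compute once, reuse for every query

def top_k_similar(N, k=10):
    scores = M @ N / (M_norms * np.linalg.norm(N))
    idx = np.argpartition(-scores, k)[:k]      # k best indices, in no particular order
    return idx[np.argsort(-scores[idx])]       # ordered by descending similarity
</code></pre>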
python|numpy|multiprocessing
0
4,253
64,941,304
Unable to give Keras neural network multiple inputs
<p>I am trying to get a data pipeline of text based data into a neural network with two heads. Made use of the official documentation that tells you to zip it into a dictionary of values, but it did not work.</p> <pre><code>&lt;MapDataset shapes: (None, 32), types: tf.int64&gt; &lt;MapDataset shapes: (None, 32), types: tf.int64&gt; </code></pre> <p>These are the shapes of the data that will go into each head. Have been converted into sequences of ints using the <code>VectorizeLayer()</code></p> <p>This is the graph of the neural network</p> <p><a href="https://i.stack.imgur.com/6DHCH.png" rel="nofollow noreferrer">enter image description here</a></p> <p>I am constructing the final dataset using</p> <pre><code>final_dataset=tf.data.Dataset.from_tensors(( {&quot;input_1&quot;:vectorized_sen1,&quot;input_2&quot;:vectorized_sen2}, {&quot;output&quot;:label_db} )).batch(64) </code></pre> <p>But this is the error it keeps throwing</p> <pre><code>TypeError: Failed to convert object of type &lt;class 'tensorflow.python.data.ops.dataset_ops._NestedVariant'&gt; to Tensor. Contents: &lt;tensorflow.python.data.ops.dataset_ops._NestedVariant object at 0x7f9b073af780&gt;. Consider casting elements to a supported type. </code></pre>
<p>From what I can tell of your problem it seems that your two input layers are <em>already</em> datasets. from_tensors likely is expecting tensor objects, not datasets. For instance given a model of this shape:</p> <pre><code>input_a = keras.Input(10,name=&quot;initial_input&quot;) input_b = keras.Input(5, name=&quot;secondary_input&quot;) dense_1 = keras.layers.Dense(100, activation=&quot;relu&quot;)(input_a) concat = keras.layers.concatenate([dense_1, input_b]) layer = keras.layers.Dense(105, activation=&quot;relu&quot;)(concat) layer = keras.layers.Dense(10, activation=&quot;softmax&quot;, name=&quot;output&quot;)(layer) model = keras.Model([input_a, input_b], layer) model.summary() keras.utils.plot_model(model, &quot;multi-output.png&quot;, show_shapes=True) </code></pre> <p>Model shape:</p> <p><a href="https://i.stack.imgur.com/CKbMQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CKbMQ.png" alt="model shape" /></a></p> <p>Normally, I could provide inputs like this:</p> <pre><code>a = tf.ones([1000, 10], dtype=tf.dtypes.int64) b = tf.ones([1000, 5], dtype=tf.dtypes.int64) output = tf.random.normal([1000, 10]) input_dataset = tf.data.Dataset.from_tensor_slices(({'initial_input': a, 'secondary_input': b}, {'output': output})).batch(25) </code></pre> <p>This is fairly straight forward, as all of the data (a, b, output) are Tensors. In your case though it seems that some or all of the input data are already TF datasets. In this scenario you will want to zip the datasets into a single dataset, then map them to the proper structure. For example, with the exact same tensors converted to datasets first, I could use this:</p> <pre><code># Let's say for example that they're each already datasets and I need to combine them a_ds = tf.data.Dataset.from_tensor_slices(a) b_ds = tf.data.Dataset.from_tensor_slices(b) o_ds = tf.data.Dataset.from_tensor_slices(output) full_dataset = tf.data.Dataset.zip((a_ds, b_ds, o_ds)).map(lambda a, b, o: ({'initial_input': a, 'secondary input': b}, {'output': o}) ).batch(25) </code></pre> <p>In this example I turn each of the tensors into their own datasets. Then I zip them using the Dataset helper zip function. This behaves similar to Python's native zip function so the behavior is fairly intuitive. However, the dataset isn't quite in the right structure, so I map it into the proper structure using a simple lambda.</p> <p>For reference, here is the notebook used to tinker with this: <a href="https://colab.research.google.com/drive/1vVF2tbzLFZ-9pZql1uT3SBTvNloMwFpp?usp=sharing" rel="nofollow noreferrer">https://colab.research.google.com/drive/1vVF2tbzLFZ-9pZql1uT3SBTvNloMwFpp?usp=sharing</a></p>
tensorflow2.0|tensorflow-datasets|tf.keras
1
4,254
39,730,528
TypeError: Signature mismatch. Keys must be dtype <dtype: 'string'>, got <dtype:'int64'>
<p>While running the <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/learn/wide_n_deep_tutorial.py" rel="nofollow noreferrer">wide_n_deep_tutorial</a> program from TensorFlow on my dataset, the following error is displayed.</p> <pre><code>"TypeError: Signature mismatch. Keys must be dtype &lt;dtype: 'string'&gt;, got &lt;dtype:'int64'&gt;" </code></pre> <p><img src="https://i.stack.imgur.com/aqs3x.png" alt="enter image description here"></p> <p>Following is the code snippet:</p> <pre><code> def input_fn(df): """Input builder function.""" # Creates a dictionary mapping from each continuous feature column name (k) to # the values of that column stored in a constant Tensor. continuous_cols = {k: tf.constant(df[k].values) for k in CONTINUOUS_COLUMNS} # Creates a dictionary mapping from each categorical feature column name (k) # to the values of that column stored in a tf.SparseTensor. categorical_cols = {k: tf.SparseTensor( indices=[[i, 0] for i in range(df[k].size)], values=df[k].values, shape=[df[k].size, 1]) for k in CATEGORICAL_COLUMNS} # Merges the two dictionaries into one. feature_cols = dict(continuous_cols) feature_cols.update(categorical_cols) # Converts the label column into a constant Tensor. label = tf.constant(df[LABEL_COLUMN].values) # Returns the feature columns and the label. return feature_cols, label def train_and_eval(): """Train and evaluate the model.""" train_file_name, test_file_name = maybe_download() df_train=train_file_name df_test=test_file_name df_train[LABEL_COLUMN] = ( df_train["impression_flag"].apply(lambda x: "generated" in x)).astype(str) df_test[LABEL_COLUMN] = ( df_test["impression_flag"].apply(lambda x: "generated" in x)).astype(str) model_dir = tempfile.mkdtemp() if not FLAGS.model_dir else FLAGS.model_dir print("model directory = %s" % model_dir) m = build_estimator(model_dir) print('model succesfully build!') m.fit(input_fn=lambda: input_fn(df_train), steps=FLAGS.train_steps) print('model fitted!!') results = m.evaluate(input_fn=lambda: input_fn(df_test), steps=1) for key in sorted(results): print("%s: %s" % (key, results[key])) </code></pre> <p>Any help is appreciated.</p>
<p>It would help to see the output prior to the error message to determine which part of the process tripped this error, but the message says quite clearly that a string key was expected whereas an integer was given instead. I am only guessing, but are the column names set out correctly in the earlier part of your script? They could be the keys that are being referred to in this instance.</p>
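<p>As a concrete (hypothetical) check along those lines: if any of the columns listed in <code>CATEGORICAL_COLUMNS</code> actually holds integers, the <code>tf.SparseTensor</code> values built in <code>input_fn</code> will be int64 where the feature columns expect strings. Casting those columns before calling <code>fit</code> may be enough:</p> <pre><code># cast every categorical column to string so the SparseTensor values have a string dtype
for k in CATEGORICAL_COLUMNS:
    df_train[k] = df_train[k].astype(str)
    df_test[k] = df_test[k].astype(str)

print(df_train.dtypes)   # verify the categorical columns now show up as object
</code></pre>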
python|pandas|tensorflow
0
4,255
39,626,432
how to calculate entropy from np histogram
<p>I have an example of a histogram with:</p> <pre><code>mu1 = 10, sigma1 = 10 s1 = np.random.normal(mu1, sigma1, 100000) </code></pre> <p>and calculated </p> <pre><code>hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True) for i in hist1[0]: ent = -sum(i * log(abs(i))) print (ent) </code></pre> <p>Now I want to find the entropy from the given histogram array, but since np.histogram returns two arrays, I'm having trouble calculating the entropy. How can I just call on first array of np.histogram and calculate entropy? I would also get math domain error for the entropy even if my code above is correct. :( </p> <p>**Edit: How do I find entropy when Mu = 0? and log(0) yields math domain error?</p> <hr> <p>So the actual code I'm trying to write is: </p> <pre><code>mu1, sigma1 = 0, 1 mu2, sigma2 = 10, 1 s1 = np.random.normal(mu1, sigma1, 100000) s2 = np.random.normal(mu2, sigma2, 100000) hist1 = np.histogram(s1, bins=100, range=(-20,20), density=True) data1 = hist1[0] ent1 = -(data1*np.log(np.abs(data1))).sum() hist2 = np.histogram(s2, bins=100, range=(-20,20), density=True) data2 = hist2[0] ent2 = -(data2*np.log(np.abs(data2))).sum() </code></pre> <p>So far, the first example ent1 would yield nan, and the second, ent2, yields math domain error :(</p>
<p>You can calculate the entropy using vectorized code:</p> <pre><code>import numpy as np mu1 = 10 sigma1 = 10 s1 = np.random.normal(mu1, sigma1, 100000) hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True) data = hist1[0] ent = -(data*np.log(np.abs(data))).sum() # output: 7.1802159512213191 </code></pre> <p>But if you like to use a for loop, you may write:</p> <pre><code>import numpy as np import math mu1 = 10 sigma1 = 10 s1 = np.random.normal(mu1, sigma1, 100000) hist1 = np.histogram(s1, bins=50, range=(-10,10), density=True) ent = 0 for i in hist1[0]: ent -= i * math.log(abs(i)) print (ent) # output: 7.1802159512213191 </code></pre>
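<p>Regarding the edit about <code>nan</code> and the math domain error: with <code>density=True</code> some bins can be exactly zero and <code>log(0)</code> is undefined, so drop the empty bins first. If you want the discrete Shannon entropy of the histogram (rather than a sum over raw density values), also weight by the bin width. A sketch:</p> <pre><code>counts, edges = np.histogram(s1, bins=100, range=(-20, 20), density=True)
p = counts * np.diff(edges)     # probability mass per bin (sums to ~1)
p = p[p &gt; 0]                    # skip empty bins so log() never sees 0
ent = -(p * np.log(p)).sum()
print(ent)
</code></pre>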
python|numpy|histogram|entropy
9
4,256
69,554,229
How to convert the prediction of pytorch into normal text
<p>I have a PyTorch model and I am running prediction on it. After the prediction I get the output</p> <pre><code>tensor([[-3.4333]], grad_fn=&lt;AddmmBackward&gt;) </code></pre> <p>But I need it as a plain number, <code>-3.4333</code>. How can I do that?</p>
<p>Call <a href="https://pytorch.org/docs/stable/generated/torch.Tensor.item.html" rel="nofollow noreferrer"><code>.item</code></a> on your tensor to convert it to a standard python number.</p>
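<p>For example, with <code>prediction</code> standing in for whatever your model returned:</p> <pre><code>value = prediction.item()   # -3.4333... as a plain Python float
print(round(value, 4))      # -3.4333
</code></pre>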
python|pytorch|tensor
1
4,257
69,427,862
PyTorch attach extra connection when building model
<p>I have the following Resnet prototype on Pytorch:</p> <pre><code>Resnet_Classifier( (activation): ReLU() (model): Sequential( (0): Res_Block( (mod): Sequential( (0): Conv1d(1, 200, kernel_size=(5,), stride=(1,), padding=same) (1): ReLU() (2): BatchNorm1d(200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (3): Conv1d(200, 200, kernel_size=(5,), stride=(1,), padding=same) (4): ReLU() (5): BatchNorm1d(200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) (6): Conv1d(200, 200, kernel_size=(5,), stride=(1,), padding=same) (7): ReLU() (8): BatchNorm1d(200, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True) ) (shortcut): Conv1d(1, 200, kernel_size=(1,), stride=(1,), padding=same) ) (1): ReLU() (2): Flatten(start_dim=1, end_dim=-1) (3): Dropout(p=0.1, inplace=False) (4): Linear(in_features=40000, out_features=2, bias=True) (5): Softmax(dim=1) ) ) </code></pre> <p>Input sample shape is (1, 200).</p> <p>It seems to be absolutely okay but, when I try to get graph in <code>tensorboard</code>, I get the following structure: <a href="https://i.stack.imgur.com/SzdPe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SzdPe.png" alt="enter image description here" /></a></p> <p>Somehow my Residual block connected with Linear. Does this connection really corresponds my net structure?</p> <p>Model definition:</p> <pre><code>class Res_Block(nn.Module): def __init__(self, in_ch, out_ch, ks, stride, activation): super(Res_Block, self).__init__() self.mod = nn.Sequential( nn.Conv1d(in_ch, out_ch, ks, stride, padding='same'), deepcopy(activation), nn.BatchNorm1d(out_ch), nn.Conv1d(out_ch, out_ch, ks, stride, padding='same'), deepcopy(activation), nn.BatchNorm1d(out_ch), nn.Conv1d(out_ch, out_ch, ks, stride, padding='same'), deepcopy(activation), nn.BatchNorm1d(out_ch) ) self.shortcut = nn.Conv1d(in_ch, out_ch, kernel_size=1, stride=1, padding='same') def forward(self, X): return self.mod(X) + self.shortcut(X) layers = [] layers.append(Res_Block(1, 200, 5, 1, nn.ReLU())) layers.append(nn.ReLU()) layers.append(nn.Flatten()) layers.append(nn.Dropout(0.2)) layers.append(nn.Linear(200 * 200, 2)) layers.append(nn.Softmax(dim=1)) R = nn.Sequential(*layers) </code></pre>
<p>The model visualization seems incorrect: the main branch and the skip connection are encapsulated inside your <code>Res_Block</code> definition, so they should appear inside the red <code>Res_Block[0]</code> box rather than outside of it.</p>
pytorch|tensorboard|tensorboardx
0
4,258
69,384,704
Filtering large data set by year
<p>Working with a very large dataset that I need to be able to filter by year. I read the text file as a csv:</p> <pre><code>df1=pd.read_csv(filename, sep=&quot;\t&quot;, error_bad_lines=False, usecols=['ID','Date', 'Value1', 'Value2']) </code></pre> <p>And convert the Date column to a date:</p> <pre><code>df1['Date'] = pd.to_datetime(df1['Date'], errors='coerce') </code></pre> <p>I also convert all nulls to zeroes:</p> <pre><code>df2=df1.fillna(0) </code></pre> <p>At this point, my 'Date' field is listed as dtype &quot;Object&quot;, and the dates are formatted like this:</p> <pre><code>2018-02-09 00:00:00 </code></pre> <p>However, I'm not sure how to filter by year. When I try this code:</p> <pre><code>df3 = df2[df2['Date'].dt.year == 2018] </code></pre> <p>I get this error:</p> <pre><code>AttributeError: Can only use .dt accessor with datetimelike values </code></pre> <p>I think what is happening is some dates have been read in as null values, but I'm not sure if that's the case, and I'm not sure how to convert them to dates (a zero date is fine).</p> <p>Is my code to filter the data set correct? How can I get around this attribute error?</p> <p>Thanks!</p>
<p>You could also parse <code>Date</code> when reading the file. As @ALollz mentioned, you have some NaN values in <code>Date</code>, and when you replace them with 0 this changes the type of the column. If you just want to filter by the year then the code below should work. If you want to filter by year/month use <code>'%Y-%m'</code>, and for year/month/day use <code>'%Y-%m-%d'</code>.</p> <pre><code>df1 = pd.read_csv(filename, sep=&quot;\t&quot;, error_bad_lines=False, usecols=['ID', 'Date', 'Value1', 'Value2'], parse_dates=['Date']) df_filtered = df1[df1['Date'].dt.strftime('%Y') == '2018'] </code></pre>
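<p>Equivalently, once <code>Date</code> is parsed as datetime (and the <code>NaT</code> values are left alone rather than filled with 0, which is what turned the column back into <code>object</code>), the <code>.dt</code> accessor from your original attempt works as written:</p> <pre><code>df_filtered = df1[df1['Date'].dt.year == 2018]   # NaT rows simply drop out of the filter
</code></pre>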
python|pandas|datetime|large-data
0
4,259
53,913,511
How to implement SQL Row_number in Python Pandas?
<p>I am trying to number my dataframe records using SQL "Row_number over" function available in SQL but it results in error as shown in the image. Please note that I don't wish to number records using Pandas function. </p> <p>Here is the code</p> <pre><code>df1.head() </code></pre> <p>output of df1.head statement</p> <pre><code>date beef veal pork lamb_and_mutton broilers other_chicken turkey 0 1944-01-01 00:00:00.000000 751.0 85.0 1280.0 89.0 NaN NaN NaN 1 1944-02-01 00:00:00.000000 713.0 77.0 1169.0 72.0 NaN NaN NaN 2 1944-03-01 00:00:00.000000 741.0 90.0 1128.0 75.0 NaN NaN NaN 3 1944-04-01 00:00:00.000000 650.0 89.0 978.0 66.0 NaN NaN NaN 4 1944-05-01 00:00:00.000000 681.0 106.0 1029.0 78.0 NaN NaN NaN </code></pre> <hr> <pre><code>p = """SELECT ROW_NUMBER() OVER(ORDER BY date ASC) AS Row#, beef,veal FROM df1""" df1 = pysqldf(p) </code></pre> <p>Once I execute this statement it throws an error</p> <p>This code is from Python 3 version. Normal SQL queries work but looks like this row_number function isn't available/supported by Python. Can you please help me with this? I receive an operational error</p>
<p>The problem is pretty simple and you might have figured it out already: the # breaks the whole thing, as it is an unrecognized token.</p> <p>If you leave that out, your code should work.</p> <pre><code>from pandasql import sqldf q1='select beef, veal, ROW_NUMBER() OVER (ORDER BY date ASC) as RN FROM df1' df_new=sqldf(q1) </code></pre> <p>It is also good practice to name your headers so they don't collide with the basic syntax. Date and row can be functions in SQL, so it's better to go with 'RN' for the row column and 'date_' or 'date_of_purchase' for the date.</p>
python|sql|pandas
2
4,260
54,208,890
Function with IF and ELIF using pandas
<p>What is a better way (if there is one) to define a function that checks whether a pandas column is within a given range of integers?</p> <p>I have a column in a Pandas dataframe which I wanted to check whether the values are between a set range. I chose to do so by creating a function which accepts the dataframe as an argument and tests whether the column is within the range using IF and ELIF. This may be ok where the range is small, however if the range is large, the resulting IF, ELIF function can be daunting to maintain. Is there a better way to achieve this? </p> <p>My code that works- </p> <pre><code>def fn(dframe): if dframe['A'] &lt; 125: return 935 + 0.2 * dframe['A'] elif (dframe['A'] &gt;= 955) and (dframe['A'] &lt;= 974): return 921.2 + 0.2 * (dframe['A'] - 955) elif (dframe['A'] &gt;= 975) and (dframe['A'] &lt;= 1023): return 925.2 + 0.2 * (dframe['BCCH'] - 975) elif (dframe['A'] &gt;= 511) and (dframe['A'] &lt;= 885): return 1805.2 + 0.2 * (dframe['A'] - 512) </code></pre> <p>This code works as expected however if the range is large, the resultant function is difficult to manage.</p> <p>EDIT:</p> <p>Thanks @ycx, @Jorge and all- I like the readability of your code. However was wondering like @ycx's approach If I have the Min and Max values of the 'condlist' in a csv file - such as </p> <p><a href="https://i.stack.imgur.com/IulQY.png" rel="nofollow noreferrer">condlist_from_CSV_file</a></p> <p>then I can read that in to a dataframe. Now, I would like to check if every row of a column 'A' from another dataframe is between these limits and if true, then return the corresponding 'Choice', else return 'None' does that make sense? Desired Output -</p> <p><a href="https://i.stack.imgur.com/vHXGp.png" rel="nofollow noreferrer">output dataframe with check</a></p> <p>and so on..</p>
<p>You can make use of <code>np.select</code> to manage your conditions and options. This keeps the conditions and choices easy to maintain and uses vectorized <code>numpy</code> functions, which may also help speed up your code. Note that the compound conditions use the elementwise <code>&amp;</code> operator (with parentheses), since Python's <code>and</code> does not work on whole Series.</p> <pre><code>import numpy as np def fn(dframe): condlist = [ dframe['A'] &lt; 125, (dframe['A'] &gt;= 955) &amp; (dframe['A'] &lt;= 974), (dframe['A'] &gt;= 975) &amp; (dframe['A'] &lt;= 1023), (dframe['A'] &gt;= 511) &amp; (dframe['A'] &lt;= 885), ] choicelist = [ 935 + 0.2 * dframe['A'], 921.2 + 0.2 * (dframe['A'] - 955), 925.2 + 0.2 * (dframe['BCCH'] - 975), 1805.2 + 0.2 * (dframe['A'] - 512), ] output = np.select(condlist, choicelist) return output </code></pre>
python|pandas|function
0
4,261
38,218,691
Building and linking shared Tensorflow library on OSX El Capitan to call from Ruby via Swig
<p>I'm trying to help build a Ruby wrapper around <a href="https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html" rel="nofollow">Tensorflow</a> using <a href="http://www.swig.org/Doc1.3/Ruby.html#Ruby_nn11" rel="nofollow">Swig</a>. Currently, I'm stuck at making a shared build, <code>.so</code>, and exposing its C/C++ headers to Ruby. So the question is: How do I build a <code>libtensorflow.so</code> shared build including the full Tensorflow library so it's available as a shared library on OSX El Capitan (note: <code>/usr/lib/</code> is read-only on El Capitan)?</p> <h3>Background</h3> <p>In this <a href="https://github.com/chrhansen/ruby-tensorflow" rel="nofollow">ruby-tensorflow</a> project, I need to package a Tensorflow <code>.bundle</code> file, but whenever I <code>irb -Ilib -rtensorflow</code> or try to run the specs <code>rspec</code>, I get and errors that the basic numeric types are not defined, but they are clearly defined <a href="https://github.com/chrhansen/ruby-tensorflow/blob/master/ext/files/tensor_c_api.h#L73" rel="nofollow">here</a>. </p> <p>I'm guessing this happens because my <code>.so</code>-file was not created properly or something is not linked as it should. C++/Swig/Bazel are not my strong sides, I'd like to focus on learning Tensorflow and building a good wrapper in Ruby, but I'm pretty stuck at this point getting to that fun part!</p> <p>What I've done:</p> <ol> <li><code>git clone --recurse-submodules https://github.com/tensorflow/tensorflow</code></li> <li><code>cd tensorflow</code></li> <li><code>bazel build //tensorflow:libtensorflow.so</code> (wait 10-15min on my machine)</li> <li>Copied the generated <code>libtensorflow.so</code> (166.6 MB) to the <a href="https://github.com/chrhansen/ruby-tensorflow/tree/master/ext" rel="nofollow"><code>/ext</code>-folder</a></li> <li>Run the <code>ruby extconf.rb</code>, <code>make</code>, and <code>make install</code> <a href="https://github.com/Arafatk/ruby-tensorflow#installation" rel="nofollow">described in the project</a></li> <li>Run <code>rspec</code></li> </ol> <p>In desperation, I've also gone through the official <a href="https://www.tensorflow.org/versions/r0.9/get_started/os_setup.html#installing-from-sources" rel="nofollow">installation from source</a> several times, but I don't know if that, the last <code>sudo pip install /tmp/tensorflow_pkg/tensorflow-0.9.0-py2-none-any.whl</code>-step even creates a shared build or just exposes a Python interface.</p> <p>The guy, Arafat, who made the <a href="https://github.com/Arafatk/ruby-tensorflow" rel="nofollow">original repository</a> and made the instructions that I've followed, says his <code>libtensorflow.so</code> is <strike>4.5 GB on his Linux machine – so over 20X the size of the shared build on my OSX machine.</strike> UPDATE1: he says his <code>libtensorflow.so</code>-build is <strong>302.2 MB</strong>, 4.5GB was the size of the entire tensorflow folder.</p> <p>Any help or alternative approaches are very appreciated!</p>
<p>After more digging around, discovering <code>otool</code> (thanks Kristina) and better understanding what a <a href="https://stackoverflow.com/questions/2339679/what-are-the-differences-between-so-and-dylib-on-osx"><code>.so</code></a>-file is, the solution didn't require much change in my setup:</p> <h3>Shared Build</h3> <pre><code># Clone source files git clone --recurse-submodules https://github.com/tensorflow/tensorflow cd tensorflow # Build library bazel build //tensorflow:libtensorflow.so # Copy the newly shared build/library to /usr/local/lib sudo cp bazel-bin/tensorflow/libtensorflow.so /usr/local/lib </code></pre> <h3>Calling from Ruby using Swig</h3> <p>Follow the steps here, <a href="https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow" rel="nofollow noreferrer">https://github.com/chrhansen/ruby-tensorflow#install-ruby-tensorflow</a>, to run Swig, create a Makefile and <code>make</code></p> <p>When you run <code>make</code> you should see a line saying:</p> <pre><code>$ make $ linking shared-object libtensorflow.bundle </code></pre> <p>If your shared build is not accessible you'll see something like:</p> <pre><code>$ ld: library not found for -ltensorflow </code></pre> <h3>Simple tutorial</h3> <p>For those starting on this adventure, using C/C++ libraries in Ruby, this post was a good tutorial for me: <a href="http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/" rel="nofollow noreferrer">http://engineering.gusto.com/simple-ruby-c-extensions-with-swig/</a></p>
ruby|macos|tensorflow|swig|bazel
4
4,262
38,413,913
numpy einsum: nested dot products
<p>I have two <code>n</code>-by-<code>k</code>-by-<code>3</code> arrays <code>a</code> and <code>b</code>, e.g.,</p> <pre><code>import numpy as np a = np.array([ [ [1, 2, 3], [3, 4, 5] ], [ [4, 2, 4], [1, 4, 5] ] ]) b = np.array([ [ [3, 1, 5], [0, 2, 3] ], [ [2, 4, 5], [1, 2, 4] ] ]) </code></pre> <p>and I'd like to compute the dot-product of all pairs of "triplets", i.e.,</p> <pre><code>np.sum(a*b, axis=2) </code></pre> <p>A better way to do that is perhaps <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>einsum</code></a>, but I can't seem to get the indices straight.</p> <p>Any hints here?</p>
<p>You are losing the third axis on those two <code>3D</code> input arrays with that sum-reduction, while keeping the first two axes aligned. Thus, with <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.einsum.html" rel="nofollow"><code>np.einsum</code></a>, the first two subscripts are identical for both inputs, and the third subscript is identical too but is skipped in the output string, signalling that we are reducing along that axis for both inputs. The solution would therefore be -</p> <pre><code>np.einsum('ijk,ijk-&gt;ij',a,b) </code></pre>
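<p>A quick check against the explicit version, using the arrays from the question:</p> <pre><code>out = np.einsum('ijk,ijk-&gt;ij', a, b)
print(out)                                      # [[20 23]  [36 29]]
print(np.allclose(out, np.sum(a * b, axis=2)))  # True
</code></pre>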
python|numpy|numpy-einsum
3
4,263
38,298,452
compatibility of tensorflow and hadoop
<p>I want to use TensorFlow as a machine learning library and Hadoop as a big data framework, but I don't know whether they are compatible. I can't find any reference on the website.</p> <p>My questions are:</p> <ol> <li>Can I use TensorFlow with Hadoop?</li> <li>(If not) please recommend a big data framework that can be used with TensorFlow.</li> </ol>
<p>They are compatible. In my company we have applications running on Hadoop that use TensorFlow. To make it clear, think of TensorFlow as a Python library which can be used within Python. One example would be to write a MapReduce job in which the mapper groups/accumulates data, and the reducer contains your learning code written with TensorFlow.</p>
hadoop|tensorflow
0
4,264
38,216,848
Tensorflow Serving on Docker | write too long
<p>Trying to follow the tutorial "Using TensorFlow Serving via Docker": <a href="https://tensorflow.github.io/serving/docker" rel="nofollow">https://tensorflow.github.io/serving/docker</a></p> <p>I'm getting the error below:</p> <pre><code>$ docker build --pull -t $USER/tensorflow-serving-devel -f Dockerfile.devel . ERRO[1791] Can't add file /Users/bone/.docker/machine/machines/testD/disk.vmdk to tar: archive/tar: write too long Sending build context to Docker daemon 194.3 GB Error response from daemon: Untar re-exec error: exit status 1: output: write /.docker/machine/machines/default/disk.vmdk: no space left on device </code></pre>
<p>The mistake was running this command on the command line outside the docker machine. Once this command was run inside the docker machine, it worked fine.</p>
docker|tensorflow|tensorflow-serving
0
4,265
52,691,128
Group by ID and complete time series in Pandas
<p>I have a pandas Dataframe with observations of one ID and I have a problem similar to the one solved <a href="https://stackoverflow.com/questions/27823273/counting-frequency-of-values-by-date-using-pandas">here</a>. </p> <pre><code>Timestamp ID 2014-10-16 15:05:17 123 2014-10-16 14:56:37 148 2014-10-16 14:25:16 123 2014-10-16 14:15:32 123 2014-10-16 13:41:01 123 2014-10-16 12:50:30 148 2014-10-16 12:28:54 123 2014-10-16 12:26:56 123 2014-10-16 12:25:12 123 ... 2014-10-08 15:52:49 150 2014-10-08 15:04:50 150 2014-10-08 15:03:48 148 2014-10-08 15:02:27 200 2014-10-08 15:01:56 236 2014-10-08 13:27:28 147 2014-10-08 13:01:08 148 2014-10-08 12:52:06 999 2014-10-08 12:43:27 999 Name: summary, Length: 600 </code></pre> <p>On the mentioned post they show how to group by ID and also how to make the count.Using <code>df['Week/Year'] = df['Timestamp'].apply(lambda x: "%d/%d" % (x.week, x.year))</code> I have now this: </p> <pre><code> Timestamp ID Week/Year 0 2014-10-16 15:05:17 123 42/2014 1 2014-10-16 14:56:37 150 42/2014 2 2014-10-16 14:25:16 123 42/2014 </code></pre> <p>My problem is that now I want to make a time series so, actually, I need:</p> <pre><code>Category Week_42_2014 Week_43_2014 Week_44_2014 123 7 0 6 150 0 0 2 ... </code></pre> <p>This is, I need the weeks as a column, the categories as rows and also fill the gaps of the weeks with no observations. In my case I also need days, but I guess that it is really similar. </p> <p>Thanks, </p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.pivot_table.html" rel="nofollow noreferrer"><code>pd.pivot_table</code></a>:</p> <pre><code>res = df.pivot_table(index='ID', columns='Week/Year', aggfunc='count', fill_value=0) print(res) Timestamp Week/Year 41/2014 42/2014 ID 123 0 7 147 1 0 148 2 2 150 2 0 200 1 0 236 1 0 999 2 0 </code></pre>
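<p><code>pivot_table</code> only creates columns for the weeks that actually occur in the data. If you also need the empty weeks in between (and the daily version works the same way with <code>freq='D'</code>), one sketch is to group on a <code>Grouper</code> and reindex the columns afterwards, assuming <code>Timestamp</code> is already a datetime column:</p> <pre><code>counts = (df.groupby(['ID', pd.Grouper(key='Timestamp', freq='W')])
            .size()
            .unstack(fill_value=0))

# add any week in the overall range that had no observations at all
full_range = pd.date_range(counts.columns.min(), counts.columns.max(), freq='W')
counts = counts.reindex(columns=full_range, fill_value=0)
</code></pre>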
python|pandas|dataframe|time-series
3
4,266
46,352,688
accessing arrays stored in pandas dataframe
<p>I have a pandas dataframe in which one column contains 1-D numpy arrays and another contains scalar data for instance:</p> <pre><code>df = A B 0 x [0, 1, 2] 1 y [0, 1, 2] 2 z [0, 1, 2] </code></pre> <p>I want to get B for the row where <code>A=='x'</code> So I tried <code>df[df.A == 'x'].B.values</code> which gives me the output:</p> <pre><code>array([array([0, 1, 2])], dtype=object) </code></pre> <p>The output has an extra <code>array([])</code> around it. I get that Pandas is treating it like an object and not just data, and I have a way to access the array by using <code>df[df.A == 'x'].B.values[0]</code> instead. In the case of scalar data I can just use the syntax <code>df[df.A == 'x'].B</code> which is a lot cleaner than the <code>df[df.A == 'x'].B.values[0]</code> which I have to use.</p> <p>My question is: is there a better/cleaner/shorter way to access the data in the format I put it in? or is this just something I will have to live with?</p>
<p>The difference isn't the fact that the array is an object, but that the query you specify could return more than one object (hence the outer array()). If you're confident that the query will return only a single object, then you can use @Wen 's solution to use <code>.item()</code>:</p> <pre><code>In [1]: import pandas as pd In [2]: df = pd.DataFrame([ ...: dict(A='x', B=[0,1,2]), ...: dict(A='y', B=[0,1,2]), ...: dict(A='z', B=[0,1,2]), ...: ]) In [3]: df[df.A == 'x'].B.item() Out[3]: [0, 1, 2] </code></pre> <p>But based on the kind of query, you should at least consider checking the results to make sure:</p> <pre><code>In [4]: df = pd.DataFrame([ ...: dict(A='x', B=[0,1,2]), ...: dict(A='y', B=[0,1,2]), ...: dict(A='z', B=[0,1,2]), ...: dict(A='x', B=[3,3,3]), ...: ]) In [5]: df[df.A == 'x'].B.item() --------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-8-e0ad528e719e&gt; in &lt;module&gt;() ----&gt; 1 df[df.A == 'x'].B.item() ... ValueError: can only convert an array of size 1 to a Python scalar In [6]: df[df.A == 'x'].B.values Out[6]: array([[0, 1, 2], [3, 3, 3]], dtype=object) </code></pre>
python|arrays|pandas|numpy
2
4,267
46,558,735
numpy - create polynomial by its roots
<p>I'm trying to create a <code>numpy.polynomial</code> from the roots of the polynomial.</p> <p>I could only find a way to do that from the polynomial's coefficients.</p> <p>The way it works now, for the polynomial <code>x^2 - 3x + 2</code> I can create it like that:</p> <p><code>poly1d([1, -3, 2])</code></p> <p>I want to create it by its roots, which are <code>1, 2</code></p>
<p>Numpy has a function that does this: <a href="https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.polynomial.polynomial.polyfromroots.html#numpy.polynomial.polynomial.polyfromroots" rel="nofollow noreferrer"><code>numpy.polynomial.polynomial.polyfromroots</code></a></p> <p>Note that</p> <blockquote> <p>If a zero has multiplicity n, then it must appear in roots n times.</p> </blockquote>
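<p>Note that <code>polyfromroots</code> returns the coefficients lowest-degree first, which is the reverse of what <code>poly1d</code> expects. A quick example with the roots of <code>x^2 - 3x + 2</code>:</p> <pre><code>import numpy as np

coefs = np.polynomial.polynomial.polyfromroots([1, 2])
print(coefs)                  # [ 2. -3.  1.]  (constant term first)
p = np.poly1d(coefs[::-1])    # reverse to get poly1d's highest-degree-first order
print(p)                      # x^2 - 3x + 2
</code></pre> <p><code>np.poly([1, 2])</code> gives the same coefficients already in <code>poly1d</code> order.</p>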
python|numpy
0
4,268
58,212,973
Deep learning for computer vision: What after MNIST stage?
<p>I am trying to explore computer vision using deep learning techniques. I have gone through basic literature, made a NN of my own to classify digits using MNIST data(without using any library like TF,Keras etc, and in the process understood concepts like loss function, optimization, backward propagation etc), and then also explored Fashion MNIST using TF Keras.</p> <p>I applied my knowledge gained so far to solve a Kaggle problem(identifying a plant type), but results are not very encouraging.</p> <p>So, what should be my next step in progress? What should I do to improve my knowledge and models to solve more complex problems? What more books, literature etc should I read to move ahead of beginner stage?</p>
<p>You should try hyperparameter tuning; it will help improve your model's performance. Feel free to read various articles on the topic; fine-tuning your model is a natural next step now that you have a fundamental understanding of how the model works.</p>
tensorflow|deep-learning|computer-vision|mnist|kaggle
1
4,269
58,491,488
Iterate Through Python Dict for Specific Values
<p>I am using Python 3.6 and I have a dictionary like so:</p> <pre><code>{ "TYPE": { "0": "ELECTRIC", "1": "ELECTRIC", "2": "ELECTRIC", "3": "ELECTRIC", "4": "TELECOMMUNICATIONS" }, "ID": { "0": 13, "1": 13, "2": 13, "3": 13, "4": 24 }, "1/17/2019": { "0": 23, "1": 23, "2": 23, "3": 23, "4": 1 }, "DATE": { "0": "1/17/2019", "1": "2/28/2019", "2": "3/5/2019", "3": "3/28/2019", "4": "1/1/2019" } } </code></pre> <p>How can I iterate through that and access the values of only <code>Utility Type</code> and <code>ID</code>? My goal is to append them to a list of lists like so <code>[[ELECTRIC, 13], [ELECTRIC, 13], ...]</code></p> <p>Up to this point I have been able to access the values like so:</p> <pre><code>for key, value in addresses.items(): if key == 'UTILITY TYPE': for k, v in value.items(): print(v) elif key == 'ID': for k, v in value.items(): print(v) </code></pre> <p>but I can't figure out how to append the value of <code>v</code> to my list.</p>
<h1>Solution</h1> <p>You could do this using <code>pandas</code> library. Also, consider trying <code>pd.DataFrame(d)</code> to see if that could be of any use to you (since I don't know your final usecase).</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # d is your dictionary df = pd.DataFrame(d).T columns = df.columns labels = df.index.tolist() print('labels: {}\n'.format(labels)) [df[x].tolist() for x in columns] </code></pre> <p><strong>Output</strong>: </p> <pre><code>labels: ['TYPE', 'ID', '1/17/2019', 'DATE'] [['ELECTRIC', 13, 23, '1/17/2019'], ['ELECTRIC', 13, 23, '2/28/2019'], ['ELECTRIC', 13, 23, '3/5/2019'], ['ELECTRIC', 13, 23, '3/28/2019'], ['TELECOMMUNICATIONS', 24, 1, '1/1/2019']] </code></pre> <h2>Dummy Data</h2> <pre class="lang-py prettyprint-override"><code>d = { "TYPE": { "0": "ELECTRIC", "1": "ELECTRIC", "2": "ELECTRIC", "3": "ELECTRIC", "4": "TELECOMMUNICATIONS" }, "ID": { "0": 13, "1": 13, "2": 13, "3": 13, "4": 24 }, "1/17/2019": { "0": 23, "1": 23, "2": 23, "3": 23, "4": 1 }, "DATE": { "0": "1/17/2019", "1": "2/28/2019", "2": "3/5/2019", "3": "3/28/2019", "4": "1/1/2019" } } </code></pre>
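<p>If all you need is the <code>[[type, id], ...]</code> list itself, plain Python over the dictionary <code>d</code> above is enough, assuming the two inner dicts share the same <code>&quot;0&quot;, &quot;1&quot;, ...</code> keys:</p> <pre><code>pairs = [[d['TYPE'][k], d['ID'][k]] for k in d['TYPE']]
print(pairs)
# [['ELECTRIC', 13], ['ELECTRIC', 13], ['ELECTRIC', 13], ['ELECTRIC', 13], ['TELECOMMUNICATIONS', 24]]
</code></pre>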
python|pandas|dictionary
1
4,270
58,456,116
Format pandas output to csv
<p>I am new to python and pandas and have created a test web page with html code to use to help with learning how to pull the data and then format into CSV for use in excel. Below is the code I have come up with that puts it into a nice format but I am stuck on how to format it into a CSV file to import.</p> <p>Code: </p> <pre><code># Importing pandas import pandas as pd # The webpage URL whose table we want to extract url = "/home/dvm01/e007" # Assign the table data to a Pandas dataframe table = pd.read_html(url,**index_col=0**)[0] #table2 = pd.read_html(url)[0],pd.read_html(url)[1],pd.read_html(url)[6] # Print the dataframe print(table) #print(table2) # Store the dataframe in Excel file #table.to_excel("data.xlsx") </code></pre> <p>Output:</p> <pre><code> Account Account.1 ID: e007 Description: ABST: 198, SUR: J DOUTHIT Geo ID: 014.0198.0000 </code></pre> <p>What I am trying to figure out is how to remove the index for the rows and make the text before the first: to be a column header. In row 1 I have two: but everything after the first: should be the data for the column header.</p> <p>I would like to take the above current output and and have ID, Description, and Geo ID as the column headers and the text that comes after the ':' to be the data for each of the headers. </p> <p>I do not need 'Account' and 'Account.1' I believe these are being recognized as column headers. Below is what I would like the output to look like in Excel, but I cannot figure out how to format it correctly to export out to a CSV that can be imported. Maybe I do not even need to import or format into a CSV, the 'table.to_excel' function seems to not need that step.</p> <pre><code>+------+---------------------------+---------------+ | ID | Description | Geo ID | +------+---------------------------+---------------+ | e007 | ABST: 198, SUR: J Douthit | 014.0198.0000 | +------+---------------------------+---------------+ </code></pre> <p>I was able to remove the index numbers, by using index_col=0 above where I define the dfs variable. Not sure that is the best way but it does do what I was trying to accomplish for that portion.</p> <p>Since I am new to python I am having a hard time formatting my question into Google or StackOverflow to get the answers I am looking for. If someone could just point me in the right direction in what I am looking for, that would work but examples would be nice as well.</p> <p>Thanks for any guidance</p>
<p>So, to format your questions you can show us an example of what you want. Try something like this:</p> <pre><code>|id|name|data1|data2|date3|-url-| |--|----|-----|-----|-----|-----| |1 |xyz |datax|datay|dataz|x:url| |2 |xyz |datax|datay|dataz|x:url| |3 |xyz |datax|datay|dataz|x:url| ... </code></pre> <p>Then you can ask questions about how to create the right Dataframe output that fits your desired design:)</p> <p>You can also use this generator online: <a href="https://www.tablesgenerator.com/text_tables" rel="nofollow noreferrer">https://www.tablesgenerator.com/text_tables</a></p> <pre><code>+----+------+-------+-------+-------+------+ | Id | Name | Data1 | Data2 | Data3 | Url | +----+------+-------+-------+-------+------+ | 1 | xyz | datax | datay | dataz | xurl | +----+------+-------+-------+-------+------+ | 2 | xyz | datax | datay | dataz | xurl | +----+------+-------+-------+-------+------+ | 3 | xyz | datax | datay | dataz | xurl | +----+------+-------+-------+-------+------+ </code></pre> <p>Ok, now that you have your data table design, next I would ask you to try using Jupyter Notebook. This will let you test your dataframes line by line. Each test should be a new transformation of the dataset.</p> <p>How I see the workflow going to get to your needs: 1. Check what your current DataFrame columns are:</p> <pre><code>print(df.columns) </code></pre> <p>2. Use this command to rename your columns:</p> <pre><code>df.rename(columns={'old column 1':'ID', 'old column 2':'Description', 'old column 3':'Geo ID'}, inplace=True) </code></pre> <ol start="3"> <li><p>Use this command to rename index labels:</p> <p>df.rename(index={0:'zero',1:'one'}, inplace=True)</p></li> <li><p>Use this command to change a single cell value (the row label comes first, then the column label):</p> <pre><code>df.loc['--insert_row_here--', '--insert_column_here--'] = new_value </code></pre></li> </ol>
python|pandas
1
4,271
69,048,548
why are tensorflow/keras training and validation metrics way off from each other?
<p>a description of my project, I am trying to train a network that recognizes a picture containing a number from 0 to 9 and categorizing it as such. my model is as follows</p> <pre><code>model = Sequential( [ tf.keras.applications.MobileNetV2(include_top=False, input_shape=(224, 224, 3)), Flatten(), Dense(128), LeakyReLU(alpha=.3), Dense(128), LeakyReLU(alpha=.3), Dense(128), LeakyReLU(alpha=.3), Dense(128), LeakyReLU(alpha=.3), Dense(10, activation='softmax') ] </code></pre> <p>)</p> <pre><code>model.compile( optimizer='adam', loss='categorical_crossentropy', metrics=['categorical_accuracy','accuracy','mae'] </code></pre> <p>)</p> <p>I dont't think it is a matter of overfitting.The data is coming from a data generator that is creating new pictures with digits using opencv's putText function using random fonts, font sizes/thickness as well as random rotates and shifts meaning all the data is completely unique. I have also verified the data visually and there doesn't seem to be anything unusual with it. I have done two experiments. first I created two separate generators, a training and validation generator, at the end of the epoch, the the previous validation data becomes the training data and new data is created for the validation, and yet when this happened I didn't see the training metrics drop at all. Next I trained the model with a static set of training data and it uses that exact same data for validation.</p> <p>train_x,train_y=new_data(3200)</p> <p>train_x=train_x/255</p> <p>history = model.fit(train_x,train_y,steps_per_epoch=steps, epochs=15, verbose=1,validation_data=(train_x,train_y))</p> <p>and yet in that instance despite being the exact same data the validation metrics is drastically worse than the training metrics as shown in the attached images. Does anyone know what's going on? Am I just misunderstanding something about the training process in keras?</p> <p><a href="https://i.stack.imgur.com/RH84n.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>The problem is with the batch normalization layers in the MobileNetV2 model, specifically with the batch normalization momentum parameter, as discussed in:</p> <p><a href="https://stackoverflow.com/questions/65415799/fit-works-as-expected-but-then-during-evaluate-model-performs-at-chance">fit() works as expected but then during evaluate() model performs at chance</a></p> <p>A quick fix is to change the default momentum of 0.999 to 0.9.</p>
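<p>A hypothetical sketch of applying that fix to the model above (whether changing <code>momentum</code> after construction is picked up can depend on the Keras version, so verify it on your side):</p> <pre><code>import tensorflow as tf

base = model.layers[0]   # the MobileNetV2 sub-model in the Sequential definition
for layer in base.layers:
    if isinstance(layer, tf.keras.layers.BatchNormalization):
        layer.momentum = 0.9

# recompile so the change is definitely reflected, then train as before
model.compile(optimizer='adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy', 'accuracy', 'mae'])
</code></pre>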
python|tensorflow|keras
0
4,272
69,111,142
Pandas - Add value to a column in a for loop
<p>I'm trying a script and I want to add a value to each row of my Login column:</p> <p>If the result in the for loop is 200, I want to add &quot;yes&quot; to the Login column in the correct row.</p> <p>Thank you!</p> <pre><code>import pandas as pd from requests import get lista = pd.read_csv('sites.csv', sep=',') df = pd.DataFrame(lista, columns=['Site', 'Login']) newdf = df.assign(Site=df['Site'].map(str) + 'login') for i in newdf['Site']: result = get(i) if result.status_code == 200: print(i + '' ' login page') elif result.status_code == 404: print(i + '' ' not a login page') else : print(i + '' ' Not a login page') </code></pre> <p>the csv data:</p> <pre><code>Site, Login https://www.site1.com.br/, https://www.site2.com.br/, https://www.site3.com.br/, </code></pre>
<p>One way you could do it is to just iterate over the rows and add the value that way (check <code>status_code</code> on the response object, not the response itself):</p> <pre><code>for i in range(newdf.shape[0]): result = get(newdf.iloc[i, 0]) if result.status_code == 200: newdf.iloc[i, 1] = &quot;yes&quot; </code></pre> <p>and so on for the other status codes you want to handle.</p>
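<p>An equivalent, slightly more compact version using <code>apply</code> (each URL is still requested one at a time) might look like:</p> <pre><code>newdf['Login'] = newdf['Site'].apply(
    lambda url: 'yes' if get(url).status_code == 200 else '')   # leave other rows empty
</code></pre>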
python|pandas
0
4,273
69,082,127
Plot heatmap (kdeplot) with geopandas
<p>I have the following data stored in a <code>geopandas.DataFrame</code> object. <code>geometry</code> are polygons and <code>x</code> are the values I want to use as a heat scale.</p> <pre><code> id geometry x 9 01001 POLYGON ((-102.10641 22.06035, -102.10368 22.0... 33 19 01002 POLYGON ((-102.05189 22.29144, -102.05121 22.2... 2 29 01003 POLYGON ((-102.68569 22.09963, -102.69087 22.0... 0 39 01004 POLYGON ((-102.28787 22.41649, -102.28753 22.4... 0 49 01005 POLYGON ((-102.33568 22.05067, -102.33348 22.0... 22 </code></pre> <p>I can use the following code to plot a map and color each polygon according to the value in column <code>x</code>.</p> <pre class="lang-py prettyprint-override"><code>t.plot(column='x', cmap='coolwarm', legend=False) plt.axis('off') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/zyLpv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zyLpv.png" alt="bad heatmap" /></a></p> <p>This is not too bad, but considering I have the polygons and values in a single object, I was wondering if theres a way to turn this plot into a heatmap using <code>geopandas</code>.</p>
<p>I was recommended to use <code>geoplot</code>.</p> <p><code>geoplot.kdeplot</code> expects a <code>geopandas.DataFrame</code> object with one row per Point. That is, something along the lines of:</p> <pre><code> PointID geometry 0 204403876 POINT (-101.66700 21.11670) 1 204462769 POINT (-101.66700 21.11670) 2 144407530 POINT (-101.66700 21.11670) 3 118631118 POINT (-101.66700 21.11670) 4 118646035 POINT (-101.66700 21.11670) </code></pre> <p>And then plot these points over the map, which is passed as a separate object.</p> <p>To show this in code, suppose the polygons are stored in <code>df_map</code> and the points are stored in <code>df_points</code>.</p> <pre class="lang-py prettyprint-override"><code># Import geoplot import geoplot import geoplot.crs as gcrs # Plot heatmap ax = geoplot.kdeplot(df_points, projection=gcrs.AlbersEqualArea()) # Add polygons geoplot.polyplot(df_map, ax=ax) </code></pre> <p>Which should yield something along the lines of this.</p> <p><a href="https://i.stack.imgur.com/AbBz3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AbBz3.png" alt="nice heatmap" /></a></p> <p>Sadly, I cannot post my results because <code>projection=gcrs.AlbersEqualArea()</code> crashes my session, but I hope this helps someone in the future.</p>
python|matplotlib|heatmap|geopandas
0
4,274
61,056,779
Serialize custom dynamic layer in keras tensorflow
<p>I am rather new to tensorflow as well as Python (transfering from R). Currently I am working on a recommendation system in python using keras and tensorflow. The data is "unary", so I only know if someone clicked on something or if he didn't. </p> <p>The core model is build with the functional API and is a basic wide model (input = multi-hot encoded binary rating matrix, output = probability per class) with one hidden layer in between. To make the model easier to use, I want to be able to throw in class labels and output topN predictions with class labels and probabilities. So instead of "Input = [[0,1,0,1,0,0,0,0]], Ouput = [[0.3,0.1,0.6,0.0]" I want to be able to do like "Input = [['apple','orange','bean']], Ouput = [['lemon',banana'],[0.3,0.2]]".</p> <p>To do this, I train the basic model and then wrap two custom layers around the model, one at the beginning and one at the end (like here: <a href="https://towardsdatascience.com/customize-classification-model-output-layer-46355a905b86" rel="nofollow noreferrer">https://towardsdatascience.com/customize-classification-model-output-layer-46355a905b86</a>) (I also tried feature_columns, they did not really do it for me). To make the input custom layer, I had to set the layer to "dynamic = True", to enable eager execution. I could not find a way to create a layer for this "tokenization" without using eager execution. This works fine so far.</p> <p>But now I can not restore the saved model (neither using h5 nor save_model.pb). I also specified the "get_config" method for the custom layers, and everything works fine as long as I only safe the model with the second custom layer at the end. So I suppose the error occurs because the first layer is dynamic. So how do I serealize a dynamic custom layer in keras?</p> <p>I would really appreciate any help or even thoughts as I could not find any matching topic (or even any topic covering dynamic custom layers in general). 
Please find some code here:</p> <pre><code>class LabelInputLayer(Layer): def __init__(self, labels, n_labels, **kwargs): self.labels = labels self.n_labels = n_labels super(LabelInputLayer, self).__init__(**kwargs) def call(self, x): batch_size = tf.shape(x)[0] tf_labels = tf.constant([self.labels], dtype="int32") n_labels = self.n_labels x = tf.constant(x) tf_labels = tf.tile(tf_labels,[batch_size,1]) # go through every instance and multi-hot encode labels for j in tf.range(batch_size): index = [] for i in tf.range(n_labels): if tf_labels[j,i].numpy() in x[j,:].numpy(): index.append(True) else: index.append(False) # create initial rating matrix and append for each instance if j == 0: rating_te = tf.where(index,1,0) else: rating_row = tf.where(index,1,0) rating_te = tf.concat([rating_te,rating_row],0) rating_te = tf.reshape(rating_te, [batch_size,-1]) return [rating_te] def compute_output_shape(self, input_shape): return tf.TensorShape([None, self.n_labels]) # define get_config to enable serialization of the layer def get_config(self): config={'labels':self.labels, 'n_labels':self.n_labels} base_config = super(LabelInputLayer, self).get_config() return dict(list(base_config.items()) + list(config.items())) </code></pre> <p>Here I am creating the basic model:</p> <pre><code>#Create Functional Model--------------------- a = tf.keras.layers.Input(shape=[n_classes]) b = tf.keras.layers.Dense(4000, activation = "relu", input_dim = n_classes)(a) c = tf.keras.layers.Dropout(rate = 0.2)(b) d = tf.keras.layers.Dense(n_classes, activation = "softmax")(c) nn_model = tf.keras.Model(inputs = a, outputs = d) </code></pre> <p>And this is the final model (works in the notebook but can not be restored once saved, so no use in production):</p> <pre><code>input_layer = Input(shape = input_shape, name = "first_input") encode_layer = LabelInputLayer(labels = labels, n_labels = n_labels, dynamic = True, input_shape = input_shape)(input_layer) pre_trained = tf.keras.models.Sequential(nn_model.layers[1:])(encode_layer) decode_layer = LabelLimitLayer(labels, n_preds)(pre_trained) encoder_model = tf.keras.Model(inputs = input_layer, outputs = decode_layer) </code></pre> <p>Save and restore the model:</p> <pre><code>tf.saved_model.save(encoder_model, "encoder_model") model = tf.keras.models.load_model("encoder_model") </code></pre> <p>This is the error I receive if I want to restore the model in another notebook (Unfortunately I can also not use the "custom_objects" parameter in the load method, as I have to deploy the model from the save file only):</p> <pre><code>ValueError: Could not find matching function to call loaded from the SavedModel. Got: Positional arguments (1 total): * Tensor("x:0", shape=(None, 10), dtype=float32) Keyword arguments: {} Expected these arguments to match one of the following 0 option(s): </code></pre>
<p>I managed to get around the issue by using <code>tf.function</code>-compatible ops instead of <code>dynamic=True</code>. I did not, however, manage to save a dynamic layer in TensorFlow. Maybe that helps someone.</p>
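<p>For anyone hitting the same problem: the multi-hot lookup can usually be written with plain TensorFlow ops, so the layer no longer needs <code>dynamic=True</code> and therefore traces and serializes normally. The sketch below is only an illustration (not the original poster's exact layer) and it assumes the incoming items are integer ids with the same dtype as the stored labels:</p> <pre><code>import tensorflow as tf

class MultiHotLabelLayer(tf.keras.layers.Layer):
    # Multi-hot encodes a batch of integer label ids without eager-only ops.
    def __init__(self, labels, **kwargs):
        super().__init__(**kwargs)
        self.labels = list(labels)  # all known ids, e.g. [11, 42, 97, ...]

    def call(self, x):
        # x: (batch, n_items) integer ids, labels: (n_labels,)
        labels = tf.constant(self.labels, dtype=x.dtype)
        # compare every input id with every label id, then collapse the item axis
        hits = tf.equal(x[:, :, tf.newaxis], labels[tf.newaxis, tf.newaxis, :])
        return tf.cast(tf.reduce_any(hits, axis=1), tf.float32)

    def get_config(self):
        return {**super().get_config(), 'labels': self.labels}
</code></pre> <p>Because nothing in <code>call</code> touches <code>.numpy()</code>, the layer works in graph mode and can be saved and restored like any other custom layer.</p>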
python|tensorflow2.0|keras-layer|tf.keras
0
4,275
60,931,377
Efficiently select elements from an (x,y) field with a 2D mask in Python
<p>I have a large field of 2D-position data, given as two arrays <code>x</code> and <code>y</code>, where <code>len(x) == len(y)</code>. I would like to return the array of indices <code>idx_masked</code> at which <code>(x[idx_masked], y[idx_masked])</code> is masked by an N x N <code>int</code> array called <code>mask</code>. That is, <code>mask[x[idx_masked], y[idx_masked]] == 1</code>. The <code>mask</code> array consists of <code>0</code>s and <code>1</code>s only.</p> <p>I have come up with the following solution, but it (specifically, the last line below) is very slow, given that I have N x N = 5000 x 5000, repeated 1000s of times:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt # example mask of one corner of a square N = 100 mask = np.zeros((N, N)) mask[0:10, 0:10] = 1 # example x and y position arrays in arbitrary units x = np.random.uniform(0, 1, 1000) y = np.random.uniform(0, 1, 1000) x_bins = np.linspace(np.min(x), np.max(x), N) y_bins = np.linspace(np.min(y), np.max(y), N) x_bin_idx = np.digitize(x, x_bins) y_bin_idx = np.digitize(y, y_bins) idx_masked = np.ravel(np.where(mask[y_bin_idx - 1, x_bin_idx - 1] == 1)) plt.imshow(mask[::-1, :]) </code></pre> <p><a href="https://i.stack.imgur.com/EMmlR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EMmlR.png" alt="the mask itself"></a></p> <pre><code>plt.scatter(x, y, color='red') plt.scatter(x[idx_masked], y[idx_masked], color='blue') </code></pre> <p><a href="https://i.stack.imgur.com/ebBSt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ebBSt.png" alt="the masked particle field"></a></p> <p>Is there a more efficient way of doing this?</p>
<p>Given that <code>mask</code> overlays your field with identically-sized bins, you do not need to define the bins explicitly. <code>*_bin_idx</code> can be determined at each location from a simple floor division, since you know that each bin is <code>1 / N</code> in size. I would recommend using <code>1 - 0</code> for the total width (what you passed into <code>np.random.uniform</code>) instead of <code>x.max() - x.min()</code>, if of course you know the expected size of the range.</p> <pre><code>x0 = 0  # or x.min()
x1 = 1  # or x.max()
x_bin = (x1 - x0) / N

x_bin_idx = ((x - x0) // x_bin).astype(int)

# ditto for y
</code></pre> <p>This will be faster and simpler than digitizing, and avoids the extra bin at the beginning.</p> <p>For most purposes, you do not need <code>np.where</code>. 90% of the questions asking about it (including this one) should not be using <code>where</code>. If you want a fast way to access the necessary elements of <code>x</code> and <code>y</code>, just use a boolean mask. The mask is simply</p> <pre><code>selection = mask[x_bin_idx, y_bin_idx].astype(bool)
</code></pre> <p>If <code>mask</code> is already a boolean (which it should be anyway), the expression <code>mask[x_bin_idx, y_bin_idx]</code> is sufficient. It results in an array of the same size as <code>x_bin_idx</code> and <code>y_bin_idx</code> (which are the same size as <code>x</code> and <code>y</code>) containing the mask value for each of your points. You can use the mask as</p> <pre><code>x[selection]  # Elements of x in mask
y[selection]  # Elements of y in mask
</code></pre> <p>If you absolutely need the integer indices, <code>where</code> is still not your best option.</p> <pre><code>indices = np.flatnonzero(selection)
</code></pre> <p>OR</p> <pre><code>indices = selection.nonzero()[0]
</code></pre> <p>If your goal is simply to extract values from <code>x</code> and <code>y</code>, I would recommend stacking them together into a single array:</p> <pre><code>coords = np.stack((x, y), axis=1)
</code></pre> <p>This way, instead of having to apply indices twice, you can extract the values with just</p> <pre><code>coords[selection, :]
</code></pre> <p>OR</p> <pre><code>coords[indices, :]
</code></pre> <p>Depending on the relative densities of <code>mask</code> and <code>x</code> and <code>y</code>, either the boolean masking or linear indexing may be faster. You will have to time some relevant cases to get a better intuition.</p>
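<p>Putting the pieces together on the example data from the question (a sketch — it assumes the known 0 to 1 coordinate range and indexes the mask as <code>mask[row, col] = mask[y, x]</code>, the same convention as in the question):</p> <pre><code>import numpy as np

N = 100
mask = np.zeros((N, N), dtype=bool)
mask[0:10, 0:10] = True

x = np.random.uniform(0, 1, 1000)
y = np.random.uniform(0, 1, 1000)

# bin index by floor division; clip so a point at exactly 1.0 stays in range
x_bin_idx = np.clip((x * N).astype(int), 0, N - 1)
y_bin_idx = np.clip((y * N).astype(int), 0, N - 1)

selection = mask[y_bin_idx, x_bin_idx]            # boolean mask over the points
coords_in_mask = np.stack((x, y), axis=1)[selection]
</code></pre>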
python|arrays|numpy
2
4,276
71,650,564
Pandas DataFrame styler - How to style pandas dataframe as excel table?
<p><a href="https://i.stack.imgur.com/sT4sK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sT4sK.png" alt="enter image description here" /></a></p> <p>How to style the pandas dataframe as an excel table (alternate row colour)?</p> <p>Sample style:</p> <p><a href="https://i.stack.imgur.com/4qCoM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4qCoM.png" alt="enter image description here" /></a></p> <p>Sample data:</p> <pre><code>import pandas as pd import seaborn as sns df = sns.load_dataset(&quot;tips&quot;) </code></pre>
<p>If your final goal is to save <code>to_excel</code>, the only way to retain the styling after export is using the <code>apply</code>-based methods:</p> <ul> <li><a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.apply.html" rel="noreferrer"><code>df.style.apply</code></a> / <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.applymap.html" rel="noreferrer"><code>df.style.applymap</code></a> are the styling counterparts to <code>df.apply</code> / <code>df.applymap</code> and work analogously</li> <li><a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.apply_index.html" rel="noreferrer"><code>df.style.apply_index</code></a> / <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.applymap_index.html" rel="noreferrer"><code>df.style.applymap_index</code></a> are the index styling counterparts <strong>(requires pandas 1.4.0+)</strong></li> </ul> <p>For the given sample, use <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.apply.html" rel="noreferrer"><code>df.style.apply</code></a> to style each column with alternating row colors and <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.applymap_index.html" rel="noreferrer"><code>df.style.applymap_index</code></a> to style all row/col indexes:</p> <pre><code>css_alt_rows = 'background-color: powderblue; color: black;' css_indexes = 'background-color: steelblue; color: white;' (df.style.apply(lambda col: np.where(col.index % 2, css_alt_rows, None)) # alternating rows .applymap_index(lambda _: css_indexes, axis=0) # row indexes (pandas 1.4.0+) .applymap_index(lambda _: css_indexes, axis=1) # col indexes (pandas 1.4.0+) ).to_excel('styled.xlsx', engine='openpyxl') </code></pre> <p><a href="https://i.stack.imgur.com/2X3e3.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2X3e3.png" width="400"></a></p> <hr /> <p>If you only care about the appearance in Jupyter, another option is to set properties for targeted selectors using <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.set_table_styles.html" rel="noreferrer"><code>df.style.set_table_styles</code></a> <strong>(requires pandas 1.2.0+):</strong></p> <pre><code># pandas 1.2.0+ df.style.set_table_styles([ {'selector': 'tr:nth-child(even)', 'props': css_alt_rows}, {'selector': 'th', 'props': css_indexes}, ]) </code></pre>
python|pandas|dataframe|pandas-styles
6
4,277
71,549,682
Transform dictionary list into python dataframe
<p>I have a file with several dictionaries and I want to turn it into a pandas dataframe, but I can't. When I try to get the first value to be in a column, like &quot;Fuel station 1&quot; and everything else is in a second column all together.</p> <pre><code>{'Fuel station 1': {'00404850000317': {'01-01-2019': {'DataDaVenda': '2019-01-01 19:04:22', 'Descrição': 'GASOLINA COMUM', 'Preço': 4.289544}, '01-01-2020': {'DataDaVenda': '2020-01-01 19:18:09', 'Descrição': 'GASOLINA C COMUM (b:2)', 'Preço': 4.49}, '01-01-2022': {'DataDaVenda': '2021-12-31 19:24:20', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 6.49}}}, {'Fuel station 2': {'00404850000317': {'01-01-2019': {'DataDaVenda': '2019-01-01 19:04:22', 'Descrição': 'GASOLINA COMUM', 'Preço': 4.289544}, '01-01-2021': {'DataDaVenda': '2021-01-01 18:48:55', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 4.59}, '01-01-2022': {'DataDaVenda': '2021-12-31 19:24:20', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 6.49}}} </code></pre> <p>Desired output format:</p> <p><a href="https://i.stack.imgur.com/wV9yM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wV9yM.png" alt="output format" /></a></p>
<p>Well your data is nested on multiple levels. So first of all you are going to have to transform it into a format that pandas can handle. One way would be a records format (list of dicts), where each of the keys which belong to multiple entries are their own fields:</p> <pre><code>import pandas # slightly fixed your brackets data = { 'Fuel station 1': {'00404850000317': { '01-01-2019': {'DataDaVenda': '2019-01-01 19:04:22', 'Descrição': 'GASOLINA COMUM', 'Preço': 4.289544}, '01-01-2020': {'DataDaVenda': '2020-01-01 19:18:09', 'Descrição': 'GASOLINA C COMUM (b:2)', 'Preço': 4.49}, '01-01-2022': {'DataDaVenda': '2021-12-31 19:24:20', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 6.49}}}, 'Fuel station 2': {'00404850000317': { '01-01-2019': {'DataDaVenda': '2019-01-01 19:04:22', 'Descrição': 'GASOLINA COMUM', 'Preço': 4.289544}, '01-01-2021': {'DataDaVenda': '2021-01-01 18:48:55', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 4.59}, '01-01-2022': {'DataDaVenda': '2021-12-31 19:24:20', 'Descrição': 'GASOLINA C COMUM (b:1)', 'Preço': 6.49}}}} </code></pre> <p>To flat list of dicts:</p> <pre><code>reformatted_data = [] for fuel_st, v in data.items(): for id_, v_ in v.items(): for date, v__ in v_.items(): reformatted_data.append({&quot;fuel station&quot;: fuel_st, &quot;id&quot;: id_, &quot;date&quot;: date}) for k___, v___ in v__.items(): reformatted_data[-1][k___] = v___ df = pandas.DataFrame.from_records(reformatted_data) </code></pre> <p>Which returns:</p> <pre><code>print(df) &gt; fuel station id date DataDaVenda Descrição Preço 0 Fuel station 1 00404850000317 01-01-2019 2019-01-01 19:04:22 GASOLINA COMUM 4.289544 1 Fuel station 1 00404850000317 01-01-2020 2020-01-01 19:18:09 GASOLINA C COMUM (b:2) 4.490000 2 Fuel station 1 00404850000317 01-01-2022 2021-12-31 19:24:20 GASOLINA C COMUM (b:1) 6.490000 3 Fuel station 2 00404850000317 01-01-2019 2019-01-01 19:04:22 GASOLINA COMUM 4.289544 4 Fuel station 2 00404850000317 01-01-2021 2021-01-01 18:48:55 GASOLINA C COMUM (b:1) 4.590000 5 Fuel station 2 00404850000317 01-01-2022 2021-12-31 19:24:20 GASOLINA C COMUM (b:1) 6.490000 </code></pre>
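<p>As a side note, the same flattening can be written a bit more compactly as a dict comprehension keyed by the three levels, letting pandas build the index for you (a sketch over the same <code>data</code> variable as above):</p> <pre><code>records = {(station, id_, date): values
           for station, by_id in data.items()
           for id_, by_date in by_id.items()
           for date, values in by_date.items()}

df = pandas.DataFrame.from_dict(records, orient='index')
df.index = pandas.MultiIndex.from_tuples(df.index, names=['fuel station', 'id', 'date'])
df = df.reset_index()
</code></pre>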
python|pandas|dataframe|dictionary
2
4,278
71,640,786
Dask to_parquet throws exception "No such file or directory"
<p>The following Dask code attempts to store a dataframe in parquet, read it again, add a column, and store again the dataframe with the column added.</p> <p>This is the code:</p> <pre><code>import pandas as pd import dask.dataframe as dd pdf = pd.DataFrame({ 'height': [6.21, 5.12, 5.85], 'weight': [150, 126, 133] }) ddf = dd.from_pandas(pdf, npartitions=3) ddf.to_parquet('C:\\temp\\test3', engine='pyarrow', overwrite=True) ddf2 = dd.read_parquet('C:\\temp\\test3') ddf2['new_column'] = 1 ddf2.to_parquet('C:\\temp\\test3', engine='pyarrow', overwrite=True) # &lt;- this one fails </code></pre> <p>The error I get is:</p> <pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'C:/temp/test3/part.0.parquet' </code></pre> <p>If I check directory <code>temp3</code> is empty.</p> <p>I think that when the second <code>to_parquet</code> is executed, since <code>overwrite=True</code> it does an implicit <code>compute()</code> and the process starts in the <code>read_parquet</code>, but since overwrite deleted the files it doesn't find it. Is that the case?</p> <p>In any way, how to make this work? Note that in the real scenario the dataframe doesn't fit in memory.</p> <p><strong>UPDATE</strong></p> <p>I'm not trying to update the parquet file, I need to write it again overwriting the existing one.</p>
<p>This works: use a different file name when you write with <code>to_parquet</code>, then delete the old parquet directory afterwards (note that the cleanup needs <code>os</code> and <code>shutil</code>):</p> <pre><code>import os
import shutil

ddf = dd.from_pandas(pdf, npartitions=3)
ddf.to_parquet('C:\\temp\\OLD_FILE_NAME', engine='pyarrow', overwrite=True)

ddf2 = dd.read_parquet('C:\\temp\\OLD_FILE_NAME')
ddf2['new_column'] = 1
ddf2.to_parquet('C:\\temp\\NEW_FILE_NAME', engine='pyarrow', overwrite=True)

path_to_delete = os.path.dirname('C:\\temp\\OLD_FILE_NAME\\')
shutil.rmtree(path_to_delete)
</code></pre>
python|pandas|dask|dask-distributed
1
4,279
69,946,709
Filling a new column with values from a different column
<p>Supposing I have a dataframe like so : <a href="https://i.stack.imgur.com/rvi0P.png" rel="nofollow noreferrer">dataframe</a></p> <p>If I have to make a new column, which has the values from column 3 like so 4 N/A -1.135632 -1.044236 1.071804 0.271860 -1.087401 0.524988 -1.039268 0.844885 -1.469388 -0.968914</p> <p>i.e, entry 1 of column 4 is filled with entry 0 of column 3, entry 2 of column 4 is filled with entry 1 of column 3 and so on...until the nth entry in the 4th column is filled with the (n-1)th entry of the 3rd column</p>
<p><code>df['column_4'] = df['column_3'].shift(1)</code></p>
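<p>A quick illustration with made-up values (the column names are placeholders):</p> <pre><code>import pandas as pd

df = pd.DataFrame({'column_3': [4.0, -1.135632, -1.044236, 1.071804]})
df['column_4'] = df['column_3'].shift(1)
print(df)
#    column_3  column_4
# 0  4.000000       NaN
# 1 -1.135632  4.000000
# 2 -1.044236 -1.135632
# 3  1.071804 -1.044236
</code></pre> <p>The first entry of the new column is <code>NaN</code> because there is nothing above it to shift down.</p>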
python|pandas
1
4,280
69,980,921
.isin() function is returning an empty set when filtering an object column in DataFrame
<p>Reading and appending excel files to create DataFrame:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import os folder = r'C:\mypathtodocuments' files = os.listdir(folder) df = pd.DataFrame() for file in files: if file.endswith('.xlsx'): df = df.append(pd.read_excel(os.path.join(folder,file))) #Drop extra columns from wrong data df1 = df[['FIRST_NM', 'LAST_NM', 'CITY_AD']] </code></pre> <p>Preview of <code>CITY_AD</code> column:</p> <pre><code>&gt;&gt;&gt; df1[&quot;CITY_AD&quot;] 0 EL PASO 1 HOUSTON 2 HOUSTON 3 CONROE 4 MCKINNEY 5 MCKINNEY 6 KATY 7 TOMBALL 8 TOMBALL 9 SPRING 10 SPRING </code></pre> <p>Filter DataFrame with <code>.isin()</code> function to only include cities <code>HOUSTON</code> and <code>CONROE</code>:</p> <pre><code>df1[df1[&quot;CITY_AD&quot;].isin([&quot;HOUSTON&quot;,&quot;CONROE&quot;])] </code></pre> <p>This returns an empty set... How can I get it to filter correctly?</p>
<p>The city values read in from Excel most likely carry leading or trailing whitespace (e.g. <code>'HOUSTON '</code>), so the exact string comparison in <code>isin</code> never matches. Strip the whitespace first:</p> <pre class="lang-py prettyprint-override"><code>df1[&quot;CITY_AD&quot;] = df1[&quot;CITY_AD&quot;].str.strip()
df1[df1[&quot;CITY_AD&quot;].isin([&quot;HOUSTON&quot;,&quot;CONROE&quot;])]
</code></pre>
python|pandas|dataframe|isin
1
4,281
43,243,319
Is it possible to keep reference link between list and numpy array
<p>If I create a list in python, and assign a different list to it, changes of the first list are reflected in the second list:</p> <pre><code>a = [1, 2, 3] b = a a[0] = 0 print(b) &gt;&gt;&gt; [0, 2, 3] </code></pre> <p>Is it possible to achieve this behavior when creating a numpy array from a list? What I want:</p> <pre><code>import numpy as np a = [1, 2, 3] b = np.array(a) a[0] = 0 print(b) &gt;&gt;&gt; [ 0 2 3 ] </code></pre> <p>But what actually happens is that b is <code>[ 1 2 3 ]</code>. I realize that this is difficult due to dynamic resizing of the list. But if I could tell numpy that this list is never resized, it should work somehow. Is this behavior achievable? Or am I missing some really bad drawbacks?</p>
<p>Fundamentally the issue is that Python lists <em>are not really arrays</em>. OK, CPython lists are ArrayLists, but they are arrays of Py_Object pointers, so they can hold heterogenous data. See <a href="http://www.laurentluce.com/posts/python-list-implementation/" rel="nofollow noreferrer">here</a> for an excellent exposition on the implementation details of CPython lists. Also, they are resizable, and all the <code>malloc</code> and <code>realloc</code> gets taken care of under the hood. However, you <em>can</em> achieve something like what you want if you use vanilla Python arrays available in the <a href="https://docs.python.org/3/library/array.html" rel="nofollow noreferrer"><code>array</code></a> module.</p> <pre><code>&gt;&gt;&gt; import numpy as np # third party &gt;&gt;&gt; import array # standard library module </code></pre> <p>Let's make a <em>real</em> array:</p> <pre><code>&gt;&gt;&gt; a = array.array('i', [1,2,3]) &gt;&gt;&gt; a array('i', [1, 2, 3]) </code></pre> <p>We can use <code>numpy.frombuffer</code> if we want our <code>np.array</code> to share the underyling memory of the buffer:</p> <pre><code>&gt;&gt;&gt; arr = np.frombuffer(a, dtype='int32') &gt;&gt;&gt; arr array([1, 2, 3], dtype=int32) </code></pre> <h2>EDIT: WARNING</h2> <p>As stated by @user2357112 in the comments:</p> <blockquote> <p>Watch out - <code>numpy.frombuffer</code> is still using the old buffer protocol (or on Python 3, the compatibility functions that wrap the new buffer protocol in an old-style interface), so it's not very memory-safe. If you create a NumPy array from an <code>array.array</code> or <code>bytearray</code> with <code>frombuffer</code>, you must not change the size of the underlying array. Doing so risks arbitrary memory corruption and segfaults when you access the NumPy array</p> </blockquote> <p>Note, I had to explicitly pass <code>dtype='int32'</code> because I initialized my <code>array.array</code> with the <code>i</code> signed int typecode, which on my system corresponds to a 32 bit int. Now, presto:</p> <pre><code>&gt;&gt;&gt; a array('i', [1, 2, 3]) &gt;&gt;&gt; a[0] = 88 &gt;&gt;&gt; a array('i', [88, 2, 3]) &gt;&gt;&gt; arr array([88, 2, 3], dtype=int32) &gt;&gt;&gt; </code></pre> <p>Now, if we use <code>dtype=object</code>, we actually can share the underlying objects. However, with numerical types, we <em>can't mutate</em>, only replace. However, we can wrap a Python <code>int</code> in a class to make a mutable object:</p> <pre><code>&gt;&gt;&gt; class MutableInt: ... def __init__(self, val): ... self.val = val ... def __repr__(self): ... return repr(self.val) ... &gt;&gt;&gt; obj_list = [MutableInt(i) for i in range(1, 8)] &gt;&gt;&gt; obj_list [1, 2, 3, 4, 5, 6, 7] </code></pre> <p>Now, we create an array that consists of the same <em>objects</em>:</p> <pre><code>&gt;&gt;&gt; obj_array = np.array(obj_list, dtype=object) &gt;&gt;&gt; obj_array array([1, 2, 3, 4, 5, 6, 7], dtype=object) </code></pre> <p>Now, we can mutate the int wrapper in the list:</p> <pre><code>&gt;&gt;&gt; obj_list[0].val = 88 &gt;&gt;&gt; obj_list [88, 2, 3, 4, 5, 6, 7] </code></pre> <p>And the effects are visible in the <code>numpy</code> array!:</p> <pre><code>&gt;&gt;&gt; obj_array array([88, 2, 3, 4, 5, 6, 7], dtype=object) </code></pre> <p>Note, though, you've now essentially created a less useful version of a Python <code>list</code>, one that isn't resizable, and doesn't have the nice O(1) amortized <code>append</code> behavior. 
We also lose any memory efficiency gains that a <code>numpy</code> array might give you!</p> <p>Also, note that in the above the <code>obj_list</code> and <code>obj_array</code> are not sharing the same underlying buffer, they are making *two different arrays of holding the same Py_Obj pointer values:</p> <pre><code>&gt;&gt;&gt; obj_list[1] = {} &gt;&gt;&gt; obj_array array([88, 2, 3, 4, 5, 6, 7], dtype=object) &gt;&gt;&gt; obj_list [88, {}, 3, 4, 5, 6, 7] &gt;&gt;&gt; </code></pre> <p>We cannot access the underlying buffer to a python <code>list</code> because this is not exposed. Theoretically, they <em>could</em> if they exposed the buffer protocol: <a href="https://docs.python.org/3/c-api/buffer.html#bufferobjects" rel="nofollow noreferrer">https://docs.python.org/3/c-api/buffer.html#bufferobjects</a></p> <p>But they don't. <code>bytes</code> and <code>bytearray</code> objects <em>do</em> expose the buffer protocol. <code>bytes</code> are essentially Python 2 <code>str</code>, and <code>bytearray</code> is a mutable version of <code>bytes</code>, so they are essentially mutable <code>char</code> arrays like in C:</p> <pre><code>&gt;&gt;&gt; barr = bytearray([65, 66, 67, 68]) &gt;&gt;&gt; barr bytearray(b'ABCD') </code></pre> <p>Now, let's make a <code>numpy</code> array that shares the underlying buffer:</p> <pre><code>&gt;&gt;&gt; byte_array = np.frombuffer(barr, dtype='int8') &gt;&gt;&gt; byte_array array([65, 66, 67, 68], dtype=int8) </code></pre> <p>Now, we will see changes reflected across both objects:</p> <pre><code>&gt;&gt;&gt; byte_array[1] = 98 &gt;&gt;&gt; byte_array array([65, 98, 67, 68], dtype=int8) &gt;&gt;&gt; barr bytearray(b'AbCD') </code></pre> <p>Now, before you think you can use this to subvert the immutability of Python <code>bytes</code> objects, think again:</p> <pre><code>&gt;&gt;&gt; bs = bytes([65, 66, 67, 68]) &gt;&gt;&gt; bs b'ABCD' &gt;&gt;&gt; byte_array = np.frombuffer(bs, dtype='int8') &gt;&gt;&gt; byte_array array([65, 66, 67, 68], dtype=int8) &gt;&gt;&gt; bs b'ABCD' &gt;&gt;&gt; byte_array[1] = 98 Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; ValueError: assignment destination is read-only &gt;&gt;&gt; </code></pre>
python|arrays|numpy
3
4,282
72,450,166
Pandas Regex: Read specific columns only from csv with regex patterns
<p>Given a large CSV file(large enough to exceed RAM), I want to read only specific columns following some patterns. The columns can be any of the following: <code>S_0, S_1, ...D_1, D_2</code> etc. For example, a chunk from the data frame looks like this:</p> <p><a href="https://i.stack.imgur.com/fW1es.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fW1es.png" alt="enter image description here" /></a></p> <p>And the regex pattern would be for example anyu column that starts with <code>S</code>: <code>S_\d.*</code>.</p> <p>Now, how do I apply this with <code>pd.read_csv(/path/, __)</code> to read the specific columns as mentioned?</p>
<p>You can first read a few rows and use <code>DataFrame.filter</code> to find the matching columns:</p> <pre class="lang-py prettyprint-override"><code>cols = pd.read_csv('path', nrows=10).filter(regex='S_\d*').columns
df = pd.read_csv('path', usecols=cols)
</code></pre>
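<p>Alternatively, <code>usecols</code> also accepts a callable that is evaluated against each column name, which avoids the extra pass over the file (the path below is a placeholder):</p> <pre><code>import re
import pandas as pd

df = pd.read_csv('path', usecols=lambda c: re.match(r'S_\d', c) is not None)
</code></pre>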
python|regex|pandas|dataframe
2
4,283
72,268,400
How to proceed with calculations on dataframe
<p>I dont know how to proceed with a calculation on this database.</p> <p>Database example:</p> <pre><code>Indicator Market Sales Costs Volume Real Internal 30512 -16577 12469 Real External 23 -15 8 Real Other 65 -38 25 ... ... ... ... ... ... ... ... ... ... ... ... Budget Internal 0.0 0.0 0.0 Budget External 3.5 -2.3 60.0 Budget Other 6.2 -3.9 90.8 </code></pre> <p>First I need to collapse &quot;market&quot; into 1 by adding sales, costs and volume. i.e:</p> <pre><code>Indicator Market Sales Costs Volume Real Total 30600 -16630 12502 ... ... ... ... ... ... ... ... ... ... ... ... Budget Total 9.7 -6.2 150.8 </code></pre> <p>Then I need to calculate the &quot;Cost effect&quot; with the following formula:</p> <p>Cost effect: ((real costs/real volume)-(budget cost/budget volume)) x ((real volume + budget volume)/2).</p> <p>This would be: Cost effect: ((-16630/12502)-(-6.2/150.8))*(12502+150.8)/2 = -8155.2</p> <p>I've tried all day but without results. Should I use pandas for this?</p> <p>Any help would be greatly appreciated.</p>
<p>This works</p> <pre><code># aggregate Costs and Volumes by Indicator aggregate = df.groupby('Indicator')[['Costs', 'Volume']].sum() # plug the values into the cost effect formula cost_effect = (aggregate.loc['Real', 'Costs'] / aggregate.loc['Real', 'Volume'] - aggregate.loc['Budget', 'Costs'] / aggregate.loc['Budget', 'Volume']) * aggregate['Volume'].sum() / 2 # -8155.19213384214 </code></pre> <p>The latter outcome can be derived a little more concisely by using the difference between the ratios</p> <pre><code>cost_effect = (aggregate['Costs'] / aggregate['Volume']).diff().iat[-1] * aggregate['Volume'].sum() / 2 # -8155.19213384214 </code></pre> <p>If you need to group by 'Indicator' and for example 'Country':</p> <pre><code>aggregate = df.groupby(['Indicator', 'Country']).sum() </code></pre>
python|pandas|dataframe|formula
1
4,284
72,287,885
Concatenate two List in a 2D Array in Python with Numpy
<p>So, i have 2 list that i want to concatenate with numpy. For now, i'm tring to do something like this :</p> <p><code>LeGraphiqueMatLab = np.array([LesDatesMatLab, LeGraphique], dtype=np.float64)</code></p> <p>But it gives me an error saying : &quot;ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 2 dimensions. The detected shape was (2, 2) + inhomogeneous part.&quot;</p> <p>Do I need to use np.array on each list first and then try to add them ?</p> <p>Thanks</p>
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.concatenate.html" rel="nofollow noreferrer">np.concatenate</a> like that:</p> <pre><code>a = [1, 2] b = [5, 6] np.concatenate((a, b)) #output array([1, 2, 5, 6]) </code></pre>
python|arrays|numpy
0
4,285
72,280,810
I am getting "Failed precondition" error message
<p>I am attempting to run a GoogLeNet code, but when I run the following via the terminal:</p> <pre><code>python.exe googlenet_cifar10.py --model output/minigooglenet_cifar10.hdf5 --output output </code></pre> <p>I am new to python, so I attempted to research it but having issues finding a solution for it. This is out my output.</p> <pre><code>PS C:\Users\JoshG\PycharmProjects\GoogLeNet&gt; python.exe googlenet_cifar10.py --model output/minigooglenet_cifar10.hdf5 --output output 2022-05-17 23:35:08.398853: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll [INFO] loading CIFAR-10 data... [INFO] compiling model... 2022-05-17 23:35:10.617462: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library nvcuda.dll 2022-05-17 23:35:10.629376: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:06:00.0 name: NVIDIA GeForce RTX 3080 computeCapability: 8.6 coreClock: 1.71GHz coreCount: 68 deviceMemorySize: 10.00GiB deviceMemoryBandwidth: 707.88GiB/s 2022-05-17 23:35:10.629433: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudart64_110.dll 2022-05-17 23:35:10.643459: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll 2022-05-17 23:35:10.643494: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll 2022-05-17 23:35:10.646207: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cufft64_10.dll 2022-05-17 23:35:10.647015: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library curand64_10.dll 2022-05-17 23:35:10.653755: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusolver64_11.dll 2022-05-17 23:35:10.655811: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cusparse64_11.dll 2022-05-17 23:35:10.656141: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll 2022-05-17 23:35:10.656230: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2022-05-17 23:35:10.656508: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX AVX2 To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 
2022-05-17 23:35:10.657260: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1733] Found device 0 with properties: pciBusID: 0000:06:00.0 name: NVIDIA GeForce RTX 3080 computeCapability: 8.6 coreClock: 1.71GHz coreCount: 68 deviceMemorySize: 10.00GiB deviceMemoryBandwidth: 707.88GiB/s 2022-05-17 23:35:10.657440: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1871] Adding visible gpu devices: 0 2022-05-17 23:35:10.968121: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1258] Device interconnect StreamExecutor with strength 1 edge matrix: 2022-05-17 23:35:10.968243: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1264] 0 2022-05-17 23:35:10.968320: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1277] 0: N 2022-05-17 23:35:10.968494: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1418] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 7440 MB memory) -&gt; physical GPU (device: 0, name: NVIDIA GeForce RTX 3080, pci bus id: 0000:06:00.0, compute capability: 8.6) [INFO] training network... 2022-05-17 23:35:11.692348: I tensorflow/compiler/mlir/mlir_graph_optimization_pass.cc:176] None of the MLIR Optimization Passes are enabled (registered 2) Epoch 1/5 2022-05-17 23:35:21.386941: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cudnn64_8.dll 2022-05-17 23:35:21.964261: I tensorflow/stream_executor/cuda/cuda_dnn.cc:359] Loaded cuDNN version 8101 2022-05-17 23:35:22.716959: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublas64_11.dll 2022-05-17 23:35:23.236591: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library cublasLt64_11.dll 2022-05-17 23:35:23.711216: I tensorflow/stream_executor/cuda/cuda_blas.cc:1838] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once. 781/781 [==============================] - 27s 19ms/step - loss: 1.7178 - accuracy: 0.3670 - val_loss: 1.2684 - val_accuracy: 0.5425 Epoch 2/5 781/781 [==============================] - 13s 17ms/step - loss: 1.1309 - accuracy: 0.5967 - val_loss: 1.1878 - val_accuracy: 0.5845 Traceback (most recent call last): File &quot;C:\Users\JoshG\PycharmProjects\GoogLeNet\googlenet_cifar10.py&quot;, line 84, in &lt;module&gt; model.fit(aug.flow(trainX, trainY, batch_size=64), File &quot;C:\Users\JoshG\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\engine\training.py&quot;, line 1204, in fit callbacks.on_epoch_end(epoch, epoch_logs) File &quot;C:\Users\JoshG\AppData\Local\Programs\Python\Python39\lib\site-packages\keras\callbacks.py&quot;, line 414, in on_epoch_end callback.on_epoch_end(epoch, numpy_logs) File &quot;C:\Users\JoshG\PycharmProjects\GoogLeNet\pipeline\callbacks\trainingmonitor.py&quot;, line 58, in on_epoch_end plt.plot(N, self.H[&quot;acc&quot;], label = &quot;train_acc&quot;) KeyError: 'acc' 2022-05-17 23:35:52.743176: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. The process may be terminated. [[{{node PyFunc}}]] PS C:\Users\JoshG\PycharmProjects\GoogLeNet&gt; </code></pre> <p>I get the following error message:</p> <pre><code>2022-05-17 17:19:00.803770: W tensorflow/core/kernels/data/generator_dataset_op.cc:107] Error occurred when finalizing GeneratorDataset iterator: Failed precondition: Python interpreter state is not initialized. 
The process may be terminated. [[{{node PyFunc}}]] </code></pre> <p>Below is the entire code and i am using CIFAR-10 Dataset</p> <pre><code>import matplotlib matplotlib.use(&quot;Agg&quot;) # import packages from sklearn.metrics import classification_report from sklearn.preprocessing import LabelBinarizer from pipeline.nn.conv import MiniGoogLeNet from pipeline.callbacks import TrainingMonitor from keras.preprocessing.image import ImageDataGenerator from keras.callbacks import LearningRateScheduler from keras.optimizers import SGD from keras.datasets import cifar10 import numpy as np import argparse import os import tensorflow as tf # define the total number of epochs to train for along with initial learning rate NUM_EPOCHS =70 INIT_LR = 5e-3 def poly_decay(epoch): # initialize the maximum number of epochs, base learning rate, # and power of the polynomial maxEpochs = NUM_EPOCHS baseLR = INIT_LR power = 1.0 # compute the new learning rate based on polynomial decay alpha = baseLR * (1 - (epoch / float(maxEpochs))) ** power # return the new learning rate return alpha # construct the argument parser ap = argparse.ArgumentParser() ap.add_argument(&quot;-m&quot;, &quot;--model&quot;, required = True, help = &quot;path to output model&quot;) ap.add_argument(&quot;-o&quot;, &quot;--output&quot;, required = True, help = &quot;path to output directory (logs, plots, etc.)&quot;) args = vars(ap.parse_args()) # load the training and testing data, converting the image from integers to floats print(&quot;[INFO] loading CIFAR-10 data...&quot;) ((trainX, trainY), (testX, testY)) = cifar10.load_data() trainX = trainX.astype(&quot;float&quot;) testX = testX.astype(&quot;float&quot;) # apply mean subtraction to the data mean = np.mean(trainX, axis = 0) trainX -= mean testX -= mean # convert the labels from integers to vectors lb = LabelBinarizer() trainY = lb.fit_transform(trainY) testY = lb.transform(testY) # initialize the label name for CIFAR-10 dataset labelNames = [&quot;airplane&quot;, &quot;automobile&quot;, &quot;bird&quot;, &quot;cat&quot;, &quot;deer&quot;, &quot;dog&quot;, &quot;frog&quot;, &quot;horse&quot;, &quot;ship&quot;, &quot;truck&quot;] # construct the image generator for data augmentation aug = ImageDataGenerator(width_shift_range = 0.1, height_shift_range = 0.1, horizontal_flip = True, fill_mode = &quot;nearest&quot;) # construct the set of callbacks figPath = os.path.sep.join([args[&quot;output&quot;], &quot;{}.png&quot;.format(os.getpid())]) jsonPath = os.path.sep.join([args[&quot;output&quot;], &quot;{}.json&quot;.format(os.getpid())]) callbacks = [TrainingMonitor(figPath, jsonPath = jsonPath), LearningRateScheduler(poly_decay)] # initialize the optimizer and model print(&quot;[INFO] compiling model...&quot;) opt = SGD(lr = INIT_LR, momentum = 0.9) model = MiniGoogLeNet.build(width = 32, height = 32, depth = 3, classes = 10) #model.compile(loss = &quot;categorical_crossentropy&quot;, optimizer = opt, metrics = [&quot;accuracy&quot;] model.compile(optimizer = tf.keras.optimizers.Adam(learning_rate=0.001), loss = 'categorical_crossentropy', metrics = [&quot;accuracy&quot;]) # train the network print(&quot;[INFO] training network...&quot;) model.fit(aug.flow(trainX, trainY, batch_size = 10), validation_data = (testX, testY), steps_per_epoch = len(trainX) // 64, epochs = NUM_EPOCHS, callbacks = callbacks, verbose = 1) # evaluate network print(&quot;[INFO] evaluating network...&quot;) predictions = model.predict(testX, batch_size = 64) print(classification_report(testY.argmax(axis = 1), 
predictions.argmax(axis = 1), target_names = labelNames)) # save the network to disk print(&quot;[INFO] serializing network...&quot;) model.save(args[&quot;model&quot;]) print('Test1') #Run Command: python.exe googlenet_cifar10.py --model output/minigooglenet_cifar10.hdf5 --output output </code></pre>
<p>I fixed the issue. The solution was to replace <strong>[&quot;acc&quot;]</strong> with <strong>[&quot;accuracy&quot;]</strong> everywhere; newer Keras versions log the metric under the key <code>accuracy</code>, so a callback that looks up <code>acc</code> raises a <code>KeyError</code>.</p> <p>In my case, the training monitor was unable to plot the training history. I had to replace</p> <pre><code>plt.plot(N, self.H[&quot;acc&quot;], label = &quot;train_acc&quot;)
plt.plot(N, self.H[&quot;val_acc&quot;], label = &quot;val_acc&quot;)
</code></pre> <p>with</p> <pre><code>plt.plot(N, self.H[&quot;accuracy&quot;], label = &quot;train_accuracy&quot;)
plt.plot(N, self.H[&quot;val_accuracy&quot;], label = &quot;val_accuracy&quot;)
</code></pre>
python|tensorflow|deep-learning|conv-neural-network
0
4,286
72,194,578
How to merge two different versions same dataframe in python pandas?
<p>I have the two different versions of same data frame. In fact, they were two different excels with the same columns updated by two different persons. They may have their own entries as well as the same data. And it looks like this.</p> <pre><code>df1 df2 A B C A B C prod1 cat1 type1 prod1 cat1 type1 prod2 cat2 prod2 cat3 type2 prod3 cat4 type3 prod4 cat5 prod4 cat5 type4 </code></pre> <p>What I want to do is that, based on col A, I will merge this two data frames, drop for duplicates, and fill the missing one with the other dataframe has, and if both row has values, will use df2 as priority to take the values. And The final result should look like this.</p> <pre><code>final df A B C prod1 cat1 type1 prod2 cat3 type2 prod3 cat4 type3 prod4 cat5 type4 </code></pre> <p>How can I achieve that in python pandas?</p> <p>What I have tried is that I changed df2 column names except col A, merged (<code>left_on='A'</code>), and add new columns, and fill the values by using <code>apply</code> based on df1 columns and df2 columns, but it did not give me the correct answer.</p>
<p>You can mutually patch the data frames both ways, stack them, and then eliminate duplicates:</p> <pre><code>pd.concat([df1.fillna(df2), df2.fillna(df1)])\ # Patching and Stacking .drop_duplicates(subset=['A']) # Dropping dups # A B C #0 prod1 cat1 type1 #1 prod2 cat2 type2 #2 prod3 cat4 type3 #3 prod4 cat5 type4 </code></pre>
python|pandas|dataframe
0
4,287
45,553,038
How to compile custom ops in tensorflow without having to dynamically import them in python?
<p>I checked through tensorflow documentation and they seem to only give information about compiling a custom op through a bazel rule:</p> <pre><code>load("//tensorflow:tensorflow.bzl", "tf_custom_op_library") tf_custom_op_library( name = "zero_out.so", srcs = ["zero_out.cc"], ) </code></pre> <p>Once bazel builds it, you get a zero_out.so file which you can import into python like below:</p> <pre><code>import tensorflow as tf zero_out_module = tf.load_op_library('./zero_out.so') </code></pre> <p>Is there anyway you can link custom_ops during the bazel build of tensorflow so that you don't need to manually import custom ops through tf.load_op_library?</p>
<p>There is no officially supported extension point to pull in your own ops other than dynamically loading them.</p> <p>If you build tensorflow from source and are willing to hack it it's not hard to pretend your ops are core ops, but it's not supported.</p>
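<p>A common workaround is to hide the dynamic load inside a small wrapper module, so downstream code never calls <code>tf.load_op_library</code> itself. A sketch (the module and file names are placeholders):</p> <pre><code># zero_out_op.py -- thin wrapper around the compiled .so
import os
import tensorflow as tf

_lib = tf.load_op_library(
    os.path.join(os.path.dirname(__file__), 'zero_out.so'))

# re-export the generated op under a friendly name
zero_out = _lib.zero_out
</code></pre> <p>Callers then just write <code>from zero_out_op import zero_out</code>, which feels like a normal import even though the op is still loaded dynamically underneath.</p>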
python|tensorflow|bazel
0
4,288
45,428,326
Pandas resample on OHLC data from 1min to 1H
<p>I use OHLC re-sampling of 1min time series data in Pandas, the 15min will work perfectly, for example on the following dataframe:</p> <pre><code>ohlc_dict = {'Open':'first', 'High':'max', 'Low':'min', 'Close': 'last'} df.resample('15Min').apply(ohlc_dict).dropna(how='any').loc['2011-02-01'] Date Time Open High Low Close ------------------------------------------------------------------ 2011-02-01 09:30:00 3081.940 3086.860 3077.832 3081.214 2011-02-01 09:45:00 3082.422 3083.730 3071.922 3073.801 2011-02-01 10:00:00 3073.303 3078.345 3069.130 3078.345 2011-02-01 10:15:00 3078.563 3078.563 3071.522 3072.279 2011-02-01 10:30:00 3071.873 3071.873 3063.497 3067.364 2011-02-01 10:45:00 3066.735 3070.523 3063.402 3069.974 2011-02-01 11:00:00 3069.561 3069.981 3066.286 3069.981 2011-02-01 11:15:00 3070.602 3074.088 3070.373 3073.919 2011-02-01 13:00:00 3074.778 3074.823 3069.925 3069.925 2011-02-01 13:15:00 3070.096 3070.903 3063.457 3063.457 2011-02-01 13:30:00 3063.929 3067.358 3063.929 3067.358 2011-02-01 13:45:00 3067.570 3072.455 3067.570 3072.247 2011-02-01 14:00:00 3072.927 3081.357 3072.767 3080.175 2011-02-01 14:15:00 3078.843 3079.435 3076.733 3076.782 2011-02-01 14:30:00 3076.721 3081.980 3076.721 3081.912 2011-02-01 14:45:00 3082.822 3083.381 3076.722 3077.283 </code></pre> <p>However, when I resample 1min to 1H, the problem comes out. I use default setting, and find the time start from 9 am, but the markert open at 9:30 am.</p> <pre><code>df.resample('1H').apply(ohlc_dict).dropna(how='any').loc['2011-02-01'] </code></pre> <p><a href="https://i.stack.imgur.com/sm1hk.png" rel="nofollow noreferrer">1HourOHLC Wrong in Morning</a></p> <p>I then try to change the <code>base</code> setting, but fail in the afternoon session. The market should open at 13 pm and end at 15 pm, so there should be 13 pm, 14 pm, 15 pm, total 3 bars.</p> <pre><code>df.resample('60MIN',base=30).apply(ohlc_dict).dropna(how='any').loc['2011-02-01'] </code></pre> <p><a href="https://i.stack.imgur.com/j8fQy.png" rel="nofollow noreferrer">1HourOHLC Wrong in afternoon</a></p> <p>In conclusion, the problem is I want it fitting in market and has 6 bars <code>(9:30,10:30,11:30,1:00,2:00,3:00)</code>, but <code>resample</code> in <code>pandas</code> only give me 5 bars <code>(9:30,10:30,11:30,1:30,2:30)</code></p> <p>I am searching for a long time on net. But no use. Please help or try to give some ideas how to achieve this. Thanks.</p>
<p>I had the same issue and could'nt find help online. So i wrote this script to convert 1 min OHLC data into 1 hour.</p> <p>This assumes market timings 9:15am to 3:30pm. If market timings are different simply edit the start_time and end_time to suit your needs.</p> <p>I havent put any additional checks in case trading was suspended during market hours.</p> <p>Hope the code is helpful to someone. :)</p> <p><strong>Sample csv format</strong></p> <pre><code>Date,O,H,L,C,V 2020-03-12 09:15:00,3860,3867.8,3763.35,3830,58630 2020-03-12 09:16:00,3840.05,3859.4,3809.65,3834.6,67155 2020-03-12 09:17:00,3832.55,3855.4,3823.75,3852,51891 2020-03-12 09:18:00,3851.65,3860.95,3846.35,3859,42205 2020-03-12 09:19:00,3859.45,3860,3848.1,3851.55,33194 </code></pre> <p><strong>Code</strong></p> <pre><code>from pandas import read_csv, to_datetime, DataFrame from datetime import time file_path = 'BAJFINANCE-EQ.csv' def add(data, b): # utility function # appends the value in dictionary 'b' # to corresponding key in dictionary 'data' for (key, value) in b.items(): data[key].append(value) df = read_csv(file_path, parse_dates=True, infer_datetime_format=True, na_filter=False) df['Date'] = to_datetime(df['Date'], format='%Y-%m-%d %H:%M:%S') # stores hourly data to convert to dataframe data = { 'Date': [], 'O': [], 'H': [], 'L': [], 'C': [], 'V': [] } start_time = [time(9, 15), time(10, 15), time(11, 15), time( 12, 15), time(13, 15), time(14, 15), time(15, 15)] end_time = [time(10, 14), time(11, 14), time(12, 14), time( 13, 14), time(14, 14), time(15, 14), time(15, 29)] # Market timings 9:15am to 3:30pm (6 hours 15 mins) # We create 6 hourly bars and one 15 min bar # as usually depicted in candlestick charts i = 0 no_bars = df.shape[0] while i &lt; no_bars: if df.loc[i]['Date'].time() in end_time: end_idx = i + 1 hour_df = df[start_idx:end_idx] add(data, { 'Date': df.loc[start_idx]['Date'], 'O': hour_df['O'].iloc[0], 'H': hour_df['H'].max(), 'L': hour_df['L'].min(), 'C': hour_df['C'].iloc[-1], 'V': hour_df['V'].sum() }) if df.loc[i]['Date'].time() in start_time: start_idx = i # optional optimisation for large datasets # skip ahead to loop faster i += 55 i += 1 df = DataFrame(data=data).set_index(keys=['Date']) # df.to_csv('out.csv') print(df) </code></pre>
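<p>As a side note, on pandas 1.1+ a similar bar alignment can often be obtained directly from <code>resample</code> by shifting the bin origin, which may replace the hand-rolled loop for simple cases (a sketch assuming the same column names as the sample CSV above):</p> <pre><code>ohlc_dict = {'O': 'first', 'H': 'max', 'L': 'min', 'C': 'last', 'V': 'sum'}

hourly = (df.set_index('Date')
            .resample('60min', offset='15min')   # bins start at :15 past the hour
            .agg(ohlc_dict)
            .dropna(how='any'))
</code></pre>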
pandas|dataframe|resampling
2
4,289
45,564,922
pandas how to groupby a period time and then get back a df after filtration inside the group?
<p>Basically, now I have a set of data from some routers(AP). The routers would probe user's devices every 3 seconds and give us user's MAC number(tag_mac). </p> <p>In order to clean those data(since at a period of time, different APs would give us back same tag_macs if the user is near other aps ), I just need the APs with the strongest signal(indicated by rssi) within every 10 seconds(just take the average). This is a sample of my data.</p> <pre><code> ap_mac rssi tag_mac time 0 048b422149fa -63 a40dbc018db7 2017-07-01 08:00:00 1 048b4223e63d -72 a40dbc018db7 2017-07-01 08:00:00 2 048b4223e63d -72 a40dbc018db7 2017-07-01 08:00:00 3 048b4223e63d -72 a40dbc018db7 2017-07-01 08:00:00 4 048b4223e63d -72 a40dbc018db7 2017-07-01 08:00:00 5 048b422149ff -50 30b49e3715d0 2017-07-01 08:00:00 6 048b422149ff -50 30b49e3715d0 2017-07-01 08:00:00 7 048b422149ff -50 30b49e3715d0 2017-07-01 08:00:00 8 048b422149ff -50 30b49e3715d0 2017-07-01 08:00:00 9 048b422149ff -50 30b49e3715d0 2017-07-01 08:00:00 </code></pre> <p>What I need is a filtered dataframe where I dropped all the rows has weaker rssi within every 10 seconds time period. So what I have left is a cleaned data where for each tag_mac I only have ap_macs with the strongest rssi.</p> <p>Can anyone help me with it? Thanks!</p>
<p>I am assuming <code>df</code> is the DataFrame:</p> <pre><code>#this makes sure that the 'date' column is in the required format
df['time'] = pd.to_datetime(df['time'] , format='%Y-%m-%d %H:%M:%S')

new_df = pd.DataFrame(columns=['ap_mac','tag_mac','rssi','to','from'])

#start date - first date in the dataframe 'df'
start = pd.Timestamp(df.loc[0,'time'])
#end date is the last date in the dataframe 'df'
end = pd.Timestamp(df.loc[df.shape[0]-1,'time'])

upper = lower = start
indices_array =[]

while (end - upper &gt;= pd.Timedelta(seconds=10)):
    upper = upper + pd.Timedelta(seconds=10)
    #data within a 10 second range is extracted into the variable data
    data = df[upper&gt;df['time']][df['time']&gt;=lower]
    for i in data['tag_mac'].unique():
        var = data.loc[data['tag_mac']==i].groupby('ap_mac').mean()
        #in the new_df rssi contains average values
        new_df = new_df.append({'rssi':var.max()[0],'ap_mac':var.idxmax()[0],'tag_mac':i,'to':upper,'from':lower},ignore_index=True)
    lower = upper
</code></pre> <p>Your huge dataset, as you mentioned, is condensed into the DataFrame <code>new_df</code>, which contains only the values you require.</p> <p>I've added two new columns, <code>to</code> and <code>from</code>, to the DataFrame <code>new_df</code>, showing the time range to which each reading belongs.</p> <p><code>new_df</code> contains all the <code>tag_mac</code>s and their corresponding <code>ap_mac</code>s that have the maximum <strong>average</strong> <code>rssi</code> values, sampled every ten seconds.</p> <p>If you face any difficulties, feel free to leave a comment.</p>
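<p>For large datasets the explicit loop can become slow; the same per-window selection can usually be expressed with <code>pd.Grouper</code> instead (a sketch — it keeps, for every tag in every 10-second window, the AP with the highest mean <code>rssi</code>):</p> <pre><code>df['time'] = pd.to_datetime(df['time'])

# mean rssi per 10-second window, tag and AP
avg = (df.groupby([pd.Grouper(key='time', freq='10S'), 'tag_mac', 'ap_mac'])['rssi']
         .mean()
         .reset_index())

# for each window and tag, keep the row with the strongest (largest) mean rssi
best = avg.loc[avg.groupby(['time', 'tag_mac'])['rssi'].idxmax()]
</code></pre>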
python|pandas|dataframe|pandas-groupby
1
4,290
62,699,627
TypeError: Parameter value is not iterable or distribution
<p>I am new to Python and wanted to implement a simple Matrix Factorization Classifier.</p> <p>As I read in another post, there are some possibilities which one can use and I chose <code>sklearn decomposition.NMF</code>: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html</a></p> <p>Unfortunately I get the following error:</p> <pre><code>TypeError: Parameter value is not iterable or distribution (key='n_components', value=2) </code></pre> <p>I was trying this:</p> <pre><code>self.clf = decomposition.NMF() self.random_parameters = [ {&quot;n_components&quot;: 2, &quot;init&quot;: None, &quot;solver&quot;: 'cd', &quot;beta_loss&quot;: 'frobenius', &quot;tol&quot;: 0.0001, &quot;max_iter&quot;: 200,&quot;random_state&quot;: None, &quot;alpha&quot;: 0.0, &quot;l1_ratio&quot;: 0.0, &quot;verbose&quot;: 0, &quot;shuffle&quot;: False} ] </code></pre> <p>The interesting thing is, that I implemented the <code>RandomForestClassifier</code> from <code>Sklearn</code> before,and it works great:</p> <pre><code>self.clf = ensemble.RandomForestClassifier() self.random_parameters = [ {&quot;n_estimators&quot;: stats.randint(20, 200), &quot;criterion&quot;: [&quot;gini&quot;], &quot;max_depth&quot;: stats.randint(1, 1500)}, {&quot;n_estimators&quot;: stats.randint(20, 200), &quot;criterion&quot;: [&quot;gini&quot;], &quot;max_depth&quot;: [None]}] </code></pre> <p>I also got this from the sklearn site: <a href="https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html" rel="nofollow noreferrer">https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html</a></p> <p>I was Googling for hours now and cannot find an appropriate solution unfortunately.</p> <p>If somebody could help, I would be very grateful! Best wishes and stay healthy!</p>
<p>For <strong>n_estimators</strong>, try passing a list of values instead of a bare scalar; I think it will work:</p> <pre><code>self.clf = ensemble.RandomForestClassifier()
self.random_parameters = [
    {&quot;n_estimators&quot;: [20, 50, 100, 200], &quot;criterion&quot;: [&quot;gini&quot;], &quot;max_depth&quot;: stats.randint(1, 1500)},
    {&quot;n_estimators&quot;: [20, 50, 100, 200], &quot;criterion&quot;: [&quot;gini&quot;], &quot;max_depth&quot;: [None]}]
</code></pre>
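<p>Applied to the <code>NMF</code> search from the question, that means every entry has to be a list of candidates (or a scipy distribution with an <code>rvs</code> method) rather than a bare scalar — the candidate values below are only examples:</p> <pre><code>from scipy import stats
from sklearn import decomposition

self.clf = decomposition.NMF()
self.random_parameters = [{
    'n_components': [2, 3, 4],            # a list instead of the scalar 2
    'solver': ['cd', 'mu'],
    'beta_loss': ['frobenius'],
    'tol': [1e-4, 1e-3],
    'max_iter': stats.randint(100, 400),  # or a distribution
    'alpha': [0.0, 0.1],
    'l1_ratio': [0.0, 0.5],
}]
</code></pre>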
python|scikit-learn|sklearn-pandas|matrix-factorization
0
4,291
54,255,230
Status of final rank in pandas
<p>I am working on the ranking of domestic competition of soccer, I have the following dataframe.</p> <pre><code>df = pd.DataFrame() df ['Season'] = ['1314','1314','1314','1314','1314','1314','1314','1314','1314','1415','1415','1415','1415','1415','1415','1415','1415','1415'] df ['Team'] = ['A','B','C','A','B','C','A','B','C','A','B','C','A','B','C','A','B','C'] df ['GW'] = [1,1,1,2,2,2,3,3,3,1,1,1,2,2,2,3,3,3] df['Position'] = [1,2,3,3,1,2,2,3,1,2,1,3,2,1,3,3,2,1] df = df.sort_values (['Season','Team']) df['Position_Change']=df.groupby(['Season','Team'])['Position'].apply(lambda x : x.diff().fillna(0)) </code></pre> <p>Above code can track the rank and position change. Now I wanted to assign a status to the team which ends as first as <strong>Champion</strong>. That means the status champion will be assigned to that team in all GW. And the other teams as the position where they ended at last week of the competition (in this sample last GW is 3) My expected output is as follow:</p> <p><a href="https://i.stack.imgur.com/1K9uV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1K9uV.png" alt="enter image description here"></a></p> <p>Here is the original dataset: <a href="https://drive.google.com/open?id=1UBxw8KCL1128RTvS6lVFC2lOslYjhDxz" rel="nofollow noreferrer">Click to download the dataset</a></p> <p>Your advice would be greatly appreciated. Thanks,</p> <p>Zep</p>
<p>IIUC you need something like below:</p> <pre><code>df1 = df.groupby(['Season','Team'])['Position'].apply(lambda x : np.select([(x.iloc[-1]==1),(x.iloc[-1]==2),(x.iloc[-1]==3)],['Champion','Second','Third'])).reset_index().rename(columns={'Position':'Status'}) print(df.merge(df1,on=['Team','Season'])) Season Team GW Position Position_Change Status 0 1314 A 1 1 0.0 Second 1 1314 A 2 3 2.0 Second 2 1314 A 3 2 -1.0 Second 3 1314 B 1 2 0.0 Third 4 1314 B 2 1 -1.0 Third 5 1314 B 3 3 2.0 Third 6 1314 C 1 3 0.0 Champion 7 1314 C 2 2 -1.0 Champion 8 1314 C 3 1 -1.0 Champion 9 1415 A 1 2 0.0 Third 10 1415 A 2 2 0.0 Third 11 1415 A 3 3 1.0 Third 12 1415 B 1 1 0.0 Second 13 1415 B 2 1 0.0 Second 14 1415 B 3 2 1.0 Second 15 1415 C 1 3 0.0 Champion 16 1415 C 2 3 0.0 Champion 17 1415 C 3 1 -2.0 Champion </code></pre> <p><em>Based on Chat, replace code for df1 in the original code by:</em></p> <pre><code>df1 = df.groupby(['Season','Team'])['Position'].apply(lambda x : np.select([(x.iloc[-1]==1),(2&lt;=x.iloc[-1]&lt;=4),(5&lt;=x.iloc[-1]&lt;=6),(7&lt;=x.iloc[-1]&lt;=17),(x.iloc[-1] &gt; 17)],['Champion','UCL','UEL','Other','Relegation'])).reset_index().rename(columns={'Position':'Status'}) </code></pre>
python|pandas
2
4,292
54,387,512
cv2 convert Range and copyTo functions of c++ to python
<p>I'm writing a python video stabilizer and in some part of the code i need to copy 2 images into a canvas.</p> <p>I tried to convert this c++ code to python but i wasn't able.</p> <pre><code>Mat cur2; warpAffine(cur, cur2, T, cur.size()); cur2 = cur2(Range(vert_border, cur2.rows-vert_border), Range(HORIZONTAL_BORDER_CROP, cur2.cols-HORIZONTAL_BORDER_CROP)); // Resize cur2 back to cur size, for better side by side comparison resize(cur2, cur2, cur.size()); // Now draw the original and stablised side by side for coolness Mat canvas = Mat::zeros(cur.rows, cur.cols*2+10, cur.type()); cur.copyTo(canvas(Range::all(), Range(0, cur2.cols))); cur2.copyTo(canvas(Range::all(), Range(cur2.cols+10, cur2.cols*2+10))); </code></pre> <p>I wrote this code but i got error:</p> <pre><code>ret, frame = cap.read() new_frame = transform(frame,data[counter]) #some kind of low pass filter canvas = np.zeros ((frame_height, frame_width*2+10,3)) np.copyto (canvas[:frame_width], frame) np.copyto (canvas[frame_width+10:frame_width*2+10], new_frame) </code></pre> <p>I got </p> <blockquote> <p>"couldnt boradcast from shape into shape"</p> </blockquote> <p>err. But i think i used canvas in wrong way. in cpp code there is <code>canvas(Range::all(), Range(0, cur2.cols))</code> which i dont know how to use it in python </p> <p>How can i use Range function and copyTo function in python? And how should i copy an image to a specific part of canvas?</p> <p>Any help?</p>
<p>cv::Mat are actually numpy arrays in python. And in this case, you should use numpy functions and not OpenCV ones.</p> <p>For the copyTo as clone, use copy() as in:</p> <pre><code>a = np.zeros((10,10,3), dtype=np.uint8) b = a.copy() </code></pre> <p>For ranges, in numpy is easier... just use:</p> <pre><code>a[y1:y2, x1:x2,:] </code></pre> <p>which means from row y1 to row y2 and from column x1 to column x2. In case you need all, just leave the <code>:</code> alone like all rows:</p> <pre><code>a[:, x1:x2,:] </code></pre> <p>The last colon is for channels, in this case all channels, but you can also limit it. And if you need only 1 column, or channel you can put the number directly instead of using a "range" like </p> <pre><code>a[4, x1:x2, 0] </code></pre> <p>You can also drop the last colon of the channels, and it will use all of them. Like:</p> <pre><code>a[1:3, 4:8] </code></pre> <p>Finally, to copy a value to a place in the image you can do something like:</p> <pre><code>bigImage[y1:y2, x1:x2] = image </code></pre> <p>You have to make sure that image fits in this place (channels included). That means, if image is of size 640x480 you can not do this:</p> <pre><code>bigImage[10:20, 20:30] = image </code></pre> <p>but you can do something like</p> <pre><code>bigImage[10:20, 20:30] = image[10:20, 10:20] </code></pre> <p>assuming both have the same number of channels</p>
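<p>Putting it together for the canvas part of the question, the C++ code maps to NumPy slicing roughly like this (a sketch — <code>cur</code> and <code>cur2</code> stand for the original and stabilised frames, assumed to be equally sized 3-channel images):</p> <pre><code>import numpy as np

cur = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder frames
cur2 = np.zeros((480, 640, 3), dtype=np.uint8)

h, w = cur.shape[:2]
canvas = np.zeros((h, w * 2 + 10, 3), dtype=cur.dtype)
canvas[:, :w] = cur                      # original on the left
canvas[:, w + 10:w * 2 + 10] = cur2      # stabilised frame on the right
</code></pre>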
python|c++|numpy|opencv|cv2
2
4,293
54,437,652
How to create a Keras face classifier between myself and others with a restrictive data set?
<p>For the past 2 months I've been trying to create a classification model that can distinguish between myself and other people with Keras. I started from the dogs vs cats classifier and substituted the data set. Since then I have tweaked the network and the data set with some success. Also I have tried to augment my data set in many different combinations(flip, rotate, grayscale, lighten &amp; darken the gamma; my augmentation turns 1 picture into 9).</p> <p>For training I use my laptop's webcam to capture my face in different orientations and angles and I then split it in 3 (1/3 for validation and 2/3 for training). For the negative examples I have another data set of random people divided in the same way.</p> <ul> <li>validation: <ul> <li>person: 300</li> <li>other: 300 </li> </ul></li> <li>train: <ul> <li>person: 600</li> <li>other: 600</li> </ul></li> </ul> <p>To check my model I use some family photos on which I achieved around 80% accuracy but for this I only use 60 pictures, 36 of which are of myself.</p> <pre><code>img_width, img_height = 150, 150 if K.image_data_format() == 'channels_first': input_shape = (3, img_width, img_height) else: input_shape = (img_width, img_height, 3) model = Sequential() model.add(Conv2D(32, (3, 3), input_shape=input_shape)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(32, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Conv2D(64, (3, 3))) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Flatten()) model.add(Dense(64)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(1)) model.add(Activation('sigmoid')) model.compile(loss='binary_crossentropy', optimizer='rmsprop', metrics=['accuracy']) train_datagen = ImageDataGenerator( rescale=1. / 255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True ) test_datagen = ImageDataGenerator(rescale=1. / 255) train_generator = train_datagen.flow_from_directory( train_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') print(train_generator.class_indices) validation_generator = test_datagen.flow_from_directory( validation_data_dir, target_size=(img_width, img_height), batch_size=batch_size, class_mode='binary') print(validation_generator.class_indices) model.fit_generator( train_generator, steps_per_epoch=train_samples // batch_size, epochs=epochs, callbacks=[tensorboard], validation_data=validation_generator, validation_steps=validation_samples // batch_size) model.save('model.h5') </code></pre> <p>All of my training attempts go pretty much the same way. First 1-2 epochs have close acc and loss values while the following ones jump to acc: 0.9 with loss: 0.1. </p> <p>My assumption is that the problem is in the data set. What should I do in order to achieve a reasonable degree or accuracy by only using webcam taken photos?</p>
<p>Given the amount of data you have, a better approach would be to use transfer learning instead of training from scratch. You can start with one of the models pre-trained on ImageNet, like ResNet or Inception, but I suspect models trained on a large face dataset may perform better. You can check the FaceNet implementation <a href="https://github.com/davidsandberg/facenet" rel="nofollow noreferrer">here</a>. You can train only the last fully connected layer's weights and 'freeze' the earlier layers. How to classify using FaceNet is described <a href="https://github.com/davidsandberg/facenet/wiki/Classifier-training-of-inception-resnet-v1" rel="nofollow noreferrer">here</a>.</p>
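<p>As a rough sketch of the 'freeze the base, train only the top' idea with a generic ImageNet backbone (MobileNetV2 here for brevity, not FaceNet; the head layer sizes are only illustrative):</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf
from tensorflow.keras import layers, models

# pre-trained ImageNet backbone without its classification head
base = tf.keras.applications.MobileNetV2(
    input_shape=(150, 150, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the earlier layers

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(1, activation='sigmoid'),  # person vs. other
])

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
</code></pre> <p>The same <code>flow_from_directory</code> generators can then be fed to this model; since only the small head on top is trained, it is far less prone to overfitting on roughly 1200 images.</p>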
python|tensorflow|machine-learning|keras|classification
1
4,294
73,531,628
Apply numpy broadcast_to on each vector in an array
<p>I want to apply something like this:</p> <pre><code>a = np.array([1,2,3]) np.broadcast_to(a, (3,3)) array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]) </code></pre> <p>On each vector in a multi-vector array:</p> <pre><code>a = np.array([[1,2,3], [4,5,6]]) np.broadcast_to(a, (2,3,3)) ValueError: operands could not be broadcast together with remapped shapes [original-&gt;remapped]: (2,3) and requested shape (2,3,3) </code></pre> <p>To get something like this:</p> <pre><code>array([[[1, 2, 3], [1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6], [4, 5, 6]]]) </code></pre>
<p>One way is to use list-comprehension and broadcast each of the inner array:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; np.array([np.broadcast_to(i, (3,3)) for i in a]) array([[[1, 2, 3], [1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6], [4, 5, 6]]]) </code></pre> <p>Or, you can just add an extra dimension to <code>a</code> then call broadcast_to over it:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; np.broadcast_to(a[:,None], (2,3,3)) array([[[1, 2, 3], [1, 2, 3], [1, 2, 3]], [[4, 5, 6], [4, 5, 6], [4, 5, 6]]]) </code></pre>
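<p>One caveat worth adding: <code>broadcast_to</code> returns a read-only view, so if the result needs to be modified afterwards, copy it (or build it with <code>np.tile</code>, which owns its data). A small sketch:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np

a = np.array([[1, 2, 3], [4, 5, 6]])

view = np.broadcast_to(a[:, None], (2, 3, 3))
# view[0, 0, 0] = 9   # would raise: assignment destination is read-only

writable = np.broadcast_to(a[:, None], (2, 3, 3)).copy()
writable[0, 0, 0] = 9  # fine

tiled = np.tile(a[:, None], (1, 3, 1))  # same values, shape (2, 3, 3), owns its data
</code></pre>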
python|numpy|array-broadcasting
3
4,295
71,184,758
Using multiprocessing to speed up filling of a numpy array
<p>I am trying to solve a problem that involves minimizing a certain function. The function contains a numpy array whose elements are filled by calling a function. The array generation is taking a huge amount of time, and I mean that. The array that I'm generating is defined below</p> <pre><code>def cov_p(Phi, l_mat, s_f, sigma_11, sigma_22, N_p): ans = np.zeros([N_p, N_p, 2, 2], dtype=complex) for i in range(N_p): for j in range(N_p): ans[i][j] = np.linalg.multi_dot([L_0, R_final( Phi, l_mat, s_f, (i-j)*tstep), L_0.T])+np.array([[sigma_11, 0], [0, sigma_22]])*I(i, j) return ans.transpose(0, 2, 1, 3).reshape(2*N_p, -1) </code></pre> <p>where L_0 and I are defined below</p> <pre><code>L_0 = np.eye(2) def I(m, p): if m == p: return 1 return 0 </code></pre> <p>As can be seen , each element references the function R_final, which basically returns a 2 by 2 complex matrix. Then I concatenate all those 2 by 2 matrices to form a matrix of size 2N_p by 2N_p. R_final is defined below.</p> <pre><code>def R_x(Phi, l_mat, s_f, t, i, j): real_integral = quad(S_xr, -np.inf, np.inf, args=(Phi, l_mat, s_f, t, i, j), limit=10000) imag_integral = quad(S_xc, -np.inf, np.inf, args=(Phi, l_mat, s_f, t, i, j), limit=10000) return real_integral[0]+1j*imag_integral[0] def R_final(Phi, l_mat, s_f, t): return np.array([[R_x(Phi, l_mat, s_f, t, i, j) for j in range(2)] for i in range(2)]) </code></pre> <p>where S_xr, S_xc are shown below</p> <pre><code>def S_xr(omega, Phi, l_mat, s_f, t, i, j): h_x = np.real(np.linalg.multi_dot( [Phi, np.linalg.inv(-1j*omega*np.linalg.inv(l_mat)+np.eye(np.shape(l_mat)[0])), Phi.T])) #Taking only the real value. Check again!!! ans = np.exp(1j*omega*t)*np.linalg.multi_dot([np.conjugate(h_x), s_f, h_x.T]) return np.real(ans[i][j]) def S_xc(omega, Phi, l_mat, s_f, t, i, j): h_x = np.real(np.linalg.multi_dot( [Phi, np.linalg.inv(-1j*omega*np.linalg.inv(l_mat)+np.eye(np.shape(l_mat)[0])), Phi.T])) #Taking only the real value. Check again!!! ans = np.exp(1j*omega*t)*np.linalg.multi_dot([np.conjugate(h_x), s_f, h_x.T]) return np.imag(ans[i][j]) </code></pre> <p>Try calculating cov_p for the following values of listed parameters.</p> <pre><code>phi = np.array([[-0.0529255 +0.00662948j, -0.0529255 -0.00662948j, -0.03050694-0.00190298j, -0.03050694+0.00190298j], [-0.04149906+0.00171591j, -0.04149906-0.00171591j, 0.01974404-0.00194719j, 0.01974404+0.00194719j]]) lamb_mat = np.array([[-1.00390867 +6.28783994j, 0. +0.j , 0. +0.j , 0. +0.j ], [ 0. +0.j , -1.00390867 -6.28783994j, 0. +0.j , 0. +0.j ], [ 0. +0.j , 0. +0.j , -0.25859133+12.09860357j, 0. +0.j ], [ 0. +0.j , 0. +0.j , 0. +0.j , -0.25859133-12.09860357j]]) S_f = np.array([[100,0],[0,100]]) tstep = 0.1 sigma_11 = 0.3 sigma_22 = 0.4 #Try finding cov_p for the following set of inputs cov_p(phi, lamb_mat, S_f, sigma_11, sigma_22, 10) </code></pre> <p>What I want to ask is how I can speed up filling of cov_p by dividing the operation into multiple processes. It's actually the R_final matrices to the right and bottom of cov_p which are taking time. Any help will be appreciated.</p>
<p>You can easily parallelize the outermost loop using multiprocessing's <code>Pool</code>. The idea is to split the <code>ans</code> array along its first dimension and merge the parts once each one has been computed. Note that the <code>tstep</code> variable needs to be sent to the processes. Here is the resulting code containing mainly the modified parts:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np from scipy.integrate import quad from multiprocessing import Pool # [...] -- Code of the following unchanged function here # I, R_x, R_final, S_xr, S_xc def cov_p_work(params): Phi, l_mat, s_f, sigma_11, sigma_22, N_p, tstep, i = params ans = np.zeros([N_p, 2, 2], dtype=complex) L_0 = np.eye(2) for j in range(N_p): ans[j] = np.linalg.multi_dot([L_0, R_final( Phi, l_mat, s_f, (i-j)*tstep), L_0.T])+np.array([[sigma_11, 0], [0, sigma_22]])*I(i, j) return ans def cov_p(Phi, l_mat, s_f, sigma_11, sigma_22, N_p, tstep): with Pool() as p: params = [(Phi, l_mat, s_f, sigma_11, sigma_22, N_p, tstep, i) for i in range(N_p)] result = p.map(cov_p_work, params) ans = np.array(result) return ans.transpose(0, 2, 1, 3).reshape(2*N_p, -1) if __name__ == '__main__': # [...] -- Unchanged parameters here #Try finding cov_p for the following set of inputs cov_p(phi, lamb_mat, S_f, sigma_11, sigma_22, 10, tstep) </code></pre> <p>On my 6-core machine, this is about <strong>5 times faster</strong> (96.9 seconds --&gt; 20.9 seconds). Note that it barely scales with more cores since <code>N_p</code> is quite small here (which can cause some imperfect load balancing).</p>
python|numpy|optimization|scipy|multiprocessing
0
4,296
60,391,111
Pandas Series Calculation
<p>My column (faq_helpful) has values of <strong>either</strong> 0, 1 or blank. If it is blank, I won't bother about it. I want to find how many 0s and 1s there are, but apparently it returns the same value for both, which is wrong.</p> <pre><code>for question, question_df in df_raw.groupby(['faq_question']): count_0 = question_df['faq_helpful'].isin([0]).count() print(count_0) # returns 25 count_1 = question_df['faq_helpful'].isin([1]).count() print(count_1) # also returns 25, which is wrong total = count_0 + count_1 </code></pre>
<p>The reason both counts come back as 25 is that <code>isin([0])</code> returns a boolean mask with one entry per row, and <code>.count()</code> counts every (non-NA) entry of that mask, not just the <code>True</code> ones. Based on your code, sum the mask instead:</p> <pre><code>for question, question_df in df_raw.groupby(['faq_question']): count_0 = question_df['faq_helpful'].isin([0]).sum() count_1 = question_df['faq_helpful'].isin([1]).sum() total = count_0 + count_1 </code></pre> <p>Pandas also has a built-in function to count values in a Series, <code>pandas.Series.value_counts()</code>, documented <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="nofollow noreferrer">here</a>. It sorts and counts all non-blank values in a Series and returns a Series whose index holds the counted values (in your case 0 and 1) and whose values are the number of occurrences:</p> <pre><code>k = df_raw['faq_helpful'].value_counts() print(k) 1.0 9 0.0 6 Name: faq_helpful, dtype: int64 total = sum(k) </code></pre> <p>Example code used to generate the output above, including 0/1/NaN in the relevant column:</p> <pre><code>df_raw = pd.DataFrame(np.array([ [0,0,1,1,0,1,float("NaN"),0,1,1,1,1,0,0,float("NaN"),1,1], [9,4,float("NaN"),4,9,4,0,5,2,2,5,5,2,5,8,float("NaN"),1]]).T, columns=["faq_helpful","other"]) </code></pre>
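<p>If you also want the 0/1 counts broken down per question in one step, here is a sketch along the same lines (the frame contents are made up for illustration):</p> <pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

df_raw = pd.DataFrame({
    'faq_question': ['Q1'] * 8 + ['Q2'] * 9,
    'faq_helpful': [0, 0, 1, 1, 0, 1, np.nan, 0,
                    1, 1, 1, 1, 0, 0, np.nan, 1, 1],
})

# rows: question, columns: 0.0 / 1.0, values: counts (blanks are ignored)
counts = (df_raw.groupby('faq_question')['faq_helpful']
                .value_counts()
                .unstack(fill_value=0))
print(counts)
</code></pre>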
pandas|pandas-groupby|series
0
4,297
60,448,067
Can't import gpu version of tensorflow
<p>I can't use tensorflow because I get this error message when I try to import tensorflow:</p> <pre><code>2020-02-28 09:31:24.742077: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found Traceback (most recent call last): File "&lt;input&gt;", line 1, in &lt;module&gt; File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow\__init__.py", line 98, in &lt;module&gt; from tensorflow_core import * File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow_core\__init__.py", line 40, in &lt;module&gt; from tensorflow.python.tools import module_util as _module_util File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow\__init__.py", line 50, in __getattr__ module = self._load() File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow\__init__.py", line 44, in _load module = _importlib.import_module(self.__name__) File "C:\Users\Maximal\AppData\Local\Programs\Python\Python36\lib\importlib\__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow_core\python\__init__.py", line 52, in &lt;module&gt; from tensorflow.core.framework.graph_pb2 import * File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\tensorflow_core\core\framework\graph_pb2.py", line 7, in &lt;module&gt; from google.protobuf import descriptor as _descriptor File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "C:\Users\Maximal\Documents\Python\PyCharm\Projekt1\venv\lib\site-packages\google\protobuf\descriptor.py", line 47, in &lt;module&gt; from google.protobuf.pyext import _message File "C:\Program Files\JetBrains\PyCharm Community Edition 2019.3.3\plugins\python-ce\helpers\pydev\_pydev_bundle\pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) </code></pre> <p>I have Cuda toolkit 10.0 installed and the corresponding version of cuDNN, as well as Visual Studio. At first I got the message:</p> <pre class="lang-py prettyprint-override"><code>Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found </code></pre> <p>So I checked the cuda folder and found only a cudart64_100.dll. After some research I found out that cudart64_101.dll is part of version 10.1 of the cuda toolkit. So I installed 10.1 with the corresponding version of cuDNN. But now I get the message:</p> <pre><code>Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found </code></pre> <p>This makes no sense to me and I didn't find a way to fix this problem. I use the latest version of PyCharm, Python 3.6, Tensorflow-gpu 2.0.0, and Cuda 10.0 and 10.1</p>
<p>At this point I would simply completely remove (i.e. de-install) both CUDA packages (10.0 and 10.1), as it is probably not a good idea to have both at the same time. After de-installing, make sure you check all your folders so that nothing is left.</p> <p>Then, check whether or not your NVIDIA drivers are up to date.</p> <p>Closely follow the installation instructions for <a href="https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#install-windows" rel="nofollow noreferrer">Windows</a>. <strong>Make sure you get a CuDNN package that is compatible with the CUDA version you installed</strong>. When you download CuDNN through <a href="https://developer.nvidia.com/cudnn" rel="nofollow noreferrer">this</a> link, you get a wizard where you can pick different CuDNN versions for different CUDA versions. Pick the one for version 10.1, and place the <code>cudnn64_7.dll</code>, <code>cudnn.h</code> and <code>cudnn.lib</code> in the corresponding folders of your CUDA installation directory.</p> <p>Then, I would create a new conda environment; something like</p> <pre><code>conda create --name DeepLearning3.6 python=3.6 tensorflow-gpu numpy scipy pandas scikit-learn </code></pre> <p>will do (or whatever packages you might need additionally; I usually use these as a starting point, but make sure your environment <em>at least</em> includes tensorflow-gpu and numpy). Of course you can also use pip and a venv; both should work, that is up to you.</p> <p>Then, do not forget to point to the right environment in PyCharm (<code>Ctrl + Alt + S</code>, select the right project interpreter, which in this example would be called <code>DeepLearning3.6</code>). Try to run the script again; it should work now.</p>
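<p>Once the new environment is selected in PyCharm, a quick sanity check that the GPU build is actually being picked up (a small sketch for TF 2.x):</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf

print(tf.__version__)
# lists the GPUs TensorFlow can see; an empty list means it fell back to the CPU
print(tf.config.experimental.list_physical_devices('GPU'))
</code></pre>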
python|tensorflow|nvidia
0
4,298
72,623,353
how to convert datetime.date to datetime.datetime?
<p>Below I get a datetime.date object and a datetime.datetime object:</p> <pre><code> start=df.index[0] start_input_1=datetime.strptime(start_input[:10],'%Y-%m-%d') print(start_input[:10]) print(type(start)) print(type(start_input_1)) 2021-09-01 &lt;class 'datetime.date'&gt; &lt;class 'datetime.datetime'&gt; </code></pre> <p>Now if I do the comparison below, I get the error</p> <p>cannot compare datetime.datetime to datetime.date</p> <pre><code>if start_input_1&gt;=start: start_plot=start_date_input_1 </code></pre> <p>So how do I convert a datetime.date into a datetime.datetime so that I can compare the two dates?</p> <p>If I use pd.to_datetime(), it is converted to a Timestamp instead of a datetime. Thanks for your help.</p> <pre><code>start=pd.to_datetime(start) print(start) class 'pandas._libs.tslibs.timestamps.Timestamp' </code></pre>
<p>You can either compare just the dates by taking the date from the <code>datetime</code> object, or add a default time (00:00:00) to the <code>date</code> object to turn it into a <code>datetime</code>:</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime, date dt = datetime.fromisoformat(&quot;2020-01-01T10:00:00&quot;) d = date.fromisoformat(&quot;2020-01-01&quot;) print(dt, d) # 2020-01-01 10:00:00 2020-01-01 print(type(dt), type(d)) # &lt;class 'datetime.datetime'&gt; &lt;class 'datetime.date'&gt; # Check only by date, do not check time if d == dt.date(): print(&quot;Same date&quot;) # Promote the date to a datetime at 00:00:00 and compare full datetimes # (False in this example, because dt has a time of 10:00:00) d = datetime.fromisoformat(d.isoformat()) if d == dt: print(&quot;Same date and 00:00:00&quot;) </code></pre>
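<p>Another option is <code>datetime.combine</code>, which attaches an explicit time to the date; this also plays nicely with the pandas case, since a <code>Timestamp</code> compares fine against a <code>datetime</code>. A short sketch (the dates are made up):</p> <pre class="lang-py prettyprint-override"><code>from datetime import datetime, date, time

start = date(2021, 9, 1)                       # a plain date
start_dt = datetime.combine(start, time.min)   # 2021-09-01 00:00:00

start_input_1 = datetime(2021, 9, 15, 12, 30)  # some datetime to compare against
print(start_input_1 &gt;= start_dt)               # True
</code></pre>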
python|pandas
1
4,299
59,784,212
i need to solve this "ModuleNotFoundError: No module named 'tensorflow'"
<p>I have installed <code>tensorflow</code> using the link, installing it from cmd on Windows 10.</p> <p>But when I run my program in <code>PyCharm</code>, it shows the error below.</p> <p>I even installed <code>tensorflow</code> from within <code>PyCharm</code> as well, but it is still the same thing.</p> <p>Error:</p> <pre><code>“ModuleNotFoundError: No module named 'tensorflow'” </code></pre>
<p>Check which Python interpreter you are using versus the location where the library is installed.</p> <p>Check</p> <p><code>import sys</code></p> <p><code>print(sys.path)</code></p> <p>This will list your <code>site-packages</code> folders; usually the libraries will be there.</p> <p>Check the above both in PyCharm and in the Command Prompt.</p> <p>Also check <code>pip show tensorflow</code> or <code>pip show tensorflow-gpu</code> to see where pip installed the package.</p>
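<p>A quick way to see exactly which interpreter is running, and then to install into that same interpreter (a sketch; the paths will differ on your machine):</p> <pre class="lang-py prettyprint-override"><code>import sys

print(sys.executable)  # the python.exe PyCharm is actually using
print(sys.path)        # the folders it searches for packages
</code></pre> <p>Then, from a terminal, running <code>&lt;that python.exe&gt; -m pip install tensorflow</code> installs the package into the interpreter PyCharm uses, rather than into some other Python installation on the system.</p>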
python|tensorflow|keras
0