Columns:
Unnamed: 0 (int64, 0 to 378k)
id (int64, 49.9k to 73.8M)
title (string, lengths 15 to 150)
question (string, lengths 37 to 64.2k)
answer (string, lengths 37 to 44.1k)
tags (string, lengths 5 to 106)
score (int64, -10 to 5.87k)
375,700
72,321,381
renaming the column names in dataframe using some parts of original column name
<p>I have this huge data frame with very long column names</p> <pre><code>import pandas as pd df = pd.DataFrame({'mynumber': [11, 20, 25], 'Raja_trial1:gill234_pit_type_id@rng': [4, 5, 42], 'Raja_trial1:Perm_king_que@pmcx[x1]': [0, 2, 3], 'Dun_mere_fer45:Hisota_gul_har23@cyu[9]': [4, 5, 42], 'Pit_chb1:reet_kaam_nix@opdx[x1]': [2, 1, 1], }) </code></pre> <p>and I would like to rename some of the columns like below dataframe.</p> <pre><code>outputdf = pd.DataFrame({'mynumber': [11, 20, 25], 'trial1:type_id': [4, 5, 42], 'trial1:king_que': [0, 2, 3], 'fere45:gul_har23': [4, 5, 42], 'chb1:kaam_nix': [2, 1, 1], }) </code></pre>
<p>You can achieve this with a single regex:</p> <pre><code>df.columns = df.columns.str.replace(r'.*?([^_]+:).+?([^_]+_[^_]+)@.*', r'\1\2', regex=True) </code></pre> <p>output:</p> <pre><code> mynumber trial1:type_id trial1:king_que fer45:gul_har23 chb1:kaam_nix 0 11 4 0 4 2 1 20 5 2 5 1 2 25 42 3 42 1 </code></pre> <p>To understand how this works, you can check the <a href="https://regex101.com/r/JP7XW1/1" rel="nofollow noreferrer">regex demo</a>.</p>
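<p>As a rough breakdown of that pattern (my reading of it; the linked demo has the authoritative explanation):</p> <pre><code># .*?            lazily skip the leading prefix (e.g. &quot;Raja_&quot;)
# ([^_]+:)       capture the underscore-free chunk ending in &quot;:&quot; (e.g. &quot;trial1:&quot;)
# .+?            lazily skip the middle part
# ([^_]+_[^_]+)  capture the last two &quot;_&quot;-separated tokens before &quot;@&quot; (e.g. &quot;type_id&quot;)
# @.*            drop everything from &quot;@&quot; onwards
</code></pre>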
pandas|dataframe|multiple-columns
2
375,701
72,333,215
pandas data frame splitting by column values using Parallel Processing
<p>I have a really large pandas dataframe and I am trying to split it into multiple ones by stock names and save them to csv.</p> <pre><code> stock date time spread time_diff VOD 01-01 9:05 0.01 0:07 VOD 01-01 9:12 0.03 0:52 VOD 01-01 10:04 0.02 0:11 VOD 01-01 10:15 0.01 0:10 BAT 01-01 10:25 0.03 0:39 BAT 01-01 11:04 0.02 22:00 BAT 01-02 9:04 0.02 0:05 BAT 01-01 10:15 0.01 0:10 BOA 01-01 10:25 0.03 0:39 BOA 01-01 11:04 0.02 22:00 BOA 01-02 9:04 0.02 0:05 </code></pre> <p>I know how to do this in a conventional way</p> <pre><code>def split_save(df): ids = df['stock'].unique() for id in ids: sub_df = df[df['stock']==id] sub_df.to_csv(f'{my_path}/{id}.csv') </code></pre> <p>However, since I have a really large dataframe and thousands of stocks, I want to use multiprocessing for acceleration.</p> <p>Any thoughts? (I might also try pyspark later.)</p> <p>Thank you!</p>
<p>Since this is I/O bound, I don't expect the selection of the dataframe to be the main bottleneck.</p> <p>So far, I can offer two ways to speed it up:</p> <p><strong>Threading:</strong> Just launch each stock in a different thread or in a <a href="https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor" rel="nofollow noreferrer">ThreadPoolExecutor</a></p> <pre><code>import concurrent.futures def dump_csv(df, ticker): df[df['stock'] == ticker].to_csv(f'{my_path}/{ticker}.csv') # We can use a with statement to ensure threads are cleaned up promptly with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor: futures = {executor.submit(dump_csv, df, ticker): ticker for ticker in df['stock'].unique()} for future in concurrent.futures.as_completed(futures): print(f&quot;Dumped ticker {futures[future]}&quot;) </code></pre> <p>(code not tested, adapted from the example)</p> <p><strong>Working in a ZIP file</strong>: For storing many many files, zip archives are a very good option, but it should be supported by the &quot;reader&quot;.</p> <p>For the sake of completeness:</p> <pre><code>import zipfile from zipfile import ZipFile with ZipFile('stocks.zip', 'w', compression=zipfile.ZIP_DEFLATED) as zf: ids = df['stock'].unique() for id in ids: zf.writestr(f'{id}.csv', df[df['stock'] == id].to_csv()) </code></pre>
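<p>If you specifically want separate processes rather than threads (as in the question), a rough, untested sketch along the same lines could use <code>ProcessPoolExecutor</code>; it assumes the per-ticker slices are cheap enough to pickle:</p> <pre><code>import concurrent.futures

def dump_one(sub_df, ticker, my_path):
    # write the pre-filtered slice for a single ticker
    sub_df.to_csv(f'{my_path}/{ticker}.csv')

# on Windows, wrap the block below in `if __name__ == '__main__':`
with concurrent.futures.ProcessPoolExecutor() as executor:
    futures = {
        executor.submit(dump_one, df[df['stock'] == ticker], ticker, my_path): ticker
        for ticker in df['stock'].unique()
    }
    for future in concurrent.futures.as_completed(futures):
        print(f&quot;Dumped ticker {futures[future]}&quot;)
</code></pre>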
python|pandas|multiprocessing
2
375,702
72,170,136
Convert irregular array of arrays into an array
<p>I have an array of values and an array of arrays of indices that looks like this (the real arrays are way bigger):</p> <pre><code>import numpy as np A = np.array([np.array([0,1,2]),np.array([0,4]),np.array([1,3,5])]) B = np.array([5,10,3,7,8,4]) for a in A: np.max(B[a]) </code></pre> <p>The endgame would be to remove the loop to save some computing time, but the main issue is that the irregularities in size in the A array keeps me from doing a simple C=B[A]. Is there a way out?</p> <p>edit: My issue is apparently not clear enough, so I'll try to clarify. In the end, I want this array: [10,8,10] meaning an array giving the max of B[0],B[1],B[2], the max of B[0],B[4] and the max of B[1],B[3], B[5].</p>
<p>In order for this to work, you need to concatenate your arrays into a single array.</p> <pre><code>import numpy as np A = np.array([np.array([0,1,2]),np.array([0,4]),np.array([1,3,5])]) B = np.array([5,10,3,7,8,4]) print(B[np.concatenate(A)].max()) </code></pre> <p>This gives the same max result without the loop.</p> <p>Notice that initializing <code>A</code> as an <code>np.array</code> of <code>np.array</code>s is deprecated.</p>
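<p>If what you are after is the per-group result from the edit in the question, i.e. <code>[10, 8, 10]</code>, one possible sketch (my own addition, not part of the answer above) keeps the concatenation idea but adds group offsets for <code>np.maximum.reduceat</code>:</p> <pre><code>import numpy as np

A = [np.array([0, 1, 2]), np.array([0, 4]), np.array([1, 3, 5])]
B = np.array([5, 10, 3, 7, 8, 4])

flat = np.concatenate(A)                             # all indices in one array
offsets = np.cumsum([0] + [len(a) for a in A[:-1]])  # start of each group: [0, 3, 5]
print(np.maximum.reduceat(B[flat], offsets))         # [10  8 10]
</code></pre>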
python|numpy
0
375,703
72,440,137
Specifying dtype for parquet partition fields with dask.dataframe.read_parquet
<p>I have a parquet dataset structured like:</p> <pre><code>/path/to/dataset/a=True/b=1/data.parquet /path/to/dataset/a=False/b=1/data.parquet /path/to/dataset/a=True/b=2/data.parquet /path/to/dataset/a=False/b=2/data.parquet ... </code></pre> <p>how do i specify the dtypes of partition fields (here, <code>a</code> and <code>b</code>) when calling <code>dd.read_parquet</code> on a directory like this?</p> <p>i am using the <code>pyarrow</code> engine. do i need to specify a kwarg for a pyarrow function? if so, what would this be?</p> <p>or, can i just call <code>astype(dict(a=&quot;bool&quot;, b=&quot;int&quot;))</code> or something similar?</p> <p>later on in my code, I am calling <code>DataFrame.query</code> to filter values, so dtype is important for boolean values, for example.</p>
<p>If you are using <code>pyarrow</code> as the underlying engine you can pass a <a href="https://arrow.apache.org/docs/python/dataset.html#different-partitioning-schemes" rel="nofollow noreferrer">partitioning argument</a> to specify the schema of the partition.</p> <p><code>dask.dataframe.read_parquet</code> will pass that argument along if you provide it. See <code>**kwargs</code> in the <a href="https://docs.dask.org/en/latest/generated/dask.dataframe.read_parquet.html" rel="nofollow noreferrer">doc</a>.</p> <pre class="lang-py prettyprint-override"><code>import dask.dataframe as dd import pyarrow as pa import pyarrow.dataset as ds partitioning = ds.partitioning( pa.schema([pa.field(&quot;a&quot;, pa.bool_()), pa.field('b', pa.int32())]), flavor=&quot;hive&quot; ) dd.read_parquet(&quot;/path/to/dataset/&quot;, engine='pyarrow', partitioning=partitioning) </code></pre>
python|pandas|dask|parquet|pyarrow
0
375,704
72,206,172
What is the time complexity of numpy.linalg.det?
<p>The documentation for <a href="https://numpy.org/doc/stable/reference/generated/numpy.linalg.det.html" rel="nofollow noreferrer"><code>numpy.linalg.det</code></a> states that</p> <blockquote> <p>The determinant is computed via LU factorization using the <a href="https://en.wikipedia.org/wiki/LAPACK" rel="nofollow noreferrer">LAPACK</a> routine z/<a href="https://en.wikipedia.org/wiki/LAPACK" rel="nofollow noreferrer">dgetrf</a>.</p> </blockquote> <p>I ran the following run time tests and fit polynomials of degrees 2, 3, and 4 because that covers the least worst options in <a href="https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra" rel="nofollow noreferrer">this table</a>. That table also mentions that an LU decomposition approach takes $O(n^3)$ time, but then the theoretical complexity of LU decomposition given <a href="https://en.wikipedia.org/wiki/LU_decomposition#Theoretical_complexity" rel="nofollow noreferrer">here</a> is $O(n^{2.376})$. Naturally the choice of algorithm matters, but I am not sure what available time complexities I should expect from <a href="https://numpy.org/doc/stable/reference/generated/numpy.linalg.det.html" rel="nofollow noreferrer"><code>numpy.linalg.det</code></a>.</p> <pre class="lang-py prettyprint-override"><code>from timeit import timeit import matplotlib.pyplot as plt import numpy as np from sklearn.linear_model import LinearRegression from sklearn.preprocessing import PolynomialFeatures sizes = np.arange(1,10001, 100) times = [] for size in sizes: A = np.ones((size, size)) time = timeit('np.linalg.det(A)', globals={'np':np, 'A':A}, number=1) times.append(time) print(size, time) sizes = sizes.reshape(-1,1) times = np.array(times).reshape(-1,1) quad_sizes = PolynomialFeatures(degree=2).fit_transform(sizes) quad_times = LinearRegression().fit(quad_sizes, times).predict(quad_sizes) cubic_sizes = PolynomialFeatures(degree=3).fit_transform(sizes) cubic_times = LinearRegression().fit(cubic_sizes, times).predict(cubic_sizes) quartic_sizes = PolynomialFeatures(degree=4).fit_transform(sizes) quartic_times = LinearRegression().fit(quartic_sizes, times).predict(quartic_sizes) plt.scatter(sizes, times, label='Data', color='k', alpha=0.5) plt.plot(sizes, quad_times, label='Quadratic', color='r') plt.plot(sizes, cubic_times, label='Cubic', color='g') plt.plot(sizes, quartic_times, label='Quartic', color='b') plt.xlabel('Matrix Dimension n') plt.ylabel('Time (seconds)') plt.legend() plt.show() </code></pre> <p>The output of the above is given as the following plot.</p> <p><a href="https://i.stack.imgur.com/xRTe8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xRTe8.png" alt="enter image description here" /></a></p> <p>Since none of the available complexities get down to quadratic time, I am unsurprising that visually the quadratic model had the worst fit. 
Both the cubic and quartic models had excellent visual fit, and unsurprisingly their residuals are closely correlated.</p> <p><a href="https://i.stack.imgur.com/Ozaoe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ozaoe.png" alt="enter image description here" /></a></p> <hr /> <p>Some related questions exist, but they do not have an answer for this specific implementation.</p> <ul> <li><a href="https://stackoverflow.com/questions/62748765/space-complexity-of-matrix-inversion-determinant-and-adjoint">Space complexity of matrix inversion, determinant and adjoint</a></li> <li><a href="https://stackoverflow.com/questions/67648438/time-and-space-complexity-of-determinant-of-a-matrix">Time and space complexity of determinant of a matrix</a></li> <li><a href="https://stackoverflow.com/questions/40687798/experimentally-determining-computing-complexity-of-matrix-determinant">Experimentally determining computing complexity of matrix determinant</a></li> </ul> <p>Since this implementation is used by a lot of Python programmers world-wide, it may benefit the understanding of a lot of people if an answer was tracked down.</p>
<p><strong>TL;DR</strong>: it is between <code>O(n^2.81)</code> and <code>O(n^3)</code>, depending on the target BLAS implementation.</p> <p>Indeed, Numpy uses an LU decomposition (working in log space). The actual implementation can be found <a href="https://github.com/numpy/numpy/blob/4adc87dff15a247e417d50f10cc4def8e1c17a03/numpy/linalg/umath_linalg.c.src#L1148" rel="nofollow noreferrer">here</a>. It uses the <code>sgetrf</code>/<code>dgetrf</code> primitives of LAPACK. Multiple libraries provide these routines. The most famous is the NetLib reference implementation, though it is not the fastest. The Intel MKL is an example of a library providing a fast implementation. Fast LU decomposition algorithms use tiling so they can use matrix multiplication internally. They do that because matrix multiplication is one of the most optimized routines in linear algebra libraries (for example, the MKL, BLIS and OpenBLAS generally reach nearly optimal performance on modern processors). More generally, <strong>the complexity of LU decomposition is that of matrix multiplication</strong>.</p> <p>The complexity of the <em>naive</em> square matrix multiplication is <code>O(n^3)</code>. Faster algorithms exist, like <a href="https://en.wikipedia.org/wiki/Strassen_algorithm" rel="nofollow noreferrer"><em>Strassen</em></a> (running in <code>~O(n^2.81)</code> time), which is often used for big matrices. The <em>Coppersmith–Winograd</em> algorithm achieves a significantly better complexity (<code>~O(n^2.38)</code>), but <em>no linear algebra library actually uses it</em> since it is a <a href="https://en.wikipedia.org/wiki/Galactic_algorithm" rel="nofollow noreferrer"><strong>galactic algorithm</strong></a>. Put shortly, such an algorithm is theoretically <em>asymptotically better</em> than the others, but the hidden constant makes it <em>impractical</em> for any <em>real-world</em> usage. For more information about the complexity of matrix multiplication, please read <a href="https://en.wikipedia.org/wiki/Computational_complexity_of_matrix_multiplication" rel="nofollow noreferrer">this article</a>. Thus, <strong>in practice, the complexity of the matrix multiplication is between <code>O(n^2.81)</code> and <code>O(n^3)</code></strong>, depending on the target BLAS implementation (which in turn depends on your platform and your Numpy configuration).</p>
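<p>As a side note (my addition, not from the answer above), a quick way to estimate the empirical exponent from the timings gathered in the question is to fit a straight line in log-log space; this sketch assumes the <code>sizes</code> and <code>times</code> arrays measured there:</p> <pre><code>import numpy as np

# skip the smallest sizes, where fixed overhead dominates the timing
log_n = np.log(np.ravel(sizes)[5:].astype(float))
log_t = np.log(np.ravel(times)[5:])
k, log_c = np.polyfit(log_n, log_t, 1)
print(f&quot;empirical scaling ~ n^{k:.2f}&quot;)
</code></pre>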
python|numpy|time-complexity|linear-algebra|determinants
1
375,705
72,262,198
How can I save multiple dataframes onto one excel file (as separate sheets) without this error occurring?
<p>I have the following Python code:</p> <pre><code>import pandas as pd path=r&quot;C:\Users\Wali\Example.xls&quot; df1=pd.read_excel(path, sheet_name = [0]) df2=pd.read_excel(path, sheet_name = [1]) with pd.ExcelWriter(r&quot;C:\Users\Wali\Example2.xls&quot;) as writer: # use to_excel function and specify the sheet_name and index # to store the dataframe in specified sheet df1.to_excel(writer, sheet_name=&quot;1&quot;, index=0) df2.to_excel(writer, sheet_name=&quot;2&quot;, index=1) </code></pre> <p>I'm reading the excel file which contains two sheets and then saving those sheets into a new excel file but unfortunately I'm receiving the following error:</p> <pre><code>AttributeError: 'dict' object has no attribute 'to_excel' </code></pre> <p>Any ideas on how I can fix this?. Thanks.</p>
<p>Changing <code>[0]</code> to <code>0</code> in <code>pd.read_excel(path, sheet_name = [0])</code> will resolve this issue:</p> <pre><code>import pandas as pd path=r&quot;test_book.xlsx&quot; df1=pd.read_excel(path, sheet_name = 0) df2=pd.read_excel(path, sheet_name = 1) with pd.ExcelWriter(r&quot;test_book1.xlsx&quot;) as writer: # use to_excel function and specify the sheet_name and index # to store the dataframe in specified sheet df1.to_excel(writer, sheet_name=&quot;1&quot;, index=0) df2.to_excel(writer, sheet_name=&quot;2&quot;, index=1) </code></pre>
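<p>For context: with <code>sheet_name=[0]</code> pandas returns a <code>dict</code> of dataframes keyed by sheet, which is why <code>.to_excel</code> fails on it. As an alternative sketch (not tested against your files), you could also keep the dict and loop over it:</p> <pre><code>import pandas as pd

path = r&quot;test_book.xlsx&quot;
sheets = pd.read_excel(path, sheet_name=[0, 1])   # {0: df0, 1: df1}

with pd.ExcelWriter(r&quot;test_book1.xlsx&quot;) as writer:
    for idx, frame in sheets.items():
        frame.to_excel(writer, sheet_name=str(idx + 1), index=False)
</code></pre>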
python|pandas|dataframe
0
375,706
72,347,136
Is there a way to iterate through a list of strings in a dataframe?
<p>I wrote the following code. I want to replace the number &quot;1&quot; with &quot;0&quot; whenever it appears twice or more for a particular universal_id, and the number &quot;1&quot; that is left should be in the row where days is the lowest. The code below does the work, but I want to iterate over more than one universal_id. Column &quot;e&quot; is ok for 'erfa'; I want to do this for other IDs and other columns.</p> <pre><code>pdf1 = pd.DataFrame( [[1, 0,1, 0,1, 60, 'fdaf'], [1, 1,0, 0,1, 350, 'fdaf'], [1, 1,0, 0,1, 420, 'erfa'], [0, 1,0, 0,1, 410, 'erfa']], columns=['A', 'B', 'c', 'd', 'e', 'days','universal_id']) pdf1['A'] = np.where(pdf1['days']==pdf1['days'].min(),1,0) zet = pdf1.loc[pdf1['e'].isin([1]) &amp; pdf1['universal_id'].str.contains('erfa')] zet['e'] = np.where(zet['days']==zet['days'].min(),1,0) pdf1.loc[zet.index, :] = zet[:] pdf1 </code></pre> <p>Output:</p> <pre><code> A B c d e days universal_id 0 1 0 1 0 1 60 fdaf 1 0 1 0 0 1 350 fdaf 2 0 1 0 0 0 420 erfa 3 0 1 0 0 1 410 erfa </code></pre>
<p>You can use:</p> <pre><code>df2 = pdf1.sort_values(by='days') m1 = df2['A'].eq(1) m2 = df2[['A', 'universal_id']].duplicated() pdf1.loc[m1&amp;m2, 'A'] = 0 </code></pre> <p>output:</p> <pre><code> A B c d e days universal_id 0 1 0 1 0 1 60 fdaf 1 0 1 0 0 1 350 fdaf 2 1 1 0 0 1 420 erfa 3 0 1 0 0 1 410 erfa </code></pre> <p>for e, f you want to follow the same logic:</p> <pre><code>m1 = df2['A'].eq(1) m3 = df2[['e', 'universal_id']].duplicated() pdf1.loc[m1&amp;m3, 'e'] = 0 </code></pre> <p>output:</p> <pre><code> A B c d e days universal_id 0 1 0 1 0 1 60 fdaf 1 0 1 0 0 0 350 fdaf 2 1 1 0 0 0 420 erfa 3 0 1 0 0 1 410 erfa </code></pre>
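<p>If you need to apply the same cleanup to several flag columns at once (as mentioned in the question), a rough generalisation of the same idea could look like this; the column list is just an example:</p> <pre><code>df2 = pdf1.sort_values(by='days')
for col in ['A', 'e']:  # flag columns to clean up
    dup = df2[[col, 'universal_id']].duplicated() &amp; df2[col].eq(1)
    pdf1.loc[dup[dup].index, col] = 0
</code></pre>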
python|pandas|dataframe|iterator
0
375,707
72,363,673
groupby and join with pandas dataframe
<p>Here is part of the data of scaffold_table <a href="https://i.stack.imgur.com/uLcHO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uLcHO.png" alt="enter image description here" /></a></p> <pre><code>import pandas as pd scaffold_table = pd.DataFrame({ 'Position':[2000]*5, 'Company':['Amazon', 'Amazon', 'Alphabet', 'Amazon', 'Alphabet'], 'Date':['2020-05-26','2020-05-27','2020-05-27','2020-05-28','2020-05-28'], 'Ticker':['AMZN','AMZN','GOOG','AMZN','GOOG'], 'Open':[2458.,2404.9899,1417.25,2384.330078,1396.859985], 'Volume':[3568200,5056900,1685800,3190200,1692200], 'Daily Return':[-0.006164,-0.004736,0.000579,-0.003854,-0.000783], 'Daily PnL':[-12.327054,-9.472236,1.157283,-7.708126,-1.565741], 'Cumulative PnL/Ticker':[-12.327054,-21.799290,1.157283,-29.507417,-0.408459]}) </code></pre> <p>I would like to create a summary table that returns the overall yield per ticker. The overall yield should be calculated as the total PnL per ticker divided by the last date's position per ticker</p> <pre><code># Create a summary table of your average daily PnL, total PnL, and overall yield per ticker summary_table = pd.DataFrame(scaffold_table.groupby(['Date','Ticker'])['Daily PnL'].mean()) position_ticker = pd.DataFrame(scaffold_table.groupby(['Date','Ticker'])['Position'].sum()) # the total PnL is the sum of PnL per Ticker after two years period totals = summary_table.droplevel('Date').groupby('Ticker').sum().rename(columns={'Daily PnL':'total PnL'}) summary_table = summary_table.join(totals, on='Ticker') summary_table = summary_table.join(position_ticker, on = ['Date','Ticker'], how='inner') summary_table['Yield'] = summary_table.loc['2022-04-29']['total PnL']/summary_table.loc['2022-04-29']['Position'] summary_table </code></pre> <p>But the yield is showing NaN, could anyone take a look at my codes? I used ['2022-04-29'] because it is the last date, but I think there are some codes to return the last date without explicitly inputting that. <a href="https://i.stack.imgur.com/yawzf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yawzf.png" alt="enter image description here" /></a></p>
<p>I solved the problem with the following code</p> <pre><code># we want the overall yield per ticker, so total PnL/Position on the last date summary_table['Yield'] = summary_table['total PnL']/summary_table.loc['2022-04-29']['Position'] </code></pre> <p>This does not specify the date for total PnL since it's the sum by ticker without regard of the date.</p>
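<p>If you want to avoid hard-coding the last date, one option (a sketch, assuming the same <code>(Date, Ticker)</code> index as above) is to read it off the index level:</p> <pre><code>last_date = summary_table.index.get_level_values('Date').max()
summary_table['Yield'] = summary_table['total PnL'] / summary_table.loc[last_date]['Position']
</code></pre>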
python|pandas
0
375,708
72,277,791
Finding the line number of specific values in pandas dataframe - python
<p>I am working on a school project and I am trying to simulate a library's catalogue system. I have .csv files that hold all the data I need but I am having a problem with checking if an inputted title, author, bar code, etc. is in the data set. I have searched around for quite a while trying different solutions but nothing is working. The idea that I have right now is that if I can find at what line the inputted data, then I can use .loc[] to get the needed info. Is this the right track? is there another, more efficient way to do this?</p> <pre><code>import pandas mainData = pandas.read_csv(&quot;mainData.csv&quot;) barcodes = mainData[&quot;Barcode&quot;] authors = mainData[&quot;Author&quot;] titles = mainData[&quot;Title/Subtitle&quot;] callNumbers = mainData[&quot;Call Number&quot;] k = &quot;Han, Jenny,&quot; for i in authors: if k == i: print(&quot;Success&quot;) k = authors.index[k] print(authors[k]) else: print(&quot;Fail&quot; + k) # Please Note: This code only checks for an author match and has all other fields left out as I thought this code was too inefficient to add the rest of the fields. The code also does not find the line on witch the matched are located, therefore .loc[] can not be used to print out all the data found. </code></pre> <p>This is the code I am using right now, It outputs the result along with an error <code>Python IndexError: only integers, slices (\`:\`), ellipsis (\`\...\`), numpy.newaxis (\`None\`) and integer or boolean arrays are valid indices</code> and is very slow. I would like the code to be able to output the books and their respective info. I have found the the .loc[] feature (mentioned above) outputs the info quite nicely. <a href="https://docs.google.com/spreadsheets/d/1jDLfMKNS2cx17OGHs6Qa17LzlZwZngNJLm_PayHv_lg/edit?usp=sharing" rel="nofollow noreferrer">Here is the data I am using</a> .</p> <p><em>Edit:</em> I have been able to reduce the time it takes for the program to run and made a functional &quot;prototype&quot;</p> <pre><code>authorFirst = authorFirst.lower() authorFirst = authorFirst.title() authorFirst += &quot;,&quot; authorSecond = input(&quot;Enter author's last name: &quot;) authorSecond = authorSecond.lower() authorSecond = authorSecond.title() authorSecond += &quot;, &quot; authorInput = authorSecond + authorFirst print(mainData[mainData[&quot;Author&quot;].isin([authorInput])]) bookChoice = input(&quot;Please Enter the number to the left of the barcode to select a book: &quot;) print(mainData.loc[int(bookChoice)]) </code></pre> <p>id provides the functionality that I am looking for but I feel that there has to be a better way of doing it. (Not asking the user to input the row number). Idk if this is possible tho.</p> <p>I am new to python and this is my first time using pandas so i'm sorry if this is really shitty and hurts your brain.</p> <p>Thank-you so much for your time!</p>
<p>Pandas does not really need to find the numeric index of something in order to do indexing.</p> <p>Since you have not provided any starting point or data, I'll just provide a few pointers here as there are many ways to <a href="https://pandas.pydata.org/docs/user_guide/10min.html#boolean-indexing" rel="nofollow noreferrer">match and index things in pandas</a>.</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd # build a library library = pd.DataFrame({ &quot;Author&quot;: [&quot;H.G. Wells&quot;, &quot;Hubert Selby Jr.&quot;, &quot;Ken Kesey&quot;], &quot;Title&quot;: [ &quot;The War of the Worlds&quot;, &quot;Requiem for a Dream&quot;, &quot;One Flew Over the Cuckoo's Nest&quot;, ], &quot;Published&quot;: [1898, 1979, 1962], }) # find on some characteristics mask_wells = library.Author.str.contains(&quot;Wells&quot;) mask_rfad = library[&quot;Title&quot;] == &quot;Requiem for a Dream&quot; mask_xixth = library[&quot;Published&quot;] &lt; 1900 </code></pre>
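<p>To turn one of those masks into actual rows (which removes the need to know any line number), you index with it; a small, hypothetical usage sketch:</p> <pre><code># user_input is whatever the person typed at the prompt
user_input = &quot;wells&quot;
matches = library[library[&quot;Author&quot;].str.contains(user_input, case=False)]
print(matches)   # prints the full matching rows: Author, Title, Published
</code></pre>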
python|pandas|dataframe
0
375,709
72,143,352
Round number down to the next 1000 in python
<p>Can somebody tell me how I can round down to the nearest thousand? So far I have tried math.round() and the truncate function, but I couldn't get my math skills to work it out for me. As an example, I want 4520 to end up being 4000.</p>
<p>In Python, you can do</p> <pre><code>print((number // 1000)*1000) </code></pre>
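<p>For example:</p> <pre><code>number = 4520
print((number // 1000) * 1000)   # 4000

# note: floor division rounds toward negative infinity,
# so -4520 would give -5000 rather than -4000
</code></pre>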
python|numpy|rounding
2
375,710
72,356,325
Python - When plotting using both matplotlib and pandas, the x-axis is accurate using pandas, but not matplotlib
<p>(This is my first StackOverflow question ever!)</p> <p>I have a pandas dataframe that contains solar irradiance values in 15-minute intervals over the course of a single day. This dataframe's index is a &quot;DatetimeIndex&quot; (dtype='datetime64[ns, America/New_York]', and is localized to its respective timezone. So, the index values start as &quot;1997-01-20 00:15:00-05:00&quot;, &quot;1997-01-20 00:30:00-05:00&quot; ... &quot;1997-01-21 00:00:00-5:00&quot; (notice the last entry is 12AM of the next day).</p> <p>When plotting this dataframe (named <code>poa_irradiance_swd_corrected</code>) by itself, <strong>everything looks great</strong>. Here's an example of my code and its output:</p> <p>Code:</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from matplotlib.dates import DateFormatter import pvlib as pv import datetime as dt import math import pytz import time plt.figure() poa_irradiance_swd_corrected.plot().get_figure().set_facecolor('white') </code></pre> <p>Output: <a href="https://i.stack.imgur.com/w7UoA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w7UoA.png" alt="enter image description here" /></a></p> <p>The highest &quot;poa_global&quot; irradiance measurements in this dataframe are between 12:00 PM (noon) and 1:30-ish. This is accurate.</p> <p>I also have two other dataframes (named <code>poa_irradiance_swd_flat</code> and <code>modeled_poa_irradiance_swd</code>) with similar irradiance measurements. To compare the irradiance measurements visually, I put all 3 dataframes into a single figure using matplotlib's subplot function. <strong>However, the X-axis of the subplots are very inaccurate</strong>. The code and output are shown below:</p> <p>Code:</p> <pre><code>fig, axs = plt.subplots(2,2, sharex=True,sharey=True, facecolor='white',figsize=(12,8)) fig.suptitle('Sunny Winter Day',size='xx-large',weight='bold') axs[0,0].plot(poa_irradiance_swd_flat) axs[0,0].set_title('POA Irradiance (Flat)') axs[0,1].plot(modeled_poa_irradiance_swd) axs[0,1].set_title('Modeled Data (Tilted)') axs[1,0].plot(poa_irradiance_swd_corrected) axs[1,0].set_title('POA Irradiance (Tilted)') axs[1,1].plot(modeled_poa_irradiance_swd) axs[1,1].set_title('Modeled Data (Tilted)') for axs in axs.flat: axs.set(ylabel='Irradiance $W/m^2$') axs.xaxis.set_major_formatter(DateFormatter('%H:%M')) plt.tight_layout(pad=0, w_pad=0, h_pad=3) fig.autofmt_xdate() </code></pre> <p>Output:<a href="https://i.stack.imgur.com/QBxcW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QBxcW.png" alt="enter image description here" /></a> Note that the two graphs on the right are the exact same. I simply used the graphs to the right to compare to the graphs to the left</p> <p><strong>Notice how the x-axis is completely off by 6-ish hours.</strong> I've cross-checked the dataframes themselves, and again, the highest irradiance times coincide with the times between 12:00 PM (noon) and around 1:30-ish... <strong>not</strong> around 18:00 (which is 6:00 PM and does not make sense). I even used the same exact dataframe in the <code>axs[1,0].plot(poa_irradiance_swd_corrected)</code> line of code.</p> <p><strong>How can I get the X-axis to display the accurate times?</strong></p> <p><strong>EDIT</strong> (in response to scespinoza):</p> <p>So I tried passing the respective <code>.index</code> and <code>.values</code> values from the dataframe, and it successfully plotted... 
but still with the same problem.</p> <p>Then, I tried passing the <code>ax</code> commands into the plot function itself like this: <code>poa_irradiance_swd_flat.plot(ax=axs[0, 0])</code> for each subplot. After importing matplotlib as mpl, I also added <code>mpl.rcParams['timezone'] = 'UTC'</code> to the top of my code.</p> <p>Now, I'm getting a ValueError:</p> <pre><code>ValueError Traceback (most recent call last) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/IPython/core/formatters.py:339, in BaseFormatter.__call__(self, obj) 337 pass 338 else: --&gt; 339 return printer(obj) 340 # Finally look for special method names 341 method = get_real_method(obj, self.print_method) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/IPython/core/pylabtools.py:151, in print_figure(fig, fmt, bbox_inches, base64, **kwargs) 148 from matplotlib.backend_bases import FigureCanvasBase 149 FigureCanvasBase(fig) --&gt; 151 fig.canvas.print_figure(bytes_io, **kw) 152 data = bytes_io.getvalue() 153 if fmt == 'svg': File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/backend_bases.py:2295, in FigureCanvasBase.print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs) 2289 renderer = _get_renderer( 2290 self.figure, 2291 functools.partial( 2292 print_method, orientation=orientation) 2293 ) 2294 with getattr(renderer, &quot;_draw_disabled&quot;, nullcontext)(): -&gt; 2295 self.figure.draw(renderer) 2297 if bbox_inches: 2298 if bbox_inches == &quot;tight&quot;: File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/artist.py:73, in _finalize_rasterization.&lt;locals&gt;.draw_wrapper(artist, renderer, *args, **kwargs) 71 @wraps(draw) 72 def draw_wrapper(artist, renderer, *args, **kwargs): ---&gt; 73 result = draw(artist, renderer, *args, **kwargs) 74 if renderer._rasterizing: 75 renderer.stop_rasterizing() File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/artist.py:50, in allow_rasterization.&lt;locals&gt;.draw_wrapper(artist, renderer) 47 if artist.get_agg_filter() is not None: 48 renderer.start_filter() ---&gt; 50 return draw(artist, renderer) 51 finally: 52 if artist.get_agg_filter() is not None: File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/figure.py:2810, in Figure.draw(self, renderer) 2807 # ValueError can occur when resizing a window. 
2809 self.patch.draw(renderer) -&gt; 2810 mimage._draw_list_compositing_images( 2811 renderer, self, artists, self.suppressComposite) 2813 for sfig in self.subfigs: 2814 sfig.draw(renderer) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/image.py:132, in _draw_list_compositing_images(renderer, parent, artists, suppress_composite) 130 if not_composite or not has_images: 131 for a in artists: --&gt; 132 a.draw(renderer) 133 else: 134 # Composite any adjacent images together 135 image_group = [] File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/artist.py:50, in allow_rasterization.&lt;locals&gt;.draw_wrapper(artist, renderer) 47 if artist.get_agg_filter() is not None: 48 renderer.start_filter() ---&gt; 50 return draw(artist, renderer) 51 finally: 52 if artist.get_agg_filter() is not None: File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/axes/_base.py:3046, in _AxesBase.draw(self, renderer) 3043 for spine in self.spines.values(): 3044 artists.remove(spine) -&gt; 3046 self._update_title_position(renderer) 3048 if not self.axison: 3049 for _axis in self._get_axis_list(): File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/axes/_base.py:2986, in _AxesBase._update_title_position(self, renderer) 2983 for ax in axs: 2984 if (ax.xaxis.get_ticks_position() in ['top', 'unknown'] 2985 or ax.xaxis.get_label_position() == 'top'): -&gt; 2986 bb = ax.xaxis.get_tightbbox(renderer) 2987 else: 2988 bb = ax.get_window_extent(renderer) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/axis.py:1103, in Axis.get_tightbbox(self, renderer, for_layout_only) 1100 if not self.get_visible(): 1101 return -&gt; 1103 ticks_to_draw = self._update_ticks() 1105 self._update_label_position(renderer) 1107 # go back to just this axis's tick labels File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/axis.py:1046, in Axis._update_ticks(self) 1041 &quot;&quot;&quot; 1042 Update ticks (position and labels) using the current data interval of 1043 the axes. Return the list of ticks that will be drawn. 
1044 &quot;&quot;&quot; 1045 major_locs = self.get_majorticklocs() -&gt; 1046 major_labels = self.major.formatter.format_ticks(major_locs) 1047 major_ticks = self.get_major_ticks(len(major_locs)) 1048 self.major.formatter.set_locs(major_locs) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/ticker.py:224, in Formatter.format_ticks(self, values) 222 &quot;&quot;&quot;Return the tick labels for all the ticks at once.&quot;&quot;&quot; 223 self.set_locs(values) --&gt; 224 return [self(value, i) for i, value in enumerate(values)] File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/ticker.py:224, in &lt;listcomp&gt;(.0) 222 &quot;&quot;&quot;Return the tick labels for all the ticks at once.&quot;&quot;&quot; 223 self.set_locs(values) --&gt; 224 return [self(value, i) for i, value in enumerate(values)] File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/dates.py:636, in DateFormatter.__call__(self, x, pos) 635 def __call__(self, x, pos=0): --&gt; 636 result = num2date(x, self.tz).strftime(self.fmt) 637 return _wrap_in_tex(result) if self._usetex else result File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/dates.py:528, in num2date(x, tz) 526 if tz is None: 527 tz = _get_rc_timezone() --&gt; 528 return _from_ordinalf_np_vectorized(x, tz).tolist() File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/numpy/lib/function_base.py:2163, in vectorize.__call__(self, *args, **kwargs) 2160 vargs = [args[_i] for _i in inds] 2161 vargs.extend([kwargs[_n] for _n in names]) -&gt; 2163 return self._vectorize_call(func=func, args=vargs) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/numpy/lib/function_base.py:2246, in vectorize._vectorize_call(self, func, args) 2243 # Convert args to object arrays first 2244 inputs = [asanyarray(a, dtype=object) for a in args] -&gt; 2246 outputs = ufunc(*inputs) 2248 if ufunc.nout == 1: 2249 res = asanyarray(outputs, dtype=otypes[0]) File ~/opt/miniconda3/envs/pvlib/lib/python3.10/site-packages/matplotlib/dates.py:350, in _from_ordinalf(x, tz) 347 dt = (np.datetime64(get_epoch()) + 348 np.timedelta64(int(np.round(x * MUSECONDS_PER_DAY)), 'us')) 349 if dt &lt; np.datetime64('0001-01-01') or dt &gt;= np.datetime64('10000-01-01'): --&gt; 350 raise ValueError(f'Date ordinal {x} converts to {dt} (using ' 351 f'epoch {get_epoch()}), but Matplotlib dates must be ' 352 'between year 0001 and 9999.') 353 # convert from datetime64 to datetime: 354 dt = dt.tolist() ValueError: Date ordinal 14228655 converts to 40926-09-26T00:00:00.000000 (using epoch 1970-01-01T00:00:00), but Matplotlib dates must be between year 0001 and 9999. </code></pre>
<p>Can't really reproduce your problem, but maybe you can try passing the index and values of the series by separate to the plot function.</p> <pre><code>... axs[0,0].plot(poa_irradiance_swd_flat.index, poa_irradiance_swd_flat.values) ... </code></pre> <p>Also, note that you can pass an <code>ax</code> attribute to the <code>.plot()</code> function of a <code>Series</code>, so you can refactor your code to something like this:</p> <pre><code>fig, axs = plt.subplots(2,2, sharex=True,sharey=True, facecolor='white',figsize=(12,8)) fig.suptitle('Sunny Winter Day',size='xx-large',weight='bold') poa_irradiance_swd_flat.plot(ax=axs[0, 0]) axs[0,0].set_title('POA Irradiance (Flat)') modeled_poa_irradiance_swd.plot(ax=axs[0, 1]) axs[0,1].set_title('Modeled Data (Tilted)') poa_irradiance_swd_corrected.plot(ax=axs[1, 0]) axs[1,0].set_title('POA Irradiance (Tilted)') modeled_poa_irradiance_swd.plot(ax=axs[1, 1]) axs[1,1].set_title('Modeled Data (Tilted)') for axs in axs.flat: axs.set(ylabel='Irradiance $W/m^2$') # axs.xaxis.set_major_formatter(DateFormatter('%H:%M')) plt.tight_layout(pad=0, w_pad=0, h_pad=3) fig.autofmt_xdate() </code></pre> <h3>Update</h3> <p>I think the error may come from <code>DateFormatter</code> converting the timezone of your date values. Try adding this on the top of your program.</p> <pre><code>import matplotlib as mpl mpl.rcParams['timezone'] = 'UTC' </code></pre>
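<p>Another option, if you want to keep the explicit <code>DateFormatter</code>, is to hand it the timezone directly (untested sketch, assuming the <code>axs</code> array from your code):</p> <pre><code>import pytz
from matplotlib.dates import DateFormatter

ny = pytz.timezone('America/New_York')
for ax in axs.flat:
    ax.xaxis.set_major_formatter(DateFormatter('%H:%M', tz=ny))
</code></pre>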
python|pandas|matplotlib|pvlib
0
375,711
72,244,954
Pandas - Setting column value, based on a function that runs on another column
<p>I have been all over the place to try and get this to work (new to datascience). It's obviously because I don't get how the datastructure of Panda fully works.</p> <p>I have this code:</p> <pre class="lang-py prettyprint-override"><code>def getSearchedValue(identifier): full_str = anedf[&quot;Diskret data&quot;].astype(str) value=&quot;&quot; if full_str.str.find(identifier) &lt;= -1: start_index = full_str.str.find(identifier)+len(identifier)+1 end_index = full_str[start_index:].find(&quot;|&quot;)+start_index value = full_str[start_index:end_index].astype(str) return value for col in anedf.columns: if col.count(&quot;#&quot;) &gt; 0: anedf[col] = getSearchedValue(col) </code></pre> <p>What i'm trying to do is iterate over my columns. I have around 260 in my dataframe. If they contain the character #, it should try to fill values based on whats in my &quot;Diskret data&quot; column. Data in the &quot;Diskret data&quot; column is completely messed up but in the form <code>CCC#111~VALUE|DDD#222~VALUE|</code> &lt;- Until there is no more identifiers + values. All identifiers are not present in each row, and they come in no specific order. The function works if I run it with hard coded strings in regular Python document. But with the dataframe I get various error like:</p> <pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all(). Input In [119], in &lt;cell line: 12&gt;() 12 for col in anedf.columns: 13 if col.count(&quot;#&quot;) &gt; 0: ---&gt; 14 anedf[col] = getSearchedValue(col) Input In [119], in getSearchedValue(identifier) 4 full_str = anedf[&quot;Diskret data&quot;].astype(str) 5 value=&quot;&quot; ----&gt; 6 if full_str.str.find(identifier) &lt;= -1: 7 start_index = full_str.str.find(identifier)+len(identifier)+1 8 end_index = full_str[start_index:].find(&quot;|&quot;)+start_index </code></pre> <p>I guess this is because it evaluate against all rows (Series) which obviously provides some false and true errors. But how can I make the evaluation and assignment so it it's evaluating+assigning like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>Diskret data</th> <th>CCC#111</th> <th>JJSDJ#1234</th> </tr> </thead> <tbody> <tr> <td>CCC#111~1IBBB#2323~2234</td> <td>1 (copied from &quot;Diskret data&quot;)</td> <td>0</td> </tr> <tr> <td>JJSDJ#1234~Heart attack</td> <td>0 (or skipped since the row does not contain a value for the identifier)</td> <td>Heart attack</td> </tr> </tbody> </table> </div> <p>The plan is to drop the &quot;Diskret data&quot; when the assignment is done, so I have the data in a more structured way.</p> <p>--- Update--- By request:</p> <p>I have included a picture of how I visualize the problem, And what I seemingly can't make it do.</p> <p><a href="https://i.ibb.co/jf6ZBpN/problem.png" rel="nofollow noreferrer">Problem visualisation</a></p>
<p>With regex you could do something like:</p> <pre><code>import pandas as pd def map_(list_) -&gt; pd.Series: if list_: idx, values = zip(*list_) return pd.Series(values, idx) else: return pd.Series(dtype=object) series = pd.Series( ['CCC#111~1|BBB#2323~2234', 'JJSDJ#1234~Heart attack'] ) reg_series = series.str.findall(r'([^~|]+)~([^~|]+)') reg_series.apply(map_) </code></pre> <h2>Breaking this down:</h2> <p>Create a new series by running a map on each row that turns your long string into a list of tuples.</p> <pre><code>reg_series = series.str.findall(r'([^~|]+)~([^~|]+)') reg_series # output: # 0 [(CCC#111, 1), (BBB#2323, 2234)] # 1 [(JJSDJ#1234, Heart attack)] </code></pre> <p>Then we create a <code>map_</code> function. This function takes each row of <code>reg_series</code> and splits it into the &quot;keys&quot; and the &quot;values&quot;. We then build a series from these, with the keys as the index and the values as the values.</p> <p><em>Edit:</em> We added an <code>if</code>/<code>else</code> statement that checks whether the list is non-empty. If it is not, we return an empty series of type object.</p> <pre><code>def map_(list_) -&gt; pd.Series: if list_: idx, values = zip(*list_) return pd.Series(values, idx) else: return pd.Series(dtype=object) ... print(idx, values) # first row # output: # ('CCC#111', 'BBB#2323') (1, 2234) </code></pre> <p>Finally we run <code>apply</code> on the series to create a dataframe that takes the outputs from <code>map_</code> for each row and zips them together in columnar format.</p> <pre><code>reg_series.apply(map_) # output: # CCC#111 BBB#2323 JJSDJ#1234 # 0 1 2234 NaN # 1 NaN NaN Heart attack </code></pre>
pandas|jupyter-notebook
0
375,712
72,438,669
GeoPandas - MultiPolygon to Polygon geometry
<p><code>geometry</code> in my geopandas dataframe is of type <code>Polygon</code> and <code>MultiPolygon</code>. I'd like to convert the <code>MultiPolygons</code> to <code>Polygons</code> as I am having issues with running some spatial functions on the data.</p> <p>Sample data file: <a href="https://www.dropbox.com/s/14ni2mfppt5dn7x/gdf%20%281%29.csv?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/14ni2mfppt5dn7x/gdf%20%281%29.csv?dl=0</a></p> <pre><code>from shapely.geometry import MultiPolygon, Polygon import geopandas as gpd from shapely import wkt gdf = gpd.read_file() # To GeoPandas gdf['geometry'] = gdf['zip_code_geom'].apply(wkt.loads) # Set Geometry gdf = gpd.GeoDataFrame(gdf, geometry='geometry') # MultiPolygon to Polygon gdf = gdf.explode(column='geometry', ignore_index=True, index_parts=False) </code></pre> <p>I have tried using <code>explode</code> as suggested in other similar questions, but it doesn't convert <code>MultiPolygons</code> to <code>Polygons</code>.</p>
<p>There are bad geometries in your example data. This will convert the valid ones and store the bad ones in the bad_geom_dict for further investigation. Explode works on the valid geoms.</p> <pre><code>bad_geom_dict = {} for idx, row in gdf.iterrows(): try: value = row['zip_code_geom'] gdf.at[idx, 'geometry'] = wkt.loads(value) except Exception as e: print(e) bad_geom_dict[idx] = value </code></pre>
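<p>Once the bad rows are dropped or repaired, <code>explode</code> should then split the remaining <code>MultiPolygons</code> into <code>Polygons</code> (sketch, assuming a recent geopandas):</p> <pre><code>gdf_valid = gdf.drop(index=list(bad_geom_dict))
gdf_valid = gdf_valid.explode(index_parts=False, ignore_index=True)
</code></pre>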
pandas|geospatial|geopandas
0
375,713
72,312,508
How to do regex split and replace on dataframes columns in Pandas?
<p>QUESTION: Is there a function to simplify this whole process?</p> <p>So I was trying to clean data. Data source: <a href="http://unstats.un.org/unsd/environment/excel_file_tables/2013/Energy%20Indicators.xls" rel="nofollow noreferrer">UN Energy Table</a> (will automatically download xls file) And the data in question is the 'Country' column after this xls is turned into a dataframe.</p> <p><img src="https://i.stack.imgur.com/ypwHS.png" alt="Picture of dataframe head bcs I can't add pics yet:)" /></p> <p>The task was to remove the parentheses and numbers attached on the Country name. What I did was, find all country names containing numbers or parentheses, turning it into a list, finding the clean name and replacing them one by one through a loop.</p> <pre><code># Finding all dirty country names dirtyNames = df1[df1['Country'].str.contains('[A-Za-z ][0-9/(/)]')==True] # Changing them to list dirtyNames = dirtyNames['Country'].tolist() for name in dirtyNames: clean = re.split('[0-9/(/)]', name)[0] df1.replace(name,clean, inplace=True) </code></pre> <p>but is there a function for this? I feel like there must be a function for it if I have to make a loop. I tried examples from the internet, fixing my dataset into these,</p> <pre><code>df['first_five_Letter']=df['Country (region)'].str.extract(r'(^w{5})') </code></pre> <p>and other similar methods, but I keep getting the <strong>AttributeError: Can only use .str accessor with string values!</strong> error.</p>
<p>You can use</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np df = pd.DataFrame({'Country':['XXX(12)', 'YYYY5000', '(ZZZ)15', np.nan]}) df.loc[pd.isna(df['Country']), 'Country'] = &quot;&quot; df['Country'] = df['Country'].astype(str).str.replace(r'[0-9()]+', '', regex=True) df.loc[df['Country'] == '', 'Country'] = np.nan </code></pre> <p>Here,</p> <ul> <li><code>df.loc[pd.isna(df['Country']), 'Country'] = &quot;&quot;</code> - converts all <code>NaN</code> values to empty strings</li> <li><code>.astype(str)</code> - converts data to string type</li> <li><code>.str.replace(r'[0-9()]+', '', regex=True)</code> - removes all digits, <code>(</code> and <code>)</code> chars</li> <li><code>df.loc[df['Country'] == '', 'Country'] = np.nan</code> - converts empty strings back to <code>NaN</code>.</li> </ul>
python|regex|pandas|dataframe
0
375,714
50,516,450
mysqlsh in Python mode, cannot import modules
<p>New to Mysqlsh, I'm trying to import the pandas module, but I'm getting:</p> <pre><code> MySQL Py &gt; import pandas as pd Traceback (most recent call last): File "&lt;string&gt;", line 1, in &lt;module&gt; ImportError: No module named pandas Mysqlsh ver. 8.0.11 Python ver. 2.7.15 Updated OSX </code></pre> <p>I can import pandas in regular Python, and I must admit that I know nothing about the dependencies between Python and MySQL Shell.</p>
<p>Your module has to be in one of those paths listed by <code>sys.path</code>.</p> <pre><code> MySQL Py &gt; import sys MySQL Py &gt; sys.path </code></pre> <p>Install <code>pandas</code> globally, or append your <code>pandas</code> install path to <code>sys.path</code> in <code>mysqlsh</code>.</p>
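<p>For example (the path below is only a placeholder for wherever pandas is installed for the Python that <code>mysqlsh</code> embeds):</p> <pre><code> MySQL Py &gt; import sys
 MySQL Py &gt; sys.path.append(&quot;/usr/local/lib/python2.7/site-packages&quot;)
 MySQL Py &gt; import pandas as pd
</code></pre>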
python|pandas|import
0
375,715
50,499,223
Make subplots of the histogram in pandas dataframe using matplotlib library?
<p><strong>I have the following data separate by tab:</strong></p> <pre><code>CHROM ms02g:PI num_Vars_by_PI range_of_PI total_haplotypes total_Vars 1 1,2 60,6 2820,81 2 66 2 9,8,10,7,11 94,78,10,69,25 89910,1102167,600,1621365,636 5 276 3 5,3,4,6 6,12,14,17 908,394,759,115656 4 49 4 17,18,22,16,19,21,20 22,11,3,16,7,12,6 1463,171,149,256,157,388,195 7 77 5 13,15,12,14 56,25,96,107 2600821,858,5666,1792 4 284 7 24,26,29,25,27,23,30,28,31 12,31,19,6,12,23,9,37,25 968,3353,489,116,523,1933,823,2655,331 9 174 8 33,32 53,35 1603,2991338 2 88 </code></pre> <br> <p><strong>I am using this code to build a histogram plots with subplots for each <code>CHROM</code>:</strong></p> <pre><code>with open(outputdir + '/' + 'hap_size_byVar_'+ soi +'_'+ prefix+'.png', 'wb') as fig_initial: fig, ax = plt.subplots(nrows=len(hap_stats), sharex=True) for i, data in hap_stats.iterrows(): # first convert data to list of integers data_i = [int(x) for x in data['num_Vars_by_PI'].split(',')] ax[i].hist(data_i, label=str(data['CHROM']), alpha=0.5) ax[i].legend() plt.xlabel('size of the haplotype (number of variants)') plt.ylabel('frequency of the haplotypes') plt.suptitle('histogram of size of the haplotype (number of variants) \n' 'for each chromosome') plt.savefig(fig_initial) </code></pre> <br> <p><strong>Everything is fine except two problems:</strong></p> <ol> <li>The <strong>Y-label <code>frequency of the haplotypes</code></strong> is not adjusted properly in this output plot.</li> </ol> <p><a href="https://i.stack.imgur.com/BphRR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BphRR.png" alt="enter image description here" /></a></p> <br> <br> <ol start="2"> <li>When the data contain only one row (see data below) the subplot are not possible and I get <strong><code>TypeError</code></strong>, even though it should be able to make the subgroup with only one index.</li> </ol> <p><strong>Dataframe with only one line of data:</strong></p> <pre><code> CHROM ms02g:PI num_Vars_by_PI range_of_PI total_haplotypes total_Vars 2 9,8,10,7,11 94,78,10,69,25 89910,1102167,600,1621365,636 5 276 </code></pre> <p><strong><code>TypeError : </code></strong></p> <pre><code>Traceback (most recent call last): File &quot;phase-Extender.py&quot;, line 1806, in &lt;module&gt; main() File &quot;phase-Extender.py&quot;, line 502, in main compute_haplotype_stats(initial_haplotype, soi, prefix='initial') File &quot;phase-Extender.py&quot;, line 1719, in compute_haplotype_stats ax[i].hist(data_i, label=str(data['CHROM']), alpha=0.5) TypeError: 'AxesSubplot' object does not support indexing </code></pre> <br> <p><strong>How can I fix these two issues ?</strong></p>
<p>Your first problem comes from the fact that you are using <code>plt.ylabel()</code> at the end of your loop. pyplot functions act on the current active axes object, which, in this case, is the last one created by <code>subplots()</code>. If you want your label to be centered over your subplots, the easiest might be to create a text object centered vertically in the figure.</p> <pre><code># replace plt.ylabel('frequency of the haplotypes') with: fig.text(.02, .5, 'frequency of the haplotypes', ha='center', va='center', rotation='vertical') </code></pre> <p>you can play around with the x-position (0.02) until you find a position you're happy with. The coordinates are in figure coordinates, (0,0) is bottom left (1,1) is top right. Using 0.5 as y position ensures the label is centered in the figure.</p> <p>The second problem is due to the fact that, when <code>numrows=1</code> <code>plt.subplots()</code> returns directly the axes object, instead of a list of axes. There are two options to circumvent this problem</p> <p>1 - test whether you have only one line, and then replace <code>ax</code> with a list:</p> <pre><code>fig, ax = plt.subplots(nrows=len(hap_stats), sharex=True) if len(hap_stats)==1: ax = [ax] (...) </code></pre> <p>2 - use the option <code>squeeze=False</code> in your call to <code>plt.subplots()</code>. <a href="https://matplotlib.org/api/_as_gen/matplotlib.pyplot.subplots.html?highlight=subplots#matplotlib.pyplot.subplots" rel="nofollow noreferrer">As explained in the documentation</a>, using this option will force <code>subplots()</code>to always return a <strong>2D</strong> array. Therefore you'll have to modify a bit how you are indexing your axes:</p> <pre><code>fig, ax = plt.subplots(nrows=len(hap_stats), sharex=True, squeeze=False) for i, data in hap_stats.iterrows(): (...) ax[i,0].hist(data_i, label=str(data['CHROM']), alpha=0.5) (...) </code></pre>
python-3.x|pandas|matplotlib|plot|histogram
2
375,716
50,667,036
Dataframe slicing from cell after reading csv
<p>I am reading data from Twitter analytics with CSV and DataFrames. </p> <p>I want to <strong>extract a URL from a certain cell</strong></p> <p>The output of this process is the following</p> <pre><code>tweet number tweet id tweet link tweet text 1 1.0086341313026E+018 "tweet link goes here" tweet text goes here https://example.com" </code></pre> <p>How can I slice this "tweet text" to get the url of it? I cannot slice it using [-1:-12] because there are many tweets with different numbers of characters. </p>
<p>I believe that you want:</p> <pre><code>print (df['tweet text'].str[-12:-1]) 0 example.com Name: tweet text, dtype: object </code></pre> <p>More general solution is with <a href="https://stackoverflow.com/a/37807990/2901002">regex</a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.findall.html" rel="nofollow noreferrer"><code>str.findall</code></a> for list of all links and if necessary select first by indexing with <code>str[0]</code>:</p> <pre><code>pat = r'(?:http|ftp|https)://(?:[\w_-]+(?:(?:\.[\w_-]+)+))(?:[\w.,@?^=%&amp;:/~+#-]*[\w@?^=%&amp;/~+#-])?' print (df['tweet text'].str.findall(pat).str[0]) 0 https://example.com Name: tweet text, dtype: object </code></pre>
python|string|python-3.x|pandas|series
3
375,717
50,449,876
Getting the sum of groupby as a new column with distinct values in Pandas
<p>This is how my data look like:</p> <pre><code>id date rt dnm 101122 2017-01-24 0.0 70 101122 2017-01-08 0.0 49 101122 2017-04-13 0.02976 67 101122 2017-08-03 1.02565 39 101122 2016-12-01 0.0 46 101122 2017-01-25 0.0 69 101122 2017-01-02 0.0 76 101122 2017-07-18 0.02631 38 101122 2016-06-02 0.0 120 221344 2016-10-21 0.00182 176 221344 2016-09-21 0.47732 194 221344 2016-06-23 0.0 169 221344 2017-10-10 0.91391 151 221344 2017-04-29 0.0 33 221344 2017-02-05 0.0 31 221344 2017-10-16 0.0 196 221344 2016-09-25 0.0 33 221344 2016-07-17 0.0 21 221344 2016-07-21 0.0 46 615695 2017-07-12 0.0 21 615695 2017-07-05 0.0 18 615695 2016-07-11 0.0 38 615695 2016-07-19 0.03655 29 615695 2017-05-27 0.0 23 615695 2017-12-22 0.0 20 615695 2017-04-25 0.0 34 615695 2017-03-23 0.0 20 615695 2016-09-23 0.0 25 615695 2016-06-18 0.0 25 </code></pre> <p>I'm trying to get the sum of 'dmn' column for each 'id' and give this new column a name like 'sum_values'. After that I need to get the id's that have the 'sum_values' higher than 300. The following code generates the first part:</p> <pre><code>data = pd.read_csv(file_name, sep='\t', header=0, parse_dates=[1], infer_datetime_format=True); test = (data.assign(sum_values = data.groupby('id')['dnm'].transform(np.sum)) .query('sum_values &gt; 300')) </code></pre> <p>This will add a new column named 'sum_values' and repeat the sum value for each id several times. I need to get a unique value of 'id' and 'sum_values' column. But I can't figure out how/where to add the nunique().</p> <p>This is the desired outcome:</p> <pre><code>id sum_values(&gt;300) 101122 574 221344 1050 </code></pre> <p>Any ideas? </p>
<p><strong><code>groupby</code></strong> with <strong><code>sum</code></strong></p> <pre><code>d = df.groupby('id')['dnm'].sum() </code></pre> <p><strong><code>indexing</code></strong></p> <pre><code>d[d &gt; 500] id 101122 574 221344 1050 Name: dnm, dtype: int64 </code></pre> <p>If you want the column name in the output, just use <code>d[d &gt; 500].reset_index()</code></p>
python|pandas|dataframe
5
375,718
50,471,122
linalg.matrix_power(A,n) for a huge $n$ and a huge $A$
<p>I'm trying to use linalg to find $P^{500}$ where $P$ is a 9x9 matrix, but Python displays the following: <a href="https://i.stack.imgur.com/ll47k.png" rel="nofollow noreferrer">Matrix full of inf</a></p> <p>I think this is too much for this method, so my question is: is there another library to find $P^{500}$? Must I surrender? Thank you all in advance.</p>
<p>Use the eigendecomposition and then exponentiate the matrix of eigenvalues, like this. You still end up getting an inf in the first column; I believe this will happen unless you control the matrix through its eigenvalues. In other words, your eigenvalues have to be bounded. You can generate a random matrix with the <a href="https://en.wikipedia.org/wiki/Schur_decomposition" rel="nofollow noreferrer">Schur decomposition</a>, putting the eigenvalues along the diagonal. This is a post I have about generating a matrix with given <a href="https://linearalgebra.quora.com/Generating-a-matrix-with-integer-eigenvalues" rel="nofollow noreferrer">eigenvalues.</a> This should be the way that method works anyways. </p> <pre><code> % Generate random 9x9 matrix n=9; A = randn(n); [V,D] = eig(A); p = 500; Dp = D^p; Ap = V*Dp*V^(-1); Ap1 = mpower(A,p); </code></pre>
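<p>Since the question is tagged numpy, a rough Python equivalent of the same idea (eigendecomposition, then powering the eigenvalues) could look like this; it assumes <code>P</code> is diagonalizable, and it only stays finite when the eigenvalue magnitudes are bounded by 1 (e.g. a stochastic matrix):</p> <pre><code>import numpy as np

w, V = np.linalg.eig(P)                      # P assumed to be your 9x9 matrix
Pp = V @ np.diag(w**500) @ np.linalg.inv(V)
Pp = Pp.real                                 # drop tiny imaginary round-off if P is real
</code></pre>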
numpy|linear-algebra|matrix-multiplication
1
375,719
50,552,756
From pandas dataframe, how to find the number of duplicate comments for each user?
<p>I have dataframe with list of usernames and their comments, see format below.</p> <p>What would be the quickest and most efficient approach to find repetitive duplicate comments (spam) for each user? </p> <p>Dataframe format: </p> <pre><code>Author | Comment casy Nice picture! linda I like this casy Nice picture! tom I disagree bob Follow me bob Follow me bob Follow me bob Follow me casy Nice picture! casy Wow! linda Interesting post linda Check my profile bob Dissapointing casy Wow! </code></pre> <p>I want to get the result in the following format, so the resulting table would be: </p> <pre><code>Author | Number of dup. comments (descending) | Comment bob 4 Follow me casy 3 Nice picture casy 2 Wow! bob 1 Dissapointing linda 1 I like this linda 1 Check my profile linda 1 Interesting post tom 1 I disagree </code></pre>
<p>Use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.size.html" rel="nofollow noreferrer"><code>size</code></a> first, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html" rel="nofollow noreferrer"><code>sort_values</code></a>, create columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> and last if necessary change order of columns by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html" rel="nofollow noreferrer"><code>reindex</code></a>:</p> <pre><code>df = (df.groupby(['Author', 'Comment'], sort=False).size() .sort_values(ascending=False) .reset_index(name='Number') .reindex(columns=['Author','Number','Comment'])) print (df) Author Number Comment 0 bob 4 Follow me 1 casy 3 Nice picture! 2 casy 2 Wow! 3 bob 1 Dissapointing 4 linda 1 Check my profile 5 linda 1 Interesting post 6 tom 1 I disagree 7 linda 1 I like this </code></pre>
python|pandas|dataframe
4
375,720
50,251,750
How to normalize a seaborn countplot with multiple categorical variables
<p>I have created a seaborn <code>countplot</code> for multiple categorical variables of a dataframe, but instead of counts I want to have percentages.</p> <p>What is the best option to use? Barplots? Can I use a query like the below one to get the barplots at once?</p> <pre><code>for i, col in enumerate(df_categorical.columns): plt.figure(i) sns.countplot(x=col,hue='Response',data=df_categorical) </code></pre> <p>This query gives me the <code>countplot</code> for all variables at once.</p> <p>Thanks!</p> <p>Data looks like this:</p> <pre><code> State Response Coverage Education Effective To Date EmploymentStatus Gender Location Code Marital Status Policy Type Policy Renew Offer Type Sales Channel Vehicle Class Vehicle Size 0 Washington No Basic Bachelor 2/24/11 Employed F Suburban Married Corporate Auto Corporate L3 Offer1 Agent Two-Door Car Medsize 1 Arizona No Extended Bachelor 1/31/11 Unemployed F Suburban Single Personal Auto Personal L3 Offer3 Agent Four-Door Car Medsize 2 Nevada No Premium Bachelor 2/19/11 Employed F Suburban Married Personal Auto Personal L3 Offer1 Agent Two-Door Car Medsize 3 California No Basic Bachelor 1/20/11 Unemployed M Suburban Married Corporate Auto Corporate L2 Offer1 Call Center SUV Medsize 4 Washington No Basic Bachelor 2/3/11 Employed M Rural Single Personal Auto Personal L1 Offer1 Agent Four-Door Car Medsize </code></pre>
<p>Consider a <code>groupby.transform</code> to calculate percentage column, then run <code>barplot</code> with <em>x</em> for original value column and <em>y</em> for percent column.</p> <p><strong>Data</strong> <em>(only converted two No to Yes responses to original posted data)</em></p> <pre><code>from io import StringIO import pandas as pd import seaborn as sns import matplotlib.pyplot as plt txt = ''' State Response Coverage Education "Effective To Date" EmploymentStatus Gender "Location Code" "Marital Status" "Policy Type" Policy "Renew Offer Type" "Sales Channel" "Vehicle Class" "Vehicle Size" 0 Washington No Basic Bachelor "2/24/11" Employed F Suburban Married "Corporate Auto" "Corporate L3" Offer1 Agent "Two-Door Car" Medsize 1 Arizona No Extended Bachelor "1/31/11" Unemployed F Suburban Single "Personal Auto" "Personal L3" Offer3 Agent "Four-Door Car" Medsize 2 Nevada Yes Premium Bachelor "2/19/11" Employed F Suburban Married "Personal Auto" "Personal L3" Offer1 Agent "Two-Door Car" Medsize 3 California No Basic Bachelor "1/20/11" Unemployed M Suburban Married "Corporate Auto" "Corporate L2" Offer1 "Call Center" SUV Medsize 4 Washington Yes Basic Bachelor "2/3/11" Employed M Rural Single "Personal Auto" "Personal L1" Offer1 Agent "Four-Door Car" Medsize''' df_categorical = pd.read_table(StringIO(txt), sep="\s+") </code></pre> <p><strong>Plot</strong> <em>(single figure of multiple plots across two columns)</em></p> <pre><code>fig = plt.figure(figsize=(10,30)) for i, col in enumerate(df_categorical.columns): # PERCENT COLUMN CALCULATION df_categorical[col+'_pct'] = df_categorical.groupby(['Response', col])[col]\ .transform(lambda x: len(x)) / len(df_categorical) plt.subplot(8, 2, i+1) sns.barplot(x=col, y=col+'_pct', hue='Response', data=df_categorical)\ .set(xlabel=col, ylabel='Percent') plt.tight_layout() plt.show() plt.clf() plt.close('all') </code></pre> <p><a href="https://i.stack.imgur.com/munAM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/munAM.png" alt="Plot Output"></a></p>
python|pandas|matplotlib|seaborn
0
375,721
50,366,851
Postpone function execution syntactically
<p>I've got a quite extensive simulation tool written in <code>python</code>, which requires the user to call functions to set up the environment in a strict order since <code>np.ndarrays</code> are at first created (and changed by appending etc.) and afterwards memory views to specific cells of these arrays are defined.<br> Currently each part of the environment requires around 4 different function calls to be set up, with easily <code>&gt;&gt; 100</code> parts.<br> Thus I need to combine each part's function calls by <strong>syntactically</strong> (not based on timers) postponing the execution of some functions until all preceding functions have been executed, while still maintaining the strict order to be able to use memory views.</p> <p>Futhermore all functions to be called by the user use <a href="https://www.python.org/dev/peps/pep-3102/" rel="nofollow noreferrer">PEP 3102</a> style keyword-only arguments to reduce the probability of input-errors <strong>and</strong> all are <strong>instance methods</strong> with <code>self</code> as the first parameter, with <code>self</code> containing the references to the arrays to construct the memory views to.</p> <p>My current implementation is using <code>list</code>s to store the functions and the <code>dict</code> for each function's keyworded arguments. This is shown here, omitting the class and self parameters to make it short:</p> <pre><code>def fun1(*, x, y): # easy minimal example function 1 print(x * y) def fun2(*, x, y, z): # easy minimal example function 2 print((x + y) / z) fun_list = [] # list to store the functions and kwargs fun_list.append([fun1, {'x': 3.4, 'y': 7.0}]) # add functions and kwargs fun_list.append([fun2, {'x':1., 'y':12.8, 'z': np.pi}]) fun_list.append([fun2, {'x':0.3, 'y':2.4, 'z': 1.}]) for fun in fun_list: fun[0](**fun[1]) </code></pre> <p>What I'd like to implement is using a <code>decorator</code> to postpone the function execution by adding a <code>generator</code>, to be able to pass all arguments to the functions as they are called, but not execute them, as shown below:</p> <pre><code>def postpone(myfun): # define generator decorator def inner_fun(*args, **kwargs): yield myfun(*args, **kwargs) return inner_fun fun_list_dec = [] # list to store the decorated functions fun_list_dec.append(postpone(fun1)(x=3.4, y=7.0)) # add decorated functions fun_list_dec.append(postpone(fun2)(x=1., y=12.8, z=np.pi)) fun_list_dec.append(postpone(fun2)(x=0.3, y=2.4, z=1.)) for fun in fun_list_dec: # execute functions next(fun) </code></pre> <p>Which is the best (most pythonic) method to do so? Are there any drawbacks?<br> And most important: Will my references to <code>np.ndarrays</code> passed to the functions within <code>self</code> still be a reference, so that the memory addresses of these arrays are still correct when executing the functions, <strong>if</strong> the memory addresses change <strong>in between</strong> saving the function calls to a list (or being decorated) and executing them?<br> Execution speed does not matter here.</p>
<p>Using a generators here doesn't make much sense. You are essentially simulating partial-application. Therefore, this seems like a use-case for <code>functools.partial</code>. Since you are sticking with key-word only arguments, this will work just fine:</p> <pre><code>In [1]: def fun1(*, x, y): # easy minimal example function 1 ...: print(x * y) ...: def fun2(*, x, y, z): # easy minimal example function 2 ...: print((x + y) / z) ...: In [2]: from functools import partial In [3]: fun_list = [] In [4]: fun_list.append(partial(fun1, x=3.4, y=7.0)) In [5]: fun_list.append(partial(fun2, x=1., y=12.8, z=3.14)) In [6]: fun_list.append(partial(fun2, x=0.3, y=2.4,z=1.)) In [7]: for f in fun_list: ...: f() ...: 23.8 4.3949044585987265 2.6999999999999997 </code></pre> <p>You don't <em>have</em> to use <code>functools.partial</code> either, you can do your partial application "manually", just to demonstrate:</p> <pre><code>In [8]: fun_list.append(lambda:fun1(x=5.4, y=8.7)) In [9]: fun_list[-1]() 46.98 </code></pre>
python|numpy|generator|decorator
3
375,722
50,660,820
Transforming dates in chronological order using pandas dataframe
<p>I need help with comparing dates in different rows and in different columns and making sure that they follow a chronological order.</p> <p>First, I group data based on <strong>Id</strong> and <strong>group</strong> columns. Next, each date value is supposed to occur in the future. </p> <p>The first group [1111 + A] contains an error because the dates don't follow a chronological order:</p> <pre><code>1/1/2016 &gt; 2/20/2016 &gt; **2/19/2016** &gt; 4/25/2016 &gt; **4/1/2016** &gt; 5/1/2016 </code></pre> <p><strong>Current result</strong></p> <pre><code> id start end group 0 1111 01/01/2016 02/20/2016 A 1 1111 02/19/2016 04/25/2016 A 2 1111 04/01/2016 05/01/2016 A 3 2345 05/01/2016 05/28/2016 B 4 2345 05/29/2016 06/28/2016 B 5 1234 08/01/2016 09/16/2016 F 6 9882 01/01/2016 08/29/2016 D 7 9992 03/01/2016 03/15/2016 C 8 9992 03/16/2016 08/03/2016 C 9 9992 05/16/2016 09/16/2016 C 10 9992 09/17/2016 10/16/2016 C 11 9992 10/17/2016 12/13/2016 C </code></pre> <p>The answer should be:</p> <pre><code>1/1/2016 &gt; 2/20/2016 &gt; **2/21/2016** &gt; 4/25/2016 &gt; **4/26/2016** &gt; 5/1/2016 </code></pre> <p><strong>Desired output</strong></p> <pre><code> id start end group 0 1111 01/01/2016 02/20/2016 A 1 1111 02/21/2016 04/25/2016 A 2 1111 04/26/2016 05/01/2016 A 3 2345 05/01/2016 05/28/2016 B 4 2345 05/29/2016 06/28/2016 B 5 1234 08/01/2016 09/16/2016 F 6 9882 01/01/2016 08/29/2016 D 7 9992 03/01/2016 03/15/2016 C 8 9992 03/16/2016 08/03/2016 C 9 9992 08/04/2016 09/16/2016 C 10 9992 09/17/2016 10/16/2016 C 11 9992 10/17/2016 12/13/2016 C </code></pre> <p>Any help will be greatly appreciated. </p>
<p>One way is to apply your logic to each group, then concatenate your groups.</p> <pre><code># convert series to datetime df['start'] = pd.to_datetime(df['start']) df['end'] = pd.to_datetime(df['end']) # iterate groups and add results to grps list grps = [] for _, group in df.groupby(['id', 'group'], sort=False): end_shift = group['end'].shift() group.loc[group['start'] &lt;= end_shift, 'start'] = end_shift + pd.DateOffset(1) grps.append(group) # concatenate dataframes in grps to build a single dataframe res = pd.concat(grps, ignore_index=True) print(res) id start end group 0 1111 2016-01-01 2016-02-20 A 1 1111 2016-02-21 2016-04-25 A 2 1111 2016-04-26 2016-05-01 A 3 2345 2016-05-01 2016-05-28 B 4 2345 2016-05-29 2016-06-28 B 5 1234 2016-08-01 2016-09-16 F 6 9882 2016-01-01 2016-08-29 D 7 9992 2016-03-01 2016-03-15 C 8 9992 2016-03-16 2016-08-03 C 9 9992 2016-08-04 2016-09-16 C 10 9992 2016-09-17 2016-10-16 C 11 9992 2016-10-17 2016-12-13 C </code></pre>
python|pandas|datetime|dataframe|pandas-groupby
2
375,723
50,287,517
Distributed tensorflow source code
<p>I wanted to check the source code of the distributed training feature of tensorflow and its overall structure: worker-PS relations, etc. However, I am lost in tensorflow's repository. Can someone guide me through the repository and point me to the source code I am looking for?</p>
<p>Unfortunately, not all tensorflow code (especially the part related to distributed computation) is open source. To quote Aurélien Géron from <a href="http://shop.oreilly.com/product/0636920052289.do" rel="nofollow noreferrer">Hands-On Machine Learning with Scikit-Learn and TensorFlow</a>:</p> <blockquote> <p>The TensorFlow <a href="http://download.tensorflow.org/paper/whitepaper2015.pdf" rel="nofollow noreferrer">whitepaper</a> presents a friendly dynamic placer algorithm that auto-magically distributes operations across all available devices, taking into account things like the measured computation time in previous runs of the graph, estimations of the size of the input and output tensors to each operation, the amount of RAM available in each device, communication delay when transferring data in and out of devices, hints and constraints from the user, and more. Unfortunately, this sophisticated algorithm is internal to Google; it <strong>was not released</strong> in the open source version of TensorFlow.</p> </blockquote> <p>But here are the main entry points of <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/docs_src/deploy/distributed.md" rel="nofollow noreferrer">TF distributed</a> in the public repo:</p> <ul> <li><code>Cluster</code> in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/grappler/cluster.py" rel="nofollow noreferrer"><code>tensorflow/python/grappler/cluster.py</code></a></li> <li><code>Server</code> and <code>ClusterSpec</code> in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/training/server_lib.py" rel="nofollow noreferrer"><code>tensorflow/python/training/server_lib.py</code></a></li> <li><code>worker_service.proto</code> in <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/protobuf/worker_service.proto" rel="nofollow noreferrer"><code>tensorflow/core/protobuf/worker_service.proto</code></a></li> </ul> <p>To dive deep you'll need to enter native C++ code in <a href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/core/distributed_runtime" rel="nofollow noreferrer"><code>tensorflow/core/distributed_runtime</code></a> package, e.g., here's <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/distributed_runtime/rpc/grpc_tensorflow_server.cc" rel="nofollow noreferrer">gRPC server implementation</a>.</p>
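<p>To experiment with the public distributed runtime yourself, here is a minimal sketch using the TF 1.x API (the host:port addresses are placeholders; every process in the cluster runs this with its own <code>job_name</code>/<code>task_index</code>, and a single process on its own will block waiting for its peers):</p> <pre><code>import tensorflow as tf

# placeholder addresses, replace with your own hosts
cluster = tf.train.ClusterSpec({
    "ps":     ["localhost:2222"],
    "worker": ["localhost:2223", "localhost:2224"],
})

# each process starts one server for its (job_name, task_index)
server = tf.train.Server(cluster, job_name="worker", task_index=0)

# pin the variable to the parameter server, the op to this worker
with tf.device("/job:ps/task:0"):
    v = tf.Variable(0, name="counter")
with tf.device("/job:worker/task:0"):
    increment = tf.assign_add(v, 1)

with tf.Session(server.target) as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(increment))
</code></pre>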
github|tensorflow|distributed-computing|distributed
1
375,724
50,632,993
trouble converting object to float
<p>I have an object that I would like to convert to a currency format:</p> <pre><code>df_final.sum_funded.head() 0 472161.07 1 719768.97 2 23148.11 3 1215078.15 4 0 Name: sum_funded, dtype: object </code></pre> <p>I've tried numerous iterations, including:</p> <pre><code>"${:,.0f}".format(df_final.sum_funded.astype(float) ) </code></pre> <p>which produces the error:</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-285-dd77177c4126&gt; in &lt;module&gt;() ----&gt; 1 "${:,.0f}".format(df_final.sum_funded.astype(float) ) 2 3 4 ValueError: Unknown format code 'f' for object of type 'str' </code></pre> <p>Why is it converting it to <code>str</code> when I'm doing an explicit float conversion?</p>
<p>You have to use the pandas Series's <code>map</code> function to apply the formatter on <strong>each</strong> element, and convert to float first in case the underlying values are still strings (which is what the <code>Unknown format code 'f' ... 'str'</code> error points to):</p> <pre><code>df_final.sum_funded.astype(float).map("${:,.0f}".format) </code></pre>
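<p>As a more defensive sketch, if some entries might not parse as numbers, you can coerce them to <code>NaN</code> first (note that <code>NaN</code> values will render as <code>$nan</code>, so drop or fill them beforehand if that matters):</p> <pre><code>import pandas as pd

vals = pd.to_numeric(df_final.sum_funded, errors='coerce')  # unparseable entries become NaN
formatted = vals.map("${:,.0f}".format)
</code></pre>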
python|pandas|numpy
2
375,725
50,265,938
Select rows for a specific month in Pandas
<p>I have a dataframe with 12 hourly data for over 10 years. All data are stored date-wise. I would like to extract the rows containing the data from a specific month (note that the month is not in the standard 1, 2, 3 format). The rows of the 'date' column which I have look like this:</p> <pre><code>01-May-07 02-May-07 . . . 31-Oct-17 </code></pre> <p>How do I select only the rows which contain data only for May, Jun etc.?</p> <p>Initially I thought that I could extract using <code>df[df['DATE'].str.contains('May')]</code>. But it did not work as expected, the output being the same as the input.</p> <p>Edit 1</p> <pre><code>DATE TIME MOONPH SPEED GUST CLOUD AMOUNT DRY WET DEW RH 01-May-07 230 NM7 6 0 4 27.4 25.4 25.4 86 01-May-07 330 NM7 4 0 4 27.4 25.4 25.4 86 01-May-07 430 NM7 3 0 4 27.4 25.4 25.4 86 01-May-07 530 NM7 2 0 4 27.4 25.4 25.4 89 01-May-07 630 NM7 3 0 5 27.4 26 25.4 85 01-May-07 700 NM7 0 0 4 27.8 26 25.4 81 01-May-07 730 NM7 0 0 4 27.8 26 25.4 81 01-May-07 800 NM7 2 0 4 27.8 26 25.4 81 01-May-07 830 NM7 5 0 4 29.2 26 24.6 76 01-May-07 900 NM7 5 0 4 29.2 26 24.6 76 01-May-07 930 NM7 5 0 2 29.8 26 24.6 76 01-May-07 1000 NM7 5 0 4 30.8 26 24.6 76 01-May-07 1030 NM7 5 0 4 30.8 26 24.6 76 01-May-07 1100 NM7 6 0 4 31.4 26 24.6 68 . . . 01-May-17 1630 NM7 8 0 5 32.6 27.4 25.6 68 01-May-17 1930 NM7 8 0 5 32 27.4 25.6 69 01-May-17 430 NM7 0 0 5 27.2 25 24 83 01-May-17 30 NM7 0 0 5 29.6 27.2 26.2 82 01-May-17 530 NM7 0 0 5 26.6 24.4 23.4 83 01-May-17 130 NM7 0 0 5 28 25.6 24.6 82 01-May-17 630 NM7 0 0 5 26.8 24.4 23.3 81 01-May-17 730 NM7 0 0 5 27.2 24.4 23.4 80 01-May-17 330 NM7 0 0 5 27.2 25 24 83 01-May-17 1230 NM7 10 0 5 32.8 28.2 25.2 64 01-May-17 2330 NM7 4 0 4 30 26.4 24.9 75 01-May-17 2230 NM7 5 0 4 30 26.8 25.5 77 01-May-17 2130 NM7 4 0 4 30 26.8 25.5 77 01-May-17 830 NM7 2 0 5 27.2 24.4 23.4 78 01-May-17 930 NM7 3 0 5 31.2 27.2 25.6 72 01-May-17 1830 NM7 8 0 5 32 27.4 25.6 69 01-May-17 1130 NM7 6 0 5 32.8 28.2 25.2 64 01-May-17 2030 NM7 6 0 4 32 26.8 25.4 76 01-May-17 1330 NM7 10 0 5 33 27.6 25.4 64 01-May-17 1430 NM7 10 0 5 33 27.6 25.2 65 </code></pre>
<p>I think you need to convert with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.to_datetime.html" rel="noreferrer"><code>to_datetime</code></a> and then compare with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.month.html" rel="noreferrer"><code>month</code></a>, or use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.dt.strftime.html" rel="noreferrer"><code>strftime</code></a> with <code>%B</code> for <a href="http://strftime.org/" rel="noreferrer"><code>month names</code></a>:</p> <pre><code>df = pd.DataFrame({'DATE': ['01-May-07', '02-May-07', '31-Oct-17']}) print (df) DATE 0 01-May-07 1 02-May-07 2 31-Oct-17 </code></pre> <hr> <pre><code>df = df[pd.to_datetime(df['DATE']).dt.month == 5] </code></pre> <pre><code>df = df[pd.to_datetime(df['DATE']).dt.strftime('%B') == 'May'] </code></pre> <hr> <pre><code>print (df) DATE 0 01-May-07 1 02-May-07 </code></pre> <p>If you need to work with datetimes later:</p> <pre><code>df['DATE'] = pd.to_datetime(df['DATE']) df = df[df['DATE'].dt.month == 5] #df = df[df['DATE'].dt.strftime('%B') == 'May'] print (df) DATE 0 2007-05-01 1 2007-05-02 </code></pre> <p>EDIT:</p> <p>If you don't need to work with <code>datetimes</code>, then with your data your approach works for me:</p> <pre><code>df = df[df['DATE'].str.contains('May')] </code></pre>
python|pandas
12
375,726
50,308,569
Switch fasta seq depending on a dataframe
<p>I actually have 2 fasta files named: </p> <pre><code>result1_aa.fasta result2_aa.fasta </code></pre> <p>In these files I have my sequences such as:</p> <p>file <code>result1_aa.fasta</code>:</p> <pre><code>&gt;gene1_B ATTGGACCA &gt;gene2_A ATTAGGAC &gt;gene90_B ATTAGCCACA &gt;gene65_B ATTGAG </code></pre> <p>file <code>result2_aa.fasta</code>:</p> <pre><code>&gt;gene78_A ATTGGACCA &gt;gene45_B ATTAGGAC &gt;gene93_B ATTAGCCACA &gt;gene54_A ATTGACA </code></pre> <p>and I have a dataframe such as: </p> <pre><code>geneA geneB gene78_A gene1_B gene2_A gene45_B gene90_A gene93_B gene54_A gene65_B </code></pre> <p>They are actually in order (see the <code>_number</code>). And what I would like is to get 2 new fasta files with the same order as the dataframe above; here it would be: </p> <p>file <code>result1_aa_new.fasta:</code></p> <pre><code>&gt;gene78_A ATTGGACCA &gt;gene2_A ATTAGGAC &gt;gene90_A ATTAGCCACA &gt;gene54_A ATTGACA </code></pre> <p>file <code>result2_new_aa.fasta:</code></p> <pre><code>&gt;gene1_B ATTGGACCA &gt;gene45_B ATTAGGAC &gt;gene93_B ATTAGCCACA &gt;gene65_B ATTGAG </code></pre> <p>I tried some solutions but I cannot manage to keep the same order in my fasta files as in the dataframe...</p> <p>With Ami's solution:</p> <pre><code> from Bio import SeqIO import sys from Bio.SeqRecord import SeqRecord import pandas as pd seq_0042_aa=open("seq_0042_aa.fasta","w") seq_0042_dna=open("seq_0042_dna.fasta","w") seq_0035_aa=open("seq_0035_aa.fasta","w") seq_0035_dna=open("seq_0035_dna.fasta","w") dN_dS_sorted=pd.read_table("dn_ds.out_sorted",sep='\t') seq1_id=dN_dS_sorted["seq1_id"] #first row seq2_id=dN_dS_sorted["seq2_id"] #second row from Bio import SeqIO results1 = list(SeqIO.parse("result1_aa.fasta", "fasta")) results1 = pd.DataFrame({'f_id': [r.id for r in results1], 'f_seq': results1}) results1 = pd.merge(dN_dS_sorted, results1, left_on="seq1_id", right_on='f_id', how='left').dropna() results1 = list(results1.f_seq.values) with open("out.fasta", "w") as output_handle: SeqIO.write(results1, output_handle, "fasta") results2 = list(SeqIO.parse("result2_aa.fasta", "fasta")) results2 = pd.DataFrame({'f_id': [r.id for r in results2], 'f_seq': results2}) results2 = pd.merge(dN_dS_sorted, results2, left_on="seq2_id", right_on='f_id', how='left').dropna() results2 = list(results2.f_seq.values) with open("out2.fasta", "w") as output_handle: SeqIO.write(results2, output_handle, "fasta") </code></pre> <p>Here is the head of my dataframe: </p> <pre><code>seq1_id seq2_id dN dS g66097.t1_0035_0035 g13600.t1_0042_0042 0.10455938989199982 0.3122332927029104 g45594.t1_0035_0035 g1464.t1_0042_0042 0.5208761055250978 5.430485421797574 g50055.t1_0035_0035 g34744.t1_0042_0035 0.08040473491714645 0.4233916132491867 g34020.t1_0035_0035 g12096.t1_0042_0042 0.4385191689737516 26.834927363887587 g28436.t1_0035_0042 g35222.t1_0042_0035 0.055299811368483165 0.1181241496387666 </code></pre> <p>Then, in the output I should get: </p> <p>output1:</p> <pre><code>&gt;g66097.t1_0035_0035 ATTGGAGATA &gt;g45594.t1_0035_0035 TAGGAGGAGA &gt;g34020.t1_0035_0035 ATGGGAT &gt;g28436.t1_0035_0042 ATTGGAGA </code></pre> <p>and output2:</p> <pre><code>&gt;g13600.t1_0042_0042 ATGGGAGAGA &gt;g1464.t1_0042_0042 ATGGAGGAGA &gt;g12096.t1_0042_0042 ATGGAGGAA &gt;g35222.t1_0042_0035 ATGGAGAG </code></pre> <p>but I actually get: output1:</p> <pre><code>&gt;g28436.t1_0035_0042 ATGAGAGAGA &gt;g1005.t1_0035_0035 ATAGGAGATA &gt;g28456.t1_0035_0035 ATGGAGATA &gt;g30148.t1_0035_0042 ATGGAGA </code></pre> <p>and output2:</p>
<pre><code>&gt;g35222.t1_0042_0035 ATAGGAGA &gt;g11524.t1_0042_0042 ATAGGAGA &gt;g31669.t1_0042_0035 ATGAGAGA &gt;g37790.t1_0042_0035 ATGAGGAGA </code></pre> <p>Here is the head of the fastafile1:</p> <pre><code>&gt;g13600.t1_0042_0042 AGATAGAGA &gt;g1464.t1_0042_0042 AGATTAGA &gt;g34744.t1_0042_0035 ATAGAGGA &gt;g12096.t1_0042_0042 AGATATGA </code></pre> <p>Here is the head of the fastafile2:</p> <pre><code>&gt;g66097.t1_0035_0035 AGATTAGAGA &gt;g45594.t1_0035_0035 AGTATAGAGA &gt;g50055.t1_0035_0035 ATAGGAGAGA &gt;g34020.t1_0035_0035 ATAGGAGAG </code></pre>
<p>Let's do the first file. Again using <a href="http://biopython.org/wiki/SeqIO" rel="nofollow noreferrer">BioPython</a>, </p> <pre><code>from Bio import SeqIO results1 = list(SeqIO.parse("result1_aa.fasta", "fasta")) results1 = pd.DataFrame({'f_id': [r.id for r in results1], 'f_seq': results1}) </code></pre> <p>Now merge them:</p> <pre><code>results1 = pd.merge(df, results1, left_on='results_on', right_on='f_id', how='left').dropna() </code></pre> <p>(this assumes the column name in <code>df</code> is <code>results_on</code> - you didn't specify it; the right-hand key must be <code>f_id</code>, the column built above).</p> <p>Now get the sorted records:</p> <pre><code>results1 = list(results1.f_seq.values) </code></pre> <p>Write it out:</p> <pre><code>with open("out.fasta", "w") as output_handle: SeqIO.write(results1, output_handle, "fasta") </code></pre>
python-3.x|pandas|parsing|fasta
2
375,727
50,237,486
tf.data.Iterator.get_next(): How to advance in tf.while_loop?
<p>Currently I try to implement all training in a Tensorflow while loop, but I've got problems with the Tensorflow dataset API's Iterator.</p> <p>Usually, when calling sess.run(), Iterator.get_next() advances to the next element. However, I need to advance to the next element INSIDE one run. How do I do this?</p> <p>The following small example shows my problem:</p> <pre><code>import tensorflow as tf import numpy as np def for_loop(condition, modifier, body_op, idx=0): idx = tf.convert_to_tensor(idx) def body(i): with tf.control_dependencies([body_op(i)]): return [modifier(i)] # do the loop: loop = tf.while_loop(condition, body, [idx]) return loop x = np.arange(10) data = tf.data.Dataset.from_tensor_slices(x) data = data.repeat() iterator = data.make_initializable_iterator() smpl = iterator.get_next() loop = for_loop( condition=lambda i: tf.less(i, 5), modifier=lambda i: tf.add(i, 1), body_op=lambda i: tf.Print(smpl, [smpl], message="This is sample: ") ) sess = tf.InteractiveSession() sess.run(iterator.initializer) sess.run(loop) </code></pre> <p>Output:</p> <pre><code>This is sample: [0] This is sample: [0] This is sample: [0] This is sample: [0] This is sample: [0] </code></pre> <p>I always get exactly the same element.</p>
<p>You need to call <code>iterator.get_next()</code> every time you want to "iterate inside one run".</p> <p>For instance in your toy example, just replace your <code>body_op</code> with:</p> <pre class="lang-python prettyprint-override"><code> body_op=lambda i: tf.Print(i, [iterator.get_next()], message="This is sample: ") # This is sample: [0] # This is sample: [1] # This is sample: [2] # This is sample: [3] # This is sample: [4] </code></pre>
python|python-3.x|tensorflow
2
375,728
50,547,249
Tensorflow tutorial on MNIST
<p><a href="https://www.tensorflow.org/tutorials/layers" rel="nofollow noreferrer">This</a> Tensorflow tutorial loads an already existing dataset (MNIST) into the code. Instead of that I want to insert my own training and testing images.</p> <pre><code>def main(unused_argv): # Load training and eval data mnist = tf.contrib.learn.datasets.load_dataset(&quot;mnist&quot;) train_data = mnist.train.images # Returns np.array train_labels = np.asarray(mnist.train.labels, dtype=np.int32) eval_data = mnist.test.images # Returns np.array eval_labels = np.asarray(mnist.test.labels, dtype=np.int32) </code></pre> <p>It says it returns an np array of raw pixel values.</p> <p>My question:</p> <p><strong>1. How do I create such a numpy array for my own image set?</strong> I want to do this so I can directly substitute my numpy array instead of this MNIST data in the sample code and train the model on my data (0-9 and A-Z).</p> <p><strong>EDIT:</strong> On further analysis, I've realized that the pixel values in <code>mnist.train.images</code> and <code>mnist.test.images</code> have been normalized between 0 to 1 from 0 to 255 ( I suppose) How does this normalization help?</p> <p>Folder structure: Training and testing folder are in the same folder</p> <pre><code>Training folder: --&gt; 0 --&gt;Image_Of_0.png --&gt; 1 --&gt;Image_Of_1.png . . . --&gt; Z --&gt;Image_Of_Z.png Testing folder: --&gt; 0 --&gt;Image_Of_0.png --&gt; 1 --&gt;Image_Of_1.png . . . --&gt; Z --&gt;Image_Of_Z.png </code></pre> <p>Code I wrote:</p> <pre><code>Names = [['C:\\Users\\xx\\Project\\training-images', 'train',9490], ['C:\\Users\\xx\\Project\\test-images', 'test',3175]] #9490 is the number of training files in total (All the PNGs) #3175 is the number of testing files in total (All the PNGs) for name in Names: FileList = [] for dirname in os.listdir(name[0]): path = os.path.join(name[0], dirname) for filename in os.listdir(path): if filename.endswith(&quot;.png&quot;): FileList.append(os.path.join(name[0], dirname, filename)) print(FileList) ## Creates list of all PNG files in training and testing folder x_data = np.array([np.array(cv2.imread(filename)) for filename in FileList]) pixels = x_data.flatten().reshape(name[2], 2352) #2352 = 28 * 28 * 3 image print(pixels) </code></pre> <p>Can the pixels array created be supplied as the training and testing data i.e would it have the same format as the data being supplied in the sample code?</p> <p><strong>2. Similarly what numpy array must be created for all the labels? (Folder names)</strong></p>
<p><em>1. How do I create such a numpy array for my own image set?</em></p> <p>TensorFlow accepts data in multiple ways (tf.data, feed_dict, QueueRunner). What you should be using is TFRecord, which is accessible via the tf.data API. It is also the <a href="https://www.tensorflow.org/api_guides/python/reading_data" rel="nofollow noreferrer">recommended format</a>. Let's say you have a folder containing images and you want to convert it to a tfrecord file.</p> <pre><code>import tensorflow as tf import numpy as np import glob from PIL import Image # Converting the values into features # _int64 is used for numeric values def _int64_feature(value): return tf.train.Feature(int64_list=tf.train.Int64List(value=[value])) # _bytes is used for string/char values def _bytes_feature(value): return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value])) tfrecord_filename = 'something.tfrecords' # Initiating the writer and creating the tfrecords file. writer = tf.python_io.TFRecordWriter(tfrecord_filename) # Loading the location of all files - image dataset # Considering our image dataset has apple or orange # The images are named as apple01.jpg, apple02.jpg .. , orange01.jpg .. etc. images = glob.glob('data/*.jpg') for image in images: img = Image.open(image) img = np.array(img.resize((32,32))) label = 0 if 'apple' in image else 1 feature = { 'label': _int64_feature(label), 'image': _bytes_feature(img.tostring()) } #create an example protocol buffer example = tf.train.Example(features=tf.train.Features(feature=feature)) #writing the serialized example. writer.write(example.SerializeToString()) writer.close() </code></pre> <p>Now to read this tfrecord file back: the image bytes need to be decoded into a tensor, and since <code>string_input_producer</code> is queue-based, the queue runners must be started before evaluating (the reshape assumes 32x32 RGB images, matching the writer above):</p> <pre><code>import tensorflow as tf import glob reader = tf.TFRecordReader() filenames = glob.glob('*.tfrecords') filename_queue = tf.train.string_input_producer(filenames) _, serialized_example = reader.read(filename_queue) feature_set = { 'image': tf.FixedLenFeature([], tf.string), 'label': tf.FixedLenFeature([], tf.int64) } features = tf.parse_single_example(serialized_example, features=feature_set) label = features['label'] image = tf.decode_raw(features['image'], tf.uint8) image = tf.reshape(image, [32, 32, 3]) with tf.Session() as sess: coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(sess=sess, coord=coord) print(sess.run([image, label])) coord.request_stop() coord.join(threads) </code></pre> <p>Here's an example for MNIST in <a href="https://github.com/tensorflow/tensorflow/blob/r1.8/tensorflow/examples/how_tos/reading_data/convert_to_records.py" rel="nofollow noreferrer">tensorflow/examples</a></p> <p>Cheers!</p>
python|numpy|tensorflow|deep-learning|tensor
0
375,729
50,499,144
Dynamically Change Lookback Period
<p>I am trying to dynamically adjust the lookback period of a pandas dataframe to run regressions on different lengths of stock data. As an easy example take an MA cross.</p> <pre><code>date Prices Diff signal 20150101 8.5 -1.5 FALSE 20150101 11.5 0.3 TRUE 20150102 14.5 4.5 FALSE 20150103 16.67 3.66 FALSE 20150104 18 2 FALSE 20150105 18.5 0.5 FALSE 20150106 18.17 -2.17 TRUE 20150107 17 -3 FALSE </code></pre> <p>Diff is the difference between 2 moving averages and signal identifies a cross with true/false. Numbers are arbitrary, just examples. There could be 10, 50, 100, etc rows between each signal.</p> <p>Now I would like to run a regression on the prices for whatever length exists between signals, so at row 6 I would want prices[-4:], and row 8 I would want prices[-1:].</p> <p>Could anyone help me out with this problem? Looping backwards in each row until I find the latest signal seems inefficient. Should I simply assign an index value to a variable whenever a signal occurs, and use that variable to define the lookback period? I'm still relatively new to Python and not sure how to go about this. </p> <p>Any help would be appreciated. Thank you!</p>
<p>I have actually solved this problem, but the solution is not very elegant. (This snippet runs once per new row: <code>diff</code> is the moving-average difference series, and <code>over_count</code>/<code>under_count</code>, initialized to 0 beforehand, track the number of rows since the last cross.)</p> <pre><code> if diff[-1] &gt; 0 and diff[-2] &lt; 0: signal = True over_count += 1 under_count = 0 elif diff[-1] &lt; 0 and diff[-2] &gt; 0: signal = False under_count += 1 over_count = 0 elif signal: over_count += 1 elif not signal: under_count += 1 </code></pre>
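<p>For reference, a more idiomatic vectorized sketch of the same idea: label every stretch between crosses with a cumulative counter, then the lookback window falls out of a <code>groupby</code> (column names as in the example table above, with <code>signal</code> as a boolean column):</p> <pre><code>import pandas as pd

# each True signal starts a new segment
df['segment'] = df['signal'].cumsum()

# rows elapsed since the most recent signal, per row
df['since_signal'] = df.groupby('segment').cumcount()

# run a calculation (e.g. a regression) on the prices between signals
for _, seg in df.groupby('segment'):
    prices = seg['Prices']  # the lookback window for this segment
    # ... fit on `prices` here ...
</code></pre>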
python|pandas
0
375,730
50,447,683
Merge 2 dataframes and add NaN if no hit
<p>I have 2 dataframes:</p> <pre><code>qseqid sseqid pident length mismatch seq1 seq24 78 789 45 seq2 seq12 73 790 44 seq3 seq34 12 77 42 seq4 seq90 70 790 41 </code></pre> <p>and another one such as:</p> <pre><code>seq2_id tax_inf seq3 Virus seq1 Eucaryote </code></pre> <p>and I would like to merge these two dataframes such as:</p> <pre><code>qseqid sseqid pident length mismatch tax_inf seq1 seq24 78 789 45 Eucaryote seq2 seq12 73 790 44 NaN seq3 seq34 12 77 42 Virus seq4 seq90 70 790 41 NaN </code></pre>
<p>I believe you need,</p> <pre><code> pd.merge(df1,df2.rename(columns={'seq2_id':'qseqid'}),on='qseqid',how='outer') </code></pre>
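<p>Note that <code>how='outer'</code> also keeps rows of <code>df2</code> that have no match in <code>df1</code>. If the goal is to keep exactly the rows of <code>df1</code> (as in the desired output), a left join does that:</p> <pre><code>pd.merge(df1, df2.rename(columns={'seq2_id': 'qseqid'}), on='qseqid', how='left')
</code></pre>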
python|pandas|merge
1
375,731
50,594,318
numpy array indicator operation
<p>I want to modify an empty bitmap by given indicators (x and y axis). For every coordinate given by the indicators the value should be raised by one.</p> <p>So far so good everything seems to work. But if I have some similar indicators in my array of indicators it will only raise the value once.</p> <pre><code>&gt;&gt;&gt; img array([[0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]) &gt;&gt;&gt; inds array([[0, 0], [3, 4], [3, 4]]) </code></pre> <p>Operation:</p> <pre><code>&gt;&gt;&gt; img[inds[:,1], inds[:,0]] += 1 </code></pre> <p>Result:</p> <pre><code>&gt;&gt;&gt; img array([[1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 1, 0]]) </code></pre> <p>Expected result:</p> <pre><code>&gt;&gt;&gt; img array([[1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 2, 0]]) </code></pre> <p>Does someone have an idea how to solve this? Preferably a fast approach without the use of loops.</p>
<p>This is one way. Counting algorithm <a href="https://stackoverflow.com/a/27001112/9209546">courtesy of @AlexRiley</a>.</p> <p>For performance implications of relative sizes of <code>img</code> and <code>inds</code>, see <a href="https://stackoverflow.com/a/50594865/9209546">@PaulPanzer's answer</a>.</p> <pre><code># count occurrences of each row and return array counts = (inds[:, None] == inds).all(axis=2).sum(axis=1) # apply indices and counts img[inds[:,1], inds[:,0]] += counts print(img) array([[1, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 0, 0], [0, 0, 0, 2, 0]]) </code></pre>
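<p>For larger inputs, NumPy also ships an unbuffered accumulation primitive for exactly this situation, which avoids the quadratic pairwise comparison above:</p> <pre><code>import numpy as np

# applies += 1 once per index pair, even when pairs repeat
np.add.at(img, (inds[:, 1], inds[:, 0]), 1)
</code></pre>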
python|arrays|numpy
6
375,732
50,424,151
How to solve an equation which has the solution variable inside a function on both sides of the equation
<p>I want to plot a graph of an equation in Python which has the solution variable on both sides, inside a function. The equation is:</p> <pre><code>i = Ip - Io*(exp((V + i*R1)/(n*Vt)) - 1) - (V + i*R1)/R2 </code></pre> <p>where <code>Ip, Io, n, R1, R2, Vt</code> are some constants.</p> <p>I want to iterate <code>V</code> in the range <code>(0,10)</code> and want to get values for <code>i</code> using Python and plot a <code>V-i</code> graph.</p> <pre><code>import numpy as np from sympy import * import matplotlib.pyplot as plt r = 50 V = np.linspace(0,10,r) def current(): current = [] for t in V: i = np.zeros(r) Ipv = 3 Rs = 0.221 Rsh = 415 n = 2 m = 1.5 T = 302 Eg = 1.14 K = 1.3 Vt = T/11600 Io = K*(T**m)*exp(-Eg/(n*Vt)) i = Ipv - Io *(exp((t + Rs*i)/(n*Vt)) - 1) - (t + Rs * i)/Rsh current.append(i) return np.array(current) Icurrent = current() plt.plot(V,Icurrent) plt.show() </code></pre> <p>I did this but it is not working.</p> <p>Any suggestions welcome.</p>
<p>It seems, your problem is that you mix <code>numpy</code> arrays with a scalar <code>math</code> function. <a href="https://stackoverflow.com/questions/48226089/scipy-curve-fit-doesnt-like-math-module">Don't do this.</a> Substitute it with the appropriate <code>numpy</code> function:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt r = 50 V = np.linspace(0,10,r) print(V) def current(): current = [] for t in V: i = np.zeros(r) Ipv = 3 Rs = 0.221 Rsh = 415 n = 2 m = 1.5 T = 302 Eg = 1.14 K = 1.3 Vt = T/11600 Io = K*(T**m)*np.exp(-Eg/(n*Vt)) i = Ipv - Io *(np.exp((t + Rs*i)/(n*Vt)) - 1) - (t + Rs * i)/Rsh current.append(i) return np.array(current) Icurrent = current() plt.plot(V,Icurrent) plt.show() </code></pre> <p>Output:<br> <a href="https://i.stack.imgur.com/BZp1f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BZp1f.jpg" alt="enter image description here"></a></p> <p>I would have suggested using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html#scipy.optimize.fsolve" rel="nofollow noreferrer"><code>scipy.fsolve</code></a>, but seemingly your approach is working.</p>
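<p>For completeness: because <code>i</code> appears on both sides, the loop above only takes a single fixed-point step from <code>i = 0</code>; it happens to be exact here with <code>Rs = 0</code>, but for a nonzero series resistance you need to actually solve the implicit equation. A minimal sketch with <code>scipy.optimize.fsolve</code>, assuming the constants (<code>Ipv, Io, Rs, Rsh, n, Vt</code>) are defined once at module level as in the loop body:</p> <pre><code>import numpy as np
from scipy.optimize import fsolve

def residual(i, t):
    # zero when i satisfies the implicit panel equation at voltage t
    return Ipv - Io * (np.exp((t + Rs * i) / (n * Vt)) - 1) - (t + Rs * i) / Rsh - i

# one root-find per voltage point, starting from 0
Icurrent = np.array([fsolve(residual, x0=0.0, args=(t,))[0] for t in V])
</code></pre>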
python|python-3.x|numpy|sympy
1
375,733
50,521,987
Match column values to dict
<p>I have a dict and a dataframe like the examples v and df below. I want to search through the items in df and return the item that has the maximum number of field values in common with the values in the dict. In this case it would be item 3. I was thinking maybe of using apply with a lambda function, or transposing the df. I just can't quite get my head around it. If anyone has a slick way to do this or any tips, they're greatly appreciated.</p> <p>input:</p> <pre><code>v={'size':1,'color':'red'} df: item size color 2 2 red 3 1 red Output: 3 </code></pre>
<p>Create one line <code>DataFrame</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html" rel="nofollow noreferrer"><code>merge</code></a> with original:</p> <pre><code>a = pd.DataFrame(v, index=[0]).merge(df)['item'] print (a) 0 3 Name: item, dtype: int64 </code></pre> <p>Another solution with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>query</code></a>, but if strings values of <code>dict</code> is necessary add another <code>"</code>:</p> <pre><code>v1 = {k: '"{}"'.format(v) if isinstance(v, str) else v for k, v in v.items()} print (v1) {'size': 1, 'color': '"red"'} df = df.query(' &amp; '.join(['{}=={}'.format(i,j) for i, j in v1.items()]))['item'] print (df) 1 3 Name: item, dtype: int64 </code></pre> <p>In output are possible 3 ways - <code>Series</code> with more values, one value or empty, so helper function was created:</p> <pre><code>def get_val(v): x = pd.DataFrame(v, index=[0]).merge(df)['item'] if x.empty: return 'Not found' elif len(x) == 1: return x.values[0] else: return x.values.tolist() </code></pre> <pre><code>print (get_val({'size':1,'color':'red'})) 3 print (get_val({'size':10,'color':'red'})) Not found print (get_val({'color':'red'})) [2, 3] </code></pre>
python|python-2.7|pandas
2
375,734
50,585,096
Pandas Dataframe to_sql with a Decimal type
<p>My Dataframe won't send to a SQLite database using the "to_sql" method, when the datatype is Decimal:</p> <pre><code>con = sqlite3.connect("test.db") df = pd.DataFrame({"a":[decimal.Decimal(0)]}) df.to_sql(name="table", con=con) </code></pre> <p>error:</p> <blockquote> <p>sqlite3.InterfaceError: Error binding parameter 0 - probably unsupported type.</p> </blockquote> <p>Is there a way around it? I would prefer to store the Decimal (in the database) as "text"</p>
<p>Hi, I had the same problem. I solved it by converting the Decimal column in the Dataframe into a SQLAlchemy Numeric datatype:</p> <pre><code>import decimal import pandas as pd from sqlalchemy import Numeric from sqlalchemy import create_engine engine = create_engine("sqlite:///test.db") df = pd.DataFrame({"a": [decimal.Decimal(0)]}) df.to_sql( name="tariff", con=engine, dtype={"a": Numeric()} ) </code></pre>
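<p>Alternatively, since the question mentions wanting the values stored as text, a simple sketch is to stringify the Decimal column before writing:</p> <pre><code>df["a"] = df["a"].astype(str)  # Decimal values become str, stored as TEXT in SQLite
df.to_sql(name="tariff", con=engine, index=False)
</code></pre>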
python|pandas|sqlite
2
375,735
50,660,949
Moving a dataframe column and changing column order
<p>I have a dataframe called <code>df</code> which has the following columns header of data:</p> <pre><code>date A B C D E F G H I 07/03/2016 2.08 1 NaN NaN 1029 2 2.65 4861688 -0.0388 08/03/2016 2.20 1 NaN NaN 1089 2 2.20 5770819 -0.0447 : : 09/03/2016 2.14 1 NaN NaN 1059 2 2.01 5547959 -0.0514 10/03/2016 2.25 1 NaN NaN 1089 2 1.95 4064482 -0.0520 </code></pre> <p>Is there a way to change the order of the columns so that column F is moved to a position that is after column H. The resulting <code>df</code> would look like:</p> <pre><code>date A B C D E F G H F I 07/03/2016 2.08 1 NaN NaN 1029 2 2.65 4861688 2 -0.0388 08/03/2016 2.20 1 NaN NaN 1089 2 2.20 5770819 2 -0.0447 : : 09/03/2016 2.14 1 NaN NaN 1059 2 2.01 5547959 2 -0.0514 10/03/2016 2.25 1 NaN NaN 1089 2 1.95 4064482 2 -0.0520 </code></pre>
<p>Use <code>df.insert</code> with <code>df.columns.get_loc</code> to dynamically determine the position of insertion.</p> <pre><code>col = df['F'] # df.pop('F') # if you want it removed df.insert(df.columns.get_loc('H') + 1, col.name, col, allow_duplicates=True) </code></pre> <p></p> <pre><code>df date A B C D E F G H F I 0 07/03/2016 2.08 1 NaN NaN 1029 2 2.65 4861688 2 -0.0388 1 08/03/2016 2.20 1 NaN NaN 1089 2 2.20 5770819 2 -0.0447 ... </code></pre>
python|pandas|dataframe
9
375,736
50,349,209
Excel query automation in pandas
<p>I am working on automation with Python. In one step, I need to take a column and check that only the required values are in the sheet. (We use filter in Excel and select the required values.)</p> <p>What function in pandas can help me?</p>
<p>You can use pandas to achieve such tasks with a table:</p> <pre><code>import pandas as pd df = pd.read_excel('myexcelfile.xlsx') df.filter(items=['one', 'three']) </code></pre> <p>Note that <code>read_excel</code> (not <code>read_csv</code>) is the reader for <code>.xlsx</code> files. For more on <code>filter</code>, refer to the pandas docs: <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.filter.html</a></p>
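<p>One caveat: <code>DataFrame.filter</code> selects columns by label, not rows by value. If the goal is the Excel-style filter of keeping only rows whose column holds one of the required values, a minimal sketch (column and value names are placeholders):</p> <pre><code>required = ['A', 'B']                 # the values you would tick in Excel's filter
mask = df['my_column'].isin(required)
filtered = df[mask]                   # rows matching the required values

# to verify that ONLY required values occur in the sheet:
unexpected = df.loc[~mask, 'my_column'].unique()
print(unexpected)                     # should be empty
</code></pre>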
python|pandas
1
375,737
50,489,949
How to split and store data from dataframe in pandas
<p>I have an Excel file which has the values mentioned below:</p> <pre><code>KF &lt;-- Col Name Values: Ab122323,pop 89,HG903434 FG903434,99 </code></pre> <p>I need to split the values using ',', count the length of each value, keep only the values which have len = 8, and store them as a list --&gt; into an Excel file</p>
<p>You can use <code>pd.Series.apply</code> with a generator expression. You will meet a <code>StopIteration</code> error if an item of length 8 cannot be found.</p> <pre><code>df = pd.DataFrame({'KF': ['Ab122323,pop', '89,HG903434', 'FG903434,99']}) df['Filter'] = df['KF'].apply(lambda x: next(i for i in x.split(',') if len(i)==8)) df[['Filter']].to_excel('file.xlsx', index=False) print(df) KF Filter 0 Ab122323,pop Ab122323 1 89,HG903434 HG903434 2 FG903434,99 FG903434 </code></pre>
python|pandas|dataframe|split
1
375,738
50,445,428
Why did pandas give "0.66-0.36" when I tried to add two columns?
<p>I am trying to do a simple summation with column name <code>Tangible Book Value</code> and <code>Earnings Per Share</code>: </p> <pre><code>df['price_asset_EPS'] = (df["Tangible Book Value"]) + (df["Earnings Per Share"]) </code></pre> <p>However, the result doesn't evaluate the numbers and also the plus is missing as below </p> <pre><code>0.66-0.36 1.440.0 </code></pre> <p>What I have missed in between?</p>
<p><strong>Looks like both columns are strings (not float):</strong> </p> <pre><code>0.66-0.36 1.440.0 </code></pre> <p>see how <strong>'+' on those columns did string concatenation instead of addition</strong>? It concatenated "0.66" and "-0.36", then "1.44" and "0.0".</p> <p>As to <strong>why</strong> those columns are strings not float, look at the dtype that <code>pandas.read_csv</code> gave them. There are many duplicate questions here telling you how to specify the right dtypes to read_csv.</p>
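<p>A minimal sketch of the fix, converting both columns to numeric before adding (column names as in the question; use <code>errors='coerce'</code> if some cells may not parse):</p> <pre><code>import pandas as pd

df["Tangible Book Value"] = pd.to_numeric(df["Tangible Book Value"])
df["Earnings Per Share"] = pd.to_numeric(df["Earnings Per Share"])
df['price_asset_EPS'] = df["Tangible Book Value"] + df["Earnings Per Share"]
</code></pre>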
python|pandas
2
375,739
50,295,457
Euler rotation of ellipsoid expressed by coordinate matrices in python
<p>Objective: Apply an Euler rotation to an ellipsoid, then plot it using matplotlib and mplot3d.</p> <p>I found a function which applies an Euler rotation to a vector or array of vectors:</p> <pre><code>import numpy as np from scipy.linalg import expm def rot_euler(v, xyz): ''' Rotate vector v (or array of vectors) by the euler angles xyz ''' # https://stackoverflow.com/questions/6802577/python-rotation-of-3d-vector for theta, axis in zip(xyz, np.eye(3)): v = np.dot(np.array(v), expm(np.cross(np.eye(3), axis*-theta))) return v </code></pre> <p>However, the ellipsoid I need to rotate is represented as a set of three coordinate matrices (in order to be plotable using ax.plot_surface()):</p> <pre><code>import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from numpy import pi,sin,cos fig, ax = plt.subplots(subplot_kw=dict(projection='3d')) ax.set_aspect('equal','box') ax.set_xlim3d(-1,1) ax.set_ylim3d(-1,1) ax.set_zlim3d(-1,1) ax.view_init(90,90) ellipseSteps= 100 diamCoef = 500 widthCoef = 25 coefs = (widthCoef, diamCoef, diamCoef) # Coefficients in a0/c x**2 + a1/c y**2 + a2/c z**2 = 1 # Radii corresponding to the coefficients: rx, ry, rz = 1/np.sqrt(coefs) # Set of all spherical angles: u = np.linspace(0, 2 * pi, ellipseSteps) v = np.linspace(0, pi, ellipseSteps) # Cartesian coordinates that correspond to the spherical angles: # (this is the equation of an ellipsoid): ex = rx * np.outer(cos(u), sin(v)) ey = ry * np.outer(sin(u), sin(v)) ez = rz * np.outer(np.ones_like(u), cos(v)) # Plot: ax.plot_surface(ex, ey, ez, rstride=4, cstride=4, color='blue') plt.show() </code></pre> <p>How can I apply the Euler rotation to this object? I thought of just converting the object from three coordinate matrices to vectors and then feeding that into the existing formula, but it occurs to me that doing so might be computationally inefficient... which makes me wonder if the rotation function can be modified to work on the coordinate matrices?</p> <p>I realize it's probably a rather trivial ask, but it's been years since I last did any linear algebra, and would very much appreciate the advice of an expert here.</p> <p>Thanks! </p>
<p>While I don't think you can do much better than applying the rotation point-wise, there still is room for significant economy.</p> <p>(1) Using the matrix exponential to compute simple rotation matrices is ridiculously wasteful. Much better to use the scalar exponential or sine and cosine.</p> <p>(2) To a lesser extent the same applies to using the cross product for shuffling. Indexing is preferable here.</p> <p>(3) The order of matrix multiplication matters. When bulk rotating more than three vectors, the leftmost multiplication should be done last.</p> <p>Together these measures speed up the computation by a factor of six:</p> <p>(insert before last two lines of original script)</p> <pre><code>from scipy.linalg import block_diag from timeit import timeit def rot_euler_better(v, xyz): TD = np.multiply.outer(np.exp(1j * np.asanyarray(xyz)), [[1], [1j]]).view(float) x, y, z = (block_diag(1, TD[i])[np.ix_(*2*(np.arange(-i, 3-i),))] for i in range(3)) return v @ (x @ y @ z) # example xyz = np.pi * np.array((1/6, -2/3, 1/4)) print("Same result:", np.allclose(rot_euler(np.array((*map(np.ravel, (ex, ey, ez)),)).T, xyz), rot_euler_better(np.array((*map(np.ravel, (ex, ey, ez)),)).T, xyz))) print("OP: ", timeit(lambda: rot_euler(np.array((*map(np.ravel, (ex, ey, ez)),)).T, xyz), number=1000), "ms") print("optimized:", timeit(lambda: rot_euler_better(np.array((*map(np.ravel, (ex, ey, ez)),)).T, xyz), number=1000), "ms") ex, ey, ez = map(np.reshape, rot_euler_better(np.array((*map(np.ravel, (ex, ey, ez)),)).T, xyz).T, map(np.shape, (ex, ey, ez))) </code></pre> <p>Output:</p> <hr> <pre><code>Same result: True OP: 2.1019406360574067 ms optimized: 0.3485010238364339 ms </code></pre>
python|numpy|matplotlib|rotation
2
375,740
50,653,962
pandas.read_excel with identical column names in excel
<p>When I import an excel table with pandas.read_excel there is a problem (or a feature :-) ) with identical column names. For example, if the excel file has two columns named "dummy", after the import into a dataframe the second column is named "dummy.1". Is there a way to import without the renaming option?</p>
<p>Now I don't see the point why you would want this. However, as I could think of a workaround I might as well post it.</p> <p><a href="https://i.stack.imgur.com/mDjBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mDjBF.png" alt="enter image description here"></a></p> <pre><code>import pandas as pd cols = pd.read_excel('text.xlsx', header=None,nrows=1).values[0] # read first row df = pd.read_excel('text.xlsx', header=None, skiprows=1) # skip 1 row df.columns = cols print(df) </code></pre> <p>Returns:</p> <pre><code> col1 col1 0 1 1 1 2 2 2 3 3 </code></pre>
python|excel|pandas
3
375,741
50,348,640
NP.max function error in Python 3
<p>I am learning background removal with OpenCV, using code from <a href="http://www.codepasta.com/site/vision/segmentation/" rel="nofollow noreferrer">http://www.codepasta.com/site/vision/segmentation/</a></p> <p>The error comes from np.max:</p> <pre><code>edgeImg = np.max( np.array([ edgedetect(blurred[:,:, 0]), edgedetect(blurred[:,:, 1]), edgedetect(blurred[:,:, 2]) ]), axis=0 ) </code></pre> <p>When I change np.max to np.maximum, the error is as follows:</p> <pre><code>Traceback (most recent call last): File "deteksipinggir-sobel.py", line 77, in &lt;module&gt; segment('078.jpg') File "deteksipinggir-sobel.py", line 49, in segment np.array([edgedetect(blurred[:, :, 0]), edgedetect(blurred[:, :, 1]), edgedetect(blurred[:, :, 2])]), axis=0) ValueError: invalid number of arguments </code></pre> <p>When I change np.max to np.amax, the error is as follows:</p> <pre><code>Traceback (most recent call last): File "deteksipinggir-sobel.py", line 77, in &lt;module&gt; segment('078.jpg') File "deteksipinggir-sobel.py", line 49, in segment np.array([edgedetect(blurred[:, :, 0]), edgedetect(blurred[:, :, 1]), edgedetect(blurred[:, :, 2])]), axis=0) File "E:\Anaconda3\lib\site-packages\numpy\core\fromnumeric.py", line 2272, in amax out=out, **kwargs) File "E:\Anaconda3\lib\site-packages\numpy\core\_methods.py", line 26, in _amax return umr_maximum(a, axis, None, out, keepdims) TypeError: '&gt;=' not supported between instances of 'NoneType' and 'NoneType' </code></pre> <p>I am using Python 3. Please help, thanks.</p>
<p>From the numpy documentation <a href="https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.maximum.html" rel="nofollow noreferrer">here</a>: <code>numpy.maximum</code> expects two input arrays as arguments:</p> <pre><code>numpy.maximum(arr1, arr2) </code></pre> <p>Whereas <code>numpy.max</code> only requires one input array as an argument:</p> <pre><code>numpy.max(arr1) </code></pre> <p>Make sure you are providing <code>numpy.maximum</code> at least two arrays as arguments.</p>
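<p>A quick sketch of the difference between the two:</p> <pre><code>import numpy as np

a = np.array([1, 5, 2])
b = np.array([4, 3, 9])

np.maximum(a, b)                  # element-wise max of two arrays: [4 5 9]
np.max(np.array([a, b]), axis=0)  # max over one stacked array:     [4 5 9]
</code></pre> <p>Also note that the <code>NoneType</code> comparison error from <code>np.amax</code> suggests <code>edgedetect</code> returned <code>None</code> for at least one channel, so it is worth checking that that function actually returns an array.</p>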
python|numpy|opencv
3
375,742
50,558,768
How to get top 5 most occurring names in a column
<p>My Dataframe looks something like this.</p> <p><img src="https://i.stack.imgur.com/8OuCu.png" alt="enter image description here"></p> <p>I need to find the top 5 most occurring names in the <code>Name</code> column of this table.</p>
<p>Try (where <code>Dataframe</code> is your dataframe variable; <code>value_counts</code> sorts in descending order and <code>head(5)</code> keeps the top 5):</p> <pre><code>Dataframe.Name.value_counts().head(5) </code></pre>
python|python-3.x|pandas
0
375,743
50,246,105
Pandas - Resample/GroupBy DateTime Index and perform calculations
<p>I will try my best to explain what I need help with. I have the following df (thousands if not millions of rows) with a datetime index like the sample below:</p> <pre><code>INDEX COL A COL B 2018-05-07 21:53:13.731 0.365127 9391.800000 2018-05-07 21:53:16.201 0.666127 9391.800000 2018-05-07 21:53:18.038 0.143104 9391.800000 2018-05-07 21:53:18.243 0.025643 9391.800000 2018-05-07 21:53:18.265 0.640484 9391.800000 2018-05-07 21:53:18.906 -0.100000 9391.793421 2018-05-07 21:53:19.829 0.559516 9391.800000 2018-05-07 21:53:19.846 0.100000 9391.800000 2018-05-07 21:53:19.870 0.006560 9391.800000 2018-05-07 21:53:20.734 0.666076 9391.800000 2018-05-07 21:53:20.775 0.666076 9391.800000 2018-05-07 21:53:28.607 0.100000 9391.800000 2018-05-07 21:53:28.610 0.041991 9391.800000 2018-05-07 21:53:29.283 -0.053518 9391.793421 2018-05-07 21:53:47.322 -0.046302 9391.793421 2018-05-07 21:53:49.182 0.100000 9391.800000 </code></pre> <p>What I would like to do is group the rows in 5 second intervals and perform (sometimes complex) calculations on each 5 second interval/subset.</p> <p>Let's say for example I want to calculate the percentage of positive vs negative values in column A within each 5 second block.</p> <p><code>2018-05-07 21:53:10</code> to <code>2018-05-07 21:53:15</code> only contains one row and column A is a positive so I would create a new column C with <code>100%</code>.</p> <p>Similarly <code>2018-05-07 21:53:15</code> to <code>2018-05-07 21:53:20</code> has 8 rows in column A, 7 which are positive and 1 of which is negative. So column C would be <code>87.5%</code>.</p> <p>I would post sample code but I'm really unsure the best way to do this. A sample output (new df) may be something like the below with COL D being simply the minimum number in COL B for that 5 second grouping:</p> <pre><code>INDEX COL C COL D (MIN) 2018-05-07 21:53:10 100% 9391.800000 2018-05-07 21:53:15 12.5% 9391.793421 2018-05-07 21:53:20 100% 9391.800000 2018-05-07 21:53:25 66.7% 9391.793421 2018-05-07 21:53:30 nan nan 2018-05-07 21:53:35 nan nan 2018-05-07 21:53:40 nan nan 2018-05-07 21:53:45 100% 9391.793421 </code></pre> <p><strong>Please keep in mind I want to do many different calculations over each grouping.</strong> So using built-in <code>.sum()</code>, <code>.mean()</code>, <code>.agg()</code> etc will not suffice for more complex calculations.</p> <p>Appreciate any help and am happy to clarify the question if needed.</p>
<p>I believe need for percentage of positive values need mean of values <code>&gt;0</code>:</p> <pre><code>df = df.resample('5S').agg({'COL A': lambda x: (x &gt; 0).mean() * 100, 'COL B': 'min'}) print (df) COL A COL B INDEX 2018-05-07 21:53:10 100.000000 9391.800000 2018-05-07 21:53:15 87.500000 9391.793421 2018-05-07 21:53:20 100.000000 9391.800000 2018-05-07 21:53:25 66.666667 9391.793421 2018-05-07 21:53:30 NaN NaN 2018-05-07 21:53:35 NaN NaN 2018-05-07 21:53:40 NaN NaN 2018-05-07 21:53:45 50.000000 9391.793421 </code></pre> <p>and for percentage of negative values need mean of <code>&lt;0</code>:</p> <pre><code>df = df.resample('5S').agg({'COL A': lambda x: (x &lt; 0).mean() * 100, 'COL B': 'min'}) print (df) COL A COL B INDEX 2018-05-07 21:53:10 0.000000 9391.800000 2018-05-07 21:53:15 12.500000 9391.793421 2018-05-07 21:53:20 0.000000 9391.800000 2018-05-07 21:53:25 33.333333 9391.793421 2018-05-07 21:53:30 NaN NaN 2018-05-07 21:53:35 NaN NaN 2018-05-07 21:53:40 NaN NaN 2018-05-07 21:53:45 50.000000 9391.793421 </code></pre> <p>As @Alexander pointed <code>0</code> is neither positive nor negative. So the best is remove it before count:</p> <pre><code>df = df.resample('5S').agg({'COL A': lambda x: (x[x.ne(0)] &gt; 0).mean() * 100, 'COL B': 'min'}) </code></pre>
python|pandas|datetime|time-series
3
375,744
50,645,240
Pandas json_normalize fails with null values in JSON
<p>I have the below JSON which I get from an external webservice:</p> <pre><code>text=""" [{ "id":"1", "name" : "abc", "address":{ "flat":"123", "city":"paris", "street":null }, "error":null }] """ </code></pre> <p>Now I want to create a dataframe from this JSON. When I try the below:</p> <pre><code>from pandas.io.json import json_normalize import json import pandas as pd resp_json = json.loads(text) response = json_normalize(resp_json) </code></pre> <p>it gives me the below error:</p> <p><code>Error at response = json_normalize(resp_json) KeyError : 'street'</code></p> <p>I believe it's because the street attribute has a null value; that's why it is throwing this error. How can this be resolved?</p> <p>If I do like below, I am able to resolve it, but ideally it's not the right solution:</p> <pre><code>text = text.replace('"street":null','"street":""') </code></pre> <p><strong>NOTE:</strong> When I use Python version 3.6.3 :: Anaconda Inc. and pandas version 0.20.3 I do not see this issue and json_normalize works properly. This is my local machine setup.</p> <p>On the production machine we have Python 3.5.1 and pandas 0.23.0. There we encounter the above issue.</p>
<p>This appears to be a bug in the latest version of pandas:</p> <p><a href="https://github.com/pandas-dev/pandas/issues/21158" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/issues/21158</a></p> <p>I'm running pandas '0.23.0' and I can reproduce the same error. You can see in the github discussion thread that error arises due to condition case when <em>null</em> value occurs on the nesting level greater than 0. It seems to have been changed around two months ago that seems to have made it's way into 0.23.0 release two weeks ago:</p> <p><a href="https://github.com/pandas-dev/pandas/commit/01882ba5b4c21b0caf2e6b9279fb01967aa5d650#diff-9c654764f5f21c8e9d58d9ebf14de86d" rel="nofollow noreferrer">https://github.com/pandas-dev/pandas/commit/01882ba5b4c21b0caf2e6b9279fb01967aa5d650#diff-9c654764f5f21c8e9d58d9ebf14de86d</a></p> <p>Other than waiting for the new release or downgrading your production env (which is not a good idea, since it will quite likely break things), you could think of how to handle multiple package versions in your env. Pip is not capable of doing so unless you create different virtual environments, neither is conda I believe. What you could do, if you really need to load files like those, is to load the '0.22.0' package as a local module by cloning it from git as a temporary, hacky, solution - just to load your dict. But there might be some dataframe API inconsistencies when you load with 0.22.0 and try to use it with 0.23.0. </p> <p>Your solution of converting strings might not be that bad after-all.</p> <p>Happy hacking.</p>
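<p>As a stopgap until you can upgrade past the regression, a workaround that is a bit safer than raw string replacement (which could corrupt legitimate <code>null</code> substrings inside values) is to swap out <code>None</code> values after parsing, before normalizing:</p> <pre><code>def replace_none(obj, repl=''):
    # recursively replace None in parsed JSON with repl
    if isinstance(obj, dict):
        return {k: replace_none(v, repl) for k, v in obj.items()}
    if isinstance(obj, list):
        return [replace_none(v, repl) for v in obj]
    return repl if obj is None else obj

resp_json = replace_none(json.loads(text))
response = json_normalize(resp_json)
</code></pre>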
python|json|python-3.x|pandas
1
375,745
50,464,140
How to plot a power curve by following code?
<p>In the code, I tried to plot a graph Power(p) vs voltage (Vpv) but my code is not giving the result. </p> <pre><code>import numpy as np import math from numpy import * import matplotlib.pyplot as plt plt.style.use('ggplot') r = 50 Vpv = np.linspace(0,0.6,r) # Vpv = panel voltage Rs = 0 # series resistance Rsh = math.inf # parallel resistance n = 2 # ideality constant depends on semiconductor material m = 1.5 # another constant depends on dopping T = 298 # temperature in kelvin Eg = 1.14 # band gap energy in ev K = 0.13 # constant Vt = T/11600 #thermal voltage Io = K*(T**m)*exp(-Eg/(n*Vt)) # Io = diode current print(Io) Isc = Io*(10**9) def current(): current = [] #initializing current array as null for t in Vpv: Ipv = np.zeros(r) #initializing panel current(Ipv) as zero Ipv = Isc - Io *(exp((t + Rs*Ipv)/(n*Vt)) - 1) - (t + Rs*Ipv)/Rsh current.append(Ipv) return np.array(current) Icurrent = current() #power = Vpv * Icurrent power = np.multiply(Vpv, Icurrent) plt.plot(Vpv,power,'b') #plt.plot(Vpv,Icurrent,'r') plt.xlabel('Panel VOltage(V)') plt.ylabel('Panel Current(A) and Power(W)') plt.show() </code></pre> <p>Is also tried to use array multiplication like <code>no.multiply(arr1,arr2)</code> but this is also not working.</p> <p>I am getting the following graph using array multiplication - <img src="https://i.stack.imgur.com/W9Ozd.png" alt="Output of the above code"></p> <p>but it should come in the following shape - <img src="https://i.stack.imgur.com/U6Usn.jpg" alt="Expected shape of output"></p> <p>Any suggestion Welcome. </p>
<p>I have solved my problem. I got correct graphs.</p> <pre><code>import numpy as np import math from numpy import * import matplotlib.pyplot as plt plt.style.use('ggplot') r = 50 Vpv = np.linspace(0,1.1,r) # Vpv = panel voltage Rs = 0 # series resistance Rsh = math.inf # parallel resistance n = 2 # ideality constant depends on semiconductor material m = 1.5 # another constant depends on dopping T = 298 # temperature in kelvin Eg = 1.14 # band gap energy in ev K = 0.13 # constant Vt = T/11600 #thermal voltage Io = K*(T**m)*exp(-Eg/(n*Vt)) # Io = diode current print(Io) Isc = Io*(10**9) def current(): current = [] #initializing current array as null for t in Vpv: Ipv = 0 #initializing panel current(Ipv) as zero Ipv = Isc - Io *(exp((t + Rs*Ipv)/(n*Vt)) - 1) - (t + Rs*Ipv)/Rsh current.append(Ipv) return np.array(current) Icurrent = current() p = Vpv * Icurrent plt.plot(Vpv,p,'b') plt.plot(Vpv,Icurrent,'r') plt.xlabel('Panel VOltage(V)') plt.ylabel('Panel Current(A)') plt.ylim(0,max(Icurrent)+ 10) plt.plot((-0.1, max(Vpv)), (0,0), 'k') plt.plot((0,0),(-3,8), 'k') plt.show() </code></pre>
python|numpy|matplotlib|python-3.6
0
375,746
50,512,655
Saving numpy arrays as a dictionary
<p>I'm saving 2 Numpy arrays as a dictionary.<br> When I load the data from the binary file, I get another <code>ndarray</code>. Can I use the loaded Numpy array as a dictionary? <br/> <br/> Here is my code and the output of my script:</p> <pre><code>import numpy as np x = np.arange(10) y = np.array([100, 101, 102, 103, 104, 105, 106, 107]) z = {'X': x, 'Y': y} np.save('./data.npy', z) z1 = np.load('./data.npy') print(type(z1)) print(z1) print(z1['X']) #this line will generate an error </code></pre> <blockquote> <p>Output: {'X': array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]), 'Y': array([100, 101, 102, 103, 104, 105, 106, 107])}</p> </blockquote>
<p>Yes, you can access the underlying dictionary in a 0-dimensional array. Try <code>z1[()]</code>.</p> <p>Here's a demo:</p> <pre><code>np.save('./data.npy', z) d = np.load('./data.npy')[()] print(type(d)) &lt;class 'dict'&gt; print(d['X']) [0 1 2 3 4 5 6 7 8 9] </code></pre>
python|arrays|numpy|dictionary
5
375,747
50,501,480
How to create a DataFrame of a single column from a list where the first element is the column name in python
<p>I have the below data in a csv and I am trying to create a dataframe of 1 column by selecting each column from the csv at a time.</p> <pre><code>sv_m1 rev ioip 0 15.31 40 0 64.9 0 0 18.36 20 0 62.85 0 0 10.31 20 0 12.84 10 0 69.95 0 0 32.81 20 </code></pre> <p>The list that I get, the first value is the column name and remaining are values.</p> <pre><code>input_file = open('df_seg_sample.csv', 'r') c_reader = csv.reader(input_file, delimiter=',') #Read column column = [x[1] for x in c_reader] label = column[0] column = column[1:] df_column = pd.DataFrame.from_records(data = column,columns = label) </code></pre> <p>However this gives me an error:</p> <pre><code> TypeError: Index(...) must be called with a collection of some kind, 'sv_m1' was passed </code></pre> <p>core is actually the column name.</p> <p>How can I create this df? The column name of the df will be the first element in the list and all other items in the list will be the column values.</p> <p>The reason for not using pandas.read_csv is: The dataframe is huge and hogs up a lot of memory. So I want to read in a column at a time, do some processing and write it to another csv.</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html" rel="nofollow noreferrer"><code>read_csv</code></a> here with the <code>usecols</code> parameter to filter the second column:</p> <pre><code>df = pd.read_csv('df_seg_sample.csv', usecols=[1]) print (df) rev 0 15.31 1 64.90 2 18.36 3 62.85 4 10.31 5 12.84 6 69.95 7 32.81 </code></pre> <p>But if you want to use your solution, it is necessary to add <code>[]</code> to make a one-item list for the column name, and to use only the <code>DataFrame</code> constructor:</p> <pre><code>data = [x[1] for x in c_reader] print (data) ['rev', '15.31', '64.9', '18.36', '62.85', '10.31', '12.84', '69.95', '32.81'] df = pd.DataFrame(data[1:], columns=[data[0]]) print (df) rev 0 15.31 1 64.9 2 18.36 3 62.85 4 10.31 5 12.84 6 69.95 7 32.81 </code></pre>
python|list|pandas
1
375,748
45,324,695
TensorFlow error: "logits and labels must be same size", warmspringwinds "tutorial"
<p>I'm currently following this <a href="http://warmspringwinds.github.io/tensorflow/tf-slim/2016/12/18/image-segmentation-with-tensorflow-using-cnns-and-conditional-random-fields/" rel="nofollow noreferrer"> tutorial</a> and after I did some changes because of the tensorflow update, I got this error:</p> <blockquote> <p>tensorflow.python.framework.errors_impl.InvalidArgumentError: logits and labels must be same size: logits_size=[399360,2] labels_size=[409920,2] [[Node: SoftmaxCrossEntropyWithLogits = SoftmaxCrossEntropyWithLogits[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"](Reshape_2, Reshape_3)]]. </p> </blockquote> <p>Can anyone help me with this one?</p> <p>Changes in the code:</p> <pre><code>#Replaced concat_dim=2 with axis=2 combined_mask = tf.concat(axis=2, values=[bit_mask_class, bit_mask_background]) #Update the import of urllib2 to urllib3 #Replace tf.pack with tf.stack upsampled_logits_shape = tf.stack([ downsampled_logits_shape[0], downsampled_logits_shape[1] * upsample_factor, downsampled_logits_shape[2] * upsample_factor, downsampled_logits_shape[3]]) </code></pre>
<p>The error is raised because the number of logits is <code>399360</code> while you are providing to the function with <code>409920</code> labels. The function <code>tf.nn.softmax_cross_entropy_with_logits</code> expects one label for each logit, and it crashes because you are providing more labels than logits.</p> <p>As to why it happens, you should post the changes you made to the code.</p>
machine-learning|tensorflow|image-segmentation
0
375,749
45,570,773
How to input a condition into df.assign?
<p>I have the following datatype:</p> <pre><code>id arrival_time departure_time start end capacity Train A 0 2016-05-19 08:25:00 A B 2 Train A 2016-05-19 13:50:00 2016-05-19 16:00:00 B H 2 Train A 2016-05-19 21:25:00 2016-05-20 07:25:00 C I 3 Train B 0 2016-05-24 12:50:00 D J 3 Train B 2016-05-24 18:30:00 2016-05-25 20:00:00 E K 2 Train B 2016-05-26 12:15:00 2016-05-26 19:45:00 K L 3 </code></pre> <p>I would like to add a column called source and sink and if the time difference between arrival and departure is less than 3 hours, the source is the starting of the trip and the sink is only when the trip breaks (ie when time_difference is more than 3 hours,</p> <pre><code>time difference source sink - A H 02:10:00 A H 10:00:00 C I - D J 01:30:00 E L 19:30:00 E L </code></pre> <p>So i use the following condition: </p> <pre><code>df = df.assign(timediff=(df.departure_time - df.arrival_time)) df = df.assign(source = np.where(df.timediff.dt.seconds / 3600 &lt; 3, df.shift(1).start, df.start)) df = df.assign(sink = np.where(df.timediff.dt.seconds.shift(1) / 3600 &gt; 3, df.shift(-1).end, df.end)) </code></pre> <p>But I would like to implement this condition, only if the end of first line matches with start of second line, grouped by their id.</p> <p>I used the following condition to implement this,</p> <pre><code>df = df.assign(timediff=(df.departure_time - df.arrival_time)) n1= (df['end'] == df ['start'].shift() &amp; df.timediff.dt.seconds / 3600 &lt; 3) df = df.assign(source = np.where(n1, df.shift(1).start, df.start)) df = df.assign(sink = np.where(n1, df.shift(-1).end, df.end)) </code></pre> <p>But i get the following error:</p> <pre><code>TypeError: unsupported operand type(s) for &amp;: 'str' and 'bool' </code></pre>
<p>It's kind of hazy what your question is, so I'll just address the error you have been getting.</p> <p>Your n1 calculation hinges on an expression <code>statement 1 &amp; statement 2</code>, if you have written it correctly. Right now it executes <code>df['end'] == (df['start'].shift() &amp; df.timediff.dt.seconds / 3600 &lt;3)</code>, which tries to combine a string (<code>df['start'].shift()</code>) with a boolean (<code>df.timediff.dt.seconds/3600 &lt; 3</code>) using <code>&amp;</code>.</p> <p>So the fix is to add brackets around your statements:</p> <pre><code>n1= ((df['end'] == df['start'].shift()) &amp; (df.timediff.dt.seconds / 3600 &lt; 3)) </code></pre> <p>The last expression does not need brackets, as the expressions are executed back to front, but it never hurts to be explicit.</p>
python|pandas
0
375,750
45,428,574
Actor-Critic model never converges
<p>I'm trying to implement Actor-Critic using Keras &amp; Tensorflow. However, it never converges and I can't figure out why. I decreased the learning rate but it did not change.</p> <p><em>The code is in python3.5.1 and tensorflow1.2.1</em></p> <pre><code>import gym import itertools import matplotlib import numpy as np import sys import tensorflow as tf import collections from keras.models import Model from keras.layers import Input, Dense from keras.utils import to_categorical from keras import backend as K env = gym.make('CartPole-v0') NUM_STATE = env.env.observation_space.shape[0] NUM_ACTIONS = env.env.action_space.n LEARNING_RATE = 0.0005 TARGET_AVG_REWARD = 195 class Actor_Critic(): def __init__(self): l_input = Input(shape=(NUM_STATE, )) l_dense = Dense(16, activation='relu')(l_input) ## Policy Network action_probs = Dense(NUM_ACTIONS, activation='softmax')(l_dense) policy_network = Model(input=l_input, output=action_probs) ## Value Network state_value = Dense(1, activation='linear')(l_dense) value_network = Model(input=l_input, output=state_value) graph = self._build_graph(policy_network, value_network) self.state, self.action, self.target, self.action_probs, self.state_value, self.minimize, self.loss = graph def _build_graph(self, policy_network, value_network): state = tf.placeholder(tf.float32) action = tf.placeholder(tf.float32, shape=(None, NUM_ACTIONS)) target = tf.placeholder(tf.float32, shape=(None)) action_probs = policy_network(state) state_value = value_network(state)[0] advantage = tf.stop_gradient(target) - state_value log_prob = tf.log(tf.reduce_sum(action_probs * action, reduction_indices=1)) p_loss = -log_prob * advantage v_loss = tf.reduce_mean(tf.square(advantage)) loss = p_loss + (0.5 * v_loss) # optimizer = tf.train.RMSPropOptimizer(LEARNING_RATE, decay=.99) optimizer = tf.train.AdamOptimizer(LEARNING_RATE) minimize = optimizer.minimize(loss) return state, action, target, action_probs, state_value, minimize, loss, def predict_policy(self, sess, state): return sess.run(self.action_probs, { self.state: [state] }) def predict_value(self, sess, state): return sess.run(self.state_value, { self.state: [state] }) def update(self, sess, state, action, target): feed_dict = {self.state:[state], self.target:target, self.action:to_categorical(action, NUM_ACTIONS)} _, loss = sess.run([self.minimize, self.loss], feed_dict) return loss def train(env, sess, estimator, num_episodes, discount_factor=1.0): Transition = collections.namedtuple("Transition", ["state", "action", "reward", "loss"]) last_100 = np.zeros(100) for i_episode in range(num_episodes): # Reset the environment and pick the fisrst action state = env.reset() episode = [] # One step in the environment for t in itertools.count(): # Take a step action_probs = estimator.predict_policy(sess, state)[0] action = np.random.choice(np.arange(len(action_probs)), p=action_probs) next_state, reward, done, _ = env.step(action) target = reward + (0 if done else discount_factor * estimator.predict_value(sess, next_state)) # Update our policy estimator loss = estimator.update(sess, state, action, target) # Keep track of the transition episode.append(Transition(state=state, action=action, reward=reward, loss=loss)) if done: break state = next_state total_reward = sum(e.reward for e in episode) last_100[i_episode % 100] = total_reward last_100_avg = sum(last_100) / 100 total_loss = sum(e.loss for e in episode) print('episode %s loss: %f reward: %f last 100: %f' % (i_episode, total_loss, total_reward, last_100_avg)) if last_100_avg 
&gt;= TARGET_AVG_REWARD: break return estimator = Actor_Critic() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) stats = train(env, sess, estimator, 2000, discount_factor=0.99) </code></pre> <p>Here is the log at the beginning:(last 100 is the average reward of last 100 episodes. It automatically increases in the first 100 episodes so ignore it.)</p> <pre><code>episode 0 loss: 17.662344 reward: 15.000000 last 100: 0.150000 episode 1 loss: 15.319713 reward: 13.000000 last 100: 0.280000 episode 2 loss: 38.097054 reward: 32.000000 last 100: 0.600000 episode 3 loss: 22.229492 reward: 19.000000 last 100: 0.790000 episode 4 loss: 31.027534 reward: 26.000000 last 100: 1.050000 episode 5 loss: 21.037663 reward: 18.000000 last 100: 1.230000 episode 6 loss: 18.750641 reward: 16.000000 last 100: 1.390000 episode 7 loss: 23.268227 reward: 20.000000 last 100: 1.590000 episode 8 loss: 27.251028 reward: 23.000000 last 100: 1.820000 episode 9 loss: 20.008078 reward: 17.000000 last 100: 1.990000 episode 10 loss: 28.213932 reward: 24.000000 last 100: 2.230000 episode 11 loss: 28.109922 reward: 23.000000 last 100: 2.460000 episode 12 loss: 25.068121 reward: 21.000000 last 100: 2.670000 episode 13 loss: 59.581238 reward: 50.000000 last 100: 3.170000 episode 14 loss: 26.618759 reward: 22.000000 last 100: 3.390000 episode 15 loss: 28.847467 reward: 24.000000 last 100: 3.630000 episode 16 loss: 22.534216 reward: 17.000000 last 100: 3.800000 episode 17 loss: 19.760979 reward: 15.000000 last 100: 3.950000 episode 18 loss: 31.018209 reward: 25.000000 last 100: 4.200000 episode 19 loss: 22.938683 reward: 16.000000 last 100: 4.360000 episode 20 loss: 30.372072 reward: 24.000000 last 100: 4.600000 </code></pre> <p>After 500 episodes, not only is it not improving, but it is actually worse than the beginning.</p> <pre><code>episode 501 loss: 97.043335 reward: 8.000000 last 100: 13.500000 episode 502 loss: 101.957603 reward: 11.000000 last 100: 13.510000 episode 503 loss: 100.277809 reward: 11.000000 last 100: 13.520000 episode 504 loss: 96.754257 reward: 9.000000 last 100: 13.510000 episode 505 loss: 99.436943 reward: 11.000000 last 100: 13.530000 episode 506 loss: 105.161621 reward: 16.000000 last 100: 13.580000 episode 507 loss: 65.993591 reward: 12.000000 last 100: 13.610000 episode 508 loss: 59.837429 reward: 9.000000 last 100: 13.600000 episode 509 loss: 92.478806 reward: 9.000000 last 100: 13.570000 episode 510 loss: 96.697289 reward: 14.000000 last 100: 13.620000 episode 511 loss: 94.611366 reward: 10.000000 last 100: 13.620000 episode 512 loss: 100.259460 reward: 15.000000 last 100: 13.680000 episode 513 loss: 88.776451 reward: 10.000000 last 100: 13.690000 episode 514 loss: 86.659203 reward: 9.000000 last 100: 13.700000 episode 515 loss: 105.494476 reward: 17.000000 last 100: 13.770000 episode 516 loss: 90.662186 reward: 12.000000 last 100: 13.770000 episode 517 loss: 90.777634 reward: 12.000000 last 100: 13.810000 episode 518 loss: 91.290558 reward: 14.000000 last 100: 13.860000 episode 519 loss: 94.902023 reward: 11.000000 last 100: 13.870000 episode 520 loss: 86.746582 reward: 12.000000 last 100: 13.900000 </code></pre> <p>On the other hand, the plain Policy Gradient does converge.</p> <pre><code>import gym import itertools import matplotlib import numpy as np import sys import tensorflow as tf import collections from keras.models import Model from keras.layers import Input, Dense from keras.utils import to_categorical from keras import backend as K env = gym.make('CartPole-v0') 
NUM_STATE = env.env.observation_space.shape[0] NUM_ACTIONS = env.env.action_space.n LEARNING_RATE = 0.0005 TARGET_AVG_REWARD = 195 class PolicyEstimator(): """ Policy Function approximator. """ def __init__(self): l_input = Input(shape=(NUM_STATE, )) l_dense = Dense(16, activation='relu')(l_input) action_probs = Dense(NUM_ACTIONS, activation='softmax')(l_dense) model = Model(inputs=[l_input], outputs=[action_probs]) self.state, self.action, self.target, self.action_probs, self.minimize, self.loss = self._build_graph(model) def _build_graph(self, model): state = tf.placeholder(tf.float32) action = tf.placeholder(tf.float32, shape=(None, NUM_ACTIONS)) target = tf.placeholder(tf.float32, shape=(None)) action_probs = model(state) log_prob = tf.log(tf.reduce_sum(action_probs * action, reduction_indices=1)) loss = -log_prob * target # optimizer = tf.train.RMSPropOptimizer(LEARNING_RATE, decay=.99) optimizer = tf.train.AdamOptimizer(LEARNING_RATE) minimize = optimizer.minimize(loss) return state, action, target, action_probs, minimize, loss def predict(self, sess, state): return sess.run(self.action_probs, { self.state: [state] }) def update(self, sess, state, action, target): feed_dict = {self.state:[state], self.target:[target], self.action:to_categorical(action, NUM_ACTIONS)} _, loss = sess.run([self.minimize, self.loss], feed_dict) return loss def train(env, sess, estimator_policy, num_episodes, discount_factor=1.0): Transition = collections.namedtuple("Transition", ["state", "action", "reward"]) last_100 = np.zeros(100) for i_episode in range(num_episodes): # Reset the environment and pick the fisrst action state = env.reset() episode = [] # One step in the environment for t in itertools.count(): # Take a step action_probs = estimator_policy.predict(sess, state)[0] action = np.random.choice(np.arange(len(action_probs)), p=action_probs) next_state, reward, done, _ = env.step(action) # Keep track of the transition episode.append(Transition(state=state, action=action, reward=reward)) if done: break state = next_state # Go through the episode and make policy updates for t, transition in enumerate(episode): # The return after this timestep target = sum(discount_factor**i * t2.reward for i, t2 in enumerate(episode[t:])) # Update our policy estimator loss = estimator_policy.update(sess, transition.state, transition.action, target) total_reward = sum(e.reward for e in episode) last_100[i_episode % 100] = total_reward last_100_avg = sum(last_100) / 100 print('episode %s reward: %f last 100: %f' % (i_episode, total_reward, last_100_avg)) if last_100_avg &gt;= TARGET_AVG_REWARD: break return policy_estimator = PolicyEstimator() with tf.Session() as sess: sess.run(tf.global_variables_initializer()) stats = train(env, sess, policy_estimator, 2000, discount_factor=1.0) </code></pre> <p>Reference code</p> <p><a href="https://github.com/jaara/AI-blog/blob/master/CartPole-A3C.py" rel="nofollow noreferrer">https://github.com/jaara/AI-blog/blob/master/CartPole-A3C.py</a></p> <p><a href="https://github.com/coreylynch/async-rl" rel="nofollow noreferrer">https://github.com/coreylynch/async-rl</a></p> <p>Any help is appreciated.</p> <p>[Update]</p> <p>I changed the code in <code>_build_graph</code> from</p> <pre><code>advantage = tf.stop_gradient(target) - state_value log_prob = tf.log(tf.reduce_sum(action_probs * action, reduction_indices=1)) p_loss = -log_prob * advantage v_loss = tf.reduce_mean(tf.square(advantage)) loss = p_loss + (0.5 * v_loss) </code></pre> <p>to</p> <pre><code>advantage = target - 
state_value log_prob = tf.log(tf.reduce_sum(action_probs * action, reduction_indices=1)) p_loss = -log_prob * tf.stop_gradient(advantage) v_loss = 0.5 * tf.reduce_mean(tf.square(advantage)) loss = p_loss + v_loss </code></pre> <p>It got better and hit 200 rewards(maximum) a lot. However, after 4000 episodes, it still hasn't hit 195 average.</p>
<p>The first obvious issue is that the wrong part of the advantage has its gradient stopped:</p> <pre><code>advantage = tf.stop_gradient(target) - state_value </code></pre> <p>should be</p> <pre><code>advantage = target - tf.stop_gradient(state_value) </code></pre> <p>There is no gradient for target either way (it is a constant), and what you want to achieve is to stop the gradient from flowing through the value network (the baseline) for the policy gradient. You have a separate loss for the baseline (which looks fine).</p> <p>Another possible error is the way you reduce the losses. You are explicitly calling reduce_mean for v_loss, but never for p_loss. Consequently the scaling is off and your value network probably learns much slower (since you average across the first - probably time - dimension).</p>
python|tensorflow|deep-learning|keras|reinforcement-learning
1
375,751
45,709,488
If one row in two columns contain the same string python pandas
<p>I have a dataframe looking like this:</p> <pre><code> id k1 k2 same 1 re_setup oo_setup true 2 oo_setup oo_setup true 3 alerting bounce false 4 bounce re_oversetup false 5 re_oversetup alerting false 6 alerting_s re_setup false 7 re_oversetup oo_setup true 8 alerting bounce false </code></pre> <p>So, I need to classify rows by whether the string 'setup' is contained or not. The simple output would be:</p> <pre><code>id k1 k2 same 1 re_setup oo_setup true 2 oo_setup oo_setup true 3 alerting bounce false 4 bounce re_setup false 5 re_setup alerting false 6 alerting_s re_setup false 7 re_setup oo_setup true 8 alerting bounce false </code></pre> <p>I've tried the following, but as I expected, I get an error from selecting multiple columns:</p> <pre><code>data['same'] = data[data['k1', 'k2'].str.contains('setup')==True] </code></pre>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html" rel="nofollow noreferrer"><code>apply</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html" rel="nofollow noreferrer"><code>str.contains</code></a>, because it works only with a <code>Series</code> (one column):</p> <pre><code>print (data[['k1', 'k2']].apply(lambda x: x.str.contains('setup'))) k1 k2 0 True True 1 True True 2 False False 3 False True 4 True False 5 False True 6 True True 7 False False </code></pre> <p>Then add <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.all.html" rel="nofollow noreferrer"><code>DataFrame.all</code></a> to check whether all values per row are <code>True</code>:</p> <pre><code>data['same'] = data[['k1', 'k2']].apply(lambda x: x.str.contains('setup')).all(1) print (data) id k1 k2 same 0 1 re_setup oo_setup True 1 2 oo_setup oo_setup True 2 3 alerting bounce False 3 4 bounce re_setup False 4 5 re_setup alerting False 5 6 alerting_s re_setup False 6 7 re_setup oo_setup True 7 8 alerting bounce False </code></pre> <p>or <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.any.html" rel="nofollow noreferrer"><code>DataFrame.any</code></a> to check for at least one <code>True</code> per row:</p> <pre><code>data['same'] = data[['k1', 'k2']].applymap(lambda x: 'setup' in x).any(1) print (data) id k1 k2 same 0 1 re_setup oo_setup True 1 2 oo_setup oo_setup True 2 3 alerting bounce False 3 4 bounce re_setup True 4 5 re_setup alerting True 5 6 alerting_s re_setup True 6 7 re_setup oo_setup True 7 8 alerting bounce False </code></pre> <p>Another solution is <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.applymap.html" rel="nofollow noreferrer"><code>applymap</code></a> for an element-wise check:</p> <pre><code>data['same'] = data[['k1', 'k2']].applymap(lambda x: 'setup' in x).all(1) print (data) id k1 k2 same 0 1 re_setup oo_setup True 1 2 oo_setup oo_setup True 2 3 alerting bounce False 3 4 bounce re_setup False 4 5 re_setup alerting False 5 6 alerting_s re_setup False 6 7 re_setup oo_setup True 7 8 alerting bounce False </code></pre> <p>If there are only 2 columns, simply chain the conditions with <code>&amp;</code> (like <code>all</code>) or <code>|</code> (like <code>any</code>):</p> <pre><code>data['same'] = data['k1'].str.contains('setup') &amp; data['k2'].str.contains('setup') print (data) id k1 k2 same 0 1 re_setup oo_setup True 1 2 oo_setup oo_setup True 2 3 alerting bounce False 3 4 bounce re_setup False 4 5 re_setup alerting False 5 6 alerting_s re_setup False 6 7 re_setup oo_setup True 7 8 alerting bounce False </code></pre>
python|string|pandas|dataframe
4
375,752
45,429,652
How do I repeat or tile a numpy array but change the value in one element each time it is tiled?
<p>Let's say there is a numpy array <code>a = [1,1,1,0]</code></p> <p>I want to tile or repeat this array 3 times, but make the last element increase by 1 every time it is tiled/repeated.</p> <p>That is, I want </p> <pre><code>result = [[1,1,1,0], [1,1,1,1], [1,1,1,2]] </code></pre> <p>in the end.</p> <p>I think I saw someone use a function to do this, but I cannot remember what that function was. Or I might be wrong.</p>
<pre><code>import numpy as np a = np.array([1, 1, 1, 0]) #how often to repeat the array along first dimension? b = 20 #repeat b times along first dimension, one time along second x = np.tile(a, (b,1)) print(x) #just some consecutive numbers y = np.arange(20) print(y) #overwrite fourth column of array x[:, 3] = y print(x) </code></pre>
numpy|repeat|tile
4
375,753
45,538,740
How to let Tensorflow object detection api use gray image to train(just 1 channel for input tensor)?
<p>I just want to get real-time speed when using a model trained by the TensorFlow Object Detection API. The input tensor has shape [1, width, height, 3], i.e. 3 channels, but I think that if I could use just 1 channel to train my model, it would only need gray images as input. This would reduce the computational complexity, and for my app speed is very important.</p>
<p>We don't have any pretrained models that do one channel images. If you're interested in creating a new model, consider adding a new <a href="https://github.com/tensorflow/models/blob/4f32535fe7040bb1e429ad0e3c948a492a89482d/research/object_detection/g3doc/defining_your_own_model.md#defining-a-new-faster-r-cnn-or-ssd-feature-extractor" rel="nofollow noreferrer">SSD feature extractor</a>.</p> <p>That being said, you might not see tangible speed ups to your model. Using only one channel only affects the input layer of the network, not the intermediate layers. It might be better to consider using a thinner Mobilenet feature extractor if performance is your highest concern.</p>
tensorflow|object-detection
2
375,754
45,577,884
Pandas value counts save output to file
<p>I use pandas's value_counts() method to get the number of times each value in a column appears. Although the output looks like what I expected, attempting to save it using numpy savetxt or pandas to_csv returns only one column (with counts). I'd like to be able to save both.</p>
<p>One way would be to use reset_index and then to_csv</p> <pre><code>df['key'].value_counts().reset_index().to_csv('df.csv') </code></pre>
pandas|numpy
14
375,755
45,293,449
Creating a New Numpy Array from Elements in a Numpy Array
<p>Can't seem to figure this one out. Very new to numpy.</p> <p>I have a numpy array of shape <code>(200,1,1000,1000)</code> which corresponds to (number of images, channel, x_of_image, y_of_image). So I have 200 images with 1 channel that are 1000x1000 pixels each.</p> <p>I want to take each of the 200 images <code>(1,1000,1000)</code>, do an operation on the image portion <code>(1000,1000)</code>, and append/concatenate it to a brand new array. </p> <pre><code>new_array = np.array([]) for image in original_array: new_array = np.concatenate(new_array,original_array[0].operation()) </code></pre> <p>The new array would end up being the exact same shape as the original <code>(200,1,1000,1000)</code>, just with different images because of the operation performed. </p> <p>Bonus: How would I just do the operation on some percentage of the array, say 50%? This would output an array of <code>(100,1,1000,1000)</code></p>
<p>Avoid calling <code>np.concatenate</code> in a loop. It allocates a new array and copies everything. This is slow and you may run into memory problems if the discarded copies pile up without being garbage collected.</p> <p>How this should be done depends mostly on the operations you perform on the images. Most numpy operations are designed to work very well with multi-dimensional arrays. </p> <ol> <li><p>Try to express the operation with numpy array functions. For example, normalizing the images to a range of 0..1 could be done like this:</p> <pre><code>new_array = original_array - original_array.min(axis=(-1, -2), keepdims=True) new_array /= new_array.max(axis=(-1, -2), keepdims=True) </code></pre></li> <li><p>If the image operations are too complex to be broken down into numpy functions, allocate the new array first and modify it in place.</p> <pre><code>new_array = np.empty_like(original_array) for i in range(new_array.shape[0]): new_array[i] = complicated_operation(original_array[i]) </code></pre> <p>Or copy the original array and work only on the copy:</p> <pre><code>new_array = original_array.copy() for image in new_array: image[:] = complicated_operation(image) </code></pre></li> <li><p>If for some reason you do not want to pre-allocate, store the images in a temporary list of arrays and concatenate them in the end:</p> <pre><code>new_images = [] for image in original_array: new_images.append(image.operation()) new_array = np.stack(new_images) </code></pre></li> <li><p>If you really want to successively concatenate arrays, note that the arrays-to-be-concatenated are passed to the function as one sequence, like this:</p> <pre><code>new_array = np.array([]) for image in original_array: new_array = np.concatenate([new_array, image.operation()]) </code></pre></li> </ol> <p>Bonus: look up <a href="https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html" rel="nofollow noreferrer">slicing</a>. This is very basic numpy/Python and should definitely be in your toolbox.</p> <pre><code> original_array[::2, :, :, :] # take every second image </code></pre>
image|numpy
3
375,756
45,527,627
Pandas copy dataframe keeping only max value for rows with same index
<p>If I have a dataframe that looks like</p> <pre><code> value otherstuff 0 4 x 0 5 x 0 2 x 1 2 x 2 3 x 2 7 x </code></pre> <p>what is a succinct way to get a new dataframe that looks like</p> <pre><code> value otherstuff 0 5 x 1 2 x 2 7 x </code></pre> <p>where rows with the same index have been dropped so only the row with the maximum 'value' remains? As far as I am aware there is no option in df.drop_duplicates to keep the max, only the first or last occurrence.</p>
<p>You can use <code>max</code> with <code>level=0</code>:</p> <pre><code>df.max(level=0) </code></pre> <p>Output:</p> <pre><code> value otherstuff 0 5 x 1 2 x 2 7 x </code></pre> <p>OR, to address other columns mentioned in comments:</p> <pre><code>df.groupby(level=0,group_keys=False)\ .apply(lambda x: x.loc[x['value']==x['value'].max()]) </code></pre> <p>Output:</p> <pre><code> value otherstuff 0 5 x 1 2 x 2 7 x </code></pre>
python|pandas|dataframe
5
375,757
45,608,960
tensorflow multi gpu tower error: loss = tower_loss(scope) . ValueError: Variable tower_1/loss/xentropy_mean/avg/ does not exist
<p>When I use multi gpu in tensorflow, and Errors came out as follows:</p> <pre><code> Traceback (most recent call last): File "multi_gpu_train.py", line 290, in &lt;module&gt; tf.app.run() File "/usr/lib/python2.7/site-packages/tensorflow/python/platform/app.py", line 48, in run _sys.exit(main(_sys.argv[:1] + flags_passthrough)) File "multi_gpu_train.py", line 286, in main train() File "multi_gpu_train.py", line 187, in train loss = tower_loss(scope) File "multi_gpu_train.py", line 94, in tower_loss loss_averages_op = loss_averages.apply(losses + [total_loss]) File "/usr/lib/python2.7/site-packages/tensorflow/python/training/moving_averages.py", line 375, in apply colocate_with_primary=(var.op.type in ["Variable", "VariableV2"])) File "/usr/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 174, in create_zeros_slot colocate_with_primary=colocate_with_primary) File "/usr/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 149, in create_slot_with_initializer dtype) File "/usr/lib/python2.7/site-packages/tensorflow/python/training/slot_creator.py", line 66, in _create_slot_var validate_shape=validate_shape) File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 1065, in get_variable use_resource=use_resource, custom_getter=custom_getter) File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 962, in get_variable use_resource=use_resource, custom_getter=custom_getter) File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 367, in get_variable validate_shape=validate_shape, use_resource=use_resource) File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 352, in _true_getter use_resource=use_resource) File "/usr/lib/python2.7/site-packages/tensorflow/python/ops/variable_scope.py", line 682, in _get_single_variable "VarScope?" % name) ValueError: Variable tower_1/loss/xentropy_mean/avg/ does not exist, or was not created with tf.get_variable(). Did you mean to set reuse=None in VarScope? </code></pre> <p>and the main function is shown below, and it use tower_loss function</p> <pre><code> tower_grads = [] for i in xrange(FLAGS.num_gpus): with tf.device('/gpu:%d' % GPU[i]): with tf.name_scope('%s_%d' % (TOWER_NAME, GPU[i])) as scope: # Calculate the loss for one tower of the CIFAR model. This function # constructs the entire CIFAR model but shares the variables across # all towers. loss = tower_loss(scope) # reuse = True # Reuse variables for the next tower. tf.get_variable_scope().reuse_variables() # Retain the summaries from the final tower. summaries = tf.get_collection(tf.GraphKeys.SUMMARIES, scope) # Calculate the gradients for the batch of data on this CIFAR tower. grads = opt.compute_gradients(loss) # Keep track of the gradients across all towers. tower_grads.append(grads) # We must calculate the mean of each gradient. Note that this is the # synchronization point across all towers. grads = average_gradients(tower_grads) </code></pre> <p>the tower_loss function is shown below. error info shows that error cames out in tower_1, and it is ok with tower_0. that means the first iteration in </p> <pre><code>for i in xrange(FLAGS.num_gpus): </code></pre> <p>is successful and I don't know why.</p> <pre><code>def tower_loss(scope): """Calculate the total loss on a single tower running the CIFAR model. Args: scope: unique prefix string identifying the CIFAR tower, e.g. 
'tower_0' Returns: Tensor of shape [] containing the total loss for a batch of data """ # Get images and labels for CIFAR-10. images, labels = load_train_data.input_pipeline(FLAGS.img_path, FLAGS.label_path, FLAGS.csv_file, FLAGS.batch_size,trainning=True) # Build inference Graph. vgg_net = vgg16.FCN8VGG('./../lane_seg/vgg16.npy') vgg_net.build(images,train=True,debug=False,num_classes=load_train_data.NUM_CLASSES) logits = vgg_net.upscore32 # Build the portion of the Graph calculating the losses. Note that we will # assemble the total_loss using a custom function below. labels = tf.squeeze(labels, squeeze_dims=[3]) loss_weights = [0.00588551861547, 0.500363638561, 0.493750842824] _ = weighted_loss(logits=logits,labels=labels,num_classes=load_train_data.NUM_CLASSES,head=loss_weights) # _ = cifar10.loss(logits, labels) # Assemble all of the losses for the current tower only. losses = tf.get_collection('losses', scope) # Calculate the total loss for the current tower. total_loss = tf.add_n(losses, name='total_loss') # Compute the moving average of all individual losses and the total loss. loss_averages = tf.train.ExponentialMovingAverage(0.9, name='avg') loss_averages_op = loss_averages.apply(losses + [total_loss]) # Attach a scalar summary to all individual losses and the total loss; do the # same for the averaged version of the losses. for l in losses + [total_loss]: # Remove 'tower_[0-9]/' from the name in case this is a multi-GPU training # session. This helps the clarity of presentation on tensorboard. loss_name = re.sub('%s_[0-9]*/' % TOWER_NAME, '', l.op.name) # Name each loss as '(raw)' and name the moving average version of the loss # as the original loss name. tf.summary.scalar(loss_name +' (raw)', l) tf.summary.scalar(loss_name, loss_averages.average(l)) with tf.control_dependencies([loss_averages_op]): total_loss = tf.identity(total_loss) return total_loss </code></pre>
<p>I have found the answer. The code in the question is an old version, and the newest code is posted at <a href="https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py" rel="nofollow noreferrer">https://github.com/tensorflow/models/blob/master/tutorials/image/cifar10/cifar10_multi_gpu_train.py</a></p> <p>Update the code, and it can run successfully!</p>
tensorflow|multi-gpu
1
375,758
45,368,931
cost function outputs 'nan' in tensorflow
<p>While studying the tensorflow, I faced a problem.<br> The cost function output 'nan'. </p> <p>And, if you find any other wrong in source code let me know the links for it.</p> <p>I am trying to send the cost function value to my trained model, but its not working.</p> <pre><code>tf.reset_default_graph() tf.set_random_seed(777) X = tf.placeholder(tf.float32, [None, 20, 20, 3]) Y = tf.placeholder(tf.float32, [None, 1]) with tf.variable_scope('conv1') as scope: W1 = tf.Variable(tf.random_normal([4, 4, 3, 32], stddev=0.01), name='weight1') L1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME') L1 = tf.nn.relu(L1) L1 = tf.nn.max_pool(L1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME') L1 = tf.reshape(L1, [-1, 10 * 10 * 32]) W1_hist = tf.summary.histogram('conv_weight1', W1) L1_hist = tf.summary.histogram('conv_layer1', L1) with tf.name_scope('fully_connected_layer1') as scope: W2 = tf.get_variable('W2', shape=[10 * 10 * 32, 1], initializer=tf.contrib.layers.xavier_initializer()) b = tf.Variable(tf.random_normal([1])) hypothesis = tf.matmul(L1, W2) + b W2_hist = tf.summary.histogram('fully_connected_weight1', W2) b_hist = tf.summary.histogram('fully_connected_bias', b) hypothesis_hist = tf.summary.histogram('hypothesis', hypothesis) with tf.name_scope('cost') as scope: cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis)) cost_summary = tf.summary.scalar('cost', cost) with tf.name_scope('train_optimizer') as scope: optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost) predicted = tf.cast(hypothesis &gt; 0.5, dtype=tf.float32) accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32)) accuracy_summary = tf.summary.scalar('accuracy', accuracy) train_data_batch, train_labels_batch = tf.train.batch([train_data, train_labels], enqueue_many=True , batch_size=100, allow_smaller_final_batch=True) with tf.Session() as sess: # tensorboard --logdir=./logs/planesnet2_log merged_summary = tf.summary.merge_all() writer = tf.summary.FileWriter('./logs/planesnet2_log') writer.add_graph(sess.graph) sess.run(tf.global_variables_initializer()) coord = tf.train.Coordinator() threads = tf.train.start_queue_runners(coord=coord) total_cost = 0 for step in range(20): x_batch, y_batch = sess.run([train_data_batch, train_labels_batch]) feed_dict = {X: x_batch, Y: y_batch} _, cost_val = sess.run([optimizer, cost], feed_dict = feed_dict) total_cost += cost_val print('total_cost: ', total_cost, 'cost_val: ', cost_val) coord.request_stop() coord.join(threads) </code></pre>
<p>You use a cross-entropy loss without applying a sigmoid activation to <code>hypothesis</code>, thus its values are not bounded in ]0,1]. The log function is not defined for negative values, and it most likely gets some. Add a sigmoid and an epsilon factor to avoid negative or 0 values and you should be fine.</p>
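<p>A minimal sketch of that fix, keeping the rest of the graph unchanged (the epsilon value is just an illustrative choice):</p> <pre><code>hypothesis = tf.sigmoid(tf.matmul(L1, W2) + b)   # bound outputs to (0, 1)

eps = 1e-7  # keep log() away from exactly 0
cost = -tf.reduce_mean(Y * tf.log(hypothesis + eps)
                       + (1 - Y) * tf.log(1 - hypothesis + eps))
</code></pre> <p>Alternatively, <code>tf.nn.sigmoid_cross_entropy_with_logits</code> applied to the raw logits handles the numerical issues internally.</p>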
python|tensorflow|neural-network|deep-learning
4
375,759
45,607,589
Resetting the shape of input placeholder in a tensorflow meta graph
<p>I trained a neural network in tensorflow. At the time of training, I explicitly defined the shape of my input placeholder for a batch size of 20, like this <code>[20,224,224,3]</code>. I defined the batch size explicitly because there was a <code>split</code> layer in the network and passing <code>None</code> as the batch size was throwing an error there. Is there any way that I can change the shape of the input placeholder at inference time so that I can run inference on a single image?</p>
<p>If you have the *.meta file of the saved checkpoint, you can reset the input to the graph.</p> <pre><code># Set the correct data type and shape; the shape can also be (None, 224, 224, 3) new_placeholder = tf.placeholder(tf.float32, shape=(1, 224, 224, 3), name='inputs_new_name') # state here the name of the placeholder you used as the original input; import_meta_graph returns a Saver for the restored graph saver = tf.train.import_meta_graph('/path/to/model.meta', input_map={"original_inputs_placeholder_name:0": new_placeholder}) saver.restore(sess, '/path/to/your_checkpoint')  # assuming an active tf.Session named sess </code></pre>
machine-learning|tensorflow|neural-network|deep-learning|conv-neural-network
3
375,760
45,365,300
sk-learn saved model to disk, but get only array
<p>When storing a <code>fitted_clf</code> sk-learn classifier like:</p> <pre><code>joblib.dump(fitted_clf, some_path) </code></pre> <p>Most of the time when loading it back into memory like:</p> <pre><code>joblib.load(some_path) </code></pre> <p>only an array of <code>array(['col1', 'col2'], dtype=object)</code> is returned instead of the fitted pipeline.</p> <p>However, sometimes I get the real pipeline, and I do not understand why the behavior is not consistent.</p> <h1>edit</h1> <p>I think this has to do with different joblib versions, i.e. <code>from sklearn.externals import joblib</code> works, but when using regular <code>joblib</code> I only get an array.</p>
<p>Confirmed. Using <code>from sklearn.externals import joblib</code> fixes this and gives consistent behavior.</p>
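<p>For completeness, a small sketch of the consistent pattern (the path and <code>X_test</code> are illustrative placeholders):</p> <pre><code>from sklearn.externals import joblib

# save and load with the same joblib implementation
joblib.dump(fitted_clf, '/tmp/model.joblib')
clf = joblib.load('/tmp/model.joblib')
clf.predict(X_test)
</code></pre>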
python|numpy|scikit-learn|pickle|joblib
4
375,761
45,332,960
Interweave two dataframes
<p>Suppose I have two dataframes <code>d1</code> and <code>d2</code></p> <pre><code>d1 = pd.DataFrame(np.ones((3, 3), dtype=int), list('abc'), [0, 1, 2]) d2 = pd.DataFrame(np.zeros((3, 2), dtype=int), list('abc'), [3, 4]) </code></pre> <hr> <pre><code>d1 0 1 2 a 1 1 1 b 1 1 1 c 1 1 1 </code></pre> <hr> <pre><code>d2 3 4 a 0 0 b 0 0 c 0 0 </code></pre> <hr> <p>What is an easy and generalized way to interweave two dataframes' columns. We can assume that the number of columns in <code>d2</code> is always one less than the number of columns in <code>d1</code>. And, the indices are the same.</p> <p>I want this:</p> <pre><code>pd.concat([d1[0], d2[3], d1[1], d2[4], d1[2]], axis=1) 0 3 1 4 2 a 1 0 1 0 1 b 1 0 1 0 1 c 1 0 1 0 1 </code></pre>
<p>Using <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.concat.html" rel="noreferrer"><code>pd.concat</code></a> to combine the DataFrames, and <a href="http://toolz.readthedocs.io/en/latest/api.html#toolz.itertoolz.interleave" rel="noreferrer"><code>toolz.interleave</code></a> reorder the columns:</p> <pre><code>from toolz import interleave pd.concat([d1, d2], axis=1)[list(interleave([d1, d2]))] </code></pre> <p>The resulting output is as expected:</p> <pre><code> 0 3 1 4 2 a 1 0 1 0 1 b 1 0 1 0 1 c 1 0 1 0 1 </code></pre>
python|pandas|numpy|dataframe
20
375,762
45,645,276
Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'
<p>I got this error message when declaring the input layer in Keras.</p> <blockquote> <p>ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].</p> </blockquote> <p>My code is like this</p> <pre><code>model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28))) </code></pre> <p>Sample application: <a href="https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb" rel="noreferrer">https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb</a></p>
<p>By default, Convolution2D (<a href="https://keras.io/layers/convolutional/" rel="noreferrer">https://keras.io/layers/convolutional/</a>) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the format (samples, channels, rows, cols). You should be able to fix this using the optional keyword <code>data_format = 'channels_first'</code> when declaring the Convolution2D layer.</p> <pre><code>model.add(Convolution2D(32, (3, 3), activation='relu', input_shape=(1,28,28), data_format='channels_first')) </code></pre>
python|tensorflow|neural-network|keras|keras-layer
45
375,763
45,447,182
python, numpy - in an array of string, compare element with previous for equality
<p>Let's consider the following array: <code>x = np.array(["john", "john", "ellis", "lambert", "john"])</code></p> <p>Is there a way to compare every element of the array to the previous one and return a boolean array? In the present example, the result would be <code>[True,False,False,False]</code>.</p> <p>Is there any function (similar to <code>np.diff</code>) to achieve that?</p>
<p>You can do this with indexing:</p> <pre><code>array[:-1] == array[1:] </code></pre>
python|numpy
3
375,764
45,335,053
Reverse string columns in a pandas subset dataframe
<p>I have the following dataframe. </p> <pre><code> ID LOC Alice Bob Karen 0 1 CH 9|5 6|3 4|4 1 2 ES 1|1 0|8 2|0 2 3 DE 2|4 6|6 3|1 3 4 ES 3|9 1|2 4|2 </code></pre> <p>Alice and Bob columns contain string values. I want to reverse the strings in these columns conditional on the value of another column. For example, where LOC==ES, reversing the strings in the corresponding columns would look like: </p> <pre><code> ID LOC Alice Bob Karen 0 1 CH 9|5 6|3 4|4 1 2 ES 1|1 8|0 0|2 2 3 DE 2|4 6|6 3|1 3 4 ES 9|3 2|1 2|4 </code></pre> <p>Is there a fast way to perform this operation on all matching rows in a csv file with thousands rows? </p> <p>Thank you.</p>
<pre><code>#cols = ['Alice','Bob'] In [17]: cols = df.columns.drop(['ID','LOC']) In [18]: df.loc[df.LOC=='ES', cols] = df.loc[df.LOC=='ES', cols].apply(lambda x: x.str[::-1]) In [19]: df Out[19]: ID LOC Alice Bob Karen 0 1 CH 9|5 6|3 4|4 1 2 ES 1|1 8|0 0|2 2 3 DE 2|4 6|6 3|1 3 4 ES 9|3 2|1 2|4 </code></pre>
python|string|pandas|reverse
5
375,765
45,288,990
why does .str method change the shape of a pandas' series?
<h1>the type of the data</h1> <pre><code> In [1]: print(type(ebola_melt)) &lt;class 'pandas.core.frame.DataFrame'&gt; </code></pre> <h1>the column of interest is created such this</h1> <pre><code> In [2]: ebola_melt['str_split'] = ebola_melt['type_country'] .str.split('_') In [3]: print(type(ebola_melt['str_split'])) &lt;class 'pandas.core.series.Series'&gt; </code></pre> <h1>.get(0) with .str method applied</h1> <pre><code> In [4]: ebola_melt['str_split'].str.get(0) Out[4]: 0 Cases 1 Cases 2 Cases 3 Cases 4 Cases ... </code></pre> <h1>.get(0) without .str method applied</h1> <pre><code> In [5]: ebola_melt['str_split'].get(0) Out[5]: ['Cases', 'Guinea'] </code></pre>
<p>pandas.Series.str.get extracts elements from lists in the Series/Index.</p> <p>pandas.Series.get gets items from object for a given index value</p> <p>Please refer documentation <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.get.html</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.get.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.get.html</a></p> <p>so, </p> <pre><code> ebola_melt['str_split'] #contains ['Cases', 'Guinea'] ebola_melt['str_split'].str.get(0) #returns the values present in the zeorth index of all rows which is Cases ebola_melt['str_split'].get(0) #returns the row present in the zeorth index ebola_melt['str_split'].get #returns the all the rows </code></pre>
python|pandas|dataframe
0
375,766
45,444,606
Creating a standalone file using Pandas code
<p>I have little to no background in Python or computer science so I’ll try my best to explain what I want to accomplish. I have a Pandas script in Jupyter notebook that edits an Excel .csv file and exports it as an Excel .xlsx file. Basically the reason why we want to do this is because we get these same Excel spreadsheets full of unwanted and disorganized data from the same source. I want other people at my office that don’t have Python to be able to use this script to edit these spreadsheets. From what I understand, this involves creating a standalone file.</p> <p>Here is my code from Pandas that exports a new spreadsheet:</p> <pre><code>import pandas as pd from pandas import ExcelWriter test = pd.DataFrame.from_csv('J:/SDGE/test.csv', index_col=None) t = test for col in ['Bill Date']: t[col] = t[col].ffill() T = t[t.Meter.notnull()] T = T.reset_index(drop=True) writer = ExcelWriter('PythonExport.xlsx') T.to_excel(writer,'Sheet5') writer.save() </code></pre> <p>How can I make this code into a standalone executable file? I've seen other forums with responses to similar problems, but I still don't understand how to do this.</p>
<p>First, you need to change some parts in your code to make it work for anybody, without the need for them to edit the Python code. Secondly, you will need to convert your file to an executable (.exe).</p> <p>There is only one part in your code that needs to be changed to work for everyone: the csv file name and directory</p> <p>Since your code only works when the file "test.csv" is in the "J:/SDGE/" directory, you can follow one of the following solutions:</p> <ul> <li>Tell everyone who uses the program that the file must be in a precise public directory and named "test.csv" in order to work. (bad)</li> <li>Change your program to allow for input from the user. This is a little more complex, but is the solution that people probably want:</li> </ul> <p>Add an import for a file selector at the top:</p> <pre><code>from tkinter.filedialog import askopenfilename </code></pre> <p>Replace</p> <pre><code>'J:/SDGE/test.csv' </code></pre> <p>With</p> <pre><code>askopenfilename() </code></pre> <p>This should be the final python script:</p> <pre><code>import pandas as pd from pandas import ExcelWriter from tkinter.filedialog import askopenfilename #added this test = pd.DataFrame.from_csv(askopenfilename(), index_col=None) t = test for col in ['Bill Date']: t[col] = t[col].ffill() T = t[t.Meter.notnull()] T = T.reset_index(drop=True) writer = ExcelWriter('PythonExport.xlsx') T.to_excel(writer,'Sheet5') writer.save() </code></pre> <p>However, you want this as an executable program, that way others don't have to have python installed and know how to run the script. There are several ways to turn your new .py file into an executable. I would look into this <a href="https://stackoverflow.com/questions/11915462/how-to-convert-python-py-file-into-an-executable-file-for-use-cross-platform">thread</a>.</p>
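<p>For the conversion step, one common route (shown only as a rough illustration; the script name is a placeholder and the exact flags depend on your setup) is PyInstaller:</p> <pre><code>pip install pyinstaller
pyinstaller --onefile your_script.py   # your_script.py is a placeholder name
# the bundled executable ends up in the dist/ folder
</code></pre>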
python|excel|pandas
1
375,767
45,469,953
Lookup values from one dataframe in multiple columns of another dataframe?
<p>I currently have a dataframe (df1) with one column being a list of numbers. I want to look up those numbers in another dataframe (df2) that has two integer columns, see if the number from df1 falls in between the range of those two columns, and get the data from the matching row. Below is my current approach; is there a better way of doing this?</p> <pre><code>for index, row in df1.iterrows(): print df2[(df2['start'] &lt;= row['num']) &amp; (df2['end'] &gt;= row['num'])]['data'].iloc[0] </code></pre> <p>Here is what the head of df1 looks like:</p> <pre><code> num 0 1216942535 1 1220432129 2 1501931542 </code></pre> <p>head of df2:</p> <pre><code> organization_name start end 0 Service 2000 Srl 1478947232 1478947239 1 Autolinee F Lli Bucci Urbino P 1478947240 1478947247 2 S.M.S. DISTRIBUTION SRL 1478947248 1478947255 3 ALTOPACK SRL 1478947256 1478947263 4 COPYWORLD SRL 1478947264 1478947271 </code></pre>
<p>Basic use of <code>.loc</code> and boolean array logic: </p> <pre><code># parentheses are mandatory here result = df2.loc[(df1.num &lt; df2.end) &amp; (df1.num &gt; df2.start), "organization_name"] </code></pre> <p>Test with a Minimal Working Example: </p> <pre><code>df1 = pd.DataFrame(np.random.randint(0, 10, 5)) df2 = pd.DataFrame({ "orgname": [str(i) for i in range(5)], "start": np.random.randint(-5, 5, 5), "end": np.random.randint(5, 15, 5) })[["orgname", "start", "end"]] df2.loc[(df1[0] &lt; df2.end) &amp; (df1[0] &gt; df2.start), "orgname"] </code></pre>
pandas|dataframe
2
375,768
45,465,117
Reverse every other row in TensorFlow
<p>Given a tensor <code>input</code> of undefined shape <code>H x W</code>, I would like to reverse every other row.</p> <p>In numpy, I would simply do</p> <pre><code>input[1::2, :] = input[1::2, ::-1] </code></pre> <p>but this is apparently not possible in TensorFlow.</p> <p>Note that the input shape is only <em>partially-known</em>, i.e., <code>input.shape == (None, None)</code>.</p> <p>Any ideas?</p>
<p>You can achieve the same using a placeholder:</p> <pre><code>input = tf.placeholder(shape=(None, None), dtype=tf.int32) # define axis to reverse axis_to_reverse=1 input_reversed = tf.reverse(input, [axis_to_reverse]) sess = tf.Session() _input_reversed = sess.run(input_reversed, {input: your_array}) </code></pre>
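<p>The snippet above reverses every row. To flip only the odd-numbered rows - which is what the question asks for - one hedged sketch that still works with a partially known shape is to build a row-parity mask and select whole rows (reusing the <code>input</code> placeholder above):</p> <pre><code>num_rows = tf.shape(input)[0]
odd_rows = tf.equal(tf.range(num_rows) % 2, 1)   # True for rows 1, 3, 5, ...
flipped = tf.reverse(input, [1])                 # every row reversed left to right
output = tf.where(odd_rows, flipped, input)      # even rows keep their original order
</code></pre> <p>This relies on <code>tf.where</code> accepting a rank-1 condition that selects whole rows from either <code>flipped</code> or <code>input</code>.</p>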
python|tensorflow
0
375,769
45,385,661
Object Detection API - How to create an Ensemble of trainings?
<p>I already created an Ensemble for classification (average -or so- of predictions per images), or for Semantic Segmentation (average -or so- of predictions per pixels), but I don't really know how to proceed for Object Detection.. My guess would be to extract all the region proposals of all my networks, then to run my classifiers on the <em>X</em> best of them, and finally to average the predictions for all the bounding boxes. But how should I do that with architectures following the <a href="https://github.com/tensorflow/models/tree/master/object_detection" rel="nofollow noreferrer">Object Detection API</a>?</p> <p>I guess the regions proposals can be extracted using <code>extract_proposal_features</code>, and then reinserted to the model, but the only way I see to do that would be to create a complete new model with its own <code>predict</code> method etc, dealing will all the models of my Ensemble. Am I missing an other obvious / simpler method?</p>
<p>That's the basic idea, yes (the Resnet paper has a good explanation of how this is done for Faster R-CNN). Unfortunately we haven't released code to automate this ensembling process (and don't have any plans to). It's possible of course; you will have to manually set this up yourself.</p>
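<p>There is no single recipe, but as a rough illustration (not part of the API; the box format, placeholder variable names and threshold are assumptions), one common manual approach is to pool the boxes and scores produced by every model in the ensemble and run non-maximum suppression over the pooled set:</p> <pre><code>import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression; boxes are [x1, y1, x2, y2]."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size &gt; 0:
        i = order[0]
        keep.append(i)
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + areas - inter)
        order = order[1:][iou &lt; iou_thresh]
    return keep

# boxes_list / scores_list are hypothetical outputs, one array per model
all_boxes = np.concatenate(boxes_list)
all_scores = np.concatenate(scores_list)
kept = nms(all_boxes, all_scores)
</code></pre>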
tensorflow|object-detection|ensemble-learning
2
375,770
45,366,998
AWS Jupyter Notebook EC2 Instance: Getting error while reading pandas csv from S3
<p>While reading a CSV from S3, the kernel is restarting with the below pop up:</p> <pre><code>Kernel Restarting The kernel appears to have died. It will restart automatically </code></pre> <p>Below is the code snippet:</p> <pre><code>import boto3 import pandas as pd from boto.s3.connection import S3Connection YOUR_ACCESS_KEY='******' YOUR_SECRET_KEY='******' YOUR_BUCKET='******' client = boto3.client('s3',aws_access_key_id=YOUR_ACCESS_KEY, aws_secret_access_key=YOUR_SECRET_KEY) client.download_file(YOUR_BUCKET, 'test.csv','test.csv') </code></pre> <p>Error is thrown from the below line :</p> <pre><code>test_df = pd.read_csv('test.csv') </code></pre> <p>But I can access other files such as a sample text file:</p> <pre><code>client.download_file(YOUR_BUCKET, 'sample.txt','sample.txt') print(open('sample.txt').read()) </code></pre> <p>I assumed this error was because of the huge size of the CSV file, but reading a 5MB CSV file is giving the same error.</p>
<p>It appears to be the bug with pyTorch.</p> <p><a href="https://github.com/jupyter/notebook/issues/2784" rel="nofollow noreferrer">https://github.com/jupyter/notebook/issues/2784</a></p> <p>Alternatives and multiple solutions discussed around there, the ticket is still open.</p> <p>Hope it helps.</p>
python|pandas|amazon-web-services|amazon-s3|jupyter-notebook
0
375,771
45,444,705
sympy expression that lambdify's to numpy maximum
<p>I'd like to create a sympy expression that lambdify's to numpy.maximum(). How can I do this? Attempt:</p> <pre><code>import numpy as np import sympy x = sympy.Symbol('x') expr = sympy.Max(2, x) f = sympy.lambdify(x, expr) f(np.arange(5)) </code></pre> <p>This leads to:</p> <pre><code>ValueError: setting an array element with a sequence. </code></pre> <p>Looks like a <code>sympy.Maximum()</code> analogy to <code>np.maximum()</code> was proposed in <a href="https://github.com/sympy/sympy/issues/11027" rel="nofollow noreferrer">sympy issue 11027</a> but never implemented.</p> <p>A clunky workaround:</p> <pre><code>maximum = sympy.Function('maximum') expr2 = maximum(2, x) f2 = lambda c: sympy.lambdify((maximum, x), expr2)(np.maximum, c) f2(np.arange(5)) </code></pre> <p>But I'd really prefer an expression that directly lambdify's to what I want.</p>
<p>The maximum of two numbers x, y is the same as <code>(x+y+abs(x-y))/2</code>. And <code>abs</code> lambdifies easily: </p> <pre><code>expr = (x + 2 + sympy.Abs(x-2))/2 f = sympy.lambdify(x, expr) f(np.arange(5)) # prints [ 2. 2. 2. 3. 4.] </code></pre>
numpy|sympy
2
375,772
45,550,006
Different x values with sharex
<p>I have two time indexed Series, and I want to plot them and share the x axis (the range of FEATURE contains the range of X):</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import datetime start = pd.Timestamp('2017-01-01 08:00:00') end = pd.Timestamp('2017-01-01 10:00:00') X = pd.Series([1, 2, 3], [pd.Timestamp('2017-01-01 08:30:00'), pd.Timestamp('2017-01-01 09:30:00'), pd.Timestamp('2017-01-01 10:30:00')], ) FEATURE = pd.Series([0, 4, 2], [pd.Timestamp('2017-01-01 08:00:00'), pd.Timestamp('2017-01-01 09:00:00'), pd.Timestamp('2017-01-01 10:00:00')] ) f, (ax1, ax2) = plt.subplots(2, 1, sharex='all') X[start:end].plot(ax=ax1) FEATURE.plot(ax=ax2) plt.show() </code></pre> <p>But only FEATURE is shown. If I put X last, then only X is shown.</p>
<p>A possible solution is to plot the data with matplotlib instead of using the pandas plot wrapper.</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as mdates start = pd.Timestamp('2017-01-01 08:00:00') end = pd.Timestamp('2017-01-01 10:00:00') X = pd.Series([1, 2, 3], [pd.Timestamp('2017-01-01 08:30:00'), pd.Timestamp('2017-01-01 09:30:00'), pd.Timestamp('2017-01-01 10:30:00')], ) FEATURE = pd.Series([0, 4, 2], [pd.Timestamp('2017-01-01 08:00:00'), pd.Timestamp('2017-01-01 09:00:00'), pd.Timestamp('2017-01-01 10:00:00')] ) f, (ax1, ax2) = plt.subplots(2, 1, sharex='all') ax1.plot(X.index, X.values) ax2.plot(FEATURE.index, FEATURE.values) loc = mdates.MinuteLocator([0,30]) ax2.xaxis.set_major_locator(loc) ax2.xaxis.set_major_formatter(mdates.AutoDateFormatter(loc)) plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/05LnZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/05LnZ.png" alt="enter image description here"></a></p>
pandas|matplotlib
1
375,773
45,477,265
how to sum by different level with a dict in python
<p>e.g.:</p> <pre><code>example = { '1': nan, '1.1': nan, '1.1.1': nan, '1.1.1.1': 3.45, '1.1.1.2': 6.72, '1.1.1.3': 2.89, '1.1.1.4': 4.62, '1.1.2': 5.35, '1.1.3': 1.21, '1.1.4': 9.86, '1.2': 3.36, '1.3': 8.92 } </code></pre> <p>Of course it is only a part. The whole has 5 level at most.</p> <p>I want to calculate <code>1.1.1=1.1.1.1+1.1.1.2+1.1.1.3+1.1.1.4=3.45+6.72+2.89+4.62=17.68</code></p> <p>Then <code>1.1=1.1.1+1.1.2+1.1.3+1.1.4=17.68+5.35+1.21+9.86=34.1</code></p> <p>Then <code>1=1.1+1.2+1.3=34.1+3.36+8.92=46.38</code></p> <p>Maybe I should turn the dict into a hierarchical dict first?</p> <p>In fact it is originally a Series in pandas, but I guess it is hard to do that in pandas.</p>
<p>You can create <code>Series</code> first and then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.groupby.html" rel="nofollow noreferrer"><code>Series.groupby</code></a> by count of <code>.</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.count.html" rel="nofollow noreferrer"><code>str.count</code></a>, aggregate <code>sum</code>. Then swap order by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> with <code>[::-1]</code> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>Series.cumsum</code></a>:</p> <p>Last rename <code>index</code> values by <code>dict</code>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>rename_axis</code></a> and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a>:</p> <pre><code>example = { '1': np.nan, '1.1': np.nan, '1.1.1': np.nan, '1.1.1.1': 3.45, '1.1.1.2': 6.72, '1.1.1.3': 2.89, '1.1.1.4': 4.62, '1.1.2': 5.35, '1.1.3': 1.21, '1.1.4': 9.86, '1.2': 3.36, '1.3': 8.92 } s = pd.Series(example) d = { x: '.'.join(['1'] * x ) for x in range(7)} print (d) {0: '', 1: '1', 2: '1.1', 3: '1.1.1', 4: '1.1.1.1', 5: '1.1.1.1.1', 6: '1.1.1.1.1.1'} df=s.groupby(s.index.str.count('\.')).sum().rename(d).rename_axis('a') \ .iloc[::-1].cumsum().rename_axis('a').reset_index(name='b') print (df) a b 0 1.1.1 17.68 1 1.1 34.10 2 1 46.38 3 NaN </code></pre> <p>And if necessary some cleaning:</p> <pre><code>df=s.groupby(s.index.str.count('\.')).sum().rename(d).rename_axis('a') \ .iloc[1:].iloc[::-1].cumsum().iloc[::-1].rename_axis('a').reset_index(name='b') print (df) a b 0 1 46.38 1 1.1 34.10 2 1.1.1 17.68 </code></pre>
python|pandas
0
375,774
62,679,353
Docker GPU enabled version (>19.03) does not load tensorflow successfully
<p>I want to use docker 19.03 and above in order to have GPU support. I currently have docker 19.03.12 in my system. I can run this command to check that Nvidia drivers are running:</p> <pre><code>docker run -it --rm --gpus all ubuntu nvidia-smi Wed Jul 1 14:25:55 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 430.64 Driver Version: 430.64 CUDA Version: N/A | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 107... Off | 00000000:01:00.0 Off | N/A | | 26% 54C P5 13W / 180W | 734MiB / 8119MiB | 39% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| +-----------------------------------------------------------------------------+ </code></pre> <p>Also, if run locally my module works with GPU support just fine. But if I build a docker image and try to run it I get a message:</p> <blockquote> <pre><code>ImportError: libcuda.so.1: cannot open shared object file: No such file or directory </code></pre> </blockquote> <p>I am using cuda 9.0 with tensorflow 1.12.0 but I can switch to cuda 10.0 with tensorflow 1.15.<br /> As I get it the problem is that I am probably using a previous dockerfile version with commands which does not make it compatible with new docker GPU enabled version (19.03 and above).<br /> The actual commands are these:</p> <pre><code>FROM nvidia/cuda:9.0-base-ubuntu16.04 # Pick up some TF dependencies RUN apt-get update &amp;&amp; apt-get install -y --no-install-recommends \ build-essential \ cuda-command-line-tools-9-0 \ cuda-cublas-9-0 \ cuda-cufft-9-0 \ cuda-curand-9-0 \ cuda-cusolver-9-0 \ cuda-cusparse-9-0 \ libcudnn7=7.0.5.15-1+cuda9.0 \ libnccl2=2.2.13-1+cuda9.0 \ libfreetype6-dev \ libhdf5-serial-dev \ libpng12-dev \ libzmq3-dev \ pkg-config \ software-properties-common \ unzip \ &amp;&amp; \ apt-get clean &amp;&amp; \ rm -rf /var/lib/apt/lists/* RUN apt-get update &amp;&amp; \ apt-get install nvinfer-runtime-trt-repo-ubuntu1604-4.0.1-ga-cuda9.0 &amp;&amp; \ apt-get update &amp;&amp; \ apt-get install libnvinfer4=4.1.2-1+cuda9.0 </code></pre> <p>I could not find a docker base file for fundamental GPU usage either.</p> <p>In <a href="https://github.com/tensorflow/tensorflow/issues/10776" rel="nofollow noreferrer">this answer</a> there was a proposal for exposing <code>libcuda.so.1</code> but it did not work in my case.</p> <p>So, is there any workaround for this problem or a base dockerfile to adjust to?</p> <p>My system is Ubuntu 16.04.</p> <p>Edit:</p> <p>I just noticed that nvidia-smi from within docker does not display any cuda version:</p> <pre><code>CUDA Version: N/A </code></pre> <p>in contrast with the one locally run. So, this probably means that no cuda is loaded inside docker for some reason I guess.</p>
<p><strong>tldr;</strong></p> <p>A base Dockerfile which seems to work with docker 19.03+ &amp; cuda 10 is this:</p> <pre><code>FROM nvidia/cuda:10.0-base </code></pre> <p>which can be conbined with tf 1.14 but for some reason could not found tf 1.15.</p> <p>I just used this Dockerfile to test it:</p> <pre><code>FROM nvidia/cuda:10.0-base CMD nvidia-smi </code></pre> <p><strong>longer answer:</strong></p> <p>Well, after a lot of trials and errors (and frustration) I managed to make it work for docker 19.03.12+cuda 10 (although with tf 1.14 not 1.15).</p> <p>I used the code from <a href="https://towardsdatascience.com/how-to-properly-use-the-gpu-within-a-docker-container-4c699c78c6d1" rel="nofollow noreferrer">this post</a> and used the base Dockerfiles provided there.</p> <p>First I tried to check the <code>nvidia-smi</code> from within docker using Dockerfile:</p> <pre><code>FROM nvidia/cuda:10.0-base CMD nvidia-smi $docker build -t gpu_test . ... $docker run -it --gpus all gpu_test Fri Jul 3 07:31:05 2020 +-----------------------------------------------------------------------------+ | NVIDIA-SMI 430.64 Driver Version: 430.64 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 GeForce GTX 107... Off | 00000000:01:00.0 Off | N/A | | 45% 65C P2 142W / 180W | 8051MiB / 8119MiB | 100% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| +-----------------------------------------------------------------------------+ </code></pre> <p>which finally seems to find cuda binaries: <code>CUDA Version: 10.1</code>.</p> <p>Then, I made a minimal Dockerfile which could test the successful loading of tensorflow binary libraries within docker:</p> <pre><code>FROM nvidia/cuda:10.0-base # The following are just declaring variables and ultimately use ARG USE_PYTHON_3_NOT_2=True ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3} ARG PYTHON=python${_PY_SUFFIX} ARG PIP=pip${_PY_SUFFIX} RUN apt-get update &amp;&amp; apt-get install -y \ ${PYTHON} \ ${PYTHON}-pip RUN ${PIP} install tensorflow_gpu==1.14.0 COPY bashrc /etc/bash.bashrc RUN chmod a+rwx /etc/bash.bashrc WORKDIR /src COPY *.py /src/ ENTRYPOINT [&quot;python3&quot;, &quot;tf_minimal.py&quot;] </code></pre> <p>and tf_minimal.py was simply:</p> <pre><code>import tensorflow as tf print(tf.__version__) </code></pre> <p>and for completeness I just post the bashrc file I am using:</p> <pre><code># Copyright 2018 The TensorFlow Authors. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the &quot;License&quot;); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an &quot;AS IS&quot; BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# # ============================================================================== export PS1=&quot;\[\e[31m\]tf-docker\[\e[m\] \[\e[33m\]\w\[\e[m\] &gt; &quot; export TERM=xterm-256color alias grep=&quot;grep --color=auto&quot; alias ls=&quot;ls --color=auto&quot; echo -e &quot;\e[1;31m&quot; cat&lt;&lt;TF ________ _______________ ___ __/__________________________________ ____/__ /________ __ __ / _ _ \_ __ \_ ___/ __ \_ ___/_ /_ __ /_ __ \_ | /| / / _ / / __/ / / /(__ )/ /_/ / / _ __/ _ / / /_/ /_ |/ |/ / /_/ \___//_/ /_//____/ \____//_/ /_/ /_/ \____/____/|__/ TF echo -e &quot;\e[0;33m&quot; if [[ $EUID -eq 0 ]]; then cat &lt;&lt;WARN WARNING: You are running this container as root, which can cause new files in mounted volumes to be created as the root user on your host machine. To avoid this, run the container by specifying your user's userid: $ docker run -u \$(id -u):\$(id -g) args... WARN else cat &lt;&lt;EXPL You are running this container as user with ID $(id -u) and group $(id -g), which should map to the ID and group for your user on the Docker host. Great! EXPL fi # Turn off colors echo -e &quot;\e[m&quot; </code></pre>
docker|tensorflow
1
375,775
62,849,131
filtering data in pandas where string is in multiple columns
<p>I have a dataframe that looks like this:</p> <pre><code>team_1 score_1 team_2 score_2 AUS 2 SCO 1 ENG 1 ARG 0 JPN 0 ENG 2 </code></pre> <p>I can retrieve all the data from a single team by using: <strong>#list specifying team of interest</strong></p> <pre><code>team = ['ENG'] </code></pre> <p><strong>#slice the dataframe to show only the rows where the column 'Team 1' or 'Team 2' value is in the specified string list 'team'</strong></p> <pre><code>df.loc[df['team_1'].isin(team) | df['team_2'].isin(team)] </code></pre> <pre><code>team_1 score_1 team_2 score_2 ENG 1 ARG 0 JPN 0 ENG 2 </code></pre> <p>How can I now return only the score for my 'team' such as:</p> <pre><code>team score ENG 1 ENG 2 </code></pre> <p>Maybe creating an index to each team so as to filter out? Maybe encoding the team_1 and team_2 columns to filter out?</p>
<pre><code>new_df_1 = df[df.team_1 =='ENG'][['team_1', 'score_1']] new_df_1 =new_df_1.rename(columns={&quot;team_1&quot;:&quot;team&quot;, &quot;score_1&quot;:&quot;score&quot;}) # team score # 0 ENG 1 </code></pre> <h3></h3> <pre><code>new_df_2 = df[df.team_2 =='ENG'][['team_2', 'score_2']] new_df_2 = new_df_2.rename(columns={&quot;team_2&quot;:&quot;team&quot;, &quot;score_2&quot;:&quot;score&quot;}) # team score # 1 ENG 2 </code></pre> <p>then concat two dataframe:</p> <pre><code>pd.concat([new_df_1, new_df_2]) </code></pre> <p>the output is :</p> <pre><code> team score 0 ENG 1 1 ENG 2 </code></pre>
python|pandas|dataframe
1
375,776
62,662,468
Pandas set date as day(int)-month(str)-year(int)
<p>I am trying to change the formatting of a date column</p> <p>original: 2020/05/22</p> <p>Desired outcome: <strong>22/may/2020</strong></p> <p>so far I've done:</p> <p><code>.to_datetime</code></p> <p><code>dt.strftime('%d-%m-%Y')</code></p> <p>converting into: 22/05/2020</p> <p>how can I get the middle part to convert into alphabetical?</p>
<p>Try this, all the format codes are given here <a href="https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior" rel="nofollow noreferrer">date formats</a>:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date']).dt.strftime('%d/%b/%Y') print(df) Date 0 22/May/2020 </code></pre>
python-3.x|pandas
1
375,777
62,705,854
np Select Rows that Match Ending
<p>I have a numpy 2-D array with rows as observations and columns as covariates. I would like to select the rows that match a specified example of the last <em>n</em> columns. For example with n=2:</p> <p><code>A = [[0,1,0],[3,0,1],[5,1,0]]</code> with <code>target=[1,0]</code> would return <code>B = [[0,1,0],[5,1,0]]</code>.</p>
<pre><code>import numpy as np A = np.array([[0,1,0],[3,0,1],[5,1,0]]) target = [1,0] B = A[(A[:, -len(target):] == target).all(axis=1)] print(B) # [[0 1 0] # [5 1 0]] </code></pre> <p><strong>Explanation</strong></p> <pre><code>print(A[:, -len(target):]) # [[1 0] # [0 1] # [1 0]] print(A[:, -len(target):] == target) # [[ True True] # [False False] # [ True True]] print((A[:, -len(target):] == target).all(axis=1)) # [ True False True] </code></pre>
python|numpy
3
375,778
62,506,502
Only want to consider a dataframe up to the present point
<p>I have a dataframe and I am trying to do something along the lines of</p> <pre><code>df['foo'] = np.where(myfunc(df) == 1, 10, 20) </code></pre> <p>but I only want to consider the dataframe up to the present, for example if my dataframe looked like</p> <pre><code> A B C 1 0.3 0.3 1.6 2 0.6 0.6 0.4 3 0.9 0.9 1.2 4 1.2 1.2 0.8 </code></pre> <p>and I was generating the value of 'foo' for the third row, I would be looking at the dataframe's first through third rows, but not the fourth row. Is it possible to accomplish this?</p>
<p>It is certainly possible. The dataframe up to the present is given by</p> <pre><code>df.iloc[:present], </code></pre> <p>and you can do whatever you want with it, in particular, use <code>where</code>, as described here: <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.where.html</a></p>
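<p>A minimal, hypothetical sketch with the sample frame above (the cutoff, the condition and the column names are placeholders for whatever <code>myfunc</code> actually does):</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [0.3, 0.6, 0.9, 1.2],
                   'B': [0.3, 0.6, 0.9, 1.2],
                   'C': [1.6, 0.4, 1.2, 0.8]},
                  index=[1, 2, 3, 4])

present = 3                      # positional cutoff: first three rows only
window = df.iloc[:present]       # the dataframe "up to the present"

# evaluate the condition on the window only, e.g. rows where C exceeds 1
df.loc[window.index, 'foo'] = np.where(window['C'] &gt; 1, 10, 20)
print(df)
</code></pre>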
python|pandas
0
375,779
62,543,116
Pandas append if groupby sum condition is met
<p>Apologies if this is a repeated question Im not sure the specific syntax of what I want to do.</p> <p>I would like to iterate through large df where A and B are Index values and x,y,z are data columns</p> <pre><code>df= A B x y z 0.1 0.2 2 2 0 0.1 0.3 1 3 0 0.1 0.4 3 3 0 0.2 0.2 4 1 -1 0.2 0.3 5 3 0 0.2 0.1 6 1 0 0.3 0.2 1 1 0 0.3 0.5 1 2 0 0.3 0.7 2 1 0 </code></pre> <p>If the following condition is met:</p> <pre><code>df.groupby('A')['z'].sum()==0 </code></pre> <p>Append this whole groupby object to a new df or produce a df of all the groupby obj that fulfill this condition.</p> <p>Expected output:</p> <pre><code>new_df= A B x y z 0.1 0.2 2 2 0 0.1 0.3 1 3 0 0.1 0.4 3 3 0 0.3 0.2 1 1 0 0.3 0.5 1 2 0 0.3 0.7 2 1 0 </code></pre> <p>I'm trying something like</p> <pre><code>new_df = df.loc[df.groupby('A')['z'].sum())==0] </code></pre> <p>but this doesn't work.</p>
<p>Do a <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> using <code>level=0</code> in combination with <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.transform.html" rel="nofollow noreferrer"><code>transform</code></a> and then do a subset of the dataframe based on the boolean <code>mask</code>:</p> <pre><code>mask = df.groupby(level=0)['z'].transform(lambda s: s.sum()==0) new_df = df[mask].copy() new_df # x y z # A B # 0.1 0.2 2 2 0 # 0.3 1 3 0 # 0.4 3 3 0 # 0.3 0.2 1 1 0 # 0.5 1 2 0 # 0.7 2 1 0 </code></pre>
python|pandas
0
375,780
62,550,733
Pandas aggregate - Counting values over x
<p>I'm playing around with a data set and everything is sailing smoothly. I'm currently having an issue with generating a count of values over the value of 0.</p> <p>What I have is:</p> <pre><code>zz = g.aggregate({'Rain':['sum'],'TotalRainEvent':['max'],'TotalRainEvent':['count']}) print(zz) </code></pre> <p>Which returns:</p> <pre><code> Rain TotalRainEvent sum count Year Month 2010 1 0.0 31 2 4.8 28 3 27.8 31 4 30.6 30 5 89.8 31 ... ... 2020 2 11.0 29 3 40.9 31 4 11.1 30 5 107.3 31 6 46.4 22 [126 rows x 2 columns] </code></pre> <p>As you can see the count value is returning the number of records in the month. I'm only wanting to count values that are greater than 0.</p> <p>I'm able to create a count by creating another column and simply entering a 1 in there if there's a value in the 'TotalRainEvent' column, but I think it'd be better to learn how to manipulate the .aggregate function.</p> <p>Any help is appreciated,</p> <p>Thanks!</p>
<p>How about you do <code>g = g.replace(0,np.nan)</code> at the beginning and <code>g = g.replace(np.nan, 0)</code> at the end? <code>np.nan</code> values are not included by <code>count</code>, per the documentation.</p> <p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.aggregate.html" rel="nofollow noreferrer">https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.aggregate.html</a></p> <pre><code>g = g.replace(0,np.nan) zz = g.aggregate({'Rain':['sum'],'TotalRainEvent':['max','count']}) zz = zz.replace(np.nan, 0) g = g.replace(np.nan, 0) print(zz) </code></pre>
python-3.x|pandas|dataframe
1
375,781
62,838,401
How can I add a random category to a dataframe?
<p>I can't figure this out. I am doing some testing and trying to add random categories into a dataframe for testing but when I do it, it adds it for the all rows instead of randomly distributing it.</p> <p>Here's my code:</p> <pre><code>import random catergory = ['dog', 'cat', 'monkey'] df['animal'] = random.choice(catergory) df['animal'].value_counts() </code></pre> <p>Output:</p> <pre><code>monkey monkey 124705 Name: animal, dtype: int64 </code></pre> <p>I understand what it's doing(generating random call once and applying it to the entire DF) but how can I get it to generate the random value for each row.</p>
<p>Use <a href="https://docs.scipy.org/doc//numpy-1.15.0/reference/generated/numpy.random.choice.html" rel="nofollow noreferrer"><code>np.random.choice</code></a> along with <code>size</code> equal to length of dataframe to generate a random sample of given size:</p> <pre><code>df['animal'] = np.random.choice(catergory, size=len(df)) </code></pre> <p>Example:</p> <pre><code>np.random.seed(12345) df = pd.DataFrame({'ColA': np.random.randint(1, 10, 10)}) catergory = ['dog', 'cat', 'monkey'] df['animal'] = np.random.choice(catergory, size=len(df)) df['animal'].value_counts() </code></pre> <p>Result:</p> <pre><code>monkey 5 cat 4 dog 1 Name: animal, dtype: int64 </code></pre>
python|pandas
4
375,782
62,665,001
Why are there two sets of polynomial tools in numpy? Is one preferable to the other, or is it purely opinion?
<p><code>numpy</code> has two sets of polynomial tools, one in the base numpy library, and another in <code>numpy.polynomial</code>. Why are there two? Is one preferable over the other? Is this to maintain backwards compatibility perhaps, or are there significant differences I should be aware of?</p> <p>For example, <code>polyfit</code> and <code>polyval</code> are found in both libraries and appear to use the same algorithm, but their parameters are different (they expect coefficients in opposite orders).</p> <p>From <code>numpy</code>:</p> <pre><code>def polyfit(x, y, deg, rcond=None, full=False, w=None, cov=False) def polyval(p, x) </code></pre> <p>Coefficients are ordered from <strong>high to low</strong> degree.</p> <p>From <code>numpy.polynomial</code>:</p> <pre><code>def polyfit(x, y, deg, rcond=None, full=False, w=None) def polyval(x, c, tensor=True) </code></pre> <p>Coefficients are ordered from <strong>low to high</strong> degree.</p>
<p>I ran across this in the <a href="https://numpy.org/doc/stable/reference/routines.polynomials.html" rel="nofollow noreferrer">docs</a>:</p> <blockquote> <p>Prior to NumPy 1.4, numpy.poly1d was the class of choice and it is still available in order to maintain backward compatibility. However, the newer Polynomial package is more complete than numpy.poly1d and its convenience classes are better behaved in the numpy environment. Therefore numpy.polynomial is recommended for new coding.</p> </blockquote>
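<p>To make the ordering difference concrete, here is a small example evaluating the same polynomial 2*x**2 + 3*x + 4 with both interfaces (both calls are part of the public NumPy API):</p> <pre><code>import numpy as np
from numpy.polynomial import polynomial as P

x = np.array([0.0, 1.0, 2.0])

# legacy interface: coefficients ordered from high to low degree
old_style = np.polyval([2, 3, 4], x)

# numpy.polynomial interface: coefficients ordered from low to high degree
new_style = P.polyval(x, [4, 3, 2])

print(old_style)  # [ 4.  9. 18.]
print(new_style)  # [ 4.  9. 18.]
</code></pre>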
python|numpy|polynomials
0
375,783
62,874,134
Optimizing an edge search
<p>I have a Pandas dataframe in which I store binary data in a column of ~360,000 entries. I am looking for a more efficient way to find the changes between 0 -&gt; 1 and 1 -&gt; 0.</p> <p>Currently I iterate through it and check for the specific conditions by evaluating each index, which is maybe quite descriptive to read, but since the functionality is used several times, it really is the bottleneck of a larger script. The last index is left unchecked, but this is not crucial.</p> <pre class="lang-py prettyprint-override"><code>for i in range(0, len(df.Binary) - 1): if df.Binarywindow[i] == 0 and df.Binarywindow[i+1] == 1: startedge.append(i) elif df.Binarywindow[i] == 1 and df.Binarywindow[i+1] == 0: endedge.append(i) </code></pre> <p>Can you help me rewrite it?</p>
<p>The method you mentioned will indeed yield quite slow results for large sets of data, due to the way that append() methods interact with memory. Essentially you are rewriting the same part of memory ~360,000 times, extending it with a single entry. You can speed this up significantly by converting to numpy arrays and using a single operation to search for the edges. I wrote a minimal example to demonstrate with a random set of binary data.</p> <pre><code>binaries = np.random.randint(0,2,200000) Binary = pd.DataFrame(binaries) t1 = time.time() startedge, endedge = pd.DataFrame([]), pd.DataFrame([]) for i in range(0, len(Binary) - 1): if Binary[0][i] == 0 and Binary[0][i+1] == 1: startedge.append([i]) elif Binary[0][i] == 1 and Binary[0][i+1] == 0: endedge.append([i]) t2 = time.time() print(f&quot;Looping through took {t2-t1} seconds&quot;) # Numpy based method, including conversion of the dataframe t1 = time.time() binary_array = np.array(Binary[0]) startedges = search_sequence_numpy(binary_array, np.array([0,1])) stopedges = search_sequence_numpy(binary_array, np.array([1,0])) t2 = time.time() print(f&quot;Converting to a numpy array and looping through required {t2-t1} seconds&quot;) </code></pre> <p>Output:</p> <pre><code>Looping through took 56.22933220863342 seconds Converting to a numpy array and looping through required 0.029932022094726562 seconds </code></pre> <p>For the sequence search function I used the code from this answer <a href="https://stackoverflow.com/questions/36522220/searching-a-sequence-in-a-numpy-array">Searching a sequence in a NumPy array</a></p> <pre><code>def search_sequence_numpy(arr,seq): &quot;&quot;&quot; Find sequence in an array using NumPy only. Parameters ---------- arr : input 1D array seq : input 1D array Output ------ Output : 1D Array of indices in the input array that satisfy the matching of input sequence in the input array. In case of no match, an empty list is returned. &quot;&quot;&quot; # Store sizes of input array and sequence Na, Nseq = arr.size, seq.size # Range of sequence r_seq = np.arange(Nseq) # Create a 2D array of sliding indices across the entire length of input array. # Match up with the input sequence &amp; get the matching starting indices. M = (arr[np.arange(Na-Nseq+1)[:,None] + r_seq] == seq).all(1) # Get the range of those indices as final output if M.any() &gt;0: return np.where(np.convolve(M,np.ones((Nseq),dtype=int))&gt;0)[0] else: return [] # No match found </code></pre>
python|pandas|list|search|optimization
0
375,784
62,620,832
How to use query inside a function and apply to other dataframes?
<p>I have a dataframe user and calls where common column is user_id. I need to drop values in user dataframe where churn is not null and remove those user_id rows in calls.</p> <pre><code>users = user_id,first_name,last_name,age,city,reg_date,plan,churn_date 1000,Anamaria,Bauer,45,&quot;Atlanta-Sandy Springs-Roswell, GA MSA&quot;,2018-12-24,ultimate, 1001,Mickey,Wilkerson,28,&quot;Seattle-Tacoma-Bellevue, WA MSA&quot;,2018-08-13,surf, 1002,Carlee,Hoffman,36,&quot;Las Vegas-Henderson-Paradise, NV MSA&quot;,2018-10-21,surf, 1003,Reynaldo,Jenkins,52,&quot;Tulsa, OK MSA&quot;,2018-01-28,surf, 1004,Leonila,Thompson,40,&quot;Seattle-Tacoma-Bellevue, WA MSA&quot;,2018-05-23,surf, 1005,Livia,Shields,31,&quot;Dallas-Fort Worth-Arlington, TX MSA&quot;,2018-11-29,surf, 1007,Eusebio,Welch,42,&quot;Grand Rapids-Kentwood, MI MSA&quot;,2018-07-11,surf, 1008,Emely,Hoffman,53,&quot;Orlando-Kissimmee-Sanford, FL MSA&quot;,2018-08-03,ultimate, 1009,Gerry,Little,19,&quot;San Jose-Sunnyvale-Santa Clara, CA MSA&quot;,2018-04-22,surf, 1010,Wilber,Blair,52,&quot;Dallas-Fort Worth-Arlington, TX MSA&quot;,2018-03-09,surf, </code></pre> <pre><code>calls = id,user_id,call_date,duration 1000_93,1000,2018-12-27,8.52 1000_145,1000,2018-12-27,13.66 1000_247,1000,2018-12-27,14.48 1000_309,1000,2018-12-28,5.76 1000_380,1000,2018-12-30,4.22 1000_388,1000,2018-12-31,2.2 1000_510,1000,2018-12-27,5.75 1000_521,1000,2018-12-28,14.18 1000_530,1000,2018-12-28,5.77 1000_544,1000,2018-12-26,4.4 </code></pre> <p>I want to create a function so that I can apply to other dataframes consists only of user_id after filtering out in original users dataframe. I want to drop the other user_id's in other dataframes too</p>
<p>You can try with simple filtering using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.query.html" rel="nofollow noreferrer"><code>.query</code></a></p> <pre><code>def filter_calls_df(user_id_df, calls_df): # filter out churn user_id usr_id = user_id_df.query(&quot;churn_date.notna()&quot;)['user_id'].unique() # filter calls dataframe calls_df = calls_df.query(&quot;user_id not in @usr_id&quot;) return calls_df </code></pre>
python|pandas|numpy|data-science
0
375,785
62,574,697
Complicated JSON to Pandas Dataframe
<p>I have a somewhat complicated json structure</p> <pre><code>[ { &quot;name&quot;: &quot;iphone 6&quot;, &quot;vol&quot;: 1600, &quot;keywords&quot;: [ { &quot;positions&quot;: [ { &quot;date&quot;: &quot;2020-06-18&quot;, &quot;pos&quot;: 25, &quot;change&quot;: 0, &quot;price&quot;: 0, &quot;is_map&quot;: 0, &quot;map_position&quot;: 0, &quot;paid_position&quot;: 0 }, { &quot;date&quot;: &quot;2020-06-19&quot;, &quot;pos&quot;: 28, &quot;change&quot;: -3, &quot;price&quot;: 0, &quot;is_map&quot;: 0, &quot;map_position&quot;: 0, &quot;paid_position&quot;: 0 } } ] } ] </code></pre> <p>I would like to get this json data into a pandas dataframe with the following format</p> <pre><code>Keyword Name | Volume | Date 1 | Date 2 | Date 3 | ... | Date N iphone 6 | 1600 | 25 | 28 | 29 | ... | Pos at Date N </code></pre> <p>I can think of a very complicated and time consuming way to do this, but I'm imagining there is a quick way to do this and I can't find a similar example on StackOverflow. Anyone able to help out here?</p>
<p>If <code>result</code> is your parsed json (a list of dicts):</p> <pre><code>rows = [] for item in result: row = { 'Keyword Name': item['name'], 'Volume': item['vol'] } for idx, pos in enumerate(item['keywords'][0]['positions'], 1): row[f&quot;Date {idx}&quot;] = pos[&quot;pos&quot;] rows.append(row) pd.DataFrame(rows) </code></pre>
python|json|python-3.x|pandas|dataframe
2
375,786
62,639,478
Split column of list of names into all combinations of initials
<p>I have a dataframe with a column &quot;First names&quot; e.g. John Richard. I want to look for all 4 combinations of name + initial and store it in a seperate column. So in this case I would want to return [(J R, John R, J Richard, John Richard)]. I know I could write a for loop and loop over each element of the list, but is there a faster/more efficient way?</p> <p>Thanks!</p>
<p>Yes, python has efficient <code>itertools</code> implementation:</p> <pre><code>import pandas as pd from itertools import product df = pd.DataFrame([['John Richard'],['John Fitz Kennedy']],columns=['name']) def cart_prod(lst): for i in range(len(lst)): lst[i] = [lst[i],lst[i][0]] return [&quot; &quot;.join(i) for i in product(*lst)] df['new_names'] = df.name.str.split().apply(cart_prod) df </code></pre>
pandas|dataframe|split|names
0
375,787
62,479,022
Pandas Create DF for Table
<p>i'm trying to create a table but first a DF that has the elements i need for the table and I get this error: </p> <pre><code>File "&lt;ipython-input-400-241c1509eba9&gt;", line 4, in &lt;module&gt; [c1.iloc[:,1]], TypeError: 'module' object is not callable </code></pre> <p>This is the command I'm using to create the new DF "c1t":</p> <pre><code>c1t = pd({ "Range": ["50-75%", "75-90%", "90-110%", "110-125%","125-150%"],"adjusted_power": [c1.iloc[:,0]],"counts": [c1.iloc[:,1]], }).set_index("Range") </code></pre> <p>Here is the DF "c1":</p> <pre><code> adjusted_power counts 0 (9694.2, 14541.2] 2 1 (14541.2, 17449.5] 3 2 (17449.5, 21327.2] 20 3 (21327.2, 24235.4] 3 4 (24235.4, 29082.5] 1 type(c1) Out[412]: pandas.core.frame.DataFrame </code></pre> <p>My assumption that DF "c1" is callable must be false since I"m getting the callable error but I'm not sure how to include "c1" for the new DF (c1t). Thank you,</p>
<p>To create the new dataframe <code>c1t</code> the same as <code>c1</code> but with an index you can assign it (which returns a copy) and set the index on this copy:</p> <pre><code>c1t = c1.set_index(pd.Index([&quot;50-75%&quot;, &quot;75-90%&quot;, &quot;90-110%&quot;,&quot;110-125%&quot;,&quot;125-150%&quot;], name='Range')) </code></pre>
pandas|format|callable
0
375,788
62,751,785
How can I modify a single row in a DataFrame while it is returning multiple rows at a time?
<p>I have a Dataframe with two columns, i.e, Transaction, &amp; Status.</p> <p>Expected Dataframe:</p> <pre><code>Transaction | Status ------------------------- 57230477 | Completed 57232288 | Completed 57232288 | 57232288 | 57228666 | Completed 57229869 | Completed 57233318 | Completed 57233318 | 57227149 | Completed 57227149 | 57222266 | Completed 57222266 | 57222266 | 57233319 | Completed 57233319 | 57230490 | Completed </code></pre> <p>What is happening in my code is:</p> <pre><code>for txn in df['Transaction'].unique(): df.loc[df['Transaction'] == txn, 'Status'] = 'Completed' </code></pre> <p>In this case what is happening is, it is assigning the <code>Status</code> in all the rows as <code>Completed</code>.</p> <p>What I'm getting:</p> <pre><code>Transaction | Status ------------------------- 57230477 | Completed 57232288 | Completed 57232288 | Completed 57232288 | Completed 57228666 | Completed 57229869 | Completed 57233318 | Completed 57233318 | Completed 57227149 | Completed 57227149 | Completed 57222266 | Completed 57222266 | Completed 57222266 | Completed 57233319 | Completed 57233319 | Completed 57230490 | Completed </code></pre> <p>So, my question is how can I just assign the value of <code>Status</code> as <code>Completed</code> to only the first occurrence of the <code>Transaction</code> like in the expected Dataframe at the top, i.e., just assign the values to the unique Transactions and skip the repeating Transactions.</p> <p>For example <code>57232288</code> is repeating 3 times, instead of assigning the <code>Completed</code> 3 time assign the value just once at the first occurrence of it.</p>
<p>One way is to use <code>drop_duplicates</code> and get the index, then assign directly:</p> <pre><code>df.loc[df.drop_duplicates(keep=&quot;first&quot;).index, &quot;Status&quot;] = &quot;Completed&quot; print (df) Transaction Status 0 57230477 Completed 1 57232288 Completed 2 57232288 3 57232288 4 57228666 Completed 5 57229869 Completed 6 57233318 Completed 7 57233318 8 57227149 Completed 9 57227149 10 57222266 Completed 11 57222266 12 57222266 13 57233319 Completed 14 57233319 15 57230490 Completed </code></pre>
python|python-3.x|pandas|dataframe
3
375,789
62,699,592
TypeError: unsupported operand type(s) for |: 'float' and 'bool' for if else condition
<p>I know this topic has been discussed a lot like <a href="https://stackoverflow.com/questions/49364654/typeerror-unsupported-operand-types-for-str-and-bool">TypeError: unsupported operand type(s) for |: &#39;str&#39; and &#39;bool&#39;</a>, but I found no one can solve my question. I have a massive dataframe: one column is df['Closed P/L']</p> <pre><code>df['Closed P/L'].loc[df['Closed P/L']!=0] 21 9.20 22 559.70 23 -455.30 24 481.67 25 -1825.50 27 -98.92 28 -473.94 29 2.80 31 21.20 33 28.00 34 -172.00 35 -12.87 36 137.02 37 11.04 39 739.23 40 323.59 </code></pre> <p>another column is df['Floating P/L']:</p> <pre><code>df['Floating P/L'].loc[df['Floating P/L']!=0] 39 -340.97 42 -844.20 43 -2383.84 44 -2415.48 45 -172.00 47 -1706.04 83 -259.61 91 -7544.43 </code></pre> <p>The rows which are not zero can be chosen easily, but</p> <pre><code>#exclude the day when closed pnl and floating pnl are zero if (df['Closed P/L'].loc[df['Closed P/L']!=0]) | (df['Floating P/L'].loc[df['Floating P/L']!=0]): df['Returns'] = df['New_Balance']/df['New_Balance'].shift(1) -1 else: df['Returns'] = 0 </code></pre> <pre><code>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) ~\anaconda3\lib\site-packages\pandas\core\ops.py in na_op(x, y) 1788 try: -&gt; 1789 result = op(x, y) 1790 except TypeError: TypeError: unsupported operand type(s) for |: 'float' and 'bool' During handling of the above exception, another exception occurred: TypeError Traceback (most recent call last) &lt;ipython-input-27-7c03e0bf86bc&gt; in &lt;module&gt; 1 #exclude the day when closed pnl and floating pnl are both zero ----&gt; 2 if (df['Closed P/L'].loc[df['Closed P/L']!=0]) | (df['Floating P/L'].loc[df['Floating P/L']!=0]): 3 df['Returns'] = df['New_Balance']/df['New_Balance'].shift(1) -1 4 else: 5 df['Returns'] = 0 ~\anaconda3\lib\site-packages\pandas\core\ops.py in wrapper(self, other) 1848 filler = (fill_int if is_self_int_dtype and is_other_int_dtype 1849 else fill_bool) -&gt; 1850 res_values = na_op(self.values, ovalues) 1851 unfilled = self._constructor(res_values, 1852 index=self.index, name=res_name) ~\anaconda3\lib\site-packages\pandas\core\ops.py in na_op(x, y) 1795 x = ensure_object(x) 1796 y = ensure_object(y) -&gt; 1797 result = libops.vec_binop(x, y, op) 1798 else: 1799 # let null fall thru pandas\_libs\ops.pyx in pandas._libs.ops.vec_binop() pandas\_libs\ops.pyx in pandas._libs.ops.vec_binop() TypeError: unsupported operand type(s) for |: 'float' and 'bool' </code></pre> <p>I really dont get it, as I just use the suggestion discussed in Stackoverflow, I assume others may have the same problem, so I post it here</p>
<pre><code>df['Returns'] = 0 df.loc[(df['Closed P/L'] != 0) &amp; (df['Floating P/L'] != 0), 'Returns'] = df['New_Balance']/df['New_Balance'].shift(1) - 1 </code></pre>
python|pandas|dataframe
2
375,790
62,638,658
Calculating time difference between two different date formats in Python
<p>I am trying to identify hours between two dates. Date format is not consistent between two columns</p> <p>The below code works when the date format is similar. How can I convert the UTC date format into normal date month year</p> <pre><code>df['timebetween'] = (pd.to_datetime(df['datecolA'],dayfirst = True) - pd.to_datetime(df['datecolB'],dayfirst = True)) df['timebetween']= df['timebetween']/np.timedelta64(1,'h') </code></pre> <p>my data looks like below and I am interested in column timebetween which can be achieved from the above code if both date columns had same format</p> <pre><code>datecolA datecolB timebetween 29/06/2020 08:30:00 2018-12-02T11:32:00.000Z x hours 29/06/2020 08:30:00 2018-12-04T14:00:00.000Z y hours 29/06/2020 08:30:00 2017-02-02T14:36:00.000Z z hours 29/06/2020 08:30:00 2017-02-02T14:36:00.000Z n hours </code></pre>
<p>I think you need to remove <code>UTC</code> from <code>datecolB</code>:</p> <pre><code>df['datecolB'] = df.datecolB.dt.tz_localize(None) # or extract the time delta directly df['timebetween'] = (df.datecolA - df.datecolB.dt.tz_localize(None))/np.timedelta64(1,'h') </code></pre> <p>Output:</p> <pre><code> datecolA datecolB timebetween 0 2020-06-29 08:30:00 2018-12-02 11:32:00+00:00 13796.966667 1 2020-06-29 08:30:00 2018-12-04 14:00:00+00:00 13746.500000 2 2020-06-29 08:30:00 2017-02-02 14:36:00+00:00 29825.900000 3 2020-06-29 08:30:00 2017-02-02 14:36:00+00:00 29825.900000 </code></pre>
python|pandas|datetime-format|python-datetime
0
375,791
62,620,819
NumPy - formatting two arrays to one multi-dimensional array
<p>I have the following values:</p> <pre><code>grade_list = [[99 73 97 98] [98 71 70 99]] excercise_list = ['1' '2'] </code></pre> <p>Using Numpy, I want to convert it to one multidimensional array to have the average grade for each exercise (the first item in grade_list refers to the exercise number 1)</p> <p>The output should look like this: <code>[[1. 2.] [91.75 84.5]]</code></p> <p>Which means Avg. grade for exercise #1 - 91.75, and 84.5 for #2.</p> <p>How I can convert it using numpy? I have read about <a href="https://numpy.org/doc/stable/reference/generated/numpy.mean.html" rel="nofollow noreferrer">NumPy axis</a> parameter but not sure how to put it all together.</p>
<p>Axis 0 is the first nesting level (the two lists), axis 1 is the second level (four grades per entry in axis 0). You want to compute the mean along axis 1, so that axis 0 remains. So the mean grades are</p> <p><code>mean_grades = np.mean(grade_list, axis=1)</code>.</p> <p>Then you stack the two lists in another nested list, wrap that in a numpy array and set the type to float (your exercises are strings):</p> <p><code>result = np.array([excercise_list, mean_grades]).astype(float)</code>.</p>
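<p>Putting it together with the sample data from the question:</p> <pre><code>import numpy as np

grade_list = np.array([[99, 73, 97, 98],
                       [98, 71, 70, 99]])
excercise_list = np.array(['1', '2'])

mean_grades = np.mean(grade_list, axis=1)                      # [91.75 84.5 ]
result = np.array([excercise_list, mean_grades]).astype(float)
print(result)
# [[ 1.    2.  ]
#  [91.75 84.5 ]]
</code></pre>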
python|numpy
0
375,792
62,665,293
How to extract only a certain part of a column
<p>I have a dataframe that looks like the one below. I want to keep only the first percentage from columns 3 and 4. How can this be achieved? Any help is appreciated.</p> <pre><code>Metric Group Metric Type Tue23rd Week24 Productive % Available 83.2%Best Class:D7-92.6% 92.6%Best Class:WD-96.21% Productive % Available 85.2%Best Class:A7-98.6% 92.6%Best Class:LD-95.21% Productive % Available 89.2%Best Class:D7-94.6% 92.6%Best Class:WD-93.21% </code></pre> <p>Expected output is something like</p> <pre><code>Metric Group Metric Type Tue23rd Week24 Productive % Available 83.2% 92.6% Productive % Available 85.2% 92.6% Productive % Available 89.2% 92.6% </code></pre>
<p>You can use the built in <code>pd.Series.str.extract</code> method using regex:</p> <pre><code>df[&quot;Tue23rd&quot;].str.extract(&quot;([0-9\.%]+)Best&quot;) </code></pre>
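<p>To apply the same idea to both columns and keep the results, one possible loop (assuming the column names shown in the question) would be:</p> <pre><code>for col in ['Tue23rd', 'Week24']:
    df[col] = df[col].str.extract(r'([0-9\.%]+)Best', expand=False)
</code></pre>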
python|pandas
0
375,793
62,634,062
How to export from NetworkX / OSMnx and back?
<p>I want to be able to export a graph made by OSMnx (in other words, NetworkX graph) into CSV and to call it back later. Couldn't find any good way to do that so I try to export it into Numpy / Pandas and to export that. So I built this little example:</p> <pre><code>import networkx as nx import osmnx as ox G = ox.graph_from_place('Bar Ilan University, Israel') F = nx.to_numpy_matrix(G) G = nx.from_numpy_matrix(F, create_using=nx.MultiDiGraph) ox.plot.plot_graph(G) </code></pre> <p>and it returns this error and nothing seems to help:</p> <blockquote> <p>--------------------------------------------------------------------------- TypeError Traceback (most recent call last) in 4 F = nx.to_numpy_matrix(G) 5 G = nx.from_numpy_matrix(F, create_using=nx.MultiDiGraph) ----&gt; 6 ox.plot.plot_graph(G)</p> <p>~\Anaconda3\envs\ox\lib\site-packages\osmnx\plot.py in plot_graph(G, bbox, fig_height, fig_width, margin, axis_off, equal_aspect, bgcolor, show, save, close, file_format, filename, dpi, annotate, node_color, node_size, node_alpha, node_edgecolor, node_zorder, edge_color, edge_linewidth, edge_alpha, use_geom) 353 354 log('Begin plotting the graph...') --&gt; 355 node_Xs = [float(x) for _, x in G.nodes(data='x')] 356 node_Ys = [float(y) for _, y in G.nodes(data='y')] 357</p> <p>~\Anaconda3\envs\ox\lib\site-packages\osmnx\plot.py in (.0) 353 354 log('Begin plotting the graph...') --&gt; 355 node_Xs = [float(x) for _, x in G.nodes(data='x')] 356 node_Ys = [float(y) for _, y in G.nodes(data='y')] 357</p> <p>TypeError: float() argument must be a string or a number, not 'NoneType'</p> </blockquote> <ul> <li>What can I do?</li> <li>Is there is any simple method that I'm missing?</li> </ul>
<p>The osmnx graph's nodes look like this:</p> <pre><code>G.nodes(data=True) NodeDataView({970069268: {'y': 32.0682358, 'x': 34.841011, 'osmid': 970069268}, 970069273: {'y': 32.0722176, 'x': 34.8442006, 'osmid': 970069273}, 970069285: {'y': 32.0695886, 'x': 34.8419506, 'osmid': 970069285, 'highway': 'mini_roundabout'}, [...] }) </code></pre> <p>Converting this graph into an adjacency matrix retains only the information about which nodes are connected, not about where the individual nodes are positioned, nor about any other attributes. So, the graph created from the adjacency matrix looks like this:</p> <pre><code>G.nodes(data=True) out: NodeDataView({0: {}, 1: {}, 2: {}, 3: {}, 4: {}, 5: {}, 6: {}, 7: {}, 8: {}, 9: {}, 10: {}, [...] }) </code></pre> <p>The osmnx plotting function tries to retrieve the position information:</p> <pre><code>node_Xs = [float(x) for _, x in G.nodes(data=&quot;x&quot;)] </code></pre> <p>but gets None, which is not the expected format, hence the error.</p> <h3>what you could do:</h3> <p>save the node info to csv:</p> <pre><code>def nodes_to_csv(G, savepath): unpacked = [pd.DataFrame({**{'node': node, **data}}, index=[i]) for i, (node, data) in enumerate(G.nodes(data=True))] df = pd.concat(unpacked) df.to_csv(savepath) return df def nodes_from_csv(path): df = pd.read_csv(path, index_col=0) nodes = [] for ix , row in df.iterrows(): d = dict(row) node = d.pop('node') nodes.append((node, {k:v for k,v in d.items() if v})) return nodes #saving and retrieving: nodes_to_csv(G, 'test.csv') nodes = nodes_from_csv('test.csv') # saving the adjacency info is more conveniently done with # to_pandas_edgelist, because to_numpy_matrix loses the node # names as well. adj_matrix = nx.to_pandas_edgelist(G) # graph construction: H = nx.from_pandas_edgelist(adj_matrix, create_using=nx.MultiDiGraph) H.update(nodes=nodes) # adds node info # osmnx stores metadata in the .graph attribute # the plotting function accesses that, so in order # to use the plotting function, # you need to also transfer metadata: q = G.graph.copy() H.graph=q ox.plot.plot_graph(H) </code></pre>
python|pandas|numpy|networkx|osmnx
2
375,794
62,641,136
How can I fill missing value in a particular case?
<p>I have a Dataframe which has two columns 'Key_Skills and 'Job_Title',(both of the column containing few missing values). Now, I want to impute '(Null) Key_Skills' value with the 'Job_Title' column which has filled 'Key_Skills' values.</p> <p><a href="https://i.stack.imgur.com/IWjUb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IWjUb.png" alt="df" /></a></p> <p>For example, in column 'Job_Title' at 36th index, there is an 'Accounts Manager' which has a 'Key Skills' value. Now, I want to --</p> <ol> <li>Firstly, Impute 'Key Skills' of all the 'Accounts Manager' with the same value(key skills) throughout the dataframe.</li> <li>Secondly, I want to fill 'Key skills' column completely with reference to 'Job Title' value as done in point no. 1 for every row.</li> </ol>
<p>I am not sure if I completely understand, but if I am interpreting this correctly the code below should work. What the code below does is group by job title, and if there are any NaN's as a skill set for the same title, it will fill it in. If you wanted to do it specifically for Account Manager, use the ffill and bfill on a filtered dataset only containing Account Manager. You would flip Job title and key skills to do it the other way.</p> <pre><code> df['Key Skills'] = df.loc[~df['Job Title'].isna()].groupby('Job Title')['Key Skills'].transform(lambda x: x.fillna(method = 'ffill').fillna(method='bfill')) </code></pre>
python|pandas|dataframe|missing-data
0
375,795
62,720,605
Average datetime.time Series
<p>I am trying to figure out the best way to average a series of <code>datetime.time</code> values with about 40 records in the format of <code>23:19:30</code> or <code>HH:MM:SS</code>, however, when I attempt to use the <code>to_timedelta</code> method and apply <code>.mean()</code> I run into an error:</p> <pre><code>ValueError: Invalid type for timedelta scalar: &lt;class 'datetime.time'&gt; </code></pre> <p>What would be the proper format for the time to be in to use <code>to_timedelta().mean()</code> or is there a better approach?</p> <p>Code:</p> <pre><code>bed_time_mean = pd.to_timedelta(bed_rise_df['start_time']).mean() </code></pre>
<p><code>pd.to_timedelta</code> accepts strings in H:M:S format as input, so you could convert your column with datetime.time objects to string first. Ex:</p> <pre><code>from datetime import time import pandas as pd df = pd.DataFrame({'t':[time(1,2,3), time(2,3,4), time(3,4,5)]}) pd.to_timedelta(df['t'].astype(str)).mean() # Timedelta('0 days 02:03:04') </code></pre>
pandas|datetime|time
1
375,796
62,565,448
standardizing data column-wise before using keras models
<p>I'm working with a large dataset whose data I want to standardize to use with a CNN.</p> <p>Does keras have a quick utility to standardize a block of numbers column-wise that you can use inside a Sequential model? I'm asking this as i expect eventually the data to be used on-line so ideally this standardization feature could be used on incoming data, ie a trailing moving average of mean and std to normalize the incoming data.</p> <pre><code>import numpy as np import pandas as pd np.random.seed(42) col_names = ['Column' + str(x+1) for x in range(5)] training_data = pd.DataFrame(np.random.randint(1,10 **6, 50).reshape(-1,5), columns = col_names) </code></pre>
<p>I am not sure about the online part, but for column-wise standardization, using <code>sklearn</code>'s <code>StandardScaler()</code> should do the right thing, as described <a href="https://stackoverflow.com/questions/43816718/keras-regression-using-scikit-learn-standardscaler-with-pipeline-and-without-pip">here</a>.</p>
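<p>A minimal sketch of that, assuming the scaler is fit on the training block and then reused for any data that arrives later (the tiny Dense model is only a placeholder for whatever network you actually use):</p> <pre><code>from sklearn.preprocessing import StandardScaler
from tensorflow import keras

scaler = StandardScaler()
X_train = scaler.fit_transform(training_data)   # column-wise: zero mean, unit variance per column

model = keras.Sequential([
    keras.layers.Dense(32, activation='relu', input_shape=(X_train.shape[1],)),
    keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# later, on-line / incoming rows are standardized with the same fitted statistics
# X_new = scaler.transform(new_rows)
</code></pre>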
python|pandas|numpy|keras
1
375,797
62,837,561
Pandas Remove Outliers from DataFrame
<p>I am following the below logic,</p> <pre><code>from scipy import stats df = pd.DataFrame(np.random.randn(100, 3)) df[(np.abs(stats.zscore(df)) &lt; 3).all(axis=1)] </code></pre> <p>My df has multiple columns included value1, value2, description, task, etc. so I am having trouble dealing with A) half of my columns being text and B) removing outliers ONLY from the value1 column. I know the above code would remove rows that have outliers in either value1 or 2 - how would I adjust this to only look at value1?</p> <p>UPDATED CODE:</p> <pre><code>for y in yvar: temp = combo temp = temp[(temp['Financial Metric'] == y) &amp; (temp['Financial Value'] != 0)] temp = temp.loc[np.abs(stats.zscore(temp['Financial Value'])) &lt; 3] for x in xvar: temp2 = temp temp2 = temp2[(temp2['External Metric'] == x) &amp; (temp2['External Value'] != 0)] temp2 = temp2.loc[np.abs(stats.zscore(temp2['External Value'])) &lt; 3] c = len(temp2.index) r = temp2['Financial Value'].corr(temp2['External Value']) col1.append(y) col2.append(x) col3.append(r) col4.append(c) temp2.plot(x ='External Value', y='Financial Value', kind = 'scatter') </code></pre> <p><a href="https://i.stack.imgur.com/AzLRp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AzLRp.png" alt="enter image description here" /></a></p>
<p>You can use <code>loc</code> to filter the dataframe based on only the value1 column like this</p> <pre><code>df.loc[np.abs(stats.zscore(df['value1'])) &lt; 3] </code></pre>
python|pandas
0
375,798
62,583,490
Label areas within an image with Tensorflow
<p>I am new to the whole realm of machine learning, but I do have some prior experience with AWS' Rekognition. Within Rekognition, you're able to custom label different sections within your images, rather than just the entire image as a whole. I was looking to do something similar within Tensorflow, but despite looking throughout their documentation and searching on SO I've not been able to find anything.</p> <p>Is this just not doable in Tensorflow, or am I just missing something? If I train a model on separately modeled images, can it pick all of the occurrences of these in an image - Eg. an image with 5 people, all 5 are separately detected?</p> <p>If there is a better library that is more capable of this, I would love to know - or if there's a work around that I could use to implement this myself I am willing to try it.</p>
<p>It is possible to classify different areas of images in tensorflow using its object detection api. <a href="https://github.com/tensorflow/models/tree/master/research/object_detection" rel="nofollow noreferrer">See tensorflow object detection api</a></p> <p>You can work through the examples there, they also offer pretrained models to run examples with.</p> <p>Pytorch also offers multibox object detection <a href="https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Object-Detection" rel="nofollow noreferrer">see pytorch example</a>, just use whatever framework suits you more.</p>
python|tensorflow|machine-learning|image-recognition|amazon-rekognition
2
375,799
62,817,662
using 2d array as indices of a 4d array
<p>I have a Numpy 2D array (4000,8000) from a <code>tensor.max()</code> operation, that stores the indices of the first dimension of a 4D array (30,4000,8000,3). I need to obtain a (4000,8000,3) array that uses the indices over this set of images and extract the pixels of each position in the 2D max array.</p> <pre><code>A = np.random.randint( 0, 29, (4000,8000), dtype=int) B = np.random.randint(0,255,(30,4000,8000,3),dtype=np.uint8) final = np.zeros((B.shape[1],B.shape[2],3)) r = 0 c = 0 for row in A: c = 0 for col in row: x = A[r,c] final[r,c] = B[x,r,c] c=c+1 r=r+1 print(final.shape) </code></pre> <p>Is there any vectorised way to do that? I am fighting with the RAM usage using loops. Thanks</p>
<p>You can use <a href="https://numpy.org/doc/stable/reference/generated/numpy.take_along_axis.html#numpy.take_along_axis" rel="nofollow noreferrer"><code>np.take_along_axis</code></a>.</p> <p>First let's create some data (you should have provided a <a href="http://stackoverflow.com/help/minimal-reproducible-example">reproducible example</a>):</p> <pre><code>&gt;&gt;&gt; N, H, W, C = 10, 20, 30, 3 &gt;&gt;&gt; arr = np.random.randn(N, H, W, C) &gt;&gt;&gt; indices = np.random.randint(0, N, size=(H, W)) </code></pre> <p>Then, we'll use <a href="https://numpy.org/doc/stable/reference/generated/numpy.take_along_axis.html#numpy.take_along_axis" rel="nofollow noreferrer"><code>np.take_along_axis</code></a>. But for that the <code>indices</code> array must be of the same shape than the <code>arr</code> array. So we are using <code>np.newaxis</code> to insert axis where shapes don't match.</p> <pre><code>&gt;&gt;&gt; res = np.take_along_axis(arr, indices[np.newaxis, ..., np.newaxis], axis=0) </code></pre> <p>It already gives usable output, but with a singleton dimension on first axis:</p> <pre><code>&gt;&gt;&gt; res.shape (1, 20, 30, 3) </code></pre> <p>So we can squeeze that:</p> <pre><code>&gt;&gt;&gt; res = np.squeeze(res) &gt;&gt;&gt; res.shape (20, 30, 3) </code></pre> <p>And eventually check if the data is as we wanted:</p> <pre><code>&gt;&gt;&gt; np.all(res[0, 0] == arr[indices[0, 0], 0, 0]) True &gt;&gt;&gt; np.all(res[5, 3] == arr[indices[5, 3], 5, 3]) True </code></pre>
arrays|numpy|pytorch|indices
1