5,000
37,153,692
write_formula gives error unless i copy and paste exactly the same formula
<p>I have a python script that writes excel file in the end with xlsxwriter. Everything works but a formula is giving error upon launching and if i copy and paste the exactly same formula it gives the results expected. here is the line:</p> <pre><code>worksheet.write_formula('I2', '=SUMIF(B2:B{0};1;F2:F{0})'.format(len(df.index)+1)) </code></pre> <p>edit: i try to export as xml and i saw that xlsxwriter writes ; as |. I mean the error giving formula from xlsxwriter is:</p> <pre><code>&lt;Cell ss:Formula="of:=SUMIF([.B2:.B11]|1|[.F2:.F11])"&gt; &lt;Data ss:Type="String"&gt;Err:508&lt;/Data&gt; </code></pre> <p>Copy and pasted working formula is:</p> <pre><code>&lt;Cell ss:Formula="of:=SUMIF([.B2:.B11];1;[.F2:.F11])"&gt; &lt;Data ss:Type="Number"&gt;485&lt;/Data&gt; </code></pre> <p>I don't know what's the issue here. Thank you</p>
<p>Go to the given link; I believe you will find your answer there: <a href="https://xlsxwriter.readthedocs.io/working_with_formulas.html" rel="nofollow">XlsxWriter: Working with Formulas</a></p> <p>Specifically, the section on non-US Excel functions and syntax says:</p> <p>Excel stores formulas in the format of the US English version, regardless of the language or locale of the end-user's version of Excel. Therefore formulas must be written with the US style separator/range operator, which is a comma (not semi-colon). A formula with multiple values should be written as follows:</p> <pre><code>worksheet.write_formula('A1', '=SUM(1, 2, 3)') # OK worksheet.write_formula('A2', '=SUM(1; 2; 3)') # Semi-colon. Error on load. </code></pre> <p>Hope this helps.</p>
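<p>Applied to the formula from the question, that would mean replacing the semicolons with commas (a sketch, reusing the same <code>df</code>):</p> <pre><code>worksheet.write_formula('I2', '=SUMIF(B2:B{0},1,F2:F{0})'.format(len(df.index)+1))
</code></pre>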
python|pandas|xlsxwriter
2
5,001
37,267,806
How to use pd.read_table with StringIO file object?
<p>I checked out <a href="https://stackoverflow.com/questions/18383711/read-table-with-stringio-and-messy-file">read_table with stringIO and messy file</a> but it has some stuff I can't reproduce like this raw object. Anyways, I want to write a table to a <code>StringIO</code> file object and then open that <code>StringIO</code> file object in <code>pandas</code> with the <code>read_table</code> method but I am getting <code>EmptyDataError: No columns to parse from file</code>. The file I will be writing to will be too large to store in memory so I want to read it in chunks. Using <code>StringIO</code> as a test example. Using Python 3.5.1 btw</p> <pre><code>import numpy as np import pandas as pd from io import StringIO #StringIO to write to f = StringIO() #Write to StringIO dist = np.random.normal(100, 30, 10000) for idx,s in enumerate(dist): f.write('{}\t{}\t{}\n'.format("label_A-%d" % idx, "label_B-%d" % idx, str(s))) #Pandas DataFrame from it DF = pd.read_table(f,sep="\t",header=None) #EmptyDataError: No columns to parse from file </code></pre>
<p>StringIO uses a pointer to track the current position in the stream. Once you have written all data to the stream, use <code>f.seek(0)</code> to set the pointer back to the start.</p> <pre><code>import numpy as np import pandas as pd from io import StringIO #StringIO to write to f = StringIO() #Write to StringIO dist = np.random.normal(100, 30, 10000) for idx,s in enumerate(dist): f.write('{}\t{}\t{}\n'.format("label_A-%d" % idx, "label_B-%d" % idx, str(s))) # rewind the stream f.seek(0) #Pandas DataFrame from it DF = pd.read_table(f,sep="\t",header=None) # now parses without the EmptyDataError </code></pre>
python|pandas
4
5,002
41,962,580
Concatenate Using Lambda And Conditions
<p>I am trying to using lambda and map to create a new column within my dataframe. Essentially the new column will take column A if a criteria is met and column B is the criteria is not met. Please see my code below.</p> <pre><code>df['LS'] = df.['Long'].map(lambda y:df.Currency if y&gt;0 else df.StartDate) </code></pre> <p>However, when I do this the function returns the entire column to each item in my new column. </p> <p>In English I am going through each item y in column Long. If the item is > 0 then take the yth value in column "Currency". Otherwise take the yth value in column "Start".</p> <p>Iteration is extremely slow in running the above. Are there any other options?</p> <p>Thanks! James</p>
<p>Just do </p> <pre><code>df['LS']=np.where(df.Long&gt;0,df.Currency,df.StartDate) </code></pre> <p>which is the proper vectorized approach.</p> <p><code>df.Long.map</code> applies to each row, but actually returns the whole <code>df.Currency</code> or <code>df.StartDate</code>, which are Series.</p> <p>Another approach to consider:</p> <pre><code>df.apply(lambda row : row[1] if row[0]&gt;0 else row[2],1) </code></pre> <p>which will also work with <code>df.columns=Index(['Long', 'Currency', 'StartDate', ...])</code>,</p> <p>but it is not a vectorized approach, so it is slow (200x slower for 1000 rows in this case).</p>
python|pandas|lambda
1
5,003
41,925,592
python plotting data marker
<p>I am trying to make a data marker on a python plot that shows the x and y coordinates, preferably automatically if this is possible. Please keep in mind that I am new to python and do not have any experience using the marker functionality in matplotlib. I have FFT plots from .csv files that I am trying to compare to theoretical calculations, but I need a way of highlighting a specific point and dropping a marker that has the coordinate values similar to MATLAB. For reference, I am plotting an FFT of frequency intensity of a 100kHz sine wave with an amplitude of 1V, so I am trying to show that the spike at 100kHz is close to the calculated value of 3.98dBm in a 50ohm environment. Here is some of the data from my csv file around the point of interest (The third column is of no interest):</p> <pre><code>9.991250000000E+04 -8.399371E+01 0.000000E+00 9.992500000000E+04 -8.108232E+01 0.000000E+00 9.993750000000E+04 -7.181630E+01 0.000000E+00 9.995000000000E+04 -7.190387E+01 0.000000E+00 9.996250000000E+04 -7.961070E+01 0.000000E+00 9.997500000000E+04 -8.090104E+01 0.000000E+00 9.998750000000E+04 -1.479405E+01 0.000000E+00 1.000000000000E+05 3.740311E+00 0.000000E+00 1.000125000000E+05 -6.665535E-01 0.000000E+00 1.000250000000E+05 -7.868803E+01 0.000000E+00 1.000375000000E+05 -8.149953E+01 0.000000E+00 1.000500000000E+05 -7.948487E+01 0.000000E+00 1.000625000000E+05 -7.436191E+01 0.000000E+00 1.000750000000E+05 -8.068216E+01 0.000000E+00 1.000875000000E+05 -7.998886E+01 0.000000E+00 1.001000000000E+05 -8.316663E+01 0.000000E+00 </code></pre> <p>Here is how I am extracting the data</p> <pre><code>Frequency = data[:,0] Intensity = data[:,1] title("Frequency Intensity") xlabel("Frequency [Hz]") ylabel("Intensity [dBm]") plot(Frequency, Intensity) grid(); </code></pre> <p>Edit: I would like my plot to look something like this where x shows the frequency and y shows the intensity in dBm. I simply want the marker I place to show the x,y coordinates on the plot.</p> <p><img src="https://i.stack.imgur.com/QWlPd.jpg" alt="FFT plot desired marker"></p>
<p>Create a <code>pd.Series</code> from <code>data</code></p> <pre><code>s = pd.DataFrame({ 'Frequency [Hz]': data[:, 0], 'Intensity [dBm]': data[:, 1] }).set_index('Frequency [Hz]')['Intensity [dBm]'] </code></pre> <p>Then plot with <a href="http://matplotlib.org/users/annotations_intro.html" rel="nofollow noreferrer"><code>annotate</code></a></p> <pre><code>ax = s.plot(title='Frequency Intensity') ax.set_ylabel(s.name) point = (s.index[7], s.values[7]) ax.annotate('Marker', xy=point, xytext=(0.1, 0.95), textcoords='axes fraction', arrowprops=dict(facecolor='black', shrink=0.05), ) </code></pre> <p><a href="https://i.stack.imgur.com/Y8Hw4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y8Hw4.png" alt="enter image description here"></a></p>
python|python-2.7|pandas|matplotlib
1
5,004
37,865,177
Trouble creating dataframe using pandas: ValueError/data type not understood
<p>I'm a bit new to pandas/numpy and have had trouble with this issue.</p> <p>I have a group of 10 lists, each of which have 58 elements in them (strings). When I try to join them into a dataframe</p> <pre><code>df = pd.Dataframe(a, b, c, d, e, f, g, h, i, j) </code></pre> <p>I get the error <code>"ValueError: Shape of passed values is (1, 58), indices imply (58, 58)"</code></p> <p>I started trying different combinations of the lists to check which list was causing the problem (I checked types and len etc.) and then I started getting the error "data type not understood". </p> <p>I've tried checking similar posts but nothing has been working for me so far. Does anyone have any suggestions for how I can tackle this issue?</p>
<p>Use this:</p> <pre><code>df = pd.DataFrame([a, b, c, d, e, f, g, h, i, j]) </code></pre> <p>The difference:</p> <pre><code># You had df = pd.DataFrame( a, b, c, d, e, f, g, h, i, j ) # ^ \________________________/ # | | # data argument | # stuff pandas thought was something else # New code df = pd.DataFrame([a, b, c, d, e, f, g, h, i, j]) # \____________________________/ # | # data argument </code></pre> <p>The first argument represents the data. According to Python, you were passing 10 arguments; only the first, <code>a</code>, was getting interpreted as the data argument. The way I've told you to do it, all those elements are passed within a list <code>[]</code> and it is that list that pandas will take as the data argument, which is what I think you want.</p>
python|pandas|dataframe
0
5,005
37,851,796
low_memory parameter in read_csv function
<p>What does the <code>low_memory</code> parameter do in the <code>read_csv</code> function from the pandas library?</p>
<p>This comes from the docs themselves. Have you read them?</p> <blockquote> <p>low_memory : boolean, default True Internally process the file in chunks, resulting in lower memory use while parsing, but possibly mixed type inference. To ensure no mixed types either set False, or specify the type with the <code>dtype</code> parameter. Note that the entire file is read into a single DataFrame regardless, use the <code>chunksize</code> or <code>iterator</code> parameter to return the data in chunks. (Only valid with C parser)</p> </blockquote>
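<p>As a minimal sketch (the file name and column name below are only placeholders), the two options from that paragraph look like this:</p> <pre><code>import pandas as pd

# either turn off the chunked, low-memory parsing...
df = pd.read_csv('data.csv', low_memory=False)

# ...or, preferably, declare the dtype of the problematic column up front
df = pd.read_csv('data.csv', dtype={'mixed_column': str})
</code></pre>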
python|pandas|ipython|spyder
1
5,006
37,730,760
np.gradient - correct use?
<p>I'm trying to use np.gradient to calculate a derivative, but I'm getting strange results and want to check that I'm using it correctly to eliminate that as a possible error.</p> <p>A have a function y(x) over a range of equally spaced (but not unity) x-value data points. I compute the derivative by </p> <pre><code>deriv = np.gradient(y, dx) </code></pre> <p>Is this correct application? Some very wild values creep into my results, which only worsen as I iterate this function in a model I'm developing. </p>
<p>Looks right to me. Derivative of sin is cos. When I plot <code>np.gradient</code> of my sin function, it looks identical to when I plot cos directly.</p> <p>An example:</p> <pre><code>import numpy as np import pandas as pd x = np.arange(-2 * np.pi, 2 * np.pi, 0.01) y = np.sin(x) pd.Series(y).plot() </code></pre> <p><a href="https://i.stack.imgur.com/ThnTO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ThnTO.png" alt="enter image description here"></a></p> <pre><code>y2 = np.gradient(y, 0.01) pd.Series(y2).plot() </code></pre> <p><a href="https://i.stack.imgur.com/Xsr30.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xsr30.png" alt="enter image description here"></a></p> <pre><code>y3 = np.cos(x) pd.Series(y3).plot() </code></pre> <p><a href="https://i.stack.imgur.com/4uRfL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4uRfL.png" alt="enter image description here"></a></p>
python|numpy|derivative
1
5,007
31,332,264
pandas plot xticks on x-axis
<p>I have a working code that displays a panda dataframe as 2 line graphs in a chart. I also have a dataframe that displays a bar graph on the same chart. For the 2 dataframes, i have date for the x-axis. Because the two dataframes have dates, my axis end up just having integers (1,2,3,4,5,6...) instead of the date.</p> <p>I thought this line <code>df1 = df.set_index(['date'])</code> specifies what i want as the x-axis already and when i dont plot the bar graph on the chart, the dates show up nicely, but when i do plot the bar graph, the integers show up on the axis instead. </p> <p>My 2 dataframes:</p> <pre><code>df1: date line1 line2 2015-01-01 15.00 23.00 2015-02-01 18.00 10.00 df2: date quant 2015-01-01 500 2015-02-01 600 </code></pre> <p>My code:</p> <pre><code>df1 =pd.DataFrame(result, columns =[ 'date','line1', 'line2']) df1 = df.set_index(['date']) df2 =pd.DataFrame(quantity, columns =[ 'quant','date']) fig = plt.figure() ax = fig.add_subplot(111) ax2=ax.twinx() ax.set_ylim(0,100) ax2.set_ylim(0,2100) df1.line1.plot( color = 'red', ax = ax) df1.line2.plot( color = 'blue', ax = ax) df2.["quant"].plot(kind = 'bar', ax =ax2, width =0.4) plt.show() </code></pre> <hr> <p><img src="https://i.stack.imgur.com/POyDq.png" alt="enter image description here"></p> <pre><code>df1: &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 12 entries, 0 to 11 Data columns (total 3 columns): date 12 non-null object line1 12 non-null float64 line2 12 non-null float64 dtypes: float64(2), object(1) memory usage: 384.0+ bytes None df2 &lt;class 'pandas.core.frame.DataFrame'&gt; Int64Index: 11 entries, 0 to 10 Data columns (total 2 columns): quant 11 non-null int64 date 11 non-null object dtypes: int64(1), object(1) memory usage: 264.0+ bytes None </code></pre>
<p>You can just use <code>ax.plot(df1.date, df1.line1)</code> and <code>matplotlib.pyplot</code> will automatically take care of the date.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt # your data # =================================== np.random.seed(0) df1 = pd.DataFrame(dict(date=pd.date_range('2015-01-01', periods=12, freq='MS'), line1=np.random.randint(10, 30, 12), line2=np.random.randint(20, 25, 12))) Out[64]: date line1 line2 0 2015-01-01 22 22 1 2015-02-01 25 21 2 2015-03-01 10 20 3 2015-04-01 13 21 4 2015-05-01 13 21 5 2015-06-01 17 20 6 2015-07-01 19 21 7 2015-08-01 29 24 8 2015-09-01 28 23 9 2015-10-01 14 20 10 2015-11-01 16 23 11 2015-12-01 22 20 df2 = pd.DataFrame(dict(date=pd.date_range('2015-01-01', periods=12, freq='MS'), quant=100*np.random.randint(3, 10, 12))) Out[66]: date quant 0 2015-01-01 500 1 2015-02-01 600 2 2015-03-01 300 3 2015-04-01 400 4 2015-05-01 600 5 2015-06-01 800 6 2015-07-01 600 7 2015-08-01 600 8 2015-09-01 900 9 2015-10-01 300 10 2015-11-01 400 11 2015-12-01 400 # plotting # =================================== fig, ax = plt.subplots(figsize=(10, 8)) ax.plot(df1.date, df1.line1, label='line1', c='r') ax.plot(df1.date, df1.line2, label='line2', c='b') ax2 = ax.twinx() ax2.set_ylabel('quant') ax2.bar(df2.date, df2.quant, width=20, alpha=0.1, color='g', label='quant') ax.legend(loc='best') ax.set_xticks(ax.get_xticks()[::2]) </code></pre> <p><img src="https://i.stack.imgur.com/VWcW6.png" alt="enter image description here"></p> <p>Follow-up (Updates):</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt # your data # =================================== np.random.seed(0) df1 = pd.DataFrame(dict(date=pd.date_range('2015-01-01', periods=12, freq='MS'), line1=np.random.randint(10, 30, 12), line2=np.random.randint(20, 25, 12))).set_index('date') df2 = pd.DataFrame(dict(date=pd.date_range('2015-01-01', periods=12, freq='MS'), quant=100*np.random.randint(3, 10, 12))).set_index('date') df2 = df2.drop(df2.index[4]) print(df1) print(df2) line1 line2 date 2015-01-01 22 22 2015-02-01 25 21 2015-03-01 10 20 2015-04-01 13 21 2015-05-01 13 21 2015-06-01 17 20 2015-07-01 19 21 2015-08-01 29 24 2015-09-01 28 23 2015-10-01 14 20 2015-11-01 16 23 2015-12-01 22 20 quant date 2015-01-01 500 2015-02-01 600 2015-03-01 300 2015-04-01 400 2015-06-01 800 2015-07-01 600 2015-08-01 600 2015-09-01 900 2015-10-01 300 2015-11-01 400 2015-12-01 400 # plotting # =================================== fig, ax = plt.subplots(figsize=(10, 8)) ax.plot(df1.index, df1.line1, label='line1', c='r') ax.plot(df1.index, df1.line2, label='line2', c='b') ax2 = ax.twinx() ax2.set_ylabel('quant') ax2.bar(df2.index, df2.quant, width=20, alpha=0.1, color='g', label='quant') ax.legend(loc='best') ax.set_xticks(ax.get_xticks()[::2]) </code></pre> <p><img src="https://i.stack.imgur.com/WdNea.png" alt="enter image description here"></p>
python|pandas|matplotlib|plot|dataframe
1
5,008
31,676,627
Unexpected Numpy / Py3k coercion rules
<p>I was looking for a bug in a program, and I discovered that it was produced by an unexpected behavior from Numpy...</p> <p>When doing, e.g., a simple arithmetic operation on different integer types using Python3k and Numpy, like</p> <p>(numpy.uint64) + (int)</p> <p>the result is... a numpy.float64</p> <p>Here's an example:</p> <pre><code>v = numpy.array([10**16+1], dtype=numpy.uint64) print(v[0]) v[0] += 1 print(v[0]) </code></pre> <p>It produce the following result :</p> <pre><code>10000000000000001 10000000000000000 </code></pre> <p>Which can be quite unexpected when you're dealing with integers to avoid rounding errors...</p> <p>The above "problem" can easily be solved by replacing 1 by numpy.uint64(1), but I can see many bugs coming from this. What are the rules and logic behind this situation? Is there any documentation about the way coercions are done in such a case? I couldn't find it.</p> <p>I thought before that you could have some insight on the coercions by using .item() but it's even more misleading :</p> <pre><code>v = numpy.array([10**16+1], dtype=numpy.uint64) print(type(v[0].item())) v[0] = v[0].item() + 1 print(v[0]) </code></pre> <p>produces</p> <pre><code>&lt;class 'int'&gt; 10000000000000001 10000000000000002 </code></pre> <p>So .item() transforms the numpy.uint64 into int, and if you explicitely use it in the arithmetic operation, it works.</p> <p>I'm surprised (but I lack numpy experience, I guess), that, when 'a' corresponds to a numpy specific dtype,</p> <pre><code>a.item() + 1 </code></pre> <p>and</p> <pre><code>a + 1 </code></pre> <p>don't produce the same results... and thus gives different results when converted back to a numpy dtype.</p> <p>(The environment used is an up-to-date Pyzo distribution, via IEP, if it matters. I usually use Python 2, but I had to do a couple test in Py3k, and it was a convenient way to do it.)</p>
<p>As noted above:</p> <p>It works fine with:</p> <pre><code>dtype=np.int64 </code></pre> <p>instead of:</p> <pre><code>dtype=np.uint64 </code></pre> <p>both for python 2 and 3, numpy 1.6 and 1.9. </p> <p>Just use:</p> <pre><code>np.int64 </code></pre> <p>there is no reason to use <code>uint64</code>, overflowing at <code>2⁶⁴ - 1</code> or <code>2⁶³ - 1</code> is pretty much the same thing for all practical purposes.</p> <p><strong>References</strong></p> <ul> <li><a href="https://github.com/numpy/numpy/issues/5745" rel="nofollow noreferrer">numpy Issue #5745: uint64 converted silently to float64 when adding an int</a></li> </ul>
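<p>A quick check of the difference (same setup as in the question, just with the signed dtype):</p> <pre><code>v = numpy.array([10**16+1], dtype=numpy.int64)
v[0] += 1
print(v[0])   # 10000000000000002 -- the addition stays integer, no float64 coercion
</code></pre>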
python|numpy|typing|coerce
0
5,009
64,287,610
How to sum up values of 'D' column for every row with the same combination of values from columns 'A','B' and 'C'?
<p>I need to <strong>sum up values of 'D' column for every row with the same combination of values from columns 'A','B' and 'C</strong>. Eventually I need to create DataFrame with unique combinations of values from columns 'A','B' and 'C' with corresponding sum in column D.</p> <pre><code>import numpy as np df = pd.DataFrame(np.random.randint(0,3,size=(10,4)),columns=list('ABCD')) df OT: A B C D 0 0 2 0 2 1 0 1 2 1 2 0 0 2 0 3 1 2 2 2 4 0 2 2 2 5 0 2 2 2 6 2 2 2 1 7 2 1 1 1 8 1 0 2 0 9 1 2 0 0 </code></pre> <p>I've tried to create temporary data frame with empty cells</p> <pre><code>D = pd.DataFrame([i for i in range(len(df))]).rename(columns = {0:'D'}) D['D'] = '' D OT: D 0 1 2 3 4 5 6 7 8 9 </code></pre> <p>And use apply() to sum up all 'D' column values for unique row consisted of columns 'A','B' and 'C'. For example below line returns sum of values from 'D' column for 'A'=0,'B'=2,'C'=2:</p> <pre><code>df[(df['A']==0) &amp; (df['B']==2) &amp; (df['C']==2)]['D'].sum() OT: 4 </code></pre> <p>function:</p> <pre><code>def Sumup(cols): A = cols[0] B = cols[1] C = cols[2] D = cols[3] sum = df[(df['A']==A) &amp; (df['B']==B) &amp; (df['C']==C)]['D'].sum() return sum </code></pre> <p>apply on df and saved in temp df D['D']:</p> <pre><code>D['D'] = df[['A','B','C','D']].apply(Sumup) </code></pre> <p>Later I wanted to use drop_duplicates but I receive dataframe consisted of NaN's.</p> <pre><code>D OT: D 0 NaN 1 NaN 2 NaN 3 NaN 4 NaN 5 NaN 6 NaN 7 NaN 8 NaN 9 NaN </code></pre> <p>Anyone could give me a hint how to manage the NaN problem or what other approach can I apply to solve the original problem?</p>
<pre><code>df.groupby(['A','B','C']).sum() </code></pre>
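<p>If you want the unique <code>A</code>/<code>B</code>/<code>C</code> combinations back as regular columns instead of an index, pass <code>as_index=False</code> (or call <code>reset_index()</code> on the result):</p> <pre><code>df.groupby(['A','B','C'], as_index=False)['D'].sum()
</code></pre>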
python|pandas|dataframe|apply|nan
1
5,010
64,575,922
How is it possible for Numpy to use comma-separated subscripting with `:`?
<p>Consider the following example:</p> <pre><code>&gt;&gt;&gt; a=np.array([1,2,3,4]) &gt;&gt;&gt; a array([1, 2, 3, 4]) &gt;&gt;&gt; a[np.newaxis,:,np.newaxis] array([[[1], [2], [3], [4]]]) </code></pre> <p>How is it possible for Numpy to use the <code>:</code> (normally used for slicing arrays) as an index when using comma-separated subscripting?</p> <p>If I try to use comma-separated subscripting with either a Python list or a Python list-of-lists, I get a TypeError:</p> <pre><code>&gt;&gt;&gt; [[1,2],[3,4]][0,:] Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: list indices must be integers or slices, not tuple </code></pre> <p>?</p>
<p>Define a simple class with a <code>getitem</code>, indexing method:</p> <pre><code>In [128]: class Foo(): ...: def __getitem__(self, arg): ...: print(type(arg), arg) ...: In [129]: f = Foo() </code></pre> <p>And look at what different indexes produce:</p> <pre><code>In [130]: f[:] &lt;class 'slice'&gt; slice(None, None, None) In [131]: f[1:2:3] &lt;class 'slice'&gt; slice(1, 2, 3) In [132]: f[:, [1,2,3]] &lt;class 'tuple'&gt; (slice(None, None, None), [1, 2, 3]) In [133]: f[:, :3] &lt;class 'tuple'&gt; (slice(None, None, None), slice(None, 3, None)) In [134]: f[(slice(1,None),3)] &lt;class 'tuple'&gt; (slice(1, None, None), 3) </code></pre> <p>For builtin classes like <code>list</code>, a tuple argument raises an error. But that's a class dependent issue, not a syntax one. <code>numpy.ndarray</code> accepts a tuple, as long as it's compatible with its shape.</p> <p>The syntax for a tuple index was added to Python to meet the needs of <code>numpy</code>. I don't think there are any builtin classes that use it.</p> <p>The <code>numpy.lib.index_tricks.py</code> module has several classes that take advantage of this behavior. Look at its code for more ideas.</p> <pre><code>In [137]: np.s_[3:] Out[137]: slice(3, None, None) In [139]: np.r_['0,2,1',[1,2,3],[4,5,6]] Out[139]: array([[1, 2, 3], [4, 5, 6]]) In [140]: np.c_[[1,2,3],[4,5,6]] Out[140]: array([[1, 4], [2, 5], [3, 6]]) </code></pre> <p>other &quot;indexing&quot; examples:</p> <pre><code>In [141]: f[...] &lt;class 'ellipsis'&gt; Ellipsis In [142]: f[[1,2,3]] &lt;class 'list'&gt; [1, 2, 3] In [143]: f[10] &lt;class 'int'&gt; 10 In [144]: f[{1:12}] &lt;class 'dict'&gt; {1: 12} </code></pre> <p>I don't know of any class that makes use of a dict argument, but the syntax allows it.</p>
python|python-3.x|numpy
2
5,011
48,957,011
using sklearn linear regression fit on timeseries + plotting
<p>I have the following timeseries outputted by get_DP():</p> <pre><code> DP date 1900-01-31 0.0357 1900-02-28 0.0362 1900-03-31 0.0371 1900-04-30 0.0379 ... ... 2015-09-30 0.0219 [1389 rows x 1 columns] </code></pre> <p><em>note: There is a DP value for every month from 1900-2015, I simply excluded them to avoid clutter</em></p> <p>I want to use a simple regression on this DataFrame to calculate the alpha &amp; beta (intercept and coefficient resectively) of this financial variable. I have the following code that is intended to do so:</p> <pre><code>reg = linear_model.LinearRegression() df = get_DP() df=df.reset_index() reg.fit(df['date'].values.reshape((1389,1)), df['DP'].values) print("beta: {}".format(reg.coef_)) print("alpha: {}".format(reg.intercept_)) plt.scatter(df['date'].values.reshape((1389,1)), df['DP'].values, color='black') plt.plot(df['date'].values.reshape((1389,1)), df['DP'].values, color='blue', linewidth=3) </code></pre> <p>However, I believe the reshaping of my x-axis data (the dates) messes up the entire regression, because the plot looks like so: <a href="https://i.stack.imgur.com/3PAst.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3PAst.png" alt="plot"></a></p> <p>Am I making a mistake? I'm not entirely sure what the best tool is for regression w/ DataFrame's since pandas removed their OLS function with 0.20.</p>
<p>try this one </p> <pre class="lang-py prettyprint-override"><code>reg = linear_model.LinearRegression() df = get_DP() df=df.reset_index() reg.fit(df.date.values.reshape(-1, 1), df.DP.values.reshape(-1, 1)) print("beta: {}".format(reg.coef_)) print("alpha: {}".format(reg.intercept_)) plt.scatter(df.date.dt.date, df.DP.values, color='black') plt.plot(df.date.dt.date, df.DP.values, color='blue', linewidth=3) </code></pre> <p>See <a href="https://pandas.pydata.org/pandas-docs/stable/reshaping.html" rel="nofollow noreferrer">reshape documentation</a></p>
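<p>Note that scikit-learn needs plain numeric features, so if <code>df.date</code> is still a datetime column it generally has to be converted to numbers before fitting -- for example (a sketch):</p> <pre><code>X = df.date.map(pd.Timestamp.toordinal).values.reshape(-1, 1)
reg.fit(X, df.DP.values.reshape(-1, 1))
</code></pre>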
python|pandas|plot
2
5,012
49,215,929
AttributeError: 'numpy.string_' object has no attribute 'items'
<p>In the following code</p> <pre><code>import time import nltk from nltk import word_tokenize import pandas as pd import numpy as np import matplotlib.pyplot as plt import networkx as nx import community ######################################################################################################################## #Reading all csv name files begin = time.clock() #record start time male_names= pd.read_csv('male_names.csv', ',') female_names= pd.read_csv('female_names.csv', ',') last_names= pd.read_csv('last_names.csv', ',') male_name=male_names['Names'].values female_name=female_names['Names'].values last_name=last_names['Names'].values ######################################################################################################################## #Book Testing, tokenization, creating a dictionary text_file =open("HP1.txt", "r").read() paragraph=text_file.split('\r\r') # Para is a list of strings(paragraphs) #print para lls=[[p] for p in paragraph] # List of lists of strings(paragraphs) tagged=[] for item in lls: for i in item: token=word_tokenize(i) # tokenize inside each paragraph tagged.append(nltk.pos_tag(token)) print tagged #tagged is a list of list of strings with taggs #print tagged[0][0][1] my_dict={} ######################################################################################################################## #Finding all matching names for lst in tagged: for i in range(0,len(lst)): if lst[i][1]=="NNP": # If the tagged is NNP key=lst[i][0] # We take this name as our key if ((key in male_name) or (key in female_name)): # If this key is in our dictionary if (i&lt;=len(lst)-2 and (lst[i+1][0] in last_name)): key=key+" "+lst[i+1][0] if key in my_dict: #We add the keys into out dictionary my_dict[key] += 1 else: my_dict[key] = 1 print my_dict # ######################################################################################################################## # #Find top ten keys word = np.array(my_dict.keys()) count = np.array(my_dict.values()) for i in range(0,len(word)): if " " in word[i]: string=word[i] l=string.split() for item in l: #item 1: Harry item 2: Potter for j in range(0,len(word)): if (i != j and item==word[j]): count[i]=count[i]+count[j] count[j]=0 word[j]="" print "hello" n=0 while(n&lt;len(count)): if count[n]==0: count = np.delete(count,n) word = np.delete(word,n) else: n=n+1 print word print count top = np.array([]) topcount = np.array([]) for i in range(10): max_index = np.argmax(count) top = np.append(top,word[max_index]) topcount = np.append(topcount,count[max_index]) word[max_index] = '' count[max_index] = 0 print print top print print topcount ######################################################################################################################## #initialize a adjacency matrix adj = np.zeros((10,10)) for para in tagged: #for each paragraph name_index = set() #set list to identify all unique names in one paragraph for each in para: #for each word in the paragraph if (each[1]=="NNP"): for i in range(0,len(top)): #iterate the top list to find if NNP is a top name if (each[0] in top[i]): name_index.add(i) #if found, add index of top list break name_index = list(name_index) #print name_index for i in range(0,len(name_index)): for j in range(i+1, len(name_index)): adj[name_index[i]][name_index[j]] +=1 adj[name_index[j]][name_index[i]] +=1 #add the frequency counts to the adj matrix for i in range(0,len(topcount)): print topcount[i] adj[i][i] = topcount[i] print adj 
######################################################################################################################## G=nx.DiGraph(adj) ######################################################################################################################## def pagerank(G, alpha=0.85,max_iter=100, tol=1.0e-6, weight='weight'): if len(G) == 0: return {} # Create a copy in (right) stochastic form W = nx.stochastic_graph(G, weight=weight) N = W.number_of_nodes() # Choose fixed starting vector x = dict.fromkeys(W, 1.0 / N) # Assign uniform personalization vector if not given p = dict.fromkeys(W, 1.0 / N) dangling_weights = p dangling_nodes = [n for n in W if W.out_degree(n, weight=weight) == 0.0] # power iteration: make up to max_iter iterations for _ in range(max_iter): xlast = x x = dict.fromkeys(xlast.keys(), 0) danglesum = alpha * sum(xlast[n] for n in dangling_nodes) for n in x: # this matrix multiply looks odd because it is # doing a left multiply x^T=xlast^T*W for nbr in W[n]: x[nbr] += alpha * xlast[n] * W[n][nbr][weight] x[n] += danglesum * dangling_weights[n] + (1.0 - alpha) * p[n] # check convergence, l1 norm err = sum([abs(x[n] - xlast[n]) for n in x]) if err &lt; N*tol: return x # raise NetworkXError('pagerank: power iteration failed to converge ' # 'in %d iterations.' % max_iter) p_rank=pagerank(G).values() for i in range(0,len(p_rank)): p_rank[i]*=3000 print p_rank ###################################################################################################################### #community detection_ modularity H=G.to_undirected() communities = community.best_partition(H) global_modularity = community.modularity(communities, H) print(global_modularity) values = [communities.get(node) for node in H.nodes()] #edges all_weights = [] for (node1,node2,data) in G.edges(data=True): all_weights.append(data['weight']) #we'll use this when determining edge thickness print all_weights #Plot the edges - one by one pos=nx.spring_layout(G) labeldict = {} #dictionary of node to node names for i in range(0,len(top)): labeldict[i] = top[i] for weight in all_weights: #Form a filtered list with just the weight you want to draw weighted_edges = [(node1,node2) for (node1,node2,edge_attr) in G.edges(data=True) if edge_attr['weight']==weight] #multiplying by [num_nodes/sum(all_weights)] makes the graphs edges look cleaner width = weight/66 nx.draw_networkx_edges(G,pos,edgelist=weighted_edges,width=width,edges_color=values) nx.draw_networkx_nodes(G,pos,node_color=values,node_size=p_rank,with_labels = True) # customize positions of labels and font size pos_new = {} for k, v in pos.items(): pos_new[k] = (v[0], v[1]-0.13) nx.draw_networkx_labels(G,pos=pos_new,labels=labeldict, font_size=14, font_family='ubuntu') #============================================================================== # for k in range(10): # num=np.log(p_rank[k])*7 # nx.draw_networkx_labels(G,pos=pos_new[k],labels=labeldict[k], # font_size=num, # font_family='ubuntu') #============================================================================== #change labels plt.axis('off') plt.show() ######################################################################################################################## end = time.clock() print end - begin #calculate difference (elapsed time) </code></pre> <p>I'm trying to draw a network with networkx. Everything was smooth. 
But after I substituted the chunk of code</p> <pre><code>nx.draw_networkx_labels(G,pos=pos_new,labels=labeldict, font_size=14, font_family='ubuntu') </code></pre> <p>with</p> <pre><code>for k in range(10): num=np.log(p_rank[k])*7 nx.draw_networkx_labels(G,pos=pos_new[k],labels=labeldict[k], font_size=num, font_family='ubuntu') </code></pre> <p>in order to plot a graph with varying font size for labels. The program returns an error: AttributeError: 'numpy.string_' object has no attribute 'items'</p> <pre><code>Traceback (most recent call last): File "&lt;ipython-input-9-06fd195d7391&gt;", line 1, in &lt;module&gt; runfile('C:/Users/Irourong/Desktop/PIC 16/Project/Final/Final/final_project.py', wdir='C:/Users/Irourong/Desktop/PIC 16/Project/Final/Final') File "C:\Users\Irourong\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 866, in runfile execfile(filename, namespace) File "C:\Users\Irourong\Anaconda2\lib\site-packages\spyder\utils\site\sitecustomize.py", line 87, in execfile exec(compile(scripttext, filename, 'exec'), glob, loc) File "C:/Users/Irourong/Desktop/PIC 16/Project/Final/Final/final_project.py", line 262, in &lt;module&gt; font_family='ubuntu') File "C:\Users\Irourong\Anaconda2\lib\site-packages\networkx\drawing\nx_pylab.py", line 791, in draw_networkx_labels for n, label in labels.items(): </code></pre> <p>How can I solve this please?</p> <p>Edit: this is what I got after modifying my code according to @Joel</p> <p><a href="https://i.stack.imgur.com/7c5KL.png" rel="nofollow noreferrer">https://i.stack.imgur.com/7c5KL.png</a></p>
<p>This code</p> <pre><code>for k in range(10): num=np.log(p_rank[k])*7 nx.draw_networkx_labels(G,pos=pos_new[k],labels=labeldict[k], font_size=num, font_family='ubuntu') </code></pre> <p>should be </p> <pre><code>for node in range(10): font_size = np.log(p_rank[node])*7 tmp_labels = {node: labeldict[node]} nx.draw_networkx_labels(G, pos=pos_new, labels = tmp_labels, font_size=font_size, font_family='ubuntu') </code></pre> <p>Here <code>tmp_labels</code> is a dict matching all of the nodes you're interested in labeling on the pass (just one node each pass) with their label.</p>
python|numpy|matplotlib|networkx
1
5,013
59,042,282
Training my simple model for colored images instead of grayscale
<p>I'm new to Python and Deep Learning with Keras. With some tutorials online for cat vs non-cat classification, I was able to compile this simple training code for my classification. However, my target application is <strong>fire</strong> detection so I think I need to use color images instead of this grayscale version (+ is it going to help?!). In other words, to get rid of <code>img = img.convert('L')</code> and train with colors on.</p> <p>While I was trying to increase the number of channels to <code>3</code>, I encountered this error:</p> <pre><code>training_images = np.array([i[0] for i in training_data]).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 3) ValueError: could not broadcast input array from shape (300,300,3) into shape (300,300) </code></pre> <p>How can I solve this error? </p> <p>Here is my original training code:</p> <pre><code>from keras.models import Sequential, load_model from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv2D, MaxPooling2D from keras.layers.normalization import BatchNormalization from PIL import Image from random import shuffle, choice import numpy as np import os IMAGE_SIZE = 256 IMAGE_DIRECTORY = './data/test_set' def label_img(name): if name == 'cats': return np.array([1, 0]) elif name == 'notcats' : return np.array([0, 1]) def load_data(): print("Loading images...") train_data = [] directories = next(os.walk(IMAGE_DIRECTORY))[1] for dirname in directories: print("Loading {0}".format(dirname)) file_names = next(os.walk(os.path.join(IMAGE_DIRECTORY, dirname)))[2] for i in range(200): image_name = choice(file_names) image_path = os.path.join(IMAGE_DIRECTORY, dirname, image_name) label = label_img(dirname) if "DS_Store" not in image_path: img = Image.open(image_path) img = img.convert('L') img = img.resize((IMAGE_SIZE, IMAGE_SIZE), Image.ANTIALIAS) train_data.append([np.array(img), label]) return train_data def create_model(): model = Sequential() model.add(Conv2D(32, kernel_size = (3, 3), activation='relu', input_shape=(IMAGE_SIZE, IMAGE_SIZE, 1))) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(64, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(128, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(128, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Conv2D(64, kernel_size=(3,3), activation='relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(BatchNormalization()) model.add(Dropout(0.2)) model.add(Flatten()) model.add(Dense(256, activation='relu')) model.add(Dropout(0.2)) model.add(Dense(64, activation='relu')) model.add(Dense(2, activation = 'softmax')) return model training_data = load_data() training_images = np.array([i[0] for i in training_data]).reshape(-1, IMAGE_SIZE, IMAGE_SIZE, 1) training_labels = np.array([i[1] for i in training_data]) print('creating model') model = create_model() model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy']) print('training model') model.fit(training_images, training_labels, batch_size=50, epochs=10, verbose=1) model.save("model.h5") </code></pre>
<p>For transforming your images, you have to duplicate the channels of your single image. Note that you only need the converted image after duplicating the channels. You can write a function that does the following and pass it to the comprehension list:</p> <pre><code>&gt;&gt;&gt; img = np.random.randint(low=0,high=255, size=(330,330)) &gt;&gt;&gt; converted=np.empty([330,330,3], dtype=np.uint8) &gt;&gt;&gt; red = img.copy() &gt;&gt;&gt; blue = img.copy() &gt;&gt;&gt; green = img.copy() &gt;&gt;&gt; converted[:,:,0]= red &gt;&gt;&gt; converted[:,:,1]= blue &gt;&gt;&gt; converted[:,:,2]= green &gt;&gt;&gt; from PIL import Image &gt;&gt;&gt; s = Image.fromarray(converted,'RGB') &gt;&gt;&gt; s.save("rand_image.png") &gt;&gt;&gt; s.show() </code></pre>
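<p>A shorter way to get the same three-channel array (a sketch, assuming <code>img</code> is the 2-D grayscale array) is to repeat it along a new last axis:</p> <pre><code>converted = np.repeat(img[:, :, np.newaxis], 3, axis=2).astype(np.uint8)
</code></pre>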
python|tensorflow|keras|deep-learning|classification
0
5,014
58,940,304
Split output variable into similarly sized dataframes and merge those
<p>My problem: Trying to <strong>split an output variable into similarly sized dataframes and merge those</strong>.</p> <p>Model output: "var"</p> <pre><code>{('Product1', 0): &lt;gurobi.Var listing[Product1,0] (value 1.0)&gt;, ('Product1', 1): &lt;gurobi.Var listing[Product1,1] (value 0.0)&gt;, ('Product1', 2): &lt;gurobi.Var listing[Product1,2] (value 0.0)&gt;, ('Product1', 3): &lt;gurobi.Var listing[Product1,3] (value 0.0)&gt;, ('Product2', 0): &lt;gurobi.Var listing[Product2,0] (value 1.0)&gt;, ('Product2', 1): &lt;gurobi.Var listing[Product2,1] (value 0.0)&gt;, ('Product2', 2): &lt;gurobi.Var listing[Product2,2] (value 0.0)&gt;, ('Product2', 3): &lt;gurobi.Var listing[Product2,3] (value 0.0)&gt;, ('Product3', 0): &lt;gurobi.Var listing[Product3,0] (value 1.0)&gt;, ('Product3', 1): &lt;gurobi.Var listing[Product3,1] (value 0.0)&gt;, ('Product3', 2): &lt;gurobi.Var listing[Product3,2] (value 0.0)&gt;, ('Product3', 3): &lt;gurobi.Var listing[Product3,3] (value 0.0)&gt;} &lt;class 'gurobipy.tupledict'&gt; </code></pre> <p>Desired output: The desired output should look like this:</p> <pre><code> 0 1 2 3 Product1 1.0 0.0 0.0 0.0 Product2 1.0 0.0 0.0 0.0 Product3 0.0 0.0 0.0 1.0 &lt;class 'pandas.core.frame.DataFrame'&gt; </code></pre> <p>My (very manual) attempt:</p> <p>1) I turned the output variable into a dataframe "df_listing":</p> <pre><code>dict_listing = {k : v.X for k,v in var.items()} df_listing = pd.DataFrame.from_dict(dict_listing, orient='index') df_listing = df_listing.rename(columns = {0: 'listing'}) listing (Product1, 0) 1.0 (Product1, 1) 0.0 (Product1, 2) 0.0 (Product1, 3) 0.0 (Product2, 0) 1.0 (Product2, 1) 0.0 (Product2, 2) 0.0 (Product2, 3) 0.0 (Product3, 0) 0.0 (Product3, 1) 0.0 (Product3, 2) 0.0 (Product3, 3) 1.0 &lt;class 'pandas.core.frame.DataFrame'&gt; </code></pre> <p>2) Transpose "df_listing":</p> <pre><code>df_listing = df_listing.transpose() </code></pre> <p>3) Use k, which is the number of columns - in this case it is 4 --> 0,1,2,3</p> <pre><code>df_Product1 = df_listing.iloc[:, 0*k:1*k] df_Product1.columns = list(range(k)) df_Product2 = df_listing.iloc[:, 1*k:2*k] df_Product2.columns = list(range(k)) df_Product3 = df_listing.iloc[:, 2*k:3*k] df_Product3.columns = list(range(k)) </code></pre> <p>4) Concatenate the three dataframes</p> <pre><code>input = [df_Product1, df_Product2, df_Product3] df_facingsProductAll = pd.concat(input) </code></pre> <p>My attempt was very manual, so I am looking for a) a more <strong>automized solution</strong>, probably using a for loop and b) have a more <strong>dynamic code so that the input could be more products</strong>, e.g. 5 products,</p> <p>Thanks for your help guys and girls!</p>
<p>You can try from this</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from io import StringIO txt = """ listing (Product1,0) 1.0 (Product1,1) 0.0 (Product1,2) 0.0 (Product1,3) 0.0 (Product2,0) 1.0 (Product2,1) 0.0 (Product2,2) 0.0 (Product2,3) 0.0 (Product3,0) 0.0 (Product3,1) 0.0 (Product3,2) 0.0 (Product3,3) 1.0""" df = pd.read_csv(StringIO(txt), delim_whitespace=True) df = df.reset_index() # split index and concat to df df = pd.concat([df, pd.DataFrame(df["index"].str.split(",")\ .values.tolist(), columns=["a","b"])], axis=1) df = df.drop("index", axis=1) # remove brackets df["a"] = df["a"].str[1:] df["b"] = df["b"].str[:-1].astype(int) out = pd.pivot_table(df, index="a", columns="b", values="listing") </code></pre> <p>Output</p> <pre class="lang-py prettyprint-override"><code>b 0 1 2 3 a Product1 1.0 0.0 0.0 0.0 Product2 1.0 0.0 0.0 0.0 Product3 0.0 0.0 0.0 1.0 </code></pre> <p><strong>UPDATE</strong> </p> <p>In case you have a whitespace as <code>(Product1, 0)</code> you can procede as following:</p> <pre class="lang-py prettyprint-override"><code>txt = """ listing (Product1, 0) 1.0 (Product1, 1) 0.0 (Product1, 2) 0.0 (Product1, 3) 0.0 (Product2, 0) 1.0 (Product2, 1) 0.0 (Product2, 2) 0.0 (Product2, 3) 0.0 (Product3, 0) 0.0 (Product3, 1) 0.0 (Product3, 2) 0.0 (Product3, 3) 1.0""" df = pd.read_csv(StringIO(txt), delim_whitespace=True) df = df.reset_index()\ .rename(columns={"level_0":"a", "level_1":"b"}) # remove brackets df["a"] = df["a"].str[1:-1] df["b"] = df["b"].str[:-1].astype(int) </code></pre>
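<p>If the original <code>var</code> tupledict is still available, a more direct route (a sketch, with no text parsing at all) is to build a Series with a two-level index from the tuple keys and unstack it:</p> <pre><code>import pandas as pd

vals = {k: v.X for k, v in var.items()}   # e.g. {('Product1', 0): 1.0, ...}
s = pd.Series(list(vals.values()),
              index=pd.MultiIndex.from_tuples(list(vals.keys())))
df_facingsProductAll = s.unstack()        # products as rows, 0..k-1 as columns
</code></pre> <p>This also stays dynamic for any number of products and columns.</p>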
python|pandas|gurobi
1
5,015
58,664,602
Why do I lose indexes and column header information on using np.hstack when concatenating two df in python?
<p>I have two dataframes: Reprex:</p> <p>DF1</p> <pre><code>X Yes No Maybe </code></pre> <p>DF2</p> <pre><code>Y Yes No Maybe import pandas as pd import numpy as np train = pd.DataFrame(np.hstack([DF1,DF2])) </code></pre> <p>train</p> <pre><code>0 1 Yes Yes No No Maybe Maybe </code></pre> <p>Why do my headers change from X and Y. the train df should keep my original headers from both df. I tried making axis=1 and headers=true but it did not work. pd.concat is not effective because i end up with more rows than what are in my original df.</p> <p>I also tried</p> <pre><code>df.reset_index() </code></pre> <p>but even after that pd.concat gave me more rows than my original two dataframes have. </p>
<p>The reason is that <em>Numpy</em> methods operate not on DataFrames, but on underlying <em>Numpy</em> arrays, <strong>without</strong> any index or column data (indices of rows and columns names).</p> <p>To check this, run: <code>np.hstack([DF1, DF2])</code> and you will get:</p> <pre><code>array([['Yes', 'Yes'], ['No', 'No'], ['Maybe', 'Maybe']], dtype=object) </code></pre> <p>To keep column names, use e.g.:</p> <pre><code>pd.concat([DF1, DF2], axis=1) </code></pre> <p>This time the result will be:</p> <pre><code> X Y 0 Yes Yes 1 No No 2 Maybe Maybe </code></pre>
pandas|numpy|dataframe|concat
1
5,016
55,975,707
Replace a string with a shorter version of itself using pandas
<p>I have a pandas dataframe with one column of model variables and their corresponding statistics in another column. I've done some string manipulation to get a derived summary table to join the summary table from the model.<br> <code>lost_cost_final_table.loc[lost_cost_final_table['variable'].str.contains('class_cc', case = False), 'variable'] = lost_cost_final_table['variable'].str[:8]</code></p> <p>Full traceback.</p> <pre><code>--------------------------------------------------------------------------- ValueError Traceback (most recent call last) &lt;ipython-input-229-1dbe5bd14d4b&gt; in &lt;module&gt; ----&gt; 1 lost_cost_final_table.loc[lost_cost_final_table['variable'].str.contains('class_cc', case = False), 'variable'] = lost_cost_final_table['variable'].str[:8] 2 #lost_cost_final_table.loc[lost_cost_final_table['variable'].str.contains('class_v_age', case = False), 'variable'] = lost_cost_final_table['variable'].str[:11] 3 #lost_cost_final_table.loc[lost_cost_final_table['variable'].str.contains('married_age', case = False), 'variable'] = lost_cost_final_table['variable'].str[:11] 4 #lost_cost_final_table.loc[lost_cost_final_table['variable'].str.contains('state_model', case = False), 'variable'] = lost_cost_final_table['variable'].str[:11] 5 C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexing.py in __setitem__(self, key, value) 187 key = com._apply_if_callable(key, self.obj) 188 indexer = self._get_setitem_indexer(key) --&gt; 189 self._setitem_with_indexer(indexer, value) 190 191 def _validate_key(self, key, axis): C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexing.py in _setitem_with_indexer(self, indexer, value) 467 468 if isinstance(value, ABCSeries): --&gt; 469 value = self._align_series(indexer, value) 470 471 info_idx = indexer[info_axis] C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexing.py in _align_series(self, indexer, ser, multiindex_indexer) 732 return ser._values.copy() 733 --&gt; 734 return ser.reindex(new_ix)._values 735 736 # 2 dims C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\series.py in reindex(self, index, **kwargs) 3323 @Appender(generic._shared_docs['reindex'] % _shared_doc_kwargs) 3324 def reindex(self, index=None, **kwargs): -&gt; 3325 return super(Series, self).reindex(index=index, **kwargs) 3326 3327 def drop(self, labels=None, axis=0, index=None, columns=None, C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in reindex(self, *args, **kwargs) 3687 # perform the reindex on the axes 3688 return self._reindex_axes(axes, level, limit, tolerance, method, -&gt; 3689 fill_value, copy).__finalize__(self) 3690 3691 def _reindex_axes(self, axes, level, limit, tolerance, method, fill_value, C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in _reindex_axes(self, axes, level, limit, tolerance, method, fill_value, copy) 3705 obj = obj._reindex_with_indexers({axis: [new_index, indexer]}, 3706 fill_value=fill_value, -&gt; 3707 copy=copy, allow_dups=False) 3708 3709 return obj C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\generic.py in _reindex_with_indexers(self, reindexers, fill_value, copy, allow_dups) 3808 fill_value=fill_value, 3809 allow_dups=allow_dups, -&gt; 3810 copy=copy) 3811 3812 if copy and new_data is self._data: C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\internals.py in reindex_indexer(self, new_axis, indexer, axis, fill_value, allow_dups, copy) 4412 # some axes don't allow reindexing with dups 4413 if not allow_dups: -&gt; 4414 
self.axes[axis]._can_reindex(indexer) 4415 4416 if axis &gt;= self.ndim: C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexes\base.py in _can_reindex(self, indexer) 3574 # trying to reindex on an axis with duplicates 3575 if not self.is_unique and len(indexer): -&gt; 3576 raise ValueError("cannot reindex from a duplicate axis") 3577 3578 def reindex(self, target, method=None, level=None, limit=None, ValueError: cannot reindex from a duplicate axis </code></pre> <p>However, when I replace with example, it works and the only difference is the data frame name. See below. I don't see where the difference between the two codes lines are. Any ideas?</p> <pre><code> variable = ['class_cc-Harley', 'class_cc_Sport', 'class_cc_Other', 'unit_driver_experience'] unique_value = [1200, 1400, 700, 45] p_value = [.0001, .0001, .0001, .049] dic = {'variable': variable, 'unique_value':unique_value, 'p_value':p_value} df = pd.DataFrame(dic) df.loc[df['variable'].str.contains('class_cc', case = False), 'variable'] = df['variable'].str[:8] </code></pre>
<p>The index of <code>lost_cost_final_table</code> is not unique, which can be fixed by running <code>reset_index</code>:</p> <pre><code>lost_cost_final_table.reset_index(inplace=True) </code></pre>
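<p>If you want to see which index labels are duplicated before resetting, a quick check is:</p> <pre><code>lost_cost_final_table[lost_cost_final_table.index.duplicated(keep=False)]
</code></pre>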
python|pandas
1
5,017
64,971,704
How to mix many distributions in one tensorflow probability layer?
<p>I have several <code>DistributionLambda</code> layers as the outputs of one model, and I would like to make a Concatenate-like operation into a new layer, in order to have only one output that is the mix of all the distributions, assuming they are independent. Then, I can apply a log-likelihood loss to the output of the model. Otherwise, I cannot apply the loss over a <code>Concatenate</code> layer, because it lost the <code>log_prob</code> method. I have been trying with the <code>Blockwise</code> distribution, but with no luck so far.</p> <p>Here an example code:</p> <pre class="lang-py prettyprint-override"><code>from tensorflow.keras import layers from tensorflow.keras import models from tensorflow.keras import optimizers from tensorflow_probability import distributions from tensorflow_probability import layers as tfp_layers def likelihood_loss(y_true, y_pred): &quot;&quot;&quot;Adding negative log likelihood loss.&quot;&quot;&quot; return -y_pred.log_prob(y_true) def distribution_fn(params): &quot;&quot;&quot;Distribution function.&quot;&quot;&quot; return distributions.Normal( params[:, 0], math.log(1.0 + math.exp(params[:, 1]))) output_steps = 3 ... lstm_layer = layers.LSTM(10, return_state=True) last_layer, l_h, l_c = lstm_layer(last_layer) lstm_state = [l_h, l_c] dense_layer = layers.Dense(2) last_layer = dense_layer(last_layer) last_layer = tfp_layers.DistributionLambda( make_distribution_fn=distribution_fn)(last_layer) output_layers = [last_layer] # Get output sequence, re-injecting the output of each step for number in range(1, output_steps): last_layer = layers.Reshape((1, 1))(last_layer) last_layer, l_h, l_c = lstm_layer(last_layer, initial_state=lstm_states) # Storing state for next time step lstm_states = [l_h, l_c] last_layer = tfp_layers.DistributionLambda( make_distribution_fn=distribution_fn)(dense_layer(last_layer)) output_layers.append(last_layer) # This does not work # last_layer = distributions.Blockwise(output_layers) # This works for the model but cannot compute loss # last_layer = layers.Concatenate(axis=1)(output_layers) the_model = models.Model(inputs=[input_layer], outputs=[last_layer]) the_model.compile(loss=likelihood_loss, optimizer=optimizers.Adam(lr=0.001)) </code></pre>
<p>The problem is your Input, not your output layer ;)</p> <p>Input:0 is referenced in your error message. Could you try to be more specific about your input?</p>
python|tensorflow|keras|tf.keras|tensorflow-probability
0
5,018
39,731,669
Merge Variables in Keras
<p>I'm building a convolutional neural network with Keras and would like to add a single node with the standard deviation of my data before the last fully connected layer.</p> <p>Here's a minimum code to reproduce the error:</p> <pre><code>from keras.layers import merge, Input, Dense from keras.layers import Convolution1D, Flatten from keras import backend as K input_img = Input(shape=(64, 4)) x = Convolution1D(48, 3, activation='relu', init='he_normal')(input_img) x = Flatten()(x) std = K.std(input_img, axis=1) x = merge([x, std], mode='concat', concat_axis=1) output = Dense(100, activation='softmax', init='he_normal')(x) </code></pre> <p>This results in the following <code>TypeError</code>:</p> <pre><code>----------------------------------------------------------------- TypeError Traceback (most recent call last) &lt;ipython-input-117-c1289ebe610e&gt; in &lt;module&gt;() 6 x = merge([x, std], mode='concat', concat_axis=1) 7 ----&gt; 8 output = Dense(100, activation='softmax', init='he_normal')(x) /home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/engine/topology.pyc in __call__(self, x, mask) 486 '`layer.build(batch_input_shape)`') 487 if len(input_shapes) == 1: --&gt; 488 self.build(input_shapes[0]) 489 else: 490 self.build(input_shapes) /home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/layers/core.pyc in build(self, input_shape) 701 702 self.W = self.init((input_dim, self.output_dim), --&gt; 703 name='{}_W'.format(self.name)) 704 if self.bias: 705 self.b = K.zeros((self.output_dim,), /home/ubuntu/anaconda2/envs/tensorflow/lib/python2.7/site-packages/keras/initializations.pyc in he_normal(shape, name, dim_ordering) 64 ''' 65 fan_in, fan_out = get_fans(shape, dim_ordering=dim_ordering) ---&gt; 66 s = np.sqrt(2. / fan_in) 67 return normal(shape, s, name=name) 68 TypeError: unsupported operand type(s) for /: 'float' and 'NoneType' </code></pre> <p>Any idea why?</p>
<p><code>std</code> is no Keras layer so it does not satisfy the layer input/output shape interface. The solution to this is to use a <a href="https://keras.io/layers/core/#lambda" rel="nofollow"><code>Lambda</code></a> layer wrapping <code>K.std</code>:</p> <pre><code>from keras.layers import merge, Input, Dense, Lambda from keras.layers import Convolution1D, Flatten from keras import backend as K input_img = Input(shape=(64, 4)) x = Convolution1D(48, 3, activation='relu', init='he_normal')(input_img) x = Flatten()(x) std = Lambda(lambda x: K.std(x, axis=1))(input_img) x = merge([x, std], mode='concat', concat_axis=1) output = Dense(100, activation='softmax', init='he_normal')(x) </code></pre>
python|tensorflow|keras
1
5,019
69,661,048
Stacking multiple arrays with multiple dimensions python
<p>I am trying to create and stack multiple multi-dimensional arrays in python and I seem not to be able to get it right.</p> <p>I have:</p> <pre><code> y_0 = np.random.uniform(-1.0,1.0, size=(1,1,s_conn.weights.shape[0],1)) y_1 = np.random.uniform(-500.0, 500.0, size=(1,1,s_conn.weights.shape[0],1)) y_2 = np.random.uniform(-50.,50., size=(1,1,s_conn.weights.shape[0],1)) y_3 = np.random.uniform(-6.0, 6.0, size=(1,1,s_conn.weights.shape[0],1)) y_4 = np.random.uniform(-20.0, 20.0, size=(1,1,s_conn.weights.shape[0],1)) y_5 = np.random.uniform(-500.,500., size=(1,1,s_conn.weights.shape[0],1)) </code></pre> <p>where s.conn is NxN matrix.</p> <p>And each array has dims: (1, 1, 10, 1) What I need is an array of shape: (1, 6, 10, 1)</p> <p>How do I get that? I tried np.stack and manual creation and reshaping but I keep getting strange/ incorrect outcomes.</p> <p>I would be grateful for a bit of advice.</p> <p>js</p>
<p>np.stack joins a sequence of arrays along a new axis, so it will create an undesired dimension to your matrix, consider using np.concatenate.</p> <pre><code>np.concatenate([y_0,y_1,y_2,y_3,y_4,y_5],axis = 1) </code></pre> <p>Note, your can also use Array.squeeze() to remove undesired dimensions (that may arise when using np.stach without using reshaping, for example the same result can be achived with:</p> <pre><code>np.stack([y_0,y_1,y_2,y_3,y_4,y_5],axis = 1).squeeze(axis=2) </code></pre>
python|arrays|numpy
0
5,020
69,451,925
Numpy matmul, treat each row in matrix as individual row vectors
<p>I have a code below:</p> <pre><code>import numpy as np wtsarray # shape(5000000,21) covmat # shape(21,21) portvol = np.zeros(shape=(wtsarray.shape[0],)) for i in range(0, wtsarray.shape[0]): portvol[i] = np.sqrt(np.dot(wtsarray[i].T, np.dot(covmat, wtsarray[i]))) * np.sqrt(mtx) </code></pre> <p>Nothing wrong with the above code, except that there's 5 million rows of row vector, and the for loop can be a little slow, I was wondering if you guys know of a way to vectorise it, so far I have tried with little success.</p> <p>Or if there is any way to treat each individual row in a numpy matrix as a row vector and perform the above operation?</p> <p>Thanks, if there are any suggestions on rephrasing my questions, please let me know as well.</p>
<pre class="lang-py prettyprint-override"><code>portvol = np.sqrt(np.sum(wtsarray * (wtsarray @ covmat.T), axis=1)) * np.sqrt(mtx) </code></pre> <p>should give you what you want. It replaces the first <code>np.dot</code> with elementwise multiplication followed by summation and it replaces the second <code>np.dot(covmat, wtsarray[i])</code> with matrix multiplication, <code>wtsarray @ covmat.T</code>.</p>
python|numpy
2
5,021
40,873,279
Geopandas : sort a sample of points like a cycle graph
<p>I'm trying out geopandas to manipulate some point data. My final GeoDataFrame is shown here:</p> <p><a href="https://i.stack.imgur.com/UgKLf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UgKLf.png" alt="20 little points"></a></p> <p>In order to use <a href="http://geoffboeing.com/2016/11/osmnx-python-street-networks/" rel="nofollow noreferrer">another Python module</a> which calculates the shortest road between two points with OSM data, I must sort my points <a href="https://en.wikipedia.org/wiki/Cycle_graph" rel="nofollow noreferrer">like a tour</a>. </p> <p>If not, the next module still calculates the shortest road, but not necessarily between the nearest points. And the main problem is the constraint of a tour. </p> <p>If my points were only on a line, a basic sort on the latitude and longitude of each point would be enough, like:</p> <pre><code>df1 = pd.read_csv("file.csv", sep = ",") df1 = df1.sort_values(['Latitude','Longitude'], ascending = [1,1]) # (I'm starting with a pandas df before GeoDataFrame conversion) </code></pre> <p>If we start from the "upper" point of the previous picture following this sorting, the second point of the DataFrame will be the nearest to it, etc... until the fifth point, which is on the right of the picture (so not the nearest anymore)...</p> <p>So my question is: does someone know how to achieve this special kind of sorting, or must I change my index manually?</p>
<p>If I understand your question correctly, you want to rearrange the order of points in a way that they would create the shortest possible path. </p> <p>I have run into the same problem also. Here is a function that accepts a regular dataframe (i.e. with separate fields for each coordinate). I am sure you will be able to modify it to accept a geodataframe, or to split the geometry field into x and y fields first. It assumes <code>math</code>, <code>pandas as pd</code> and <code>shapely.geometry.Point</code> are imported.</p> <pre><code>def autoroute_points_df(points_df, x_col="e",y_col="n"): ''' Function, that converts a list of random points into ordered points, searching for the shortest possible distance between the points. Author: Marjan Moderc, 2016 ''' points_list = points_df[[x_col,y_col]].values.tolist() # arrange points by ascending Y or X points_we = sorted(points_list, key=lambda x: x[0]) points_sn = sorted(points_list, key=lambda x: x[1]) # Calculate the general direction of points (North-South or West-East) - In order to decide where to start the path! westmost_point = points_we[0] eastmost_point = points_we[-1] deltay = eastmost_point[1] - westmost_point[1] deltax = eastmost_point[0] - westmost_point[0] alfa = math.degrees(math.atan2(deltay, deltax)) azimut = (90 - alfa) % 360 # If the main direction is towards east (45°-135°), take the westmost point as the starting point. if (azimut &gt; 45 and azimut &lt; 135): points_list = points_we elif azimut &gt; 180: raise Exception("Error while computing the azimuth! It can't be bigger than 180 since the first point is west and the second is east.") else: points_list = points_sn # Create output (ordered df) and populate it with the first one already. ordered_points_df = pd.DataFrame(columns=points_df.columns) ordered_points_df = ordered_points_df.append(points_df.loc[(points_df[x_col]==points_list[0][0]) &amp; (points_df[y_col]==points_list[0][1])]) for iteration in range(0, len(points_list) - 1): already_ordered = ordered_points_df[[x_col,y_col]].values.tolist() current_point = already_ordered[-1] # current point possible_candidates = [i for i in points_list if i not in already_ordered] # list of candidates distance = 10000000000000000000000 best_candidate = None for candidate in possible_candidates: current_distance = Point(current_point).distance(Point(candidate)) if current_distance &lt; distance: best_candidate = candidate distance = current_distance ordered_points_df = ordered_points_df.append(points_df.loc[(points_df[x_col]==best_candidate[0]) &amp; (points_df[y_col]==best_candidate[1])]) return ordered_points_df </code></pre> <p>Hope it solves your problem!</p>
python|sorting|pandas|geopandas
1
5,022
53,923,354
Reorder columns based on column suffixes
<p>This is my code:</p> <pre><code>all_data = pd.merge(all_data, meanData, suffixes=["", "_mean"], how='left', on=['id', 'id2']) </code></pre> <p>Now, I want to merge <code>all_data</code> and <code>meanData</code>, but I want the columns of meanData to appear first. </p> <p>Like this:</p> <blockquote> <p>a_mean,b_mean,c_mean,a,b,c</p> </blockquote> <p>Not like this</p> <blockquote> <p>a,b,c,a_mean,b_mean,c_mean</p> </blockquote> <p>Note: I have a lot of columns, So i do not want to manually write code to change index.</p> <p>Sample Code (you can reproduce):</p> <pre><code>import pandas df = pd.DataFrame([[0,1, 2], [0,1, 3], [0,4, 6],[1,3,4],[1,4,2]], columns=['id','A', 'B']) features = ['A','B'] meanData = df.groupby(['id'])[features].agg('mean') df = pd.merge(df, meanData, suffixes=["", "_mean"], how='left', on=['id']) print(df.columns) </code></pre> <p>Output</p> <blockquote> <p>Index(['id', 'A', 'B', 'A_mean', 'B_mean'], dtype='object')</p> </blockquote> <p>Expected output:</p> <blockquote> <p>Index(['A_mean', 'B_mean','id', 'A', 'B'], dtype='object')</p> </blockquote>
<p>I think you can use <a href="https://pandas.pydata.org/pandas-docs/stable/groupby.html#transformation" rel="nofollow noreferrer"><code>transform</code></a> after the <code>groupby</code> to get the <code>mean</code> related to each row, and then <code>pd.concat</code> the dataframes such as:</p> <pre><code>new_df = pd.concat([(df.groupby('id')[features] .transform(np.mean).add_suffix('_mean')), df], axis=1) print (new_df) A_mean B_mean id A B 0 2.0 3.666667 0 1 2 1 2.0 3.666667 0 1 3 2 2.0 3.666667 0 4 6 3 3.5 3.000000 1 3 4 4 3.5 3.000000 1 4 2 </code></pre>
python|pandas
2
5,023
38,069,417
"ImportError: cannot import name SkipTest" while importing numpy in python
<p>I am having problem while importing numpy.</p> <p>Please fin below version information:</p> <pre><code># cat /etc/redhat-release CentOS release 6.5 (Final) # python -V Python 2.6.6 </code></pre> <p>I have already installed numpy (re-installed several times) using pip.</p> <pre><code># pip install numpy </code></pre> <p>However, when I try to import numpy, it shows error as below:</p> <pre><code># python Python 2.6.6 (r266:84292, Jul 23 2015, 15:22:56) [GCC 4.4.7 20120313 (Red Hat 4.4.7-11)] on linux2 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; &gt;&gt;&gt; import numpy Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; File "/usr/lib64/python2.6/site-packages/numpy/__init__.py", line 180, in &lt;module&gt; from . import add_newdocs File "/usr/lib64/python2.6/site-packages/numpy/add_newdocs.py", line 13, in &lt;module&gt; from numpy.lib import add_newdoc File "/usr/lib64/python2.6/site-packages/numpy/lib/__init__.py", line 8, in &lt;module&gt; from .type_check import * File "/usr/lib64/python2.6/site-packages/numpy/lib/type_check.py", line 11, in &lt;module&gt; import numpy.core.numeric as _nx File "/usr/lib64/python2.6/site-packages/numpy/core/__init__.py", line 58, in &lt;module&gt; from numpy.testing.nosetester import _numpy_tester File "/usr/lib64/python2.6/site-packages/numpy/testing/__init__.py", line 12, in &lt;module&gt; from . import decorators as dec File "/usr/lib64/python2.6/site-packages/numpy/testing/decorators.py", line 21, in &lt;module&gt; from .utils import SkipTest ImportError: cannot import name SkipTest &gt;&gt;&gt; </code></pre> <p>I have even tried with "yum reinstall python". In all cases(numpy and python itself) installation completed successfully. But still having same error as above "ImportError: cannot import name SkipTest".</p> <p>Any solution of workaround will be greatly appreciated.</p> <p>Thanks, Obaid</p>
<p>It seems I had to upgrade my Python version. In fact, after installing Python 2.7 everything works smoothly.</p> <p>I installed Python 2.7 followed by numpy as below:</p> <pre><code>yum install python27 scl enable python27 bash pip2.7 install numpy </code></pre> <p>Then I was able to import numpy from the python2.7 CLI.</p> <p>Thanks, Obaid</p>
python|numpy|importerror
1
5,024
38,484,896
R Iterate a calculation from a variable
<p>I have a database of postal codes. For each postal code, I want to create 4 variables, which are the year, the month, the day and the hour, from 01.01.2008 to 30.06.2008. The goal is to create an indicator that counts the number of alarms that have been pulled.</p>
<p>I tried this:</p> <p>for (elt in c$CP), i.e. for each elt do:</p> <pre><code> time_index=seq(from = as.POSIXct("2008-01-01 00:00"), to = as.POSIXct("2016-06-30 23:00"), by = "hour") </code></pre> <p>At the end I want a single database where I have 2 columns, CP (postal code) and time_index. I will have to append each database to make it unique.</p>
r|database|pandas|data-science
0
5,025
38,383,477
numpy meshgrid filter out points
<p>I have a meshgrid in numpy. I make some calculations on the points. I want to filter out points that could not be calculated for some reason (division by zero).</p> <pre><code>from numpy import arange, array Xout = arange(-400, 400, 20) Yout = arange(0, 400, 20) Zout = arange(0, 400, 20) Xout_3d, Yout_3d, Zout_3d = numpy.meshgrid(Xout,Yout,Zout) #some calculations # for example b = z / ( y - x ) </code></pre>
<p>To perform <code>z / ( y - x )</code> using those <code>3D</code> mesh arrays, you can create a mask of the valid entries: the ones where the pair of <code>y</code> and <code>x</code> values isn't identical. This mask would be of shape <code>(M,N)</code>, where <code>M</code> and <code>N</code> are the lengths of the <code>Y</code> and <code>X</code> axes respectively. To get such a mask to span across all combinations between <code>X</code> and <code>Y</code>, we could use <a href="http://docs.scipy.org/doc/numpy-1.10.4/user/basics.broadcasting.html" rel="nofollow"><code>NumPy's broadcasting</code></a>. Thus, the mask would look like this:</p> <pre><code>mask = Yout[:,None] != Xout </code></pre> <p>Finally, and again using broadcasting to broadcast the mask along the first two axes of the <code>3D</code> arrays, we could perform the division and choose between an invalid specifier and the actual division result using <a href="http://docs.scipy.org/doc/numpy/reference/generated/numpy.where.html" rel="nofollow"><code>np.where</code></a>, like so:</p> <pre><code>invalid_spec = 0 out = np.where(mask[...,None],Zout_3d/(Yout_3d-Xout_3d),invalid_spec) </code></pre> <hr> <p>Alternatively, we can directly get to such an output using broadcasting and thus avoid using <code>meshgrid</code> and having those heavy <code>3D</code> arrays in the workspace. The idea is to simultaneously populate the <code>3D</code> grids and perform the subtraction and division computations, both on the fly. So, the implementation would look something like this:</p> <pre><code>np.where(mask[...,None],Zout/(Yout[:,None,None] - Xout[:,None]),invalid_spec) </code></pre>
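<p>A runnable end-to-end version of the two approaches above, using the grids from the question (<code>np.errstate</code> just silences the divide-by-zero warnings for the entries that are masked out anyway):</p> <pre><code>import numpy as np

Xout = np.arange(-400, 400, 20)
Yout = np.arange(0, 400, 20)
Zout = np.arange(0, 400, 20)
Xout_3d, Yout_3d, Zout_3d = np.meshgrid(Xout, Yout, Zout)

invalid_spec = 0
mask = Yout[:, None] != Xout                      # shape (len(Yout), len(Xout))

with np.errstate(divide='ignore', invalid='ignore'):
    # meshgrid-based version
    out1 = np.where(mask[..., None], Zout_3d / (Yout_3d - Xout_3d), invalid_spec)
    # broadcasting-only version, no 3D grids needed
    out2 = np.where(mask[..., None],
                    Zout / (Yout[:, None, None] - Xout[:, None]),
                    invalid_spec)

print(out1.shape)                                 # (20, 40, 20)
print(np.allclose(out1, out2))                    # True
</code></pre>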
python|numpy
1
5,026
66,284,509
Iterating each row with remaining rows in pandas data frame
<p>I am trying to iterate each row in dataframe with subsequent row. The first iteration works but I want to iterate for all other iterations like [111,.....] with remaining and continues. How can I achieve it using iterator?</p> <pre><code>test = [[1,2,3,4,5,6,7,8,9,10],[11,22,33,44,55,66,77,88,99,100],[111,222,333,444,555,666,777,888,999,1000],[1111,2222,3333,4444,5555,6666,7777,8888,9999,10000]] df=pd.DataFrame(test) row_iterator = df.iterrows() _, main_row = next(row_iterator) for i, row in row_iterator: print(&quot;---------Main Row-------------------&quot;) print(main_row) print(&quot;----------------------------&quot;) print(&quot;-----------Row-----------------&quot;) print(row) print(&quot;----------------------------&quot;) print(&quot;------------i----------------&quot;) print(i) print(&quot;----------------------------&quot;) </code></pre>
<p>You don't need to use <code>next</code> in these cases; a for loop &quot;knows&quot; how to deal with iterators. Skip the <code>next</code> and just use the iterator directly:</p> <pre class="lang-py prettyprint-override"><code>for i, row in df.iterrows(): print(i, row) </code></pre>
python|pandas|database|iterator|iteration
0
5,027
66,041,108
Sorting a pandas dataframe based on number of values of a categorical column
<p>The sample dataset looks like this</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th style="text-align: left;">col1</th> <th style="text-align: center;">col2</th> <th style="text-align: right;">col3</th> </tr> </thead> <tbody> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: right;">as</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">2</td> <td style="text-align: right;">sd</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">3</td> <td style="text-align: right;">df</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: center;">5</td> <td style="text-align: right;">fg</td> </tr> <tr> <td style="text-align: left;">D</td> <td style="text-align: center;">6</td> <td style="text-align: right;">gh</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: right;">hj</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">3</td> <td style="text-align: right;">jk</td> </tr> <tr> <td style="text-align: left;">B</td> <td style="text-align: center;">4</td> <td style="text-align: right;">kt</td> </tr> <tr> <td style="text-align: left;">A</td> <td style="text-align: center;">1</td> <td style="text-align: right;">re</td> </tr> <tr> <td style="text-align: left;">C</td> <td style="text-align: center;">5</td> <td style="text-align: right;">we</td> </tr> <tr> <td style="text-align: left;">D</td> <td style="text-align: center;">6</td> <td style="text-align: right;">qw</td> </tr> <tr> <td style="text-align: left;">D</td> <td style="text-align: center;">7</td> <td style="text-align: right;">aa</td> </tr> </tbody> </table> </div> <p>I want to sort the column col1 based on the number of occurences each item has, e.g. A has 4 occurences, B and D have 3 and C has 2 occurences. The dataframe should be sorted like A,A,A,A,B,B,B,D,D,D,C,C so that</p> <p>Is there a way to achieve the same? Can I use sort_values to get desired result?</p>
<p>Create helper column by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.map.html" rel="noreferrer"><code>Series.map</code></a> with <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html" rel="noreferrer"><code>Series.value_counts</code></a> and use it for sorting with <code>col1</code> by <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sort_values.html" rel="noreferrer"><code>DataFrame.sort_values</code></a>:</p> <pre><code>df['new'] = df['col1'].map(df['col1'].value_counts()) #alternative #df['new'] = df.groupby('col1')['col1'].transform('count') df1 = df.sort_values(['new','col1'], ascending=[False, True]).drop('new', axis=1) </code></pre> <p>One line solution:</p> <pre><code>df1 = (df.assign(new =df['col1'].map(df['col1'].value_counts())) .sort_values(['new','col1'], ascending=[False, True]) .drop('new', axis=1)) print (df1) col1 col2 col3 0 A 1 as 1 A 2 sd 5 A 1 hj 8 A 1 re 2 B 3 df 6 B 3 jk 7 B 4 kt 4 D 6 gh 10 D 6 qw 11 D 7 aa 3 C 5 fg 9 C 5 we </code></pre>
python|pandas|dataframe
5
5,028
52,657,837
Python Pandas grouping columns
<p>This is a Pandas question - my brain is too tired to figure this out today. Could someone please help me? I have a dataframe with many columns with one column as a category:</p> <pre><code>Category B C D .... Z 1 2 11 1.0 'HOME' .... 1 3 21 1.0 'HOME' .... 1 1 33 .9 'GOPHER' .... 2 4 34 0.6 'HUMM' ... 2 1 72 1.4 'VEEE' ... 3 5 23 2.3 'ETC ' .... 4 3 99 3.141 'PI' ... 4 4 1 2.634 'PI' ... </code></pre> <p>And want to get this (the text columns are really irrelevant)</p> <pre><code>Category B C D .... Z 1 6 11 2.9 'HOME' .... 2 5 34 2.6 'HUMM' ... 3 5 23 2.3 'ETC ' .... 4 7 100 5.775 'PI' ... </code></pre> <p>How do I go about doing this in Python Pandas? Do I use a group()?</p> <p>If df is my DataFrame, and the result is in newdf would be resulting data frame, then there would be one row in ndf['B'] with newdf['A'] = 1 and newdf['B'] would the sum of values in df['B'] for all rows where df['A'] was 1.<br> For the next category, there would be one row in ndf['B'] with newdf['A'] = 2 and newdf['B'] would the sum of values in df['B'] for all rows where df['A'] was 2</p> <p>and so on. </p> <p>I am trying to aggregate the sum of the columns based on the category in column A. For each category in A, I want to sum the rest of the columns with the same category. </p> <p>I hope I have explained it properly. Manually, this would be similar to </p> <pre><code>ndf['B'] = df[ df['A'] == 1 ].sum() ndf['C'] = df[ df['A'] == 1 ].sum() </code></pre> <p>Basically, can I use something like this:</p> <pre><code>for col in df.columns: if col.type(??) is number: ndf[col] = df[ df[col] == 1 ].sum() </code></pre> <p>and for each category in A; repeat </p> <pre><code>ndf['B'] = df[ df['A'] == 2 ].sum() ndf['C'] = df[ df['A'] == 3 ].sum() </code></pre> <p>I would then have to loop for each value in the category for A. </p> <p>Is this the right way to approach the problem?</p>
<p>You can use <code>GroupBy</code> + <code>agg</code> to specify a different function for each series. I have linked <code>C</code> and <code>Z</code> series to <code>'first'</code>, i.e. extract the first value from each group, as this is consistent with your desired output.</p> <pre><code>agg_rules = {'B': 'sum', 'C': 'first', 'D': 'sum', 'Z': 'first'} res = df.groupby('Category').agg(agg_rules).reset_index() print(res) Category B C D Z 0 1 6 11 2.900 'HOME' 1 2 5 34 2.000 'HUMM' 2 3 5 23 2.300 'ETC' 3 4 7 99 5.775 'PI' </code></pre>
python|pandas|dataframe|pandas-groupby
1
5,029
46,240,895
expanding multipolygon in geopandas dataframe
<p>I have a shapefile which contains both polygons and multipolygons as following:</p> <pre><code> name geometry 0 AB10 POLYGON ((-2.116454759005259 57.14656265903432... 1 AB11 (POLYGON ((-2.052573095588467 57.1342600856536... 2 AB12 (POLYGON ((-2.128066321470298 57.0368357386797... 3 AB13 POLYGON ((-2.261525922489881 57.10693578217748... 4 AB14 POLYGON ((-2.261525922489879 57.10693578217748... </code></pre> <p>The 2nd and 3rd row correspond to Multipolygon while the rest are polygons. I would like to expand the rows whose geometry is Multipolygon type into rows of Polygon as following.</p> <pre><code> name geometry 0 AB10 POLYGON ((-2.116454759005259 57.14656265903432... 1 AB11 POLYGON ((-2.052573095588467 57.1342600856536... 2 AB11 POLYGON ((-2.045849648028651 57.13076387483844... 3 AB12 POLYGON ((-2.128066321470298 57.0368357386797... 4 AB12 POLYGON ((-2.096125852304303 57.14808092585477 3 AB13 POLYGON ((-2.261525922489881 57.10693578217748... 4 AB14 POLYGON ((-2.261525922489879 57.10693578217748... </code></pre> <p>Note that the AB11 and AB12 Multipolygon have been expanded to multiple rows where each row corresponds to one polygon data.</p> <p>I think this is geopanda data manipulation. Is there a pythonic way to achieve the above?</p> <p>Thank you!</p>
<p>We can use numpy for more speed if you have only two columns. </p> <p>If you have a dataframe like </p> <pre> name geometry 0 0 polygn(x) 1 2 (polygn(x), polygn(x)) 2 3 polygn(x) 3 4 (polygn(x), polygn(x)) </pre> <p>Then numpy meshgrid will help </p> <pre><code>def cartesian(x): return np.vstack(np.array([np.array(np.meshgrid(*i)).T.reshape(-1,2) for i in x.values])) ndf = pd.DataFrame(cartesian(df),columns=df.columns) </code></pre> <p>Output:</p> <pre> name geometry 0 0 polygn(x) 1 2 polygn(x) 2 2 polygn(x) 3 3 polygn(x) 4 4 polygn(x) 5 4 polygn(x) </pre> <pre><code>%%timeit ndf = pd.DataFrame(cartesian(df),columns=df.columns) 1000 loops, best of 3: 679 µs per loop %%timeit df.set_index(['name'])['geometry'].apply(pd.Series).stack().reset_index() 100 loops, best of 3: 5.44 ms per loop </code></pre>
python|pandas|shapefile|geopandas
2
5,030
46,502,685
How to join several big arrays into one?
<p>I'm fairly new to python, and I have 5 big arrays A,B,C,D,E with shapes:</p> <pre><code>((1000000, 8), (1000000, 7), (1000000, 13840), (1000000, 204), (1000000, 3)) </code></pre> <p>dtypes:</p> <pre><code>(dtype('float64'), dtype('float64'), dtype('int64'), dtype('int64'), dtype('float64')) </code></pre> <p>Now I would like to join them all into a single array with a shape of</p> <pre><code>(1000000, 8+7+13840+204+3) = (1000000, 14062) </code></pre> <p>I have tried all possible ways (hstack/concatenate),</p> <pre><code>data_feature = np.concatenate((A,B,C,D,E), axis=1) data_feature = np.hstack([A,B,C,D,E]) data_feature = np.hstack((A,B,C,D,E)) data_feature = np.column_stack([A,B,C,D,E]) </code></pre> <p>but they all kill my system (MacBook Pro 2017 / 2.8GHz Intel Core i7 / 16 GB 2133 MHz LPDDR3). I think this could be a kernel problem; any suggestions for how I can do this on my computer?</p>
<p>Given 64-bit (8 byte) values, you are trying to process:</p> <pre><code>1000000 * 14062 * 8 * 2 = 224'992'000'000 bytes </code></pre> <p>The 2 on the end is because you have inputs plus equal-size outputs.</p> <p>That is 209 GiB of data. You have 16 GiB of RAM. It is not feasible. You'll need to think harder about how you're processing your data, and how you can reduce it by a factor of 10. Or buy a machine with 192 GiB of RAM (which is very possible these days, just not on a laptop).</p>
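<p>For reference, a tiny snippet reproducing that arithmetic from the shapes given in the question:</p> <pre><code>shapes = [(1_000_000, 8), (1_000_000, 7), (1_000_000, 13840), (1_000_000, 204), (1_000_000, 3)]
itemsize = 8  # bytes per float64 / int64 value

total_bytes = sum(rows * cols for rows, cols in shapes) * itemsize * 2  # inputs + output
print(total_bytes)            # 224992000000
print(total_bytes / 2**30)    # ~209.5 GiB
</code></pre>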
python|arrays|numpy|memory-management|cpu-usage
3
5,031
58,369,571
Creating a new column via file name
<p>I would like to read multiple files and add a new column, year. File names: Shirt_2016, Shirt_2017, Shoe_2018, Shoe_2019.</p> <pre><code>rawfolder = 'c:/users/a/desktop/item' A = pd.DataFrame(pd.read_excel('%s/Shirt_2016' %(rawfolder), sheetname="sheet1", header=None)) B = pd.DataFrame(pd.read_excel('%s/Shirt_2017' %(rawfolder), sheetname="sheet1", header=None)) C = pd.DataFrame(pd.read_excel('%s/Shoe_2018' %(rawfolder), sheetname="sheet1", header=None)) D = pd.DataFrame(pd.read_excel('%s/Shoe_2019' %(rawfolder), sheetname="sheet1", header=None)) . .(Script to run) . </code></pre> <p>How do I create a year column extracted from the file name (e.g. '%s/Shoe_2019'), reading one file at a time in the script? I have tried the following:</p> <pre><code>df['Year'] = (os.path.basename([A,B,C,D]).split('.')[0].split('_')[1]) </code></pre>
<p>I would create a list with the filenames</p> <pre><code>filenames = ['Shirt_2016', 'Shirt_2017', 'Shoe_2018', 'Shoe_2019', ...] </code></pre> <p>and then use a <code>for</code>-loop to read the files, adding the year (the last 4 characters of each filename) to each dataframe as it is read</p> <pre><code>rawfolder = 'c:/users/a/desktop/item' all_df = [] for name in filenames: path = os.path.join(rawfolder, name) temp_df = pd.read_excel(path, sheetname="sheet1", header=None) temp_df['Year'] = name[-4:] all_df.append(temp_df) </code></pre> <p>and finally combine the list into a single dataframe</p> <pre><code>df = pd.concat(all_df, ignore_index=True) </code></pre>
python|pandas
0
5,032
69,094,812
Plotting a vector field using quiver
<p>I'm trying to reproduce a 2D vector map with components</p> <pre><code> v = 100/a * exp(-1/a^2 * ((x+0.55)^2+y^2))(-y,x) - 100/a * exp(-1/a^2 * ((x-0.55)^2+y^2))(-y,x) </code></pre> <p>and here are my codes. It did not give the map I want (see attached <a href="https://i.stack.imgur.com/Xqkjm.png" rel="nofollow noreferrer">vector map</a>). Could someone please help me with it?</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import math grid_resolution = 25 grid_size = 2*grid_resolution+1 a = 0.2 x = np.linspace(-1,1,grid_size) y = np.linspace(-1,1,grid_size) X,Y = np.meshgrid(x, y) vx = np.zeros((grid_size,grid_size)) vy = np.zeros((grid_size,grid_size)) for i in range(0,grid_size): for j in range(0,grid_size): x0 = x[j] y0 = y[i] xx = (x0 + 0.55) ** 2 + y0 ** 2 yy = (x0 - 0.55) ** 2 + y0 ** 2 expf1 = math.exp(-xx / (a ** 2)) expf2 = math.exp(-yy / (a ** 2)) vx[i,j] = 100 / a * (-expf1 + expf2) * y0 vy[i,j] = 100 / a * (expf1 - expf2) * x0 fig, ax = plt.subplots() ax.quiver(X, Y, vx, vy) ax.set_aspect('equal') plt.show() </code></pre>
<p>In the last passage, when you compute <code>vx[i,j]</code> and <code>vy[i,j]</code>, you are computing vector field components in <code>(x0, y0)</code>, while you should compute it in the current point, so <code>(x0 ± 0.55, y0)</code>. Moreover, you should change the sign of <code>vx</code> and <code>vy</code> in order to draw a vector field like the one you linked.</p> <pre><code>import numpy as np import matplotlib.pyplot as plt import math grid_resolution = 25 grid_size = 2*grid_resolution + 1 a = 0.2 x = np.linspace(-1, 1, grid_size) y = np.linspace(-1, 1, grid_size) X, Y = np.meshgrid(x, y) vx = np.zeros((grid_size, grid_size)) vy = np.zeros((grid_size, grid_size)) for i in range(0, grid_size): for j in range(0, grid_size): x0 = x[j] y0 = y[i] xx = (x0 + 0.55)**2 + y0**2 yy = (x0 - 0.55)**2 + y0**2 expf1 = math.exp(-xx/(a**2)) expf2 = math.exp(-yy/(a**2)) vx[i, j] = -100/a*(-expf1 + expf2)*y0 if x0 &gt; 0: vy[i, j] = -100/a*(expf1 - expf2)*(x0 - 0.55) else: vy[i, j] = -100/a*(expf1 - expf2)*(x0 + 0.55) fig, ax = plt.subplots() ax.quiver(X,Y,vx,vy) ax.set_aspect('equal') plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/dcIbc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dcIbc.png" alt="enter image description here" /></a></p>
python|numpy|matplotlib|math|plot
3
5,033
44,479,384
pandas rolling apply doesn't do anything
<p>I have a DataFrame like this:</p> <pre><code>df2 = pd.DataFrame({'date': ['2015-01-01', '2015-01-02', '2015-01-03'], 'value': ['a', 'b', 'a']}) date value 0 2015-01-01 a 1 2015-01-02 b 2 2015-01-03 a </code></pre> <p>I'm trying to understand how to apply a custom rolling function to it. I've tried doing this:</p> <pre><code>df2.rolling(2).apply(lambda x: 1) </code></pre> <p>But this gives me the original DataFrame back:</p> <pre><code> date value 0 2015-01-01 a 1 2015-01-02 b 2 2015-01-03 a </code></pre> <p>If I have a different DataFrame, like this:</p> <pre><code>df3 = pd.DataFrame({'a': [1, 2, 3], 'value': [4, 5, 6]}) </code></pre> <p>The same rolling apply seems to work:</p> <pre><code>df3.rolling(2).apply(lambda x: 1) a value 0 NaN NaN 1 1.0 1.0 2 1.0 1.0 </code></pre> <p>Why is this not working for the first DataFrame?</p> <p>Pandas version: 0.20.2</p> <p>Python version: 2.7.10</p> <p><strong>Update</strong></p> <p>So, I've realized that <code>df2</code>'s columns are object-type, whereas the output of my lambda function is an integer. <code>df3</code>'s columns are both integer columns. I'm assuming that this is why the <code>apply</code> isn't working. </p> <p>The following <strong>doesn't</strong> work:</p> <pre><code>df2.rolling(2).apply(lambda x: 'a') date value 0 2015-01-01 a 1 2015-01-02 b 2 2015-01-03 a </code></pre> <p>Furthermore, say I want to concatenate the characters in the <code>value</code> column on a rolling basis, so that the output of the lambda function is a string, rather than an integer. The following also doesn't work:</p> <pre><code>df2.rolling(2).apply(lambda x: '.'.join(x)) date value 0 2015-01-01 a 1 2015-01-02 b 2 2015-01-03 a </code></pre> <p>What's going on here? Can rolling operations be applied to object-type columns in pandas?</p>
<p>Here is one way this could be approached. Noting that <code>rolling</code> is a wrapper for <code>numpy</code> methods and the efficiency associated with those, this is <em>not</em> that. This merely provides a similiar api, to allow rolling on non-numeric columns:</p> <h3>Code:</h3> <pre><code>import pandas as pd class MyDataFrame(pd.DataFrame): @property def _constructor(self): return MyDataFrame def rolling_object(self, window, column, default): return pd.concat( [self[column].shift(i) for i in range(window)], axis=1).fillna(default).T </code></pre> <p>This creates a custom dataframe class that has a <code>rolling_object</code> method. It does not well match the pandas way in that it only operates on a single column at a time.</p> <h3>Test Code:</h3> <pre><code>df2 = MyDataFrame({'date': ['2015-01-01', '2015-01-02', '2015-01-03'], 'value': ['a', 'b', 'c'], 'num': [1, 2, 3] }) print(df2.rolling_object(2, 'value', '').apply(lambda x: '.'.join(x))) </code></pre> <h3>Results:</h3> <pre><code>0 a. 1 b.a 2 c.b dtype: object </code></pre>
python|pandas
3
5,034
44,779,315
Detrending data with nan value in scipy.signal
<p>I have a time series dataset with some nan values in it. I want to detrend this data:</p> <p>I tried by doing this:</p> <pre><code>scipy.signal.detrend(y) </code></pre> <p>then I got this error:</p> <pre><code>ValueError: array must not contain infs or NaNs </code></pre> <p>Then I tried with:</p> <pre><code>scipy.signal.detrend(y.dropna()) </code></pre> <p>But I lost data order.</p> <p>How to solve this problem?</p>
<p>For future reference there is a digital signal processing Stack site, <a href="https://dsp.stackexchange.com/">https://dsp.stackexchange.com/</a>. I would suggest using that in the future for signal processing related questions.</p> <hr> <p>The easiest way I can think of is to manually detrend your data. You can do this easily by computing least squares. Least squares will take into account both your <code>x</code> and <code>y</code> values, so you can drop out the <code>x</code> values corresponding to where <code>y = NaN</code>.</p> <p>You can grab the indices of the non-<code>NaN</code> values with <code>not_nan_ind = ~np.isnan(y)</code>, and then do linear regression with the non-<code>NaN</code> values of <code>y</code> and the corresponding <code>x</code> values with, say, <a href="https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.stats.linregress.html" rel="noreferrer"><code>scipy.stats.linregress()</code></a>:</p> <pre><code>m, b, r_val, p_val, std_err = stats.linregress(x[not_nan_ind],y[not_nan_ind]) </code></pre> <p>Then you can simply subtract off this line from your data <code>y</code> to obtain the detrended data:</p> <pre><code>detrend_y = y - (m*x + b) </code></pre> <p>And that's all you need. For example with some dummy data:</p> <pre><code>import numpy as np from matplotlib import pyplot as plt from scipy import stats # create data x = np.linspace(0, 2*np.pi, 500) y = np.random.normal(0.3*x, np.random.rand(len(x))) drops = np.random.rand(len(x)) y[drops&gt;.95] = np.NaN # add some random NaNs into y plt.plot(x, y) </code></pre> <p><a href="https://i.stack.imgur.com/SjU33.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SjU33.png" alt="Data with some NaN values"></a></p> <pre><code># find linear regression line, subtract off data to detrend not_nan_ind = ~np.isnan(y) m, b, r_val, p_val, std_err = stats.linregress(x[not_nan_ind],y[not_nan_ind]) detrend_y = y - (m*x + b) plt.plot(x, detrend_y) </code></pre> <p><a href="https://i.stack.imgur.com/n2YTS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/n2YTS.png" alt="Detrended data"></a></p>
python|numpy|scipy|trend
5
5,035
60,974,056
Tensorflow Keras - feeding input to multiple model layers in parallel
<p>With tensorflow.keras (Tensorflow 2), I want to feed my input into different layers of my model. So we are looking at a graph where the input layers branches off into 3 lines to go to 3 different convolutional layers. It has 3 outputs.</p> <p>Pseudocode is something like this:</p> <pre><code>inputs = Input() conv1 = Conv2D()(inputs) conv2 = Conv2D()(inputs) conv3 = Conv2D()(inputs) model = Model(inputs=inputs, outputs=[conv1, conv2, conv3]) </code></pre> <p>But I'm getting the following error when I try to fit the model with a tf DataSet stream:</p> <pre><code>ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 3 array(s), for inputs ['conv2d_1', 'conv2d_2', 'conv2d_3'] but instead got the following list of 1 arrays: [&lt;tf.Tensor 'ExpandDims:0' shape=(None, 1) dtype=int32&gt;] </code></pre> <p>I have verified that my code works fine if I comment out the branches and set <code>outputs=conv1</code>.</p> <p>Note: I am not trying to feed in multiple different inputs (there are many questions and answers on here that solve this). Just one input which should branch off.</p>
<p>Issue solved. Since the model has three outputs, I should be providing a list of three label arrays (one per output) to <code>fit()</code>, rather than a single array.</p>
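<p>A minimal self-contained sketch of that fix (with made-up layer sizes, not the question's actual model): one label array per output, passed as a list.</p> <pre><code>import numpy as np
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(8, 8, 1))
c1 = layers.Conv2D(4, 3, padding='same')(inputs)
c2 = layers.Conv2D(4, 3, padding='same')(inputs)
c3 = layers.Conv2D(4, 3, padding='same')(inputs)
model = Model(inputs=inputs, outputs=[c1, c2, c3])
model.compile(optimizer='adam', loss=['mse', 'mse', 'mse'])  # one loss per output

x = np.random.rand(16, 8, 8, 1).astype('float32')
y = np.random.rand(16, 8, 8, 4).astype('float32')  # matches each Conv2D output shape

# passing a single array here reproduces the error; a list of three works
model.fit(x, [y, y, y], epochs=1, verbose=0)
</code></pre>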
tensorflow|keras|neural-network
0
5,036
61,118,179
How to replace row value without changing the other values in dataframe pandas?
<p>I am running one python script. I want to change particular row value without changing the other value. Can you please help me how to do this?</p> <p>example:</p> <pre><code>df1 Table Count case 20 recordtype 50 consumer 70 settlement 150 address 250 bridge 130 </code></pre> <p>I ran the process for only 'case' &amp; 'consumer' job. Now new count in case and consumer is 80 &amp; 150 but i am getting the file like this.</p> <pre><code>Table Count case 20 recordtype 50 consumer 70 settlement 150 address 250 case 80 consumer 150 </code></pre> <p>It is not replacing the count value. It is just adding new column. But i want result like this:</p> <pre><code>df Table Count case 80 recordtype 50 consumer 150 settlement 150 address 250 </code></pre> <p>Can you please help me based on the Table name how can i replace the value?</p> <p>I am using the below code:</p> <pre><code>if(os.path.isfile('/medaff/Scripts/python/count.txt')): df_s.to_csv('/medaff/Scripts/python/count.txt',mode='a', sep = '|', index= False, header=False) else: df_s.to_csv('/medaff/Scripts/python/count.txt', sep = '|', index= False) </code></pre>
<p>I fixed the issue using below code:</p> <pre><code>if(os.path.isfile('/medaff/Scripts/python/count.txt')): df_s1 = pd.read_csv('/medaff/Scripts/python/count.txt', delimiter='|') for index, row in df_s.iterrows(): print(row) print(row['Master Job Name']) print(row['Current_Count']) idx1=(df_s1['Master Job Name'] == row['Master Job Name']) df_s1.at[idx1, 'Current_Count'] = row['Current_Count'] set1 = set(list(df_s['Master Job Name'].unique())) set2 = set(list(df_s1['Master Job Name'].unique())) set1 = list(set1 - set2) df_s_new = df_s[df_s['Master Job Name'].isin(set1)] df_s1 = df_s1.append(df_s_new) df_s1.to_csv('/medaff/Scripts/python/count.txt', sep='|', index=False) else: df_s.to_csv('/medaff/Scripts/python/count.txt', mode='a', sep = '|', index= False) </code></pre>
python|pandas|csv|dataframe
0
5,037
71,532,830
How do I parse a multi nested (5/6) JSON object and convert it to a dataframe?
<p><strong>Problem:</strong> I have a multi nested JSON file that I need to parse and convert to a pandas dataframe where every field is a column. I've taken 2 approaches:</p> <ol> <li>Convert raw file to data dictionary</li> <li>Convert raw file to JSON object</li> </ol> <p><strong>For data dictionary I've tried:</strong></p> <ul> <li><p><code>df = pd.json_normalize(data)</code></p> <ul> <li>This leaves me with a DF that parses and loads up to <code>field12</code> (see dummy JSON below), the rest of the data is loaded into that single cell. Have also tried adding <code>max_level=</code> and I get the same result no matter what number I use</li> </ul> </li> <li><p>I've tried <code>dt = dt.explode('field12')</code></p> <ul> <li>This leaves me with a single column of the headers in the 3rd nest with no data and the rest of the data is missing</li> </ul> </li> </ul> <p><strong>For JSON object I've tried:</strong></p> <ul> <li><code>pd.read_json(json_object)</code> <ul> <li>This leaves me with a 4x9 table with the 1st column being the headers in the 2nd nest and the columns being the headers in the 1st nest, the rest of the data is stored in a single cell again</li> </ul> </li> </ul> <p>Please help, very lost with this one!</p> <p>Below is a dummy JSON object that is an exact structure replica of the file I'm working with:</p> <pre><code>{ &quot;field1&quot;:&quot;dummyString&quot;, &quot;field2&quot;: null, &quot;field3&quot;:&quot; dummyString&quot;, &quot;field4&quot;:{ &quot;field5&quot;:&quot; dummyString&quot;, &quot;field6&quot;:&quot; dummyString&quot;, &quot;field7&quot;:&quot; dummyString&quot;, &quot;field8&quot;:&quot; dummyString&quot;, &quot;field9&quot;:&quot;dummyNumber&quot;, &quot;field10&quot;:null, &quot;field11&quot;:&quot; dummyString&quot;, &quot;field12&quot;:[ { &quot;field13&quot;:&quot; dummyString&quot;, &quot;field14&quot;:&quot; dummyString&quot;, &quot;field15&quot;:{ &quot;field16&quot;:&quot; dummyString&quot;, &quot;field17&quot;:&quot; dummyString&quot;, &quot;field18&quot;:&quot; dummyString&quot; }, &quot;field19&quot;:&quot; dummyString&quot;, &quot;field20&quot;:&quot; dummyString&quot;, &quot;field21&quot;:&quot; dummyString&quot;, &quot;field22&quot;:&quot; dummyString&quot;, &quot;field23&quot;:&quot; dummyString&quot;, &quot;field24&quot;: &quot;dummyNumber&quot;, &quot;field25&quot;: &quot;dummyNumber&quot;, &quot;field26&quot;:null, &quot;field27&quot;:&quot; dummyString&quot;, &quot;field28&quot;:&quot; dummyString&quot;, &quot;field29&quot;:&quot; dummyString&quot;, &quot;field30&quot;: &quot;dummyBoolean&quot;, &quot;field31&quot;: &quot;dummyNumber&quot;, &quot;field32&quot;:&quot; dummyString&quot;, &quot;field33&quot;:null, &quot;field34&quot;:[ { &quot;field35&quot;:&quot;dummyString&quot;, &quot;field36&quot;:null, &quot;field37&quot;:{ &quot;field38&quot;:&quot;dummyString&quot;, &quot;field39&quot;:&quot;dummyString&quot;, &quot;field40&quot;:&quot; dummyString&quot;, &quot;field41&quot;:&quot; dummyString&quot;, &quot;field42&quot;:&quot; dummyString&quot;, &quot;field43&quot;:&quot; dummyString&quot;, &quot;field44&quot;:null, &quot;field45&quot;:&quot; dummyString&quot;, &quot;field46&quot;:&quot; dummyString&quot; }, &quot;field47&quot;:null, &quot;field48&quot;:null, &quot;field49&quot;:null, &quot;field50&quot;:null, &quot;field51&quot;: &quot;dummyNumber&quot;, &quot;field52&quot;:&quot;dummyString&quot;, &quot;field53&quot;:&quot;dummyString&quot;, &quot;field54&quot;:&quot;dummyBoolean&quot;, &quot;field55&quot;: 
&quot;dummyBoolean&quot;, &quot;field56&quot;:&quot;dummyString&quot;, &quot;field57&quot;:null, &quot;field58&quot;:null },{ &quot;field59&quot;:{ &quot;field60&quot;:null, &quot;field61&quot;:null, &quot;field62&quot;:null } } ] } ], &quot;field63&quot;:&quot;dummyStringl&quot; } } </code></pre>
<p>I wrote a function to flatten a dict like yours.</p> <pre><code>def flatten(d): ret = {**d} for k, v in d.items(): if isinstance(v, dict): ret.pop(k) ret = {**ret, **flatten(v)} elif isinstance(v, list): ret.pop(k) for item in v: ret = {**ret, **flatten(item)} return ret df = pd.json_normalize(flatten(j)) </code></pre> <p>Output:</p> <pre><code>&gt;&gt;&gt; df field1 field2 field3 field5 field6 field7 field8 field9 field10 ... field41 field42 field43 field44 field45 field46 field60 field61 field62 0 dummyString None dummyString dummyString dummyString dummyString dummyString dummyNumber None ... dummyString dummyString dummyString None dummyString dummyString None None None [1 rows x 57 columns] </code></pre>
python|json|pandas|dataframe|nested
0
5,038
71,724,930
Improve speed of getpixel and putpixel
<p>Using PIL, I'm applying a rainbow filter to the given image using <code>getpixel</code> and <code>setpixel</code>. One issue, this method is very slow. It takes around 10 seconds to finish one image.</p> <pre class="lang-py prettyprint-override"><code>def Rainbow(i): x = 1 - abs(((i / 60) % 2) - 1) i %= 360 if (i &gt;= 0 and i &lt; 60 ): r,g,b = 1, x, 0 if (i &gt;= 60 and i &lt; 120): r,g,b = x, 1, 0 if (i &gt;= 120 and i &lt; 180): r,g,b = 0, 1, x if (i &gt;= 180 and i &lt; 240): r,g,b = 0, x, 1 if (i &gt;= 240 and i &lt; 300): r,g,b = x, 0, 1 if (i &gt;= 300 and i &lt; 360): r,g,b = 1, 0, x res = (int(r * 255), int(g * 255), int(b * 255)) return res def RainbowFilter(img): for x in range(img.size[0]): for y in range(img.size[1]): intensity = sum(img.getpixel((x, y))) img.putpixel((x, y), Rainbow(intensity + x + y)) return img im = Image.open('cat.jpg') rainbow_im = RainbowFilter(im) rainbow_im.save('rainbow_im.png') </code></pre> <p><a href="https://i.stack.imgur.com/sIbSY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sIbSY.jpg" alt="Input image" /></a> <a href="https://i.stack.imgur.com/8XWHL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8XWHL.jpg" alt="Result" /></a> Can you help me improve my specific algorithm, using exclusively Numpy or Pillow features, to resolve the issue mentioned?</p>
<p>I was intrigued by this and decided to have a go at optimising the code from @I'mahdi.</p> <p>My ideas were as follows:</p> <ul> <li><p>Create and zero the output image up-front and avoid writing the already zeroed elements in the main loops</p> </li> <li><p>Only use parallelised <code>nb.prange()</code> for the <em>outer</em> loop since, if you have 12 CPU cores, that will already create 12 threads</p> </li> <li><p>Avoid creating a new 3-element Numpy array in each iteration to assign to the RGB elements of the output array - just assign the two non-zero values directly</p> </li> <li><p>Drastically reduce the number of tests in the <code>if</code> statements. The original code uses up to 12 tests to determine in which of the 6 sectors <code>i</code> falls. It will do all 12 if <code>i</code> is in the last sector. My code does it in 2-4 tests, more like a binary search.</p> </li> </ul> <hr /> <pre><code>#!/usr/bin/env python3 from PIL import Image import numba as nb import numpy as np def Rainbow(i): x = 1 - abs(((i / 60) % 2) - 1) i %= 360 if (i &gt;= 0 and i &lt; 60 ): r,g,b = 1, x, 0 if (i &gt;= 60 and i &lt; 120): r,g,b = x, 1, 0 if (i &gt;= 120 and i &lt; 180): r,g,b = 0, 1, x if (i &gt;= 180 and i &lt; 240): r,g,b = 0, x, 1 if (i &gt;= 240 and i &lt; 300): r,g,b = x, 0, 1 if (i &gt;= 300 and i &lt; 360): r,g,b = 1, 0, x res = (int(r * 255), int(g * 255), int(b * 255)) return res def RainbowFilter(img): for x in range(img.size[0]): for y in range(img.size[1]): intensity = sum(img.getpixel((x, y))) img.putpixel((x, y), Rainbow(intensity + x + y)) return img @nb.njit(parallel=True) def imahdi(img): intensity = img.sum(axis=-1) row , col = img.shape[:2] for r in nb.prange(row): for c in nb.prange(col): i = (intensity[r,c] + r + c) x = 1 - abs(((i / 60) % 2) - 1) i %= 360 res = np.zeros(3) if (i &gt;= 0 and i &lt; 60 ): res = np.array([1, x, 0]) elif (i &gt;= 60 and i &lt; 120): res = np.array([x, 1, 0]) elif (i &gt;= 120 and i &lt; 180): res = np.array([0, 1, x]) elif (i &gt;= 180 and i &lt; 240): res = np.array([0, x, 1]) elif (i &gt;= 240 and i &lt; 300): res = np.array([x, 0, 1]) elif (i &gt;= 300 and i &lt; 360): res = np.array([1, 0, x]) img[r,c] = res * 255 return img @nb.njit(parallel=True) def mark(img): intensity = img.sum(axis=-1) row , col = img.shape[:2] # Create zeroed result image res = np.zeros_like(img) for r in nb.prange(row): # Only make outer loop parallel else inner one will make more threads than cores for c in range(col): i = (intensity[r,c] + r + c) x = 1 - abs(((i / 60) % 2) - 1) x = int(x * 255) i %= 360 # Split the problem space in half in one test - like binary search if i &lt; 180: if i &lt; 60: # Don't create whole new array here # Don't assign zeroes, array is empty already res[r,c,0] = 255 res[r,c,1] = x elif i &lt; 120: res[r,c,0] = x res[r,c,1] = 255 else: res[r,c,1] = 255 res[r,c,2] = x else: if i &lt; 240: res[r,c,1] = x res[r,c,2] = 255 elif i &lt; 300: res[r,c,0] = x res[r,c,2] = 255 else: res[r,c,0] = 255 res[r,c,2] = x return res orig = Image.open('cat.jpg') res = RainbowFilter(orig) res.save('result.png') im = np.asarray(orig) res = imahdi(im) Image.fromarray(res).save('imahdi.ppm') res = mark(im) Image.fromarray(res).save('mark.ppm') </code></pre> <p>Here are the timings:</p> <pre><code>In [17]: %timeit res = RainbowFilter(orig) 11.7 s ± 80 ms per loop (mean ± std. dev. of 7 runs, 1 loop each) In [18]: %timeit res = imahdi(im) 1.52 s ± 4.81 ms per loop (mean ± std. dev. 
of 7 runs, 1 loop each) In [13]: %timeit res=mark(im) 35.6 ms ± 928 µs per loop (mean ± std. dev. of 7 runs, 10 loops each) </code></pre>
python|numpy|image-processing|python-imaging-library
2
5,039
71,565,032
How to see city map when plotting with Geopandas lib
<p>I have just started learinig Geopandas lib in Python. I have a dataset with Lat(E) and Lon(N) of car accidents in Belgrade.</p> <p>I want to plot those dots on the map of Belgrade.</p> <p>This is my code:</p> <pre><code>import pandas as pd import geopandas as gpd import matplotlib.pyplot as plt pd.set_option('display.max_rows', 150) pd.set_option('display.max_columns', 200) pd.set_option('display.width', 5000) # reading csv into geopandas geo_df = gpd.read_file('SaobracajBeograd.csv') geo_df.columns = [&quot;ID&quot;, &quot;Date,Time&quot;, &quot;E&quot;, &quot;N&quot;, &quot;Outcome&quot;, &quot;Type&quot;, &quot;Description&quot;, &quot;geometry&quot;] geo_df.geometry = gpd.points_from_xy(geo_df.E, geo_df.N) #print(geo_df) # reading built in dataset for each city world_cities = gpd.read_file(gpd.datasets.get_path('naturalearth_cities')) # I want to plot geometry column only for Belgrade ax = world_cities[world_cities.name == 'Belgrade'].plot(figsize=(7, 7), alpha=0.5, edgecolor='black') geo_df.plot(ax=ax, color='red') plt.show() </code></pre> <p>This is the result that I get:</p> <p><a href="https://i.stack.imgur.com/7vESD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7vESD.png" alt="enter image description here" /></a></p> <p>How can I prettify this plot, so that I can see the map of the city ( with streets if possible, in color) and with smaller red dots?</p>
<ul> <li>as per comments, <strong>folium</strong> provides base map of overall geometry</li> <li>have added two layers <ol> <li>Belgrade, I have obtained this geometry from <strong>osmnx</strong> this is beyond the scope of this question so have just included the polygon as a <strong>WKT</strong> string</li> <li>the points that you provided via link in comments</li> </ol> </li> </ul> <pre><code>from pathlib import Path import pandas as pd import geopandas as gpd import shapely import folium # downloaded data df = pd.read_csv( Path.home().joinpath(&quot;Downloads/SaobracajBeograd.csv&quot;), names=[&quot;ID&quot;, &quot;Date,Time&quot;, &quot;E&quot;, &quot;N&quot;, &quot;Outcome&quot;, &quot;Type&quot;, &quot;Description&quot;], ) # create geodataframe, NB CRS geo_df = gpd.GeoDataFrame( df, geometry=gpd.points_from_xy(df[&quot;E&quot;], df[&quot;N&quot;]), crs=&quot;epsg:4386&quot; ) # couldn't find belgrade geometry, used osmnx and simplified geometry as a WKT string belgrade_poly = shapely.wkt.loads( &quot;POLYGON ((20.2213764 44.9154621, 20.2252450 44.9070062, 20.2399466 44.9067193, 20.2525385 44.8939145, 20.2419942 44.8842235, 20.2610016 44.8826597, 20.2794675 44.8754192, 20.2858284 44.8447802, 20.2856918 44.8332410, 20.3257447 44.8342507, 20.3328068 44.8098272, 20.3367239 44.8080890, 20.3339619 44.8058144, 20.3353253 44.8011005, 20.3336310 44.8003791, 20.3360230 44.7898245, 20.3384687 44.7907875, 20.3405086 44.7859144, 20.3417344 44.7872272, 20.3474466 44.7713203, 20.3509860 44.7687822, 20.3398029 44.7558716, 20.3220093 44.7448572, 20.3160895 44.7387338, 20.3235092 44.7345531, 20.3359605 44.7308053, 20.3437350 44.7301552, 20.3450306 44.7243651, 20.3497410 44.7209764, 20.3521450 44.7143627, 20.3633795 44.7046060, 20.3830709 44.7030441, 20.3845248 44.7011631, 20.3847991 44.7032182, 20.3924066 44.7036702, 20.4038881 44.6984458, 20.4097684 44.6992834, 20.4129839 44.7024603, 20.4192098 44.7021308, 20.4217436 44.7034920, 20.4251744 44.6976337, 20.4279418 44.6980838, 20.4313251 44.6940680, 20.4358368 44.6933579, 20.4402665 44.6905161, 20.4452138 44.6910160, 20.4495428 44.6880459, 20.4539572 44.6888231, 20.4529809 44.6911331, 20.4550753 44.6919188, 20.4534174 44.6929137, 20.4571253 44.6957696, 20.4570013 44.7008391, 20.4614601 44.7027894, 20.4646634 44.7018970, 20.4674388 44.7050131, 20.4753542 44.7039532, 20.4760757 44.7050260, 20.4802055 44.7033479, 20.4867635 44.7061539, 20.4983359 44.7022445, 20.5049892 44.7021663, 20.5071809 44.7071295, 20.5027682 44.7154832, 20.5028502 44.7217294, 20.5001912 44.7225288, 20.5007294 44.7251513, 20.5093727 44.7271542, 20.5316662 44.7248060, 20.5385861 44.7270519, 20.5390058 44.7329843, 20.5483761 44.7280993, 20.5513810 44.7308508, 20.5510751 44.7340860, 20.5483958 44.7345580, 20.5503614 44.7352316, 20.5509440 44.7434333, 20.5416617 44.7521169, 20.5358563 44.7553171, 20.5348919 44.7609694, 20.5393015 44.7624855, 20.5449353 44.7698750, 20.5490005 44.7708792, 20.5488362 44.7733456, 20.5647717 44.7649237, 20.5711431 44.7707818, 20.5772388 44.7711074, 20.5798915 44.7727751, 20.5852472 44.7808647, 20.5817268 44.7826053, 20.5823183 44.7845765, 20.5792147 44.7843299, 20.5777701 44.7872565, 20.5744279 44.7854098, 20.5740215 44.7886805, 20.5693220 44.7911579, 20.5655386 44.7906451, 20.5635444 44.7921747, 20.5598333 44.7901679, 20.5536143 44.7898282, 20.5502434 44.7909478, 20.5435002 44.8022967, 20.5424780 44.8073064, 20.5474459 44.8103678, 20.5530335 44.8102412, 20.5652728 44.8188428, 20.5738545 44.8279189, 20.5724006 44.8315147, 20.5776931 44.8371416, 20.5765153 
44.8378971, 20.5863097 44.8427122, 20.5826128 44.8462544, 20.5762290 44.8486489, 20.5825139 44.8520894, 20.5953933 44.8552493, 20.6206689 44.8543410, 20.6212821 44.8560293, 20.6173687 44.8574761, 20.5961883 44.8615803, 20.5928447 44.8609861, 20.5911876 44.8626994, 20.6019440 44.8670619, 20.6196285 44.8673213, 20.6232109 44.8693710, 20.6164092 44.8815202, 20.6152606 44.8895682, 20.5777643 44.8860527, 20.5311826 44.8712209, 20.5230234 44.8646244, 20.5226088 44.8685278, 20.5187616 44.8654899, 20.5197414 44.8694015, 20.5132944 44.8687179, 20.5076686 44.8735038, 20.5065584 44.8670548, 20.4991594 44.8719635, 20.4938631 44.8734651, 20.4821047 44.8723679, 20.4737899 44.8677144, 20.4661802 44.8592493, 20.4594505 44.8560945, 20.4600397 44.8546034, 20.4650988 44.8535738, 20.4600110 44.8491680, 20.4623204 44.8477906, 20.4603705 44.8445375, 20.4711373 44.8342913, 20.4706338 44.8317839, 20.4498025 44.8343946, 20.4244846 44.8431449, 20.4138827 44.8526577, 20.3912248 44.8598333, 20.3749815 44.8683583, 20.3617778 44.8791076, 20.3436922 44.9103973, 20.3390650 44.9117584, 20.3011288 44.9426876, 20.2946156 44.9402419, 20.2960052 44.9381397, 20.2746476 44.9304194, 20.2703905 44.9345682, 20.2213764 44.9154621))&quot; ) # plot belgrade city limits m = gpd.GeoDataFrame(geometry=[belgrade_poly], crs=&quot;epsg:4326&quot;).explore(name=&quot;Belgrade&quot;, height=300, width=500) # plot the points, just for demo purposes plot outcomes as different colors m = geo_df.explore(m=m, column=&quot;Outcome&quot;, cmap=[&quot;red&quot;,&quot;green&quot;,&quot;blue&quot;], name=&quot;points&quot;) # add layer control so layers can be switched on / off folium.LayerControl().add_to(m) m </code></pre> <p><a href="https://i.stack.imgur.com/m6S95.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m6S95.png" alt="enter image description here" /></a></p> <h3>supplementary update</h3> <p>Obtain Belgrade geometry</p> <pre><code>import osmnx as ox gdf = ox.geocode_to_gdf({'city': 'Belgrade'}) </code></pre>
python|gis|geospatial|geopandas
2
5,040
71,771,578
df.drop_duplicates is not working, what am I doing wrong?
<p>i am trying to search a text for chapters and then extract the text chapter by chapter. my search array returns the chapter name and the start and end positions in the text. it looks like this.</p> <pre><code> SearchTerm Start End 0 ITEM 1. 7219 47441.0 1 ITEM 2. 47441 57712.0 2 ITEM 3. 57712 76730.0 3 ITEM 4. 76730 106927.0 4 ITEM 5. 106927 111973.0 5 ITEM 6. 111973 120362.0 6 ITEM 7. 120362 237727.0 7 ITEM 8. 237727 830655.0 8 ITEM 9. 830655 833033.0 9 ITEM 10. 833033 833709.0 10 ITEM 11. 833709 834662.0 11 ITEM 12. 834662 846594.0 12 ITEM 13. 846594 847172.0 13 ITEM 14. 847172 849550.0 14 ITEM 15. 849550 877408.0 15 Item 15. 877408 913873.0 16 ITEM 1. 913873 914661.0 17 ITEM 2. 914661 914735.0 18 ITEM 3. 914735 914816.0 19 ITEM 4. 914816 915164.0 20 ITEM 6. 915164 915290.0 21 ITEM 7. 915290 915640.0 22 ITEM 8. 915640 917398.0 23 ITEM 9. 917398 917637.0 24 ITEM 10. 917637 917752.0 25 ITEM 11. 917752 917878.0 26 ITEM 12. 917878 918005.0 27 ITEM 13. 918005 918116.0 28 ITEM 14. 918116 918316.0 29 ITEM 15. 918316 919863.0 </code></pre> <p>it contains duplicates because my search finds the table of contents and the chapters. so i want to drop duplicates and keep the last entries.</p> <p>i have tried:</p> <pre><code>df2= matches_array.drop_duplicates(subset=[&quot;SearchTerm&quot;],keep='last',inplace=True) df2= matches_array.drop_duplicates(subset=[&quot;SearchTerm&quot;],keep='last',inplace=False) matches_array.drop_duplicates(subset=[&quot;SearchTerm&quot;],keep=&quot;last&quot;,inplace=False) matches_array.drop_duplicates(subset=['SearchTerm'],keep='last',inplace=True) </code></pre> <p>and several other variations with ignore index, but i cannot get it to work. what am i doing wrong?</p> <p>edit:</p> <blockquote> <p>{'SearchTerm': ['ITEM\xa01.', 'ITEM\xa02.', 'ITEM\xa03.',<br /> 'ITEM\xa04.', 'ITEM\xa05.', 'ITEM\xa06.', 'ITEM\xa07.',<br /> 'ITEM\xa08.', 'ITEM\xa09.', 'ITEM\xa010.', 'ITEM\xa011.',<br /> 'ITEM\xa012.', 'ITEM\xa013.', 'ITEM\xa014.', 'ITEM\xa015.',<br /> 'Item\xa015.', 'ITEM 1.', 'ITEM 2.', 'ITEM 3.', 'ITEM 4.',<br /> 'ITEM 6.', 'ITEM 7.', 'ITEM 8.', 'ITEM 9.', 'ITEM 10.',<br /> 'ITEM 11.', 'ITEM 12.', 'ITEM 13.', 'ITEM 14.', 'ITEM 15.'], 'Start': [7219, 47441, 57712, 76730, 106927, 111973,<br /> 120362, 237727, 830655, 833033, 833709, 834662, 846594,<br /> 847172, 849550, 877408, 913873, 914661, 914735, 914816,<br /> 915164, 915290, 915640, 917398, 917637, 917752, 917878,<br /> 918005, 918116, 918316], 'End': [47441.0, 57712.0, 76730.0, 106927.0, 111973.0, 120362.0, 237727.0, 830655.0, 833033.0, 833709.0, 834662.0, 846594.0, 847172.0, 849550.0, 877408.0, 913873.0, 914661.0, 914735.0, 914816.0, 915164.0, 915290.0, 915640.0, 917398.0, 917637.0, 917752.0, 917878.0, 918005.0, 918116.0, 918316.0, 919863.0]}</p> </blockquote>
<p>How about using group by to pick the last one? (You can change the max of <code>End</code> if that's what you want.)</p> <pre><code>df = pd.read_csv(&quot;/tmp/Book2.csv&quot;) df.sort_values(by=['Search Term', 'Start']).groupby('Search Term').max('Start') Search Term Start End ITEM 1 913873 914661 ITEM 10 917637 917752 ITEM 11 917752 917878 ITEM 12 917878 918005 ITEM 13 918005 918116 ITEM 14 918116 918316 ITEM 15 918316 919863 ITEM 2 914661 914735 ITEM 3 914735 914816 ITEM 4 914816 915164 ITEM 5 106927 111973 ITEM 6 915164 915290 ITEM 7 915290 915640 ITEM 8 915640 917398 ITEM 9 917398 917637 </code></pre>
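<p>As a side note (not part of the original answer), a common pitfall with the calls shown in the question is combining <code>inplace=True</code> with an assignment: with <code>inplace=True</code> the method returns <code>None</code>, so <code>df2</code> ends up as <code>None</code> even though <code>matches_array</code> itself is modified. Either form works on its own:</p> <pre><code>import pandas as pd

# keep the returned copy ...
df2 = matches_array.drop_duplicates(subset=['SearchTerm'], keep='last')

# ... or modify matches_array in place, without assigning the (None) result
matches_array.drop_duplicates(subset=['SearchTerm'], keep='last', inplace=True)
</code></pre>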
python|pandas
0
5,041
42,473,383
Django Pandas AWS
<p>I am attempting to deploy a Django project on AWS Elastic Beanstalk. One of my views makes use of Pandas to generate some data.</p> <p>I was able to get Pandas to compile properly on my EBS hosted site. I was noticing however that the browser would become "hung" when I tried to access any pages. I removed the view with the Pandas and the pandas import and the problem went away. However, when I add the Pandas import back, the problem recurs, leading me to believe it is a problem with Pandas. Also, if I remove the view that utilizes Pandas, but keep the "import pandas" statement, the problem remains. As soon as I remove "import pandas as pd" the problem goes away.</p> <p>When I SSH into the instance and run manage.py shell I can import Pandas properly and have no problems whatsoever - so I know Pandas has compiled properly. </p> <p>I checked the logs and nothing jumps out at me. Any help would be greatly appreciated!</p>
<p>I've had problems using pandas with Django on a micro AWS EC2 instance because of too little memory. Upgrading the instance solved the problem for me.</p> <p>If you are using a t2.micro for example, it might be worth upgrading to a larger instance just to see if the problem magically disappears, like it did for me.</p> <p>Perhaps not a completely satisfactory answer, but it might help you narrow down the problem.</p>
python|django|pandas|amazon-web-services|amazon-elastic-beanstalk
2
5,042
43,354,696
Trying to append content to numpy array
<p>I have a script that searches Twitter for a certain term and then prints out a number of attributes for the returned results.</p> <p>I'm trying to append each tweet's attributes to a numpy array, but just a blank array is returned. Any ideas why?</p> <pre><code>public_tweets = api.search("Trump") tweets_array = np.empty((0,3)) for tweet in public_tweets: userid = api.get_user(tweet.user.id) username = userid.screen_name location = tweet.user.location tweetText = tweet.text analysis = TextBlob(tweet.text) polarity = analysis.sentiment.polarity np.append(tweets_array, [[username, location, tweetText]], axis=0) print(tweets_array) </code></pre> <p>The behavior I am trying to achieve is something like..</p> <pre><code>array = [] array.append([item1, item2, item3]) array.append([item4,item5, item6]) </code></pre> <p><code>array</code> is now <code>[item1, item2, item3],[item4, item5, item6]</code>.</p> <p>But in Numpy :)</p>
<p><code>np.append</code> doesn't modify the array, you need to assign the result back:</p> <pre><code>tweets_array = np.append(tweets_array, [[username, location, tweetText]], axis=0) </code></pre> <p>Check <code>help(np.append)</code>:</p> <blockquote> <p>Note that <code>append</code> does not occur in-place: a new array is allocated and filled.</p> </blockquote> <p>In the second example, you are calling list's <code>append</code> method which happens in place; This is different from <code>np.append</code>.</p>
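<p>Because <code>np.append</code> re-allocates and copies the whole array on every call, a common alternative is to collect the rows in a plain Python list inside the loop and convert once at the end. A sketch reusing the variables from the question:</p> <pre><code>rows = []
for tweet in public_tweets:
    userid = api.get_user(tweet.user.id)
    rows.append([userid.screen_name, tweet.user.location, tweet.text])

tweets_array = np.array(rows)   # shape (number_of_tweets, 3)
</code></pre>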
arrays|loops|numpy|tweepy|textblob
0
5,043
43,195,510
Speeding up a linear transform using parallel Cython
<p>I need to speed up the calculation of a linear transform which is roughly of the following form:</p> <pre><code>import numpy as np N=10000 input=np.random.random(N) x=np.linspace(0,100,N) y=np.linspace(0,30,N) X,Y=np.meshgrid(x,y,sparse=True) output=np.dot(np.cos(X*Y),input) </code></pre> <p>That is, I evaluate the cosine on a regular grid and multiply my input by the resulting matrix. In reality, the kernel function (here, the cosine) is more complicated, in particular it is not periodic. Hence, no simplification of FFT-type is possible!</p> <p>The above transform takes about 5 seconds on my multi-core machine. Now, I definitely need to speed this up. A simple first try is to use numexpr:</p> <pre><code>import numpy as np import numexpr as ne N=10000 input=np.random.random(N) x=np.linspace(0,100,N) y=np.linspace(0,30,N) X,Y=np.meshgrid(x,y,sparse=True) output=np.dot(ne.evaluate('cos(X*Y)'),input) </code></pre> <p>This makes use of parallel computing and reduces the execution time to about 0.9 seconds. This is quite nice, but not sufficient for my purpose. So, my next try is to employ parallel Cython:</p> <pre><code>import numpy as np from cython.parallel import prange cimport numpy as np cimport cython from libc.math cimport cos DTYPE = np.float64 ctypedef np.float64_t DTYPE_t @cython.boundscheck(False) @cython.wraparound(False) @cython.nonecheck(False) def transform(double[:] x, double[:] y, double[:] input): cdef unsigned int N = x.shape[0] cdef double[:] output = np.zeros(N) cdef unsigned int row, col for row in prange(N, nogil= True): for col in range(N): output[row] += cos(x[row]*y[col])*input[col] return output </code></pre> <p>I compile this by executing </p> <pre><code>from distutils.core import setup from distutils.extension import Extension from Cython.Distutils import build_ext ext_modules=[ Extension("cythontransform", ["cythontransform.pyx"], libraries=["m"], extra_compile_args = ["-O3", "-ffast-math", "-march=native", "-fopenmp" ], extra_link_args=['-fopenmp'] ) ] setup( name = "cythontransform", cmdclass = {"build_ext": build_ext}, ext_modules = ext_modules ) </code></pre> <p>from the command line. Calling the transform via</p> <pre><code>import numpy as np from cythontransform import transform N=10000 input=np.random.random(N) x=np.linspace(0,100,N) y=np.linspace(0,30,N) output=transform(x,y,input) </code></pre> <p>yields a rather weak improvement, giving roughly 0.7 seconds. </p> <p>Is someone aware of the possibility of further improvements of the Cython code? </p> <p>Or, alternatively, is there some other framework (PyOpenCL, Pythran, Numba, ...) that is better suited to this problem?</p>
<p>On my laptop, the following <a href="https://github.com/serge-sans-paille/pythran" rel="nofollow noreferrer">pythran</a> version:</p> <pre><code>#pythran export transform(float64[], float64[], float64[]) import numpy as np def transform(x, y, input): N = x.shape[0] output = np.zeros(N) #omp parallel for for row in range(N): for col in range(N): output[row] += np.cos(x[row]*y[col])*input[col] return output </code></pre> <p>Compiled with</p> <pre><code>pythran python -Ofast dd.py -fopenmp </code></pre> <p>Runs roughly two times faster than the cython version you propose. I did not investigate why this happens though...</p>
numpy|parallel-processing|cython|numexpr
1
5,044
43,058,241
Applying a function to a 3D array in numpy
<p>I have a 3D numpy.ndarray (think of an image with RGB) like</p> <pre><code>a = np.arange(12).reshape(2,2,3) '''array( [[[ 0, 1, 2], [ 3, 4, 5]], [[ 6, 7, 8], [ 9, 10, 11]]])''' </code></pre> <p>and a function that handles a list input;</p> <pre><code>my_sum = lambda x: x[0] + x[1] + x[2] </code></pre> <p><strong>What should I do to apply this function to each pixel?</strong> (or each 1D element of the 2D array)</p> <h1>What I have tried</h1> <h2>np.apply_along_axis</h2> <p><a href="https://stackoverflow.com/questions/22424096/apply-functions-to-3d-numpy-array">This question</a> is the kind of same as mine. So, I first tried it.</p> <pre><code>np.apply_along_axis(my_sum, 0, a.T).T #EDIT np.apply_along_axis(my_sum, -1, a) is better </code></pre> <p>at first, I thought this was the solution but this was too slow, because <a href="https://stackoverflow.com/questions/23849097/numpy-np-apply-along-axis-function-speed-up">np.apply_along_axis is not for speed</a></p> <h1>np.vectorize</h1> <p>I applied np.vetorize to my_func.</p> <pre><code>vector_my_func = np.vectorize(my_sum) </code></pre> <p>However, I have no idea even on how this vectorized function can be called.</p> <pre><code>vector_my_func(0,1,2) #=&gt; TypeError: &lt;lambda&gt;() takes 1 positional argument but 3 were given vector_my_func(np.arange(3)) #=&gt; IndexError: invalid index to scalar variable. vector_my_func(np.arange(12).reshape(4,3)) #=&gt; IndexError: invalid index to scalar variable. vector_my_func(np.arange(12).reshape(2,2,3)) #=&gt; IndexError: invalid index to scalar variable. </code></pre> <p>I am totally at loss on how this should be done.</p> <h1>EDIT</h1> <p>benchmark results for suggested methods. (used jupyter notebook and restarted kernel for each test)</p> <pre><code>a = np.ones((1000,1000,3)) my_sum = lambda x: x[0] + x[1] + x[2] my_sum_ellipsis = lambda x: x[..., 0] + x[..., 1] + x[..., 2] vector_my_sum = np.vectorize(my_sum, signature='(i)-&gt;()') </code></pre> <pre><code>%timeit np.apply_along_axis(my_sum, -1, a) #1 loop, best of 3: 3.72 s per loop %timeit vector_my_sum(a) #1 loop, best of 3: 2.78 s per loop %timeit my_sum(a.transpose(2,0,1)) #100 loops, best of 3: 12 ms per loop %timeit my_sum_ellipsis(a) #100 loops, best of 3: 12.2 ms per loop %timeit my_sum(np.moveaxis(a, -1, 0)) #100 loops, best of 3: 12.2 ms per loop </code></pre>
<p>One option is to transpose the numpy array, swap the third axis to the first, and then you can apply the function directly to it:</p> <pre><code>my_sum(a.transpose(2,0,1)) #array([[ 3, 12], # [21, 30]]) </code></pre> <p>Or rewrite the sum function as:</p> <pre><code>my_sum = lambda x: x[..., 0] + x[..., 1] + x[..., 2] my_sum(a) #array([[ 3, 12], # [21, 30]]) </code></pre>
python|numpy|image-processing|matrix
4
5,045
72,155,950
how to set the gradient for a network in pytorch
<p>I have a model in pytorch. The model can take any shape but lets assume this is the model</p> <pre><code>torch_model = Sequential( Flatten(), Linear(28 * 28, 256), Dropout(.4), ReLU(), BatchNorm1d(256), ReLU(), Linear(256, 128), Dropout(.4), ReLU(), BatchNorm1d(128), ReLU(), Linear(128, 10), Softmax() ) </code></pre> <p>I am using SGD optimizer, I want to set the gradient for each of the layers so the SGD algorithm will move the parameters in the direction I want.</p> <p>Lets say I want all the gradients for all the layers to be ones (<code>torch.ones_like(gradient_shape)</code>) how can I do this? Thanks?</p>
<p>In PyTorch, with a model defined as yours above, you can iterate over the layers like this:</p> <pre><code>for layer in list(torch_model.modules())[1:]: print(layer) </code></pre> <p>You have to add the <code>[1:]</code> since the first module returned is the sequential module itself. In any layer, you can access the weights with <code>layer.weight</code>. However, it is important to remember that some layers, like Flatten and Dropout, don't have weights. A way to check, and then add 1 to each weight would be:</p> <pre><code>for layer in list(torch_model.modules())[1:]: if hasattr(layer, 'weight'): with torch.no_grad(): for i in range(layer.weight.shape[0]): layer.weight[i] = layer.weight[i] + 1 </code></pre> <p>I tested the above on your model and it does add 1 to every weight. Worth noting that it won't work without <code>torch.no_grad()</code> as you don't want pytorch tracking the changes.</p>
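<p>If the goal is literally to set the gradients (rather than the weights) so that SGD steps in a chosen direction, a hedged sketch is to assign each parameter's <code>.grad</code> directly before calling <code>optimizer.step()</code>. This assumes an SGD optimizer named <code>optimizer</code> has already been built over <code>torch_model.parameters()</code>:</p> <pre><code>optimizer.zero_grad()
for p in torch_model.parameters():
    # hand-set gradient of ones: plain SGD (no momentum or weight decay)
    # then moves every parameter by -lr * 1
    p.grad = torch.ones_like(p)
optimizer.step()
</code></pre>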
python|neural-network|pytorch|gradient-descent|stochastic-gradient
0
5,046
72,322,230
Looping (for) through some data, concatenating along the way, appending to a list in final, but dimensions don't match in python
<p>Here is the code:</p> <pre><code>#new_right_v2 = [] for i in range(rows): r1_p5_first_half = np.concatenate( (new_right[i,:312].reshape(1,-1), new_right[i,625:937].reshape(1,-1), new_right[i,1250:1562].reshape(1,-1), new_right[i,1875:2187].reshape(1,-1)),axis=1) #print(r1_p5_first_half.shape) r1_p5_second_half = np.concatenate( (new_right[i,313:625].reshape(1,-1), new_right[i,938:1250].reshape(1,-1), new_right[i,1563:1875].reshape(1,-1), new_right[i,2187:2499].reshape(1,-1)),axis=1) new_right_v2.append(r1_p5_first_half) new_right_v2.append(r1_p5_second_half) new_right_v2 </code></pre> <p>But when I run to check dimensions:</p> <pre><code>for i in range(40): print(new_right_v2[i].shape) </code></pre> <p>This output comes: (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (2, 1248) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624) (4, 624)</p> <p>I don't know what happens after 10th iteration. Any idea?</p>
<p>I am actually not sure which dimensions you are checking, as <code>new_right_v2</code> seems to be a list.</p> <p>Maybe you can apply the operations matrix-wise to all rows at once?</p> <pre><code>first_half = np.concatenate( (new_right[:, :312], new_right[:, 625:937], new_right[:, 1250:1562], new_right[:, 1875:2187]), axis=1) </code></pre>
python|numpy
1
5,047
72,355,950
How to change function values inside an if loop
<pre><code>import numpy as np from ufl import cofac, sqrt def f(X): fx = sqrt(X[0]**2+X[1]**2) return fx X = np.array([1.0, 2.0]) Y = f(X) print(Y) if f(X)&lt;=3: f(X)=0.0 print(f(X)) exit() </code></pre> <p>The above function calculates the distance of a point from the origin. If the distance is less than 3 units, <strong>I need to assign the value 0.0 to the function</strong>.</p> <p>But the above code gives me no difference in results before and after the &quot;if&quot; loop. How to assign values to functions inside an if loop?</p>
<p>Sounds like you want the <code>f()</code> function to return different results depending on the values in X.</p> <pre><code>def f(X): fx = sqrt(X[0]**2+X[1]**2) if fx &gt; 3: return fx else: return 0.0 </code></pre>
python|arrays|numpy
1
5,048
62,560,844
Python: Average over bins
<p>I created bins for age and have a productivity factor (Prod). Now I want to group the bins and calculate average over Prod. So that in the end I have age categories with their average productivity.</p> <pre><code> bin Prod 1 (40, 50] 72.920192 2 (30, 40] 51.582848 3 (20, 30] 17.478928 4 (20, 30] 49.205143 6 (50, 60] 38.416232 7 (50, 60] 57.782620 9 (50, 60] 56.718825 10 (50, 60] 75.326448 11 (20, 30] 75.327148 12 (40, 50] 106.354800 </code></pre>
<p>Use <code>df.groupby('bin')['Prod'].mean()</code>.</p>
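<p>As a worked sketch with the rows shown in the question (assuming they live in a DataFrame called <code>df</code>), the grouped means come out approximately as follows:</p> <pre><code>avg_prod = df.groupby('bin')['Prod'].mean()
print(avg_prod)
# bin
# (20, 30]    33.342  (approx.)
# (30, 40]    51.583  (approx.)
# (40, 50]    89.637  (approx.)
# (50, 60]    57.061  (approx.)
</code></pre>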
python|numpy
0
5,049
62,781,688
Two different styles of Tensorflow implementation for the same network architecture lead to two different results and behaviors?
<ul> <li>OS Platform: Linux Centos 7.6</li> <li>Distribution: Intel Xeon Gold 6152 (22x3.70 GHz);</li> <li>GPU Model: NVIDIA Tesla V100 32 GB;</li> <li>Number of nodes/CPU/Cores/GPU: 26/52/1144/104;</li> <li>TensorFlow installed from (source or binary): official webpage</li> <li>TensorFlow version (use command below): 2.1.0</li> <li>Python version: 3.6.8</li> </ul> <p><strong>Description of issue:</strong></p> <p>While I was implementing my proposed method, using the second style of implementation (see below), I realized that the performance of the algorithm is indeed strange. To be more precise, the accuracy decreases and loss value increases while the number of epochs increases.</p> <p>So I narrow down the problem and finally, I decided to modify some codes from TensorFlow official page to check what is happening. As it is explained in TF v2 official webpage there are two styles of implementation which I have adopted as follows.</p> <ul> <li><p>I have modified the code provided in &quot;getting started of TF v2&quot; the link below:</p> <p><a href="https://www.tensorflow.org/tutorials/quickstart/beginner" rel="nofollow noreferrer">TensorFlow 2 quickstart for beginners</a></p> </li> </ul> <p>as follows:</p> <pre><code>import tensorflow as tf from sklearn.preprocessing import OneHotEncoder from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split learning_rate = 1e-4 batch_size = 100 n_classes = 2 n_units = 80 # Generate synthetic data / load data sets x_in, y_in = make_classification(n_samples=1000, n_features=10, n_informative=4, n_redundant=2, n_repeated=2, n_classes=2, n_clusters_per_class=2, weights=[0.5, 0.5], flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=42) x_in = x_in.astype('float32') y_in = y_in.astype('float32').reshape(-1, 1) one_hot_encoder = OneHotEncoder(sparse=False) y_in = one_hot_encoder.fit_transform(y_in) y_in = y_in.astype('float32') x_train, x_test, y_train, y_test = train_test_split(x_in, y_in, test_size=0.4, random_state=42, shuffle=True) x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.5, random_state=42, shuffle=True) print(&quot;shapes:&quot;, x_train.shape, y_train.shape, x_test.shape, y_test.shape, x_val.shape, y_val.shape) V = x_train.shape[1] model = tf.keras.models.Sequential([ tf.keras.layers.Dense(n_units, activation='relu', input_shape=(V,)), tf.keras.layers.Dropout(0.2), tf.keras.layers.Dense(n_classes) ]) loss_fn = tf.keras.losses.BinaryCrossentropy(from_logits=True) model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test, verbose=2) </code></pre> <p>the output is as it is expected, as one can see below:</p> <pre><code>600/600 [==============================] - 0s 419us/sample - loss: 0.7114 - accuracy: 0.5350 Epoch 2/5 600/600 [==============================] - 0s 42us/sample - loss: 0.6149 - accuracy: 0.6050 Epoch 3/5 600/600 [==============================] - 0s 39us/sample - loss: 0.5450 - accuracy: 0.6925 Epoch 4/5 600/600 [==============================] - 0s 46us/sample - loss: 0.4895 - accuracy: 0.7425 Epoch 5/5 600/600 [==============================] - 0s 40us/sample - loss: 0.4579 - accuracy: 0.7825 test: 200/200 - 0s - loss: 0.4110 - accuracy: 0.8350 </code></pre> <p>To be more precise, the training accuracy increases and the loss value decrease as the number epochs increases (which is expected and it is normal).</p> <p>HOWEVER, the following 
chunk of code which is adapted from the link below:</p> <p><a href="https://www.tensorflow.org/tutorials/quickstart/advanced" rel="nofollow noreferrer">TensorFlow 2 quickstart for experts</a></p> <p>as follows:</p> <pre><code>import tensorflow as tf from sklearn.preprocessing import OneHotEncoder from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split learning_rate = 1e-4 batch_size = 100 n_classes = 2 n_units = 80 # Generate synthetic data / load data sets x_in, y_in = make_classification(n_samples=1000, n_features=10, n_informative=4, n_redundant=2, n_repeated=2, n_classes=2, n_clusters_per_class=2, weights=[0.5, 0.5],flip_y=0.01, class_sep=1.0, hypercube=True, shift=0.0, scale=1.0, shuffle=True, random_state=42) x_in = x_in.astype('float32') y_in = y_in.astype('float32').reshape(-1, 1) one_hot_encoder = OneHotEncoder(sparse=False) y_in = one_hot_encoder.fit_transform(y_in) y_in = y_in.astype('float32') x_train, x_test, y_train, y_test = train_test_split(x_in, y_in, test_size=0.4, random_state=42, shuffle=True) x_test, x_val, y_test, y_val = train_test_split(x_test, y_test, test_size=0.5, random_state=42, shuffle=True) print(&quot;shapes:&quot;, x_train.shape, y_train.shape, x_test.shape, y_test.shape, x_val.shape, y_val.shape) training_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size) valid_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(batch_size) testing_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(batch_size) V = x_train.shape[1] class MyModel(tf.keras.models.Model): def __init__(self): super(MyModel, self).__init__() self.d1 = tf.keras.layers.Dense(n_units, activation='relu', input_shape=(V,)) self.d2 = tf.keras.layers.Dropout(0.2) self.d3 = tf.keras.layers.Dense(n_classes,) def call(self, x): x = self.d1(x) x = self.d2(x) return self.d3(x) # Create an instance of the model model = MyModel() loss_object = tf.keras.losses.BinaryCrossentropy(from_logits=True) optimizer = tf.keras.optimizers.Adam() train_loss = tf.keras.metrics.Mean(name='train_loss') train_accuracy = tf.keras.metrics.BinaryCrossentropy(name='train_accuracy') test_loss = tf.keras.metrics.Mean(name='test_loss') test_accuracy = tf.keras.metrics.BinaryCrossentropy(name='test_accuracy') @tf.function def train_step(images, labels): with tf.GradientTape() as tape: # training=True is only needed if there are layers with different # behavior during training versus inference (e.g. Dropout). predictions = model(images,) # training=True loss = loss_object(labels, predictions) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) train_loss(loss) train_accuracy(labels, predictions) @tf.function def test_step(images, labels): # training=False is only needed if there are layers with different # behavior during training versus inference (e.g. Dropout). 
predictions = model(images,) # training=False t_loss = loss_object(labels, predictions) test_loss(t_loss) test_accuracy(labels, predictions) EPOCHS = 5 for epoch in range(EPOCHS): # Reset the metrics at the start of the next epoch train_loss.reset_states() train_accuracy.reset_states() test_loss.reset_states() test_accuracy.reset_states() for images, labels in training_dataset: train_step(images, labels) for test_images, test_labels in testing_dataset: test_step(test_images, test_labels) template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}' print(template.format(epoch + 1,train_loss.result(), train_accuracy.result(), test_loss.result(), test_accuracy.result())) </code></pre> <p>Behaves indeed strange. Here is the output of this piece of code:</p> <pre><code>Epoch 1, Loss: 0.7299721837043762, Accuracy: 3.8341376781463623, Test Loss: 0.7290592193603516, Test Accuracy: 3.6925911903381348 Epoch 2, Loss: 0.6725851893424988, Accuracy: 3.1141700744628906, Test Loss: 0.6695905923843384, Test Accuracy: 3.2315549850463867 Epoch 3, Loss: 0.6256862878799438, Accuracy: 2.75959849357605, Test Loss: 0.6216427087783813, Test Accuracy: 2.920461416244507 Epoch 4, Loss: 0.5873140096664429, Accuracy: 2.4249706268310547, Test Loss: 0.5828182101249695, Test Accuracy: 2.575272560119629 Epoch 5, Loss: 0.555053174495697, Accuracy: 2.2128372192382812, Test Loss: 0.5501811504364014, Test Accuracy: 2.264410972595215 </code></pre> <p>As one can see, not only the values of accuracy are strange but also instead of increasing, once the number of epochs increase, they decrease?</p> <p>May you please explain what is happening here?</p>
<p>As pointed out in the comment, I made a mistake in using the evaluation metrics. I should have used BinaryAccuracy.</p> <p>Moreover, it is better to edit the call method in the subclassed (advanced) version as follows, so that dropout is only active during training:</p> <pre><code>def call(self, x, training=False): x = self.d1(x) if training: x = self.d2(x, training=training) return self.d3(x) </code></pre>
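<p>For reference, the corrected metric definitions would look roughly like the sketch below. Since the model outputs raw logits (the loss uses <code>from_logits=True</code>), the accuracy threshold should sit at 0 rather than the default 0.5, or a sigmoid should be applied first; with one-hot labels over two columns, <code>CategoricalAccuracy</code> (argmax-based) is another option:</p> <pre><code>train_accuracy = tf.keras.metrics.BinaryAccuracy(name='train_accuracy', threshold=0.0)
test_accuracy = tf.keras.metrics.BinaryAccuracy(name='test_accuracy', threshold=0.0)
</code></pre>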
python|tensorflow|machine-learning|keras
0
5,050
62,503,373
TypeError: get_file() missing 1 required positional argument: 'origin'
<p>I'm trying to use my own .txt file in tensorflow, but when I run it in jupyter notebook i get this</p> <pre><code>TypeError Traceback (most recent call last) &lt;ipython-input-28-b7f323158fac&gt; in &lt;module&gt; 5 import time 6 ----&gt; 7 path_to_file = tf.keras.utils.get_file('https://storage.googleapis.com/jezuz/Jezuz.txt?x-goog-signature=287cd6ea9dbdc21b1002c85e9c064a70b422b3cf7458bf047cb5e476a5073e248d1c25c1ee9b7bfe6c981b3ef2cec705426beb592d5d6feeb8bbe376ca388d41f9b854c63c64c0d8b08483da05ca7c1483bfdd5b3dc9ac45f39bd2a35296e44771dd1b2396760f1f1690fd392564a50c083b557cb607d8ad7d7bc8b95492e62b8884e15d404b7f0e1b7471afcb3aca07875ae4ef82388f41708e557b6cdb79ab6934aa61a8057ab23ee4589c514cc18878fe66282343e0e1786b0d9b4eefa52f2041256c5ddfcc7f0ff06ae096f49f1c5426cb4a9375a47f220cce09ebf919e5a603c7f5cc4b72af5f7b1c1416a0817cb39c6631104d4558033a0c60995d5c7e&amp;x-goog-algorithm=GOOG4-RSA-SHA256&amp;x-goog-credential=jezuz-819%40cybernetic-day-275502.iam.gserviceaccount.com%2F20200621%2Fus%2Fstorage%2Fgoog4_request&amp;x-goog-date=20200621T190757Z&amp;x-goog-expires=600&amp;x-goog-signedheaders=host') 8 with open('Jezuz.txt') as file_object: 9 contents = file_object.read() TypeError: get_file() missing 1 required positional argument: 'origin' </code></pre> <p>this is my code</p> <pre><code>import tensorflow as tf import numpy as np import os import time path_to_file = tf.keras.utils.get_file('https://storage.googleapis.com/jezuz/Jezuz.txt?x-goog-signature=287cd6ea9dbdc21b1002c85e9c064a70b422b3cf7458bf047cb5e476a5073e248d1c25c1ee9b7bfe6c981b3ef2cec705426beb592d5d6feeb8bbe376ca388d41f9b854c63c64c0d8b08483da05ca7c1483bfdd5b3dc9ac45f39bd2a35296e44771dd1b2396760f1f1690fd392564a50c083b557cb607d8ad7d7bc8b95492e62b8884e15d404b7f0e1b7471afcb3aca07875ae4ef82388f41708e557b6cdb79ab6934aa61a8057ab23ee4589c514cc18878fe66282343e0e1786b0d9b4eefa52f2041256c5ddfcc7f0ff06ae096f49f1c5426cb4a9375a47f220cce09ebf919e5a603c7f5cc4b72af5f7b1c1416a0817cb39c6631104d4558033a0c60995d5c7e&amp;x-goog-algorithm=GOOG4-RSA-SHA256&amp;x-goog-credential=jezuz-819%40cybernetic-day-275502.iam.gserviceaccount.com%2F20200621%2Fus%2Fstorage%2Fgoog4_request&amp;x-goog-date=20200621T190757Z&amp;x-goog-expires=600&amp;x-goog-signedheaders=host') with open('Jezuz.txt') as file_object: contents = file_object.read() print(contents) </code></pre>
<p>I believe what you want here is:</p> <pre class="lang-py prettyprint-override"><code>path_to_file = tf.keras.utils.get_file('Jezuz.txt', 'https://storage.googleapis.com/jezuz/Jezuz.txt?x-goog-signature=287cd6ea9dbdc21b1002c85e9c064a70b422b3cf7458bf047cb5e476a5073e248d1c25c1ee9b7bfe6c981b3ef2cec705426beb592d5d6feeb8bbe376ca388d41f9b854c63c64c0d8b08483da05ca7c1483bfdd5b3dc9ac45f39bd2a35296e44771dd1b2396760f1f1690fd392564a50c083b557cb607d8ad7d7bc8b95492e62b8884e15d404b7f0e1b7471afcb3aca07875ae4ef82388f41708e557b6cdb79ab6934aa61a8057ab23ee4589c514cc18878fe66282343e0e1786b0d9b4eefa52f2041256c5ddfcc7f0ff06ae096f49f1c5426cb4a9375a47f220cce09ebf919e5a603c7f5cc4b72af5f7b1c1416a0817cb39c6631104d4558033a0c60995d5c7e&amp;x-goog-algorithm=GOOG4-RSA-SHA256&amp;x-goog-credential=jezuz-819%40cybernetic-day-275502.iam.gserviceaccount.com%2F20200621%2Fus%2Fstorage%2Fgoog4_request&amp;x-goog-date=20200621T190757Z&amp;x-goog-expires=600&amp;x-goog-signedheaders=host') </code></pre>
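<p>One more detail: <code>get_file</code> downloads the file into the Keras cache (typically <code>~/.keras/datasets/</code>) and returns the local path, so the follow-up <code>open()</code> should use that returned path rather than the bare filename:</p> <pre><code>path_to_file = tf.keras.utils.get_file('Jezuz.txt', signed_url)  # signed_url = the long URL from the question

with open(path_to_file) as file_object:
    contents = file_object.read()
</code></pre>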
python|tensorflow
0
5,051
62,502,529
Understanding the double star notation in Python in an algorithm
<p>I am confused about the following code in Python:</p> <pre><code>import numpy as np from numpy.random import rand, randn def generate_data (beta, n): u= np.random.rand(n,1) y= (u**np.arange(0,4))@beta return y np.random.seed(12) beta = np.array([[10,-140,400,-250]]).T n = 5 y = generate_data(beta, n) print(y) </code></pre> <p>I really do not understand the meaning of <code>u**np.arange(0,4)</code>, especially since <code>u</code> is a vector of dimension n times 1 (where n is arbitrary) and <code>np.arange(0,4)</code> is a vector of dimension 1 times 4. Nonetheless, this algorithm <strong>works</strong>.</p> <p>I therefore tried the following:</p> <pre><code>import numpy as np u= np.array([1,2,3,4,5,6]).T beta = np.array([[10,-140,400,-250]]).T y = (u ** np.arange(0,4)) @ beta print (y) </code></pre> <p>This time <code>n</code> is set to be 6. However, this algorithm <strong>does not work</strong> and there is an error message about the dimensions.</p> <p>Can anyone please tell me about the meaning of the mysterious <code>u ** np.arange(0,4)</code>?</p>
<p>The ** will do the power operation element-wise. Here is some example code that will make it clear:</p> <pre><code>&gt;&gt;&gt; a = np.array([2,3,4]) &gt;&gt;&gt; b = np.array([1,2,3]) &gt;&gt;&gt; a**b array([ 2, 9, 64], dtype=int32) </code></pre> <p>As you can see, the 0th element of a is raised to the power of the 0th element of b, the 1st element of a is raised to the power of the 1st element of b, and so on.</p> <p>EDIT:</p> <p>My original answer didn't address part of your question. Here's an example to show why it worked with an arbitrary value of n.</p> <p>Let <code>a</code> be a numpy array with dimension <code>(6,1)</code>.</p> <pre><code>&gt;&gt;&gt; a = np.array([[1], [2], [3], [4], [5], [6]]) &gt;&gt;&gt; a.shape (6, 1) &gt;&gt;&gt; b = np.array([1,2,3]) &gt;&gt;&gt; a**b array([[ 1, 1, 1], [ 2, 4, 8], [ 3, 9, 27], [ 4, 16, 64], [ 5, 25, 125], [ 6, 36, 216]], dtype=int32) </code></pre> <p>Notice that the output array has dimension (6,3). 6 is the first dimension of <code>a</code>, and 3 is the first dimension of <code>b</code>. When the shapes differ like this, NumPy broadcasts them: each element of <code>a</code> is raised to the power of each element of <code>b</code>, which is why the result has shape (6,3).</p> <p>The reason your test example didn't work is a little detail. In your second code block (the code to test the operator), <code>u</code> had a shape of <code>(6,)</code> instead of <code>(6,1)</code>. A shape <code>(6,)</code> array cannot be broadcast against <code>np.arange(0,4)</code>, which has shape <code>(4,)</code>, so the ** operation fails, whereas a <code>(6,1)</code> column broadcasts against <code>(4,)</code> to give a <code>(6,4)</code> result.</p>
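<p>Concretely, the failing test works once <code>u</code> is made a column vector; note that <code>.T</code> is a no-op on a 1-D array, which is why it stayed <code>(6,)</code>. A small sketch:</p> <pre><code>u = np.array([1, 2, 3, 4, 5, 6]).reshape(-1, 1)   # shape (6, 1)
beta = np.array([[10, -140, 400, -250]]).T        # shape (4, 1)

y = (u ** np.arange(0, 4)) @ beta                 # (6, 4) @ (4, 1) -&gt; (6, 1)
</code></pre>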
python|python-3.x|numpy
1
5,052
54,497,443
Exporting pandas dataframe to CSV
<p>I'm loading a SQL table into a dataframe, and then pushing it directly into a CSV. The Problem is the export. I require:</p> <pre><code>value|value|value </code></pre> <p>and I'm getting:</p> <pre><code>"(value|value|value)" </code></pre> <p>How do I get out of that?</p> <p>Here's my code:</p> <pre><code>for row in self.roster.itertuples(): SQL = self.GenerateSQL(row) self.filename = '{}_{}.csv'.format(row.tablename, now.strftime("%Y-%m-%d")) # Open the file f = open(os.path.join(self.path, self.filename), 'w') # Create a connection and get a cursor cursor = self.conn.cursor() # Execute the query cursor.execute(SQL) # Get data in batches rowcount = 0 while True: # Read the data df = pd.DataFrame(cursor.fetchmany(1000)) # We are done if there are no data if len(df) == 0: break # Let's write to the file else: rowcount += len(df.index) print('Number of rows exported: {}'.format(str(rowcount))) df.to_csv(f, header=False, sep='|', index=False) # Clean up f.close() cursor.close() </code></pre> <p>Appreciate any insight.</p> <p>UPDATE #1 This is an output of the df during the 1000 record cycles.</p> <pre><code>[1000 rows x 1 columns] Number of rows exported: 10000 0 0 [11054, Smart Session (30 Minute) , smartsessi... 1 [11055, Best Practices, bestpractices, 2018-06... 2 [11056, Smart Session (30 Minute) , smartsessi... 3 [11057, Best Practices, bestpractices, 2018-06... </code></pre> <p>two records:</p> <pre><code> 0 0 [1, Offrs.com Live Training, livetraining, 201... 1 [2, Offrs.com Live Training, livetraining, 201... </code></pre>
<p>Provided that you can use <code>sqlalchemy</code> package, you would be able to take advantage of the <code>pd.read_sql</code> function which handles querying the database and retrieving the data.</p> <pre><code>import pandas as pd pd.set_option('display.max_columns', 500) pd.set_option('display.width', 1000) from sqlalchemy import create_engine engine = create_engine('postgresql://postgres@localhost:5432/sample') df = pd.read_sql_query('select * from climate limit 3',con=engine) df.to_csv('out.csv', header=False, sep='|', index=False) </code></pre> <p>Alternatively, you can still use the cursor. However, you need to split the rows fetched into individual pieces before constructing a data frame. Currently, the whole row with multiple database table columns is put into a single dataframe row.</p>
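<p>If sqlalchemy is not an option and you keep the cursor loop, a hedged sketch of the splitting step is to build each chunk from the fetched row tuples and take the column names from <code>cursor.description</code> (part of the DB-API). Whether this removes the quoted "(value|value|value)" output depends on what your driver actually returns per row, so treat it as a starting point:</p> <pre><code>while True:
    rows = cursor.fetchmany(1000)
    if not rows:
        break
    # each row is a tuple of field values, so the frame gets one column per field
    df = pd.DataFrame([tuple(r) for r in rows],
                      columns=[col[0] for col in cursor.description])
    df.to_csv(f, header=False, sep='|', index=False)
</code></pre>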
python|pandas|csv
0
5,053
73,543,076
How to transform Array into CSV Format in Python using DataFrame?
<p>I am currently fetching data from an RestAPI which I then want to process. However, the platform I use needs the data to be transformed with the pandas dataframe. In order to get it into the correct format, I need to transform this response:</p> <pre><code>data = { &quot;apple&quot;:{ &quot;price&quot;: 0.89, &quot;category&quot;: &quot;fruit&quot;, &quot;weight&quot;: 13.88 }, &quot;carrot&quot;:{ &quot;price&quot;: 1.87, &quot;category&quot;: &quot;vegetable&quot;, &quot;weight&quot;: 3.23 } } </code></pre> <p>into this format:</p> <pre><code>data = { &quot;product&quot;: { &quot;apple&quot;, &quot;carrot&quot; }, &quot;price&quot;: { 0.89, 1.87 }, &quot;category&quot;: { &quot;fruit&quot;, &quot;vegetable&quot; }, &quot;weight&quot;: { 13.88, 3.23 } } </code></pre>
<p>You can use:</p> <pre><code>df = pd.DataFrame(data) out = df.T.rename_axis('product').reset_index() </code></pre> <p>output:</p> <pre><code> product price category weight 0 apple 0.89 fruit 13.88 1 carrot 1.87 vegetable 3.23 </code></pre> <p>as dictionary:</p> <pre><code>out = df.T.rename_axis('product').reset_index().to_dict('list') </code></pre> <p>output:</p> <pre><code>{'product': ['apple', 'carrot'], 'price': [0.89, 1.87], 'category': ['fruit', 'vegetable'], 'weight': [3.23, 13.88]} </code></pre>
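<p>An equivalent one-liner, in case it reads more naturally, is <code>from_dict</code> with <code>orient='index'</code>, which treats each top-level key as a row:</p> <pre><code>out = (pd.DataFrame.from_dict(data, orient='index')
         .rename_axis('product')
         .reset_index())
</code></pre>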
python|pandas|dataframe
2
5,054
73,689,334
Keras save model with named layers
<p>I have a keras model where each layer has a specific name</p> <pre><code>def build_model(): input_layer = keras.Input(shape=input_shape, name='input') conv1 = layers.Conv2D(32, kernel_size=(3, 3), activation=&quot;relu&quot;, name='conv1')(input_layer) maxpool1 = layers.MaxPooling2D(pool_size=(2, 2), name='maxpool1')(conv1) conv2 = layers.Conv2D(64, kernel_size=(3, 3), activation=&quot;relu&quot;, name='conv2')(maxpool1) maxpool2 = layers.MaxPooling2D(pool_size=(2, 2), name='maxpool2')(conv2) flatten = layers.Flatten(name='flatten')(maxpool2) dropout = layers.Dropout(0.5, name='dropout')(flatten) dense = layers.Dense(num_classes, activation=&quot;softmax&quot;, name='dense')(dropout) return keras.Model(inputs=(input_layer,), outputs=(dense,)) model = build_model() model.compile(...) model.fit(...) </code></pre> <p>and I would like to save the model in its entirety so that I can load it later. Saving and loading the model looks like this</p> <pre><code>model.save('path/model.h5') new_model = tf.keras.models.load_model('path/model.h5') </code></pre> <p>Unfortunately I get the following error when I try to load the file</p> <pre><code>&gt;&gt;&gt; model = keras.models.load_model('model.h5') Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/username/anaconda3/envs/enviromentname/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/save.py&quot;, line 146, in load_model return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile) File &quot;/Users/username/anaconda3/envs/enviromentname/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py&quot;, line 166, in load_model_from_hdf5 model_config = json.loads(model_config.decode('utf-8')) AttributeError: 'str' object has no attribute 'decode' </code></pre> <p>I also tried to save only the weights but I get the same error</p> <pre><code>&gt;&gt;&gt; model.save_weights('path/model_weights.h5') &gt;&gt;&gt; model.load_weights('path/model_weights.h5') Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File &quot;/Users/username/anaconda3/envs/enviromentname/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py&quot;, line 181, in load_weights return super(Model, self).load_weights(filepath, by_name) File &quot;/Users/username/anaconda3/envs/enviromentname/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/network.py&quot;, line 1177, in load_weights saving.load_weights_from_hdf5_group(f, self.layers) File &quot;/Users/username/anaconda3/envs/enviromentname/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py&quot;, line 651, in load_weights_from_hdf5_group original_keras_version = f.attrs['keras_version'].decode('utf8') AttributeError: 'str' object has no attribute 'decode' </code></pre> <p>Following what has been discussed in <a href="https://stackoverflow.com/questions/53740577/does-any-one-got-attributeerror-str-object-has-no-attribute-decode-whi">this question</a> I tried downgrading h5py but the error remains. How can I fix it?</p> <p>EDIT: Following Dr.Snoopy answer I checked again if h5py was correcty downgraded and also matched Keras version between Google Colab and my local machine. It works now with h5py version 2.10.0</p>
<p>In order to save the entire model I would use the <a href="https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model" rel="nofollow noreferrer">SavedModel</a> format, that saves your model as a <code>.pb</code>:</p> <pre><code>import tensorflow as tf tf.keras.models.save_model(model, &quot;test_model&quot;) loded_model = tf.keras.models.load_model(&quot;test_model&quot;) </code></pre>
python|python-3.x|tensorflow|keras|tf.keras
0
5,055
73,623,626
Why is my pytorch Autoencoder giving me a "mat1 and mat2 shapes cannot be multiplied" error?
<p>I know this is because the shapes don't match for the multiplication, but why when my code is similar to most example code I found:</p> <pre><code>import torch.nn as nn ... #input is a 256x256 image num_input_channels = 3 self.encoder = nn.Sequential( nn.Conv2d(num_input_channels*2**0, num_input_channels*2**1, kernel_size=3, padding=1, stride=2), #1 6 128 128 nn.Tanh(), nn.Conv2d(num_input_channels*2**1, num_input_channels*2**2, kernel_size=3, padding=1, stride=2), #1 12 64 64 nn.Tanh(), nn.Conv2d(num_input_channels*2**2, num_input_channels*2**3, kernel_size=3, padding=1, stride=2), #1 24 32 32 nn.Tanh(), nn.Conv2d(num_input_channels*2**3, num_input_channels*2**4, kernel_size=3, padding=1, stride=2), #1 48 16 16 nn.Tanh(), nn.Conv2d(num_input_channels*2**4, num_input_channels*2**5, kernel_size=3, padding=1, stride=2), #1 96 8 8 nn.Tanh(), nn.Conv2d(num_input_channels*2**5, num_input_channels*2**6, kernel_size=3, padding=1, stride=2), #1 192 4 4 nn.LeakyReLU(), nn.Conv2d(num_input_channels*2**6, num_input_channels*2**7, kernel_size=3, padding=1, stride=2), #1 384 2 2 nn.LeakyReLU(), nn.Conv2d(num_input_channels*2**7, num_input_channels*2**8, kernel_size=2, padding=0, stride=1), #1 768 1 1 nn.LeakyReLU(), nn.Flatten(), nn.Linear(768, 1024*32), nn.ReLU(), nn.Linear(1024*32, 256), nn.ReLU(), ).cuda() </code></pre> <p>I get the error &quot;RuntimeError: mat1 and mat2 shapes cannot be multiplied (768x1 and 768x32768)&quot;</p> <p>To my understanding I should end up with a Tensor of shape [1,768,1,1] after the convolutions and [1,768] after flattening, so I can use a fully connected Linear layer that goes to 1024*32 in size (by which I tried to add some more ways for the neural net to store data/knowledge).</p> <p>Using <code>nn.Linear(1,1024*32)</code> runs with a warning later: &quot;UserWarning: Using a target size (torch.Size([3, 256, 256])) that is different to the input size (torch.Size([768, 3, 256, 256]))&quot;. I think it comes from my decoder, though</p> <p>What am I not understanding correctly here?</p>
<p>All <code>torch.nn</code> Modules require batched inputs, and it seems in your case you have no batch dimension. Without knowing your code I'm assuming you are using</p> <pre><code>my_input.shape == (3, 256, 256) </code></pre> <p>But you will need to add a batch dimension, that is, you need to have</p> <pre><code>my_input.shape == (1, 3, 256, 256) </code></pre> <p>You can easily do that by introducing a dummy dimension using:</p> <pre><code>my_input = my_input[None, ...] </code></pre>
python|pytorch|autoencoder
1
5,056
71,298,878
Convert JSON response from request into Pandas DataFrame
<p>I want to iterate over a dataframe by rows and use cell values to pull data from an api and assign the response to a new column. I got the response and everything works but i need to convert the json response into a dataframe column so i wrote a function similar to this</p> <pre><code>def get(): response=requests.get(url + id) return response.json() </code></pre> <p>and applied this function to every row</p> <pre><code>d['res'] = d.apply(lambda row: get()) </code></pre> <p>The problem that i got a json format in the column. how can i extract what i need from the reponse and put it in columns.</p> <pre><code> { 'code': 'OK', 'items': [{'start': '2021-03-21T00:00:00.000', 'end': '2021-03-31T00:00:00.000', 'location': {'code': None, 'position': {'lat': 47.464699, 'lon': 8.54917}, 'country_code': None}, 'source': 'geoeditor', 'title': 'test 25.03.2021', 'body': 'test description', 'severity': None, 'category': None, 'relatedEntities': None, 'relevant': None, 'raw': {'active': True, 'id': 82482150, 'layerId': 'disruption_il', 'locationType': 'POINT', 'name': 'New Location', 'changed': '2021-03-25T20:49:51Z', 'groupId': None, 'identifiers': [{'name': 'ref_id', 'value': '9ded7375-bea2-4466-96a9-fd5c42f9a562'}], 'properties': {'title': 'test 25.03.2021', 'source': 'disruption_news_event', 'to_date': '2021-03-31', 'relevant': 'true', 'from_date': '2021-03-21', 'description': 'test description'}, 'relationships': [{'referenceIdentifierValue': 'ZRH', 'relationshipId': 'event_impacts_airport', 'referenceLayerId': 'airport_status', 'referenceIdentifierName': 'iata_code'}]}}], 'totalItems': 1, 'errors': []} </code></pre> <p>how can i extract data from <strong>items</strong> and put it on columns e.g:</p> <pre><code>col 1 = start col 2 = end col n = country_code etc... </code></pre>
<p>Did you try <code>json_normalize</code> method?</p> <p>There is an example of using:</p> <pre><code>import json # load data using Python JSON module with open('data/nested_mix.json','r') as f: data = json.loads(f.read()) # Normalizing data df = pd.json_normalize(data, record_path =['students']) </code></pre> <p>I found that in this article: <a href="https://towardsdatascience.com/how-to-convert-json-into-a-pandas-dataframe-100b2ae1e0d8" rel="nofollow noreferrer">https://towardsdatascience.com/how-to-convert-json-into-a-pandas-dataframe-100b2ae1e0d8</a></p> <p>Generally, there are some examples of transforming json data to pandas data frame, so maybe that article would be helpful for you.</p>
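<p>Applied to the response shown in the question, the same idea would look roughly like the sketch below (here <code>resp</code> stands for the parsed JSON returned by the request; <code>record_path</code> points at the <code>items</code> list, and nested fields such as the coordinates come out as dot-separated column names):</p> <pre><code>resp = requests.get(url + id).json()
df = pd.json_normalize(resp, record_path=['items'])
# expect columns like 'start', 'end', 'title',
# 'location.position.lat', 'location.position.lon', 'location.country_code', ...
</code></pre>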
python|json|pandas|dataframe
1
5,057
71,384,835
Merge two lines into one and create Pandas DataFrame
<p>I have a file with data which is not easy to make stucure ready to create dataframe.</p> <pre><code>SFE, 8924, 3,CONV,1,R5.0 1.267065000E-04 1.267065000E-04 1.267065000E-04 1.267065000E-04 SFE, 8924, 3,CONV,2,R5.0 761.000000 761.000000 761.000000 761.000000 SFE, 8925, 3,CONV,1,R5.0 1.289895000E-04 1.289895000E-04 1.289895000E-04 1.289895000E-04 SFE, 8925, 3,CONV,2,R5.0 761.000000 761.000000 761.000000 761.000000 </code></pre> <p>There are spaces, multispaces, comas and tabs. How to merge 1st and 2nd line together (3rd+4th and so on)?</p> <p>Desired outcome:</p> <pre><code>SFE,8924,3,CONV,1,R5.0,1.267065000E-04,1.267065000E-04,1.267065000E-04,1.267065000E-04 SFE,8924,3,CONV,2,R5.0,761.000000,761.000000,761.000000,761.000000 SFE,8925,3,CONV,1,R5.0,1.289895000E-04,1.289895000E-04,1.289895000E-04,1.289895000E-04 SFE,8925,3,CONV,2,R5.0,761.000000,761.000000,761.000000,761.000000 </code></pre> <p>and then pandas should have no problem creating the df.</p> <p>For now I have such a code (file has some text at the beginning, so I read starting in 45line):</p> <pre><code>data=[] file = open('7HA03_thermal_final_filled.txt', 'r+') with file as f: lines=f.readlines()[45:] for line in lines: data.append(line) file.close() df=pd.DataFrame(data) </code></pre> <p>Tried to play with odds and even lines but still have one column with strings. Can share more not successful code but I believe there is some easier way to join lines and clear it from different separators.</p>
<p>This should work:</p> <pre><code>data = [] with open(&quot;7HA03_thermal_final_filled.txt&quot;) as f: content = f.readlines() for i in range(1, len(content)+1): if i % 2 == 0: first_line = content[i-2].strip() first_line = &quot;&quot;.join(first_line.split()) second_line = content[i-1].strip() second_line = &quot; &quot;.join(second_line.split()) second_line_modified = &quot;,&quot;.join(x for x in second_line.strip().split(' ')) data.append(f'{first_line},{second_line_modified}') df = pd.DataFrame([string.split(&quot;,&quot;) for string in data]) </code></pre>
python|pandas
0
5,058
71,354,133
Python how to round Timestamp object to the previous full hour
<p>Hi have a list of timestamp objects:</p> <pre><code>Timestamp('2021-07-07 10:00:03'), Timestamp('2021-07-07 10:02:13'), Timestamp('2021-03-07 12:40:24') </code></pre> <p>And I want to round each element at the hour level, to get:</p> <pre><code>Timestamp('2021-07-07 10:00:00'), Timestamp('2021-07-07 10:00:00'), Timestamp('2021-03-07 12:00:00') </code></pre> <p>The type of each element is</p> <blockquote> <p>Pandas Timestamp (pandas._libs.tslibs.timestamps.Timestamp)</p> </blockquote> <p>. What is the best way to do so?</p>
<p>Given</p> <pre><code>&gt;&gt;&gt; df time 0 2021-07-07 10:00:03 1 2021-07-07 10:02:13 2 2021-03-07 12:40:24 </code></pre> <p>Use</p> <pre><code>&gt;&gt;&gt; df['time'] = df['time'].dt.floor('1h') &gt;&gt;&gt; df time 0 2021-07-07 10:00:00 1 2021-07-07 10:00:00 2 2021-03-07 12:00:00 </code></pre>
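<p>If the timestamps are held in a plain Python list rather than a DataFrame column (the list has no name in the question, so call it <code>timestamps</code>), each pandas Timestamp has the same <code>floor</code> method:</p> <pre><code>rounded = [ts.floor('1h') for ts in timestamps]
</code></pre>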
python|pandas|dataframe|datetime|timestamp
0
5,059
71,153,198
How to find a word with letters in specific places within a dataframe - Jupyter
<p>I am trying to find words with letters in specific positions within my dataframe. My dataframe is a list of all 5 letter words in English in all lower-case and no special characters (i.e. only alpha characters).</p> <p>df = list of 5 letter words</p> <p>word = column of words</p> <p>Code:</p> <pre><code>firstLetter = input('First Letter = ') secondLetter = input('Second Letter = ') thirdLetter = input('Third Letter = ') fourthLetter = input('Fourth Letter = ') fifthLetter = input('Fifth Letter = ') total = str(firstLetter)+str(secondLetter)+str(thirdLetter)+str(fourthLetter)+str(fifthLetter) df[df['word'].str.contains(total)]['word'] </code></pre> <p>This will find all words that contain user inputed letters in the order inputed. While useful, it's not exactly what I'd like to do. How would I search for words that only contain letters in specific positions and print that list. For example:</p> <pre><code>First letter = t Second Letter = r Third Letter = Fourth Letter = i Fifth letter = n Out: Train </code></pre> <p>I'm quite new to both python and jupyter and I would like to thank you for your help in advance.</p>
<p>Here is necessary return Trues if no value is typing in input (empty string), so mask for test values by positions is:</p> <pre><code>firstLetter = input('First Letter = ') secondLetter = input('Second Letter = ') thirdLetter = input('Third Letter = ') fourthLetter = input('Fourth Letter = ') fifthLetter = input('Fifth Letter = ') m1 = df['word'].str[0].eq(firstLetter) | (not bool(firstLetter)) m2 = df['word'].str[1].eq(secondLetter) | (not bool(secondLetter)) m3 = df['word'].str[2].eq(thirdLetter) | (not bool(thirdLetter)) m4 = df['word'].str[3].eq(fourthLetter) | (not bool(fourthLetter)) m5 = df['word'].str[4].eq(fifthLetter) | (not bool(fifthLetter)) s = df.loc[m1 &amp; m2 &amp; m3 &amp; m4 &amp; m5, 'word'] </code></pre> <p>Or is possible create more general solution from above:</p> <pre><code>firstLetter = input('First Letter = ') secondLetter = input('Second Letter = ') thirdLetter = input('Third Letter = ') fourthLetter = input('Fourth Letter = ') fifthLetter = input('Fifth Letter = ') tup = (firstLetter, secondLetter, thirdLetter, fourthLetter, fifthLetter) m = [df['word'].str[i].eq(v) | (not bool(v)) for i, v in enumerate(tup)] s = df.loc[np.logical_and.reduce(m), 'word'] </code></pre> <p><strong>Test</strong>:</p> <pre><code>print (df) word 0 train 1 yrasn firstLetter = input('First Letter = ') secondLetter = input('Second Letter = ') thirdLetter = input('Third Letter = ') fourthLetter = input('Fourth Letter = ') fifthLetter = input('Fifth Letter = ') First Letter = t Second Letter = r Third Letter = Fourth Letter = i Fifth Letter = n tup = (firstLetter, secondLetter, thirdLetter, fourthLetter, fifthLetter) print (tup) ('t', 'r', '', 'i', 'n') m = [df['word'].str[i].eq(v) | (not bool(v)) for i, v in enumerate(tup)] s = df.loc[np.logical_and.reduce(m), 'word'] print (s) 0 train Name: word, dtype: object </code></pre>
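<p>An alternative that avoids building five separate masks is to assemble a regular expression in which every empty input becomes a <code>.</code> wildcard and then filter with <code>str.match</code>; a sketch using the same <code>tup</code> of five inputs:</p> <pre><code>import re

pattern = '^' + ''.join(re.escape(ch) if ch else '.' for ch in tup) + '$'
s = df.loc[df['word'].str.match(pattern), 'word']
</code></pre>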
python|pandas|jupyter-notebook
2
5,060
60,731,718
Pandas group by and filter
<p>I have the following .csv</p> <pre><code> Name Location Product Type number Greg 1 Fruit grape 1 Greg 1 Fruit apple 2 Greg 1 Bakery bread 5 Greg 1 Bakery roll 8 Greg 2 Fruit grape 7 Greg 2 Fruit apple 1 Greg 3 Fruit grape 2 Greg 4 Bakery roll 3 Greg 4 Bakery bread 4 Sam 5 Fruit apple 7 Sam 5 Fruit grape 9 Sam 5 Fruit apple 10 Sam 6 Bakery roll 11 Sam 6 Bakery bread 12 Sam 7 Fruit orange 13 Sam 7 Bakery roll 14 Tim 8 Fruit bread 16 Zack 9 Bakery roll 17 Zack 10 Fruit apple 19 Zack 10 Fruit grape 20 </code></pre> <p>I would like to put this into pandas and group by name, location where there is more than one location with more than two products. I would still want to maintain the 'number' for the products</p> <p>So something Like this as an example since Greg at location 1 has two products</p> <pre><code>name location product type Greg 1 Fruit, bakery grape,apple,bread,roll </code></pre> <p>I am struggling with the groupby and ultimately getting this back to a data frame that I could .to_csv </p>
<p>IIUC use <code>transform</code> with <code>nunique</code> </p> <pre><code>df1=df[df.groupby(['Name','Location']).Product.transform('nunique')&gt;1] Name Location Product Type number 0 Greg 1 Fruit grape 1 1 Greg 1 Fruit apple 2 2 Greg 1 Bakery bread 5 3 Greg 1 Bakery roll 8 14 Sam 7 Fruit orange 13 15 Sam 7 Bakery roll 14 </code></pre>
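<p>To also collapse the remaining rows into the comma-separated form shown in the desired output, the filtered frame can be aggregated per <code>Name</code>/<code>Location</code>, joining the distinct values; a sketch:</p> <pre><code>out = (df1.groupby(['Name', 'Location'], as_index=False)
          .agg({'Product': lambda s: ', '.join(s.unique()),
                'Type': lambda s: ', '.join(s.unique())}))
</code></pre>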
python|pandas
1
5,061
72,527,661
How to merge 2 rows into 1 row with 2 different columns?
<p>I would like to merge 2 or 3 row into 1 row like below.</p> <p>Original chart:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>CaseNr</th> <th>AccType</th> <th>Vehicle</th> <th>Model</th> <th>VehicleMass</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>PassengerCar</td> <td>A1</td> <td>1217</td> </tr> <tr> <td>A</td> <td>1</td> <td>Train</td> <td>Train</td> <td>99999</td> </tr> <tr> <td>B</td> <td>2</td> <td>PassengerCar</td> <td>B7</td> <td>1400</td> </tr> <tr> <td>B</td> <td>2</td> <td>Train</td> <td>Train</td> <td>99999</td> </tr> <tr> <td>C</td> <td>3</td> <td>PassengerCar</td> <td>C2</td> <td>1295</td> </tr> </tbody> </table> </div> <p>Modified chart that I want:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>CaseNr</th> <th>AccType</th> <th>Vehicle_1</th> <th>Vehicle_2</th> <th>Model_1</th> <th>Model_2</th> <th>VehicleMass_1</th> <th>Vehicle Mass_2</th> <th>MainFac</th> </tr> </thead> <tbody> <tr> <td>A</td> <td>1</td> <td>PassengerCar</td> <td>Train</td> <td>A1</td> <td>Train</td> <td>1217</td> <td>99999</td> <td>5</td> </tr> <tr> <td>B</td> <td>2</td> <td>PassengerCar</td> <td>Train</td> <td>B7</td> <td>Train</td> <td>1400</td> <td>99999</td> <td>6</td> </tr> <tr> <td>C</td> <td>3</td> <td>PassengerCar</td> <td>NaN</td> <td>C2</td> <td>NaN</td> <td>1295</td> <td>NaN</td> <td>2</td> </tr> </tbody> </table> </div> <p>As you could notice the chart, the data field include vehicle crash information. Therefore, 1, 2, or 3 vehicles could be involved in one accident case.</p> <p>I searched all <code>merge</code> options but I could not find how to do this.</p>
<p>Use <code>pivot</code> to reshape your data. This is called reshaping to wide and not merging.</p> <pre><code>df1 = df.assign(name = df.groupby('CaseNr').cumcount() + 1).pivot(['CaseNr', 'AccType'], 'name') df1.columns = df1.columns.map(lambda x:f'{x[0]}_{x[1]}') df1.reset_index() CaseNr AccType Vehicle_1 ... Model_2 VehicleMass_1 VehicleMass_2 0 A 1 PassengerCar ... Train 1217.0 99999.0 1 B 2 PassengerCar ... Train 1400.0 99999.0 2 C 3 PassengerCar ... NaN 1295.0 NaN </code></pre> <p>If you have <code>janitor</code> installed you can do:</p> <pre><code>import janitor df.assign(name = df.groupby('CaseNr').cumcount() + 1).pivot_wider(['CaseNr', 'AccType'], 'name') CaseNr AccType Vehicle_1 ... Model_2 VehicleMass_1 VehicleMass_2 0 A 1 PassengerCar ... Train 1217.0 99999.0 1 B 2 PassengerCar ... Train 1400.0 99999.0 2 C 3 PassengerCar ... NaN 1295.0 NaN </code></pre>
python|pandas|dataframe|merge|concatenation
1
5,062
59,481,023
Seaborn: how to get labels nicely centered in a column?
<p>This is the code of my plot:</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt import pandas as pd import seaborn as sns data = pd.read_csv('titanic.csv') ax = sns.countplot(x='Survived', hue='Pclass', data=data, palette="pastel") ax.set_title("Survival in terms of Pclass", fontsize=20) for p in ax.patches: ax.annotate(f'\n{p.get_height()}', (p.get_x()+0.2, p.get_height()), ha='center', va='top', color='white', size=18) </code></pre> <p>This is my plot: <a href="https://i.stack.imgur.com/EqFmx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EqFmx.png" alt="resulting plot"></a></p> <p>How can I move the values inside the graph columns nicely in the middle of each column? Now, for example number 119 is slightly cut. </p>
<p>What you want is to have the text centered with respect to the patch. You already have the x-position. The width of the patch is <code>p.get_width()</code>. Just divide it by 2 and add it to the x-position to get the center.</p> <p>You can add <code>plt.tight_layout()</code> to nicely fit the plot and its labels into the image. <code>ax.set_xticklabels(['No', 'Yes'])</code> sets the labels of the x-axis.</p> <p>Using the Titanic dataset:</p> <pre class="lang-py prettyprint-override"><code>from matplotlib import pyplot as plt import seaborn as sns data = sns.load_dataset('titanic') ax = sns.countplot(x='survived', hue='pclass', data=data, palette=&quot;plasma&quot;) ax.set_title(&quot;Survival in terms of Pclass&quot;, fontsize=20) ax.set_xticklabels(['No', 'Yes']) for p in ax.patches: ax.annotate(f'\n{p.get_height()}', (p.get_x() + p.get_width() / 2, p.get_height()), ha='center', va='top', color='white', size=14) plt.tight_layout() plt.show() </code></pre> <p><a href="https://i.stack.imgur.com/D5Kyd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D5Kyd.png" alt="the result" /></a></p>
python|pandas|seaborn
4
5,063
59,831,473
How to loop over dataframe and remove rows?
<p>I'm trying to loop over a dataframe and remove rows where the value in the 'player_fifa_api_id' column is equal to the value in the previous row. For some reason, my code isnt working:</p> <pre><code>for i in range(0,len(test)-1): print("{} lines out of {} processed".format(i,len(test))) if test['player_fifa_api_id'].iloc[i+1] == test['player_fifa_api_id'].iloc[i]: test.drop(test.index[i]) </code></pre> <p>Does anyone know where im going wrong? Here's a screenshot of the format of the dataframe<a href="https://i.stack.imgur.com/WtrRV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WtrRV.png" alt="enter image description here"></a></p>
<p>You should avoid looping through a dataframe. There is often much faster and more elegant solutions using vectorized functions. In your case, filter for the rows you want:</p> <pre><code>player_id = test['player_fifa_api_id'] # if the current row is not equal to the previous row, then keep the current row keep = player_id != player_id.shift() # filter for the rows you want to keep result = test[keep] </code></pre>
python|pandas|dataframe
3
5,064
59,837,519
Pandas read_excel - returning nan for cells having formula
<p>I have an excel file that contains accounting data and also the file use formula for some of the cells. When I use pandas read_excel to read the values in the file, it returns <code>nan</code> value for cells having formula's. I have also used openpyxl, but still having the same issue.</p> <p>Is there any way to read values instead of formula for cells having formula.</p> <p>I have also attached the xlsx file used. </p> <p><a href="https://drive.google.com/file/d/1aOTYwtKTrqyjF16vDizzzvMkUsvdgRe1/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1aOTYwtKTrqyjF16vDizzzvMkUsvdgRe1/view?usp=sharing</a></p> <p>Thanks...</p>
<p>Before working with your Excel sheet, make sure the file's permissions are set to read and write; if it is a read-only file, change it to read-write.</p> <pre><code>import pandas as pd your_data = pd.read_excel('yourfile.xlsx',sheet_name='your_sheet_name') print(your_data) #checking your_data.dropna(inplace=True) your_data.rename(columns = {'Unnamed: 1':'Total'},inplace =True) print(your_data['Total'].tolist()) #The column name where your formula is being calculated. </code></pre>
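<p>Another route that is often used for this particular symptom: opening the workbook with openpyxl's <code>data_only=True</code> returns the values Excel last cached for formula cells instead of the formula strings. This only works if the file was last saved by Excel (or another tool that stores cached results); a hedged sketch, with the file and sheet names as placeholders:</p> <pre><code>import pandas as pd
from openpyxl import load_workbook

wb = load_workbook('yourfile.xlsx', data_only=True)   # cached formula results, not formulas
ws = wb['your_sheet_name']

rows = ws.values
header = next(rows)
df = pd.DataFrame(rows, columns=header)
</code></pre>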
python|pandas
2
5,065
59,874,176
TF_NewTensor Segmentation Fault: Possible Bug?
<p>I'm using Tensorflow 2.1 git master branch (commit id:db8a74a737cc735bb2a4800731d21f2de6d04961) and compile it locally. Playing around with the C API to call <code>TF_LoadSessionFromSavedModel</code> but seems to get segmentation fault. I've managed to drill down the error in the sample code below.</p> <p><code>TF_NewTensor</code> call is crashing and causing a segmentation fault.</p> <pre class="lang-cpp prettyprint-override"><code> int main() { TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*1); int ndims = 1; int64_t* dims = malloc(sizeof(int64_t)); int ndata = sizeof(int32_t); int32_t* data = malloc(sizeof(int32_t)); dims[0] = 1; data[0] = 10; // Crash on the next line TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL); if(int_tensor == NULL) { printf("ERROR"); } else { printf("OK"); } return 0; } </code></pre> <p>However, when i move the <code>TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*1);</code> after the <code>TF_NewTensor</code> call, it doesn't crash. Like below:</p> <pre class="lang-cpp prettyprint-override"><code> int main() { int ndims = 1; int64_t* dims = malloc(sizeof(int64_t)); int ndata = sizeof(int32_t); int32_t* data = malloc(sizeof(int32_t)); dims[0] = 1; data[0] = 10; // NO more crash TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL); if(int_tensor == NULL) { printf("ERROR"); } else { printf("OK"); } TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*)*1); return 0; } </code></pre> <p>Is it a possible bug or I'm using it wrong? I don't understand how <code>malloc</code>q an independent variable could cause a segmentation fault.</p> <p>can anybody reproduce?</p> <p>I'm using gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008 to compile.</p> <p><strong>UPDATE:</strong></p> <p>can be further simplified the error as below. This is even without the <code>InputValues</code> being allocated.</p> <pre class="lang-cpp prettyprint-override"><code> #include &lt;stdlib.h&gt; #include &lt;stdio.h&gt; #include "tensorflow/c/c_api.h" int main() { int ndims = 1; int ndata = 1; int64_t dims[] = { 1 }; int32_t data[] = { 10 }; TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL); if(int_tensor == NULL) { printf("ERROR Tensor"); } else { printf("OK"); } return 0; } </code></pre> <p>compile with</p> <p><code>gcc -I&lt;tensorflow_path&gt;/include/ -L&lt;tensorflow_path&gt;/lib test.c -ltensorflow -o test2.out</code></p> <p><strong>Solution</strong></p> <p>As point up by Raz, pass empty <code>deallocater</code> instead of NULL, and <code>ndata</code> should be size in terms of byte. </p> <pre class="lang-cpp prettyprint-override"><code>#include "tensorflow/c/c_api.h" void NoOpDeallocator(void* data, size_t a, void* b) {} int main(){ int ndims = 2; int64_t dims[] = {1,1}; int64_t data[] = {20}; int ndata = sizeof(int64_t); // This is tricky, it number of bytes not number of element TF_Tensor* int_tensor = TF_NewTensor(TF_INT64, dims, ndims, data, ndata, &amp;NoOpDeallocator, 0); if (int_tensor != NULL)\ printf("TF_NewTensor is OK\n"); else printf("ERROR: Failed TF_NewTensor\n"); } </code></pre> <p>checkout my Github on full code of running/compile TensorFlow's C API <a href="https://github.com/AmirulOm/tensorflow_capi_sample" rel="nofollow noreferrer">here</a></p>
<p>You set <code>ndata</code> to be <code>sizeof(int32_t)</code>, which is 4. Your <code>ndata</code> is passed as the <code>len</code> argument to <code>TF_NewTensor()</code>, which represents the number of elements in <code>data</code> (can be seen in <a href="https://github.com/tensorflow/tensorflow/blob/v2.1.0/tensorflow/c/tf_tensor.h#L77-L80" rel="nofollow noreferrer">GitHub</a>). Therefore, it should be set to 1 in your example, as you have a single element.</p> <p>By the way, you can avoid using <code>malloc()</code> here (as you don't check its return values, and this may be error-prone and less elegant in general) and just use local variables instead.</p> <p><strong>UPDATE</strong></p> <p>In addition, you pass <code>NULL</code> both for <code>deallocator</code> and <code>deallocator_arg</code>. I'm pretty sure this is the issue, as the comment states <em>"Clients must provide a custom deallocator function..."</em> (can be seen <a href="https://github.com/tensorflow/tensorflow/blob/v2.1.0/tensorflow/c/tf_tensor.h#L71" rel="nofollow noreferrer">here</a>). The <code>deallocator</code> is called by <code>TF_NewTensor()</code> (can be seen <a href="https://github.com/tensorflow/tensorflow/blob/v2.1.0/tensorflow/c/tf_tensor.cc#L134" rel="nofollow noreferrer">here</a>), and this may be the cause of the segmentation fault.</p> <p>So, summing it all up, try the following code:</p> <pre><code>void my_deallocator(void * data, size_t len, void * arg)
{
    printf("Deallocator called with data %p\n", data);
}

void main()
{
    int64_t dims[] = { 1 };
    int32_t data[] = { 10 };

    ... = TF_NewTensor(TF_INT32, dims, /*num_dims=*/ 1, data, /*len=*/ 1, my_deallocator, NULL);
}
</code></pre>
c++|c|linux|tensorflow|tensorflow-c++
3
5,066
40,656,632
Add successive rows in Pandas if they match on some columns
<p>I have a dataframe like the following one:</p> <pre><code>ID URL seconds 1 Email 9 1 Email 3 1 App 5 1 App 9 1 Faceboook 50 1 Faceboook 7 1 Faceboook 39 1 Faceboook 10 1 Email 39 1 Email 5 1 Email 57 1 Faceboook 7 1 Faceboook 32 1 Faceboook 3 2 App 11 2 App 10 2 Email 56 2 Faceboook 9 2 Faceboook 46 2 Faceboook 16 2 Email 21 </code></pre> <p>I want to sum the 'seconds' column for successive views of the same URL by the same ID. That's the result I'm looking for:</p> <pre><code>ID URL seconds 1 Email 12 1 App 14 1 Faceboook 106 1 Email 101 1 Faceboook 42 2 App 21 2 Email 56 2 Faceboook 71 2 Email 21 </code></pre> <p><code>df.groupBy(['ID', 'URL']).sum()</code> would not work in this case as it would sum all cases of the same URL for the same ID, not only the successive ones.</p> <p>Any ideas?</p>
<p>You can use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby</code></a> by <code>Series</code> created by compare by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.ne.html" rel="nofollow noreferrer"><code>ne</code></a> column <code>URL</code> and shifted, last use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.cumsum.html" rel="nofollow noreferrer"><code>cumsum</code></a> with <code>boolean mask</code>:</p> <pre><code>print ((df.URL.ne(df.URL.shift())).cumsum()) 0 1 1 1 2 2 3 2 4 3 5 3 6 3 7 3 8 4 9 4 10 4 11 5 12 5 13 5 14 6 15 6 16 7 17 8 18 8 19 8 20 9 Name: URL, dtype: int32 </code></pre> <pre><code>print (df['seconds'].groupby([(df.URL.ne(df.URL.shift())).cumsum(), df.ID, df.URL]).sum()) URL ID URL 1 1 Email 12 2 1 App 14 3 1 Faceboook 106 4 1 Email 101 5 1 Faceboook 42 6 2 App 21 7 2 Email 56 8 2 Faceboook 71 9 2 Email 21 Name: seconds, dtype: int64 print (df['seconds'].groupby([(df.URL.ne(df.URL.shift())).cumsum(), df.ID, df.URL]) .sum() .reset_index(level=0, drop=True) .reset_index()) ID URL seconds 0 1 Email 12 1 1 App 14 2 1 Faceboook 106 3 1 Email 101 4 1 Faceboook 42 5 2 App 21 6 2 Email 56 7 2 Faceboook 71 8 2 Email 21 </code></pre>
python|pandas
3
5,067
40,725,188
How to calculate the mean of n consecutive columns?
<p>I have a dataframe like this:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A_1': [1, 2], 'A_2': [3, 4], 'A_3': [5, 6], 'A_4': [7, 8], 'B_1': [0, 2], 'B_2': [4, 4], 'B_3': [9, 6], 'B_4': [5, 8]}) A_1 A_2 A_3 A_4 B_1 B_2 B_3 B_4 0 1 3 5 7 0 4 9 5 1 2 4 6 8 2 4 6 8 </code></pre> <p>which I would like to convert into a dataframe that looks like this:</p> <pre><code> A_G1 A_G2 B_G1 B_G2 0 2 6 2 7 1 3 7 3 7 </code></pre> <p>Thereby, <code>A_G1</code> is the <code>mean</code> of the columns <code>A_1</code> and <code>A_2</code>, <code>A_G2</code> is the <code>mean</code> of the columns <code>A_3</code> and <code>A_4</code>; the same applies to <code>B_G1</code> and <code>B_G2</code>. So what I would like to do is to calculate the mean of two consecutive columns and add the result as a new column into a dataframe.</p> <p>A straightforward implementation could look like this:</p> <pre><code>res_df = pd.DataFrame() for i in range(0, len(df.columns), 2): temp_df = df[[i, i + 1]].mean(axis=1) res_df = pd.concat([res_df, temp_df], axis=1) </code></pre> <p>which gives me the desired output (except for the column names):</p> <pre><code> 0 0 0 0 0 2 6 2 7 1 3 7 3 7 </code></pre> <p>Is there any better way of doing this i.e. a vectorized way?</p>
<p>This might work for you:</p> <pre><code>In [15]: df.rolling(window=2,axis=1).mean().iloc[:,1::2] Out[15]: A_2 A_4 B_2 B_4 0 2.0 6.0 2.0 7.0 1 3.0 7.0 3.0 7.0 </code></pre> <p>But I haven't tested it against your "straightforward" implementation.</p>
python|performance|pandas|optimization|vectorization
4
5,068
40,515,418
Filter columns based on a value (Pandas): TypeError: Could not compare ['a'] with block values
<p>I'm trying filter a DataFrame columns based on a value.</p> <pre><code>In[41]: df = pd.DataFrame({'A':['a',2,3,4,5], 'B':[6,7,8,9,10]}) In[42]: df Out[42]: A B 0 a 6 1 2 7 2 3 8 3 4 9 4 5 10 </code></pre> <p>Filtering columns:</p> <pre><code>In[43]: df.loc[:, (df != 6).iloc[0]] Out[43]: A 0 a 1 2 2 3 3 4 4 5 </code></pre> <p>It works! But, When I used strings,</p> <pre><code>In[44]: df.loc[:, (df != 'a').iloc[0]] </code></pre> <p>I'm getting this error: <code>TypeError: Could not compare ['a'] with block values</code></p>
<p>You are trying to compare the string 'a' with numeric values in column B.</p> <p>If you want your code to work, first promote the dtype of column B to object; then it will work.</p> <pre><code>df.B = df.B.astype(np.object)
</code></pre> <p>Always check the data types of the columns before performing operations, using</p> <pre><code>df.info()
</code></pre>
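<p>As a quick end-to-end check (a sketch built from the example data in the question; not verified against every pandas version), the same filter works for both values once column B holds objects:</p> <pre><code>import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['a', 2, 3, 4, 5], 'B': [6, 7, 8, 9, 10]})

# promote B so that the element-wise comparison with a string is allowed
df.B = df.B.astype(object)

print(df.loc[:, (df != 6).iloc[0]])    # filters on a number
print(df.loc[:, (df != 'a').iloc[0]])  # filters on a string, no TypeError
</code></pre>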
python-3.x|pandas
1
5,069
40,556,159
Read_CSV file faster
<p>I'm having a bit of trouble reading a 203 MB file quickly into a pandas dataframe. I want to know if there is a faster way to do this. Below is my function: </p> <pre><code>import pandas as pd
import numpy as np

def file(filename):
    df = pd.read_csv(filename, header=None, sep='delimiter', engine='python', skiprows=1)
    df = pd.DataFrame(df[0].str.split(',').tolist())
    df = df.drop(df.columns[range(4,70)], axis=1)
    df.columns = ['time','id1','id2','amount']
    return df
</code></pre> <p>When I used the magic <code>%timeit</code> function, it took about 6 seconds to read the file and load it into the Python notebook. What can I do to speed this up? </p> <p>Thanks! </p>
<p><strong>UPDATE:</strong> looking at your logic, you don't seem to need <code>sep='delimiter'</code> at all, as you will use (split) only the first (index=0) column, so you can simply do this:</p> <pre><code>df = pd.read_csv(filename, header=None, usecols=[0,1,2,3],
                 names=['time','id1','id2','amount'],
                 skipinitialspace=True, skiprows=1)
</code></pre> <p>PS: by default <code>read_csv()</code> will use the <code>C</code> engine, which is faster, if <code>sep</code> is not longer than 1 character or if it's <code>\s+</code>.</p> <p><strong>OLD answer:</strong></p> <p>First of all, don't read columns that you don't need (or those that you are going to drop: <code>df.drop(df.columns[range(4,70)], axis=1)</code>):</p> <pre><code>df = pd.read_csv(filename, header=None, usecols=[0], names=['txt'],
                 sep='delimiter', skiprows=1)
</code></pre> <p>then split the single parsed column into four:</p> <pre><code>df[['time','id1','id2','amount']] = df.pop('txt').str.split(',', expand=True)
</code></pre> <p>PS: I would strongly recommend converting your data to HDF5 format (if you can) and forgetting about all those problems with CSV files ;)</p>
python|csv|pandas|dataframe|data-science
2
5,070
61,635,934
Python /Pandas / pd.to_datetime
<p>I want to convert date column (datetime64[ns]) into day-month-year format with this code , </p> <pre><code>pd.to_datetime(final['Date'],format = ('%d-%b-%Y')) </code></pre> <p>but it remains the same:</p> <pre><code>0 2019-12-31 1 2020-01-01 2 2020-01-02 3 2020-01-03 4 2020-01-04 ... 112 2020-04-21 113 2020-04-22 114 2020-04-23 115 2020-04-24 116 2020-04-25 </code></pre>
<p>You can use </p> <pre><code>df['Date'] = df['Date'].dt.strftime('%d-%m-%Y')
</code></pre> <p>Also, are you spelling the month directive wrong? Your format says <code>%b</code> (abbreviated month name) instead of <code>%m</code> (month number).</p>
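<p>To make the difference explicit (a small sketch with the column name from the question and assumed sample values): the <code>format</code> argument of <code>pd.to_datetime</code> describes how the <em>input</em> strings look, while <code>dt.strftime</code> controls how the <em>output</em> is displayed.</p> <pre><code>import pandas as pd

final = pd.DataFrame({'Date': ['2019-12-31', '2020-01-01', '2020-01-02']})

# parse the input (which is year-month-day here)
final['Date'] = pd.to_datetime(final['Date'], format='%Y-%m-%d')

# format the output as day-month-year strings
final['Date'] = final['Date'].dt.strftime('%d-%m-%Y')
print(final['Date'].tolist())  # ['31-12-2019', '01-01-2020', '02-01-2020']
</code></pre>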
python|pandas|datetime
0
5,071
57,910,344
Plot horizontal duration with pandas
<p>I'm trying to create a horizontal graph that would illustrate duration of processes. Here's my sample data:</p> <p><a href="https://i.stack.imgur.com/7tvvM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7tvvM.png" alt="enter image description here"></a></p> <p>Some code to put in Jupyter Notebook:</p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import matplotlib.dates as dt import seaborn as sns df = pd.DataFrame( { 'PROC_NAME': ['data_load', 'data_send', 'data_load', 'data_send', 'data_load', 'data_send', 'data_load', 'data_send'], 'START_TS': ['2019-06-25 03:30', '2019-06-25 07:15', '2019-06-26 03:30', '2019-06-26 07:19', '2019-06-26 08:54', '2019-06-27 03:30', '2019-06-27 08:51', '2019-06-28 03:30'], 'END_TS': ['2019-06-25 03:51', '2019-06-25 07:52', '2019-06-26 03:40', '2019-06-26 07:43', '2019-06-26 09:21', '2019-06-27 04:16', '2019-06-27 09:32', '2019-06-28 04:02'] }) df.head() </code></pre> <p>I'd like to create a horizontal bar chart that would illustrate the run durations per day, like:</p> <p><a href="https://i.stack.imgur.com/xLv70.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xLv70.png" alt="enter image description here"></a> [RIGHT]</p> <p>So it should be bit like Gantt-chart, but with just one line per process with multiple bars in a line. A Gantt-chart would put each instance in a separate line - and this is not something I'd like to achieve:</p> <p><a href="https://i.stack.imgur.com/n9r69.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n9r69.png" alt="enter image description here"></a> [WRONG]</p> <p>I'd appreciate your help.</p>
<p>Got it! Big thanks to @jdhao for this <a href="https://stackoverflow.com/a/50444098/2834065">answer</a>. (C'mon, check it out and upvote!)</p> <p>Here's the code for the source data again - I've added some more data to improve the example:</p> <pre><code>Id | PROC_NAME       | START_TS            | END_TS
---------------------------------------------------------------------
0  | data_load       | 2019-06-25 03:30:00 | 2019-06-25 03:51:00
1  | data_send       | 2019-06-25 07:15:00 | 2019-06-25 07:52:00
2  | data_load       | 2019-06-26 03:30:00 | 2019-06-26 03:40:00
3  | data_send       | 2019-06-26 07:19:00 | 2019-06-26 07:43:00
4  | data_load       | 2019-06-26 08:54:00 | 2019-06-26 09:21:00
5  | data_send       | 2019-06-27 03:30:00 | 2019-06-27 04:16:00
6  | data_load       | 2019-06-27 08:51:00 | 2019-06-27 09:32:00
7  | data_send       | 2019-06-28 03:30:00 | 2019-06-28 04:02:00
8  | data_extraction | 2019-06-25 03:21:00 | 2019-06-25 03:51:00
9  | data_extraction | 2019-06-25 06:45:00 | 2019-06-25 07:32:00
10 | data_extraction | 2019-06-26 03:30:00 | 2019-06-26 06:40:00
11 | data_extraction | 2019-06-26 07:19:00 | 2019-06-26 07:43:00
12 | data_extraction | 2019-06-26 10:54:00 | 2019-06-26 11:21:00
13 | data_extraction | 2019-06-27 05:30:00 | 2019-06-27 08:16:00
14 | data_extraction | 2019-06-27 09:51:00 | 2019-06-27 11:32:00
15 | data_extraction | 2019-06-28 02:30:00 | 2019-06-28 04:02:00
</code></pre> <p>Here's the code for Jupyter:</p> <pre><code>import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as dt

df = pd.DataFrame(
    {
        'PROC_NAME': ['data_load', 'data_send', 'data_load', 'data_send',
                      'data_load', 'data_send', 'data_load', 'data_send',
                      'data_extraction', 'data_extraction', 'data_extraction', 'data_extraction',
                      'data_extraction', 'data_extraction', 'data_extraction', 'data_extraction',],
        'START_TS': ['2019-06-25 03:30', '2019-06-25 07:15', '2019-06-26 03:30', '2019-06-26 07:19',
                     '2019-06-26 08:54', '2019-06-27 03:30', '2019-06-27 08:51', '2019-06-28 03:30',
                     '2019-06-25 03:21', '2019-06-25 06:45', '2019-06-26 03:30', '2019-06-26 07:19',
                     '2019-06-26 10:54', '2019-06-27 05:30', '2019-06-27 09:51', '2019-06-28 02:30'],
        'END_TS': ['2019-06-25 03:51', '2019-06-25 07:52', '2019-06-26 03:40', '2019-06-26 07:43',
                   '2019-06-26 09:21', '2019-06-27 04:16', '2019-06-27 09:32', '2019-06-28 04:02',
                   '2019-06-25 03:51', '2019-06-25 07:32', '2019-06-26 06:40', '2019-06-26 07:43',
                   '2019-06-26 11:21', '2019-06-27 08:16', '2019-06-27 11:32', '2019-06-28 04:02']
    })

# convert input to datetime:
df.START_TS = pd.to_datetime(df.START_TS, format = '%Y-%m-%d %H:%M')
df.END_TS = pd.to_datetime(df.END_TS, format = '%Y-%m-%d %H:%M')
df.head()
</code></pre> <p>And the solution to my problem, using <code>pyplot.hlines</code>:</p> <pre><code>fig = plt.figure()
fig.set_figheight(2)
fig.set_figwidth(15)

ax = fig.add_subplot(211)
plt.xticks(rotation='25')

# format dates on x axis
ax.xaxis.set_major_formatter(dt.DateFormatter('%b %d %H:%M'))
ax = ax.xaxis_date()
ax = plt.hlines(df.PROC_NAME,
                dt.date2num(df.START_TS),
                dt.date2num(df.END_TS),
                lw = 10,     # make the lines wider so they look more like a ribbon
                color = 'b'  # add some color
               )
</code></pre> <p>Finally, the result, where I'm able to clearly see run times and overlaps:</p> <p><a href="https://i.stack.imgur.com/vTQ2w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vTQ2w.png" alt="enter image description here"></a></p>
python|pandas|matplotlib|charts
2
5,072
57,973,920
Failed to convert Tensorflow .pd to json
<p>Trying to convert the saved model to json for tensorflow js. Followed the example from <a href="https://github.com/tensorflow/tfjs/tree/master/tfjs-converter" rel="nofollow noreferrer">https://github.com/tensorflow/tfjs/tree/master/tfjs-converter</a></p> <p>Version: tensorflowjs 1.2.9</p> <p>Dependency versions: keras 2.2.4-tf tensorflow 1.14.0</p> <p>Ran this cmd:</p> <pre><code>tensorflowjs_converter --input_format=tf_saved_model --output_format=tfjs_graph_model --signature_name=serving_default --saved_model_tags=serve /saved_model /web_model </code></pre> <p>Having this error message while running the code:</p> <pre><code>F .\tensorflow/core/grappler/graph_view.h:332] Check failed: st.ok() Non unique node name detected: SecondStageFeatureExtractor/InceptionV2/Mixed_5c/Branch_2/Conv2d_0c_3x3/weights </code></pre>
<p>It seems that the converter only works on Google Colab without any issue. Thanks everyone for the input.</p>
tensorflow.js|tensorflowjs-converter
0
5,073
57,768,344
how to visualize columns of a dataframe python as a plot?
<p>I have a dataframe that looks like below:</p> <pre><code>DateTime ID Temperature 2019-03-01 18:36:01 3 21 2019-04-01 18:36:01 3 21 2019-18-01 08:30:01 2 18 2019-12-01 18:36:01 2 12 </code></pre> <p>I would like to visualize this as a plot, where I need the datetime in x-axis, and Temperature on the y axis with a hue of IDs, I tried the below, but i need to see the Temperature distribution for every point more clearly. Is there any other visualization technique?</p> <pre><code>x= df['DateTime'].values y= df['Temperature'].values hue=df['ID'].values plt.scatter(x, y,hue,color = "red") </code></pre>
<p>you can try:</p> <pre><code>df.set_index('DateTime').plot() </code></pre> <p>output:</p> <p><a href="https://i.stack.imgur.com/wCAgq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wCAgq.png" alt="enter image description here"></a></p> <p>or you can use: </p> <pre><code>df.set_index('DateTime').plot(style="x-", figsize=(15, 10)) </code></pre> <p>output: <a href="https://i.stack.imgur.com/n0ZAw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n0ZAw.png" alt="enter image description here"></a></p>
python|pandas|data-visualization|scatter-plot
1
5,074
57,949,435
Replacing with Nan
<p>I am trying to replace the placeholder '.' string with NaN in the total revenue column. This is the code used to create the df. </p> <pre><code>raw_data = {'Rank': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'Company': ['Microsoft', 'Oracle', "IBM", 'SAP', 'Symantec', 'EMC', 'VMware', 'HP', 'Salesforce.com', 'Intuit'], 'Company_HQ': ['USA', 'USA', 'USA', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA'], 'Software_revenue': ['$62,014', '$29,881', '$29,286', '$18,777', '$6,138', '$5,844', '$5,520', '$5,082', '$4,820', '$4,324'], 'Total_revenue': ['93,456', '38,828', '92,793', '23,289', '6,615', ".", '6,035', '110,577', '5,274', '4,573'], 'Percent_revenue_total': ['66.36%', '76.96%', '31.56%', '80.63%', '92.79%', '23.91%', '91.47%', '4.60%', '91.40%', '94.55%']} df = pd.DataFrame(raw_data, columns = ['Rank', 'Company', 'Company_HQ', 'Software_revenue', 'Total_revenue', 'Percent_revenue_total']) df </code></pre> <p>I have tried using: </p> <pre><code>import numpy as np df['Total_revenue'] = df['Total_revenue'].replace('.', np.nan, regex=True) df </code></pre> <p>However, this replaces the entire column with Nan instead of just the placeholder '.' value. </p>
<p>In my opinion "replace" is not required as user wanted to change "." Whole to nan. Inistead this will also work. It finds rows with "." And assign nan to it</p> <pre class="lang-py prettyprint-override"><code>df.loc[df['Total_revenue']==".", 'Total_revenue'] = np.nan </code></pre>
pandas
0
5,075
49,500,510
How can I do this pandas lookup with a series?
<p>I have a Series <code>S</code>:</p> <pre><code> attr first last visit andrew alexander baseline abc andrew alexander followup abc bruce alexander baseline abc bruce alexander followup xyz fuzzy dunlop baseline xyz fuzzy dunlop followup abc </code></pre> <p>and a DataFrame <code>df</code>:</p> <pre><code> abc xyz first last visit andrew alexander baseline 1 7 andrew alexander followup 2 8 bruce alexander baseline 3 9 bruce alexander followup 4 10 fuzzy dunlop baseline 5 11 fuzzy dunlop followup 6 12 </code></pre> <p>How can I get a new series <code>S2</code>, where for each index in <code>S</code>, the value is selected from <code>df</code>. If I was to use a loop, I'd do it this way:</p> <pre><code>lookup = pd.Series(index=S.index) for ix, attr in S.iteritems(): lookup.loc[ix] = df.loc[ix, attr] </code></pre> <p>Is there a more direct way to do this with a pandas function?</p> <p>The result should look like this:</p> <pre><code>first last visit andrew alexander baseline 1 andrew alexander followup 2 bruce alexander baseline 3 bruce alexander followup 10 fuzzy dunlop baseline 11 fuzzy dunlop followup 6 </code></pre>
<p>IIUC, you can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.lookup.html" rel="nofollow noreferrer">DataFrame.lookup()</a>:</p> <pre><code>In [7]: pd.Series(df.lookup(s.index, s['attr']), index=df.index) Out[7]: first last visit andrew alexander baseline 1 followup 2 bruce alexander baseline 3 followup 10 fuzzy dunlop baseline 11 followup 6 dtype: int64 </code></pre> <p>if <code>s</code> is Series (not a DataFrame):</p> <pre><code>In [10]: pd.Series(df.lookup(s.index, s), index=df.index) Out[10]: first last visit andrew alexander baseline 1 followup 2 bruce alexander baseline 3 followup 10 fuzzy dunlop baseline 11 followup 6 dtype: int64 </code></pre>
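<p>Side note: <code>DataFrame.lookup()</code> was deprecated and removed in later pandas versions. A rough equivalent using plain NumPy indexing (assuming <code>s</code> holds valid column labels of <code>df</code>, as above) could look like:</p> <pre><code>import numpy as np

rows = np.arange(len(df))
cols = df.columns.get_indexer(s)          # column positions of 'abc'/'xyz' per row
s2 = pd.Series(df.to_numpy()[rows, cols], index=df.index)
</code></pre>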
python|pandas
3
5,076
49,582,555
How to check if the argmax of a tensor is equal to any argmax of another tensor which has several equal max?
<p>So usually in single label classification, we use the following</p> <pre><code>correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(self.label, 1))
</code></pre> <p>But I am working with multi label classification, so I'd like to know how to do that when there are several ones in the label vector. What I have so far is below</p> <pre><code>a = tf.constant([0.2, 0.4, 0.3, 0.1])
b = tf.constant([0,1.0,1,0])
empty_tensor = tf.zeros([0])

for index in range(b.get_shape()[0]):
    empty_tensor = tf.cond(tf.equal(b[index], tf.constant(1, dtype = tf.float32)),
                           lambda: tf.concat([empty_tensor, tf.constant([index], dtype = tf.float32)], axis = 0),
                           lambda: empty_tensor)

temp, _ = tf.setdiff1d([tf.argmax(a)], tf.cast(empty_tensor, dtype= tf.int64))
output, _ = tf.setdiff1d([tf.argmax(a)], tf.cast(temp, dtype = tf.int64))
</code></pre> <p>So this gives me the index at which max(preds) happens and where there is a 1 in self.label. In the above example it gives [1], and if the argmaxes do not match, then I get [].</p> <p>The issue that I have is that I do not know how to proceed from there, as I would like something like the following </p> <pre><code>correct_preds = tf.equal(tf.argmax(preds, 1), tf.argmax(self.label, 1))
self.accuracy = tf.reduce_sum(tf.cast(correct_preds, tf.float32))
</code></pre> <p>which is straightforward for single label classification.</p> <p>Thanks a lot</p>
<p>I don't think you can achieve this with softmax so I am assuming that you are using sigmoids for your preds. If you are using sigmoids, your outputs will be each (independently) be between 0 and 1. You can define a threshold for each, perhaps 0.5, and then convert your sigmoid <code>preds</code> into the <code>label</code> encoding (0's and 1's) by doing <code>preds &gt; 0.5</code>.</p> <p>If prediction is [0 1] and label is [1 1], do you want to report that as completely or partially wrong? I am going to assume the former. In that case, you would remove the tf.argmax call and instead check if the <code>preds</code> and <code>label</code> are exactly the same vectors, which would look like <code>tf.reduce_all(tf.equal(preds, label), axis=0)</code>. For the latter, the code would look like <code>tf.reduce_sum(tf.equal(preds, label), axis=0)</code>.</p>
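<p>A minimal sketch of what that could look like (assuming sigmoid outputs <code>preds</code> and a 0/1 <code>label</code> tensor, both of shape <code>[batch_size, num_classes]</code>; adjust the axis to your shapes):</p> <pre><code>import tensorflow as tf

# threshold the sigmoid outputs into hard 0/1 predictions
pred_labels = tf.cast(preds &gt; 0.5, tf.float32)

# exact-match accuracy: an example counts only if every class matches
correct_preds = tf.reduce_all(tf.equal(pred_labels, label), axis=1)
accuracy = tf.reduce_mean(tf.cast(correct_preds, tf.float32))

# alternative: partial credit, i.e. the fraction of matching classes
per_class_match = tf.cast(tf.equal(pred_labels, label), tf.float32)
partial_accuracy = tf.reduce_mean(per_class_match)
</code></pre>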
tensorflow|indices|tensor|argmax
1
5,077
73,189,717
How can I transform the dataset so that one line has one id based on column value?
<p>I have this type of dataset</p> <p><img src="https://i.stack.imgur.com/v8rne.png" alt="1" /></p> <p>and want to transform it into the following dataset based on the column 't_value'</p> <p><img src="https://i.stack.imgur.com/p2qcp.png" alt="2" /></p> <p>I am pretty new to Python. I understand that we need to use a loop, but in what way? Also, I don't know how to insert nicely formatted tables with data here (I would be grateful for any help!)</p>
<p>You can use <code>pivot_table()</code> from <code>pandas</code>.</p> <p>Try:</p> <pre><code>new_df = pd.pivot_table(
    df,                     # the source dataframe
    index = &quot;id&quot;,           # grouping column
    columns = &quot;t_value&quot;,    # column whose values become new columns
    aggfunc = &quot;first&quot;
)

# merge the multiindex columns
new_df.columns = [x[0] + &quot;_&quot; + str(x[1]) for x in new_df.columns]

# reset index to retrieve the &quot;id&quot; column
new_df.reset_index(inplace=True)
</code></pre> <p>It is better than a 'for-loop'.</p> <p><a href="https://stackoverflow.com/a/73168057/13707709">My reference</a></p>
python|pandas|dataframe
1
5,078
35,277,207
Pandas Set Data From the Last Period As New DataFrame Column
<p>I have a Pandas DataFrame:</p> <pre><code>import pandas as pd df = pd.DataFrame([['A', '2014-01-01', '2014-01-07', 1.2], ['B', '2014-01-01', '2014-01-07', 2.5], ['C', '2014-01-01', '2014-01-07', 3.], ['A', '2014-01-08', '2014-01-14', 13.], ['B', '2014-01-08', '2014-01-14', 2.], ['C', '2014-01-08', '2014-01-14', 1.], ['A', '2014-01-15', '2014-01-21', 10.], ['A', '2014-01-21', '2014-01-27', 98.], ['B', '2014-01-21', '2014-01-27', -5.], ['C', '2014-01-21', '2014-01-27', -72.], ['A', '2014-01-22', '2014-01-28', 8.], ['B', '2014-01-22', '2014-01-28', 25.], ['C', '2014-01-22', '2014-01-28', -23.], ['A', '2014-01-22', '2014-02-22', 8.], ['B', '2014-01-22', '2014-02-22', 25.], ['C', '2014-01-22', '2014-02-22', -23.], ], columns=['Group', 'Start Date', 'End Date', 'Value']) </code></pre> <p>And the output looks like this:</p> <pre><code> Group Start Date End Date Value 0 A 2014-01-01 2014-01-07 1.2 1 B 2014-01-01 2014-01-07 2.5 2 C 2014-01-01 2014-01-07 3.0 3 A 2014-01-08 2014-01-14 13.0 4 B 2014-01-08 2014-01-14 2.0 5 C 2014-01-08 2014-01-14 1.0 6 A 2014-01-15 2014-01-21 10.0 7 A 2014-01-21 2014-01-27 98.0 8 B 2014-01-21 2014-01-27 -5.0 9 C 2014-01-21 2014-01-27 -72.0 10 A 2014-01-22 2014-01-28 8.0 11 B 2014-01-22 2014-01-28 25.0 12 C 2014-01-22 2014-01-28 -23.0 13 A 2014-01-22 2014-02-22 8.0 14 B 2014-01-22 2014-02-22 25.0 15 C 2014-01-22 2014-02-22 -23.0 </code></pre> <p>I am trying to <strong>add a new column with data from the same group in the previous period (if it exists)</strong>. So, the output should look like this:</p> <pre><code> Group Start Date End Date Value Last Period Value 0 A 2014-01-01 2014-01-07 1.2 NaN 1 B 2014-01-01 2014-01-07 2.5 NaN 2 C 2014-01-01 2014-01-07 3.0 NaN 3 A 2014-01-08 2014-01-14 13.0 1.2 4 B 2014-01-08 2014-01-14 2.0 2.5 5 C 2014-01-08 2014-01-14 1.0 3.0 6 A 2014-01-15 2014-01-21 10.0 13.0 7 A 2014-01-21 2014-01-27 98.0 NaN 8 B 2014-01-21 2014-01-27 -5.0 NaN 9 C 2014-01-21 2014-01-27 -72.0 NaN 10 A 2014-01-22 2014-01-28 8.0 10.0 11 B 2014-01-22 2014-01-28 25.0 NaN 12 C 2014-01-22 2014-01-28 -23.0 NaN 13 A 2014-01-22 2014-02-22 8.0 NaN 14 B 2014-01-22 2014-02-22 25.0 NaN 15 C 2014-01-22 2014-02-22 -23.0 NaN </code></pre> <p>Notice that the rows with NaN do not have a corresponding value with the same group and that is in the last period. So, rows that span 7 days (one week) need to be matched with the same row with the same group but from the previous week.</p>
<p>Suppose we compute the duration between <code>Start</code> and <code>End</code> for each row:</p> <pre><code>df['duration'] = df['End']-df['Start'] </code></pre> <p>and suppose we also compute the previous Start value based on that duration:</p> <pre><code>df['Prev'] = df['Start'] - df['duration'] - pd.Timedelta(days=1) </code></pre> <p>Then we can express the desired DataFrame as the result of a <em>merge</em> between <code>df</code> and itself where we merge rows whose <code>Group</code>, <code>duration</code> and <code>Prev</code> (in one DataFrame) match the <code>Group</code>, <code>duration</code> and <code>Start</code> (in the other DataFrame):</p> <pre><code>import pandas as pd df = pd.DataFrame([['A', '2014-01-01', '2014-01-07', 1.2], ['B', '2014-01-01', '2014-01-07', 2.5], ['C', '2014-01-01', '2014-01-07', 3.], ['A', '2014-01-08', '2014-01-14', 3.], ['B', '2014-01-08', '2014-01-14', 2.], ['C', '2014-01-08', '2014-01-14', 1.], ['A', '2014-01-15', '2014-01-21', 10.], ['A', '2014-01-21', '2014-01-27', 98.], ['B', '2014-01-21', '2014-01-27', -5.], ['C', '2014-01-21', '2014-01-27', -72.], ['A', '2014-01-22', '2014-01-28', 8.], ['B', '2014-01-22', '2014-01-28', 25.], ['C', '2014-01-22', '2014-01-28', -23.], ['A', '2014-01-22', '2014-02-22', 8.], ['B', '2014-01-22', '2014-02-22', 25.], ['C', '2014-01-22', '2014-02-22', -23.], ], columns=['Group', 'Start', 'End', 'Value']) for col in ['Start', 'End']: df[col] = pd.to_datetime(df[col]) df['duration'] = df['End']-df['Start'] df['Prev'] = df['Start'] - df['duration'] - pd.Timedelta(days=1) result = pd.merge(df, df[['Group','duration','Start','Value']], how='left', left_on=['Group','duration','Prev'], right_on=['Group','duration','Start'], suffixes=['', '_y']) result = result[['Group', 'Start', 'End', 'Value', 'Value_y']] result = result.rename(columns={'Value_y':'Prev Value'}) print(result) </code></pre> <p>yields</p> <pre><code> Group Start End Value Prev Value 0 A 2014-01-01 2014-01-07 1.2 NaN 1 B 2014-01-01 2014-01-07 2.5 NaN 2 C 2014-01-01 2014-01-07 3.0 NaN 3 A 2014-01-08 2014-01-14 3.0 1.2 4 B 2014-01-08 2014-01-14 2.0 2.5 5 C 2014-01-08 2014-01-14 1.0 3.0 6 A 2014-01-15 2014-01-21 10.0 3.0 7 A 2014-01-21 2014-01-27 98.0 NaN 8 B 2014-01-21 2014-01-27 -5.0 NaN 9 C 2014-01-21 2014-01-27 -72.0 NaN 10 A 2014-01-22 2014-01-28 8.0 10.0 11 B 2014-01-22 2014-01-28 25.0 NaN 12 C 2014-01-22 2014-01-28 -23.0 NaN 13 A 2014-01-22 2014-02-22 8.0 NaN 14 B 2014-01-22 2014-02-22 25.0 NaN 15 C 2014-01-22 2014-02-22 -23.0 NaN </code></pre> <hr> <p>In the comments, Artur Nowak asks about the time complexity of <code>pd.merge</code>. I believe it is doing a <code>O(N + M)</code> hash join where <code>N</code> is the size of the hashed table, and <code>M</code> the size of the lookup table. Here is some code to test the performance of <code>pd.merge</code> as a function of DataFrame size empirically. 
</p> <pre><code>import collections import string import timeit import numpy as np import pandas as pd from scipy import stats import matplotlib.pyplot as plt timing = collections.defaultdict(list) def make_df(ngroups, ndur, ndates): groups = list(string.uppercase[:ngroups]) durations = range(ndur) start = pd.date_range('2000-1-1', periods=ndates, freq='D') index = pd.MultiIndex.from_product([start, durations, groups], names=['Start', 'duration', 'Group']) values = np.arange(len(index)) df = pd.DataFrame({'Value': values}, index=index).reset_index() df['End'] = df['Start'] + pd.to_timedelta(df['duration'], unit='D') df = df.drop('duration', axis=1) df = df[['Group', 'Start', 'End', 'Value']] df['duration'] = df['End']-df['Start'] df['Prev'] = df['Start'] - df['duration'] - pd.Timedelta(days=1) return df def using_merge(df): result = pd.merge(df, df[['Group','duration','Start','Value']], how='left', left_on=['Group','duration','Prev'], right_on=['Group','duration','Start'], suffixes=['', '_y']) return result Ns = np.array([10**i for i in range(5)]) for n in Ns: timing['merge'].append(timeit.timeit( 'using_merge(df)', 'from __main__ import using_merge, make_df; df = make_df(10, 10, {})'.format(n), number=5)) print(timing['merge']) slope, intercept, rval, pval, stderr = stats.linregress(Ns, timing['merge']) print(slope, intercept, rval, pval, stderr) plt.plot(Ns, timing['merge'], label='merge') plt.plot(Ns, slope*Ns + intercept) plt.legend(loc='best') plt.show() </code></pre> <p>This suggests for DataFrames of tens of thousands of rows, <code>pd.merge</code>'s speed is roughly linear.</p> <p><a href="https://i.stack.imgur.com/L0Fay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L0Fay.png" alt="enter image description here"></a></p>
python|pandas
1
5,079
30,969,282
How to use geopy vincenty distance over dataframe columns?
<p>I have a dataframe with a location column which contains [lat, long] locations as follows</p> <pre><code> deviceid     location
 1102ADb75    [12.9404578177, 77.5548244743]
</code></pre> <p>How can I get the distance between consecutive rows using geopy's vincenty function? I tried the following code</p> <pre><code>from geopy.distance import vincenty

vincenty(df['location'].shift(-1), df['location']).miles
</code></pre> <p>It returns the following error - <strong><em>TypeError: __new__() takes at most 4 arguments (5 given)</em></strong></p> <p>EDIT - df is a Pandas dataframe containing deviceid &amp; location columns as shown above. Also:</p> <pre><code>print type(df)
class 'pandas.core.frame.DataFrame'
</code></pre>
<p>Based on <a href="https://github.com/geopy/geopy/blob/ba50914042dea4ee1ce45deb9ef226751faefc3b/geopy/distance.py#L301-304" rel="nofollow">geopy's github</a> you should pass two tuples to the <code>vincenty</code> function:</p> <pre><code> &gt;&gt;&gt; from geopy.distance import vincenty &gt;&gt;&gt; point_a = (41.49008, -71.312796) &gt;&gt;&gt; point_b = (41.499498, -81.695391) &gt;&gt;&gt; print(vincenty(point_a, point_b).miles) 538.3904451566326 </code></pre> <p><strong>EDIT</strong>:</p> <pre><code>import pandas as pd from geopy.distance import vincenty data = [[101, [41.49008, -71.312796]], [202, [41.499498, -81.695391]]] df = pd.DataFrame(data=data, columns=['deviceid', 'location']) print df &gt;&gt;&gt; deviceid location &gt;&gt;&gt; 0 101 [41.49008, -71.312796] &gt;&gt;&gt; 1 202 [41.499498, -81.695391] print vincenty(df['location'][0], df['location'][1]).miles &gt;&gt;&gt; 538.390445157 </code></pre>
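<p>If the goal is specifically the distance between consecutive rows, one way to extend the example above is to pair each location with the next one via <code>shift</code> (a sketch only; note that recent geopy versions replaced <code>vincenty</code> with <code>geodesic</code>, so swap the import there):</p> <pre><code>import pandas as pd
from geopy.distance import vincenty

data = [[101, [41.49008, -71.312796]],
        [202, [41.499498, -81.695391]],
        [303, [40.712776, -74.005974]]]
df = pd.DataFrame(data=data, columns=['deviceid', 'location'])

# distance from each row to the next one; the last row has no successor
df['miles_to_next'] = [
    vincenty(tuple(a), tuple(b)).miles if isinstance(b, (list, tuple)) else None
    for a, b in zip(df['location'], df['location'].shift(-1))
]
print(df)
</code></pre>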
python|pandas|dataframe|geopy
3
5,080
67,291,957
Is there a Python Function for this?
<p>Ok, So I manually wrote out a function to do this, but I am wondering if there is a built in python/pandas/numpy/... function for this. Essentially what I want is</p> <pre><code>data_col = data.loc[data['col3'] == 'a'] data_final = data_col['col2'] </code></pre> <p>But I want it for all of the values of col3. So it goes from: <a href="https://i.stack.imgur.com/hvbuY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hvbuY.png" alt="enter image description here" /></a></p> <p>to :<a href="https://i.stack.imgur.com/d0cqA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/d0cqA.png" alt="enter image description here" /></a></p> <p>Note how the values from col1 are not present. If there is no function that you can think of that does something like this, don't worry about making one. I already finished it and it suited my needs. Just curious, I haven't finished all of my courses in school yet so I am not super familiar with all of the functions.</p>
<p><strong>Code</strong></p> <pre><code>df = df.pivot_table(
    'col2', columns='col3', aggfunc=(lambda x: x.to_list())
).apply(pd.Series.explode).rename_axis(None, axis=1).reset_index(drop=True)
</code></pre> <p><strong>Output</strong></p> <pre><code>    a  b  c  d   e
0   6  7  8  9  10
1  10  9  8  7   6
</code></pre> <p><strong>Explanation</strong></p> <ol> <li>By using a pivot table we can collect all elements of col2 into lists based on the groups of col3, and then explode those lists.</li> <li>rename_axis and reset_index are used at the end to convert the column axis name and the index to the required form.</li> </ol>
python|pandas
2
5,081
67,482,379
Are individual gradients in a batch summed or averaged in a Neural Network?
<p>I am building a neural network from scratch. Currently having a batch of 32 training examples, and for each individual example, I calculate the derivatives (gradient) and sum them.</p> <p>After I sum the 32 training examples' gradients, I apply:<code>weight += d_weight * -learning rate;</code></p> <p>The question is: Should I sum (as of now) or average the 32 gradients?</p> <p>Or alternative solution:</p> <p>Should I calculate each 32 gradients for each loss output (as of now), or average the cross entropy loss outputs and then calculate a single gradient?</p> <p>I have looked at multiple sources and it's not clear what the answer is. Also the optimal learning rate in my software is lower than 0.0001 for Mnist training. That is different than the 0.01 to 0.05 that I have seen in other neural networks.</p>
<p>Well, it depends on what you want to achieve. The loss function acts as a guide to train the neural network to become better at a task.</p> <p>If we sum the cross-entropy loss outputs, we incur more loss in proportion to the batch size, since the total loss grows linearly with the mini-batch size during training.</p> <p>Whereas if we take the average, the loss is indifferent to the batch size.</p> <p>For your use case, I recommend taking the average, as that ensures that your loss function is decoupled from hyperparameters such as the aforementioned batch size.</p> <p>Another intuition is that by averaging the loss, we normalize the loss output, and that also helps stabilize training, since the network becomes less sensitive to the learning rate. If instead we use the sum, we might get exploding-gradient issues, which force us to use a much lower learning rate, thus making the network more sensitive to hyperparameter values.</p>
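<p>A toy illustration of the scaling argument (hypothetical per-example gradients for a single weight, just to show why averaging decouples the step size from the batch size):</p> <pre><code>import numpy as np

learning_rate = 0.01
grads_32  = np.random.randn(32)    # per-example gradients, batch of 32
grads_256 = np.random.randn(256)   # per-example gradients, batch of 256

# summing: the update magnitude grows with the batch size
step_sum_32  = -learning_rate * grads_32.sum()
step_sum_256 = -learning_rate * grads_256.sum()

# averaging: the update stays on the same scale for any batch size
step_mean_32  = -learning_rate * grads_32.mean()
step_mean_256 = -learning_rate * grads_256.mean()
</code></pre>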
python|tensorflow|math|neural-network
1
5,082
67,572,955
How to groupby other column and get the last date without NaT in pandas?
<p>I want to group by ID column and get the last date without NaT in pandas. When I try <code>dropna</code> , I got an error<code>Cannot access callable attribute 'dropna' of 'SeriesGroupBy' objects,try using the 'apply' method</code> If I don't drop or ignore the NaT, this will use NaT to be the last date. How can I ignore the NaT and find the last date?</p> <p>This is my code:</p> <pre><code>import pandas as pd import numpy as np # making data frame from csv file df = pd.read_csv(&quot;sample.csv&quot;,delimiter='|') df['LAST_PURCHASE_DATE'] = pd.to_datetime(df['LAST_PURCHASE_DATE'],errors = 'coerce').dt.strftime('%Y-%m-%d %H:%M:%S') df['most_recent_date'] = df.groupby(df['VIP_ID_SOURCE'])['LAST_PURCHASE_DATE'].dropna().transform('max') df['keep'] = np.where( df['most_recent_date'] == df['LAST_PURCHASE_DATE'], 'yes,jsut same tier', 'same tier,last purchase date dup by ' + df['VIP_ID_SOURCE'].astype(str) ) df['both'] = df['VIP_ID_SOURCE']+df['LAST_PURCHASE_DATE'].astype(str) df.loc[df.duplicated(subset=['both'], keep = False),'keep'] = 'same tier same last purchase date ' df = df.drop(columns = ['both','most_recent_date']) print(df) </code></pre> <p>sample csv(Null is a word not really null):</p> <pre><code>VIP_ID_SOURCE|TIER|LAST_PURCHASE_DATE|keep F08210020403|FO|2014-05-17 00:00:00|yes F08210020905|FO|2014-04-18 00:00:00|yes F08210020905|FO|Null|yes F08210020403|FO|Null|yes C01073019552|FO|2016-09-18 00:00:00|yes C01073019552|FO|2016-05-10 00:00:00|yes F08210022302|FO|Null|yes F08210022302|FO|Null|yes </code></pre> <p>expect output:</p> <pre><code>VIP_ID_SOURCE|TIER|LAST_PURCHASE_DATE|keep F08210020403|FO|2014-05-17 00:00:00|yes,jsut same tier F08210020905|FO|2014-04-18 00:00:00|yes,jsut same tier F08210020905|FO|Null|same tier,last purchase date dup by F08210020905 F08210020403|FO|Null|same tier,last purchase date dup by F08210020403 C01073019552|FO|2016-09-18 00:00:00|yes,jsut same tier C01073019552|FO|2016-05-10 00:00:00|same tier,last purchase date dup by C01073019552 F08210022302|FO|Null|same tier same last purchase date F08210022302|FO|Null|same tier same last purchase date </code></pre> <p>Any help would be very much appreciated.</p>
<p>I think there are several issues in your code.</p> <p>First, I would propose that you transform the dates in <code>LAST_PURCHASE_DATE</code> into datetimes and not into strings. Then you can apply <code>.max()</code> instead of <code>transform</code>. I assign the resulting Series to a new variable, which I join afterwards with <code>df</code> via <code>df = df.join(most_recent_date, on=&quot;VIP_ID_SOURCE&quot;)</code>.</p> <p>So the final code looks like:</p> <pre><code>import pandas as pd
import numpy as np

# making data frame from csv file
df = pd.read_csv(&quot;sample.csv&quot;, delimiter='|')

df['LAST_PURCHASE_DATE'] = pd.to_datetime(df['LAST_PURCHASE_DATE'], errors='coerce')

most_recent_date = df.groupby(df['VIP_ID_SOURCE'])['LAST_PURCHASE_DATE'].max()
most_recent_date = most_recent_date.rename(&quot;most_recent_date&quot;)
df = df.join(most_recent_date, on=&quot;VIP_ID_SOURCE&quot;)

df['keep'] = np.where(
    df['most_recent_date'] == df['LAST_PURCHASE_DATE'],
    'yes,jsut same tier',
    'same tier,last purchase date dup by ' + df['VIP_ID_SOURCE'].astype(str)
)
df['both'] = df['VIP_ID_SOURCE'] + df['LAST_PURCHASE_DATE'].astype(str)
df.loc[df.duplicated(subset=['both'], keep=False), 'keep'] = 'same tier same last purchase date '
df = df.drop(columns=['both', 'most_recent_date'])
print(df)
</code></pre>
python|pandas|dataframe|date
2
5,083
67,276,357
Pandas Error: Reading one column as python Values (Float / Int Values) and other column as numpy.float64
<p>I'm using Pandas to transform some sporting data. One column is the home team stats and the 2nd column is the away team stats.</p> <p>The stats are read from an Excel file. When I print a dictionary from the dataframe, all of the away team stats are floats (but many should be integers). When I print the type of each column's values, the first column shows up as integers and floats while all of the 2nd column consists of numpy.float64 values.</p> <p>How can I get both columns to be integer and float values?</p> <p>Here is the Python script and output:</p> <pre><code>import pandas as pd
import numpy as np

pd.options.mode.chained_assignment = None  # Remove warning. default='warn'

teams_df = pd.read_excel("STATS.xlsm", skiprows=8, nrows=12, usecols=[0,2])
new_teams_df = teams_df.rename(columns={"Unnamed: 0": "HOME", "Unnamed: 2": "AWAY"})
new_teams_df = new_teams_df.dropna()

print("\n********************\n Data Frame as dict \n********************")
print(new_teams_df.to_dict())

print("\nHome Column Row 1 Type: " + str(type(new_teams_df.at[1,'HOME'])))
print("Away Column Row 1 Type: " + str(type(new_teams_df.at[1,'AWAY'])))
print("\nHome Column Row 10 Type: " + str(type(new_teams_df.at[10,'HOME'])))
print("Away Column Row 10 Type: " + str(type(new_teams_df.at[10,'AWAY'])))
</code></pre> <p>Outputs</p> <pre><code>********************
 Data Frame as dict 
********************
{'HOME': {0: 342, 1: 232, 2: 110, 3: 23, 4: 27, 7: 23, 8: 0.5652, 9: 26.3, 10: 14.9, 11: 44}, 'AWAY': {0: 339.0, 1: 214.0, 2: 125.0, 3: 45.0, 4: 25.0, 7: 18.0, 8: 0.5, 9: 37.7, 10: 18.8, 11: 43.0}}

Home Column Row 1 Type: &lt;class 'int'&gt;
Away Column Row 1 Type: &lt;class 'numpy.float64'&gt;

Home Column Row 10 Type: &lt;class 'float'&gt;
Away Column Row 10 Type: &lt;class 'numpy.float64'&gt;
</code></pre> <p>It's a strange issue because the data comes straight from a stats website into an Excel file, so both columns should be exactly the same. Is there a way to convert the away column back to Python objects? Some rows would need to be floats and the rest integers.</p> <p>Thanks!</p>
<p>The issue is that the int data type does not support NaN values by default, and many of the away values may be blank. The resolution:</p> <p>In version 0.24+, pandas gained the ability to hold integer dtypes with missing values.</p> <p>Nullable Integer Data Type.</p> <p>Pandas can represent integer data with possibly missing values using arrays.IntegerArray. This is an extension type implemented within pandas. It is not the default dtype for integers and will not be inferred; you must explicitly pass the dtype into array() or Series:</p> <pre><code>arr = pd.array([1, 2, np.nan], dtype=pd.Int64Dtype())
pd.Series(arr)

0      1
1      2
2    NaN
dtype: Int64
</code></pre> <p>To convert a column to nullable integers, use:</p> <pre><code>df['myCol'] = df['myCol'].astype('Int64')
</code></pre>
python|python-3.x|pandas|dataframe|numpy
1
5,084
34,602,356
Creating a new column that combines content of two other columns in a list
<p>Say I have a <code>DataFrame</code> as follows: </p> <p><a href="https://i.stack.imgur.com/7NZNA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7NZNA.png" alt="enter image description here"></a></p> <p>I'd like to create a new <code>column</code> whose value is the 2nd and 3rd columns combined into a list in a cell.</p> <p>i.e. </p> <pre><code>combined [-8589.95, -6492.41] [-1475.30, 249.52] </code></pre> <p>Any ideas how to do this? I get this error: </p> <pre><code>ValueError: Length of values does not match length of index </code></pre> <p>when I try to do something like this:</p> <pre><code>DF['combined'] = [DF['chicago_bound1'], DF['chicago_bound2']] </code></pre>
<p>Try:</p> <pre><code>df['combined'] = list(zip(df.chicago_bound1, df.chicago_bound2)) </code></pre> <p>or</p> <pre><code>df['combined'] = df.apply(lambda x: [[x.chicago_bound1, x.chicago_bound2]], axis=1) </code></pre>
python|pandas|dataframe
4
5,085
34,661,318
REPLACE rows in mysql database table with pandas DataFrame
<p>Python Version - 2.7.6</p> <p>Pandas Version - 0.17.1</p> <p>MySQLdb Version - 1.2.5</p> <p>In my database ( <code>PRODUCT</code> ) , I have a table ( <code>XML_FEED</code> ). The table XML_FEED is huge ( Millions of record ) I have a pandas.DataFrame() ( <code>PROCESSED_DF</code> ). The dataframe has thousands of rows.</p> <p>Now I need to run this </p> <pre><code>REPLACE INTO TABLE PRODUCT.XML_FEED (COL1, COL2, COL3, COL4, COL5), VALUES (PROCESSED_DF.values) </code></pre> <p>Question:-</p> <p>Is there a way to run <code>REPLACE INTO TABLE</code> in pandas? I already checked <code>pandas.DataFrame.to_sql()</code> but that is not what I need. I do not prefer to read <code>XML_FEED</code> table in pandas because it very huge.</p>
<p>With the release of pandas 0.24.0, there is now an <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/io.html#io-sql-method" rel="noreferrer">official way</a> to achieve this by passing a custom insert method to the <code>to_sql</code> function. </p> <p>I was able to achieve the behavior of <code>REPLACE INTO</code> by passing this callable to <code>to_sql</code>:</p> <pre class="lang-py prettyprint-override"><code>def mysql_replace_into(table, conn, keys, data_iter): from sqlalchemy.dialects.mysql import insert from sqlalchemy.ext.compiler import compiles from sqlalchemy.sql.expression import Insert @compiles(Insert) def replace_string(insert, compiler, **kw): s = compiler.visit_insert(insert, **kw) s = s.replace("INSERT INTO", "REPLACE INTO") return s data = [dict(zip(keys, row)) for row in data_iter] conn.execute(table.table.insert(replace_string=""), data) </code></pre> <p>You would pass it like so:</p> <pre class="lang-py prettyprint-override"><code>df.to_sql(db, if_exists='append', method=mysql_replace_into) </code></pre> <p>Alternatively, if you want the behavior of <code>INSERT ... ON DUPLICATE KEY UPDATE ...</code> instead, you can use this:</p> <pre class="lang-py prettyprint-override"><code>def mysql_replace_into(table, conn, keys, data_iter): from sqlalchemy.dialects.mysql import insert data = [dict(zip(keys, row)) for row in data_iter] stmt = insert(table.table).values(data) update_stmt = stmt.on_duplicate_key_update(**dict(zip(stmt.inserted.keys(), stmt.inserted.values()))) conn.execute(update_stmt) </code></pre> <p>Credits to <a href="https://stackoverflow.com/a/11762400/1919794">https://stackoverflow.com/a/11762400/1919794</a> for the compile method.</p>
python|mysql|pandas|replace
15
5,086
60,119,503
How to format datetime in a dataframe the way I want?
<p>I cannot find the correct format for this datetime. I have tried several formats, <code>%Y/%m/%d%I:%M:%S%p</code> is the closest format I can find for the example below.</p> <pre><code>df['datetime'] = '2019-11-13 16:28:05.779' df['datetime'] = pd.to_datetime(df['datetime'], format="%Y/%m/%d%I:%M:%S%p") </code></pre> <p>Result:</p> <pre class="lang-none prettyprint-override"><code>ValueError: time data '2019-11-13 16:28:05.779' does not match format '%Y/%m/%d%I:%M:%S%p' (match) </code></pre>
<p>You can probably solve this by using the parameter <code>infer_datetime_format=True</code>. Here's an example:</p> <pre><code>df = {}
df['datetime'] = '2019-11-13 16:28:05.779'
df['datetime'] = pd.to_datetime(df['datetime'], infer_datetime_format=True)
print(df['datetime'])
print(type(df['datetime']))
</code></pre> <p>Output:</p> <pre><code>2019-11-13 16:28:05.779000
&lt;class 'pandas._libs.tslibs.timestamps.Timestamp'&gt;
</code></pre>
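<p>If you prefer an explicit format string, the sample value in the question should match the pattern below (note the dashes, the space between date and time, the 24-hour <code>%H</code>, and the <code>.%f</code> for the fractional seconds, all of which the original <code>%Y/%m/%d%I:%M:%S%p</code> attempt was missing); this is untested against your full column:</p> <pre><code>df['datetime'] = pd.to_datetime(df['datetime'], format='%Y-%m-%d %H:%M:%S.%f')
</code></pre>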
python|pandas|dataframe|datetime
4
5,087
60,104,458
Pyarrow: TypeError: an integer is required (got type str)
<p>I have a dataframe with following dtype:</p> <pre><code>[2020-02-06 19:15:06,579] {logging_mixin.py:95} INFO - campanha object chave_sistema_origem int64 valor_ajustado object </code></pre> <p>The column <code>valor_ajustado</code> has some value that is throwing me an exception when I try to write a parquet file using <code>df.to_parquet(buffer, index=False)</code></p> <pre><code>[2020-02-06 19:15:06,597] {taskinstance.py:1047} ERROR - an integer is required (got type str) ... File &quot;/Users/jackhammer/.virtualenvs/python373/lib/python3.7/site-packages/pyarrow/pandas_compat.py&quot;, line 540, in convert_column result = pa.array(col, type=type_, from_pandas=True, safe=safe) File &quot;pyarrow/array.pxi&quot;, line 207, in pyarrow.lib.array File &quot;pyarrow/array.pxi&quot;, line 78, in pyarrow.lib._ndarray_to_array </code></pre> <p>I know that column <code>valor_ajustado</code> has values like:</p> <blockquote> <p>0</p> <p>123,48</p> <p>1</p> <p>493,987</p> </blockquote> <p>Anyone knows why it's trying to manipulate integers instead of keep column as an object?</p>
<p>There is no data type in Apache Arrow to hold Python objects, so a supported, strongly typed data type has to be inferred (this is also true of Parquet files). I would cleanse the <code>valor_ajustado</code> column to make sure all the values are numeric (there is probably a string or some other bad value within). </p>
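<p>A possible cleansing step (a sketch that assumes the commas in values like <code>123,48</code> are decimal separators; adjust if they are thousands separators), coercing anything unparsable to NaN before writing the parquet file:</p> <pre><code>import pandas as pd

df['valor_ajustado'] = pd.to_numeric(
    df['valor_ajustado'].astype(str).str.replace(',', '.', regex=False),
    errors='coerce'
)

# inspect whatever could not be converted
print(df[df['valor_ajustado'].isna()])
</code></pre>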
python|pandas|parquet
4
5,088
60,182,142
Adding columns of list within list in pandas
<p>I have column names in a list within a list, with different sizes, like [["a","b","c"],["d","e"],["f"]], and a few of the columns contain NaN.</p> <pre><code> a  b  c    d    e   f
 1  2  3    4    5   6
 1  2  3  NaN  NaN   6
 1  2  3    4  inf   6
</code></pre> <p>The result should be the sum of each inner list, like g=a+b+c, h=d+e, i=f, where these are column names. A sum over NaN should result in NaN, not 0. How can I do that in a loop?</p> <p>Expected output:</p> <pre><code> g    h  i
 6    9  6
 6  NaN  6
 6  inf  6
</code></pre>
<p>Use list comprehension:</p> <pre><code>L = [["a","b","c"],["d","e"],["f"]] a = [df[x].sum(axis=1, min_count=1) for x in L] </code></pre> <p>Loop solution:</p> <pre><code>a = [] for x in L: a.append(df[x].sum(axis=1, min_count=1)) </code></pre> <hr> <pre><code>print (a) [0 6 1 6 2 6 dtype: int64, 0 9.0 1 NaN 2 inf dtype: float64, 0 6 1 6 2 6 dtype: int64] </code></pre> <p>And then add <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.concat.html" rel="nofollow noreferrer"><code>concat</code></a>:</p> <pre><code>df1 = pd.concat(a, axis=1, keys=['g','h','i']) print (df1) g h i 0 6 9.0 6 1 6 NaN 6 2 6 inf 6 </code></pre>
python|pandas
3
5,089
65,136,547
How to do Transfer Learning without ImageNet weights?
<p>This is a description of my project:</p> <p><strong>Dataset1:</strong> The bigger dataset, contains binary classes of images.</p> <p><strong>Dataset2</strong>: Contains <code>2</code> classes that are very similar in appearance to <code>Dataset1</code>. I want to make a model that is using transfer learning by learning from <code>Dataset1</code> and apply the weights with less learning rate in <code>Dataset2</code>.</p> <p>Therefore I’m looking to train the entire <code>VGG16</code> on <code>dataset1</code>, then using transfer learning to finetune the last layers for <code>dataset2</code>. I do not want to use the pre-trained imagenet database. This is the code I am using, and I have saved the wights from it:</p> <pre><code> from tensorflow.keras.layers import Input, Lambda, Dense, Flatten from tensorflow.keras.models import Model from tensorflow.keras.applications.vgg16 import VGG16 from tensorflow.keras.applications.vgg16 import preprocess_input from tensorflow.keras.preprocessing import image from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.models import Sequential import numpy as np from glob import glob import matplotlib.pyplot as plt vgg = VGG16(input_shape=(244, 244, 3), weights=None, include_top=False) # don't train existing weights for layer in vgg.layers: layer.trainable = False x = Flatten()(vgg.output) import tensorflow.keras prediction = tensorflow.keras.layers.Dense(2, activation='softmax')(x) model = Model(inputs=vgg.input, outputs=prediction) model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] ) from keras.preprocessing.image import ImageDataGenerator train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = True) test_datagen = ImageDataGenerator(rescale = 1./255) training_set = train_datagen.flow_from_directory('chest_xray/train', target_size = (224, 224), batch_size = 32, class_mode = 'categorical') test_set = train_datagen.flow_from_directory('chest_xray/test', target_size = (224, 224), batch_size = 32, class_mode = 'categorical') # fit the model r = model.fit_generator( training_set, validation_data=test_set, epochs=5, steps_per_epoch=len(training_set), validation_steps=len(test_set) ) model.save_weights('first_try.h5') </code></pre>
<h2>Update</h2> <p>Based on your query, it seems that the class number won't be different in <strong>Dataset2</strong>. At the same time, you also don't want to use image net weight. So, in that case, you don't need to map or store the weight (as described below). Just load the model and weight and train on <strong>Dataset2</strong>. Freeze the all trained layer from <strong>Dataset1</strong> and train the last layer on <strong>Dataset2</strong>; really straight forward.</p> <blockquote> <p>In my below response, though you're not needed the full information, I am keeping that anyway for future reference.</p> </blockquote> <hr /> <p>Here is a small demonstration of what you probably need. Hope it gives you some insight. Here we will train the <code>CIRFAR</code> data set which has <code>10</code> classes and try to use it for transfer learning with on different data set which probably has different input sizes and a different number of classes.</p> <h2>Preparing the CIFAR (10 Classes)</h2> <pre class="lang-py prettyprint-override"><code>import numpy as np import tensorflow as tf from tensorflow.keras.models import Model from tensorflow.keras.layers import Dense, GlobalAveragePooling2D, Dropout (x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data() # train set / data x_train = x_train.astype('float32') / 255 # validation set / data x_test = x_test.astype('float32') / 255 # train set / target y_train = tf.keras.utils.to_categorical(y_train, num_classes=10) # validation set / target y_test = tf.keras.utils.to_categorical(y_test, num_classes=10) print(x_train.shape, y_train.shape) print(x_test.shape, y_test.shape) ''' (50000, 32, 32, 3) (50000, 10) (10000, 32, 32, 3) (10000, 10) ''' </code></pre> <h2>Model</h2> <pre><code># declare input shape input = tf.keras.Input(shape=(32,32,3)) # Block 1 x = tf.keras.layers.Conv2D(32, 3, strides=2, activation=&quot;relu&quot;)(input) x = tf.keras.layers.MaxPooling2D(3)(x) # Now that we apply global max pooling. gap = tf.keras.layers.GlobalMaxPooling2D()(x) # Finally, we add a classification layer. output = tf.keras.layers.Dense(10, activation='softmax')(gap) # bind all func_model = tf.keras.Model(input, output) ''' Model: &quot;functional_3&quot; _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_2 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 15, 15, 32) 896 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 (None, 5, 5, 32) 0 _________________________________________________________________ global_max_pooling2d_1 (Glob (None, 32) 0 _________________________________________________________________ dense_1 (Dense) (None, 10) 330 ================================================================= Total params: 1,226 Trainable params: 1,226 Non-trainable params: 0 ''' </code></pre> <p>Run the model to get some weight matrices as follows:</p> <pre><code># compile print('\nFunctional API') func_model.compile( loss = tf.keras.losses.CategoricalCrossentropy(), metrics = tf.keras.metrics.CategoricalAccuracy(), optimizer = tf.keras.optimizers.Adam()) # fit func_model.fit(x_train, y_train, batch_size=128, epochs=1) </code></pre> <h2>Transfer Learning</h2> <p>Let's use it for <strong>MNIST</strong>. 
It also has <code>10</code> classes but for sake of need a different number of classes, we will make <code>even</code> and <code>odd</code> categories from it (<strong>2</strong> classes). Below how we will prepare these data sets</p> <pre><code>(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() # train set / data x_train = np.expand_dims(x_train, axis=-1) x_train = np.repeat(x_train, 3, axis=-1) x_train = x_train.astype('float32') / 255 # train set / target y_train = tf.keras.utils.to_categorical((y_train % 2 == 0).astype(int), num_classes=2) # validation set / data x_test = np.expand_dims(x_test, axis=-1) x_test = np.repeat(x_test, 3, axis=-1) x_test = x_test.astype('float32') / 255 # validation set / target y_test = tf.keras.utils.to_categorical((y_test % 2 == 0).astype(int), num_classes=2) print(x_train.shape, y_train.shape) print(x_test.shape, y_test.shape) ''' (60000, 28, 28, 3) (60000, 2) (10000, 28, 28, 3) (10000, 2) ''' </code></pre> <p>If you're familiar with the usage of ImageNet pretrained weight in the <code>keras</code> model, you probably use <code>include_top</code>. By setting it <code>False</code> we can easily load a weight file that has no top information of the pretrained models. So here we need to manually (kinda) do that. We need to grab the weight matrices until the last activation layer (in our case which is <code>Dense(10, softmax)</code>). And put it in the new instance of the base model, and + we add a new classifier layer (in our case that will be <code>Dense(2, softmax)</code>.</p> <pre><code>for i, layer in enumerate(func_model.layers): print(i,'\t',layer.trainable,'\t :',layer.name) ''' Train_Bool : Layer Names 0 True : input_1 1 True : conv2d 2 True : max_pooling2d 3 True : global_max_pooling2d # &lt; we go till here to grab the weight and biases 4 True : dense # 10 classes (from previous model) ''' </code></pre> <p><strong>Get Weights</strong></p> <pre><code>sparsified_weights = [] for w in func_model.get_layer(name='global_max_pooling2d').get_weights(): sparsified_weights.append(w) </code></pre> <p>By that, we map the weight from the old model except for the classifier layers (<code>Dense</code>). 
Please note that here we grab the weights up to the <code>GAP</code> layer, which sits right before the classifier.</p>
<p>Now, we will create a new model, the same as the old model except for the last layer (<code>10 Dense</code>), and at the same time add a new <code>Dense</code> with <code>2</code> units.</p>
<pre><code>predictions = Dense(2, activation='softmax')(func_model.layers[-2].output) new_func_model = Model(inputs=func_model.inputs, outputs = predictions) </code></pre>
<p>And now we can set the weights on the new model as follows:</p>
<pre><code>new_func_model.get_layer(name='global_max_pooling2d').set_weights(sparsified_weights) </code></pre>
<p>You can check to verify as follows; all will be the same except the last layer.</p>
<pre><code>func_model.get_weights() # last layer, Dense (10) new_func_model.get_weights() # last layer, Dense (2) </code></pre>
<p>Now you can train the model with the new data set, which in our case is <strong>MNIST</strong>:</p>
<pre><code>new_func_model.compile(optimizer='adam', loss='categorical_crossentropy') new_func_model.summary() ''' _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_1 (InputLayer) [(None, 32, 32, 3)] 0 _________________________________________________________________ conv2d (Conv2D) (None, 15, 15, 32) 896 _________________________________________________________________ max_pooling2d (MaxPooling2D) (None, 5, 5, 32) 0 _________________________________________________________________ global_max_pooling2d (Global (None, 32) 0 _________________________________________________________________ dense_6 (Dense) (None, 2) 66 ================================================================= Total params: 962 Trainable params: 962 Non-trainable params: 0 ''' # compile print('\nFunctional API') new_func_model.compile( loss = tf.keras.losses.CategoricalCrossentropy(), metrics = tf.keras.metrics.CategoricalAccuracy(), optimizer = tf.keras.optimizers.Adam()) # fit new_func_model.fit(x_train, y_train, batch_size=128, epochs=1) </code></pre>
<pre><code>WARNING:tensorflow:Model was constructed with shape (None, 32, 32, 3) for input Tensor(&quot;input_1:0&quot;, shape=(None, 32, 32, 3), dtype=float32), but it was called on an input with incompatible shape (None, 28, 28, 3). WARNING:tensorflow:Model was constructed with shape (None, 32, 32, 3) for input Tensor(&quot;input_1:0&quot;, shape=(None, 32, 32, 3), dtype=float32), but it was called on an input with incompatible shape (None, 28, 28, 3). 469/469 [==============================] - 1s 3ms/step - loss: 0.6453 - categorical_accuracy: 0.6447 &lt;tensorflow.python.keras.callbacks.History at 0x7f7af016feb8&gt; </code></pre>
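<p>Finally, the minimal sketch promised in the <strong>Update</strong> at the top, for the case where <strong>Dataset2</strong> has the same number of classes so no layer needs to be swapped: reload the saved weights, freeze everything except the classifier, and train again. The file name <code>dataset1_weights.h5</code> and the <code>x_train2</code>/<code>y_train2</code> arrays are only placeholders for illustration:</p>
<pre><code># after training on Dataset1, persist the weights (hypothetical file name)
func_model.save_weights('dataset1_weights.h5')

# later: rebuild/compile the same architecture and load the Dataset1 weights
func_model.load_weights('dataset1_weights.h5')

# freeze every layer except the final classifier
for layer in func_model.layers[:-1]:
    layer.trainable = False

# re-compile (needed after changing `trainable`) and train on Dataset2
func_model.compile(
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=tf.keras.metrics.CategoricalAccuracy(),
    optimizer=tf.keras.optimizers.Adam())
# func_model.fit(x_train2, y_train2, batch_size=128, epochs=1)
</code></pre>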
python|tensorflow|keras
2
5,090
50,014,314
Append string of column index to DataFrame columns
<p>I am working on a project using Learning to Rank. Below is the example dataset format (taken from <a href="https://www.microsoft.com/en-us/research/project/letor-learning-rank-information-retrieval/" rel="nofollow noreferrer">https://www.microsoft.com/en-us/research/project/letor-learning-rank-information-retrieval/</a>). The first column is the rank, the second column is the query id, and the following columns are <code>[feature number]:[feature value]</code></p> <pre><code>1008 qid:10 1:0.004356 2:0.080000 3:0.036364 4:0.000000 … 46:0.00000 1007 qid:10 1:0.004901 2:0.000000 3:0.036364 4:0.333333 … 46:0.000000 1006 qid:10 1:0.019058 2:0.240000 3:0.072727 4:0.500000 … 46:0.000000 </code></pre> <p>Right now, I have successfully converted my data into the following format in a <code>pandas.DataFrame</code>.</p> <pre><code>10 qid:354714443278337 3500 1 122.0 156.0 13.0 1698.0 1840.0 92.28260 ... ... </code></pre> <p>The first two columns are already fine. What I need next is prepending the feature number to the remaining columns (e.g. the first feature changes from <code>3500</code> to <code>1:3500</code>).</p> <p>I know I can prepend a string to a column by using the following command.</p> <pre><code>df['col'] = 'str' + df['col'].astype(str) </code></pre> <p>Looking at the first feature, <code>3500</code>: it is located at column index 2, so what I can think of is prepending <code>column index - 1</code> for each column. How do I prepend the string based on the column number? </p> <p>Any help would be appreciated.</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.radd.html" rel="nofollow noreferrer"><code>DataFrame.radd</code></a> to prepend the prefix (add the column-derived string from the right-hand side) and <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iloc.html" rel="nofollow noreferrer"><code>iloc</code></a> to select from the second column to the end:</p> <pre><code>print (df) 0 1 2 3 4 5 6 7 8 \ 0 10 qid:354714443278337 3500 1 122.0 156.0 13.0 1698.0 1840.0 1 10 qid:354714443278337 3500 1 122.0 156.0 13.0 1698.0 1840.0 9 0 92.2826 1 92.2826 df.iloc[:, 2:] = df.iloc[:, 2:].astype(str).radd(':').radd((df.columns[2:] - 1).astype(str)) print (df) 0 1 2 3 4 5 6 7 \ 0 10 qid:354714443278337 1:3500 2:1 3:122.0 4:156.0 5:13.0 6:1698.0 1 10 qid:354714443278337 1:3500 2:1 3:122.0 4:156.0 5:13.0 6:1698.0 8 9 0 7:1840.0 8:92.2826 1 7:1840.0 8:92.2826 </code></pre>
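<p>If the chained <code>radd</code> feels opaque, a more explicit (if slightly slower) sketch is to loop over the feature columns and build the prefix from the column position; this assumes the columns still carry the default integer labels shown above:</p>
<pre><code># prepend "&lt;feature number&gt;:" to every column from the third one onwards
for feat_num, col in enumerate(df.columns[2:], start=1):
    df[col] = '{}:'.format(feat_num) + df[col].astype(str)
</code></pre>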
python|pandas|dataframe
1
5,091
49,872,843
pandas reading excel tables from pandas-exported json
<p>So I have a small table from excel, which I'd like to read in Pandas. Actually, I have several of the likes, and I'd like to just embed them directly in my script rather than keeping track of separate files.</p> <p>My file could be a table like this <a href="https://i.stack.imgur.com/tGuw9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tGuw9.png" alt="enter image description here"></a></p> <p>And now I want to make it embeddable:</p> <pre><code>import pandas as pd pd.set_option("display.width", 1000) df = pd.read_excel("/your/excel/here/TEST.xlsx") my_json = df.to_json() # print of the above json that I want to keep in the script read_this = {"970":{"0.0":0.0,"0.975301809":0.153,"1.950603618":0.711,"2.925905427":1.269,"3.901207236":1.7775,"4.876509045":1.3125,"5.851810854":0.8475,"6.827112663":0.3825,"7.802414472":0.0,"8.777716281":0.0,"9.75301809":0.0},"1250":{"0.0":0.72,"0.975301809":0.6608,"1.950603618":0.5616,"2.925905427":0.4624,"3.901207236":0.3632,"4.876509045":0.36,"5.851810854":0.36,"6.827112663":0.36,"7.802414472":0.36,"8.777716281":0.36,"9.75301809":0.36},"2000":{"0.0":0.36,"0.975301809":1.18368,"1.950603618":3.50496,"2.925905427":5.383636362,"3.901207236":6.398181817,"4.876509045":9.031304347,"5.851810854":12.91304348,"6.827112663":14.7792,"7.802414472":15.8208,"8.777716281":16.56,"9.75301809":16.56},"3000":{"0.0":2.16,"0.975301809":5.03712,"1.950603618":9.85824,"2.925905427":13.33152,"3.901207236":15.83136,"4.876509045":18.57375,"5.851810854":21.50325,"6.827112663":24.43275,"7.802414472":818.440258,"8.777716281":1625.416258,"9.75301809":2041.92},"4000":{"0.0":8.64,"0.975301809":10.95428571,"1.950603618":16.26857143,"2.925905427":24.38666667,"3.901207236":33.48,"4.876509045":36.50666666,"5.851810854":34.85333333,"6.827112663":387.6812305,"7.802414472":1301.771077,"8.777716281":2215.860923,"9.75301809":2908.8},"5000":{"0.0":7.2,"0.975301809":134.1889811,"1.950603618":492.0670188,"2.925905427":849.9450564,"3.901207236":1207.823094,"4.876509045":1632.171428,"5.851810854":2814.281143,"6.827112663":3996.390856,"7.802414472":5178.500572,"8.777716281":6360.610284,"9.75301809":7542.72},"5500":{"0.0":285.48,"0.975301809":548.6879999,"1.950603618":1290.456,"2.925905427":2032.224,"3.901207236":2773.992,"4.876509045":3515.76,"5.851810854":5088.96,"6.827112663":6662.16,"7.802414472":8235.36,"8.777716281":9808.56,"9.75301809":11381.76},"6000":{"0.0":563.76,"0.975301809":963.1870186,"1.950603618":2088.844981,"2.925905427":3214.502943,"3.901207236":4340.160906,"4.876509045":5399.348572,"5.851810854":7363.638857,"6.827112663":9327.929144,"7.802414472":11292.21943,"8.777716281":13256.50972,"9.75301809":15220.8}} new_df = pd.DataFrame.from_dict(read_this) print("original\n", df, "\n") print("from json\n", new_df) </code></pre> <p>And I get the following</p> <pre><code>original 970 1250 2000 3000 4000 5000 5500 6000 0.000000 0.0000 0.7200 0.360000 2.160000 8.640000 7.200000 285.480 563.760000 0.975302 0.1530 0.6608 1.183680 5.037120 10.954286 134.188981 548.688 963.187019 1.950604 0.7110 0.5616 3.504960 9.858240 16.268571 492.067019 1290.456 2088.844981 2.925905 1.2690 0.4624 5.383636 13.331520 24.386667 849.945056 2032.224 3214.502943 3.901207 1.7775 0.3632 6.398182 15.831360 33.480000 1207.823094 2773.992 4340.160906 4.876509 1.3125 0.3600 9.031304 18.573750 36.506667 1632.171428 3515.760 5399.348572 5.851811 0.8475 0.3600 12.913043 21.503250 34.853333 2814.281143 5088.960 7363.638857 6.827113 0.3825 0.3600 14.779200 24.432750 387.681231 3996.390856 6662.160 9327.929144 
7.802414 0.0000 0.3600 15.820800 818.440258 1301.771077 5178.500572 8235.360 11292.219430 8.777716 0.0000 0.3600 16.560000 1625.416258 2215.860923 6360.610284 9808.560 13256.509720 9.753018 0.0000 0.3600 16.560000 2041.920000 2908.800000 7542.720000 11381.760 15220.800000 from json 1250 2000 3000 4000 5000 5500 6000 970 0.0 0.7200 0.360000 2.160000 8.640000 7.200000 285.480 563.760000 0.0000 0.975301809 0.6608 1.183680 5.037120 10.954286 134.188981 548.688 963.187019 0.1530 1.950603618 0.5616 3.504960 9.858240 16.268571 492.067019 1290.456 2088.844981 0.7110 2.925905427 0.4624 5.383636 13.331520 24.386667 849.945056 2032.224 3214.502943 1.2690 3.901207236 0.3632 6.398182 15.831360 33.480000 1207.823094 2773.992 4340.160906 1.7775 4.876509045 0.3600 9.031304 18.573750 36.506667 1632.171428 3515.760 5399.348572 1.3125 5.851810854 0.3600 12.913043 21.503250 34.853333 2814.281143 5088.960 7363.638857 0.8475 6.827112663 0.3600 14.779200 24.432750 387.681231 3996.390856 6662.160 9327.929144 0.3825 7.802414472 0.3600 15.820800 818.440258 1301.771077 5178.500572 8235.360 11292.219430 0.0000 8.777716281 0.3600 16.560000 1625.416258 2215.860923 6360.610284 9808.560 13256.509720 0.0000 9.75301809 0.3600 16.560000 2041.920000 2908.800000 7542.720000 11381.760 15220.800000 0.0000 </code></pre> <p>So close, but not really the same. How can I preserve the original structure as an embeddable line of text?</p> <p>Pastebin of excel file available <a href="https://pastebin.com/Ci46inDA" rel="nofollow noreferrer">here</a></p>
<p>OK, finally I understand what you want: include the content of your Excel file (i.e. a 2D matrix) directly as a variable in the source code of your script, so that you don't have to read the file anymore. Am I right?</p> <p>The native data structure able to store 2D matrices is a <strong>list of lists</strong>. This can be obtained from your Excel file by the following code:</p> <pre><code>import pandas as pd df = pd.read_excel("/your/excel/here/TEST.xlsx") print("mat =", df.values.tolist()) </code></pre> <p>which should print something like:</p> <pre><code>mat = [['', 970, 1250....], [0.00, 0, 0.72...], ...] </code></pre> <p>Then you simply copy the printed lines with your mouse and paste them at the beginning of your code, to create a matrix <code>mat</code> that stores your data.</p> <p>If you need a pandas DataFrame, simply change the <code>print</code> line to:</p> <pre><code>print("df = pd.DataFrame(%s)" % df.values.tolist()) </code></pre> <p>and apply the same copy/paste process.</p>
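<p>As a side note, if you would rather stay with the JSON route from your question: the column order gets shuffled because the default <code>to_json()</code> output is a plain dict keyed by column name. Using <code>orient='split'</code> keeps the index and column order, so the round trip should preserve the original structure (a sketch, not tested against your exact file):</p>
<pre><code>my_json = df.to_json(orient='split')
new_df = pd.read_json(my_json, orient='split')
</code></pre>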
python|pandas
1
5,092
63,915,693
Impute NaNs with the mean in column and find percentage of missing values
<p>I want to impute the mean value at all the missing values of the column <code>Product_Base_Margin</code> and then print the percentage of missing values in each column.</p> <p>My current code:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import pandas as pd df = pd.read_csv('https://query.data.world/s/Hfu_PsEuD1Z_yJHmGaxWTxvkz7W_b0') df = df[~np.isnan(df['Product_Base_Margin'])] print(round(100*(df.isnull().sum()/len(df.index)), 2)) </code></pre> <p>Expected output:</p> <pre><code>Ord_id 0.00 Prod_id 0.00 Ship_id 0.00 Cust_id 0.00 Sales 0.24 Discount 0.65 Order_Quantity 0.65 Profit 0.65 Shipping_Cost 0.65 Product_Base_Margin 0.00 dtype: float64 </code></pre> <p>What am I doing wrong?</p>
<p>Please try this: use <code>fillna</code> so that only the missing values are replaced with the column mean (assigning <code>.mean()</code> directly would overwrite the whole column), and drop the <code>~np.isnan</code> filter from your code:</p> <pre><code>df['Product_Base_Margin'] = df['Product_Base_Margin'].fillna(df['Product_Base_Margin'].mean()) print(round(100*(df.isnull().sum()/len(df.index)), 2)) </code></pre>
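<p>As a cross-check, the percentage of missing values per column can also be computed with <code>isna().mean()</code>, which should reproduce the expected output:</p>
<pre><code>print((100 * df.isna().mean()).round(2))
</code></pre>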
python|pandas|numpy
0
5,093
63,802,819
Hide a line on plotly line graph
<p>Imagine I have lines A, B, C, D, and E. I want lines A, B, and C to appear on the plotly line chart. I want the user to have the option to add lines D and E, but D and E should be hidden by default.</p> <p>Any suggestions on how to do this?</p> <p>For example, how would I hide Australia by default?</p> <pre><code>import plotly.express as px df = px.data.gapminder().query(&quot;continent=='Oceania'&quot;) fig = px.line(df, x=&quot;year&quot;, y=&quot;lifeExp&quot;, color='country') fig.show() </code></pre>
<p>You need to set the <code>visible</code> parameter to <code>legendonly</code> on every trace you want hidden by default:</p> <pre class="lang-py prettyprint-override"><code>import plotly.express as px countries_to_hide = [&quot;Australia&quot;] df = px.data.gapminder().query(&quot;continent=='Oceania'&quot;) fig = px.line(df, x=&quot;year&quot;, y=&quot;lifeExp&quot;, color='country') fig.for_each_trace(lambda trace: trace.update(visible=&quot;legendonly&quot;) if trace.name in countries_to_hide else ()) fig.show() </code></pre> <p><a href="https://i.stack.imgur.com/s0Y5E.png" rel="noreferrer"><img src="https://i.stack.imgur.com/s0Y5E.png" alt="enter image description here" /></a></p>
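<p>If you are on a reasonably recent Plotly version, the same thing can be done without the lambda by using <code>update_traces</code> with a <code>selector</code> (sketch for a single country):</p>
<pre class="lang-py prettyprint-override"><code>fig.update_traces(visible='legendonly', selector={'name': 'Australia'})
</code></pre>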
python|pandas|plotly
13
5,094
63,935,283
How to create a mask for all relatively white parts of an image using numpy?
<p>Say I have 2 white images (RGB 800x600 image) that is 'dirty' at some unknown positions, I want to create a final combined image that has all the dirty parts of both images.</p> <p>Just adding the images together reduces the 'dirtyness' of each blob, since I half the pixel values and then add them (to stay in the 0-&gt;255 rgb range), this is amplified when you have more than 2 images.</p> <p>What I want to do is create a mask for all relatively white pixels in the 3 channel image, I've seen that if all RGB values are within 10-15 of each other, a pixel is relatively white. How would I create this mask using numpy?</p> <p>Pseudo code for what I want to do:</p> <pre><code>img = cv2.imread(img) #BGR image mask = np.where( BGR within 10 of each other) </code></pre> <p>Then I can use the first image, and replace pixels on it where the second picture is not masked, keeping the 'dirtyness level' relatively dirty. (I know some dirtyness of the second image will replace that of the first, but that's okay)</p> <p>Edit: People asked for images so I created some sample images, the white would not always be so exactly white as in these samples which is why I need to use a 'within 10 BGR' range.</p> <p>Image 1 <a href="https://i.stack.imgur.com/tEMNO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tEMNO.jpg" alt="Image 1" /></a></p> <p>Image 2 <a href="https://i.stack.imgur.com/2fv2Q.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2fv2Q.jpg" alt="Image 2" /></a></p> <p>Image 3 (combined, ignore the difference in yellow blob from image 2 to here, they should be the same) <a href="https://i.stack.imgur.com/HbHjK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HbHjK.jpg" alt="Image 3" /></a></p>
<p>I would suggest you convert to <a href="https://en.wikipedia.org/wiki/HSL_and_HSV" rel="nofollow noreferrer">HSV colourspace</a> and look for saturated (colourful) pixels like this:</p> <pre><code>import cv2 import numpy as np # Load background and foreground images bg = cv2.imread('A.jpg') fg = cv2.imread('B.jpg') # Convert to HSV colourspace and extract just the Saturation Sat = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)[..., 1] # Find best (Otsu) threshold to divide black from white, and apply it _ , mask = cv2.threshold(Sat,0,1,cv2.THRESH_BINARY+cv2.THRESH_OTSU) # At each pixel, choose foreground where mask is set and background elsewhere res = np.where(mask[...,np.newaxis], fg, bg) # Save the result cv2.imwrite('result.png', res) </code></pre> <p><a href="https://i.stack.imgur.com/IGVvS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IGVvS.png" alt="enter image description here" /></a></p> <p>Note that you can modify this if it picks up too many or too few coloured pixels. If it picks up too few, you could dilate the mask, and if it picks up too many, you could erode the mask. You could also blur the image a little bit before masking, which might not be a bad idea as it is a <em>&quot;nasty&quot;</em> JPEG with compression artefacts in it. You could change the saturation test and make it more clinical and targeted if you only wanted to allow certain colours through, or a certain brightness, or a combination.</p>
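<p>For completeness, the literal &quot;BGR values within 10 of each other&quot; test from the question can be expressed with the per-pixel channel spread. This is only a rough sketch and the threshold of 10 may need tuning; note it also treats grey/black pixels as &quot;white&quot;, so you may additionally want to require a high minimum channel value:</p>
<pre><code>import cv2
import numpy as np

bg = cv2.imread('A.jpg')
img = cv2.imread('B.jpg')

# spread between the largest and smallest channel at each pixel
spread = img.max(axis=2).astype(int) - img.min(axis=2).astype(int)
dirty_mask = spread &gt;= 10          # True where the pixel is coloured ("dirty")

# keep the dirty pixels of img, take everything else from the other image
res = np.where(dirty_mask[..., np.newaxis], img, bg)
cv2.imwrite('combined.png', res)
</code></pre>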
python|numpy|opencv
1
5,095
47,060,565
Tensorflow only works under root after drivers update
<p>I had a working Tensorflow for Python installation on my Ubuntu 16.04.3 LTS Xenial / nVidia GTX 1080 Ti machine. Then, the <strong>nVidia drivers got updated</strong> from 374 to 384.90 (<code>nvidia-smi</code> reports <code>NVIDIA-SMI 384.90</code>).</p> <p>Since then, I've <strong>only been able to run my programs under <code>root</code> or in CPU mode</strong>. For instance, when run using a regular user account, the <a href="https://github.com/tensorflow/models/tree/master/official/mnist" rel="nofollow noreferrer">MNIST example</a> kept failing with a <code>CUDNN_STATUS_INTERNAL_ERROR</code> error:</p> <pre><code>E tensorflow/stream_executor/cuda/cuda_dnn.cc:371] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR E tensorflow/stream_executor/cuda/cuda_dnn.cc:338] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM F tensorflow/core/kernels/conv_ops.cc:672] Check failed: stream-&gt;parent()-&gt;GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo&lt;T&gt;(), &amp;algorithms) </code></pre> <p>I have <strong>tried reinstalling the drivers/CUDA/cudnn</strong> in various combination several times, following the <a href="https://www.tensorflow.org/install/install_linux" rel="nofollow noreferrer">official installation guide for TF r1.3</a> throughout the process.</p> <p>Whatever solutions I found online (mostly suggesting this is an issue with memory, which for 10GB cards trying to run MNIST is unlikely) have been tried out but have not been helpful, e.g.:</p> <ul> <li><a href="https://github.com/tensorflow/models/issues/1064" rel="nofollow noreferrer">https://github.com/tensorflow/models/issues/1064</a></li> <li><a href="https://github.com/tensorflow/tensorflow/issues/6606" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/6606</a></li> <li><a href="https://github.com/tensorflow/tensorflow/issues/8879" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/8879</a></li> <li><a href="https://github.com/tensorflow/tensorflow/issues/9132" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/issues/9132</a></li> </ul> <p>Any idea how to resolve this issue?</p> <h3>Details</h3> <p>The update, as detailed in the logs in <code>/var/log/apt/history.log</code></p> <pre><code>Start-Date: 2017-10-25 06:54:42¬ Commandline: /usr/bin/unattended-upgrade¬ Install: nvidia-384-dev:amd64 (384.90-0ubuntu0.16.04.1, automatic), libcuda1-384:amd64 (384.90-0ubuntu0.16.04.1, automatic), nvidia-opencl-icd-384:amd64 (384.90-0ubuntu0.16.04.1, automatic), nvidia-384:amd64 (384.90-0ubuntu0.16.04.1, automatic)¬ Upgrade: libcurl3:amd64 (7.47.0-1ubuntu2.3, 7.47.0-1ubuntu2.4), libcuda1-375:amd64 (375.66-0ubuntu0.16.04.1, 384.90-0ubuntu0.16.04.1), libicu55:amd64 (55.1-7ubuntu0.2, 55.1-7ubuntu0.3), chromium-browser:amd64 (61.0.3163.100-0ubuntu0.16.04.1306, 62.0.3202.62-0ubuntu0.16.04.1308), chromium-codecs-ffmpeg-extra:amd64 (61.0.3163.100-0ubuntu0.16.04.1306, 62.0.3202.62-0ubuntu0.16.04.1308), nvidia-375-dev:amd64 (375.66-0ubuntu0.16.04.1, 384.90-0ubuntu0.16.04.1), libwebkit2gtk-4.0-37:amd64 (2.16.6-0ubuntu0.16.04.1, 2.18.0-0ubuntu0.16.04.2), mysql-common:amd64 (5.7.19-0ubuntu0.16.04.1, 5.7.20-0ubuntu0.16.04.1), libmysqlclient20:amd64 (5.7.19-0ubuntu0.16.04.1, 5.7.20-0ubuntu0.16.04.1), libicu-dev:amd64 (55.1-7ubuntu0.2, 55.1-7ubuntu0.3), icu-devtools:amd64 (55.1-7ubuntu0.2, 55.1-7ubuntu0.3), chromium-browser-l10n:amd64 (61.0.3163.100-0ubuntu0.16.04.1306, 62.0.3202.62-0ubuntu0.16.04.1308), curl:amd64 (7.47.0-1ubuntu2.3, 
7.47.0-1ubuntu2.4), libjavascriptcoregtk-4.0-18:amd64 (2.16.6-0ubuntu0.16.04.1, 2.18.0-0ubuntu0.16.04.2), nvidia-opencl-icd-375:amd64 (375.66-0ubuntu0.16.04.1, 384.90-0ubuntu0.16.04.1), libcurl3-gnutls:amd64 (7.47.0-1ubuntu2.3, 7.47.0-1ubuntu2.4), nvidia-375:amd64 (375.66-0ubuntu0.16.04.1, 384.90-0ubuntu0.16.04.1), libwebkit2gtk-4.0-37-gtk2:amd64 (2.16.6-0ubuntu0.16.04.1, 2.18.0-0ubuntu0.16.04.2)¬ End-Date: 2017-10-25 06:56:00 </code></pre> <p>I would be able to run, again under a regular user, the simple validation program from the <a href="https://www.tensorflow.org/install/install_linux#validate_your_installation" rel="nofollow noreferrer">Tensorflow installation docs</a></p> <pre><code>import tensorflow as tf hello = tf.constant('Hello, TensorFlow!') sess = tf.Session() print(sess.run(hello)) </code></pre> <p>but the <a href="https://github.com/tensorflow/models/tree/master/official/mnist" rel="nofollow noreferrer">MNIST example</a> kept failing:</p> <pre><code>(venv-test)$~/tensorflow-validate/models/official/mnist$ python mnist.py INFO:tensorflow:Using default config. INFO:tensorflow:Using config: {'_keep_checkpoint_every_n_hours': 10000, '_tf_random_seed': 1, '_keep_checkpoint_max': 5, '_session_config': None, '_model_dir': '/tmp/mnist_model', '_save_summary_steps': 100, '_log_step_count_steps': 100, '_save_checkpoints_secs': 600, '_save_checkpoints_steps': None} INFO:tensorflow:Create CheckpointSaverHook. 2017-10-31 18:39:05.951324: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations. 2017-10-31 18:39:05.951342: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations. 2017-10-31 18:39:05.951346: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations. 2017-10-31 18:39:05.951348: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations. 2017-10-31 18:39:05.951366: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations. 
2017-10-31 18:39:06.591310: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:893] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero 2017-10-31 18:39:06.591682: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties: name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate (GHz) 1.582 pciBusID 0000:01:00.0 Total memory: 10.91GiB Free memory: 10.75GiB 2017-10-31 18:39:06.591693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0 2017-10-31 18:39:06.591696: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y 2017-10-31 18:39:06.591701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -&gt; (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:01:00.0) 2017-10-31 18:39:07.977441: E tensorflow/stream_executor/cuda/cuda_dnn.cc:371] could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR 2017-10-31 18:39:07.977466: E tensorflow/stream_executor/cuda/cuda_dnn.cc:338] could not destroy cudnn handle: CUDNN_STATUS_BAD_PARAM 2017-10-31 18:39:07.977472: F tensorflow/core/kernels/conv_ops.cc:672] Check failed: stream-&gt;parent()-&gt;GetConvolveAlgorithms( conv_parameters.ShouldIncludeWinogradNonfusedAlgo&lt;T&gt;(), &amp;algorithms) </code></pre> <p>I attempted reinstalling as follows:</p> <p>Install CUDA 8</p> <pre><code>$ sudo apt install cuda-8-0 Reading package lists... Done Building dependency tree Reading state information... Done cuda-8-0 is already the newest version (8.0.61-1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. </code></pre> <p>and</p> <pre><code>$ echo $CUDA_HOME /usr/local/cuda-8.0 $ echo $LD_LIBRARY_PATH /usr/local/cuda-8.0/lib64 </code></pre> <p><code>libcupti-dev</code> is installed</p> <pre><code>$ sudo apt-get install libcupti-dev Reading package lists... Done Building dependency tree Reading state information... Done libcupti-dev is already the newest version (7.5.18-0ubuntu1). 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. </code></pre> <p>Create a new environment using <code>virtualenv</code></p> <pre><code>$ virtualenv --system-site-packages -p python3 ~/venv-test Already using interpreter /usr/bin/python3 Using base prefix '/usr' New python executable in /home/represent/venv-test/bin/python3 Also creating executable in /home/represent/venv-test/bin/python Installing setuptools, pip, wheel...done. </code></pre> <p>installed Tensorflow using <code>pip</code> in <code>virtualenv</code></p> <p>(venv-test) represent@gatekeeper:/data/installers$ sudo pip3 install --upgrade tensorflow-gpu ... Successfully installed bleach-1.5.0 html5lib-0.9999999 markdown-2.6.9 numpy-1.13.3 protobuf-3.4.0 setuptools-36.6.0 six-1.11.0 tensorflow-gpu-1.3.0 tensorflow-tensorboard-0.1.8 wheel-0.30.0</p> <p>Installed cuDNN 6.0.12</p> <pre><code>$ sudo dpkg -i libcudnn6_6.0.21-1+cuda8.0_amd64.deb Selecting previously unselected package libcudnn6. (Reading database ... 226608 files and directories currently installed.) Preparing to unpack libcudnn6_6.0.21-1+cuda8.0_amd64.deb ... Unpacking libcudnn6 (6.0.21-1+cuda8.0) ... Setting up libcudnn6 (6.0.21-1+cuda8.0) ... Processing triggers for libc-bin (2.23-0ubuntu9) ... 
/sbin/ldconfig.real: /usr/lib/nvidia-384/libEGL.so.1 is not a symbolic link /sbin/ldconfig.real: /usr/lib32/nvidia-384/libEGL.so.1 is not a symbolic link </code></pre> <p>and the dev package</p> <pre><code>$ sudo dpkg -i libcudnn6-dev_6.0.21-1+cuda8.0_amd64.deb Selecting previously unselected package libcudnn6-dev. (Reading database ... 226614 files and directories currently installed.) Preparing to unpack libcudnn6-dev_6.0.21-1+cuda8.0_amd64.deb ... Unpacking libcudnn6-dev (6.0.21-1+cuda8.0) ... Setting up libcudnn6-dev (6.0.21-1+cuda8.0) ... update-alternatives: using /usr/include/x86_64-linux-gnu/cudnn_v6.h to provide /usr/include/cudnn.h (libcudnn) in auto mode </code></pre> <p>Validating the installation</p> <pre><code>(venv-test)$ python Python 3.5.2 (default, Sep 14 2017, 22:51:06) [GCC 5.4.0 20160609] on linux Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow as tf &gt;&gt;&gt; tf.VERSION '1.3.0' </code></pre>
<p>We worked on this together with Jan Benes and found that the solution was to add our non-root user to the <code>nvidia-persistenced</code> group, for example with <code>sudo usermod -a -G nvidia-persistenced our-nonroot-user</code>.</p> <p>The reason behind this is that the default installation of the nvidia driver (<code>nvidia-384</code> in our case) on Ubuntu creates a user and group named <code>nvidia-persistenced</code>, and this user is then used to run the <a href="http://docs.nvidia.com/deploy/driver-persistence/index.html#persistence-daemon" rel="nofollow noreferrer" title="NVIDIA Persistence daemon">NVIDIA Persistence Daemon</a>. If our non-root user didn't have access to files written by this daemon, the MNIST example failed. It didn't fail for root (as it has access to everything) and stopped failing after we added our non-root user to the <code>nvidia-persistenced</code> group.</p>
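<p>A quick way to verify the change (keeping in mind that new group membership only takes effect once the user logs in again) is something like:</p>
<pre><code># should now list nvidia-persistenced among the user's groups
groups our-nonroot-user
# or equivalently
id -nG our-nonroot-user
</code></pre>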
tensorflow|cuda|nvidia
2
5,096
46,980,958
Generating all the possible values in a ndarray in numpy?
<p>I am using gambit in python to simulate a world in a game theoretic manner. One construct of gambit is to save the "outcomes" for a set of decisions each player involved takes. This is of the form: </p> <p><code>game[d1,d2,d3,...,dn][n] = payoff </code></p> <p>where <code>d1</code> is the index of the decision made by player 1, <code>d2</code> is the index of the decision made by player 2, and so on, and <code>n</code> is the index of the player for whom the <code>payoff</code> is being stored.</p> <p>Now, there may be a variable number of players, so the dimension of the index passed into the <code>game</code> object may change.</p> <p>How do I generate the series from <code>[0,0,...,0]</code> through <code>[8,8,...,8]</code> (where dimension = number of players = n) so that I can store them into <code>[d1,d2,d3,...,dn]</code>? </p>
<p>Take a look at python's <code>itertools</code> module. It sounds like the <code>product</code> function will do what you want. </p> <p>Example:</p> <pre><code>import itertools as it list(it.product(*[range(2)]*3)) </code></pre> <p>This gives all length-three tuples whose entries take one of two values:</p> <pre><code>[(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)] </code></pre> <p>There are lots of other possibilities with <code>itertools</code>; in particular it also provides <code>permutations</code> and <code>combinations</code>.</p>
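<p>Mapped onto the question (decision indices <code>0</code> through <code>8</code> for a variable number of players), a rough sketch would be the following, where <code>n_players</code> is just a placeholder for however many players your simulation has:</p>
<pre><code>import itertools as it

n_players = 4  # hypothetical number of players

for decisions in it.product(range(9), repeat=n_players):
    # decisions runs from (0, 0, ..., 0) through (8, 8, ..., 8)
    for n in range(n_players):
        pass  # e.g. store the payoff for player n at game[decisions][n]
</code></pre>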
python|numpy|multidimensional-array|game-theory|gambit
0
5,097
46,682,285
Fill array based on sparse information
<p>I have the following sparsity structure to describe the underlying dense array <code>A</code>:</p> <pre><code>a = np.array([1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1]) b = np.array([1, 5, 2, 3]) </code></pre> <p><code>a</code> contains <code>1</code> whenever <code>A</code> changes the value. <code>b</code> contains the new value whenever <code>A</code> changes value. That is, my example of <code>a</code>, <code>b</code> yields the following array:</p> <pre><code>A = np.array([1, 1, 1, 1, 1, 5, 5, 2, 2, 2, 3]) </code></pre> <p>How can I efficiently recover <code>A</code> given the sparse information? I'd be particularly interested in a solution that can be scaled up when <code>b</code> is n-dimensional.</p> <hr> <p>In 2d, we would have the same <code>a</code>, but</p> <pre><code>bb = np.array([[1, 5, 2, 2], [2, -1, 0, 1]]) </code></pre> <p>which yields</p> <pre><code>AA = np.array([[1, 1, 1, 1, 1, 5, 5, 2, 2, 2, 2], [2, 2, 2, 2, 2, -1, -1, 0, 0, 0, 1]]) </code></pre>
<p>Pretty simple really with <code>cumsum</code>. Use <code>cumsum</code> to get those <em>intervaled</em> indices and then index into the data array.</p> <p>Thus, for <code>1D</code> data -</p> <pre><code>idx = a.cumsum(-1)-1 out = b[idx] </code></pre> <p>For <code>2D</code> data -</p> <pre><code>out = bb[np.arange(bb.shape[0])[:,None],idx] </code></pre> <p>For generic <code>n-dim</code> data, simply use <code>np.take</code> to index along the last axis and thus would cover for generic <code>n-dim</code> cases, like so -</p> <pre><code>np.take(b_ndarray,idx,axis=-1) </code></pre> <p><strong>Sample runs</strong></p> <pre><code>In [80]: a # sparse array that defines the intervals/indices Out[80]: array([1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1]) In [81]: b # 1D data array Out[81]: array([1, 5, 2, 3]) In [82]: bb # 2D data array Out[82]: array([[ 1, 5, 2, 2], [ 2, -1, 0, 1]]) In [93]: idx = a.cumsum(-1)-1 # Get the intervaled indices In [94]: idx Out[94]: array([0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 3]) In [84]: np.take(b,idx,axis=-1) # output for 1D data Out[84]: array([1, 1, 1, 1, 1, 5, 5, 2, 2, 2, 3]) In [85]: np.take(bb,idx,axis=-1) # output for 2D data Out[85]: array([[ 1, 1, 1, 1, 1, 5, 5, 2, 2, 2, 2], [ 2, 2, 2, 2, 2, -1, -1, 0, 0, 0, 1]]) </code></pre> <p>Let's test out for some random <code>3D</code> data too -</p> <pre><code>In [89]: bbb = np.random.randint(-4,5,(2,3,4)) In [90]: bbb Out[90]: array([[[-1, 0, 0, 4], [ 0, -1, 3, 1], [ 1, -4, -3, 1]], [[-1, -4, 1, -4], [-3, -2, 0, -2], [-4, -1, -2, -4]]]) In [91]: np.take(bbb,idx,axis=-1) Out[91]: array([[[-1, -1, -1, -1, -1, 0, 0, 0, 0, 0, 4], [ 0, 0, 0, 0, 0, -1, -1, 3, 3, 3, 1], [ 1, 1, 1, 1, 1, -4, -4, -3, -3, -3, 1]], [[-1, -1, -1, -1, -1, -4, -4, 1, 1, 1, -4], [-3, -3, -3, -3, -3, -2, -2, 0, 0, 0, -2], [-4, -4, -4, -4, -4, -1, -1, -2, -2, -2, -4]]]) </code></pre> <hr> <p><strong>Runtime test</strong></p> <p>Other approach(es) -</p> <pre><code>def diff_repeat_1d(a, b): # @Kasramvd's soln for 1D inds = np.concatenate((np.where(a)[0], [a.size])) durations = np.diff(inds) return np.repeat(b, durations) def diff_repeat_2d(a, b): # @Kasramvd's soln for 2D inds = np.concatenate((np.where(a)[0], [a.size])) durations = np.diff(inds) return np.repeat(bb, durations, axis=1) </code></pre> <p>Timings on 1D data -</p> <pre><code>In [199]: a = np.array([1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1]) ...: b = np.array([1, 5, 2, 3]) ...: In [200]: a = np.tile(a,100000) ...: b = np.tile(b,100000) ...: In [201]: %timeit diff_repeat_1d(a, b) # @Kasramvd's soln 100 loops, best of 3: 8.42 ms per loop In [202]: %timeit np.take(b,a.cumsum()-1,axis=-1) 100 loops, best of 3: 4.53 ms per loop </code></pre> <p>Timings on 2D data -</p> <pre><code>In [203]: bb = np.array([[1, 5, 2, 2], [2, -1, 0, 1]]) In [204]: bb = np.tile(bb,100000) In [206]: %timeit diff_repeat_2d(a, bb) # @Kasramvd's soln 100 loops, best of 3: 12.1 ms per loop In [207]: %timeit np.take(bb,a.cumsum()-1,axis=-1) 100 loops, best of 3: 5.58 ms per loop </code></pre>
python|numpy
2
5,098
38,671,630
initialization of multiarray raised unreported exception python
<p>I am a new programmer who is picking up python. I have recently been trying to learn about importing csv files using numpy. Here is my code:</p> <pre><code>import numpy as np x = np.loadtxt("abcd.py", delimiter = True, unpack = True) print(x) </code></pre> <p>IDLE returns the following:</p> <pre><code>&gt;&gt; True &gt;&gt; Traceback (most recent call last): &gt;&gt; File "C:/Python34/Scripts/a.py", line 1, in &lt;module&gt; import numpy as np &gt;&gt; File "C:\Python34\lib\site-packages\numpy\__init__.py", line 180, in &lt;module&gt; from . import add_newdocs &gt;&gt; File "C:\Python34\lib\site-packages\numpy\add_newdocs.py", line 13, in &lt;module&gt; from numpy.lib import add_newdoc &gt;&gt; File "C:\Python34\lib\site-packages\numpy\lib\__init__.py", line 8, in &lt;module&gt; from .type_check import * &gt;&gt; File "C:\Python34\lib\site-packages\numpy\lib\type_check.py", line 11, in &lt;module&gt; import numpy.core.numeric as _nx &gt;&gt; File "C:\Python34\lib\site-packages\numpy\core\__init__.py", line 14, in &lt;module&gt; from . import multiarray &gt;&gt; SystemError: initialization of multiarray raised unreported exception </code></pre> <p>Why do I get this system error and how can I remedy it?</p>
<p>I have experienced this problem too. It is caused by a file named "datetime.py" in the same folder (exactly the same problem confronted by <a href="https://groups.google.com/a/continuum.io/forum/#!topic/anaconda/VqRUdoA-LnA" rel="nofollow noreferrer">Bruce</a>). "datetime" is actually an existing python module. However, I do not know why running my own script, <em>e.g.</em> <code>plot.py</code>, would import my <code>datetime.py</code> file (I have seen the output produced by my <code>datetime.py</code>, and there is an auto-generated <code>datetime.cpython-36.pyc</code> in the <code>__pycache__</code> folder).</p> <p>Although I am not clear about how the error is triggered, after I renamed my <code>datetime.py</code> file to another name, I could run <code>plot.py</code> immediately. Therefore, I suggest you check whether any of your files has a name that collides with a standard module. (P.S. I use Visual Studio Code to run Python.)</p>
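<p>A quick way to check whether one of your own files is shadowing a standard module is to print where Python actually loads it from; if the path points into your project folder rather than the standard library (or numpy's install directory), that file is the culprit:</p>
<pre><code>import datetime
print(datetime.__file__)   # should point into the standard library, not your script folder
</code></pre>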
python|numpy
4
5,099
38,954,251
Python - find closest position and value
<p>I'm trying to find the closest point for a given pair of X and Y coordinates so that I can access its value. In my case, for each x in <code>np.arange(0, X.max(), 1)</code>, I would like to extract the point whose Y is closest to 0 and obtain its value from the "values" array:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt #My coordinates are given here : X = np.array([0, 1, 2, 3, 4, 5, 0, 1, 2, 3, 4 ,5, 0, 1, 2, 3, 4 ,5]) Y = np.array([-2.5, -1.5, 0, 0, 2, 2.5, 2, 1.5, 1, -1, -1.2,-2.5, 0.2, 0.5, 0, -0.5, -0.1,0.05]) plt.scatter(X,Y);plt.show() #The corresponding values are : values = np.array([-1.1, -9, 10, 10, 20, 25, 21, 15, 0, 2, -2,-5, 2, 50, 0, -5, -1,5]) # I thought to use a for loop : def find_index(x,y): xi=np.searchsorted(X,x) yi=np.searchsorted(Y,y) return xi,yi for i in arange(float(0),float(X.max()),1): print i thisLat, thisLong = find_index(i,0) print thisLat, thisLong values[thisLat,thisLong] </code></pre> <p>But I obtained an error: "IndexError: too many indices"</p>
<p>You can use something faster than a <code>for</code> loop:</p> <pre><code>import numpy as np def find_nearest(array, value): ''' Find nearest value in an array ''' idx = (np.abs(array-value)).argmin() return idx haystack = np.arange(10) needle = 5.8 idf = find_nearest(haystack, needle) print haystack[idf] # This will return 6 </code></pre> <p>This function will return the index of the nearest value in the array provided (such that we don't use global variables). Note that this searches in a 1D array, just like you have for <code>X</code> and <code>Y</code>.</p>
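<p>To relate this back to your code: <code>values</code> is one-dimensional, so indexing it with two indices (<code>values[thisLat, thisLong]</code>) is what raises &quot;too many indices&quot;. A rough sketch of what I think you are after (for every integer x, take the sample whose Y is closest to 0 among the points at that x) could look like this:</p>
<pre><code>for x in np.arange(0, X.max() + 1, 1):
    at_x = np.where(X == x)[0]                 # indices of the samples at this x
    best = at_x[find_nearest(Y[at_x], 0)]      # among them, the Y closest to 0
    print('{}: Y={}, value={}'.format(x, Y[best], values[best]))
</code></pre>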
python|numpy|indexing
4