Dataset schema (from the dataset viewer): Unnamed: 0 (int64, 0 to 378k), id (int64, 49.9k to 73.8M), title (string, length 15 to 150), question (string, length 37 to 64.2k), answer (string, length 37 to 44.1k), tags (string, length 5 to 106), score (int64, -10 to 5.87k)
377,600
59,335,245
How to output an image with a CNN?
<p>I'm trying to do depth estimation with CNNs (this is my ultimate goal), but a problem I found is that I have only done image classification with CNNs, using for example "CIFAR-10", "MNIST", "Cats vs Dogs", etc. To do depth estimation I need to output a new image (the NYUv2 dataset has the labeled images). So, I'll input an image like 256x256x3 and need to output another image of, for example, 228x228x3.</p> <p>What do I need to do? Can I just do the convolutions for a while and after that decrease the feature maps and increase the dimensions? Thanks</p> <p>Note: I'm using TensorFlow 2.0</p>
<p>I suggest you use a type of <a href="https://arxiv.org/abs/1505.04597" rel="nofollow noreferrer">UNet</a>. This kind of architecture has downsampling layers, followed by upsampling layers that get you back to the original spatial dimensions.</p>
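<p>As a minimal sketch of that down/up-sampling idea in <code>tf.keras</code> (layer sizes here are placeholders, and a real UNet additionally concatenates encoder feature maps into the decoder via skip connections):</p>
<pre><code>import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, activation='relu', padding='same')(inputs)
x = layers.MaxPooling2D()(x)                   # downsample 256 -&gt; 128
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D()(x)                   # upsample 128 -&gt; 256
outputs = layers.Conv2D(3, 3, padding='same')(x)  # image-shaped output
model = tf.keras.Model(inputs, outputs)
model.summary()
</code></pre>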
python|tensorflow|keras|deep-learning|conv-neural-network
0
377,601
59,107,376
finding the frequency distribution of values in a column
<p>I have df (8360 x 3 columns)</p> <pre><code> Time A B 0 01.01.2018 00:00:00 0.019098 32.437083 1 01.01.2018 01:00:00 0.018871 32.462083 2 01.01.2018 02:00:00 0.018643 32.487083 3 01.01.2018 03:00:00 0.018416 32.512083 4 01.01.2018 04:00:00 0.018189 32.537083 5 01.01.2018 05:00:00 0.017961 32.562083 6 01.01.2018 06:00:00 0.017734 33.189708 7 01.01.2018 07:00:00 0.017507 34.122968 8 01.01.2018 08:00:00 0.017279 32.897831 9 01.01.2018 09:00:00 0.017052 32.482338 </code></pre> <p>and want to group the df by the numeric values of column B. I want to find out in what range the numbers in the column are increasing/decreasing the most (frequency distribution). Right now I just use <code>df.describe()</code> and play with the numbers. For example, I found out that there are 300 values which are smaller than 1: <code>new_df = df[df['B'] &lt; 1]</code></p> <p>Is there a specific function to help me with this task?</p>
<p>To get an idea of the distribution of values, just plot a histogram. For example, in a Jupyter notebook:</p> <pre><code>%matplotlib inline df.B.hist() </code></pre> <p>or compute a cumulative frequency histogram with scipy:</p> <pre><code>import scipy.stats scipy.stats.cumfreq(df.B) </code></pre>
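<p>If you prefer numbers over a plot, a quick sketch that bins <code>B</code> into 10 equal-width intervals and counts the values in each (the column name is taken from the question):</p>
<pre><code>df['B'].value_counts(bins=10).sort_index()
</code></pre>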
python-3.x|pandas|dataframe
0
377,602
59,468,830
string manipulation with python pandas and replacement function
<p>I'm trying to write code that checks the sentences in a csv file, searches for the words given in a second csv file, and replaces them. My code is as below; it doesn't return any errors, but it is not replacing any words for some reason and prints back the same sentences without any replacement.</p> <pre><code> import string import pandas as pd text=pd.read_csv("sentences.csv") change=pd.read_csv("replace.csv") for row in text: print(text.replace(change['word'],change['replacement'])) </code></pre> <p>The sentences csv file looks like</p> <p><a href="https://i.stack.imgur.com/W7dPA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W7dPA.png" alt="enter image description here"></a></p> <p>and the change csv file looks like</p> <p><a href="https://i.stack.imgur.com/UR91s.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UR91s.png" alt="enter image description here"></a></p>
<p>Try:</p> <pre><code>text=pd.read_csv("sentences.csv") change=pd.read_csv("replace.csv") toupdate = dict(zip(change.word, change.replacement)) text = text['sentences'].replace(toupdate, regex=True) print(text) </code></pre>
python|string|pandas|replace|nltk
3
377,603
59,149,967
Python plotting graph from csv problems
<p><a href="https://1drv.ms/u/s!AtXcEqW2iQpCjGnWFi8RveymnQwD?e=03kIMt" rel="nofollow noreferrer">csv file</a></p> <p>Hello so I have this csv file that I want to convert to a graph, what I want is it to pretty much graph the number of jobs in each region by city. I have the columns for both cities and countries in this csv file, I want to toss out the date created and just have the city and number of job offers. </p> <p>Here is the code I tried to use and it didn't work: </p> <pre><code>import pandas as pd from matplotlib.pyplot import pie, axis, show %matplotlib inline df = pd.read_csv ('compuTrabajo_business_summary_by_industry.csv') sums = df.groupby(df["country;"])["business count"].sum() axis('equal'); pie(sums, labels=sums.index); show() </code></pre> <p>Thanks for the help </p>
<p>As Abhinav Kinagi already answered, <code>pandas</code> assumes that your values are separated by commas. You can either change your csv file or simply put <code>sep='|'</code> in <code>pd.read_csv</code>. Your code should be</p> <pre class="lang-py prettyprint-override"><code>%matplotlib inline import pandas as pd from matplotlib.pyplot import pie, axis, show df = pd.read_csv ('compuTrabajo_business_summary_by_industry.csv', sep='|') sums = df.groupby(df["country"])["business count"].sum() axis('equal'); pie(sums, labels=sums.index); show() </code></pre> <p>I also removed the <code>;</code> after <code>country</code>.</p>
python|pandas|csv|matplotlib|graph
0
377,604
59,251,168
Different kind of Apostrophe in python string comparison
<p>I'm trying to do a string search operation using Python and it's not working because I have three different kinds of apostrophes in my text <a href="https://i.stack.imgur.com/SI84O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SI84O.png" alt="images of apostrophes"></a>. I imported my data from Word documents. Example comparison text:</p> <blockquote> <p>Stimmt`s and Stimmt’s or Stimmt's</p> </blockquote> <p>They all return false when compared like</p> <pre><code>"’" == "'" </code></pre> <p>Any ideas on how to avoid this?</p> <p><strong>EDIT:</strong></p> <p>I think this difference in apostrophes is caused by different encodings, such as UTF-8 vs ASCII (I imported my data from Word documents). So replacing apostrophes is one solution, but there might be other characters which could cause problems. So I'm looking for a way to make sure the text is imported using the proper encoding.</p>
<p>If you replace all unusual forms of the apostrophe before doing anything else, you avoid running into any problems:</p> <p><code>df = df.replace("`|’", "'", regex=True)</code></p>
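<p>A slightly broader sketch of the same idea: collect every apostrophe-like character you expect (the set below is only an illustration; extend it to match your data) and normalize them all in one regex pass:</p>
<pre><code>APOSTROPHES = "\u2018\u2019\u0060\u00B4"  # ‘ ’ ` ´
df = df.replace("[%s]" % APOSTROPHES, "'", regex=True)
</code></pre>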
python|string|pandas
2
377,605
59,136,292
Pandas working with 2D array inside dataframe
<p>I have a Pandas DataFrame containing a 2D array as a column, looking something like the following:</p> <pre><code>Name 2DValueList item 1 [ [ 0.0, 1.0 ], [ 0.0, 6.0 ], [ 0.0, 2.0 ] ] item 2 [ [ 0.0, 2.0 ], [ 0.0, 1.0 ], [ 0.0, 1.0 ] ] item 3 [ [ 0.0, 1.0 ], [ 0.0, 3.0 ], [ 0.0, 5.0 ], [ 0.0, 1.0 ] ] item 4 item 5 [ [ 0.0, 4.0 ], [ 0.0, 1.0 ], [ 0.0, 2.0 ] ] </code></pre> <p>The first value isn't relevant to this question, so I've just made them all 0. I'm only interested in the second values. Also notice the number of pairs can vary, or a cell can be empty.</p> <p><strong>I want to be able to make a new dataframe that just contains the top (largest) <em>n</em> elements from the array.</strong></p> <p>It would look like this for the top 2 elements:</p> <pre><code>Name 2DValueList item 1 [ [ 0.0, 6.0 ], [ 0.0, 2.0 ] ] item 2 [ [ 0.0, 2.0 ], [ 0.0, 1.0 ] ] item 3 [ [ 0.0, 5.0 ], [ 0.0, 3.0 ] ] item 4 item 5 [ [ 0.0, 4.0 ], [ 0.0, 2.0 ] ] </code></pre> <p>I would use pandas <code>nlargest</code>, but I'm not sure how to make it accept a column that is a 2D array.</p> <p>In reality, the 2D array holds thousands of value pairs and there are tens of thousands of rows. I'm open to better ways to hold this data that would be more versatile.</p>
<p>If every cell of <code>2DValueList</code> is a list of lists, the efficient way is to use <code>heapq.nlargest</code> with <code>itemgetter</code> together with a list comprehension:</p> <pre><code>from heapq import nlargest from operator import itemgetter df['new_list'] = [nlargest(2, x, key=itemgetter(1)) for x in df['2DValueList']] Out[119]: Name 2DValueList new_list 0 item 1 [[0, 1], [0, 6], [0, 2]] [[0, 6], [0, 2]] 1 item 2 [[0, 2], [0, 1], [0, 1]] [[0, 2], [0, 1]] 2 item 3 [[0, 1], [0, 3], [0, 5]] [[0, 5], [0, 3]] 3 item 4 [[0, 4], [0, 1], [0, 2]] [[0, 4], [0, 2]] </code></pre> <p>If each cell is a numpy 2D array, the above method still works fine. However, I think using numpy <code>argsort</code> would be better:</p> <pre><code>df['new_list'] = [x[np.argsort(-x, axis=0)[:2,1]] for x in df['2DValueList']] Out[128]: Name 2DValueList new_list 0 item 1 [[0, 1], [0, 6], [0, 2]] [[0, 6], [0, 2]] 1 item 2 [[0, 2], [0, 1], [0, 1]] [[0, 2], [0, 1]] 2 item 3 [[0, 1], [0, 3], [0, 5]] [[0, 5], [0, 3]] 3 item 4 [[0, 4], [0, 1], [0, 2]] [[0, 4], [0, 2]] </code></pre> <p>Lastly, if you don't need the top n largest sub-arrays in sorted order, <code>argpartition</code> would be faster than <code>argsort</code> (see the sketch below).</p>
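<p>For completeness, a sketch of that last suggestion (a hypothetical helper; the guard also handles the empty cells mentioned in the question):</p>
<pre><code>import numpy as np

def top_n(cell, n=2):
    arr = np.asarray(cell)
    if len(arr) &lt;= n:              # nothing to trim; also covers empty cells
        return cell
    idx = np.argpartition(-arr[:, 1], n)[:n]   # indices of the n largest, unsorted
    return arr[idx]

df['new_list'] = [top_n(c) for c in df['2DValueList']]
</code></pre>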
python|arrays|pandas|data-structures
1
377,606
59,171,129
What is the best way to find approximate unordered arrays in python?
<p>Can you help me detect all approximately equal unordered arrays? For example, I have an array (a) like this:</p> <pre><code>a = np.array([[(1.000, 2.000, 1.000), (1.000, 3.000, 2.000), (4.000, 3.000, 1.000)], [(1.000, 3.000, 2.000), (4.000, 3.000, 1.000), (1.001, 2.000, 1.000)], [(4.000, 2.999, 1.001), (1.000, 2.000, 1.000), (1.000, 3.000, 2.000)], [(5.000, 2.000, 2.000), (4.000, 3.000, 1.000), (2.000, 3.000, 1.000)]]) </code></pre> <p>The second and third lines are almost equal to the first line, up to shuffled positions and an accepted error of 0.001.</p> <p>Output:</p> <pre><code>[[[1.000 2.000 1.000] [1.000 3.000 2.000] [4.000 3.000 1.000]] [[5.000 2.000 2.000] [4.000 3.000 1.000] [2.000 3.000 1.000]]] </code></pre>
<p>There are two elements of your task: rounding, and taking the uniques.</p> <p>If you want to have two numbers be considered "the same" if they differ by less than .01, that's rather difficult. Among other things, this is nontransitive; A can be "close" to B and B "close" to C, without A being "close" to C. In math terms, this isn't an equivalence relation. Since another way of saying "take only one of each set of close lists" is "take only one member of each equivalence class", the fact that this isn't an equivalence relation is a problem.</p> <p>A subtly different, but more feasible, interpretation is to take your problem statement as asking to round to two decimal places; this can be accomplished with</p> <pre><code>decimal_points = 2 rounded_tuples = [tuple(tuple(round(element, decimal_points) for element in inner_list) for inner_list in outer_list) for outer_list in a] </code></pre> <p>Taking uniques is much more difficult. One method is to convert everything to a canonical member of its equivalence class, then take the set of all such members. This is why I have tuples above; taking the set requires immutable data structures such as tuples.</p> <p>Now, rather than the equivalence relation being "close", it's "differ by a permutation". With this equivalence relation, one way of getting a canonical member is to sort each list. But now the problem is that we have lists of lists, so we have to have some ordering of the sublists. A simple ordering is lexicographic ordering: sort according to the first element, then among each list with the same first element, sort according to the second, etc.</p> <pre><code>def sort_nested_tuple(nested_tuple): for i in range(len(nested_tuple[0])): nested_tuple = tuple(sorted(nested_tuple, key = lambda x: x[i])) return nested_tuple sorted_tuples = [sort_nested_tuple(outer_tuple) for outer_tuple in rounded_tuples] </code></pre> <p>Here, I'm going through each element and sorting by it. Since the last element is used as a key last, it overrides all previous keys. Only if two lists have the same last element is the ordering from the second-to-last element preserved. So this can be considered "little-endian" lexicographic ordering, but it's not really important what ordering you have as long as it's consistent.</p> <p>Now we just have to take the set of the resulting tuples:</p> <pre><code>uniques = set(sorted_tuples) </code></pre> <p>This results in a set of tuples, rather than numpy objects, but you can convert back if you want. Also, you're getting the canonical list, which likely is not any of the original lists, so if you want to have a result consisting of lists that appeared in the original input, you'll have to do more work for that. A somewhat brute-force method would be:</p> <pre><code>unique_originals = [] for unique in uniques: for original, sorted_tuple in zip(a, sorted_tuples): if sorted_tuple == unique: unique_originals.append(original) break </code></pre>
python|python-3.x|numpy
1
377,607
59,161,124
Divide each element by 2 and it should ignore "String" values
<p>Divide each element by 2; it should ignore "string" values. The end result should be a Pandas DataFrame only.</p> <pre><code>df=pd.DataFrame({'a':[3,6,9], 'b':[2,4,6], 'c':[1,2,3]}) print(df) </code></pre>
<p>You can do the following (I added a column with string values for demonstration):</p> <pre><code>df=pd.DataFrame({'a':[3,6,9], 'b':[2,4,6], 'c':[1,2,3], 'd':['a', 'b', 'c']}) for i in list(df.keys()): try: df[i] = df[i]/2 except TypeError: df[i] = df[i] print(df) </code></pre> <p>This gives:</p> <pre><code> a b c d 0 1.5 1.0 0.5 a 1 3.0 2.0 1.0 b 2 4.5 3.0 1.5 c </code></pre> <p>If you have columns with mixed integer and string types:</p> <pre><code>df=pd.DataFrame({'a':[3,6,9, 'a'], 'b':[2,4,6, 8], 'c':[1,2,3,4], 'd':['a', 'b', 'c', 'd']}) for i in list(df.keys()): try: df[i] = df[i]/2 except TypeError: df[i] = df[i].astype(str) mynewlist = [int(s)/2 if s.isdigit() else s for s in list(df[i].values)] df[i] = mynewlist print(df) </code></pre> <p>which gives:</p> <pre><code> a b c d 0 1.5 1.0 0.5 a 1 3 2.0 1.0 b 2 4.5 3.0 1.5 c 3 a 0.5 0.5 d </code></pre>
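<p>A more idiomatic sketch, assuming each column is either entirely numeric or entirely non-numeric: select the numeric columns by dtype and divide only those.</p>
<pre><code>num_cols = df.select_dtypes(include='number').columns
df[num_cols] = df[num_cols] / 2
</code></pre>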
python|pandas
0
377,608
59,414,122
Is there an idiomatic pandas way to get indices from 2 lists that represent a start and stop signal?
<p>I have lists like this:</p> <pre><code>index A B 0 false true 1 false false 2 true false 3 false false 4 false false 5 false false 6 true false 7 false false 8 false true 9 false false 10 false false 11 true false 12 false false </code></pre> <p>the output I need is (2, 8)</p> <p>the logic is that once a true has been detected in A, I will look for the next true in B, ignoring any other true in A</p> <p>in practice it would do that:</p> <pre><code>index A B 0 false true // ignore the true in B 1 false false 2 true false &lt;- start 3 false false 4 false false 5 false false 6 true false // ignore the true in A 7 false false 8 false true &lt;- end 9 false false 10 false false 11 true false &lt;- start again 12 false false </code></pre> <p>so I can do it in a loop (pseudocode):</p> <pre><code>for i in a.index: for j in b.index[i + 1:]: if b[j]: # write i and j somewhere i = j break </code></pre> <p>It is pseudo code because the i = j line will not work</p> <p>Is there a panda-ish solution to implement this?</p> <p>It is very similar to my previous question (<a href="https://stackoverflow.com/questions/59400530/track-state-reversal-in-pandas-by-comparing-two-columns/59400721">track state reversal in Pandas by comparing two columns</a>) but the main difference is that, once a start (column 'A') has been detected, I want to ignore all of them until there is a stop (column 'B')</p> <p>The fastest 'loop' solution I have found so far is:</p> <pre><code>i = 0 while i &lt; len(A): start = A[i:].idxmax() stop = B[start + 1:].idxmax() print(start, stop) i = stop </code></pre>
<pre><code>a = [False, True, False, True] b = [True, False, False, True] if True in a: aNum = a.index(True) bNum = b[aNum:].index(True) + aNum if True in b[aNum:] else None else: aNum = None bNum = None print((aNum, bNum)) </code></pre> <h3>Output</h3> <pre><code>(1, 3) </code></pre>
python|pandas
1
377,609
59,182,550
Some array indexing in numpy
<pre><code> lookup = np.array([60, 40, 50, 60, 90]) </code></pre> <p>The values in the following arrays are indices into <code>lookup</code>.</p> <pre><code> a = np.array([1, 2, 0, 4, 3, 2, 4, 2, 0]) b = np.array([0, 1, 2, 3, 3, 4, 1, 2, 1]) c = np.array([4, 2, 1, 4, 4, 0, 4, 4, 2]) array 1st column elements lookup value a 1 --&gt; 40 b 0 --&gt; 60 c 4 --&gt; 90 </code></pre> <p>The maximum is 90.</p> <p>So, the first element of the result is 4.</p> <p>This way,</p> <p>expected result = array([4, 2, 0, 4, 4, 4, 4, 4, 0])</p> <p>How do I get it?</p> <p>I tried:</p> <pre><code>d = np.vstack([a, b, c]) print (d) res = lookup[d] res = np.max(res, axis = 0) print (d[enumerate(lookup)]) </code></pre> <p>I got this error:</p> <p>IndexError: only integers, slices (:), ellipsis (...), numpy.newaxis (None) and integer or boolean arrays are valid indices</p>
<p>Do you want this:</p> <pre><code>d = np.vstack([a,b,c]) # option 1 rows = lookup[d].argmax(0) d[rows, np.arange(d.shape[1])] # option 2 (lookup[:,None] == lookup[d].max(0)).argmax(0) </code></pre> <p>Output:</p> <pre><code>array([4, 2, 0, 4, 4, 4, 4, 4, 0]) </code></pre>
numpy
1
377,610
59,376,225
Random number: same number on every run, different number for each row
<p>I want the following function to return a different number for each row in a data frame, but the same number every time the function runs.</p> <p>Thanks.</p> <pre><code>def inc14(p): if p==1: return random.randint(1,2000) elif p==2: return random.randint(2001,3000) elif p==3: return random.randint(3001,4000) elif p==4: return random.randint(4001,5000) elif p==5: return random.randint(5001,7000) elif p==6: return random.randint(7001,9000) elif p==7: return random.randint(9001,12000) elif p==8: return random.randint(12001,15000) elif p==9: return random.randint(15001,20000) elif p==10: return random.randint(20001,40000) elif p==11: return 0.01 else: return np.NaN data['inc_cont14']=data['inc14'].apply(inc14) </code></pre>
<p><strong>Defined ranges don't matter:</strong></p> <p>Here is a running example for the case where the defined ranges don't matter; if they do matter, see below:</p> <pre><code>import random import pandas as pd random.seed(42) # Seed is here to always produce the same numbers data = {'Name':['Tom', 'nick', 'krish', 'jack'], 'Age':[20, 21, 19, 18]} df = pd.DataFrame(data) #create a dummy dataframe # The dataframe has 4 rows. So we need 4 random numbers. # If we want to generate 4 random numbers without duplicates, we can use random.sample # In this example we sample 4 random numbers in the range of 0-399 range_multiplier = 100 df['Random'] = random.sample(range(len(df.index)*range_multiplier), len(df.index)) print(df) </code></pre> <p>Output:</p> <pre><code> Name Age Random 0 Tom 20 327 1 nick 21 57 2 krish 19 12 3 jack 18 379 </code></pre> <p>You can run the same code and you will get the same random numbers as I did if you use the same seed as I used.</p> <p><strong>Defined ranges matter:</strong></p> <p>And in case you need these ranges, here is the new function, which is a lot shorter, but you have to prepare all the numbers:</p> <pre><code>random.seed(42) # Seed is here to always produce the same numbers # for all p (1-10) and their ranges (1-2000, 2001-3000, 3001-4000, ...) # we generate a dictionary with p as the key # and as value a list of all numbers in the defined range, # without duplicates, with random.sample p_numbers = { 1: random.sample(range(1, 2001), 2000), 2: random.sample(range(2001, 3001), 1000), ... 10: random.sample(range(20001,40001), 20000) } def inc14(p, p_numbers): if p &gt;= 1 and p &lt;= 10: # take the first element of the numbers and remove it # from the list (to avoid taking it again) return p_numbers[p].pop(0) elif p == 11: return 0.01 else: return np.nan data['inc_cont14']=data['inc14'].apply(inc14, args=(p_numbers,)) </code></pre> <p>We need the seed again to not get any duplicates.</p> <p>We create a dictionary with the available numbers for each p. If p is between 1 and 10, we take the first number from the dictionary's list and remove it from there so we don't get it twice.</p>
python|pandas
1
377,611
59,136,086
Tensorflow 'NoneType' object has no attribute 'shape'
<pre><code>import tensorflow as tf import numpy as np import matplotlib.pyplot as plt import os import cv2 from tqdm import tqdm DATADIR ="C:/Users/Park/Project/TrainingData" CATEGORIES = ["doll", "machine", "puzzle"] for category in CATEGORIES: path = os.path.join(DATADIR, category) for img in os.listdir(path): img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_COLOR) print(" Image shape : ", img_array.shape) plt.imshow(img_array) plt.show() </code></pre> <p>Each folder contains 50 image files in jpg format. The output works up to the second folder ('machine'), but the third folder ('puzzle') is never output. If I change the order of the folder names, the pictures are displayed regardless of the number of images. When I try to output the third folder, I get a 'NoneType' object has no attribute 'shape' error.</p>
<p>To surface the problem quickly, add</p> <pre><code>if img_array is None: print("imread failed on {}".format(img)) </code></pre>
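<p>A fuller sketch of the loop that skips unreadable files (<code>cv2.imread</code> returns <code>None</code> for anything it cannot decode, e.g. a hidden non-image file such as <code>Thumbs.db</code> or a corrupt jpg sitting in that third folder):</p>
<pre><code>for img in os.listdir(path):
    img_array = cv2.imread(os.path.join(path, img), cv2.IMREAD_COLOR)
    if img_array is None:
        print("imread failed on {}".format(img))
        continue                    # skip the bad file instead of crashing
    print(" Image shape : ", img_array.shape)
</code></pre>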
python|tensorflow|operating-system|cv2
0
377,612
59,196,765
ParseError: Error tokenizing data. C error: Expected 50 fields in line 224599, saw 51
<p>I'm trying to <code>pd.concat</code> multiple .xlsx files in a master CSV and then combine this CSV with past CPU data which is also in CSV format. </p> <p>The first operation is a success (op 3 out of 8), however on the second pass (history + current data in CSV format - op 7 out of 8) I'm getting the ParseError as shown below. </p> <p>I've checked both files and there appears to be no separator conflict, data are in the correct columns etc.</p> <blockquote> <p>Error tokenizing data. C error: Expected 50 fields in line 224599, saw 51</p> </blockquote> <p>My code is as follows:</p> <pre><code>import pandas as pd import os import glob def sremove(fn): os.remove(fn) if os.path.exists(fn) else None def mergeit(): df = pd.concat(pd.read_excel(fl) for fl in path1) df.to_csv(path2, index = False) def mergeit2(): df = pd.concat(pd.read_csv(fl) for fl in path1) df.to_csv(path2, index = False) print("\n#Operation 3 - Incidents Dataset") print("Incidents Dataset operation has started") fn = "S:\\CPU CacheU Data\\201920\\Incidents_201920.csv" sremove (fn) print("Incidents 2019/20 file has been deleted - Operation 1 of 8") path1 = glob.glob('S:\*CPU CacheU Data\*Inc Dataset\Incidents Dataset*.xlsx') print ("Path 1 - Incidents 2019/20 folder has been read successfully - Operation 2 of 8") path2 = "S:\\CPU CacheU Data\\Incidents_201920.csv" print ("Path 2 - Incidents 2019/20 Dataset File has been read successfully - Operation 3 of 8") mergeit() print ("Action has been completed successfully - Incidents Dataset 2019/20 Updated - Operation 4 of 8") fn = "S:\\CPU CacheU Data\\Incidents_Dataset.csv" sremove(fn) print (" Incidents Dataset Old file has been deleted - Operation 5 of 8") path1 = glob.glob('S:\*CPU CacheU Data\*Incidents_*.csv') print ("Path 1 - Incidents folder has been read successfully - Operation 6 of 8") path2 = "S:\\CPU CacheU Data\\Incidents_Dataset.csv" print ("Path 2 - Incidents Dataset File has been read successfully - Operation 7 of 8") mergeit2() print ("Path 2 - Incidents Dataset File has been updated successfully - Operation 8 of 8") </code></pre> <p>A couple of notes:</p> <p>1) Op 3 out of 8 takes a really long time to run. I'm not sure if that's because of the xlsx to csv conversion.</p> <p>2) I've tried to add the <code>error_bad_lines = False</code> statement in the <code>def mergeit2()</code> function but it seems to be taking forever to generate the master file. </p>
<p>Check the separators in your csv file; maybe there are extra commas inside cells. <code>read_csv</code> uses <code>sep=','</code> by default. Probably you should set a different separator when opening your csv file: <code>pd.read_csv(sep=' ')</code></p>
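<p>To see what is actually wrong, a sketch that prints the offending line before parsing (the file name is taken from the script above; adjust as needed):</p>
<pre><code>with open("S:\\CPU CacheU Data\\Incidents_Dataset.csv") as f:
    for i, line in enumerate(f, start=1):
        if i == 224599:
            print(line)     # inspect where the 51st field comes from
            break
</code></pre>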
pandas|concat
1
377,613
59,118,533
numpy multiple boolean index arrays
<p>I have an array which I want to use boolean indexing on, with multiple index arrays, each producing a different array. Example:</p> <pre><code>w = np.array([1,2,3]) b = np.array([[False, True, True], [True, False, False]]) </code></pre> <p>Should return something along the lines of:</p> <pre><code>[[2,3], [1]] </code></pre> <p>I assume that since the number of cells containing <code>True</code> can vary between masks, I cannot expect the result to reside in a 2d numpy array, but I'm still hoping for something more elegant than iterating over the masks and appending the result of indexing <code>w</code> by the i-th <code>b</code> mask.</p> <p>Am I missing a better option?</p> <p>Edit: The next step I want to do afterwards is to sum each of the arrays returned by <code>w[b]</code>, returning a list of scalars. If that somehow makes the problem easier, I'd love to know as well.</p>
<p>Assuming you want a list of numpy arrays, you can simply use a comprehension:</p> <pre><code>w = np.array([1,2,3]) b = np.array([[False, True, True], [True, False, False]]) [w[mask] for mask in b] # [array([2, 3]), array([1])] </code></pre> <p>If your goal is just a sum of the masked values, you can use:</p> <pre><code>np.sum(w*b) # 6 </code></pre> <p>or</p> <pre><code>np.sum(w*b, axis=1) # array([5, 1]) # or b @ w </code></pre> <p>…since <code>False</code> times your number will be 0 and therefore won't affect the sum.</p>
python|numpy|indexing|boolean|mask
1
377,614
59,259,628
unable to write multi-index dataframe to excel
<p>I want to write a multi-index dataframe to excel:</p> <pre><code>col = [['info', '', 'key'], ['alert', 'date', 'price'], ['alert', 'date', 'amount']] df = pd.DataFrame(columns = pd.MultiIndex.from_tuples(col)) df.loc[0, :] = np.random.random(3) df.to_excel('data.xlsx', index = False) </code></pre> <p>However, an error occurs:</p> <pre><code>NotImplementedError: Writing to Excel with MultiIndex columns and no index ('index'=False) is not yet implemented. </code></pre> <p>I checked pandas version : <code>pd.__version__</code> and the result is <code>'0.25.3'</code>.</p> <p>How to solve the problem?</p> <p>Thank you.</p> <p><a href="https://i.stack.imgur.com/YTTke.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YTTke.png" alt="enter image description here"></a></p>
<p>After searching the web, I used <code>pywin32</code> to solve the problem.</p> <pre><code>import win32com.client as win32 df.to_excel('data.xlsx', index = True) excel = win32.gencache.EnsureDispatch('Excel.Application') excel.DisplayAlerts = False wb = excel.Workbooks.Open('data.xlsx') excel.Visible = True ws = wb.Worksheets('Sheet1') ws.Columns(1).EntireColumn.Delete() wb.SaveAs('data.xlsx') excel.Application.Quit() </code></pre>
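<p>If two header rows are not strictly required, a lighter-weight sketch is to flatten the column MultiIndex before writing, which sidesteps the limitation entirely:</p>
<pre><code>df2 = df.copy()
df2.columns = ['_'.join(filter(None, col)) for col in df2.columns]  # e.g. ('info', '', 'key') -&gt; 'info_key'
df2.to_excel('data.xlsx', index=False)
</code></pre>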
pandas
1
377,615
59,046,447
Invalid Argument error while using keras model API inside an estimator model_fn
<p>The <code>model_fn</code> for custom estimator which I have built is as shown below,</p> <pre class="lang-py prettyprint-override"><code>def _model_fn(features, labels, mode): """ Mask RCNN Model function """ self.keras_model = self.build_graph(mode, config) outputs = self.keras_model(features) # ERROR STATEMENT # outputs = self.keras_model(list(features.values())) # Same ERROR with this statement # Predictions if mode == tf.estimator.ModeKeys.PREDICT: ... # Defining Prediction Spec # Training if mode == tf.estimator.ModeKeys.TRAIN: # Defining Loss and Training Spec ... # Evaluation ... </code></pre> <p>The <code>_model_fn()</code> receives arguments <code>features</code> and <code>labels</code> from <code>tf.data</code> in form:</p> <pre class="lang-py prettyprint-override"><code>features = { 'a' : (batch_size, h, w, 3) # dtype: float 'b' : (batch_size, n) # # dtype: float } # And labels = [] </code></pre> <p>The <code>self.keras_model</code> is built using <strong><code>tensorflow.keras.models.Model</code></strong> API with Input placeholders (defined using layer <code>tensorflow.keras.layers.Input()</code>) of name <code>'a'</code> and <code>'b'</code> for respective shapes.</p> <p>After running the estimator using <code>train_and_evaluate()</code> the <code>_model_fn</code> is running fine. The graph is initialized, but when the training starts I'm facing the following issue:</p> <blockquote> <p>tensorflow.python.framework.errors_impl.InvalidArgumentError: You must feed a value for placeholder tensor 'a' with dtype float and shape [?,128,128,3] [[{{node a}}]]</p> </blockquote> <p>I have worked with custom estimators before, this the first time using <strong><code>tensorflow.keras.models.Model</code></strong> API inside the <code>_model_fn</code> to compute the graph.</p>
<p>This problem occurs only with this particular model (Mask-RCNN). To overcome this problem, slight modifications can be made in the method <code>self.build_graph(mode, config)</code> as follows:</p> <pre class="lang-py prettyprint-override"><code>def build_graph(mode, config): # For Input placeholder definition a = KL.Input(tensor=features['a']) # Earlier # a = KL.Input(shape=[batch_size, h, w, 3], name='a') b = KL.Input(tensor=features['b']) # Earlier # b = KL.Input(shape=[batch_size, n], name='b') ... ... </code></pre> <p>These modifications wrap the feature tensors directly into <code>tensorflow.keras.layers.Input()</code>, which can later be used as input arguments when defining the model with <code>tensorflow.keras.models.Model</code>.</p>
python|tensorflow|keras|tensorflow-estimator
0
377,616
59,349,347
How can I extract values from a dataframe or filter based on some criteria in Python?
<p>I have a data frame with extracted values from some files. How can I filter or extract the first two rows of data after the value u in col 1. The col 1 value will have a range of 80 that I want to capture after the value u. The value u might be two or three files after a new filex in col 0 or none at all as shown below in file3.</p> <pre><code> 0 1 2 3 0 file1 value u file1 value u 1 file1 value u file1 value u 2 file1 value 85 file1 th_v 5 3 file1 value 10 file1 th_v 2 4 file1 value 10 file1 th_v 4 5 file1 value 88 file1 th_v 4 6 file2 value u file2 value u 7 file2 value 88 file2 th_v 7 8 file2 value 2 file2 th_v 4 9 file2 value 88 file2 th_v 3 10 file2 value 0 file2 th_v 1 11 file3 value 89 file3 th_v 5 12 file3 value 2 file3 th_v 5 13 file3 value 4 file3 th_v 1 output: 0 1 2 3 0 file1 value 85 file1 th_v 5 1 file1 value 10 file1 th_v 2 2 file2 value 88 file2 th_v 7 3 file2 value 2 file2 th_v 4 4 file3 value 89 file3 th_v 5 5 file3 value 2 file3 th_v 5 </code></pre>
<p>If the pairs of columns follow the same pattern (the same groups in the file columns and the same placement of values in the value columns), you can test whether the last value is numeric and then use <a href="http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.head.html" rel="nofollow noreferrer"><code>GroupBy.head</code></a>:</p> <pre><code>df = df[df[1].str.contains('\d$')].groupby(0).head(2) </code></pre>
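<p>Spelled out step by step (a sketch; the <code>reset_index</code> reproduces the renumbered rows shown in the expected output):</p>
<pre><code>mask = df[1].str.contains(r'\d$')    # rows whose value column ends in a digit, i.e. not 'u'
out = df[mask].groupby(0).head(2).reset_index(drop=True)
</code></pre>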
python|pandas|dataframe|filter
0
377,617
59,322,523
Predict outcome based on user input with Neural Network
<p>I have written code for a simple <code>neural network</code> in <code>Python</code>. The neural network uses the <code>Sigmoid</code> function to predict the outcome (0 or 1). My question is, how can I predict the outcome based on my own input?</p> <p>For example, I want to make a prediction for these input values:</p> <pre><code>input 1: 0.3 input 2: -0.1 input 3: 0.1 my_input = [0.3, -0.1, 0.1] </code></pre> <p>Where should I pass these parameters / inputs? This is the code that I have:</p> <pre><code>import numpy as np import pandas as pd df = pd.DataFrame({'input 1':[0.5, 0.3, 0, 0.1, 0.4, -0.4, 0.4, 0.1, -0.6, 0.2, 0.6, 0, 0.2, 0.2, -0.1, -0.1, 0, 0.4, -0.2, -0.4], 'input 2':[0.3, 0.6, -0.4, -0.2, 0.9, 0, 0.35, -0.4, -0.9, 0.4, 0.3, -0.1, 0.1, 0.3, 0.1, 0.1, 0.3, 0.1, 0.3, 0.3], 'input 3':[0, 0.4, 0, -0.1, 0.4, -0.2, 0.7, -0.3, -0.1, 0.1, 0.3, 0, 0.5, 0.4, -0.31, 0.1, 0.3, 0.1, 0.1, 0.2], 'result':[1, 1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 0]}) print(df) def sigmoid(x): return 1 / (1 + np.exp(-x)) def sigmoid_derivate(x): return x * (1 - x) features = df.iloc[:,:-1].to_numpy() results = df.iloc[:,-1:].to_numpy() np.random.seed(1) weights = 2 * np.random.random((3,1)) - 1 print('These are my random weights:\n') print(weights) for iteration in range(100000): input_layer = features outputs = sigmoid(np.dot(input_layer, weights)) error = results - outputs adjustments = error * sigmoid_derivate(outputs) weights += np.dot(input_layer.T, adjustments) df['output prediction'] = outputs.round(0) print(df) </code></pre> <p>So, the output should be only one value, zero or one.</p> <p>Thanks for your help</p>
<p>Your prediction uses the same method as during training:</p> <pre><code>my_output = sigmoid(np.dot(my_input, weights)) </code></pre> <p>If you try using the first three examples of your training data as input, you will find correct outputs:</p> <pre><code>my_input = [0.3,-0.1,0.1] prediction: [1.] my_input = [0.5,.3,0] prediction: [1.] my_input = [0.0,-.4,0.0] prediction: [2.25648121e-13] </code></pre> <p>Congratulations, you implemented your own training!</p>
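<p>Since you want a hard 0/1 rather than a probability, a sketch that thresholds the sigmoid output at 0.5 (names taken from the code above):</p>
<pre><code>my_input = [0.3, -0.1, 0.1]
my_output = sigmoid(np.dot(my_input, weights))   # shape (1,)
predicted_class = int(my_output.round()[0])      # 0 or 1
print(predicted_class)
</code></pre>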
python|numpy|machine-learning|neural-network
1
377,618
59,042,750
Use of logical_and with list of lists
<p>Suppose I have the following two lists:</p> <pre><code>l = [[], [1]] m = [0, 1] </code></pre> <p>If I check to see if elements are in a list:</p> <pre><code>&gt;&gt;&gt; np.array(m[1]) == 1 True &gt;&gt;&gt; 1 in np.array(l)[1] True </code></pre> <p>This works as expected.</p> <p>However, if I use the numpy <code>logical_and</code> operator, this fails:</p> <pre><code>&gt;&gt;&gt; np.logical_and(np.array(m) == 1, 1 in np.array(l)) array([False, False]) </code></pre> <p>Why are both positions in the array being evaluated as <code>False</code>?</p> <p>The goal is to evaluate lists of these forms element-wise. And, as per the documentation, <code>np.logical_and</code> is used to "Compute the truth value of x1 AND x2 element-wise."</p> <p>Since l is a list of lists, I am using the <code>in</code> operator to test the element-wise comparisons. </p> <p>Thus, I expect the output of <code>&gt;&gt;&gt; np.logical_and(np.array(m) == 1, 1 in np.array(l))</code></p> <p>to be </p> <pre><code>array([False, True]) </code></pre> <p>after all, <code>[1]</code> is just an element in <code>np.array(l)</code>:</p> <pre><code>&gt;&gt;&gt; for i in np.array(l): ... print(i) ... [] [1] </code></pre>
<p>So, just analyzing your code:</p> <pre><code>np.array(m) == 1 &gt;&gt;&gt; [False True] 1 in np.array(l) &gt;&gt;&gt; False </code></pre> <p>You're basically comparing <strong>False</strong> with <strong>[False True]</strong>.</p> <p><strong>Update</strong></p> <p>If you want the output to be [False True], then you should use logical_or instead.</p>
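<p>Alternatively, if the intent is an element-wise "is 1 in each sublist" test, a sketch that builds that second operand explicitly also gives the expected result:</p>
<pre><code>membership = np.array([1 in row for row in l])        # array([False,  True])
print(np.logical_and(np.array(m) == 1, membership))   # [False  True]
</code></pre>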
python|numpy|boolean
0
377,619
14,271,121
Assume zero for subsequent dimensions when slicing an array
<p>I need to slice an array where I would like zero to be assumed for every dimension except the first.</p> <p>Given an array:</p> <pre><code>x = numpy.zeros((3,3,3)) </code></pre> <p>I would like the following behavior, but without needing to know the number of dimensions beforehand:</p> <pre><code>y = x[:,0,0] </code></pre> <p>Essentially I am looking for something that would take the place of <a href="http://docs.scipy.org/doc/numpy/reference/arrays.indexing.html#basic-slicing" rel="nofollow">Ellipsis</a>, but instead of expanding to the needed number of <code>:</code> objects, it would expand into the needed number of zeros.</p> <p>Is there anything built in for this? If not, what is the best way to get the functionality that I need?</p> <p><br /> <br /> <strong>Edit:</strong> <br />One way to do this is to use:</p> <pre><code>y = x.ravel('F')[0:x.shape[0]] </code></pre> <p>This works fine; however, in some cases (such as mine) <code>ravel</code> will need to create a copy of the array instead of a view. Since I am working with large arrays, I want a more memory-efficient way of doing this.</p>
<p>You could create an indexing tuple, like this:</p> <pre><code>from numpy import arange x = arange(3*3*3).reshape(3,3,3) s = (slice(None),) + (0,)*(x.ndim-1) print x[s] # array([ 0, 9, 18]) print x[:,0,0] # array([ 0, 9, 18]) </code></pre> <p>I guess you could also do:</p> <pre><code>x.transpose().flat[:3] </code></pre> <p>but I prefer the first approach, since it works for any dimension (rather than only the first), and it's obviously as efficient as just writing <code>x[:,0,0]</code>, since it's just a different syntax.</p>
python|numpy
3
377,620
13,876,441
Workaround for bug with displaying quiver key and patch in matplotlib 1.2.0
<p>Hej,</p> <p>I'm using the latest version (1.2.0) of matplotlib distributed with macports. I run into an AssertionError (I guess stemming from an internal test) running this code</p> <pre><code>#!/usr/bin/env python import numpy as np import matplotlib.pyplot as plt X,Y = np.meshgrid(np.arange(0, 2*np.pi, .2), np.arange(0, 2*np.pi, .2)) U = np.cos(X) V = np.sin(Y) Q = plt.quiver(U, V) plt.quiverkey(Q, 0.5, .9, 1., 'Label') plt.gca().add_patch(plt.Circle((10, 10), 1)) plt.savefig('test.pdf') </code></pre> <p>Three parts of this code are required for me to reproduce the error:</p> <ol> <li>The quiver plot has to have a key created with <code>quiverkey</code></li> <li>I have to add an additional patch to the current axes</li> <li>I have to save the figure as a PDF (I can display it just fine)</li> </ol> <p>The bug is not dependent on the backend. The traceback I get reads</p> <pre><code>Traceback (most recent call last): File "./test_quiver.py", line 15, in &lt;module&gt; plt.savefig('test.pdf') File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/pyplot.py", line 472, in savefig return fig.savefig(*args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 1363, in savefig self.canvas.print_figure(*args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 2093, in print_figure **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backend_bases.py", line 1845, in print_pdf return pdf.print_pdf(*args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2301, in print_pdf self.figure.draw(renderer) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper draw(artist, renderer, *args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/figure.py", line 999, in draw func(*args) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper draw(artist, renderer, *args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/axes.py", line 2086, in draw a.draw(renderer) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper draw(artist, renderer, *args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/quiver.py", line 306, in draw self.vector.draw(renderer) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper draw(artist, renderer, *args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/collections.py", line 755, in draw return Collection.draw(self, renderer) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/artist.py", line 54, in draw_wrapper draw(artist, renderer, *args, **kwargs) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/collections.py", line 259, in draw self._offset_position) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 1548, in draw_path_collection output(*self.gc.pop()) File "/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/matplotlib/backends/backend_pdf.py", line 2093, in pop assert self.parent is not None AssertionError </code></pre> <p>In case it's important: I'm on Mac OS X 10.7.5, using python 2.7.3 and matplotlib 1.2.0. Do you also get this error? Is it a bug in matplotlib? Is it system dependent? Is there some workaround?</p>
<p>You can save as EPS or SVG and convert to PDF. I found that the best way to produce small PDF files is to save as EPS in matplotlib and then use epstopdf.</p> <p>SVG also works fine; you can use Inkscape to convert to PDF. A side effect of SVG is that the text is converted to paths (no embedded fonts), which might be desirable in some circumstances.</p>
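<p>A minimal sketch of the EPS route, assuming <code>epstopdf</code> is available on your PATH:</p>
<pre><code>import subprocess
plt.savefig('test.eps')
subprocess.call(['epstopdf', 'test.eps'])   # writes test.pdf
</code></pre>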
python|numpy|matplotlib
2
377,621
13,854,632
Which scipy.optimize.minimize is least sensitive to starting location?
<p>I'm trying to minimize a function using one of the <a href="http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#scipy.optimize.minimize" rel="nofollow">scipy minimizers</a>. Unfortunately my function has plateaus of equal value, so minimisers get stuck there. I was wondering which of the scipy optimisers would be least sensitive to this, and why?</p> <p>I know I could start a number of times at random locations, but I'm not able to do that with what I am currently working on and have to use one of these minimisers out of the box.</p>
<p>Add a linear function of the coordinates to your function to give some nonzero, but very small slope to the flat areas. If your minimum/maximum is in a flat area, you need to decide which part of the flat area to choose as your final answer, so you might as well bias the whole search. After this arrives at a minimum/maximum, rerun the optimization using that as the starting point and no added bias.</p> <p>If there is a way to determine the boundaries of the search space, then you might try a large number of starting locations that uniformly sample the search space.</p>
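<p>A minimal sketch of that biased-then-refined approach, with a toy plateaued objective standing in for the real one (the bias size is an assumption; it must be tiny relative to the real variations of your function):</p>
<pre><code>import numpy as np
from scipy.optimize import minimize

def f(x):
    # toy objective: flat plateau for |x| &lt;= 1
    return np.maximum(np.abs(x[0]) - 1.0, 0.0)

eps = 1e-8
biased = lambda x: f(x) + eps * np.sum(x)    # tilt the plateaus slightly

res = minimize(biased, np.array([3.0]), method='Nelder-Mead')  # biased pass
res = minimize(f, res.x, method='Nelder-Mead')                 # rerun without bias
print(res.x)
</code></pre>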
optimization|numpy|scipy
1
377,622
14,126,201
Performance/standard using 1d vs 2d vectors in numpy
<p>Is there a standard practice for representing vectors as 1d or 2d ndarrays in NumPy? I'm moving from MATLAB which represents vectors as 2d arrays.</p>
<p>In my experience, 1D is the norm in numpy for vectors. The only good reason to keep a vector of <code>n</code> elements as a 2D array of shape <code>(1, n)</code> or <code>(n, 1)</code> is in a linear algebra context, where you want to keep row and column vectors differentiated. As EitanT hinted in his now-deleted answer, you would probably then want to use numpy's <code>matrix</code> type, which keeps the 2D shape of its results except for single-element access, e.g. if <code>a</code> has shape <code>(m, n)</code> then <code>a[0]</code> has shape <code>(n,)</code> for type <code>ndarray</code>, but shape <code>(1, n)</code> for type <code>matrix</code>, although <code>a[0, 0]</code> returns a scalar in both cases.</p> <p>If you stick with a 1D vector of shape <code>(n,)</code>, you can reshape on the fly for specific operations requiring the 2D shape:</p> <pre><code>a.reshape(-1, 1) # shape (n, 1) a[:, None] # shape (n, 1) a.reshape(1, -1) # shape (1, n) a[None, :] # shape (1, n) </code></pre> <p>Numpy will automatically reshape your 1D vectors to shape <code>(1, n)</code> when <a href="http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html" rel="nofollow">broadcasting</a> them for an operation involving a 2D array.</p>
python|matlab|numpy|linear-algebra
4
377,623
44,871,420
TensorFlow dynamic_rnn input for regression
<p>I'm stuck trying to convert an existing TensorFlow sequence-to-sequence classifier to a regressor.</p> <p>Currently I'm stuck in handling the input for <code>tf.nn.dynamic_rnn()</code>. According to the documentation and other answers, input should be in the shape of <code>(batch_size, sequence_length, input_size)</code>. However my input data has only two dimensions: <code>(sequence_length, batch_size)</code>.</p> <p>The original solution uses <code>tf.nn.embedding_lookup()</code> as an intermediate step before feeding input to <code>dynamic_rnn()</code>. If I understand correctly, I believe I don't need this step since I'm working on a regression problem, not a classification problem.</p> <p>Do I need the embedding_lookup step? If so, why? If not, how can I fit my <code>encoder_inputs</code> directly into <code>dynamic_rnn()</code>?</p> <p>Below is a working minimalized example of the general idea:</p> <pre><code>import numpy as np import tensorflow as tf tf.reset_default_graph() sess = tf.InteractiveSession() PAD = 0 EOS = 1 VOCAB_SIZE = 10 # Don't think I should need this for regression? input_embedding_size = 20 encoder_hidden_units = 20 decoder_hidden_units = encoder_hidden_units LENGTH_MIN = 3 LENGTH_MAX = 8 VOCAB_LOWER = 2 VOCAB_UPPER = VOCAB_SIZE BATCH_SIZE = 10 def get_random_sequences(): sequences = [] for j in range(BATCH_SIZE): random_numbers = np.random.randint(3, 10, size=8) sequences.append(random_numbers) sequences = np.asarray(sequences).T return(sequences) def next_feed(): batch = get_random_sequences() encoder_inputs_ = batch eos = np.ones(BATCH_SIZE) decoder_targets_ = np.hstack((batch.T, np.atleast_2d(eos).T)).T decoder_inputs_ = np.hstack((np.atleast_2d(eos).T, batch.T)).T #print(encoder_inputs_) #print(decoder_inputs_) return { encoder_inputs: encoder_inputs_, decoder_inputs: decoder_inputs_, decoder_targets: decoder_targets_, } ### "MAIN" # Placeholders encoder_inputs = tf.placeholder(shape=(LENGTH_MAX, BATCH_SIZE), dtype=tf.int32, name='encoder_inputs') decoder_targets = tf.placeholder(shape=(LENGTH_MAX + 1, BATCH_SIZE), dtype=tf.int32, name='decoder_targets') decoder_inputs = tf.placeholder(shape=(LENGTH_MAX + 1, BATCH_SIZE), dtype=tf.int32, name='decoder_inputs') # Don't think I should need this for regression problems embeddings = tf.Variable(tf.random_uniform([VOCAB_SIZE, input_embedding_size], -1.0, 1.0), dtype=tf.float32) encoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, encoder_inputs) decoder_inputs_embedded = tf.nn.embedding_lookup(embeddings, decoder_inputs) # Encoder RNN encoder_cell = tf.contrib.rnn.LSTMCell(encoder_hidden_units) encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn( encoder_cell, encoder_inputs_embedded, # Throws 'ValueError: Shape (8, 10) must have rank at least 3' if encoder_inputs is used dtype=tf.float32, time_major=True, ) # Decoder RNN decoder_cell = tf.contrib.rnn.LSTMCell(decoder_hidden_units) decoder_outputs, decoder_final_state = tf.nn.dynamic_rnn( decoder_cell, decoder_inputs_embedded, initial_state=encoder_final_state, dtype=tf.float32, time_major=True, scope="plain_decoder", ) decoder_logits = tf.contrib.layers.linear(decoder_outputs, VOCAB_SIZE) decoder_prediction = tf.argmax(decoder_logits, 2) # Loss function loss = tf.reduce_mean(tf.squared_difference(decoder_logits, tf.one_hot(decoder_targets, depth=VOCAB_SIZE, dtype=tf.float32))) train_op = tf.train.AdamOptimizer().minimize(loss) sess.run(tf.global_variables_initializer()) max_batches = 5000 batches_in_epoch = 500 print('Starting train') try: for batch in range(max_batches): feed = next_feed() _, l = sess.run([train_op, loss], feed) if batch == 0 or batch % batches_in_epoch == 0: print('batch {}'.format(batch)) print(' minibatch loss: {}'.format(sess.run(loss, feed))) predict_ = sess.run(decoder_prediction, feed) for i, (inp, pred) in enumerate(zip(feed[encoder_inputs].T, predict_.T)): print(' sample {}:'.format(i + 1)) print(' input &gt; {}'.format(inp)) print(' predicted &gt; {}'.format(pred)) if i &gt;= 2: break print() except KeyboardInterrupt: print('training interrupted') </code></pre> <p>I have read similar questions here on stackoverflow but find myself still puzzled as to how to solve this.</p> <p>EDIT: I think I should clarify that the code above works well; however, the real desired output should mimic a noisy signal (text-to-speech, for example), which is why I think I need continuous output values instead of words or letters.</p>
<p>If you are trying to do continuous regression, why can't you just reshape your input placeholders to be of shape <code>[BATCH, TIME_STEPS, 1]</code> and add that one extra dimension into your input via <code>tf.expand_dims(input, 2)</code>? This way, your input would match the dimensions that <code>dynamic_rnn</code> expects (actually in your case, since you are using <code>time_major=True</code>, your input should be of shape <code>[TIME_STEPS, BATCH, 1]</code>).</p> <p>I'd be curious to know how you'd then handle the switch of the output dimension from your cell size to 1. Right now you have this line:</p> <pre><code>decoder_logits = tf.contrib.layers.linear(decoder_outputs, VOCAB_SIZE) </code></pre> <p>But since you are no longer doing a classification, then <code>VOCAB_SIZE</code> is just 1? I asked a similar question here a few days ago, but didn't get any responses. I'm doing it this way (using 1), but not sure whether it's appropriate (seems to sort-of work in practice, but not perfectly).</p>
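<p>A sketch of that reshape using names from the question (the dtype change is an assumption, since continuous targets would be floats rather than ints):</p>
<pre><code>encoder_inputs = tf.placeholder(shape=(LENGTH_MAX, BATCH_SIZE), dtype=tf.float32,
                                name='encoder_inputs')
encoder_inputs_3d = tf.expand_dims(encoder_inputs, 2)   # (TIME_STEPS, BATCH, 1)

encoder_outputs, encoder_final_state = tf.nn.dynamic_rnn(
    encoder_cell, encoder_inputs_3d, dtype=tf.float32, time_major=True)
</code></pre>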
python|tensorflow|regression
1
377,624
44,872,795
Convert to True/False values of Pandas Dataframe
<p>I have a rather big dataframe that looks a bit like this:</p> <pre><code> | A | B | C | D | ----------------- | 0 | 1 | 3 | 4 | | 2 | 1 | 8 | 4 | | 0 | 2 | 3 | 1 | </code></pre> <p>I'm new(ish) to pandas but I can't figure out a way to make it look like this:</p> <pre><code> | obj1 | obj2 | obj3 | ------------------------ attr1 | True | False | True | attr2 | True | False | False | attr3 | True | False | False | </code></pre> <p>What's the most pythonic/fast way to go about this?</p> <p><strong>EDIT</strong></p> <p>I don't have any column in the dataframe with all the attributes. I could have an obj4 that has an attribute which is not seen anywhere else.</p>
<p>You need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a> + <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.eq.html" rel="nofollow noreferrer"><code>eq</code></a>:</p> <pre><code>df = df.set_index('obj1', drop=False).rename_axis(None) df = df.eq(df['obj1'], axis=0) print (df) obj1 obj2 obj3 attr1 True False True attr2 True False False attr3 True False False </code></pre> <p>Similar solution:</p> <pre><code>df = df.set_index('obj1', drop=False).rename_axis(None) df = df.eq(df.index.values, axis=0) print (df) obj1 obj2 obj3 attr1 True False True attr2 True False False attr3 True False False </code></pre> <p>And numpy solution:</p> <pre><code>df = pd.DataFrame(df.values == df['obj1'].values[:, None], index=df['obj1'].values, columns=df.columns) print (df) obj1 obj2 obj3 attr1 True False True attr2 True False False attr3 True False False </code></pre> <p>EDIT:</p> <p>For compare all values it is not easy:</p> <pre><code>vals = df.stack().unique() L = [pd.Series(df[x].unique(), index=df[x].unique()).reindex(index=vals) for x in df.columns] df1 = pd.concat(L, axis=1, keys=df.columns) print (df1) obj1 obj2 obj3 attr1 attr1 NaN attr1 attr2 attr2 attr2 NaN attr3 attr3 attr3 NaN attrN NaN attrN NaN df1 = df1.eq(df1.index.values, axis=0) print (df1) obj1 obj2 obj3 attr1 True False True attr2 True True False attr3 True True False attrN False True False </code></pre> <p>EDIT1:</p> <p>Another solution for <code>df1</code>:</p> <pre><code>stacked = df.stack() #reshape to MultiIndex df1 = stacked.reset_index(name='A').set_index(['level_1','A']) #MultiIndex with all possible values mux = pd.MultiIndex.from_product([df1.index.levels[0], stacked.unique()]) #reindex by MultiIndex df1 = df1.reindex(index=mux) #replace non NaN values to second level of MultiIndex df1['level_0'] = df1['level_0'].mask(df1['level_0'].notnull(), df1.index.get_level_values(1)) #reshape back df1 = df1['level_0'].unstack(0) print (df1) obj1 obj2 obj3 attr1 attr1 NaN attr1 attr2 attr2 attr2 NaN attr3 attr3 attr3 NaN attrN NaN attrN NaN </code></pre>
python|pandas|dataframe
6
377,625
44,964,006
apply normalisation on groupBy data in Pandas DataFrame
<p>DataFrame columns:</p> <pre><code>['PercentSalaryHike', 'Attrition', 'EmployeeCountFraction'] </code></pre> <p>After grouping by the first two columns, EmployeeCountFraction shows the <strong><em>fraction of people</em></strong> whose attrition is <strong><em>'yes'</em></strong> (and the rest <strong><em>'No'</em></strong>) for that particular <strong><em>PercentSalaryHike</em></strong>.</p> <p><a href="https://i.stack.imgur.com/byU37.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/byU37.png" alt="DataFrame"></a></p> <p>After resetting the index, the DataFrame looks like:</p> <p><a href="https://i.stack.imgur.com/RjqWu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RjqWu.png" alt="enter image description here"></a></p> <p>What I want exactly is to apply normalisation to simplify the DataFrame. It should look like:</p> <pre><code>PercentSalaryHike Attrition-Yes Attrition-No 11 0.195238 0.804762 12 0.166667 0.833333 13 0.837321 0.163351 .. .. .. </code></pre> <p>The sample I have given applies groupby on 2 fields. I want a general solution by which data grouped by n fields is normalised in this manner.</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.unstack.html" rel="nofollow noreferrer"><code>unstack</code></a> for reshape data, then <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.add_prefix.html" rel="nofollow noreferrer"><code>add_prefix</code></a>, <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer"><code>reset_index</code></a> and last <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename_axis.html" rel="nofollow noreferrer"><code>rename_axis</code></a>:</p> <pre><code>df = df['EmployeeCountFraction'].unstack() .add_prefix('Attrition-') .reset_index() .rename_axis(None, axis=1) print (df) PercentSalaryHike Attrition-No Attrition-Yes 0 11 0.804762 0.195238 1 12 0.833333 0.166667 2 13 0.837321 0.163351 </code></pre>
python|pandas|dataframe|data-analysis
1
377,626
44,873,478
ValueError: size of tuple must match number of fields while using genfromtxt
<pre><code>import os import numpy as np os.getcwd() os.chdir('C:/Users/Gururaj/Desktop/tech/python/') data = np.genfromtxt("simple.txt", dtype=None, delimiter=",", names=str(False)) print(data[0]) </code></pre> <p>=========================error as below======================</p> <h2>this is being done in Jupyter notebook for python 3</h2> <pre><code> ValueError Traceback (most recent call last) &lt;ipython-input-34-c2ba85f75012&gt; in &lt;module&gt;() 3 os.getcwd() 4 os.chdir('C:/Users/Gururaj/Desktop/tech/python/') ----&gt; 5 data = np.genfromtxt("simple.txt", dtype=None, delimiter=",", names=str(False)) 6 print(data[0]) C:\ProgramData\Anaconda3\lib\site-packages\numpy\lib\npyio.py in genfromtxt(fname, dtype, comments, delimiter, skip_header, skip_footer, converters, missing_values, filling_values, usecols, names, excludelist, deletechars, replace_space, autostrip, case_sensitive, defaultfmt, unpack, usemask, loose, invalid_raise, max_rows) 1874 ddtype = list(zip(names, column_types)) 1875 mdtype = list(zip(names, [np.bool] * len(column_types))) -&gt; 1876 output = np.array(data, dtype=ddtype) 1877 if usemask: 1878 outputmask = np.array(masks, dtype=mdtype) ValueError: size of tuple must match number of fields. =========================================================== Data in file: this,is,me,learning monty,pythons,flying,circus sounds,exciting,but,hmm </code></pre> <p>Kindly help as I've only started learning Python basics now.</p>
<p>The <code>genfromtxt</code> docs say:</p> <blockquote> <p>names : {None, True, str, sequence}</p> </blockquote> <p>you set it to:</p> <pre><code>In [689]: str(False) Out[689]: 'False' </code></pre> <p>In other words, you've told it to name one field 'False'. Hence the mismatch between the number of fields and the number of columns.</p> <p>I suspect you want <code>names=None</code>, meaning, don't take the field names from the file.</p>
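<p>A sketch of the corrected call, simply dropping the <code>names</code> argument (it defaults to <code>None</code>, meaning no field names are taken from the file):</p>
<pre><code>data = np.genfromtxt("simple.txt", dtype=None, delimiter=",")
print(data[0])
</code></pre>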
python|numpy|ipython|ipython-notebook|genfromtxt
1
377,627
45,039,027
Replace Some Columns of DataFrame with Another (Based on Column Names)
<p>I have a <code>DataFrame</code> <code>df1</code>:</p> <pre><code>| A | B | C | D | ----------------- | 0 | 1 | 3 | 4 | | 2 | 1 | 8 | 4 | | 0 | 2 | 3 | 1 | </code></pre> <p>and a <code>DataFrame</code> <code>df2</code>:</p> <pre><code>| A | D | --------- | 2 | 2 | | 3 | 2 | | 1 | 9 | </code></pre> <p>I want to replace column <code>A</code> and <code>D</code> of <code>df1</code> with the equivalent columns of <code>df2</code>.</p> <p>Surely I could do something like</p> <pre><code>df1['A'] = df2['A'] df1['D'] = df2['D'] </code></pre> <p>But I need a solution for doing this automatically since I have thousands of columns.</p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.combine_first.html" rel="noreferrer"><code>combine_first</code></a>:</p> <pre><code>df2.combine_first(df1) # A B C D #0 2 1.0 3.0 2 #1 3 1.0 8.0 2 #2 1 2.0 3.0 9 </code></pre>
python|pandas
6
377,628
45,105,651
Installed tensorflow, but pycharm ignores it
<p>I installed tensorflow by(answer from Joshua): <a href="https://stackoverflow.com/questions/43419795/how-to-install-tensorflow-on-anaconda-python-3-6">how to install tensorflow on anaconda python 3.6</a> If I test it in cmd:</p> <pre><code>D:\&gt;python Python 3.6.1 |Anaconda 4.4.0 (64-bit)| (default, May 11 2017, 13:25:24) [MSC v.1 900 64 bit (AMD64)] on win32 Type "help", "copyright", "credits" or "license" for more information. &gt;&gt;&gt; import tensorflow &gt;&gt;&gt; hello = tf.constant('Hello, TensorFlow!') Traceback (most recent call last): File "&lt;stdin&gt;", line 1, in &lt;module&gt; NameError: name 'tf' is not defined &gt;&gt;&gt; import tensorflow as tf &gt;&gt;&gt; hello = tf.constant('Hello, TensorFlow!') &gt;&gt;&gt; sess = tf.Session() 2017-07-14 16:21:53.235367: W d:\build\tensorflow\tensorflow- r1.2\tensorflow\cor e\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to us e SSE instructions, but these are available on your machine and could speed up C PU computations. 2017-07-14 16:21:53.508199: W d:\build\tensorflow\tensorflow- r1.2\tensorflow\cor e\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to us e SSE2 instructions, but these are available on your machine and could speed up CPU computations. 2017-07-14 16:21:53.511766: W d:\build\tensorflow\tensorflow- r1.2\tensorflow\cor e\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to us e SSE3 instructions, but these are available on your machine and could speed up CPU computations. 2017-07-14 16:21:53.515734: W d:\build\tensorflow\tensorflow- r1.2\tensorflow\cor e\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to us e SSE4.1 instructions, but these are available on your machine and could speed u p CPU computations. 2017-07-14 16:21:53.517818: W d:\build\tensorflow\tensorflow- r1.2\tensorflow\cor e\platform\cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to us e SSE4.2 instructions, but these are available on your machine and could speed u p CPU computations. &gt;&gt;&gt; print(sess.run(hello)) b'Hello, TensorFlow!' </code></pre> <p>So this shoulb be ok....but if I try t repeat this test in pycharm(even after I restarted pycharm): ModuleNotFoundError: No module named 'tensorflow'</p> <p>Any ideas why?</p>
<p>Just install TensorFlow from PyCharm's project settings (Settings → Project → Project Interpreter); you don't need Anaconda for this. PyCharm runs whatever interpreter is configured there, which is not necessarily the Anaconda environment where you installed TensorFlow in cmd.</p> <p><a href="https://i.stack.imgur.com/Sghqj.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Sghqj.jpg" alt="enter image description here"></a></p>
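<p>A quick way to confirm the mismatch (a generic diagnostic, not TensorFlow-specific): run this inside PyCharm and compare the printed path with the Anaconda interpreter you used in cmd:</p> <pre><code>import sys
print(sys.executable)
</code></pre>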
python|tensorflow|pycharm
8
377,629
44,917,064
How to crosstab this table with pandas?
<p>I have this data and i want to cross-tabulate between the GDP level (above average vs. below average) vs. Level of alcohol consumption (above average vs. below average). and find the correlation.</p> <p><a href="https://i.stack.imgur.com/0dqjF.png" rel="nofollow noreferrer">data</a></p> <p>I'm trying this but is not what i want.</p> <pre><code>pd.crosstab(df['GDP'],df['Recorded_Consupmtion'], margins=True) </code></pre>
<p>IIUC:</p> <pre><code>import numpy as np

df['GDP_Avg'] = np.where(df.GDP &lt; df.GDP.mean(), 'Below Average', 'Above Average')
df['RC_Avg'] = np.where(df.Recorded_Consupmtion &lt; df.Recorded_Consupmtion.mean(), 'Below Average', 'Above Average')
pd.crosstab(df['GDP_Avg'], df['RC_Avg'], margins=True)
</code></pre> <p>Output:</p> <pre><code>RC_Avg         Above Average  Below Average  All
GDP_Avg
Above Average              5              0    5
Below Average              1              3    4
All                        6              3    9
</code></pre>
pandas|crosstab|data-science
1
377,630
45,128,032
Error while computing second derivatives in tensorflow
<p>I am training a model that requires computation of second derivatives (i.e) gradients of gradients. Here is a short snippet that does that:</p> <pre><code>mapping_loss = tf.losses.sparse_softmax_cross_entropy( 1 - adversary_label, adversary_logits) adversary_loss = tf.losses.sparse_softmax_cross_entropy( adversary_label, adversary_logits) ''' # doesnt work using tf.nn.softmax_cross_entropy_with_logits too. mapping_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( labels = tf.one_hot(1 - adversary_label, 2), logits = adversary_logits, name='loss1')) adversary_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits( labels = tf.one_hot(adversary_label, 2), logits = adversary_logits, name = 'loss2')) ''' grads_target = tf.gradients(mapping_loss, target_vars.values()) grads_adv = tf.gradients(adversary_loss, adversary_vars.values()) grads_all = grads_target + grads_adv reg = 0.5*sum(tf.reduce_sum(tf.square(g)) for g in grads_all) Jgrads_target = tf.gradients(reg, target_vars.values()) Jgrads_adv = tf.gradients(reg, adversary_vars.values()) </code></pre> <p>I am getting the following error</p> <pre class="lang-none prettyprint-override"><code>Traceback (most recent call last): File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 455, in gradients grad_fn = ops.get_gradient_function(op) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 1682, in get_gradient_function return _gradient_registry.lookup(op_type) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/tensorflow/python/framework/registry.py", line 93, in lookup "%s registry has no entry for: %s" % (self._name, name)) LookupError: gradient registry has no entry for: PreventGradient During handling of the above exception, another exception occurred: Traceback (most recent call last): File "tools/train_adda.py", line 215, in &lt;module&gt; main() File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/click/core.py", line 722, in __call__ return self.main(*args, **kwargs) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/click/core.py", line 697, in main rv = self.invoke(ctx) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/click/core.py", line 895, in invoke return ctx.invoke(self.callback, **ctx.params) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/click/core.py", line 535, in invoke return callback(*args, **kwargs) File "tools/train_adda.py", line 137, in main Jgrads_target = tf.gradients(reg, list(target_vars.values())) File "/scratch0/Projects/summer/adda/env/lib/python3.6/site-packages/tensorflow/python/ops/gradients_impl.py", line 459, in gradients (op.name, op.type)) LookupError: No gradient defined for operation 'gradients_1/sparse_softmax_cross_entropy_loss_1/xentropy/xentropy_grad/PreventGradient' (op type: PreventGradient) </code></pre>
<p>It seems like TensorFlow does not support second derivatives of softmax cross entropy at the moment. See <a href="https://github.com/tensorflow/tensorflow/blob/c2ce4f68c744e6d328746b144ff1fcf98ac99e6c/tensorflow/python/ops/nn_grad.py#L449" rel="nofollow noreferrer">https://github.com/tensorflow/tensorflow/blob/c2ce4f68c744e6d328746b144ff1fcf98ac99e6c/tensorflow/python/ops/nn_grad.py#L449</a></p>
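<p>Until that is supported, a possible workaround (a sketch, not tested against your exact graph) is to build the same losses from ops whose second derivatives are registered, e.g. <code>tf.nn.log_softmax</code>:</p> <pre><code># Sketch: manual cross entropy, differentiable twice by tf.gradients.
# Intended as a drop-in for the tf.losses.sparse_softmax_cross_entropy calls.
log_probs = tf.nn.log_softmax(adversary_logits)
adv_onehot = tf.one_hot(adversary_label, 2)
map_onehot = tf.one_hot(1 - adversary_label, 2)
mapping_loss = -tf.reduce_mean(tf.reduce_sum(map_onehot * log_probs, axis=1))
adversary_loss = -tf.reduce_mean(tf.reduce_sum(adv_onehot * log_probs, axis=1))
</code></pre>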
python|tensorflow
0
377,631
44,843,956
Fill a column in the dataframe based on similar values from another dataframe in pandas
<p>I have two dataframe:</p> <pre><code> df1 df2 № year № year 1 2010 373 2 2010 374 3 2010 375 4 2010 376 5 2010 ... ... 372 2017 373 2017 374 2017 375 2017 376 2017 377 2017 ... 899 2026 900 2026 901 2026 </code></pre> <p>I need to find all the values from column "№" of the df2 in the df1 and fill the column "year" in the df2 with values from the df1. The result should look like this:</p> <pre><code> df2 № year 373 2017 374 2017 375 2017 376 2017 ... </code></pre> <p>I tried to do it this way</p> <pre><code>df2['year'] = np.where(df2['№'] == df1['№'] , 'Insert value from df1['year'], '0') </code></pre> <p>I first tried to insert '1', instead of a year, to check if the code is working, it got me such an error</p> <pre><code>ValueError: Can only compare identically-labeled Series objects </code></pre> <p>Any suggestion, please?</p>
<p>I think you need <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.map.html" rel="nofollow noreferrer"><code>map</code></a> with a <code>Series</code> created by <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.set_index.html" rel="nofollow noreferrer"><code>set_index</code></a>; if some value does not match, you get <code>NaN</code>:</p> <pre><code>df2['year'] = df2['№'].map(df1.set_index('№')['year'])
</code></pre> <p>If you need to replace the <code>NaN</code>s with the original values:</p> <pre><code>df2['year'] = df2['№'].map(df1.set_index('№')['year']).combine_first(df2['year'])
</code></pre>
python|pandas
2
377,632
45,019,319
Pandas: Split a string and then create a new column?
<p><a href="https://i.stack.imgur.com/1AVDn.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1AVDn.png" alt="enter image description here"></a></p> <p>Let's say you have Col1.</p> <p>How do you create the new column 'Col2' after you split the string values in Col1 until you see _?</p>
<p>Edit to handle strings without '_':</p> <pre><code>import numpy as np

df['Col2'] = (np.where(df['Col1'].str.contains('_'),
                       df['Col1'].str.split('_').str[1],
                       df['Col1']))
</code></pre> <p>OR as COLDSPEED suggests in comments:</p> <pre><code>df['Col1'].str.split('_').str[-1]
</code></pre> <p>You can use the <code>.str</code> accessor with indexing:</p> <pre><code>df['Col2'] = df['Col1'].str.split('_').str[1]
</code></pre> <p>Example:</p> <pre><code>df = pd.DataFrame({'Col1':['Name_John','Name_Jay','Name_Sherry']})
df['Col2'] = df['Col1'].str.split('_').str[1]
</code></pre> <p>Output:</p> <pre><code>          Col1    Col2
0    Name_John    John
1     Name_Jay     Jay
2  Name_Sherry  Sherry
</code></pre>
python|pandas
22
377,633
45,088,560
Filtering on index levels in a pandas.DataFrame
<p>If I have a multiindex dataframe:</p> <pre><code>import pandas as pd df = pd.DataFrame([[1,2,3],[4,5,6],[7,8,9]],columns=['a','b','c']).set_index(['a','b']) </code></pre> <p>I can simply filter the dataframe on a column, for example:</p> <pre><code>df[df.c&gt;4] </code></pre> <p>But to do the same on the level of an index, say "b", I can't do:</p> <pre><code>df[df.b&gt;4] </code></pre> <p>Instead I can do:</p> <pre><code>df[df.index.get_level_values('b')&gt;4] </code></pre> <p>But is there a less verbose way to do this?</p>
<p>You can use <code>query</code> for better readability </p> <pre><code>In [795]: df.query('b &gt; 4') Out[795]: c a b 4 5 6 7 8 9 </code></pre>
python|pandas
13
377,634
45,134,654
Easily switching between feed_dict and queues for input to TensorFlow model
<p>Right now I have a model configured to take its inputs with <code>feed_dict</code>. The code looks something like this:</p> <pre><code># model.py class MyModel(object): def __init__(self, hyperparams): self.build_model(hyperparams) def build_model(self, hps): self.input_data = tf.placeholder(dtype=tf.float32, shape=[hps.batch_size, hps.nfeats]) self.labels = tf.placeholder(dtype=tf.float32, shape=[hps.batch_size]) # Define hidden layers, loss, training step, etc. # train.py model = MyModel(hps) for _ in range(100): x, y = some_python_function() # Read a batch from disk, preprocess sess.run(model.train_step, feed_dict={model.input_data: x, model.labels: y}) </code></pre> <p>For performance reasons, I'd like to switch to using queues for training. But I'd like to maintain the ability to use <code>feed_dict</code>, e.g. for inference or testing.</p> <p>Is there an elegant way to do this? What I'd like to do is, when using queues, 'swap out' the placeholder variables for the tensors returned by my queue's dequeue op. I thought that <code>tf.assign</code> would be the way to do this, i.e.:</p> <pre><code>single_x, single_y = tf.parse_single_example(...) x, y = tf.train.batch([single_x, single_y], batch_size) model = MyModel(hps) sess.run([tf.assign(model.input_data, x), tf.assign(model.labels, y)]) for _ in range(100): sess.run(model.train_step) </code></pre> <p>But this raises <code>AttributeError: 'Tensor' object has no attribute 'assign'</code>. The API docs for <a href="https://www.tensorflow.org/api_docs/python/tf/assign" rel="nofollow noreferrer"><code>tf.assign</code></a> describe the first argument as: "A mutable <code>Tensor</code>. Should be from a <code>Variable</code> node. May be uninitialized." Does this mean my placeholders aren't mutable? Can I make them so? Or am I approaching this the wrong way?</p> <p>Minimal runnable example <a href="https://gist.github.com/colinmorris/8ca883c04831c755ee3bc745cab52761" rel="nofollow noreferrer">here</a>.</p>
<p>You could separate the creation of the <code>Variables</code> and the <code>Operations</code> by:</p> <ul> <li>adding a <code>build_variables</code> method called at the instantiation of your <code>Model</code> class,</li> <li>changing the interface of the <code>build_model</code> method so it accepts your <code>x</code>and <code>y</code> tensors as arguments and so it builds the model <code>operations</code> based on them. </li> </ul> <p>This way you would reuse the variables and constants of your model. The downside being that the operations will be duplicated for the <code>placeholder</code> version and any other version.</p> <pre><code>import tensorflow as tf import numpy as np BATCH_SIZE = 2 class Model(object): def __init__(self): self.build_variables() def build_variables(self): self.w = tf.Variable(tf.random_normal([3, 1])) def build_model(self, x, y): self.x = x self.y = y self.output = tf.matmul(self.x, self.w) self.loss = tf.losses.absolute_difference(self.y, self.output) model = Model() sess = tf.InteractiveSession() sess.run(tf.global_variables_initializer()) def placeholder_run(): x = tf.placeholder(dtype=tf.float32, shape=[BATCH_SIZE, 3]) y = tf.placeholder(dtype=tf.float32, shape=[BATCH_SIZE, 1]) model.build_model(x, y) for i in range(3): x = np.random.rand(BATCH_SIZE, 3) y = x.sum(axis=1, keepdims=True) loss = sess.run(model.loss, feed_dict={model.x:x, model.y:y}) print(loss) def nonph_run(): x = tf.random_normal([BATCH_SIZE, 3]) y = tf.reduce_sum(x, axis=1, keep_dims=True) model.build_model(x, y) for i in range(3): loss = sess.run(model.loss) print(loss) if __name__ == '__main__': # Works placeholder_run() # Doesn't fail nonph_run() </code></pre>
python|tensorflow
3
377,635
44,862,754
Tensorflow: using an input-pipeline (.csv) as a dictionary for training
<p>I'm trying to train a model on a .csv dataset (5008 columns, 533 rows). I'm using a textreader to parse the data into two tensors, one holding the data to train on [example] and one holding the correct labels [label]:</p> <pre><code>def read_my_file_format(filename_queue): reader = tf.TextLineReader() key, record_string = reader.read(filename_queue) record_defaults = [[0.5] for row in range(5008)] #Left out most of the columns for obvious reasons col1, col2, col3, ..., col5008 = tf.decode_csv(record_string, record_defaults=record_defaults) example = tf.stack([col1, col2, col3, ..., col5007]) label = col5008 return example, label def input_pipeline(filenames, batch_size, num_epochs=None): filename_queue = tf.train.string_input_producer(filenames, num_epochs=num_epochs, shuffle=True) example, label = read_my_file_format(filename_queue) min_after_dequeue = 10000 capacity = min_after_dequeue + 3 * batch_size example_batch, label_batch = tf.train.shuffle_batch([example, label], batch_size=batch_size, capacity=capacity, min_after_dequeue=min_after_dequeue) return example_batch, label_batch </code></pre> <p>This part is working, when executing something like:</p> <pre><code>with tf.Session() as sess: ex_b, l_b = input_pipeline(["Tensorflow_vectors.csv"], 10, 1) print("Test: ",ex_b) </code></pre> <p>my result is <code>Test: Tensor("shuffle_batch:0", shape=(10, 5007), dtype=float32)</code></p> <p>So far this seems fine to me. Next I've created a simple model consising of two hidden layers (512 and 256 nodes respectively). Where things go wrong is when I'm trying to train the model:</p> <pre><code>batch_x, batch_y = input_pipeline(["Tensorflow_vectors.csv"], batch_size) _, cost = sess.run([optimizer, cost], feed_dict={x: batch_x.eval(), y: batch_y.eval()}) </code></pre> <p>I've based this approach on <a href="https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/multilayer_perceptron.ipynb" rel="nofollow noreferrer">this example that uses the MNIST database</a>. However, when I'm executing this, even when I'm just using <code>batch_size = 1</code>, Tensorflow just hangs. If I leave out the <code>.eval()</code> functions that should get the actual data from the tensors, I get the following response:</p> <pre><code>TypeError: The value of a feed cannot be a tf.Tensor object. Acceptable feed values include Python scalars, strings, lists, or numpy ndarrays. </code></pre> <p>Now this I can understand, but I don't understand why the program hangs when I do include the <code>.eval()</code> function and I don't know where I could find any information about this issue.</p> <p>EDIT: I included the most recent version of my entire script <a href="https://github.com/Voidling0/TFCSV2/blob/master/scriptv2.py" rel="nofollow noreferrer">here</a>. The program still hangs even though I implemented (as far as I know correctly) the solution that was offered by <strong>vijay m</strong></p>
<p>As the error says, you are trying to feed a tensor to <code>feed_dict</code>. You have defined an <code>input_pipeline</code> queue, and you can't pass its output tensors through <code>feed_dict</code>. The proper way to pass the data to the model and train it is shown in the code below:</p> <pre><code># A queue which will return batches of inputs
batch_x, batch_y = input_pipeline(["Tensorflow_vectors.csv"], batch_size)

# Feed it to your neural network model:
# Every time this is called, it will pull data from the queue.
logits = neural_network(batch_x, batch_y, ...)

# Define cost and optimizer
cost = ...
optimizer = ...

# Evaluate the graph in a session:
with tf.Session() as sess:
    init_op = ...
    sess.run(init_op)

    # Start the queues
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    # Loop through data and train
    for ( loop through steps ):
        _, cost = sess.run([optimizer, cost])

    coord.request_stop()
    coord.join(threads)
</code></pre>
tensorflow|tensor
0
377,636
44,871,405
Pandas Iterrows Row Number & Percentage
<p>I'm iterating over a dataframe with 1000s of rows. I ideally would like to know the progress of my loops - i.e. how many rows has it completed, what percentage of total rows has it completed etc.</p> <p>Is there a way I can print the row number or even better, the percentage of rows iterated over? </p> <p>My code it currently below. Currently, printing how it looks below right now displays some kind of tuple/list however all I need is the row number. This is probably simple.</p> <pre><code>for row in testDF.iterrows(): print("Currently on row: "+str(row)) </code></pre> <p>Ideal printed response:</p> <pre><code>Currently on row 1; Currently iterated 1% of rows Currently on row 2; Currently iterated 2% of rows Currently on row 3; Currently iterated 3% of rows Currently on row 4; Currently iterated 4% of rows Currently on row 5; Currently iterated 5% of rows </code></pre>
<p>First of all, <code>iterrows</code> yields tuples of <code>(index, row)</code>. So the proper loop header is</p> <pre><code>for index, row in testDF.iterrows():
</code></pre> <p>In the general case the index is not the row number; it is some identifier (this is one of pandas' strengths, but it can be confusing, since it does not behave like an ordinary Python <code>list</code>, where the index is the row number). That is why we need to count the rows independently. We could introduce <code>line_number = 0</code> and increase it on each iteration with <code>line_number += 1</code>, but Python gives us a ready-made tool for that: <code>enumerate</code>, which yields tuples of <code>(line_number, value)</code> instead of just <code>value</code>. So we end up with this code:</p> <pre><code>for line_number, (index, row) in enumerate(testDF.iterrows()):
    print("Currently on row: {}; Currently iterated {}% of rows".format(
        line_number, 100*(line_number + 1)/len(testDF)))
</code></pre> <p>P.S. Python 2 returns an integer when you divide integers, which is why 999/1000 = 0 and not what you might expect. You can either force float division or move the <code>100*</code> to the front of the expression to get an integer percentage.</p>
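<p>If installing a third-party package is an option, <code>tqdm</code> gives you a live progress bar (percentage, speed and ETA) with almost no code; a minimal sketch:</p> <pre><code>from tqdm import tqdm  # pip install tqdm

for index, row in tqdm(testDF.iterrows(), total=len(testDF)):
    pass  # your per-row work here
</code></pre>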
python|pandas
8
377,637
44,953,066
Pandas Shift Converts Ints to Float AND Rounds
<p>When shifting column of integers, I know how to fix my column when Pandas automatically converts the integers to floats because of the presence of a NaN. <a href="https://stackoverflow.com/questions/41870093/pandas-shift-converts-my-column-from-integer-to-float">I basically use the method described here.</a></p> <p>However, if the shift introduces a NaN thereby converting all integers to floats, there's some rounding that happens (e.g. on epoch timestamps) so even recasting it back to integer doesn't replicate what it was originally. </p> <p>Any way to fix this? </p> <p>Example Data:</p> <pre><code>pd.DataFrame({'epochee':[1495571400259317500,1495571400260585120,1495571400260757200, 1495571400260866800]}) Out[19]: epoch 0 1495571790919317503 1 1495999999999999999 2 1495571400265555555 3 1495571400267777777 </code></pre> <p>Example Code:</p> <pre><code>df['prior_epochee'] = df['epochee'].shift(1) df.dropna(axis=0, how='any', inplace=True) df['prior_epochee'] = df['prior_epochee'].astype(int) </code></pre> <p>Resulting output:</p> <pre><code>Out[22]: epoch prior_epoch 1 1444444444444444444 1400000000000000000 2 1433333333333333333 1490000000000000000 3 1777777777777777777 1499999999999999948 </code></pre>
<p>Because you know what happens when <code>int</code> is casted as float due to <code>np.nan</code> <strong>and</strong> you know that you don't want the <code>np.nan</code> rows anyway, you can shift yourself with <code>numpy</code></p> <pre><code>df[1:].assign(prior_epoch=df.epoch.values[:-1]) epoch prior_epoch 1 1495571400260585120 1495571400259317500 2 1495571400260757200 1495571400260585120 3 1495571400260866800 1495571400260757200 </code></pre>
python|pandas|rounding
1
377,638
45,044,675
Unique random number sampling with Numpy
<p>I need to create a 10,000 x 50 array in which each row contains an ascending series of random numbers between 1 and 365, like so:</p> <pre><code>[[ 4 11 14 ..., 355 360 364] [ 2 13 15 ..., 356 361 361] [ 4 12 18 ..., 356 361 365] ..., [ 6 9 17 ..., 356 362 364] [ 1 10 19 ..., 352 357 360] [ 1 9 17 ..., 356 358 364]] </code></pre> <p>The only way I've figured out to do this is by way of an iterator:</p> <pre><code>sample_dates = np.array([np.sort(np.random.choice(365, 50, replace=False)) for _ in range(10000)]) </code></pre> <p>which works, but is pretty slow (~0.33 seconds to run) and I'm going to be doing this thousands of times). Is there a faster way to accomplish this?</p> <p>EDIT: From what I can tell, the most expensive part of this solution is the iteration and 10k individual calls to np.random.choice, not the sorting</p>
<p>The following solution does not use sort:</p> <pre><code>l = np.array([True]*50 + [False]*315) total = np.arange(1,366) sample_dates = np.array([total[np.random.permutation(l)] for _ in range(10000)]) </code></pre> <p>Hence it seems to be faster than the other suggested solutions (takes 0.44 seconds on my computer versus 0.77 for "Nils Werner"'s solution. The OP's solution took 0.81 seconds).</p>
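<p>For a bigger speedup you can drop the Python loop entirely with the standard argsort trick (a sketch: the first 50 columns of a per-row random argsort form a uniform sample without replacement):</p> <pre><code>import numpy as np

# argsort of uniform noise gives an independent random permutation per row;
# keep 50 indices per row, sort each row, and shift from 0..364 to 1..365
r = np.random.rand(10000, 365)
sample_dates = np.sort(np.argsort(r, axis=1)[:, :50], axis=1) + 1
</code></pre>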
python|performance|numpy
2
377,639
45,177,318
In pandas, is there some compact way to plot data across days of the week?
<p>I've got a simple dataframe with a set of values recorded against <code>datetime</code>s which are set to the index. Is there some compact way to get this data plotted across days of the week? I mean something like the following, with the days of the week across the horizontal axis and the data for different weeks plotted in various colors:</p> <p><img src="https://i.stack.imgur.com/L5yFe.jpg" alt=""></p> <p>My current code is as follows, but it seems bonkers complicated for what is a conceptually simple thing:</p> <pre><code>df["weekday"] = df["datetime"].dt.weekday df["weekday_name"] = df["datetime"].dt.weekday_name df["time_through_day"] = df["datetime"].map(lambda x: x - datetime.datetime.combine(x.date(), datetime.time())) def days_through_week(row): return row["weekday"] + row["time_through_day"] / (24 * np.timedelta64(1, "h")) df["days_through_week"] = df.apply(lambda row: days_through_week(row), axis = 1) datasets = [] dataset = [] previous_days_through_week = 0 for days_through_week, value in zip(df["days_through_week"], df["value"]): if abs(days_through_week - previous_days_through_week) &lt; 5: dataset.append([days_through_week, value]) else: datasets.append(dataset) dataset = [] previous_days_through_week = days_through_week for dataset in datasets: x = [datum[0] for datum in dataset] y = [datum[1] for datum in dataset] plt.plot(x, y, linestyle = "-", linewidth = 1.3) plt.ylabel("value") plt.xticks( [ 0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5], [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"] ) </code></pre>
<h3>Setup:</h3> <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

np.random.seed(51723)

#Make some fake data to play with. Includes a partial week.
n = pd.DatetimeIndex(start="2-Jan-2017", end="1-Mar-2017", freq="1H")
df = pd.DataFrame(index=n, data=np.random.randn(n.size), columns=['A'])
df.A = df.groupby(df.index.week).A.transform('cumsum')
</code></pre> <h3>Plot:</h3> <pre><code>#list of names for xtick labels. Extra Monday for end.
weekday_names = "Mon Tue Wed Thu Fri Sat Sun Mon".split(' ')

fig, ax = plt.subplots()
for name, group in df.groupby(df.index.week):
    start_day = group.index.min().to_pydatetime()
    #convert date to age within the week
    Xs = mdates.date2num(group.index.to_pydatetime()) \
         - mdates.date2num(start_day)
    Ys = group.A
    ax.plot(Xs, Ys)

#set tick positions before the labels so they stay aligned
ax.set_xticks(range(0, len(weekday_names)))
ax.set_xticklabels(weekday_names)
</code></pre> <p><a href="https://i.stack.imgur.com/j5hkG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j5hkG.png" alt="plot of weekly chunks"></a></p>
python|pandas|datetime|matplotlib|weekday
2
377,640
45,179,829
How to split a 3D matrix/volume into a fixed size sub-volumes and then re-merge in python?
<p>How can I split a 3D numpy array into fixed size 3D sub-arrays, do some manipulation on the sub-arrays, and finally put them back in the same order to create the original big 3D volume?</p> <p>e.g. big volume is nxnxm</p> <p>so, I would like to split it to sub-vlumes of k x k x k, and do some manipulation on each sub volume and put them together again to create nxnxm</p>
<p>A simple solution would be to process your array with nested for-loops:</p> <pre><code>import numpy as np

A = np.random.rand(5, 4)
print("A:", A)

step = 2
newHeight = int(np.ceil(float(A.shape[0]) / step))
newWidth = int(np.ceil(float(A.shape[1]) / step))
B = np.zeros((newHeight, newWidth))
C = np.zeros(A.shape)
for i in range(B.shape[0]):
    for j in range(B.shape[1]):
        B[i, j] = np.mean(A[i*step:(i+1)*step, j*step:(j+1)*step])
        C[i*step:(i+1)*step, j*step:(j+1)*step] = B[i, j]

print("B:", B)
print("C:", C)
</code></pre> <p>Output:</p> <pre><code>A: [[ 0.86754517  0.65107995  0.01074822  0.18394825]
 [ 0.03184878  0.07052286  0.44014168  0.84913463]
 [ 0.2982024   0.94988568  0.33208104  0.28697172]
 [ 0.36721371  0.9352932   0.22780242  0.13650031]
 [ 0.84073176  0.33792535  0.53240018  0.54008341]]
B: [[ 0.40524919  0.37099319]
 [ 0.63764875  0.24583887]
 [ 0.58932856  0.53624179]]
C: [[ 0.40524919  0.40524919  0.37099319  0.37099319]
 [ 0.40524919  0.40524919  0.37099319  0.37099319]
 [ 0.63764875  0.63764875  0.24583887  0.24583887]
 [ 0.63764875  0.63764875  0.24583887  0.24583887]
 [ 0.58932856  0.58932856  0.53624179  0.53624179]]
</code></pre> <ul> <li><code>A</code> is the large input array</li> <li><code>B</code> is the small output array</li> <li><code>C</code> is the large output array</li> <li><code>step</code> is the size of each block, 20 in your case</li> <li><code>newHeight</code> and <code>newWidth</code> give the computed size of <code>B</code>: the size of <code>A</code> divided by the window size <code>step</code> and rounded up (cast to <code>int</code>, since <code>np.zeros</code> needs integer dimensions)</li> <li><code>i*step:(i+1)*step</code> and <code>j*step:(j+1)*step</code> are the vertical and horizontal ranges for each block in <code>A</code> and <code>C</code>, respectively.</li> </ul> <p>I'm using a small array of 5x4 as well as two dimensions only for simplicity and readability of the example results. It should not be too hard to extend this approach to three dimensions.</p>
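<p>For the actual 3D case, if the block size <code>k</code> divides every dimension exactly, the split and the re-merge can each be done with a single <code>reshape</code>/<code>transpose</code> (a sketch under that divisibility assumption):</p> <pre><code>import numpy as np

n, m, k = 60, 40, 20          # example sizes; k must divide n and m
A = np.random.rand(n, n, m)

# split: blocks[i, j, l] is the (k, k, k) sub-volume at block position (i, j, l)
blocks = A.reshape(n//k, k, n//k, k, m//k, k).transpose(0, 2, 4, 1, 3, 5)

# ... manipulate each blocks[i, j, l] here ...

# merge: invert the transpose and reshape back to (n, n, m)
A2 = blocks.transpose(0, 3, 1, 4, 2, 5).reshape(n, n, m)
assert np.array_equal(A, A2)
</code></pre>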
python|numpy|matrix|split
1
377,641
45,117,849
Creating a new variable through another script in python
<p>I've been wondering how to accomplish this.</p> <p>I have 2 python scripts, the major one calls to the other and pass them 3 variables, then this script which is only one function returns a tuple of lenght 4. And finally I call this script through the first one.</p> <p>My question is is there any method to make that the tuple creates a new variable, e.g a matrix or various vectors in the first script and so avoid to call the function over and over.</p> <p><strong>Script 1</strong></p> <pre><code>import numpy as np import scipy.integrate as integrate import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from matplotlib import cm from Funcion_gamma import Fgamma import time start_time = time.time() "Aluminio" E=7e10 #Modulo de Young v=0.33 #Modulo de Poison G=2.63e10 #Modulo de cizalladura h=0.00286 #Anchura de la placa a=0.85 #Lado 1 de la placa b=0.65 #Lado 2 de la placa rho=2703.6 #Densidad del material Kmuelle=3 "M y N" m=7 n=m Fgamma(Kmuelle,a,m) </code></pre> <p><strong>Script 2</strong></p> <pre><code>import sympy as sp import numpy as np def Fgamma(KK,aa,M): 'Definicion de las variables simbolicas' g, a, A, C, D, K, x = sp.symbols("g a A C D K x") 'Funcion Gartner y sus derivadas' X=(A*sp.cos(g*x/a))+(sp.sin(g*x/a))+(C*sp.exp(-g*x/a))+(D*sp.exp(-g*(a-x)/a)) dx1=sp.diff(X,x,1) dx2=sp.diff(X,x,2) 'Ecuaciones a resolver' eq1= sp.Eq(X.subs(x,0),0) eq2= sp.Eq(dx2.subs(x,0)/dx1.subs(x,0),K) eq3= sp.Eq(X.subs(x,a),0) eq4= sp.Eq(dx2.subs(x,a)/dx1.subs(x,a),-K) 'Resolucion de las ecuaciones' D1=sp.solve(eq1,D) eq33 = eq3.subs(D, D1[0]) C33=sp.solve(eq33,C) D11=D1[0].subs(C,C33[0]) eq22=eq2.subs([(C, C33[0]),(D,D11)]) A22=sp.solve(eq22,A) eq44=eq4.subs([(C,C33[0]),(D,D11)]) A44=sp.solve(eq44,A) 'Obtencion de A' AF=A22[0]-A44[0] AFF=sp.sympify(AF.subs([(K,KK),(a,aa)])) 'Calculo de las raices de gamma' nn=np.linspace(0,M,num=M,endpoint=False,dtype='int') gamma=np.zeros(M) Afinal=np.zeros(M) Cfinal=np.zeros(M) Dfinal=np.zeros(M) for jj in nn: gamma[jj]=sp.nsolve(sp.Eq(AFF,0),(3+(jj*np.pi))) Afinal[jj]=AFF.subs(g,gamma[jj]) Cfinal[jj]=C33[0].subs([(A,Afinal[jj]),(a,aa),(g,gamma[jj])]) Dfinal[jj]=D11.subs([(A,Afinal[jj]),(a,aa),(g,gamma[jj])]) print gamma return gamma,Afinal,Cfinal,Dfinal </code></pre>
<p>Suppose both files, <code>script1.py</code> and <code>script2.py</code>, are in the same directory. To call the function <code>Fgamma</code> defined in <code>script2.py</code> from <code>script1.py</code>, import the module and unpack the returned tuple into variables once:</p> <pre><code>import script2

gamma, Afinal, Cfinal, Dfinal = script2.Fgamma(Kmuelle, a, m)
</code></pre> <p>Since the four returned arrays are now ordinary variables in <code>script1.py</code>, you can keep working with them (e.g. stack them into a matrix) without calling the function again.</p>
python|function|numpy
0
377,642
44,889,376
How to go from printing [1, 2, 3] to printing 1, 2, 3 in a text file Python
<p>I am using the following loop to iterate over a numpy array and print into a separate text file.</p> <pre><code>c= np.array([1, 2, 3]) nc = c.astype(np.int) for x in nc: print &gt;&gt; thing_here, x </code></pre> <p>yet when I open the thing_here text file it prints my array as <code>[1, 2, 3]</code> rather than <code>1, 2, 3</code></p> <p>How can I get rid of the [ ]'s?</p>
<p>The <code>join</code> method will do this:</p> <pre><code>import numpy as np

c = np.array([1, 2, 3])
c_joined = ', '.join(map(str, c))   # '1, 2, 3'
</code></pre> <p>(Use <code>' '.join(...)</code> instead if you only want spaces between the values.) If the array already contains strings, you can drop the <code>map()</code> call and just use:</p> <pre><code>c = np.array(['1', '2', '3'])
c_joined = ', '.join(c)
</code></pre> <p>Then write this joined string to the file with your print commands. As a further note, <code>np.savetxt()</code> is a useful command.</p>
python|arrays|numpy
2
377,643
45,064,916
How to find the correlation between a group of values in a pandas dataframe column
<p>I have a dataframe df:</p> <pre><code>ID Var1 Var2 1 1.2 4 1 2.1 6 1 3.0 7 2 1.3 8 2 2.1 9 2 3.2 13 </code></pre> <p>I want to find the pearson correlation coefficient value between <code>Var1</code> and <code>Var2</code> for every <code>ID</code></p> <p>So the result should look like this:</p> <pre><code>ID Corr_Coef 1 0.98198 2 0.97073 </code></pre> <p>update:</p> <p>Must make sure all columns of variables are <code>int</code> or <code>float</code></p>
<p>To get your desired output format you could use <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.corrwith.html" rel="nofollow noreferrer"><code>.corrwith</code></a>:</p> <pre><code>corrs = (df[['Var1', 'ID']] .groupby('ID') .corrwith(df.Var2) .rename(columns={'Var1' : 'Corr_Coef'})) print(corrs) Corr_Coef ID 1 0.98198 2 0.97073 </code></pre> <p>Generalized solution:</p> <pre><code>import numpy as np def groupby_coef(df, col1, col2, on_index=True, squeeze=True, name='coef', keys=None, **kwargs): """Grouped correlation coefficient between two columns Flat result structure in contrast to `groupby.corr()`. Parameters ========== df : DataFrame col1 &amp; col2: str Columns for which to calculate correlation coefs on_index : bool, default True Specify whether you're grouping on index squeeze : bool, default True True -&gt; Series; False -&gt; DataFrame name : str, default 'coef' Name of DataFrame column if squeeze == True keys : column label or list of column labels / arrays Passed to `pd.DataFrame.set_index` **kwargs : Passed to `pd.DataFrame.groupby` """ # If we are grouping on something other than the index, then # set as index first to avoid hierarchical result. # Kludgy, but safer than trying to infer. if not on_index: df = df.set_index(keys=keys) if not kwargs: # Assume we're grouping on 0th level of index kwargs = {'level': 0} grouped = df[[col1]].groupby(**kwargs) res = grouped.corrwith(df[col2]) res.columns = [name] if squeeze: res = np.squeeze(res) return res </code></pre> <p>Examples:</p> <pre><code>df_1 = pd.DataFrame(np.random.randn(10, 2), index=[1]*5 + [2]*5).add_prefix('var') df_2 = df_1.reset_index().rename(columns={'index': 'var2'}) print(groupby_coef(df_1, 'var0', 'var1', level=0)) 1 7.424e-18 2 -9.481e-19 Name: coef, dtype: float64 print(groupby_coef(df_2, col1='var0', col2='var1', on_index=False, keys='var2')) var2 1 7.424e-18 2 -9.481e-19 Name: coef, dtype: float64 </code></pre>
python|pandas|dataframe
15
377,644
45,037,453
Group by multiple column and perform custom aggregation
<p>I have a dataframe example given below.</p> <pre><code>hour  minute  value
0     0       10
0     5       20
0     10      30
0     15      50
0     20      10
0     25      55
1     0       55
1     5       50
1     10      10
1     15      20
1     20      30
1     25      40
1     30      50
</code></pre> <p>... and it continues like this for every hour of the day. I want to take the mean and stdev for every minute and multiply each by the actual value for that hour and minute, as two new columns. The final dataframe would look like the one below.</p> <p>So for hour 0, minute 0 the mean is mean(10, 55) and the stdev is stdev(10, 55); the new-column values for hour 0, minute 0 would be mean(10, 55)*10 and stdev(10, 55)*10, and for hour 1, minute 0 they would be mean(10, 55)*55 and stdev(10, 55)*55. The same needs to be applied to every hour and minute.</p> <pre><code>hour  minute  value  mean*value  stdev*value
0     0       10     325         318
0     5       20     700         424
1     0       55     1787        1750
1     5       50     1750        1060
</code></pre> <p>Currently I iterate over the rows, first by hour and then by minute, and do the calculation for each:</p> <pre><code>for hour in df.hour:
    for minute in df.minute:
        trim_df = df.loc[(df['hour'] == hour) &amp; (df['minute'] == minute)]
        mean = trim_df['value'].mean()
        stdev = trim_df['value'].std()
        for index, row in trim_df.iterrows():
            df.at[index, "mean*value"] = row["value"]*mean
            df.at[index, "stdev*value"] = row["value"]*stdev
</code></pre> <p>My approach takes a huge amount of time. I am trying to use the pandas groupby feature but have not been able to express this logic with it.</p>
<p>You can use <code>df.groupby(...).transform('mean')</code> to return a series with the mean of each group:</p> <pre><code>import pandas as pd

df = pd.DataFrame(columns=['hour', 'minute', 'value'],
                  data=[[0, 0, 10], [0, 5, 20], [0, 10, 30],
                        [0, 15, 50], [0, 20, 10], [0, 25, 55],
                        [1, 0, 55], [1, 5, 50], [1, 10, 10],
                        [1, 15, 20], [1, 20, 30], [1, 25, 40],
                        [1, 30, 50]])

df['mean_value'] = df.groupby(['minute'])['value'].transform('mean') * df.value
df
=&gt;
    hour  minute  value  mean_value
0      0       0     10       325.0
1      0       5     20       700.0
2      0      10     30       600.0
3      0      15     50      1750.0
4      0      20     10       200.0
5      0      25     55      2612.5
6      1       0     55      1787.5
7      1       5     50      1750.0
8      1      10     10       200.0
9      1      15     20       700.0
10     1      20     30       600.0
11     1      25     40      1900.0
12     1      30     50      2500.0
</code></pre> <p>Do the same thing with <code>.transform('std')</code> to get the standard deviation series.</p>
python|pandas
2
377,645
57,107,719
Numpy shuffle 3-D numpy array by row
<p>Suppose I have the following 3D matrix:</p> <p>1 1 1</p> <p>2 2 2</p> <p>3 3 3</p> <p>and behind it (3rd dimension):</p> <p>a a a</p> <p>b b b</p> <p>c c c</p> <p>Defined as the following if I am correct:</p> <pre><code>import numpy as np x = np.array([[[1,1,1], [2,2,2], [3,3,3]], [["a","a","a"], ["b","b","b"], ["c","c","c"]]]) </code></pre> <p>And I want to randomly shuffle my 3D-array by row becoming something like this:</p> <p>2 2 2</p> <p>1 1 1</p> <p>3 3 3</p> <p>behind:</p> <p>b b b</p> <p>a a a</p> <p>c c c</p> <p>*Note that a always belongs to 1, b to 2 and c to 3 (same rows)</p> <p>How do I achieve this?</p>
<p>Using <code>np.random.shuffle</code>:</p> <pre><code>import numpy as np x = np.array([[[1,1,1], [2,2,2], [3,3,3]], [["a","a","a"], ["b","b","b"], ["c","c","c"]]]) ind = np.arange(x.shape[1]) np.random.shuffle(ind) x[:, ind, :] </code></pre> <p>Output:</p> <pre><code>array([[['1', '1', '1'], ['3', '3', '3'], ['2', '2', '2']], [['a', 'a', 'a'], ['c', 'c', 'c'], ['b', 'b', 'b']]], dtype='&lt;U21') </code></pre>
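<p>Equivalently, without mutating an index array in place, you can draw the permutation directly:</p> <pre><code>x[:, np.random.permutation(x.shape[1]), :]
</code></pre>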
python|arrays|numpy|row|shuffle
2
377,646
56,994,738
How to utilize 100% of GPU memory with Tensorflow?
<p>I have a 32Gb graphics card and upon start of my script I see: </p> <pre><code>2019-07-11 01:26:19.985367: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 95.16G (102174818304 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:19.988090: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 85.64G (91957338112 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:19.990806: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 77.08G (82761605120 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:19.993527: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 69.37G (74485440512 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:19.996219: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 62.43G (67036893184 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:19.998911: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 56.19G (60333203456 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:20.001601: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 50.57G (54299881472 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:20.004296: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 45.51G (48869892096 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:20.006981: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 40.96G (43982901248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:20.009660: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 36.87G (39584608256 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY 2019-07-11 01:26:20.012341: E tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 33.18G (35626147840 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY </code></pre> <p>After which TF settles with using 96% of my memory. And later, when it runs out of memory it tries to allocate 65G</p> <pre><code>tensorflow/stream_executor/cuda/cuda_driver.cc:936] failed to allocate 65.30G (70111285248 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY </code></pre> <p>My question is, what about remaining 1300MB (0.04*32480)? I would not mind using those before running OOM.</p> <p>How can I make TF utilize 99.9% of <strong>memory</strong> instead of 96%? </p> <p><strong>Update:</strong> nvidia-smi output</p> <pre><code>+-----------------------------------------------------------------------------+ | NVIDIA-SMI 418.40.04 Driver Version: 418.40.04 CUDA Version: 10.1 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla V100-SXM2... On | 00000000:00:16.0 Off | 0 | | N/A 66C P0 293W / 300W | 31274MiB / 32480MiB | 100% Default | </code></pre> <p>I am asking about these 1205MB (31274MiB - 32480MiB) remaining unused. Maybe they are there for a reason, maybe they are used just before OOM.</p>
<p>Monitoring a GPU is not as simple as monitoring a CPU. There are many parallel processes going on that could create a <code>bottleneck</code> for your GPU.</p> <p>There could be various problems, like:<br> 1. Read/write speed for your data<br> 2. Either the CPU or the disk causing a bottleneck</p> <p>But I think using 96% is pretty normal. Not to mention that nvidia-smi only shows the usage at one specific instant.</p> <p>You can install <code>gpustat</code> and use it to monitor the GPU live (you should be hitting 100% during OOM):</p> <pre><code>pip install gpustat
gpustat -i
</code></pre> <p>What can you do?<br> 1. You can use a <a href="https://www.tensorflow.org/api_docs/python/tf/data/make_initializable_iterator" rel="nofollow noreferrer">data iterator</a> to process the data in parallel, faster.<br> 2. Increase the batch size. (I don't think this will work in your case, as you are hitting <code>OOM</code>.)<br> 3. You can overclock the GPU (not recommended).</p> <p><a href="https://www.imgtec.com/blog/a-quick-guide-to-writing-opencl-kernels-for-rogue/" rel="nofollow noreferrer">Here</a> is a nice article on hardware acceleration.</p>
python|tensorflow
5
377,647
56,945,295
How to Load and Continue Training Model Saved as .H5 in Google Drive
<p>I followed the following tutorial in Google Colab to create a text generating RNN: <a href="https://www.tensorflow.org/tutorials/sequences/text_generation" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/sequences/text_generation</a> Then, I trained it with my own data. At the end, I also added the following code to save it to my Google Drive as a .h5 file, and it created a file in my drive.</p> <pre><code>model.save('my_model.h5') uploaded = drive.CreateFile({'title': 'FILE NAME HERE.h5'}) uploaded.SetContentFile('my_model.h5') uploaded.Upload() print('Uploaded file with ID {}'.format(uploaded.get('id'))) </code></pre> <p>Then, I opened a new notebook and tried to load it like so:</p> <pre><code>downloaded = drive.CreateFile({'id': "FILE ID HERE"}) downloaded.GetContentFile('my_model.h5') new_model = keras.models.load_model(downloaded) new_model.summary() </code></pre> <p>However, it gives me the error, among others: "'utf-8' codec can't decode byte 0x89 in position 0: invalid start byte"</p> <p>I already tried following other articles that demonstrate how to achieve my goal, and this is what I got.</p> <p>I would like to be able to continue training the model, without the original code if possible. How do I do that?</p>
<p>Press <code>Ctrl+Alt+P</code> or look at the bottom left of the screen, there you can find both the upload and download snippets. Just adapt it to h5 files instead of txt files:</p> <p><strong>Upload</strong>:</p> <pre><code># Import PyDrive and associated libraries. # This only needs to be done once in a notebook. from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. # This only needs to be done once in a notebook. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # Create &amp; upload a h5 file. uploaded = drive.CreateFile({'title': 'my_keras_model.h5'}) uploaded.SetContentFile(&quot;my_keras_model.h5&quot;) uploaded.Upload() print('Uploaded file with ID {}'.format(uploaded.get('id'))) </code></pre> <p><strong>Download</strong>:</p> <pre><code># Import PyDrive and associated libraries. # This only needs to be done once per notebook. from pydrive.auth import GoogleAuth from pydrive.drive import GoogleDrive from google.colab import auth from oauth2client.client import GoogleCredentials # Authenticate and create the PyDrive client. # This only needs to be done once per notebook. auth.authenticate_user() gauth = GoogleAuth() gauth.credentials = GoogleCredentials.get_application_default() drive = GoogleDrive(gauth) # Download a file based on its file ID. # # A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz file_id = 'laggVyWshwcyP6kEI-y_W3P8D26sz' downloaded = drive.CreateFile({'id': file_id}) print('Downloaded content &quot;{}&quot;'.format(downloaded.GetContentFile(&quot;my_keras_model.h5&quot;))) </code></pre> <p>Hope this helps.</p>
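<p>One more detail for continuing training: the <code>'utf-8' codec can't decode byte 0x89</code> error in the question most likely comes from passing the PyDrive file object itself to <code>load_model</code>. <code>GetContentFile</code> saves the file to local disk, so load it by the local filename instead (a sketch; the training data names are placeholders):</p> <pre><code>from tensorflow import keras

downloaded.GetContentFile('my_keras_model.h5')             # saves locally
new_model = keras.models.load_model('my_keras_model.h5')  # pass the path, not the PyDrive object
new_model.summary()

# continue training as usual, e.g.:
# new_model.fit(x_train, y_train, epochs=5)
</code></pre>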
python|tensorflow|google-colaboratory|pydrive
0
377,648
57,176,319
How to fill in the rows of the hourly timestamp with the last known price until the price column change and likewise continue further
<p>How do I fill in each hourly timestamp with the last known price, carrying it forward until the price changes, and likewise for every later price change?</p> <pre><code>Provided Raw Dataset

Date                   product    price
2019-01-02 02:00:00    XVZ        22.00
2019-01-02 05:00:00    XVZ        24.00
2019-01-02 10:00:00    XVZ        24.50
2019-01-02 12:00:00    XVZ        23.00
2019-01-02 15:00:00    XVZ        27.00
2019-01-02 19:00:00    XVZ        21.00

Expected Desired RESULT:

Date                   product    price
2019-01-02 02:00:00    XVZ        22.00
2019-01-02 03:00:00    XVZ        22.00
2019-01-02 04:00:00    XVZ        22.00
2019-01-02 05:00:00    XVZ        24.00
2019-01-02 06:00:00    XVZ        24.00
2019-01-02 07:00:00    XVZ        24.00
2019-01-02 08:00:00    XVZ        24.00
2019-01-02 09:00:00    XVZ        24.00
2019-01-02 10:00:00    XVZ        24.50
2019-01-02 11:00:00    XVZ        24.50
2019-01-02 12:00:00    XVZ        23.00
2019-01-02 13:00:00    XVZ        23.00
2019-01-02 14:00:00    XVZ        23.00
2019-01-02 15:00:00    XVZ        27.00
2019-01-02 16:00:00    XVZ        27.00
.
.
2019-01-02 19:00:00    XVZ        21.00
</code></pre>
<p>Try with <a href="https://www.google.com/search?q=df+resample&amp;rlz=1C1GCEU_enIN822IN823&amp;oq=df+resample&amp;aqs=chrome.0.35i39j0l5.2112j0j7&amp;sourceid=chrome&amp;ie=UTF-8" rel="nofollow noreferrer"><code>resample</code></a>:</p> <pre><code>df.set_index('Date').resample('H').ffill().reset_index() </code></pre> <p>Or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.asfreq.html" rel="nofollow noreferrer"><code>asfreq</code></a>:</p> <pre><code>df.set_index('Date').asfreq('H').ffill().reset_index() #df.set_index('Date').asfreq('H',method='ffill').reset_index() </code></pre> <hr> <pre><code> Date product price 0 2019-01-02 02:00:00 XVZ 22.0 1 2019-01-02 03:00:00 XVZ 22.0 2 2019-01-02 04:00:00 XVZ 22.0 3 2019-01-02 05:00:00 XVZ 24.0 4 2019-01-02 06:00:00 XVZ 24.0 5 2019-01-02 07:00:00 XVZ 24.0 6 2019-01-02 08:00:00 XVZ 24.0 7 2019-01-02 09:00:00 XVZ 24.0 8 2019-01-02 10:00:00 XVZ 24.5 9 2019-01-02 11:00:00 XVZ 24.5 10 2019-01-02 12:00:00 XVZ 23.0 11 2019-01-02 13:00:00 XVZ 23.0 12 2019-01-02 14:00:00 XVZ 23.0 13 2019-01-02 15:00:00 XVZ 27.0 14 2019-01-02 16:00:00 XVZ 27.0 15 2019-01-02 17:00:00 XVZ 27.0 16 2019-01-02 18:00:00 XVZ 27.0 17 2019-01-02 19:00:00 XVZ 21.0 </code></pre>
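<p>One assumption in the snippets above: <code>Date</code> must already have datetime dtype for <code>set_index(...).resample(...)</code>/<code>asfreq</code> to work. If it came out of a CSV as strings, convert it first:</p> <pre><code>df['Date'] = pd.to_datetime(df['Date'])
</code></pre>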
python-3.x|pandas|numpy
0
377,649
57,121,057
Replace first few digits birthdate pandas
<p><strong>Background</strong></p> <p>I have the following sample df</p> <pre><code>import pandas as pd df = pd.DataFrame({'Birthdate':['This person was born Date of Birth: 5/6/1950 and other', 'no Date of Birth: nothing here', 'One Date of Birth: 01/01/2001 last here'], 'P_ID': [1,2,3], 'N_ID' : ['A1', 'A2', 'A3']} ) df Birthdate N_ID P_ID 0 This person was born Date of Birth: 5/6/1950 a... A1 1 1 no Date of Birth: nothing here A2 2 2 One Date of Birth: 01/01/2001 last here A3 3 </code></pre> <p><strong>Goal</strong></p> <p>Replace first few digits birthdate with <code>*BDAY*</code> so e.g. <code>5/6/1950</code> becomes <code>*BDAY*1950</code></p> <p><strong>Desired Output</strong></p> <pre><code> Birthdate N_ID P_ID 0 This person was born Date of Birth: *BDAY*1950 a... A1 1 1 no Date of Birth: nothing here A2 2 2 One last Date of Birth: *BDAY*2001 last here A3 3 </code></pre> <p><strong>Tried</strong></p> <p>From <a href="https://stackoverflow.com/questions/54595855/python-replace-first-five-characters-in-a-column-with-asterisks">python - Replace first five characters in a column with asterisks</a> I have tried the following code: <code>df.replace(r'Date of Birth: ^\d{3}-\d{2}', "*BDAY*", regex=True)</code> but it does not quite give me my desired output</p> <p><strong>Question</strong></p> <p>How do I achieve my desired output?</p>
<p>Try this:</p> <pre><code>df['Birthdate'] = df.Birthdate.str.replace(r'[0-9]?[0-9]/[0-9]?[0-9]/', '*BDAY*') Out[273]: Birthdate P_ID N_ID 0 This person was born Date of Birth: *BDAY*1950... 1 A1 1 no Date of Birth: nothing here 2 A2 2 One Date of Birth: *BDAY*2001 last here 3 A3 </code></pre>
regex|python-3.x|string|pandas|replace
1
377,650
57,011,682
New column of unique index values for a given column
<p>I have the following data frame:</p> <pre><code>data = dict(name=['a', 'a', 'a', 'b', 'b', 'c', 'd'])
df = pd.DataFrame(data=data, columns=data.keys())
</code></pre> <p>How do I create a new index whose values correspond to the unique values of <code>name</code>?</p> <p>What I would like is the following, please:</p> <pre><code>df.index = [1, 1, 1, 2, 2, 3, 4]
</code></pre>
<p>Here are three methods:</p> <pre><code>df.index = pd.factorize(df.name)[0] + 1                # integer codes per unique value
df.index = df.name.astype('category').cat.codes + 1    # via categorical codes
df.index = df.groupby('name').ngroup() + 1             # via group numbers
</code></pre>
python|pandas|dataframe|indexing|unique
2
377,651
56,871,550
Sum based on grouping in pandas dataframe?
<p>I have a pandas dataframe df which contains:</p> <pre><code>major     men  women  rank
Art         5      4     1
Art         3      5     3
Art         2      4     2
Engineer    7      8     3
Engineer    7      4     4
Business    5      5     4
Business    3      4     2
</code></pre> <p>Basically I need to find the total number of students, men and women combined, per major, regardless of the rank column. So for Art, for example, the total should be all men + women, totaling 23; Engineer 26, Business 17.</p> <p>I have tried</p> <pre><code>df.groupby(['major']).sum()
</code></pre> <p>But this sums the men and women separately rather than combining their totals.</p>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html" rel="nofollow noreferrer"><code>melt()</code></a> then <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html" rel="nofollow noreferrer"><code>groupby()</code></a>:</p> <pre><code>df.drop('rank',1).melt('major').groupby('major',as_index=False).sum() </code></pre> <hr> <pre><code> major value 0 Art 23 1 Business 17 2 Engineer 26 </code></pre>
python|pandas|dataframe|sum
2
377,652
57,285,620
How to fix "OverflowError: Unsupported UTF-8 sequence length when encoding string"
<p>I am getting the following error while converting a pandas dataframe to JSON:</p> <blockquote> <p>OverflowError: Unsupported UTF-8 sequence length when encoding string</p> </blockquote> <p>This is the code:</p> <pre><code>bytes_to_write = data.to_json(orient='records').encode()
fs = s3fs.S3FileSystem(key=aws_access_key_id, secret=aws_secret_access_key)
with fs.open(file, 'wb') as f:
    f.write(bytes_to_write)
</code></pre> <p>The data I am trying to convert contains multi-byte <code>utf-8</code> characters.</p> <p>How can I solve this?</p>
<p>As this <a href="https://stackoverflow.com/questions/8422243/overflowerror-unsupported-utf-8-sequence-length-when-encoding-string#comment105608580_14567504">answer suggests</a>, I converted the data-frame using the function <code>.to_json()</code> and the <code>default_handler</code> parameter, you can find the documentation <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_json.html" rel="noreferrer">here</a>.</p> <p>You have to pay attention to the <code>default_handler=str</code> parameter so you don't get the mentioned error. You can read the details in the doc above.</p> <pre><code>dataframe.to_json('foo.json', default_handler=str) </code></pre> <p>Please don't forget to consider that the function can output the <code>json</code> in differents ways, the <code>orient='&lt;option&gt;'</code> parameter specifies that, as the doc says:</p> <pre><code>orient: str Indication of expected JSON string format. ... The format of the JSON string: - ‘split’ : dict like {‘index’ -&gt; [index], ‘columns’ -&gt; [columns], ‘data’ -&gt; [values]} - ‘records’ : list like [{column -&gt; value}, … , {column -&gt; value}] - ‘index’ : dict like {index -&gt; {column -&gt; value}} - ‘columns’ : dict like {column -&gt; {index -&gt; value}} - ‘values’ : just the values array - ‘table’ : dict like {‘schema’: {schema}, ‘data’: {data}} Describing the data, where data component is like orient='records'. </code></pre>
python|pandas|utf-8|to-json
5
377,653
57,268,610
Problem either with number of characters exceeding cell limit, or storing lists of variable length
<p>The problem:</p> <p>I have lists of genes expressed in 53 different tissues. Originally, this data was stored in a maximal array of the genes, with 'NaN' where there was no expression. I am trying to create new lists for each tissue that just have the genes expressed, as it was very inefficient to be searching through this array every time I was running my script. I have a code that finds the genes for each tissue as required, but I do not know how to store the ouptut.</p> <p>I was using pandas data frame, and then converting to csv. But this does not accept lists of varying length, unless I put this list as a single item. However, then when I save the data frame to a csv, it tries to squeeze this very long list (all genes exprssed for one tissue) into a single cell. I get an error of the string length exceeding the excel character-per-cell limit.</p> <p>Therefore I need a way of either dealing with this limit, or stroing my lists in a different way. I would rather just have one file for all lists.</p> <p>My code:</p> <pre><code>import csv import pandas as pd import math import numpy as np #Import list of tissues: df = pd.read_csv(r'E-MTAB-5214-query-results.tsv', skiprows = [0,1,2,3], sep='\t') tissuedict=df.to_dict() tissuelist = list(tissuedict.keys())[2:] all_genes = [gene for key,gene in tissuedict['Gene Name'].items()] data = [] for tissue in tissuelist: #Create array to keep track of the protein mRnaS in tissue that are not present in the network #initiate with first tissue, protein nanInd = [key for key,value in tissuedict[tissue].items() if math.isnan(value)] tissueExpression = np.delete(all_genes, nanInd) datatis = [tissue, tissueExpression.tolist()] print(datatis) data.append(datatis) print(data) df = pd.DataFrame(data) df.to_csv(r'tissue_expression_data.csv') </code></pre> <p>Link to data (either one):</p> <p><a href="https://github.com/joanna-lada/gene_data/blob/master/E-MTAB-5214-query-results.tsv" rel="nofollow noreferrer">https://github.com/joanna-lada/gene_data/blob/master/E-MTAB-5214-query-results.tsv</a></p> <p><a href="https://raw.githubusercontent.com/joanna-lada/gene_data/master/E-MTAB-5214-query-results.tsv" rel="nofollow noreferrer">https://raw.githubusercontent.com/joanna-lada/gene_data/master/E-MTAB-5214-query-results.tsv</a></p>
<p>IIUC you need lists of the gene names found in each tissue. This writes these lists as columns into a csv:</p> <pre><code>import pandas as pd df = pd.read_csv('E-MTAB-5214-query-results.tsv', skiprows = [0,1,2,3], sep='\t') df = df.drop(columns='Gene ID').set_index('Gene Name') res = pd.DataFrame() for c in df.columns: res = pd.concat([res, pd.Series(df[c].dropna().index, name=c)], axis=1) res.to_csv('E-MTAB-5214-query-results.csv', index=False) </code></pre> <p>(Writing them as rows would have been easier, but Excel can't import so many columns) Don't open the csv in Excel directly, but use a blank worksheet and import the csv (Data - External data, From text), otherwise you can't separate them into Excel columns in one run (at least in Excel 2010).</p> <p><a href="https://i.stack.imgur.com/GBO3I.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GBO3I.jpg" alt="enter image description here"></a></p>
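<p>A small refinement: growing a DataFrame with <code>pd.concat</code> inside the loop copies the accumulated result on every iteration. Collecting the Series first and concatenating once gives the same output:</p> <pre><code>res = pd.concat([pd.Series(df[c].dropna().index, name=c) for c in df.columns], axis=1)
</code></pre>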
python|pandas|csv|export-to-csv|large-files
1
377,654
56,956,350
Count non-na values by row and save total to a new variable in pandas
<p>I am new to python and I am trying to count non-na values, per row, and save the total to a new variable.</p> <p>I have the data frame: </p> <pre><code>data = {'x1': ["Yes", "Yes", "No"], 'x2': ["Yes",np.nan, "Yes"], 'x3': [np.nan, np.nan, "No"]} df = pd.DataFrame(data, columns = ['x1', 'x2', 'x3']) print(df) x1 x2 x3 0 Yes Yes NaN 1 Yes NaN NaN 2 No Yes No </code></pre> <p>What I am trying to do is count the number of answers for each row and then save that total into a new variable. The desired output would look like this:</p> <pre><code> x1 x2 x3 Total 0 Yes Yes NaN 2 1 Yes NaN NaN 1 2 No Yes No 3 </code></pre> <p>It seems pretty straightforward but I cannot figure it out. Any help would be greatly appreciated.</p> <p>Thank you</p>
<p>You just need to use <code>count()</code> with <code>axis=1</code>:</p> <pre><code>df['Total'] = df.count(axis=1) </code></pre> <p>Yields:</p> <pre><code> x1 x2 x3 Total 0 Yes Yes NaN 2 1 Yes NaN NaN 1 2 No Yes No 3 </code></pre>
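<p>Equivalently, if you want to make the non-NA check explicit, this sketch gives the same result:</p> <pre><code>df['Total'] = df.notna().sum(axis=1)
</code></pre>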
python|pandas|dataframe|count
6
377,655
56,962,051
Pandas dataframe to_sql with data longer than 65536 characters
<p>I have a Pandas dataframe, where some columns have values longer than 65536 characters. When I tried to export the data to MySQL using <code>df.to_sql(con=engine, name=table_name, if_exists='replace', index=False)</code>, they were truncated to 65536 characters. </p> <p>Is there a way to automatically convert a column to LONGTEXT or BLOB (instead of TEXT) if it has values longer than 65536 so that the table content won't be truncated? </p>
<p>This might be a workaround. The only thing is you need to have the list of columns that need to be converted to LONGTEXT. </p> <pre><code>from sqlalchemy.dialects.mysql import LONGTEXT dtype = { "long_column_1": LONGTEXT, "long_column_2": LONGTEXT } pdf.to_sql(con=engine, name=table_name, if_exists='replace', index=False, dtype=dtype) </code></pre>
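<p>If you don't want to maintain the column list by hand, here is a rough sketch that builds the <code>dtype</code> mapping automatically (assuming the long columns hold strings, and using MySQL's 65,535-byte TEXT limit as the cutoff):</p> <pre><code>from sqlalchemy.dialects.mysql import LONGTEXT

# map every column whose longest value exceeds the TEXT limit to LONGTEXT
dtype = {col: LONGTEXT
         for col in pdf.columns
         if pdf[col].astype(str).str.len().max() &gt; 65535}
pdf.to_sql(con=engine, name=table_name, if_exists='replace', index=False, dtype=dtype)
</code></pre>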
python|mysql|sql|pandas|dataframe
2
377,656
57,008,853
Convert Pandas Timestamp with Time-Offset Column
<p>I get daily reports which include a timestamp column and a UTC Offset column. Using pandas, I can convert the int Timestamp into a datetime64 type. I unfortunately can't figure out how to use the offset.</p> <p>Since the 'UTC Offset' column comes in as a string I have tried converting it to an int to help, but can't figure out how to use it. I tried using pd.offsets.Hour, but that can't use the column of offsets.</p> <pre><code>df = pd.read_csv(filename, encoding='utf-8', delimiter=r'\t',engine='python') df['Timestamp'] = pd.to_datetime(df[r'Stream Timestamp'],utc=True, unit='s') print(df[:3][r'Stream Timestamp']) 0 2019-05-01 14:21:37+00:00 1 2019-05-01 15:50:12+00:00 Name: Stream Timestamp, dtype: datetime64[ns, UTC] 0 -06:00 1 +01:00 2 -04:00 Name: UTC Offset, dtype: object df[r"UTC Offset"] = df[r"UTC Offset"].astype(int) </code></pre> <p>Optimally, I want to do something like this</p> <pre><code>df[r'Adjusted'] = df[r'Timestamp'] + pd.offsets.Hour(df[r'UTC Offset']) </code></pre> <p>However I can't seem to figure out how best to reference the column of offsets. I'm a little new to datetime in general, but any help would be appreciated!</p>
<p>I prefer to convert to datetime, as it's easier to apply offsets than with the native pandas time format.</p> <p>To get the timezone offset:</p> <pre><code>from datetime import datetime, timedelta
from tzlocal import get_localzone  # pip install tzlocal

millis = 1288483950000
ts = millis * 1e-3
local_dt = datetime.fromtimestamp(ts, get_localzone())
utc_offset = local_dt.utcoffset()
hours_offset = utc_offset / timedelta(hours=1)
</code></pre> <p>Then apply the offset:</p> <pre><code>df['dt'] = pd.to_datetime(df['timestamp'], infer_datetime_format=True, errors='ignore')
df['dtos'] = df['dt'] + timedelta(hours=hours_offset)
</code></pre>
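<p>Note that the sketch above uses the machine's local timezone. To use the per-row <code>UTC Offset</code> column from the report instead, a minimal sketch (assuming whole-hour offsets like <code>-06:00</code> as in the sample, so only the hour part is parsed):</p> <pre><code>hours = df['UTC Offset'].str.slice(0, 3).astype(int)  # '+01:00' -&gt; 1, '-06:00' -&gt; -6
df['Adjusted'] = df['Timestamp'] + pd.to_timedelta(hours, unit='h')
</code></pre>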
python|pandas|datetime|utc|timezone-offset
0
377,657
57,175,344
Cython references to slots in a numpy array
<p>I have an object with a numpy array instance variable.</p> <p>Within a function, I want to declare local references to slots within that numpy array.</p> <p>E.g.,</p> <pre><code>cdef double&amp; x1 = self.array[0] </code></pre> <p>Reason being, I don't want to spend time instantiating new variables and copying values.</p> <p>Obviously the above code doesn't work. Something about c++ style references not supported. How do I do what I want to do?</p>
<p>C++ references aren't supported as local variables (even in Cython's C++ mode) because they need to be initialized upon creation, and Cython prefers to generate code like:</p> <pre><code># start of function
double&amp; x_ref
# ...
x_ref = something  # assign
# ...
</code></pre> <p>This ensures that variable scope behaves in a "Python way" rather than a "C++ way". It does mean everything needs to be default constructible though.</p> <hr> <p>However, C++ references are usually implemented in terms of pointers, so the solution is just to use pointers yourself:</p> <pre><code>cdef double* x1 = &amp;self.array[1]
x1[0] = 2  # use [0] to dereference pointer
</code></pre> <p>Obviously C++ references make the syntax nicer (you don't have to worry about dereferences and getting addresses) but performance-wise it should be the same.</p>
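<p>One caveat when the target is a NumPy array stored as an instance attribute (as in the question): you can't take the address of an element through the untyped Python object, so a typed memoryview is needed first. A minimal sketch, assuming <code>self.array</code> is a 1-D float64 array:</p> <pre><code>cdef double[:] view = self.array  # typed memoryview over the instance attribute
cdef double* x1 = &amp;view[0]        # now taking the address is legal
x1[0] = 2.0
</code></pre>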
numpy|pointers|reference|cython
1
377,658
57,223,665
How does TensorFlow generate gen_array_ops.py via array_ops.cc?
<p>TensorFlow generates code automatically. I am curious about how TF generates <code>gen_array_ops.py</code> by <a href="https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/ops/array_ops.cc" rel="nofollow noreferrer"><code>array_ops.cc</code></a> ?</p> <p>The generated python file is at <code>python3.6/site-packages/tensorflow/python/ops/gen_array_ops.py</code></p> <pre><code>"""Python wrappers around TensorFlow ops. This file is MACHINE GENERATED! Do not edit. Original C++ source file: array_ops.cc """ ... ... </code></pre>
<p>The Python code generation is done at building time through Bazel. You can find the relevant definition in <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/tensorflow.bzl#L795-L919" rel="noreferrer"><code>tensorflow/tensorflow.bzl</code></a>, I will post here just the header:</p> <pre><code># Generates a Python library target wrapping the ops registered in "deps". # # Args: # name: used as the name of the generated target and as a name component of # the intermediate files. # out: name of the python file created by this rule. If None, then # "ops/gen_{name}.py" is used. # hidden: Optional list of ops names to make private in the Python module. # It is invalid to specify both "hidden" and "op_whitelist". # visibility: passed to py_library. # deps: list of dependencies for the intermediate tool used to generate the # python target. NOTE these `deps` are not applied to the final python # library target itself. # require_shape_functions: leave this as False. # hidden_file: optional file that contains a list of op names to make private # in the generated Python module. Each op name should be on a line by # itself. Lines that start with characters that are invalid op name # starting characters are treated as comments and ignored. # generated_target_name: name of the generated target (overrides the # "name" arg) # op_whitelist: if not empty, only op names in this list will be wrapped. It # is invalid to specify both "hidden" and "op_whitelist". # cc_linkopts: Optional linkopts to be added to tf_cc_binary that contains the # specified ops. def tf_gen_op_wrapper_py( name, out = None, hidden = None, visibility = None, deps = [], require_shape_functions = False, hidden_file = None, generated_target_name = None, op_whitelist = [], cc_linkopts = [], api_def_srcs = []): # ... </code></pre> <p>This is called indirectly through <code>tf_gen_op_wrapper_private_py</code> which you can find in <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/python/build_defs.bzl" rel="noreferrer"><code>tensorflow/python/build_defs.bzl</code></a>. For the case of <code>array_ops</code>, you will find it in <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/python/BUILD#L1843-L1851" rel="noreferrer"><code>tensorflow/python/BUILD</code></a>:</p> <pre><code>tf_gen_op_wrapper_private_py( name = "array_ops_gen", visibility = [ "//learning/brain/python/ops:__pkg__", "//tensorflow/compiler/tests:__pkg__", "//tensorflow/contrib/quantization:__pkg__", "//tensorflow/python/kernel_tests:__pkg__", ], ) </code></pre> <p>And what does this rule do? It calls a program which source you can find at <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/python/framework/python_op_gen_main.cc" rel="noreferrer"><code>tensorflow/python/framework/python_op_gen_main.cc</code></a> (that is the main entry point, it uses other neighboring source files). Essentially, it is a program that goes through the ops registered through the <code>REGISTER_OP</code> macro (defined in <a href="https://github.com/tensorflow/tensorflow/blob/v1.14.0/tensorflow/core/framework/op.h" rel="noreferrer"><code>tensorflow/core/framework/op.h</code></a>) and produces Python code accordingly. I cannot go through the specifics now but you should be able to browse the code if you want to know the details.</p>
python|tensorflow
8
377,659
56,927,268
Error while loading a pretrained resnet model
<p>I am trying to load the pre-trained ResNet model in the below link <a href="https://drive.google.com/open?id=1xkVK92XLZOgYlpaRpG_-WP0Elzg4ewpw" rel="nofollow noreferrer">https://drive.google.com/open?id=1xkVK92XLZOgYlpaRpG_-WP0Elzg4ewpw</a></p> <p>But it gives RuntimeError: The Session graph is empty. Add operations to the graph before calling run().</p> <p>What could be the possible issue?</p> <pre><code>import tensorflow as tf import tensorflow.contrib.slim as slim # Let's load a previously saved meta graph in the default graph # This function returns a Saver saver = tf.train.import_meta_graph('model.ckpt-0.meta') # We can now access the default graph where all our metadata has been loaded graph = tf.get_default_graph() with tf.Session(graph=tf.Graph()) as sess: saver.restore(sess, 'model.ckpt-0.data-00000-of-00001') print('Worked') </code></pre>
<p>You must have a model (the rough house) before you can load its parameters (the bed and furniture). A checkpoint only stores variable values; the session still needs a graph of operations (such as <code>tf.Variable()</code>, <code>tf.add()</code>, <code>tf.nn.softmax_cross_entropy_with_logits()</code>). In your code, <code>tf.Session(graph=tf.Graph())</code> opens the session on a brand-new, empty graph instead of the default graph you imported the meta graph into, hence the error "The Session graph is empty".</p>
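<p>A minimal sketch of the fix. Note that <code>restore</code> takes the checkpoint prefix <code>model.ckpt-0</code>, not the <code>.data-00000-of-00001</code> file:</p> <pre><code>import tensorflow as tf

# import_meta_graph loads the graph structure into the default graph
saver = tf.train.import_meta_graph('model.ckpt-0.meta')
graph = tf.get_default_graph()

# open the session on that same graph, then restore the variable values
with tf.Session(graph=graph) as sess:
    saver.restore(sess, 'model.ckpt-0')  # checkpoint prefix, not the .data file
    print('Worked')
</code></pre>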
tensorflow|resnet|pre-trained-model
0
377,660
57,053,378
Query PubMed with Python - How to get all article details from query to Pandas DataFrame and export them in CSV
<p>How can I get all article details from query on <a href="https://www.ncbi.nlm.nih.gov/pubmed/" rel="nofollow noreferrer">PubMed</a> to Pandas DataFrame and export them all into CSV.</p> <p>I need following article details:</p> <p><strong>pubmed_id, title, keywords, journal, abstract, conclusions,methods, results, copyrights, doi, publication_date, authors</strong></p>
<p>Here is how I did it. It's fully functional code; all you need to do is install pymed with <code>pip install pymed</code>. The function is here:</p> <pre><code>import pandas as pd
from pymed import PubMed

pubmed = PubMed(tool="PubMedSearcher", email="myemail@ccc.com")

## PUT YOUR SEARCH TERM HERE ##
search_term = "Your search term"
results = pubmed.query(search_term, max_results=500)
articleList = []
articleInfo = []

for article in results:
    # Print the type of object we've found (can be either PubMedBookArticle or PubMedArticle).
    # We need to convert it to a dictionary with the available function
    articleDict = article.toDict()
    articleList.append(articleDict)

# Generate list of dict records which will hold all article details that could be fetched from the PubMed API
for article in articleList:
    # Sometimes article['pubmed_id'] contains a list separated with commas - take the first pubmedId in that list - that's the article's pubmedId
    pubmedId = article['pubmed_id'].partition('\n')[0]
    # Append article info to dictionary
    articleInfo.append({u'pubmed_id': pubmedId,
                        u'title': article['title'],
                        u'keywords': article['keywords'],
                        u'journal': article['journal'],
                        u'abstract': article['abstract'],
                        u'conclusions': article['conclusions'],
                        u'methods': article['methods'],
                        u'results': article['results'],
                        u'copyrights': article['copyrights'],
                        u'doi': article['doi'],
                        u'publication_date': article['publication_date'],
                        u'authors': article['authors']})

# Generate Pandas DataFrame from list of dictionaries
articlesPD = pd.DataFrame.from_dict(articleInfo)
# Export the dataframe we just built (the original snippet referenced an undefined `df` here)
export_csv = articlesPD.to_csv(r'C:\Users\YourUsername\Desktop\export_dataframe.csv', index=None, header=True)

# Print first 10 rows of dataframe
print(articlesPD.head(10))
</code></pre>
python|pandas|dictionary|pubmed
9
377,661
57,115,074
tf.logging.__dict__[hparams.verbosity] / 10) KeyError: 'INFO'
<p>Trying to run the GitHub code for non-local recurrent networks, I end up getting this error. How do I debug it?</p> <p>Traceback (most recent call last): File "trainer.py", line 97, in <code>tf.logging.__dict__[hparams.verbosity] / 10)</code> KeyError: 'INFO'</p> <p>I tried editing the code, but it is not working.</p>
<p>On tensorflow version 1.14, the code below will cause the error.</p> <pre><code>tf.logging.__dict__[hparams.verbosity]
</code></pre> <p>So you can fix the code like below. It works on tf 1.14.</p> <pre><code>getattr(tf.logging, hparams.verbosity)
</code></pre>
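<p>For context, this works because <code>tf.logging</code> exposes the standard logging level constants as attributes, e.g. <code>INFO</code> is the integer 20, so downstream arithmetic such as the <code>/ 10</code> in <code>trainer.py</code> behaves as before. A minimal sketch:</p> <pre><code>level = getattr(tf.logging, hparams.verbosity)  # e.g. 'INFO' -&gt; 20
tf.logging.set_verbosity(level)
</code></pre>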
tensorflow
0
377,662
57,180,566
Capture all csv files within all subfolders in main directory - Python 3.x
<p>The code below is used to split csv files based on a given time value. The problem is this code won't capture all the csv files. For example, inside the TT1 folder there are several subfolders, and those subfolders have folders inside them. Within those sub-sub-folders there are csv files. When I give the path as <code>path='/root/Desktop/TT1'</code> it won't process all the files within those sub-sub-folders. How can I fix this, please?</p> <p>AFTER @Serafeim's answer (<a href="https://stackoverflow.com/a/57110519/5025009">https://stackoverflow.com/a/57110519/5025009</a>), I tried this:</p> <pre><code>import pandas as pd
import numpy as np
import glob
import os

path = '/root/Desktop/TT1/'
mystep = 0.4

# define the function
def data_splitter(df, name):
    max_time = df['Time'].max()  # get max value of Time for the current csv file (df)
    myrange = np.arange(0, max_time, mystep)  # build the threshold range
    for k in range(len(myrange)):
        # build the upper values
        temp = df[(df['Time'] &gt;= myrange[k]) &amp; (df['Time'] &lt; myrange[k] + mystep)]
        temp.to_csv("/root/Desktop/T1/{}_{}.csv".format(name, k))

for filename in glob.glob(os.path.join(path, '*.csv')):
    df = pd.read_csv(filename)
    name = os.path.split(filename)[1]  # get the name of the file
    data_splitter(df, name)
</code></pre>
<p>You can automatically get all the subfolders (at any depth) with <code>os.walk</code> on the main path, and run your existing loop inside each one:</p> <pre><code>import pandas as pd
import numpy as np
import glob
import os

path = '/root/Desktop/TT1/'
mystep = 0.4

# define the function
def data_splitter(df, name):
    max_time = df['Time'].max()  # get max value of Time for the current csv file (df)
    myrange = np.arange(0, max_time, mystep)  # build the threshold range
    for k in range(len(myrange)):
        # build the upper values
        temp = df[(df['Time'] &gt;= myrange[k]) &amp; (df['Time'] &lt; myrange[k] + mystep)]
        temp.to_csv("/root/Desktop/T1/{}_{}.csv".format(name, k))

# use os.walk(path) on the main path to get ALL subfolders inside path
for root, dirs, _ in os.walk(path):
    for d in dirs:
        path_sub = os.path.join(root, d)  # this is the current subfolder
        for filename in glob.glob(os.path.join(path_sub, '*.csv')):
            df = pd.read_csv(filename)
            name = os.path.split(filename)[1]  # get the name of the current csv file
            data_splitter(df, name)
</code></pre>
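<p>Alternatively, since Python 3.5 <code>glob</code> can recurse by itself with the <code>**</code> pattern and <code>recursive=True</code>, which avoids the explicit walk and also picks up csv files sitting directly in the top folder. A sketch of just the loop:</p> <pre><code># '**' matches any number of nested directories when recursive=True
for filename in glob.glob(os.path.join(path, '**', '*.csv'), recursive=True):
    df = pd.read_csv(filename)
    name = os.path.split(filename)[1]
    data_splitter(df, name)
</code></pre>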
python|python-3.x|pandas|csv|glob
1
377,663
57,115,873
ValueError: Error when checking target: expected dense_10 to have shape (1,) but got array with shape (19316,)
<p>I am running a CNN that checks images but does not classify them. In fact, the output layer is a dense layer whose size is the (1d) size of the images in the labels.</p> <p>As shown below in the code, I am using model.fit_generator() instead of model.fit, and when it comes to start training the model the following error comes up:</p> <pre><code>ValueError: Error when checking target: expected dense_10 to have shape (1,) but got array with shape (19316,)
</code></pre> <p>Why is this an error? The output of my dense layer is an array of 19316 elements; why does it expect it to have a shape of (1,)?</p> <p>Here attached is also the summary of the model:</p> <pre><code>_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_28 (Conv2D)           (None, 26, 877, 32)       544
_________________________________________________________________
activation_37 (Activation)   (None, 26, 877, 32)       0
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 13, 438, 32)       0
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 12, 437, 16)       2064
_________________________________________________________________
activation_38 (Activation)   (None, 12, 437, 16)       0
_________________________________________________________________
max_pooling2d_29 (MaxPooling (None, 6, 218, 16)        0
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 5, 217, 8)         520
_________________________________________________________________
activation_39 (Activation)   (None, 5, 217, 8)         0
_________________________________________________________________
max_pooling2d_30 (MaxPooling (None, 2, 108, 8)         0
_________________________________________________________________
activation_40 (Activation)   (None, 2, 108, 8)         0
_________________________________________________________________
flatten_10 (Flatten)         (None, 1728)              0
_________________________________________________________________
dropout_10 (Dropout)         (None, 1728)              0
_________________________________________________________________
dense_10 (Dense)             (None, 19316)             33397364
=================================================================
Total params: 33,400,492
Trainable params: 33,400,492
Non-trainable params: 0
_________________________________________________________________
</code></pre> <p>Any suggestions?</p> <p>Thanks a lot in advance!</p> <pre><code>def generator(data_arr, batch_size = 10):
    num = len(data_arr)
    if num % batch_size != 0:
        num = int(num/batch_size)
    # Loop forever so the generator never terminates
    while True:
        for offset in range(0, num, batch_size):
            batch_samples = (data_arr[offset:offset+batch_size])
            samples = []
            labels = []
            for batch_sample in batch_samples:
                samples.append(batch_sample[0])
                labels.append((np.array(batch_sample[1].flatten)).transpose())
            X_ = np.array(samples)
            Y_ = np.array(labels)
            X_ = X_[:, :, :, newaxis]
            print(X_.shape)
            print(Y_.shape)
            yield (X_, Y_)

# compile and train the model using the generator function
train_generator = generator(training_data, batch_size = 10)
validation_generator = generator(val_data, batch_size = 10)

run_opts = tf.RunOptions(report_tensor_allocations_upon_oom = True)

model = Sequential()
model.add(Conv2D(32, (4, 4), strides=(2, 2), input_shape = (55, 1756, 1)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(16, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Conv2D(8, (2, 2)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2)))
model.add(Activation('softmax'))
model.add(Flatten())  # this converts our 3D feature maps to 1D feature vectors
model.add(Dropout(0.3))
model.add(Dense(19316))

model.compile(loss = 'sparse_categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'], options = run_opts)
model.summary()

batch_size = 20
nb_epoch = 6

model.fit_generator(train_generator, steps_per_epoch = len(training_data), epochs = nb_epoch, validation_data = validation_generator, validation_steps = len(val_data))
</code></pre>
<p>You are currently using the <code>sparse_categorical_crossentropy</code> loss, which needs integer labels and does the one-hot encoding internally, but your labels are already one-hot encoded.</p> <p>So for this case you should revert back to the <code>categorical_crossentropy</code> loss.</p>
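<p>Concretely, keeping the rest of the compile call from the question unchanged, the change is just:</p> <pre><code>model.compile(loss = 'categorical_crossentropy', optimizer = 'adam', metrics = ['accuracy'], options = run_opts)
</code></pre>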
python|tensorflow|keras|deep-learning
1
377,664
57,186,654
Can't restore tensorflow variables
<p>I have a class as follows, and the <code>load</code> function returns the saved TensorFlow graph.</p> <pre><code>class StoredGraph():
    .
    .
    .
    def build_meta_saver(self, meta_file=None):
        meta_file = self._get_latest_checkpoint() + '.meta' if not meta_file else meta_file
        meta_saver = tf.train.import_meta_graph(meta_file)
        return meta_saver

    def load(self, sess, saverObj):
        saverObj.restore(sess, self._get_latest_checkpoint())
        graph = tf.get_default_graph()
        return graph
</code></pre> <p>I have another class, let's call it <code>TrainNet()</code>.</p> <pre><code>class TrainNet():
    .
    .
    .
    def train(dataset):
        self.train_graph = tf.Graph()
        meta_saver, saver = None, None
        GraphIO = StoredGraph(experiment_dir)
        latest_checkpoint = GraphIO._get_latest_checkpoint()
        with self.train_graph.as_default():
            tf.set_random_seed(42)
            if not latest_checkpoint:
                # build graph here
                self.build_graph()
            else:
                meta_saver = GraphIO.build_meta_saver()  # this loads the meta file
            with tf.Session(graph=self.train_graph) as sess:
                if not meta_saver:
                    sess.run(tf.global_variables_initializer())
                if latest_checkpoint:
                    self.scaler, self.train_graph = GraphIO.load(sess, meta_saver)
                # here access placeholders using self.train_graph.get_tensor_by_name()...
                # and feed the values
</code></pre> <p>In my training class I use the above class simply by loading the graph with the <code>load</code> function, as <code>self.train_graph = StoredGraphclass.load(sess, metasaver)</code>.</p> <p>My doubt is: are all the variables restored by loading the saved graph? Normally everyone defines the restoration operation in the same script, like <code>saver.restore()</code>, which restores all the variables of the graph. But I am calling <code>saver.restore()</code> in a different class and using the returned graph to access placeholders.</p> <p>I think this way not all the variables are restored. Is the above approach wrong? This doubt arose when I checked the values of weights in two different <code>.meta</code> files written at different training steps, and the values were exactly the same, meaning either this variable wasn't updated or the restoration method has some fault.</p>
<p>As long as you have created all the necessary variables in your file and given them the <strong>same</strong> "name" (and of course the shape needs to be correct as well), <code>restore</code> will load all the appropriate values into the appropriate variables. <a href="https://www.tensorflow.org/guide/saved_model#restore_variables" rel="nofollow noreferrer">Here</a> you can find a toy example showing you how this can be done.</p>
python|tensorflow|deep-learning
0
377,665
56,910,427
Alternative to nested np.where statements to retain NaN values while creating a new pandas boolean column based on two other existing columns
<p>I'm trying to figure out a more straightforward alternative for evaluating and creating a new column in a pandas dataframe based on two other columns that contain either True, False, or NaN values. I want the new column to evaluate as follows relative to the two reference columns:</p> <ul> <li>If either True -&gt; True</li> <li>If at least one False and neither True -&gt; False</li> <li>If both NaN -&gt; NaN</li> </ul> <p>I've figured out a solution using several nested np.where statements, but would prefer a more straightforward approach. For a single reference column, I figured out how to do this (shown below as col4), but can't figure out if there's a way to adapt this to factor in multiple reference columns.</p> <p><strong>Current Solution:</strong></p> <pre><code>import pandas as pd
import numpy as np

d = {'col1': [True, True, True, False, False, False, np.nan, np.nan, np.nan],
     'col2': [True, False, np.nan, True, False, np.nan, True, False, np.nan]}
df = pd.DataFrame(data=d)

df['col3'] = np.where(
    pd.notnull(df['col1']) &amp; pd.notnull(df['col2']),
    (df['col1'] == True) | (df['col2'] == True),
    np.where(
        pd.isnull(df['col1']) &amp; pd.isnull(df['col2']),
        np.nan,
        np.where(pd.notnull(df['col1']), df['col1'], df['col2'])
    )
)
</code></pre> <p><strong>Single Reference Column Solution:</strong></p> <pre><code>df['col4'] = df['col1'].map(lambda x: x, na_action='ignore')
</code></pre>
<p><code>np.select()</code> is made for this type of job:</p> <pre><code>df['col3'] = pd.Series(np.select( [(df.col1 == True) | (df.col2 == True), (df.col1 == False) | (df.col2 == False)], [True, False], np.array(np.nan, object))) </code></pre> <p>Or, using only Pandas, but I think this way is less readable:</p> <pre><code>df['col3'] = df.col1.where(df.col1, df.col2.where(df.col2.notnull(), df.col1)) </code></pre>
python|pandas|numpy
1
377,666
57,113,188
While using GPU for PyTorch models, getting the CUDA error: Unknown error?
<p>I am trying to use a pre-trained model using PyTorch. While loading the model to GPU, it is giving the following error:</p> <pre><code>Traceback (most recent call last): File "model\vgg_model.py", line 45, in &lt;module&gt; vgg_model1 = VGGFeatureExtractor(True).double().to(device) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 386, in to return self._apply(convert) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 193, in _apply module._apply(fn) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 193, in _apply module._apply(fn) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 199, in _apply param.data = fn(param.data) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\nn\modules\module.py", line 384, in convert return t.to(device, dtype if t.is_floating_point() else None, non_blocking) File "C:\Users\myidi\Anaconda3\envs\openpose\lib\site-packages\torch\cuda\__init__.py", line 163, in _lazy_init torch._C._cuda_init() RuntimeError: CUDA error: unknown error </code></pre> <p>I have a Windows 10 Laptop, Nvidia 940m GPU, Latest Pytorch and CUDA Toolkit 9.0 (Also tried on 10.0). </p> <p>I have tried re-installing the GPU drivers, restarted my machine, re-installed PyTorch, Torchvision and CUDA Toolkit. </p> <p>While using the following to see if PyTorch is detecting a GPU:</p> <pre><code>device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') </code></pre> <p>I am getting the following output: <code>device(type='cuda')</code>. </p> <p>What could be the possible issues? I have tried the solution mentioned here: <a href="https://github.com/pytorch/pytorch/issues/20990" rel="nofollow noreferrer">https://github.com/pytorch/pytorch/issues/20990</a> and the issue still persists. </p> <p>I simply put the <code>torch.cuda.current_device()</code> after <code>import torch</code> but the issue still persists.</p>
<p>Strangely, this worked by using CUDA Toolkit 10.1. I don't know why the latest toolkit is not the default in the section of the PyTorch website that provides the install commands.</p> <p>I used the following command to install the libraries: <code>conda install pytorch torchvision cudatoolkit=10.1 -c pytorch</code></p>
pytorch
3
377,667
57,277,214
Multi-GPU training of AllenNLP coreference resolution
<p>I'm trying to replicate (or come close) to the results obtained by the <a href="https://www.aclweb.org/anthology/D17-1018" rel="nofollow noreferrer">End-to-end Neural Coreference Resolution</a> paper on the <a href="http://conll.cemantix.org/2012/introduction.html" rel="nofollow noreferrer">CoNLL-2012 shared task</a>. I intend to do some enhancements on top of this, so I decided to use <a href="http://conll.cemantix.org/2012/introduction.html" rel="nofollow noreferrer">AllenNLP's <code>CoreferenceResolver</code></a>. This is how I'm initialising &amp; training the model:</p> <pre class="lang-py prettyprint-override"><code>import torch from allennlp.common import Params from allennlp.data import Vocabulary from allennlp.data.dataset_readers import ConllCorefReader from allennlp.data.dataset_readers.dataset_utils import Ontonotes from allennlp.data.iterators import BasicIterator, MultiprocessIterator from allennlp.data.token_indexers import SingleIdTokenIndexer, TokenCharactersIndexer from allennlp.models import CoreferenceResolver from allennlp.modules import Embedding, FeedForward from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper from allennlp.modules.seq2vec_encoders import CnnEncoder from allennlp.modules.text_field_embedders import BasicTextFieldEmbedder from allennlp.modules.token_embedders import TokenCharactersEncoder from allennlp.training import Trainer from allennlp.training.learning_rate_schedulers import LearningRateScheduler from torch.nn import LSTM, ReLU from torch.optim import Adam def read_data(directory_path): data = [] for file_path in Ontonotes().dataset_path_iterator(directory_path): data += dataset_reader.read(file_path) return data INPUT_FILE_PATH_TEMPLATE = "data/CoNLL-2012/v4/data/%s" dataset_reader = ConllCorefReader(10, {"tokens": SingleIdTokenIndexer(), "token_characters": TokenCharactersIndexer()}) training_data = read_data(INPUT_FILE_PATH_TEMPLATE % "train") validation_data = read_data(INPUT_FILE_PATH_TEMPLATE % "development") vocabulary = Vocabulary.from_instances(training_data + validation_data) model = CoreferenceResolver(vocab=vocabulary, text_field_embedder=BasicTextFieldEmbedder({"tokens": Embedding.from_params(vocabulary, Params({"embedding_dim": embeddings_dimension, "pretrained_file": "glove.840B.300d.txt"})), "token_characters": TokenCharactersEncoder(embedding=Embedding(num_embeddings=vocabulary.get_vocab_size("token_characters"), embedding_dim=8, vocab_namespace="token_characters"), encoder=CnnEncoder(embedding_dim=8, num_filters=50, ngram_filter_sizes=(3, 4, 5), output_dim=100))}), context_layer=PytorchSeq2SeqWrapper(LSTM(input_size=400, hidden_size=200, num_layers=1, dropout=0.2, bidirectional=True, batch_first=True)), mention_feedforward=FeedForward(input_dim=1220, num_layers=2, hidden_dims=[150, 150], activations=[ReLU(), ReLU()], dropout=[0.2, 0.2]), antecedent_feedforward=FeedForward(input_dim=3680, num_layers=2, hidden_dims=[150, 150], activations=[ReLU(), ReLU()], dropout=[0.2, 0.2]), feature_size=20, max_span_width=10, spans_per_word=0.4, max_antecedents=250, lexical_dropout=0.5) if torch.cuda.is_available(): cuda_device = 0 model = model.cuda(cuda_device) else: cuda_device = -1 iterator = BasicIterator(batch_size=1) iterator.index_with(vocabulary) optimiser = Adam(model.parameters(), weight_decay=0.1) Trainer(model=model, train_dataset=training_data, validation_dataset=validation_data, optimizer=optimiser, learning_rate_scheduler=LearningRateScheduler.from_params(optimiser, Params({"type": "step", "step_size": 100})), 
iterator=iterator, num_epochs=150, patience=1, cuda_device=cuda_device).train() </code></pre> <p>After reading the data I've trained the model but ran out of GPU memory: <code>RuntimeError: CUDA out of memory. Tried to allocate 4.43 GiB (GPU 0; 11.17 GiB total capacity; 3.96 GiB already allocated; 3.40 GiB free; 3.47 GiB cached)</code>. Therefore, I attempted to make use of multiple GPUs to train this model. I'm making use of Tesla K80s (which have 12GiB memory).</p> <p>I've tried making use of AllenNLP's <a href="https://allenai.github.io/allennlp-docs/api/allennlp.data.iterators.html#multiprocess-iterator" rel="nofollow noreferrer"><code>MultiprocessIterator</code></a>, by initialising the <code>iterator</code> as <code>MultiprocessIterator(BasicIterator(batch_size=1), num_workers=torch.cuda.device_count())</code>. However, only 1 GPU is being used (by monitoring the memory usage through the <code>nvidia-smi</code> command) &amp; got the error below. I also tried fiddling with its parameters (increasing <code>num_workers</code> or decreasing <code>output_queue_size</code>) &amp; the <code>ulimit</code> (as mentioned by <a href="https://github.com/pytorch/pytorch/issues/973#issuecomment-345088750" rel="nofollow noreferrer">this PyTorch issue</a>) to no avail.</p> <pre><code>Process Process-3: Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap self.run() File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/.local/lib/python3.6/site-packages/allennlp/data/iterators/multiprocess_iterator.py", line 32, in _create_tensor_dicts output_queue.put(tensor_dict) File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run self._target(*self._args, **self._kwargs) File "/home/user/.local/lib/python3.6/site-packages/allennlp/data/iterators/multiprocess_iterator.py", line 32, in _create_tensor_dicts output_queue.put(tensor_dict) File "&lt;string&gt;", line 2, in put File "&lt;string&gt;", line 2, in put File "/usr/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod raise convert_to_error(kind, result) File "/usr/lib/python3.6/multiprocessing/managers.py", line 772, in _callmethod raise convert_to_error(kind, result) multiprocessing.managers.RemoteError: --------------------------------------------------------------------------- Traceback (most recent call last): File "/usr/lib/python3.6/multiprocessing/managers.py", line 228, in serve_client request = recv() File "/usr/lib/python3.6/multiprocessing/connection.py", line 251, in recv return _ForkingPickler.loads(buf.getbuffer()) File "/home/user/.local/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 276, in rebuild_storage_fd fd = df.detach() File "/usr/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach return reduction.recv_handle(conn) File "/usr/lib/python3.6/multiprocessing/reduction.py", line 182, in recv_handle return recvfds(s, 1)[0] File "/usr/lib/python3.6/multiprocessing/reduction.py", line 161, in recvfds len(ancdata)) RuntimeError: received 0 items of ancdata --------------------------------------------------------------------------- </code></pre> <p>I also tried achieving this through <a href="https://pytorch.org/docs/stable/nn.html#dataparallel" rel="nofollow noreferrer">PyTorch's DataParallel</a>, by wrapping the
model's <code>context_layer</code>, <code>mention_feedforward</code>, <code>antecedent_feedforward</code> with a custom <code>DataParallelWrapper</code> (to provide compatibility with the AllenNLP-assumed class functions). Still, only 1 GPU is used &amp; it eventually runs out of memory as before.</p> <pre><code>class DataParallelWrapper(DataParallel): def __init__(self, module): super().__init__(module) def get_output_dim(self): return self.module.get_output_dim() def get_input_dim(self): return self.module.get_input_dim() def forward(self, *inputs): return self.module.forward(inputs) </code></pre>
<p>After some digging through the code I found out that AllenNLP does this under the hood directly through its <a href="https://allenai.github.io/allennlp-docs/api/allennlp.training.trainer.html#allennlp.training.trainer.Trainer" rel="nofollow noreferrer">Trainer</a>. The <code>cuda_device</code> can either be a single <code>int</code> (in the case of single-processing) or a <code>list</code> of <code>int</code>s (in the case of multi-processing):</p> <blockquote> <p><code>cuda_device</code> : <code>Union[int, List[int]]</code>, optional (default = -1) An integer or list of integers specifying the CUDA device(s) to use. If -1, the CPU is used.</p> </blockquote> <p>So all GPU devices needed should be passed on instead:</p> <pre class="lang-py prettyprint-override"><code>if torch.cuda.is_available(): cuda_device = list(range(torch.cuda.device_count())) model = model.cuda(cuda_device[0]) else: cuda_device = -1 </code></pre> <p>Note that the <code>model</code> still has to be manually moved to the GPU (via <code>model.cuda(...)</code>), as it would otherwise try to use multiple CPUs instead.</p>
python|pytorch|allennlp
2
377,668
57,210,071
How do I use the pandas.melt function to unpivot a few columns while keeping the rest intact
<p>I am working with a database with 66 columns and I wish to unpivot only 3 columns using python <code>pandas.melt</code> function. </p> <pre><code>df = pd.melt(df,value_vars=["RFR 1","RFR 2","RFR 3"],var_name="RFR Index",value_name="RFR Mode") </code></pre> <p>I'm finding all the other columns are dropped unless I set them as <code>id_vars</code>. How do I keep them all without listing all of them? (since there are so many of them)</p>
<p>IIUC, you can use <code>pandas.Index.difference</code> to get all columns of your dataframe that are not in your specified list. </p> <p>A bit of a nonsensical example, but:</p> <pre class="lang-py prettyprint-override"><code>df = pd.DataFrame(data=np.random.randn(5,10), columns=['a','b','c','d','e','f','g','h','i','j']) val_vars = ['e','f','g'] other_vars = df.columns.difference(val_vars) df.melt(id_vars=other_vars, value_vars=val_vars) </code></pre> <p>An alternative approach not using pandas-specific functionality would be to use sets: </p> <p><code>other_vars = set(df.columns) - set(val_vars)</code></p>
python|python-3.x|pandas
4
377,669
56,929,866
Merge operation is occupying full RAM
<p>I have 20 CSV files having a maximum size of 1 GB. In all these files, there are only two common columns "X", "Y". I am trying to merge these files on ["X", "Y"] to get a single file with all the columns. But, while doing so, I am getting <strong>MemoryError</strong> after merging 10 files.<br>Please help me to find a solution.<br> Please find the below specifications:<br><br></p> <pre><code>RAM: 504 GB CPU: 160 Core Python Version: 3.7.0 Pandas Version: 0.23.4 </code></pre> <p>Sample Code:</p> <pre><code>final_df = pd.DataFrame() for f in file_list: df = pd.read_csv(f) if final_df.empty: final_df = df else: final_df = final_df.merge(df, on = ["X","Y"], how = "left") return final_df </code></pre>
<p>Have you tried freeing up memory manually with the garbage collector (<code>gc.collect()</code>)? You can do it at the end of each loop iteration; deleting the per-file frame first helps, since the collector can only reclaim objects that are no longer referenced:</p> <pre><code>import gc

final_df = pd.DataFrame()
for f in file_list:
    df = pd.read_csv(f)
    if final_df.empty:
        final_df = df
    else:
        final_df = final_df.merge(df, on=["X", "Y"], how="left")
        del df  # drop the reference so the collector can free it
        gc.collect()
return final_df
</code></pre>
python-3.x|pandas
0
377,670
57,225,064
want to calculate values of new columns by picking up the formula defined in another column
<p>I have a df with n columns calculated dynamically. That df has one column which defines the formula I need to apply to calculate the values of another new column. The formula needs to be applied to the existing columns of that df.</p> <p>For example: df1</p> <pre><code>Col1   Col2   Col3   Col4 (Formula)   Col5 (calculated by executing the formula in Col4)
2017   12     2      Col2/Col3        6
2018   14     7      Col2*Col3        98
</code></pre> <p>So what I want is that whatever formula is written in Col4 is applied to the existing columns as given, so that the new value of Col5 can be calculated at each row.</p>
<p>Use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.eval.html" rel="noreferrer"><code>df.eval()</code></a>:</p> <pre><code>import numpy as np

df['Col5'] = np.diag(df.eval(df['Col4(Formula)']))
print(df)
</code></pre> <hr> <pre><code>   Col1  Col2  Col3 Col4(Formula)  Col5
0  2017    12     2     Col2/Col3     6
1  2018    14     7     Col2*Col3    98
</code></pre>
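<p>A note on cost: passing the whole formula column to <code>df.eval</code> appears to evaluate each formula against every row, which is why <code>np.diag</code> is needed to pick the matching diagonal and an n×n intermediate gets built. For large frames, a row-by-row sketch with <code>pd.eval</code> (whose documented <code>local_dict</code> parameter supplies the row's values) avoids that, at the cost of an <code>apply</code> loop:</p> <pre><code>df['Col5'] = df.apply(lambda r: pd.eval(r['Col4(Formula)'], local_dict=r.to_dict()), axis=1)
</code></pre>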
python|pandas
5
377,671
56,974,974
Pandas resample() Series giving incorrect indexes
<p>I am trying to bin a multi-year time series of dates and float values. I'm trying to aggregate each day in to 15 minute bins. So I group the data set by day and then resample in 15 minute increments on each day.</p> <p>The results seemed odd so I took a closer look at the behaviour of the resampling. The code below summarizes the kind of results I observed (I run it in repl.it)</p> <pre><code>aindex = pd.to_datetime([ "2013-04-05 04:15:31", "2013-04-05 05:15:18", "2013-04-05 05:15:19", "2013-04-05 05:15:19", "2013-04-05 05:17:15", "2013-04-05 07:06:31", "2013-04-09 04:15:31", "2013-04-09 05:15:18", "2013-04-09 05:15:19", "2013-04-09 05:15:19", "2013-04-09 05:17:15", "2013-04-09 07:06:31", "2013-04-09 07:21:28", "2013-04-09 09:18:19", "2013-04-09 09:19:19", "2013-04-09 09:21:31"]) a = pd.Series([-4.50e+08, -4.80e+08, -6.10e+08, -5.80e+08, -5.70e+08, -5.710e+08, -4.598432e+08, -4.814140e+08, -6.109284e+08, -5.870819e+08, -5.759888e+08, -5.713363e+08, -5.275122e+07, -2.853787e+08, -2.523782e+08, -4.273267e+08],aindex) print(a) print(a.groupby(a.index).apply(lambda x: x)) print(a.resample("15T", base=0).apply(lambda x: x)) print(a.groupby(a.index).resample("15T").apply(lambda x: x)) </code></pre> <p>'groupby' behaves as expected but note that each value of 'x' is type pd.Series. 'resample' also returns type pd.Series but appears to miss values when I display it in repl.it or Jupyter but if you change .apply(lambda x: x) to .apply(lambda x: list(x)) you can see there are actually multiple values. 'groupby'+'resample' almost does what I expected ie. each day has 15 minute bins except the indexing is wrong anywhere a 'resample' returned more than one value.</p> <p>I'm trying to understand what I'm seeing so I can apply the process with confidence. Is this correct behaviour and if so why?</p> <p>Note: To clarify a bit more my expectations. If I look at the result of a resample for one day then resample includes empty bins:</p> <pre><code>2013-04-05 04:15:00 -450000000.0 2013-04-05 04:30:00 NaN 2013-04-05 04:45:00 NaN 2013-04-05 05:00:00 NaN 2013-04-05 05:15:00 -570000000.0 2013-04-05 05:30:00 NaN 2013-04-05 05:45:00 NaN 2013-04-05 06:00:00 NaN 2013-04-05 06:15:00 NaN 2013-04-05 06:30:00 NaN 2013-04-05 06:45:00 NaN 2013-04-05 07:00:00 -571000000.0 2013-04-05 07:15:00 NaN 2013-04-05 07:30:00 NaN 2013-04-05 07:45:00 NaN 2013-04-05 08:00:00 NaN 2013-04-05 08:15:00 NaN 2013-04-05 08:30:00 NaN 2013-04-05 08:45:00 NaN 2013-04-05 09:00:00 NaN 2013-04-05 09:15:00 NaN 2013-04-05 09:30:00 NaN 2013-04-05 09:45:00 NaN 2013-04-05 10:00:00 NaN </code></pre> <p>But if a groupby is done first I don't get empty bins. Why not?:</p> <pre><code>... 2013-04-05 04:15:31 2013-04-05 04:15:00 -450000000.0 2013-04-05 05:15:18 2013-04-05 05:15:00 -480000000.0 2013-04-05 05:15:19 2013-04-05 05:15:00 -580000000.0 2013-04-05 05:17:15 2013-04-05 05:15:00 -570000000.0 2013-04-05 07:06:31 2013-04-05 07:00:00 -571000000.0 ... </code></pre>
<p>Resample is a tricky function. The main issue with the resampling is that you need to select which value you want to keep (using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.last.html" rel="nofollow noreferrer"><code>pandas.DataFrame.last</code></a> or <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.first.html" rel="nofollow noreferrer"><code>pandas.DataFrame.first</code></a>).</p> <p>So doing:</p> <pre><code>&gt; a.resample("15T", base=0).last()
2013-04-05 04:15:00   -450000000.0
2013-04-05 04:30:00            NaN
2013-04-05 04:45:00            NaN
2013-04-05 05:00:00            NaN
2013-04-05 05:15:00   -570000000.0
</code></pre> <p>would remove the need for using <code>.apply(lambda x: x)</code>, since it will keep the last element from each sample.</p> <p><code>pandas.DataFrame</code> resample already uses <code>mean</code> as default.</p> <p>To have an equivalent with <code>groupby</code>, it is safer to group and then apply the mean, so we can aggregate the values for the interval:</p> <pre><code>&gt; a.groupby(a.index).mean().resample("15T", base=0).last()
2013-04-05 04:15:00   -450000000.0
2013-04-05 04:30:00            NaN
2013-04-05 04:45:00            NaN
2013-04-05 05:00:00            NaN
2013-04-05 05:15:00   -570000000.0
</code></pre> <p>I hope I understood your question correctly. Let me know if it helps.</p> <p><strong>Edit</strong></p> <p>You could try to keep all the indices using:</p> <pre><code>&gt; a.resample('15T').asfreq()
</code></pre> <p>But you will get: <code>ValueError: cannot reindex from a duplicate axis</code>.</p> <p>This is the main issue: indices in pandas cannot be duplicated. That is why <code>groupby</code> with a mean works, since it groups the items into groups of one element and then resamples for that group.</p> <p>One way to accomplish this without <code>groupby</code> is using MultiIndex dataframes:</p> <pre><code>&gt; a.to_frame().set_index([a.index, a.index.round('15T')])
                                                    0
2013-04-05 04:15:31 2013-04-05 04:15:00 -450000000.0
2013-04-05 05:15:18 2013-04-05 05:15:00 -480000000.0
2013-04-05 05:15:19 2013-04-05 05:15:00 -610000000.0
                    2013-04-05 05:15:00 -580000000.0
2013-04-05 05:17:15 2013-04-05 05:15:00 -570000000.0
2013-04-05 07:06:31 2013-04-05 07:00:00 -571000000.0
2013-04-09 04:15:31 2013-04-09 04:15:00 -459843200.0
2013-04-09 05:15:18 2013-04-09 05:15:00 -481414000.0
2013-04-09 05:15:19 2013-04-09 05:15:00 -610928400.0
                    2013-04-09 05:15:00 -587081900.0
2013-04-09 05:17:15 2013-04-09 05:15:00 -575988800.0
2013-04-09 07:06:31 2013-04-09 07:00:00 -571336300.0
2013-04-09 07:21:28 2013-04-09 07:15:00  -52751220.0
2013-04-09 09:18:19 2013-04-09 09:15:00 -285378700.0
2013-04-09 09:19:19 2013-04-09 09:15:00 -252378200.0
2013-04-09 09:21:31 2013-04-09 09:15:00 -427326700.0
</code></pre> <p>Or, altering the index order to group by the rounded index:</p> <pre><code>&gt; a.to_frame().set_index([a.index.round('15T'), a.index])
2013-04-05 04:15:00 2013-04-05 04:15:31 -450000000.0
2013-04-05 05:15:00 2013-04-05 05:15:18 -480000000.0
                    2013-04-05 05:15:19 -610000000.0
                    2013-04-05 05:15:19 -580000000.0
                    2013-04-05 05:17:15 -570000000.0
2013-04-05 07:00:00 2013-04-05 07:06:31 -571000000.0
2013-04-09 04:15:00 2013-04-09 04:15:31 -459843200.0
2013-04-09 05:15:00 2013-04-09 05:15:18 -481414000.0
                    2013-04-09 05:15:19 -610928400.0
                    2013-04-09 05:15:19 -587081900.0
                    2013-04-09 05:17:15 -575988800.0
2013-04-09 07:00:00 2013-04-09 07:06:31 -571336300.0
2013-04-09 07:15:00 2013-04-09 07:21:28  -52751220.0
2013-04-09 09:15:00 2013-04-09 09:18:19 -285378700.0
                    2013-04-09 09:19:19 -252378200.0
                    2013-04-09 09:21:31 -427326700.0
</code></pre>
python-3.x|pandas|time-series
2
377,672
57,200,908
Remove/replace columns values based on another columns using pandas
<p>I have a data frame like this:</p> <pre><code>df

col1   col2   col3
ab     1      prab
cd     2      cdff
ef     3      eef
</code></pre> <p>I want to remove the col1 values from the col3 values.</p> <p>The final data frame should look like:</p> <pre><code>df

col1   col2   col3
ab     1      pr
cd     2      ff
ef     3      e
</code></pre> <p>How do I do it using pandas in the most effective way?</p>
<p>Use <code>.apply</code> with <code>replace</code> over <code>axis=1</code>:</p> <pre><code>df['col3'] = df.apply(lambda x: x['col3'].replace(x['col1'], ''), axis=1) </code></pre> <p><strong>Output</strong></p> <pre><code> col1 col2 col3 0 ab 1 pr 1 cd 2 ff 2 ef 3 e </code></pre>
python|pandas|dataframe
2
377,673
57,078,594
Filling cell based on existing cells
<p>I have data in the following format:</p> <pre><code>8A564   nan        json
8A928   nan        json
8A563   nan        json
8A564   10616280   json
8A563   10616222   json
8A564   nan        json
8B1BB   10982483   json
8A564   10616280   json
</code></pre> <p>I would like to fill the missing values in the second column from rows that have the same value in the first column and a non-null value in the second. So I would get the following:</p> <pre><code>8A564   10616280   json
8A928   nan        json
8A563   10616222   json
8A564   10616280   json
8A563   10616222   json
8A564   10616280   json
8B1BB   10982483   json
8A564   10616280   json
</code></pre> <p>How can this be achieved?</p>
<h3><code>groupby</code> and <code>bfill</code></h3> <p>Keep in mind the the <code>0</code> in <code>groupby(0)</code> refers to the column named <code>0</code>. If your column has a different name, use that.</p> <pre><code>df.groupby(0).bfill() 0 1 2 0 8A564 10616280 json 1 8A928 NaN json 2 8A563 10616222 json 3 8A564 10616280 json 4 8A563 10616222 json 5 8A564 10616280 json 6 8B1BB 10982483 json 7 8A564 10616280 json </code></pre> <hr> <p>If the ordering of what is null doesn't lend itself to back filling, you can get the first non-null value.</p> <pre><code>df[1] = df.groupby(0)[1].transform('first') df 0 1 2 0 8A564 10616280 json 1 8A928 NaN json 2 8A563 10616222 json 3 8A564 10616280 json 4 8A563 10616222 json 5 8A564 10616280 json 6 8B1BB 10982483 json 7 8A564 10616280 json </code></pre>
python|pandas
5
377,674
57,236,261
Why casting input and model to float16 doesn't work?
<p>I'm trying to change the inputs and a deep learning model to float16, since I'm using a T4 GPU and it works much faster with fp16. Here's part of the code: I first have my model, and then I made a dummy data point to get the data casting figured out first (I ran it with the whole batch and got the same error).</p> <pre><code>model = CRNN().to(device)
model = model.type(torch.cuda.HalfTensor)

data_recon = torch.from_numpy(data_recon)
data_truth = torch.from_numpy(data_truth)

dummy = data_recon[0:1,:,:,:,:]  # Gets just one batch
dummy = dummy.to(device)
dummy = dummy.type(torch.cuda.HalfTensor)

model(dummy)
</code></pre> <p>And here's the error I get:</p> <pre><code>---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
&lt;ipython-input-27-1fe8ecc524aa&gt; in &lt;module&gt;
----&gt; 1 model(dummy)

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--&gt; 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

&lt;ipython-input-12-06f39f9304a1&gt; in forward(self, inputs, test)
     57
     58             net['t%d_x0'%(i-1)] = net['t%d_x0'%(i-1)].view(times, batch, self.filter_size, width, height)
---&gt; 59             net['t%d_x0'%i] = self.bcrnn(inputs, net['t%d_x0'%(i-1)], test)
     60             net['t%d_x0'%i] = net['t%d_x0'%i].view(-1, self.filter_size, width, height)
     61

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--&gt; 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

&lt;ipython-input-11-b687949e9ce5&gt; in forward(self, inputs, input_iteration, test)
     31         hidden = initial_hidden
     32         for i in range(times):
---&gt; 33             hidden = self.CRNN(inputs[i], input_iteration[i], hidden)
     34             output_forward.append(hidden)
     35         output_forward = torch.cat(output_forward)

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--&gt; 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

&lt;ipython-input-10-15c0b221226b&gt; in forward(self, inputs, hidden_iteration, hidden)
     23     def forward(self, inputs, hidden_iteration, hidden):
     24         in_to_hid = self.i2h(inputs)
---&gt; 25         hid_to_hid = self.h2h(hidden)
     26         ih_to_ih = self.ih2ih(hidden_iteration)
     27

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    491             result = self._slow_forward(*input, **kwargs)
    492         else:
--&gt; 493             result = self.forward(*input, **kwargs)
    494         for hook in self._forward_hooks.values():
    495             hook_result = hook(self, input, result)

/opt/anaconda3/lib/python3.7/site-packages/torch/nn/modules/conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--&gt; 338                         self.padding, self.dilation, self.groups)
    339
    340

RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
</code></pre>
<p>Check out your implementation of <code>CRNN</code>. My guess is that you have a "hidden" state tensor stored in the model, but not as a "buffer", just as a regular tensor. Therefore, when casting the model to float16 the hidden state remains float32 and causes you this error.</p> <p>Try to store the hidden state as a buffer in the module (see <a href="https://pytorch.org/docs/stable/nn.html#torch.nn.Module.register_buffer" rel="nofollow noreferrer"><code>register_buffer</code></a> for more info).<br> Alternatively, you can explicitly cast to float16 any member tensor in the module by overloading the <code>.to()</code> method of your model.</p>
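<p>A rough sketch of the buffer approach (the names are made up; adapt them to your <code>CRNN</code>):</p> <pre><code>import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, filter_size, width, height):
        super().__init__()
        # registered buffers follow the module through .to()/.half()/.type() casts
        self.register_buffer('hidden_init', torch.zeros(1, filter_size, width, height))
</code></pre> <p>Any tensor created on the fly inside <code>forward</code> (e.g. an initial hidden state built with <code>torch.zeros(...)</code>) should likewise inherit its dtype and device from the input, e.g. <code>torch.zeros(..., dtype=inputs.dtype, device=inputs.device)</code>.</p>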
python|pytorch
2
377,675
56,900,687
Reverse Rolling mean for DataFrame
<p>I am trying to create a fixture difficulty grid using a DataFrame. I want the mean of the next 5 fixtures for each team.</p> <p>I'm currently using <code>df.rolling(5, min_periods=1).mean().shift(-4)</code>. This works at the start but produces NaNs at the end. I understand why NaNs are returned: there is no data to shift up. Ideally I'd like the NaNs to become the mean across the remaining values, with the value against 38 just being its current value.</p> <p>Fixture difficulties</p> <pre><code>ARS  AVL  BHA  BOU
3    4    3    2
2    2    2    2
5    2    2    4
4    2    5    3
3    2    2    2
</code></pre> <p>Mean of next 5 fixtures</p> <pre><code>ARS  AVL  BHA  BOU
3.4  2.4  2.8  2.6
3.2  2.4  2.8  2.6
3.6  2.4  3.2  2.6
3    2.4  3.6  2.6
2.6  2.4  3    2.4
</code></pre> <p>NaN on the last records as there is nothing to shift up.</p> <pre><code>3.2  3.6  2.8  3.6
nan  nan  nan  nan
nan  nan  nan  nan
nan  nan  nan  nan
nan  nan  nan  nan
</code></pre> <p>Can I adapt this approach, or do I need a different one altogether to populate the NaNs?</p>
<p>IIUC you need to reverse the values by indexing, apply rolling, and reverse back:</p> <pre><code>df1 = df.iloc[::-1].rolling(5, min_periods=1).mean().iloc[::-1]
print (df1)
   ARS  AVL   BHA   BOU
0  3.4  2.4  2.80  2.60
1  3.5  2.0  2.75  2.75
2  4.0  2.0  3.00  3.00
3  3.5  2.0  3.50  2.50
4  3.0  2.0  2.00  2.00
</code></pre>
pandas|dataframe|nan|shift|rolling-computation
2
377,676
57,253,539
Parse phone number and string into new columns in pandas dataframe
<p>I've got a list of addresses in a single column <code>address</code>, how would I go about parsing the phone number and restaurant category into new columns? My dataframe looks like this </p> <pre><code> address 0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses 1 Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis 2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro </code></pre> <p>where I want to get</p> <pre><code> address | phone_number | category 0 Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles | 310-246-1501 | Steakhouses 1 Art's Deli 12224 Ventura Blvd. Studio City | 818-762-1221 | Delis 2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air | 310-472-1211 | French Bistro </code></pre> <p>Does anybody have any suggestions?</p>
<p>Try using Regex with <code>str.extract</code>. </p> <p><strong>Ex:</strong></p> <pre><code>df = pd.DataFrame({'address':["Arnie Morton's of Chicago 435 S. La Cienega Blvd. Los Angeles 310-246-1501 Steakhouses", "Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 Delis", "Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 French Bistro"]}) df[["address", "phone_number", "category"]] = df["address"].str.extract(r"(?P&lt;address&gt;.*?)(?P&lt;phone_number&gt;\b\d{3}\-\d{3}\-\d{4}\b)(?P&lt;category&gt;.*$)") print(df) </code></pre> <p><strong>Output:</strong></p> <pre><code> address phone_number \ 0 Arnie Morton's of Chicago 435 S. La Cienega Bl... 310-246-1501 1 Art's Deli 12224 Ventura Blvd. Studio City 818-762-1221 2 Bel-Air Hotel 701 Stone Canyon Rd. Bel Air 310-472-1211 category 0 Steakhouses 1 Delis 2 French Bistro </code></pre> <p><strong>Note:</strong>: Assuming the content of address is always <code>address--phone_number--category</code> </p>
python|pandas
3
377,677
57,153,016
subsetting a data frame by partially matching another data frame in r (open to python/pandas solution)
<p><strong>basic problem description :</strong></p> <p>Let <code>df</code> be a data frame and <code>df_match</code> a one-row data frame.</p> <p>I want to subset <code>df</code> such that only the rows remain whose non-NA values are contained in the non-NA values of <code>df_match</code>.</p> <p><strong>A minimal example :</strong></p> <pre><code>df &lt;- data.frame(A = c("a1", "a1", "a2", NA, "a1", "a1"),
                 B = c(NA,"b1", "b1", "b2", "b1",NA),
                 C = c(NA,NA,NA,NA,"c1","c1"),
                 D = c(NA,NA,NA,NA,"d1","d1"),
                 stringsAsFactors = FALSE)
# column D is not necessary; I imputed it to get a data frame when applying is.na() below
df_match &lt;- data.frame(A= "a1", B = "b1", C = NA, D = NA, stringsAsFactors = FALSE)

     A    B    C    D
1   a1 &lt;NA&gt; &lt;NA&gt; &lt;NA&gt;
2   a1   b1 &lt;NA&gt; &lt;NA&gt;
3   a2   b1 &lt;NA&gt; &lt;NA&gt;
4 &lt;NA&gt;   b2 &lt;NA&gt; &lt;NA&gt;
5   a1   b1   c1   d1
6   a1 &lt;NA&gt;   c1   d1

&gt; df_match
   A  B  C  D
1 a1 b1 NA NA
</code></pre> <p>In the minimal example only the first two rows of <code>df</code> are correct w.r.t. "the partial matching".</p> <pre><code>   A    B  C  D
1 a1 &lt;NA&gt; NA NA
2 a1   b1 NA NA
</code></pre> <p>The 3rd and 4th rows have a wrong entry in either column A or column B.</p> <p>The 5th and 6th contain a value in a column which is not supported in <code>df_match</code> (i.e. outside the columns which have non-NA values in df_match).</p> <pre><code>     A    B    C    D
1   a2   b1 &lt;NA&gt; &lt;NA&gt;
2 &lt;NA&gt;   b2 &lt;NA&gt; &lt;NA&gt;
3   a1   b1   c1   d1
4   a1 &lt;NA&gt;   c1   d1
</code></pre> <p><strong>Basic idea :</strong></p> <p>was to match each row of <code>df</code> with <code>df_match</code> and store the result in a boolean matrix <code>M</code>.</p> <p>Then create a boolean vector indexed by the row number as follows : TRUE if</p> <p>1) the columns of <code>M</code> which have support on <code>df_match</code> (i.e. the columns which have non-NA values in df_match) contain no FALSE.</p> <p>2) the columns of <code>M</code> which do not have support in <code>df_match</code> contain no TRUE.</p> <p><strong>My current solution to the minimal example :</strong></p> <pre><code>df &lt;- data.frame(A = c("a1", "a1", "a2", NA, "a1", "a1"),
                 B = c(NA,"b1", "b1", "b2", "b1",NA),
                 C = c(NA,NA,NA,NA,"c1","c1"),
                 D = c(NA,NA,NA,NA,"d1","d1"),
                 stringsAsFactors = FALSE)
# column D is not necessary; I imputed it to get a data frame when applying is.na() below
df_match &lt;- data.frame(A= "a1", B = "b1", C = NA, D = NA, stringsAsFactors = FALSE)

library(dplyr)

# create a boolean vector for condition 2
not_matchable &lt;- names(df_match)[is.na(df_match)]
bol_no_matchable &lt;- df %&gt;%
  select(one_of(not_matchable)) %&gt;%
  is.na() %&gt;%
  apply(X = ., MARGIN = 1, any)

# create a boolean vector for condition 1
matchable &lt;- names(df_match)[!is.na(df_match)]
bol_matchable &lt;- sapply(1:nrow(df), function(row) {
  df[row, matchable] != df_match[, matchable]
}) %&gt;%
  apply(X = ., MARGIN = 2, FUN = any)
bol_matchable[is.na(bol_matchable)] &lt;- FALSE

# filter the results
df &lt;- df %&gt;% filter(!bol_matchable &amp; bol_no_matchable)
</code></pre> <p><strong>Questions :</strong></p> <ul> <li>What general principles can I follow to raise the performance of subsetting problems?</li> <li>How can I improve the performance of the above code?</li> <li>How can I improve the performance of the below code concerning my real problem?</li> </ul> <p><strong>Problem:</strong> In the application the data frame <code>df</code> has a column <code>X</code> containing a column name where <code>df</code> is allowed to have values outside the support of <code>df_match</code>. (see below)</p> <p>Applying the logic from the basic minimal example, my current solution is as follows:</p> <pre><code>df &lt;- data.frame(A = c("a1", "a1", "a2", NA, "a1", "a1"),
                 B = c(NA,"b1", "b1", "b2", "b1",NA),
                 C = c("c2",NA,"c1",NA,"c1","c1"),
                 D = c(NA,"d2","d2","d2","d1","d1"),
                 X = c("C","D","C","D","D","C"),
                 stringsAsFactors = FALSE)

bol &lt;- sapply(1:nrow(df), function(x) {
  # determine value in column X
  X &lt;- pull(df[x,], "X")
  not_matchable &lt;- setdiff(matchable, X)

  # create boolean vector for condition 1)
  bol_no_matchable &lt;- df[x,] %&gt;%
    select(one_of(not_matchable)) %&gt;%
    is.na() %&gt;%
    all()

  # create boolean vector for condition 2)
  bol_matchable &lt;- {df[x, not_matchable] != df_match[, not_matchable]}
  bol_matchable[is.na(bol_matchable)] &lt;- FALSE
  bol_matchable &lt;- any(bol_matchable)

  # combine both conditions
  bol &lt;- !bol_matchable &amp; bol_no_matchable
})
</code></pre> <p>The above code is not as fast as I would like. I want to apply this "function" to a data frame <code>df</code> with ~50m rows and 100+ columns multiple times, for arbitrary data frames <code>df_match</code>.</p> <p>Hence any suggestions/ideas for different approaches are welcome, as well as comments on the subsetting.</p>
<p>You can <code>Map</code> over the columns of <code>df</code> and <code>df_match</code>, and for each column-pair return a vector whose elements are <code>TRUE</code> if the corresponding element of <code>df</code> is <code>NA</code> or equals the element of <code>df_match</code>. Then select the rows where the number of <code>TRUE</code>s (yielded by <code>rowSums</code>) is equal to the number of columns (i.e. all columns either match or are NA).</p> <p>Note: If the <code>df_match</code> value is <code>NA</code> and the <code>df</code> value is non-<code>NA</code>, the corresponding vector element output by <code>Map</code> will be <code>NA</code>, which is equivalent to <code>FALSE</code> when using <code>rowSums</code> with <code>na.rm = TRUE</code></p> <pre><code>row_matches &lt;- rowSums(mapply(function(x, y) is.na(x) | x == y, df, df_match), na.rm = TRUE) df[row_matches == ncol(df),] # A B C D # 1 a1 &lt;NA&gt; &lt;NA&gt; &lt;NA&gt; # 2 a1 b1 &lt;NA&gt; &lt;NA&gt; </code></pre>
python|r|pandas|dataframe|subset
1
377,678
57,294,070
What is the difference between a numpy array of size (100, 1) and (100,)?
<p>I have two variables coming from different functions, and the first one <code>a</code> is:</p> <pre><code>&lt;class 'numpy.ndarray'&gt; (100,) </code></pre> <p>while the other one <code>b</code> is:</p> <pre><code>&lt;class 'numpy.ndarray'&gt; (100, 1) </code></pre> <p>If I try to correlate them via:</p> <pre><code>from scipy.stats import pearsonr p, r= pearsonr(a, b) </code></pre> <p>I get:</p> <pre><code> r = max(min(r, 1.0), -1.0) ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all() </code></pre> <p>My questions are:</p> <ol> <li>What is the difference between a and b?</li> <li>How do I fix this?</li> </ol>
<p>(100, 1) is a 2d array of 100 rows of length 1, like <code>[[1],[2],[3],[4]]</code>, while (100,) is a 1d array, like <code>[1, 2, 3, 4]</code>:</p> <pre><code>a1 = np.array([[1],[2],[3],[4]])  # shape (4, 1) a2 = np.array([1, 2, 3, 4])       # shape (4,) </code></pre>
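<p>A minimal sketch of one possible fix for the <code>pearsonr</code> call in the question (assuming <code>b</code> is the <code>(100, 1)</code> array): flatten it to 1d first so both inputs have shape <code>(100,)</code>.</p> <pre><code>import numpy as np
from scipy.stats import pearsonr

a = np.random.rand(100)     # shape (100,)
b = np.random.rand(100, 1)  # shape (100, 1)

# ravel() (equivalently b[:, 0] or np.squeeze(b)) drops the trailing axis
r, p = pearsonr(a, b.ravel())
</code></pre>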
python|numpy
4
377,679
56,956,253
Pandas reindex an index of a multi index in decreasing order of the series values
<p>I have a pandas series with a multi index like:</p> <pre><code>A 385 0.463120 278 0.269023 190 0.244348 818 0.232505 64 0.199640 B 1889 0.381681 1568 0.284957 1543 0.259003 1950 0.241432 1396 0.197692 C 2485 0.859803 2980 0.823075 2588 0.774576 2748 0.613309 2055 0.607444 E 3081 0.815492 3523 0.666928 3638 0.628147 3623 0.554344 3400 0.506123 </code></pre> <p>I'd like to reindex the second index like this with pandas:</p> <pre><code>A 1 0.463120 2 0.269023 3 0.244348 4 0.232505 5 0.199640 B 1 0.381681 2 0.284957 3 0.259003 4 0.241432 5 0.197692 C 1 0.859803 2 0.823075 3 0.774576 4 0.613309 5 0.607444 E 1 0.815492 2 0.666928 3 0.628147 4 0.554344 5 0.506123 </code></pre> <p>I.e. the second index should increase as the values of the series decrease, within a single value of the first index.</p> <p><strong>Is there a way to do so just using pandas?</strong></p>
<p>You can use <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumcount.html" rel="nofollow noreferrer"><code>pandas.core.groupby.GroupBy.cumcount</code></a>:</p> <pre class="lang-py prettyprint-override"><code># create example data df = pd.DataFrame({'a':list(pd.util.testing.rands_array(1, 4, dtype='O')) * 5, 'b':np.random.rand(20) // .1, 'c':np.random.rand(20) // .01} ) df.set_index(['a','b'], inplace=True) df = df.sort_values(['a','c'], ascending=[True,False]) df['x'] = df.groupby('a').cumcount()+1 df = df.reset_index().set_index(['a','x']) </code></pre> <p>returns</p> <pre class="lang-py prettyprint-override"><code> b c a x a 1 5.0 89.0 2 4.0 84.0 3 2.0 83.0 4 3.0 41.0 5 4.0 30.0 k 1 7.0 70.0 2 7.0 64.0 3 9.0 46.0 4 6.0 16.0 5 4.0 8.0 p 1 5.0 71.0 2 7.0 70.0 3 6.0 54.0 4 0.0 16.0 5 7.0 1.0 w 1 6.0 61.0 2 2.0 57.0 3 3.0 53.0 4 6.0 38.0 5 0.0 22.0 </code></pre>
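<p>Applied to a series shaped like the one in the question (a sketch, assuming <code>s</code> is the series with the two-level MultiIndex, already sorted descending within each first-level group as shown), the same <code>cumcount</code> idea becomes:</p> <pre><code>import pandas as pd

# rank positions 1..n within each first-level group
new_second = s.groupby(level=0).cumcount() + 1

# swap the ranks in as the new second index level
s.index = pd.MultiIndex.from_arrays(
    [s.index.get_level_values(0), new_second])
</code></pre>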
python|pandas
2
377,680
57,286,016
Edit concatenated csv with comment lines in the top - Python
<p>I have the following csv file myFile.csv, exported from a pandas dataframe:</p> <pre><code># Comment line with information related to the business customer_id column_1 column_2 column_3 123 A XX AG 456 B YY TT # Comment line with other information customer_id column_1 column_2 column_3 789 AA XX AG 111 BB YY TT </code></pre> <p>I want to edit this csv so that all lines starting with # are together at the beginning of the file. That way, I can keep a single table concatenating both pieces of data, with a single set of columns. Like this:</p> <pre><code># Comment line with information related to the business # Comment line with other information customer_id column_1 column_2 column_3 123 A XX AG 456 B YY TT 789 AA XX AG 111 BB YY TT </code></pre> <p>My csv file looks like this:</p> <p><a href="https://i.stack.imgur.com/vt5C0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vt5C0.png" alt="enter image description here"></a></p> <p>Any ideas? Thank you very much!</p> <p><strong>Update:</strong></p> <p>I have this python code to generate a test df:</p> <pre class="lang-py prettyprint-override"><code> input_data = { 'customer_id': [123, 456], 'column_1': ['A', 'B'], 'column_2': ['XX', 'YY'], 'column_3': ['AG', 'TT'] } input_df = pd.DataFrame(input_data, columns=['customer_id', 'column_1', 'column_2', 'column_3']) input_df.to_csv("test-matrix.csv", index=False) a = "# Information as a comment" # I run the following twice to produce the concatenated tables, as this is what happens in my code with open("test-matrix.csv",'a') as file: file.write(a + '\n') input_df.to_csv(file, index=False) print("APPENDING!") with open("test-matrix.csv",'a') as file: file.write(a + '\n') input_df.to_csv(file, index=False) print("APPENDING!") df = pd.read_csv("test-matrix.csv") print(df) </code></pre>
<p>You can convert one CSV to another with following script:</p> <pre><code>comments = [] header = '' data = [] with open('myFile.csv', 'r') as f: lines = f.readlines() for i in range(len(lines)): if not lines[i].startswith('#') and not lines[i-1].startswith('#'): data.append(lines[i]) elif lines[i].startswith('#'): comments.append(lines[i]) elif lines[i-1].startswith('#'): header = lines[i] with open('result.csv', 'w') as f: f.writelines(comments) f.write(header) f.writelines(data) </code></pre> <p>Output file will be:</p> <pre><code># Comment line with information related to the business # Comment line with other information customer_id column_1 column_2 column_3 123 A XX AG 456 B YY TT 789 AA XX AG 111 BB YY TT </code></pre>
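<p>As a follow-up sketch (an assumption about the next step, not part of the rewrite itself): once the comments are grouped at the top, pandas can skip them all in one go when reading the table back, since <code>pd.read_csv</code> accepts a <code>comment</code> character:</p> <pre><code>import pandas as pd

# every line starting with '#' is ignored on read
df = pd.read_csv('result.csv', comment='#')
</code></pre>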
python|pandas|csv|header
0
377,681
56,956,454
Apply preprocessing to the dataset
<p>I am implementing a paper on image segmentation in pytorch. I am required to do some preprocessing steps, but as I am trying it for the first time, I am unable to incorporate them in the traditional pipeline. The preprocessing steps are as follows:<br></p> <p>1) N(w, h) = I(w, h) − G(w, h), where N is the normalized image, I is the original image, and G is the Gaussian blurred image with kernel size 65*65, 0 mean and standard deviation 10.</p> <p>2) Normalizing the mean image and dividing each pixel by the average standard deviation.</p> <p>Following is my code snippet for the above steps:</p> <pre><code>def gaussian_blur(img): image = cv2.GaussianBlur(img,(65,65),10) new_image = img - image return new_image def normalise(img): img_normalized = np.empty(img.shape) img_std = np.std(img) img_mean = np.mean(img) img_normalized = (img-img_mean)/img_std for i in range(img.shape[1]): img_normalized[i] = (img_normalized - np.mean(img_normalized))/np.std(img_normalized) return img_normalized </code></pre> <p>I am really not sure how to add the above functions into the traditional pytorch data-loader pipeline, i.e. whether I should first load the dataset using <code>ImageFolder</code> and then apply them, or first apply them and then use the <code>ImageFolder</code> method.</p>
<p>This is how I did it:</p> <p>The solution to the first part is to first define the required function and then call it in the transforms, using the generic transforms in the following way:</p> <pre><code>def gaussian_blur(img): image = np.array(img) image_blur = cv2.GaussianBlur(image,(65,65),10) new_image = image - image_blur im = Image.fromarray(new_image) return im </code></pre> <p>The solution to the second part is to go through every image, calculate the mean and std deviation, and then finally pass the mean and std deviation values to the transforms:</p> <pre><code> train_mean = [] train_std = [] for i,image in enumerate(train_loader,0): numpy_image = image[0].numpy() batch_mean = np.mean(numpy_image, axis=(0, 2, 3)) batch_std = np.std(numpy_image, axis=(0, 2, 3)) train_mean.append(batch_mean) train_std.append(batch_std) train_mean = torch.tensor(np.mean(train_mean, axis=0)) train_std = torch.tensor(np.mean(train_std, axis=0)) print('Mean:', train_mean) print('Std Dev:', train_std) </code></pre> <p>The final transform call looks like this:</p> <pre><code>data_transforms = transforms.Compose([transforms.RandomCrop(512,512), transforms.Lambda(gaussian_blur), transforms.RandomRotation([+90,+180]), transforms.RandomRotation([+180,+270]), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize(mean=train_mean, std=train_std) ]) </code></pre>
python-3.x|deep-learning|pytorch|image-preprocessing
0
377,682
57,144,586
Tensorflow GradientTape "Gradients does not exist for variables" intermittently
<p>When training my network I am occasionally met with the warning: </p> <p><code>W0722 11:47:35.101842 140641577297728 optimizer_v2.py:928] Gradients does not exist for variables ['model/conv1d_x/Variable:0'] when minimizing the loss. </code></p> <p>This happens sporadically at infrequent intervals (maybe once in every 20 successful steps). My model basically has two paths which join together with concatenations at various positions in the network. To illustrate this, here is a simplified example of what I mean.</p> <pre><code>class myModel(tf.keras.Model): def __init__(self): self.conv1 = Conv2D(32) self.conv2 = Conv2D(32) self.conv3 = Conv2D(16) def call(self, inputs): net1 = self.conv1(inputs) net2 = self.conv2(inputs) net = tf.concat([net1, net2], axis=2) net = self.conv3(net) end_points = tf.nn.softmax(net) model = myModel() with tf.GradientTape() as tape: prediction = model(image) loss = myloss(labels, prediction) gradients = tape.gradient(loss, model.trainable_variables) optimizer.apply_gradients(zip(gradients, model.trainable_variables)) </code></pre> <p>In reality my network is much larger, but the variables that generally don't have gradients tend to be the ones at the top of the network. Before each <code>Conv2D</code> layer I also have a custom gradient. Sometimes when the error appears I can notice that the gradient function for that layer has not been called.</p> <p>My question is: how can the gradient tape sometimes take what appear to be different paths when propagating backwards through my network? My secondary question: is this caused by having two separate routes through my network (i.e. conv1 AND conv2)? Is there a fundamental flaw in this network architecture?</p> <p>Ideally, could I tell the <code>GradientTape()</code> that it must find the gradients for each of the top layers? </p>
<p>I had an issue that seems similar - it may or may not be helpful depending on what your network actually looks like, but basically, I had a multi-output network and realised that I was applying the gradients corresponding to each output separately, so for each separate loss there was a branch of the network for which the gradient was zero. This was totally valid and corresponded to the terminal layers immediately prior to the non-targeted outputs each time. For this reason, I ended up replacing any None gradients with tf.zeros_like and it was possible to proceed with training. Could you have the same problem with multiple input heads to your network, if the warning is always at the top of the graph?</p> <p>(ETA: the solution by Nguyễn Thu below is the code version of what I'm describing above - exactly the same way that I dealt with it)</p> <p>I've seen other answers where gradients weren't being calculated because tensors aren't watched by default - you have to add them - but that looks like it's not your issue, as you should only be dealing with model.trainable_variables. Alternatively, perhaps your myLoss function is getting a NaN result or casting to a numpy array occasionally depending on your batch composition, which would explain the sporadic nature (e.g. perhaps it's on batches that have no instances of a minority class if your data is very imbalanced?)</p>
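<p>For reference, a minimal sketch of the zero-filling workaround described above, using the variable names from the question's snippet:</p> <pre><code>gradients = tape.gradient(loss, model.trainable_variables)
# replace any None gradient with zeros of the matching variable's shape
gradients = [g if g is not None else tf.zeros_like(v)
             for g, v in zip(gradients, model.trainable_variables)]
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
</code></pre>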
python|tensorflow|keras
16
377,683
57,241,212
Integer data type conversion from string
<p>I was working on a data frame series column whose data type was 'object' (str); its format was like '301,694'.</p> <p>I want the data type of that column of the pandas series to be int or float. I received errors when I tried the code below.</p> <p>Please share your knowledge.</p> <p>1) </p> <pre><code>df2['Total Ballots Counted'] = df2['Total Ballots Counted'].fillna(0).astype(int) </code></pre> <p>error received - invalid literal for int() with base 10: '301,694'</p> <p>2) </p> <pre><code>df2['Total Ballots Counted'] = pd.to_numeric(df2['Total Ballots Counted']) </code></pre> <p>error received - Unable to parse string "301,694" at position 1</p>
<p>Hope this helps: <code>df['colname'] = df['colname'].str.replace(',', '').astype(int)</code> (note the <code>.str</code> accessor; a plain <code>Series.replace(',', '')</code> only matches whole values, not substrings).</p> <p>Another thing to do is:</p> <p><code>int(''.join([ch for ch in str(number) if ch != ',']))</code> for each number in the column.</p>
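<p>Equivalently, sticking with the <code>pd.to_numeric</code> route from the question (a sketch; <code>errors='coerce'</code> turns anything unparseable into NaN so the <code>fillna(0)</code> step still applies):</p> <pre><code>df2['Total Ballots Counted'] = pd.to_numeric(
    df2['Total Ballots Counted'].str.replace(',', ''),
    errors='coerce').fillna(0).astype(int)
</code></pre>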
python|pandas
0
377,684
57,187,901
Tensorflow training is ineffective: after just a few hundred steps, loss drops to 0 and accuracy reaches 100%; this has been bothering me for months
<p>I am trying to use tensorflow for image classification. There are 5 categories in total, with about 300 images in each category.</p> <p>But in the training process, after just a few hundred steps, I have encountered some problems: 1. loss drops to 0 and accuracy reaches 100%; 2. after adding the validation set (I don't know if the added code is correct), the loss of the validation set still drops to 0, and its accuracy still reaches 100% or 90+%; 3. when testing with the test code, the results are very bad, almost nothing is identified correctly, yet the maximum predicted probability is 90+%.</p> <p>This is my github code address: <a href="https://github.com/a87871660/Picture_classification" rel="nofollow noreferrer">https://github.com/a87871660/Picture_classification</a></p> <ol> <li>Through the tf.summary.image code, you can see that every 100 steps the input images are different, so is there a problem with the inputs?</li> <li>Is the code training properly? Where is the error?</li> <li>Is the loss and accuracy calculation code wrong?</li> <li>Is my dataset too small?</li> </ol>
<p>Mentioning the Solution here for the benefit of the community.</p> <p>Problem is resolved if we <code>Normalize the Data</code>, i.e., by dividing each Pixel Value by <code>255</code>. </p> <p>By dividing by <code>255</code>, the <code>0-255</code> range can be described with a <code>0.0-1.0</code> range where <code>0.0</code> means <code>0 (0x00)</code> and <code>1.0</code> means <code>255 (0xFF)</code>.</p> <p>Normalization will help you to <strong>remove distortions</strong> caused by <em>lights</em> and <em>shadows</em> in an image.</p> <p>Example Code for <code>Normalization</code> is shown below:</p> <pre><code>(X_train, y_train), (X_test, y_test) = tf.keras.datasets.fashion_mnist.load_data() X_train = X_train.astype(np.float32).reshape(-1, 28*28) / 255.0 X_train_reshaped = tf.reshape(X_train, shape=[-1, 28, 28, 1]) X_test = X_test.astype(np.float32).reshape(-1, 28*28) / 255.0 X_test_reshaped = tf.reshape(X_test, shape=[-1, 28, 28, 1]) </code></pre>
python|tensorflow|deep-learning|conv-neural-network
0
377,685
57,063,629
Simplify List of coordinates
<p>I have supplied a template image and a test image to the function <code>cv.matchTemplate</code>.</p> <p>After it returns, I filter out anything under a 95% match. The results are good and I am producing the desired result. The result is a list of tuples, each tuple represented by <code>(x,y)</code>. The problem is that after filtering I have too many results. It seems that each potential match yields more than one point:</p> <pre><code>(150, 143) (151, 143) (152, 143) (153, 143) (154, 143) (155, 143) (149, 144) (150, 144) (151, 144) (152, 144) (153, 144) (154, 144) (155, 144) (156, 144) (694, 144) (695, 144) (696, 144) (697, 144) (698, 144) (148, 145) (149, 145) (150, 145) (151, 145) (152, 145) (153, 145) (154, 145) (155, 145) (156, 145) (157, 145) (692, 145) (693, 145) (694, 145) (695, 145) (696, 145) (697, 145) (698, 145) (699, 145) (147, 146) (148, 146) (149, 146) (150, 146) (151, 146) (152, 146) (153, 146) (154, 146) (155, 146) (156, 146) (157, 146) </code></pre> <p>All of these points are <code>tuples</code> in a single sorted <code>list</code>. You can see the points can be "logically" grouped together in bunches that are not too different in their coordinates. In the above example output, there are 5 distinguishable "groups". The idea here is to <em>reduce</em> each <strong>group</strong> into <strong>one point</strong>.</p> <p>From above, this would be condensed to the following list:</p> <pre><code>(151,143) (694, 144) (148, 145) (692, 145) (147, 146) </code></pre> <p>Is there a way to do that?</p>
<p>Fixed this answer due to the OP comment about all of the tuples being in a list. The first if condition is something you can change if you find that you want to be more/less strict about differences between points (e.g. if you want it to be within 5 pixels, you can do &lt;= 5 rather than == 1). </p> <pre><code>masterTest = [(1, 2), (1, 3), (2, 3), (4, 6), (4, 7), (4, 8)] #test array arrayHolder = [] #buffer that holds the first mini list compositeArray = [] #master list which holds a list of the tuples, grouped lastTuple = masterTest[0] #dummy variable arrayHolder.append(masterTest[0]) # add the first one so we have something to compare to masterTest.pop(0) # it's already in our data, don't want a dup for tuples in masterTest: if (((abs(tuples[0] - lastTuple[0]) == 1 and abs(tuples[1] - lastTuple[1]) == 0)) or (abs(tuples[1] - lastTuple[1]) == 1 and abs(tuples[0] - lastTuple[0]) == 0)): arrayHolder.append(tuples) else: compositeArray.append(arrayHolder.copy()) #add buffer to master list arrayHolder = [] #clear out the buffer arrayHolder.append(tuples) #restart a new buffer lastTuple = tuples # update last coordinate checked compositeArray.append(arrayHolder) #clears the buffer one last time pointArray = [] for lists in compositeArray: count = 0 xavg = sum([x[0] for x in lists])/len(lists) yavg = sum([x[1] for x in lists])/len(lists) pointArray.append(tuple((xavg, yavg))) print (pointArray) </code></pre> <p>You can use Python's built-in round() function (it's that simple: round(numberToRound)) if you want to do that.</p>
python|numpy|opencv
1
377,686
57,065,024
How to sort .iterrows() values?
<p>I can't seem to find an answer on this topic. I am trying to sort the values in my queryset. Right now, it's automatically sorted by TICKER_id:</p> <pre><code>TICKER_id DXJ -0.5 EWA 1.0 EWC 0.0 EWG -1.0 EWI -0.5 EWP -0.5 EWQ 0.5 EWU 0.0 EWW -0.5 EWY -1.0 EWZ 0.5 EZA 0.5 FEZ -0.5 INDA 0.0 MCHI -0.5 RSX 0.5 SPY 0.0 TUR -0.5 </code></pre> <p>I feel as if there is a way to sort by iterating through the list using .iterrows(). </p>
<p>As far as I can tell, Pandas uses the index to control <code>iterrows</code> and will therefore go back to the normal order, even if you've resorted the dataframe, because the index goes with the row.</p> <p>I've been able to iterate over the df in the intended order by <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reset_index.html" rel="nofollow noreferrer">resetting the index</a>:</p> <pre><code>#%% import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame( [ {"TICKER_id": "DXJ", "val": -0.5}, {"TICKER_id": "EWA", "val": 1.0}, {"TICKER_id": "EWC", "val": 0.0}, {"TICKER_id": "EWG", "val": -1.0}, {"TICKER_id": "EWI", "val": -0.5}, {"TICKER_id": "EWP", "val": -0.5}, {"TICKER_id": "EWQ", "val": 0.5}, {"TICKER_id": "EWU", "val": 0.0}, {"TICKER_id": "EWW", "val": -0.5}, {"TICKER_id": "EWY", "val": -1.0}, {"TICKER_id": "EWZ", "val": 0.5}, {"TICKER_id": "EZA", "val": 0.5}, {"TICKER_id": "FEZ", "val": -0.5}, {"TICKER_id": "INDA", "val": 0.0}, {"TICKER_id": "MCHI", "val": -0.5}, {"TICKER_id": "RSX", "val": 0.5}, {"TICKER_id": "SPY", "val": 0.0}, {"TICKER_id": "TUR", "val": -0.5}, ] ) df.plot() # 1 #%% df.sort_values(by="val", ascending=False, inplace=True) df.reset_index(drop=True, inplace=True) df.plot() # 2 #%% for index, row in df.iterrows(): if len(row.TICKER_id) == 3: plt.scatter(index,row.val, c="r") else: plt.scatter(index,row.val, c="b") # 3 </code></pre> <p>This gives: 1. <a href="https://i.stack.imgur.com/QxPvC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QxPvC.png" alt="enter image description here"></a> 2. <a href="https://i.stack.imgur.com/fLLXO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fLLXO.png" alt="enter image description here"></a> 3. <a href="https://i.stack.imgur.com/A0ccC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A0ccC.png" alt="enter image description here"></a></p> <p><a href="https://stackoverflow.com/questions/33165734/update-index-after-sorting-data-frame">This question is related to this</a>.</p>
python|pandas|numpy
2
377,687
57,235,451
'numpy.ndarray' object has no attribute 'iterrows' while predicting value using lstm in python
<p>I have a dataset with four inputs and am trying to predict the next value of X1 from the combination of previous input values. </p> <p>My four inputs are X1, X2, X3, X4.</p> <p>So here I am trying to predict the next future value of X1. The next X1 is affected by the combination of these four inputs:</p> <pre><code>X1 + X2 - X3 - X4 </code></pre> <p>I wrote this code inside the class. Then I wrote the code to run the lstm. After that I wrote the code to predict the value. Then it gave me this error. Can anyone help me solve this problem?</p> <p>my code:</p> <pre><code>def model_predict(data): pred=[] for index, row in data.iterrows(): val = row['X1'] if np.isnan(val): data.iloc[index]['X1'] = pred[-1] row['X1'] = pred[-1] f = row['X1','X2','X3','X4'] s = row['X1'] - row['X2'] + row['X3'] -row['X4'] val = model.predict(s) pred.append(val) return np.array(pred) </code></pre> <p>After the lstm code, I wrote the code to predict the value:</p> <pre><code>pred = model_predict(x_test_n) </code></pre> <p>It gave me this error:</p> <pre><code> ` ---&gt; 5 pred = model_predict(x_test_n) def model_predict(data): pred=[] --&gt;for index, row in data.iterrows(): val = row['X1'] if np.isnan(val):` AttributeError: 'numpy.ndarray' object has no attribute 'iterrows' </code></pre>
<p>Apparently, the <em>data</em> argument of your function is a <em>Numpy</em> array, not a <em>DataFrame</em>. <em>Data</em>, as an <em>np.ndarray</em>, also has no named columns.</p> <p>One possible solution, keeping the argument as an <em>np.ndarray</em>, is to:</p> <ul> <li>iterate over rows of this array using <em>np.apply_along_axis()</em>,</li> <li>refer to columns by indices (instead of names).</li> </ul> <p>Another solution is to create a <em>DataFrame</em> from <em>data</em>, setting proper column names, and iterate over its rows.</p> <p><strong>One possible way to write the code without a DataFrame</strong></p> <p>Assume that <em>data</em> is a <em>Numpy</em> table with 4 columns, containing respectively <em>X1</em>, <em>X2</em>, <em>X3</em> and <em>X4</em>:</p> <pre><code>[[ 1 2 3 4] [10 8 1 3] [20 6 2 5] [31 3 3 1]] </code></pre> <p>Then your function can be:</p> <pre><code>def model_predict(data): s = np.apply_along_axis(lambda row: row[0] + row[1] - row[2] - row[3], axis=1, arr=data) return model.predict(s) </code></pre> <p>Note that:</p> <ul> <li><em>s</em> - <strong>all</strong> input values to your model - can be computed in a single instruction, calling <em>apply_along_axis</em> for each row (axis=1),</li> <li>the predictions can also be computed "all at once", passing a <em>Numpy</em> vector - just <em>s</em>.</li> </ul> <p>For demonstration purposes, compute <em>s</em> and print it.</p>
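<p>A minimal sketch of the second suggestion (building a <code>DataFrame</code> from the array; the column order is an assumption and must match how <code>x_test_n</code> was built), following the same formula as the <code>apply_along_axis</code> version above:</p> <pre><code>import pandas as pd

df = pd.DataFrame(data, columns=['X1', 'X2', 'X3', 'X4'])
s = df['X1'] + df['X2'] - df['X3'] - df['X4']
pred = model.predict(s)
</code></pre>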
python-3.x|pandas|machine-learning|lstm
0
377,688
56,977,881
Replace null values in a column corresponding to specific value in another column pandas
<p>I have a dataframe as below:</p> <pre><code>import pandas as pd df = pd.DataFrame({'Country': ['USA','USA','MEX','IND','UK','UK','UK'], 'Region': ['Americas','NaN','NaN','Asia','Europe','NaN','NaN'], 'Flower': ['Rose','Lily','Lily','Orchid','Petunia','Lotus','Dandelion']}) </code></pre> <p>I want to replace the <code>NaN</code> values in Region with the corresponding regions. By this I mean: if the country is USA or Mexico, the region should be Americas, and if the country is UK, the region should be Europe.</p> <p>The expected output is:</p> <pre><code>result = pd.DataFrame({'Country': ['USA','USA','MEX','IND','UK','UK','UK'], 'Region': ['Americas','Americas','Americas','Asia','Europe','Europe','Europe'], 'Flower': ['Rose','Lily','Lily','Orchid','Petunia','Lotus','Dandelion']}) </code></pre> <p>I want to know an easy way to do this without having to write if-else statements.</p>
<p>You can try the code below if ffill is not an option:</p> <pre><code>df['Region'] = np.select((df.Country.isin(['USA', 'MEX']), df.Country == 'UK'), ('Americas', 'Europe'), df.Region) </code></pre>
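<p>An alternative sketch using an explicit country-to-region mapping (the dict below is inferred from the sample data), which scales more easily as countries are added:</p> <pre><code>region_map = {'USA': 'Americas', 'MEX': 'Americas',
              'UK': 'Europe', 'IND': 'Asia'}
df['Region'] = df['Country'].map(region_map)
</code></pre>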
python|pandas
1
377,689
57,031,859
Fast combination of non-unique rows in numpy array, mapped to columns (i.e. fast pivot table problem, without Pandas)
<p>I wonder if anyone can offer any ideas or advice on the following coding problem please, where I'm particularly interested in a fast Python implementation (i.e. avoiding Pandas).</p> <p>I have a (dummy example) set of data like:</p> <pre><code>| User | Day | Place | Foo | Bar | 1 10 5 True False 1 11 8 True False 1 11 9 True False 2 11 9 True False 2 12 1 False True 1 12 2 False True </code></pre> <p>containing data for 2 users ("user1" and "user2") at a given day/place, where there's 2 boolean values of interest (called foo and bar here).</p> <p>I'm only interested in situations where data is logged for BOTH users at the same day &amp; place. With these relevant data rows, I then want to make new columns for the day/place entries that describe the user and foo/bar as bools, e.g.</p> <pre><code>| Day | Place | User 1 Foo | User 1 Bar | User 2 Foo | User 2 Bar | 11 9 True False True False </code></pre> <p>Each column's data is stored in numpy arrays. I appreciate this is an ideal problem for pandas, using the pivot table feature (e.g. a Pandas solution is given below):</p> <pre><code>user = np.array([1, 1, 1, 2, 2, 1], dtype=int) day = np.array([10, 11, 11, 11, 12, 12], dtype=int) place = np.array([5,8,9,9,1,2], dtype=int) foo = np.array([1, 1, 1, 1, 0, 0], dtype=bool) bar = np.array([0, 0, 0, 0, 1, 1], dtype=bool) df = pd.DataFrame({ 'user': user, 'day': day, 'place': place, 'foo': foo, 'bar': bar, }) df2 = df.set_index(['day','place']).pivot(columns='user') df2.columns = ["User1_foo", "User2_foo", "User1_bar", "User2_bar"] df2 = df2.reset_index() df2.dropna(inplace=True) </code></pre> <p>but in my practical usage, I have millions of rows of data and profiling shows that the dataframe usage and pivot operation is a performance bottleneck.</p> <p>Therefore, how can I achieve the same output, i.e. numpy arrays for day, place and user1_foo, user1_bar, user2_foo, user2_bar for just the cases where there is data for both users at the same day AND place in the original input arrays?</p> <p>I wonder if somehow finding indexes from np.unique then inverting them would be a possible solution, but couldn't make it work. Therefore, any solutions (ideally fast executing) would be great, thanks!</p>
<p><strong>Approach #1</strong></p> <p>Here's one based on dimensionality-reduction for memory-efficiency and <a href="https://docs.scipy.org/doc/numpy/reference/generated/numpy.searchsorted.html" rel="nofollow noreferrer"><code>np.searchsorted</code></a> for tracing back and looking for matching ones between the two users' data -</p> <pre><code># Extract array data for efficiency, as we will work with NumPy tools a = df.to_numpy(copy=False) #Pandas &gt;= 0.24, use df.values otherwise i = a[:,:3].astype(int) j = a[:,3:].astype(bool) # Test out without astype(int),astype(bool) conversions and see how they perform # Get grouped scalars for Day and place headers combined # This assumes that Day and Place data are positive integers g = i[:,2]*(i[:,1].max()+1) + i[:,1] # Get groups for user1,2 for original and grouped-scalar items m1 = i[:,0]==1 uj1,uj2 = j[m1],j[~m1] ui1 = i[m1] u1,u2 = g[m1],g[~m1] # Use searchsorted to look for matching ones between user-1,2 grouped scalars su1 = u1.argsort() ssu1_idx = np.searchsorted(u1,u2,sorter=su1) ssu1_idx[ssu1_idx==len(u1)] = 0 ssu1_idxc = su1[ssu1_idx] match_mask = u1[ssu1_idxc]==u2 match_idx = ssu1_idxc[match_mask] # Select matching items off original table p1,p2 = uj1[match_idx],uj2[match_mask] # Setup output arrays day_place = ui1[match_idx,1:] user1_bools = p1 user2_bools = p2 </code></pre> <p><strong>Approach #1-Extended: Generic <code>Day</code> and <code>Place</code> dtype data</strong></p> <p>We can extend to the generic case when <code>Day</code> and <code>Place</code> data might not necessarily be positive integers. In that case, we can make use of a dtype-combined view-based method to perform data-reduction. Thus, the only change needed would be to get <code>g</code> differently; this would be a view-based array type and would be obtained like so -</p> <pre><code># https://stackoverflow.com/a/44999009/ @Divakar def view1D(a): # a is array a = np.ascontiguousarray(a) void_dt = np.dtype((np.void, a.dtype.itemsize * a.shape[1])) return a.view(void_dt).ravel() # Get grouped scalars for Day and place headers combined with dtype combined view g = view1D(i[:,1:]) </code></pre> <p><strong>Approach #2</strong></p> <p>We will use <code>lex-sorting</code> to group data in such a way that looking for identical elements in consecutive rows would tell us if there are matching ones across the two users. We will re-use <code>a,i,j</code> from <code>Approach#1</code>. The implementation would be -</p> <pre><code># Lexsort the i table sidx = np.lexsort(i.T) # OR sidx = i.dot(np.r_[1,i[:,:-1].max(0)+1].cumprod()).argsort() b = i[sidx] # Get matching conditions on consecutive rows m = (np.diff(b,axis=0)==[1,0,0]).all(1) # Or m = (b[:-1,1] == b[1:,1]) &amp; (b[:-1,2] == b[1:,2]) &amp; (np.diff(b[:,0])==1) # Trace back to original order by using sidx match1_idx,match2_idx = sidx[:-1][m],sidx[1:][m] # Index into relevant table and get desired array outputs day_place,user1_bools,user2_bools = i[match1_idx,1:],j[match1_idx],j[match2_idx] </code></pre> <p>Alternatively, we could use an extended mask of <code>m</code> to index into <code>sidx</code> and generate <code>match1_idx,match2_idx</code>. Rest of the code stays the same. Hence, we could do -</p> <pre><code>from scipy.ndimage import binary_dilation # Binary extend the mask to have the same length as the input. # Index into sidx with it. Use one-off offset and stepsize of 2 to get # user1,2 matching indices m_ext = binary_dilation(np.r_[m,False],np.ones(2,dtype=bool),origin=-1) match_idxs = sidx[m_ext] match1_idx,match2_idx = match_idxs[::2],match_idxs[1::2] </code></pre> <p><strong>Approach #3</strong></p> <p>Here's another based on <code>Approach #2</code> and ported over to <code>numba</code> for memory and hence perf. efficiency; we will re-use <code>a,i,j</code> from <code>approach #1</code> -</p> <pre><code>from numba import njit @njit def find_groups_numba(i_s,j_s,user_data,bools): n = len(i_s) found_iterID = 0 for iterID in range(n-1): if i_s[iterID,1] == i_s[iterID+1,1] and i_s[iterID,2] == i_s[iterID+1,2]: bools[found_iterID,0] = j_s[iterID,0] bools[found_iterID,1] = j_s[iterID,1] bools[found_iterID,2] = j_s[iterID+1,0] bools[found_iterID,3] = j_s[iterID+1,1] user_data[found_iterID,0] = i_s[iterID,1] user_data[found_iterID,1] = i_s[iterID,2] found_iterID += 1 return found_iterID # Lexsort the i table sidx = np.lexsort(i.T) # OR sidx = i.dot(np.r_[1,i[:,:-1].max(0)+1].cumprod()).argsort() i_s = i[sidx] j_s = j[sidx] n = len(i_s) user_data = np.empty((n//2,2),dtype=i.dtype) bools = np.empty((n//2,4),dtype=j.dtype) found_iterID = find_groups_numba(i_s,j_s,user_data,bools) out_bools = bools[:found_iterID] # Output bool out_userd = user_data[:found_iterID] # Output user-Day, Place data </code></pre> <p>Append with .copy() at the last 2 steps if the outputs must have their own memory spaces.</p> <p>Alternatively, we can offload the indexing operation back to the NumPy side for a cleaner solution -</p> <pre><code>@njit def find_consec_matching_group_indices(i_s,idx): n = len(i_s) found_iterID = 0 for iterID in range(n-1): if i_s[iterID,1] == i_s[iterID+1,1] and i_s[iterID,2] == i_s[iterID+1,2]: idx[found_iterID] = iterID found_iterID += 1 return found_iterID # Lexsort the i table sidx = np.lexsort(i.T) # OR sidx = i.dot(np.r_[1,i[:,:-1].max(0)+1].cumprod()).argsort() i_s = i[sidx] j_s = j[sidx] idx = np.empty(len(i_s)//2,dtype=np.uint64) found_iterID = find_consec_matching_group_indices(i_s,idx) fidx = idx[:found_iterID] day_place,user1_bools,user2_bools = i_s[fidx,1:],j_s[fidx],j_s[fidx+1] </code></pre>
python|arrays|pandas|numpy|vectorization
4
377,690
45,871,314
Split DataFrame into a dictionary of groups from multiple columns
<p>I have a dataframe like this:</p> <pre><code> df = pd.DataFrame({ 'Client':['A','B','C','D','E'], 'Revenue':[100,120,50,40,30], 'FYoQ':['FY','Q','Q','Q','FY'], 'Quarter':[np.nan,1,3,4,np.nan], 'Year':[2017,2016,2015,2017,2016] }) </code></pre> <p>How do I split the data frame to get a 2-dimensional dictionary of dataframes, ds[year][quarter], for each year and quarter?</p> <p>Right now I am able to do a 1-dimensional dictionary as follows:</p> <pre><code> years=df['Year'].unique().tolist() mc={elem:pd.DataFrame for elem in years} for year in years: mc[year]=df.loc[(df['Year']==year)] </code></pre> <p>This way I obtain a dictionary of dataframes mc[2015], mc[2016], etc., and then I have to apply the same thing again to each of them. </p> <p>I was hoping there would be a modification of the code: </p> <pre><code> mc={elem:pd.DataFrame for elem in years} </code></pre> <p>to create a 2-dimensional (or even multi-dimensional) dictionary at once, allowing the data to be split faster.</p>
<pre><code>from collections import defaultdict d = defaultdict(dict) [d[y].setdefault(q, g) for (y, q), g in df.groupby(['Year', 'Quarter'])]; d = dict(d) for y, v in d.items(): print(y) for q, s in v.items(): print(' ' + str(q)) p = s.__repr__() p = '\n'.join([' ' + l for l in p.split('\n')]) print(p, '\n') 2015 3.0 Client FYoQ Quarter Revenue Year 2 C Q 3.0 50 2015 2016 1.0 Client FYoQ Quarter Revenue Year 1 B Q 1.0 120 2016 2017 4.0 Client FYoQ Quarter Revenue Year 3 D Q 4.0 40 2017 </code></pre>
python|pandas|dictionary|dataframe|group-by
3
377,691
45,941,742
Apply functions to pandas groupby and indexing
<p>I am trying to understand the Pandas Groupby, but I'm currently seeing some behavior I don't understand. Basically, I have a dataset that looks like (only head shown):</p> <pre><code> userId movieId rating timestamp parsed_time 0 1 2 3.5 1112486027 2005-04-02 23:53:47 1 1 29 3.5 1112484676 2005-04-02 23:31:16 2 1 32 3.5 1112484819 2005-04-02 23:33:39 3 1 47 3.5 1112484727 2005-04-02 23:32:07 4 1 50 3.5 1112484580 2005-04-02 23:29:40 </code></pre> <p>I have checked the dataset for NaN/null values, and there are none. Now, I would like to compute the average rating of each movie, as well as the standard deviation.</p> <p>Getting the average rating is simple:</p> <pre><code>ratings = pd.read_csv('ratings.csv', sep=',') average_rating = ratings[['movieId','rating']].groupby('movieId',as_index=False).mean() average_rating.rename(columns={'rating':'AverageRating'}, inplace=True) </code></pre> <p>which gives me something like:</p> <pre><code> movieId AverageRating 0 1 3.921240 1 2 3.211977 2 3 3.151040 3 4 2.861393 4 5 3.064592 </code></pre> <p>So this is all fine and well, and what I expect from the combination of <code>groupby()</code> and <code>mean()</code>. Now, I would like to do the same to compute the standard deviation of the movie ratings, and add this as a new column to the <code>average_rating</code> df:</p> <pre><code>average_rating['StdDev'] = ratings[['movieId','rating']].groupby('movieId').std() </code></pre> <p>which gives me:</p> <pre><code> movieId AverageRating StdDev 0 1 3.921240 NaN 1 2 3.211977 0.889012 2 3 3.151040 0.951150 3 4 2.861393 1.006642 4 5 3.064592 1.095702 </code></pre> <p>What puzzles me here is the NaN that appears as the first entry in my StdDev column. If I manually extract the rows of, say, movieId [1,2] and compute the mean and standard deviation just for those:</p> <pre><code>print('Mean movieID 1:') print(ratings[ratings['movieId']==1]['rating'].mean()) print('StdDev movieID 1:') print(ratings[ratings['movieId']==1]['rating'].std()) print('Mean movieID 2:') print(ratings[ratings['movieId']==2]['rating'].mean()) print('StdDev movieID 2:') print(ratings[ratings['movieId']==2]['rating'].std()) </code></pre> <p>I get returned:</p> <pre><code>Mean movieID 1: 3.921240 StdDev movieID 1: 0.889012 Mean movieID 2: 3.211977 StdDev movieID 2: 0.951150 </code></pre> <p>So to me it looks like the <code>groupby.std()</code> for some reason skips the first index, replaces that with a NaN, and then fills in the correct values, but shifted by one index. I do not understand this behavior, and it's not what I would expect. Can anyone explain this behavior of the second use of groupby to me, and how to avoid it/get it to do what I wanted?</p>
<p>The problem happens not during the computation of the standard deviation, but when assigning the result to the new column <code>StdDev</code>. This is because pandas does assignment by index, implicitly.</p> <p>The code below should work because the result of both <code>groupby</code> operations is indexed on <code>movieId</code>:</p> <pre><code># note how I remove as_index=False average_rating = ratings[['movieId','rating']].groupby('movieId').mean() average_rating['StdDev'] = ratings[['movieId','rating']].groupby('movieId').std() </code></pre> <p>Of course, you should do both in one go:</p> <pre><code>ratings[['movieId','rating']].groupby('movieId').agg(['mean', 'std']) </code></pre> <p>More elegant (or at least more standard):</p> <pre><code>ratings.groupby('movieId')['rating'].agg(['mean', 'std']) </code></pre>
python|pandas|dataframe|pandas-groupby
2
377,692
45,842,507
Merging multiple dataframe lines into aggregate lines
<p>For the following dataframe:</p> <pre><code>df = pd.DataFrame({'Name': {0: "A", 1: "A", 2:"A", 3: "B"}, 'Spec1': {0: '1', 1: '3', 2:'5', 3: '1'}, 'Spec2': {0: '2a', 1: np.nan, 2:np.nan, 3: np.nan} }, columns=['Name', 'Spec1', 'Spec2']) Name Spec1 Spec2 0 A 1 2a 1 A 3 NaN 2 A 5 NaN 3 B 1 NaN </code></pre> <p>I would like to aggregate the columns into:</p> <pre><code> Name Spec 0 A 1,3,5,2a 1 B 1 </code></pre> <p>Is there a more "pandas" way of doing this than just looping and keeping track of the values?</p>
<p>Another way</p> <pre><code>In [966]: (df.set_index('Name').unstack() .dropna().reset_index() .groupby('Name')[0].apply(','.join)) Out[966]: Name A 1,3,5,2a B 1 Name: 0, dtype: object </code></pre>
python|pandas
0
377,693
45,930,222
Should validation_batch_size equal train_batch_size when training a CNN?
<p>I want to save the model with the highest accuracy, so I take a batch of validation data to validate after each training step. The training data set will be reused because of the epochs, but <strong>if</strong> <code>train_batch_size</code> <strong>equals</strong> <code>validation_batch_size</code>, will the validation data set also be reused? The validation data set is far smaller than the training data set. How should I do this, i.e. reuse the validation set without any problems? Or should I set different sizes separately?</p> <pre><code>MAX_EPOCH = 10 for epoch in range(MAX_EPOCH): # training train_step = int(80000 / TRAIN_BATCH_SIZE) train_loss, train_acc = 0, 0 for step in range(epoch * train_step, (epoch + 1) * train_step): x_train, y_train = sess.run([x_train_batch, y_train_batch]) train_summary, _, err, ac = sess.run([merged, train_op, loss, acc], feed_dict={x: x_train, y_: y_train, mode: learn.ModeKeys.TRAIN, global_step: step}) train_loss += err train_acc += ac if (step + 1) % 100 == 0: train_writer.add_summary(train_summary, step) print("Epoch %d,train loss= %.2f,train accuracy=%.2f%%" % ( epoch, (train_loss / train_step), (train_acc / train_step * 100.0))) # validation val_step = int(20000 / VAL_BATCH_SIZE) val_loss, val_acc = 0, 0 for step in range(epoch * val_step, (epoch + 1) * val_step): x_val, y_val = sess.run([x_val_batch, y_val_batch]) val_summary, err, ac = sess.run([merged, loss, acc], feed_dict={x: x_val, y_: y_val, mode: learn.ModeKeys.EVAL, global_step: step}) val_loss += err val_acc += ac if (step + 1) % 100 == 0: valid_writer.add_summary(val_summary, step) print( "Epoch %d,validation loss= %.2f,validation accuracy=%.2f%%" % ( epoch, (val_loss / val_step), (val_acc / val_step * 100.0))) </code></pre>
<p>It is possible to use a different batch size during evaluation. </p> <p>That being said, you should use the same validation set every time you evaluate the model. Otherwise, the results can increase/decrease because the examples you evaluated on were inherently easier/more difficult compared to the previous evaluation.</p>
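<p>Concretely, in the loop from the question this just means fixing the validation set and picking any batch size that covers it exactly once per evaluation (a sketch, reusing the question's 20000-example validation set):</p> <pre><code>VAL_BATCH_SIZE = 200                # need not equal TRAIN_BATCH_SIZE
val_step = 20000 // VAL_BATCH_SIZE  # one full pass over the same fixed set
</code></pre>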
tensorflow|deep-learning|conv-neural-network
0
377,694
45,831,743
How to write a dictionary list to an excel file using python?
<p>I have a script that creates the array at runtime, and it is as below:</p> <pre><code>[{'Currency': 'Euro', 'Age of Bike': 12, 'Build Month': '08', 'Metric': '16694 km', 'Build Year': '2005', 'Website Link': u'https://www.autoscout24.nl/aanbod/motorhispania-benzine-geel-2c73a018-35a0-4e00-a1ed-1a3375ef4c4d', 'Country': 'Nederland', 'Brand': 'Motorhispania', 'Model': '', 'Price': '650'}, {'Currency': 'Euro', 'Age of Bike': 20, 'Build Month': '12', 'Metric': '75000 km', 'Build Year': '1996', 'Website Link': u'https://www.autoscout24.nl/aanbod/honda-cbr-1000-benzine-wit-d517ce56-0a7a-f055-e053-e350040a4a20', 'Country': 'Nederland', 'Brand': 'Honda', 'Model': 'CBR 1000', 'Price': '750'}, {'Currency': 'Euro', 'Age of Bike': 30, 'Build Month': '03', 'Metric': '63000 km', 'Build Year': '1987', 'Website Link': u'https://www.autoscout24.nl/aanbod/kawasaki-gpz-600-benzine-wit-80a2a256-c539-9d11-e053-e350040a17e5', 'Country': 'Nederland', 'Brand': 'Kawasaki', 'Model': 'GPZ 600', 'Price': '850'}, {'Currency': 'Euro', 'Age of Bike': 27, 'Build Month': '03', 'Metric': '61000 km', 'Build Year': '1990', 'Website Link': u'https://www.autoscout24.nl/aanbod/yamaha-xv-535-virago-virago-535-benzine-blauw-d6ee5657-3149-6b52-e053-e350040abf9d', 'Country': 'Nederland', 'Brand': 'Yamaha', 'Model': 'XV 535 Virago', 'Price': '1500'}, {'Currency': 'Euro', 'Age of Bike': 17, 'Build Month': '06', 'Metric': '51121 km', 'Build Year': '2000', 'Website Link': u'https://www.autoscout24.nl/aanbod/yamaha-fzs-600-fazer-benzine-zilver-c2dfe981-88da-4798-85d8-13f7e61fd7fc', 'Country': 'Nederland', 'Brand': 'Yamaha', 'Model': 'FZS 600', 'Price': '1595'}, {'Currency': 'Euro', 'Age of Bike': 8, 'Build Month': '07', 'Metric': '145771 km', 'Build Year': '2009', 'Website Link': u'https://www.autoscout24.nl/aanbod/bmw-r-1200-rt-benzine-grijs-3cab4057-2232-cc59-e053-e350040ae7fe', 'Country': 'Nederland', 'Brand': 'BMW', 'Model': 'R 1200 RT', 'Price': '4000'}] </code></pre> <p>and I want to write this to an excel file like this:</p> <pre><code>Currency | Age of Bike | Build Month | Metric Euro | 12 | 08 | 16694 km Euro | 20 | 12 | 75000 km Euro | 30 | 03 | 63000 km </code></pre> <p>and so on, using a python library like pandas, XlsxWriter, etc.</p>
<pre><code>data = ... # your data df = pd.DataFrame.from_dict(data) df = df[['Currency', 'Age of Bike', 'Build Month', 'Metric']] print(df) Currency Age of Bike Build Month Metric 0 Euro 12 08 16694 km 1 Euro 20 12 75000 km 2 Euro 30 03 63000 km 3 Euro 27 03 61000 km 4 Euro 17 06 51121 km 5 Euro 8 07 145771 km </code></pre> <p>Now, call <a href="https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_excel.html" rel="nofollow noreferrer"><code>df.to_excel</code></a>:</p> <pre><code>df.to_excel('data.xlsx') </code></pre> <p>Or, if you have multiple dataframes to save to multiple sheets:</p> <pre><code>writer = pd.ExcelWriter('data.xlsx') df.to_excel(writer, 'Sheet1') ... # more writes to the file writer.save() </code></pre>
python|excel|pandas|dataframe
3
377,695
45,847,006
ValueError: operands could not be broadcast together with shapes - inverse_transform- Python
<p>I know this <code>ValueError</code> question has been asked many <a href="https://stackoverflow.com/questions/24560298/python-numpy-valueerror-operands-could-not-be-broadcast-together-with-shapes">times</a>. I am still struggling to find an answer because I am using <code>inverse_transform</code> in my code.</p> <p>Say I have an array <code>a</code> </p> <pre><code>a.shape &gt; (100,20) </code></pre> <p>and another array <code>b</code></p> <pre><code>b.shape &gt; (100,3) </code></pre> <p>When I do a <code>np.concatenate</code>, </p> <pre><code>hat = np.concatenate((a, b), axis=1) </code></pre> <p>Now the shape of <code>hat</code> is </p> <pre><code>hat.shape (100,23) </code></pre> <p>After this, I tried to do:</p> <pre><code>inversed_hat = scaler.inverse_transform(hat) </code></pre> <p>When I do this, I am getting an error:</p> <blockquote> <p>ValueError: operands could not be broadcast together with shapes (100,23) (25,) (100,23)</p> </blockquote> <p>Is this a broadcast error in <code>inverse_transform</code>? Any suggestion will be helpful. Thanks in advance!</p>
<p><strike>Although you didn't specify, I'm assuming you are using <code>inverse_transform()</code> from scikit learn's <code>StandardScaler</code></strike>. You need to fit the data first.</p> <pre><code>import numpy as np from sklearn.preprocessing import MinMaxScaler In [1]: arr_a = np.random.randn(5*3).reshape((5, 3)) In [2]: arr_b = np.random.randn(5*2).reshape((5, 2)) In [3]: arr = np.concatenate((arr_a, arr_b), axis=1) In [4]: scaler = MinMaxScaler(feature_range=(0, 1)).fit(arr) In [5]: scaler.inverse_transform(arr) Out[5]: array([[ 0.19981115, 0.34855509, -1.02999482, -1.61848816, -0.26005923], [-0.81813499, 0.09873672, 1.53824716, -0.61643731, -0.70210801], [-0.45077786, 0.31584348, 0.98219019, -1.51364126, 0.69791054], [ 0.43664741, -0.16763207, -0.26148908, -2.13395823, 0.48079204], [-0.37367434, -0.16067958, -3.20451107, -0.76465428, 1.09761543]]) In [6]: new_arr = scaler.inverse_transform(arr) In [7]: new_arr.shape == arr.shape Out[7]: True </code></pre>
python|arrays|numpy|scikit-learn|broadcast
7
377,696
45,995,040
Parse ics file in Python. Icalendar package doesn't return start/end date and other properties
<p>I'm trying to parse a google calendar (<code>cal.ics</code>) using the icalendar package, running this script:</p> <pre><code>from icalendar import Calendar, Event from datetime import datetime g = open('cal.ics','rb') gcal = Calendar.from_ical(g.read()) for component in gcal.walk(): print component.get('summary') print component.get('dtstart') g.close() </code></pre> <p>The output has the correct <code>summary</code>, but instead of the <code>start date</code> I'm getting:</p> <pre><code>&lt;icalendar.prop.vDDDTypes object at 0x10bae9b10&gt; </code></pre> <p>I tried replacing <code>get</code> with <code>decoded</code> like this:</p> <pre><code> print component.decoded('summary') print component.decoded('dtstart') </code></pre> <p>...but got an error:</p> <pre><code>--------------------------------------------------------------------------- KeyError Traceback (most recent call last) &lt;ipython-input-57-d6a73c485b11&gt; in &lt;module&gt;() 8 # print component.content_line 9 print component.name ---&gt; 10 print component.decoded('summary') 11 print component.get('dtstart') 12 ~/anaconda/lib/python2.7/site-packages/icalendar/cal.pyc in decoded(self, name, default) 238 else: 239 if default is _marker: --&gt; 240 raise KeyError(name) 241 else: 242 return default KeyError: 'summary' </code></pre> <p>All the docs I found are about adding to the calendar, while I'm trying to parse it and eventually move parts of it into pandas. If you have any ideas/suggestions please help.</p>
<p>I had this problem years ago - a real source of frustration. The solution is to do </p> <pre><code>component.get('dtstart').dt </code></pre> <p>In general, when you're having a problem like this, try calling <code>dir()</code> on it.</p>
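<p>A minimal sketch of the fixed loop (the <code>VEVENT</code> check is an added safeguard, since <code>walk()</code> also yields the calendar object itself, which has no <code>dtstart</code>):</p> <pre><code>for component in gcal.walk():
    if component.name == 'VEVENT':
        print(component.get('summary'))
        print(component.get('dtstart').dt)  # a date/datetime, not a vDDDTypes repr
</code></pre>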
python|pandas|icalendar|google-calendar-api
0
377,697
45,779,307
Difference between tf.assign and assignment operator (=)
<p>I'm trying to understand the difference between tf.assign and the assignment operator(=). I have three sets of code</p> <p>First, using simple tf.assign</p> <pre><code>import tensorflow as tf with tf.Graph().as_default(): a = tf.Variable(1, name="a") assign_op = tf.assign(a, tf.add(a,1)) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print sess.run(assign_op) print a.eval() print a.eval() </code></pre> <p>The output is expected as</p> <pre><code>2 2 2 </code></pre> <p>Second, using assignment operator</p> <pre><code>import tensorflow as tf with tf.Graph().as_default(): a = tf.Variable(1, name="a") a = a + 1 with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print sess.run(a) print a.eval() print a.eval() </code></pre> <p>The results are still 2, 2, 2. </p> <p>Third, I use <strong>both</strong> tf.assign and assignment operator</p> <pre><code>import tensorflow as tf with tf.Graph().as_default(): a = tf.Variable(1, name="a") a = tf.assign(a, tf.add(a,1)) with tf.Session() as sess: sess.run(tf.global_variables_initializer()) print sess.run(a) print a.eval() print a.eval() </code></pre> <p>Now, the output becomes 2, 3, 4.</p> <p>My questions are</p> <ol> <li><p>In the 2nd snippet using (=), when I have sess.run(a), it seems I'm running an assign op. So does "a = a+1" internally create an assignment op like assign_op = tf.assign(a, a+1)? Is the op run by the session really just the assign_op? But when I run a.eval(), it doesn't continue to increment a, hence it appears eval is evaluating a "static" variable.</p></li> <li><p>I'm not sure how to explain the 3rd snippet. Why the two evals increment a, but the two evals in the 2nd snippet doesn't?</p></li> </ol> <p>Thanks.</p>
<p>The main confusion here is that doing <code>a = a + 1</code> will reassign the Python variable <code>a</code> to the resulting tensor of the addition operation <code>a + 1</code>. <code>tf.assign</code>, on the other hand, is an operation for setting the value of a TensorFlow variable.</p> <pre><code>a = tf.Variable(1, name="a") a = a + 1 </code></pre> <p>This is equivalent to:</p> <pre><code>a = tf.add(tf.Variable(1, name="a"), 1) </code></pre> <p>With that in mind:</p> <blockquote> <p>In the 2nd snippet using (=), when I have sess.run(a), it seems I'm running an assign op. So does "a = a+1" internally create an assignment op like assign_op = tf.assign(a, a+1)? [...]</p> </blockquote> <p>It might look so, but not true. As explained above, this will only reassign the Python variable. And without <code>tf.assign</code> or any other operation that changes the variable, it stays with the value 1. Each time <code>a</code> is evaluated, the program will always calculate <code>a + 1 =&gt; 1 + 1</code>.</p> <blockquote> <p>I'm not sure how to explain the 3rd snippet. Why the two evals increment a, but the two evals in the 2nd snippet doesn't?</p> </blockquote> <p>That's because calling <code>eval()</code> on the assignment tensor in the third snippet also triggers the variable assignment (note that this isn't much different from doing <code>session.run(a)</code> with the current session).</p>
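<p>The distinction in miniature (a sketch in the question's graph-mode style):</p> <pre><code>v = tf.Variable(1, name="v")
t = v + 1                   # a new tensor; evaluating t never changes v
inc = tf.assign(v, v + 1)   # an op whose execution mutates v each time it runs
</code></pre>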
tensorflow
5
377,698
45,986,970
Is it possible for a Tensorflow graph to run outside of a session?
<p>Could someone please explain the following situation:</p> <p>I've created a simple convolutional neural network using TensorFlow. I'm using a class and I've created my graph in the constructor. I then train the network using a train method I've written. I'm also using queues and the feed-in mechanism. This is an excerpt from the code:</p> <pre><code>class Super_res:
    'Create a CNN model which augments the resolution of an image'

    # object initialization (python) - constructor
    def __init__(self, input, output, batch_size, record_size, weights, biases): # input (neurons), output (no. neurons), batch_size (batches to process before registering delta), record_size ()
        print("Initializing object")
        self.input = input
        self.output = output
        self.batch_size = batch_size
        self.record_size = record_size
        self.weights = weights
        self.biases = biases

        # initialize data batch readers. Parameters: [Path], record_size, batch_size
        self.data_batch = data_reader3.batch_generator([DATA_PATH_OPTICAL_TRAIN], self.record_size, self.batch_size)  # train set
        self.data_batch_eval = data_reader3.batch_generator([DATA_PATH_EVAL], self.record_size, self.batch_size)  # eval set

        # this returns a [batch_size, 2, n_input] tensor. The second dimension is comprised of the low-res image and the GT high-res image. Each of these images is comprised of n_input entries (flat vector)
        self.data1 = tf.placeholder_with_default(tf.transpose(self.data_batch, [1, 0, 2]), [2, batch_size, n_input])  # one for optical and another for GT image [batch_size, n_input] each
        self.keep_prob = tf.placeholder(tf.float32)  # dropout (keep probability) - this placeholder can accept a Tensor of arbitrary shape

        # create network model
        self.pred = self.cnn_model(self.data1[0], self.weights, self.biases)  # self.data1[0] is the low-res data

    def train(self):
        #self.low_res = self.data1[0]
        #self.high_res = self.data1[1]

        # define loss and optimizer
        #self.cost = tf.reduce_mean(tf.pow(self.data1[1] - self.pred, 2))
        #self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.cost)

        # Initializing the variables
        init = tf.global_variables_initializer()

        # Initialize session
        with tf.Session() as sess:
            sess.run(init)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)

            step = 1
            print("Entering training")
            # Keep training until reach max iterations
            while step * batch_size &lt; training_iters:
                #_, c = sess.run([self.optimizer, self.cost])
                conv_result = sess.run(self.pred)
                print(conv_result)
                #data2 = self.data1[0]
                #print(data2)
                if step % display_step == 0:
                    print("Step:", '%04d' % (step+1))  # "cost=", c)
                step = step + 1

            coord.request_stop()
            coord.join(threads)
</code></pre> <p>When I run this code, I get the following error output:</p> <pre><code>Entering training
Traceback (most recent call last):
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1139, in _do_call
    return fn(*args)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1121, in _run_fn
    status, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\contextlib.py", line 66, in __exit__
    next(self.gen)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 466, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "super_res_class.py", line 137, in &lt;module&gt;
    p.train()
  File "super_res_class.py", line 106, in train
    conv_result = sess.run(self.pred)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 789, in run
    run_metadata_ptr)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 997, in _run
    feed_dict_string, options, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1132, in _do_run
    target_list, options, run_metadata)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\client\session.py", line 1152, in _do_call
    raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.OutOfRangeError: RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

Caused by op 'shuffle_batch', defined at:
  File "super_res_class.py", line 136, in &lt;module&gt;
    p = Super_res(1024,1024,512,record_size, weights, biases) # params (n_input, n_output, batch_size)
  File "super_res_class.py", line 50, in __init__
    self.data_batch = data_reader3.batch_generator([DATA_PATH_OPTICAL_TRAIN],self.record_size, self.batch_size) # train set
  File "E:\google_drive\Doctorate\matlab code\Tensorflow\doctorate_CNN\dong_recreation\data_reader3.py", line 156, in batch_generator
    capacity=capacity, min_after_dequeue=min_after_dequeue)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\training\input.py", line 1217, in shuffle_batch
    name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\training\input.py", line 788, in _shuffle_batch
    dequeued = queue.dequeue_many(batch_size, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\ops\data_flow_ops.py", line 457, in dequeue_many
    self._queue_ref, n=n, component_types=self._dtypes, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\ops\gen_data_flow_ops.py", line 946, in _queue_dequeue_many_v2
    timeout_ms=timeout_ms, name=name)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 767, in apply_op
    op_def=op_def)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 2506, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\Users\divin\Miniconda3\envs\tensorflow_gpu\lib\site-packages\tensorflow\python\framework\ops.py", line 1269, in __init__
    self._traceback = _extract_stack()

OutOfRangeError (see above for traceback): RandomShuffleQueue '_2_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 512, current size 0)
     [[Node: shuffle_batch = QueueDequeueManyV2[component_types=[DT_FLOAT], timeout_ms=-1, _device="/job:localhost/replica:0/task:0/cpu:0"](shuffle_batch/random_shuffle_queue, shuffle_batch/n)]]
     [[Node: shuffle_batch/_25 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_5_shuffle_batch", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

(tensorflow_gpu) E:\google_drive\Doctorate\matlab code\Tensorflow\doctorate_CNN\dong_recreation&gt;
</code></pre> <p>When I remove the <code>sess.run()</code> from my pred output, the code seems to operate normally:</p> <pre><code>    def train(self):
        #self.low_res = self.data1[0]
        #self.high_res = self.data1[1]

        # define loss and optimizer
        #self.cost = tf.reduce_mean(tf.pow(self.data1[1] - self.pred, 2))
        #self.optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(self.cost)

        # Initializing the variables
        init = tf.global_variables_initializer()

        # Initialize session
        with tf.Session() as sess:
            sess.run(init)
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(coord=coord)

            step = 1
            print("Entering training")
            # Keep training until reach max iterations
            while step * batch_size &lt; training_iters:
                #_, c = sess.run([self.optimizer, self.cost])
                conv_result = self.pred
                print(conv_result)
                #data2 = self.data1[0]
                #print(data2)
                if step % display_step == 0:
                    print("Step:", '%04d' % (step+1))  # "cost=", c)
                step = step + 1

            coord.request_stop()
            coord.join(threads)
</code></pre> <p>Could someone please explain this to me? Normally, the graph is only evaluated when run under a session! What gives here?</p>
<p>Just saying <code>conv_result = self.pred</code> won't do anything -- it merely binds the symbolic <code>Tensor</code> object (a graph node) to a new name, and printing it shows the tensor's metadata rather than any computed values. You do indeed need <code>sess.run(self.pred)</code> to get the graph to execute. The error you're getting is something else about your model: the <code>RandomShuffleQueue</code> behind your <code>shuffle_batch</code> call is closed and empty, so the dequeue of 512 elements can never be satisfied. That can't be diagnosed from the information you've given, but I would search further on the site for why your input producer isn't filling the queue / has zero size.</p>
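<p>For illustration, here is a minimal, self-contained sketch of the graph-vs-session distinction, using only the standard TF 1.x API (as in your code). Referencing a tensor just gives you the symbolic node; evaluation happens exclusively inside <code>sess.run()</code>:</p> <pre><code>import tensorflow as tf

x = tf.constant([1.0, 2.0])   # graph node holding a constant
y = x * 2                     # another graph node; nothing is computed yet

print(y)                      # prints metadata only, e.g. Tensor("mul:0", shape=(2,), dtype=float32)

with tf.Session() as sess:
    print(sess.run(y))        # actually executes the graph: [2. 4.]
</code></pre> <p>That is exactly what happened in your second version: <code>print(conv_result)</code> printed the <code>Tensor</code> object itself, so the loop "ran normally" only because it never touched the queue.</p>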
python|tensorflow|convolution
1
377,699
46,003,482
pandas groupby count and proportion
<p>I'm trying to do something that I know must be basic pandas, but am racking my brain to figure it out. I want proportions and counts of each group to be available for an arbitrary level of group-bys:</p> <pre><code>import pandas as pd df = pd.DataFrame({'A': [1, 0, 1, 0, 1, 0, 0, 0], 'B': ['A'] * 4 + ['B'] * 4}) gb = df.groupby(['A', 'B']).size() prop_gb = gb / gb.groupby(level=0).sum() </code></pre> <p><code>prop_gb</code> is now:</p> <pre><code>prop_gb Out[116]: A B 0 A 0.400000 B 0.600000 1 A 0.666667 B 0.333333 dtype: float64 </code></pre> <p>I ultimately want this, though:</p> <pre><code>A B prop count 0 A 0.400000 2 B 0.600000 3 1 A 0.666667 2 B 0.333333 1 </code></pre> <p>I've tried merging the two <code>pandas.Series</code> objects, <code>gb</code> and <code>prop_gb</code> by converting them to dictionaries and "joining" them that way, but I know there must be a native pandas way to accomplish this...</p> <p>This technically accomplishes what I want:</p> <pre><code>desired = {k: (v, prop_gb.to_dict()[k]) for k, v in gb.to_dict().items()} desired {(0, 'A'): (2, 0.40000000000000002), (0, 'B'): (3, 0.59999999999999998), (1, 'A'): (2, 0.66666666666666663), (1, 'B'): (1, 0.33333333333333331)} </code></pre>
<p>You can produce these values in one expression like so:</p> <pre><code>df.groupby(['A', 'B']).size().agg(
    {'count': lambda x: x, 'prop': lambda x: x / x.sum(level=0)}
).unstack(level=0).reset_index()

#    A  B  count      prop
# 0  0  A    2.0  0.400000
# 1  0  B    3.0  0.600000
# 2  1  A    2.0  0.666667
# 3  1  B    1.0  0.333333
</code></pre>
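<p>Note that <code>x.sum(level=0)</code> and the dict-of-lambdas <code>agg</code> rely on older pandas behaviour (the <code>level=</code> argument to <code>sum</code> has since been removed). As a rough equivalent that should also work on recent pandas versions, you can build both columns from the same grouped sizes; <code>gb</code> and <code>out</code> below are just illustrative names:</p> <pre><code>gb = df.groupby(['A', 'B']).size()

# transform('sum') broadcasts each A-level total back onto the (A, B) index
out = pd.DataFrame({'count': gb,
                    'prop': gb / gb.groupby(level=0).transform('sum')}).reset_index()
</code></pre>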
python|pandas
4