kth smallest element sorted matrix using QuickSelect
38,772,873
<p>I know this has been asked before but this question is about my specific code. I am trying to do a pseudo-QuickSelect algorithm where I compare k to the midpoint of a sub-interval of a sorted matrix.</p> <p>I keep getting a timeout error.</p> <p>Here is the matrix:</p> <pre><code>matrix = [
   [ 1,  5,  9],
   [10, 11, 13],
   [12, 13, 15]
],
k = 8
</code></pre> <p>Here is my code:</p> <pre><code>def kthSmallest(self, matrix, k):
    """
    :type matrix: List[List[int]]
    :type k: int
    :rtype: int
    """
    return self.matrixRecur(matrix, (0, 0), (len(matrix) - 1, len(matrix[0]) - 1), k)

def matrixRecur(self, splicedMatrix, left, right, k):
    start_row, start_col = left
    end_row, end_col = right
    mid_row = (start_row + end_row)/2
    mid_col = (start_col + end_col)/2
    i = start_row
    j = start_col
    lcount = 0
    while(not (i == mid_row and j == mid_col)):
        if j &lt; len(splicedMatrix[0]):
            j += 1
        else:
            j = 0
            i += 1
        lcount += 1
    if k == lcount:
        return splicedMatrix[mid_row][mid_col]
    elif k &lt; lcount:
        return self.matrixRecur(splicedMatrix, (start_row, start_col), (mid_row, mid_col), k)
    else:
        return self.matrixRecur(splicedMatrix, (mid_row, mid_col + 1), (end_row, end_col), k-lcount)
</code></pre> <p>I pass in tuples to <code>matrixRecur</code> which contain the <code>(row, col)</code> of the endpoints of the interval. So, if I want to search the whole matrix, I pass <code>(0, 0)</code> and <code>(n, n)</code>. <code>matrixRecur</code> will look at a sub-interval, determine the midpoint based on the row and column numbers of the endpoints, count the number of elements less than the midpoint, and compare that count to <code>k</code>. If <code>k</code> is less than the number of elements less than the midpoint (<code>lcount</code>), then the kth smallest element is within the left interval, because there are at most <code>lcount</code> elements less than the midpoint and <code>k</code> &lt; <code>lcount</code>.</p> <p>I am running this on an interview question site and the site continues to tell me my code times out. I am not seeing why.</p>
0
2016-08-04T16:33:14Z
38,773,490
<p>Your approach will not work, because your matrix is only sorted row-wise and column-wise. Consider a matrix like this:</p> <pre><code>10, 20, 30, 40
15, 25, 35, 45
24, 29, 37, 48
32, 33, 39, 50
</code></pre> <p>Your approach will fail in this case. You are getting a timeout because you are traversing the whole 2D matrix; the worst-case time complexity is <code>O(mn)</code> (where m and n are the number of rows and columns, respectively).</p> <p>You can use a <strong>min-heap</strong> to solve this problem.</p> <p>Algorithm:</p> <pre><code>1. Build a min-heap of the first row's elements. A heap entry also stores its row number and column number.
2. for (int i = 0; i &lt; k; i++)
       Get the minimum element (the root) from the min-heap.
       Find the row number and column number of the minimum element.
       Replace the root with the next element from the same column and min-heapify the root.
3. Return the last extracted root.
</code></pre> <p>Time complexity: <code>O(n + k log n)</code></p> <p>Source: <a href="http://www.geeksforgeeks.org/kth-smallest-element-in-a-row-wise-and-column-wise-sorted-2d-array-set-1/" rel="nofollow">Kth smallest in 2D matrix</a></p>
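A runnable sketch of the algorithm above using the standard library's `heapq`. One assumption to flag: this version seeds the heap with the first element of each *row* and advances along rows, which is symmetric to the first-row/advance-down-columns wording above for a row- and column-wise sorted matrix:

```python
import heapq

def kth_smallest(matrix, k):
    """k-th smallest value in a matrix whose rows and columns are sorted."""
    # Seed the heap with the first element of each row: (value, row, col).
    heap = [(row[0], r, 0) for r, row in enumerate(matrix)]
    heapq.heapify(heap)
    val = None
    for _ in range(k):
        val, r, c = heapq.heappop(heap)
        if c + 1 < len(matrix[r]):
            # Push the next element from the same row to keep the frontier.
            heapq.heappush(heap, (matrix[r][c + 1], r, c + 1))
    return val

result = kth_smallest([[1, 5, 9], [10, 11, 13], [12, 13, 15]], 8)  # 13
```

At most n entries ever sit in the heap, so each of the k pops costs O(log n), matching the stated O(n + k log n).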
0
2016-08-04T17:07:52Z
[ "python", "algorithm", "matrix" ]
Signaling between parent and child widgets in tkinter
38,772,877
<p>I have a moderately complex GUI that I'm building for interacting with and observing some simulations. I would like to be able to continue to refactor and add features as the project progresses. For this reason, I would like as loose as possible a coupling between different widgets in the application.</p> <p>My application is structured something like this:</p> <pre><code>import tkinter as tk

class Application(tk.Tk):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.instance_a = ClassA(self)
        self.instance_b = ClassB(self)
        # ... #

class ClassA(tk.Frame):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # ... #

class ClassB(tk.Frame):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # ... #

def main():
    application = Application()
    application.mainloop()

if __name__ == '__main__':
    main()
</code></pre> <p>I would like to be able to perform some action in one widget (such as selecting an item in a Treeview widget or clicking on part of a canvas) which changes the state of the other widget.</p> <p>One way to do this is to have the following code in class A:</p> <pre><code>self.bind('&lt;&lt;SomeEvent&gt;&gt;', self.master.instance_b.callback())
</code></pre> <p>With the accompanying code in class B:</p> <pre><code>def callback(self):
    print('The more that things change, ')
</code></pre> <p>The problem that I have with this approach is that class A has to know about class B. Since the project is still a prototype, I'm changing things all the time and I want to be able to rename <code>callback</code> to something else, or get rid of widgets belonging to class B entirely, or make instance_a a child of some <code>PanedWindow</code> object (in which case <code>master</code> needs to be replaced by <code>winfo_toplevel()</code>).</p> <p>Another approach is to put a method inside the application class which is called whenever some event is triggered:</p> <pre><code>class Application(tk.Tk):
    # ... #
    def application_callback(self):
        self.instance_b.callback()
</code></pre> <p>and modify the bound event in class A:</p> <pre><code>self.bind('&lt;&lt;SomeEvent&gt;&gt;', self.master.application_callback())
</code></pre> <p>This is definitely easier to maintain, but requires more code. It also requires the application class to know about the methods implemented in class B and where instance_b is located in the hierarchy of widgets. In a perfect world, I would like to be able to do something like this:</p> <pre><code># in class A:
self.bind('&lt;&lt;SomeEvent&gt;&gt;', lambda _: self.event_generate('&lt;&lt;AnotherEvent&gt;&gt;'))

# in class B:
self.bind('&lt;&lt;AnotherEvent&gt;&gt;', callback)
</code></pre> <p>That way, if I perform an action in one widget, the second widget would automatically know to respond in some way <em>without either widget knowing about the implementation details of the other</em>. After some testing and head-scratching, I came to the conclusion that this kind of behavior is impossible using tkinter's event system. So, here are my questions:</p> <ol> <li>Is this desired behavior really impossible?</li> <li>Is it even a good idea?</li> <li>Is there a better way of achieving the degree of modularity that I want?</li> <li>What modules/tools can I use in place of tkinter's built-in event system?</li> </ol>
1
2016-08-04T16:33:26Z
38,773,229
<p>My code in <a href="http://stackoverflow.com/a/38161946/5781248">this answer</a> avoids the issue of class A having to know about the internals of class B by calling methods of a handler object. In the following code, methods in class <code>Scanner</code> do not need to know about the internals of a <code>ScanWindow</code> instance. The instance of the <code>Scanner</code> class contains a reference to an instance of a handler class, and communicates with the instance of <code>ScanWindow</code> through the methods of the <code>Handler</code> class.</p> <pre><code># this class could be replaced with a class inheriting
# a Tkinter widget, threading is not necessary
class Scanner(object):
    def __init__(self, handler, *args, **kw):
        self.thread = threading.Thread(target=self.run)
        self.handler = handler

    def run(self):
        while True:
            if self.handler.need_stop():
                break
            img = self.cam.read()
            self.handler.send_frame(img)

class ScanWindow(tk.Toplevel):
    def __init__(self, parent, *args, **kw):
        tk.Toplevel.__init__(self, master=parent, *args, **kw)
        # a reference to the parent widget, if virtual events are to be sent
        self.parent = parent
        self.lock = threading.Lock()
        self.stop_event = threading.Event()
        self.frames = []

    def start(self):
        class Handler(object):
            # note: self and self_ are different;
            # self refers to the instance of ScanWindow
            def need_stop(self_):
                return self.stop_event.is_set()

            def send_frame(self_, frame):
                self.lock.acquire(True)
                self.frames.append(frame)
                self.lock.release()
                # send an event to another widget
                # self.parent.event_generate('&lt;&lt;ScannerFrame&gt;&gt;', when='tail')

            def send_symbol(self_, data):
                self.lock.acquire(True)
                self.symbols.append(data)
                self.lock.release()
                # send an event to another widget
                # self.parent.event_generate('&lt;&lt;ScannerSymbol&gt;&gt;', when='tail')

        self.stop_event.clear()
        self.scanner = Scanner(Handler())
</code></pre>
0
2016-08-04T16:53:38Z
[ "python", "python-3.x", "user-interface", "design", "tkinter" ]
How to insert several lists into sqlite column
38,772,878
<p>I am going to insert information from the following lists into sqlite columns:</p> <pre><code>a = [1, 2, 3]
b = ['MAR', 'PAR', 'ZAR']
c = [1000, 2000, 3000]
</code></pre> <p>Column AA of the database should have the information in list a, column BB should have the information in list b, and column CC should have the information in list c.</p> <p>This is my code:</p> <pre><code>import sqlite3

conn = sqlite3.connect('test.db')
c = conn.cursor()

def create_table ():
    c.execute ('CREATE TABLE IF NOT EXISTS test (AA INT, BB TEXT, CC INT)')
    print ("table was created")

create_table ()

a = [1, 2, 3]
b = ['MAR', 'PAR', 'ZAR']
c = [1000, 2000, 3000]

for i in range (len (a)):
    I = a[i]
    II = b[i]
    III = c[i]
    c.execute ("INSERT INTO TEST (AA, BB, CC) VALUES (?,?,?) ", I, II, III )
</code></pre> <p>The error is this:</p> <pre><code>c.execute ("INSERT INTO TEST (AA, BB, CC) VALUES (?,?,?) ", I, II, III )
AttributeError: 'list' object has no attribute 'execute'
</code></pre>
0
2016-08-04T16:33:31Z
38,772,893
<p>You have shadowed the <code>c</code> variable. It no longer refers to the open cursor:</p> <pre><code>c = conn.cursor()

# ...

c = [1000, 2000, 3000]
</code></pre> <p>This is one of the many reasons to have <em>meaningful variable names</em>. I doubt you would have made this error if you had named the cursor <code>cursor</code>:</p> <pre><code>cursor = conn.cursor()
</code></pre> <hr> <p>Also note that modern IDEs like PyCharm are capable of catching these errors early:</p> <p><a href="http://i.stack.imgur.com/pCi4d.png" rel="nofollow"><img src="http://i.stack.imgur.com/pCi4d.png" alt="enter image description here"></a></p>
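Beyond the renaming, the loop can be simplified too. Note that even with the cursor un-shadowed, <code>c.execute(sql, I, II, III)</code> would still fail, because <code>sqlite3</code> expects the parameters as a single sequence, e.g. <code>cursor.execute(sql, (I, II, III))</code>. A sketch using <code>executemany</code> with <code>zip</code> (an in-memory database stands in for <code>test.db</code>):

```python
import sqlite3

conn = sqlite3.connect(':memory:')   # stand-in for 'test.db'
cursor = conn.cursor()               # a name the data lists cannot shadow
cursor.execute('CREATE TABLE IF NOT EXISTS test (AA INT, BB TEXT, CC INT)')

aa = [1, 2, 3]
bb = ['MAR', 'PAR', 'ZAR']
cc = [1000, 2000, 3000]

# zip() pairs the three lists row by row; executemany inserts them in one call.
cursor.executemany('INSERT INTO test (AA, BB, CC) VALUES (?, ?, ?)',
                   zip(aa, bb, cc))
conn.commit()
rows = cursor.execute('SELECT AA, BB, CC FROM test ORDER BY AA').fetchall()
```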
2
2016-08-04T16:34:22Z
[ "python", "sqlite", "python-3.x", "insert-into" ]
Ending an infinite loop issue (Python)
38,772,896
<p>I am attempting to create a loop which a user can stop at the end of the program. I've tried various solutions, none of which have worked, all I have managed to do is create the loop but I can't seem to end it. I only recently started learning Python and I would be grateful if someone could enlighten me on this issue.</p> <pre><code>def main():
    while True:
        NoChild = int(0)
        NoAdult = int(0)
        NoDays = int(0)
        AdultCost = int(0)
        ChildCost = int(0)
        FinalCost = int(0)
        print ("Welcome to Superslides!")
        print ("The theme park with the biggest water slide in Europe.")
        NoAdult = int(raw_input("How many adults are there?"))
        NoChild = int(raw_input("How many children are there?"))
        NoDays = int(raw_input("How many days will you be at the theme park?"))
        WeekDay = (raw_input("Will you be attending the park on a weekday? (Yes/No)"))
        if WeekDay == "Yes":
            AdultCost = NoAdult * 5
        elif WeekDay == "No":
            AdultCost = NoAdult * 10
        ChildCost = NoChild * 5
        FinalCost = (AdultCost + ChildCost)*NoDays
        print ("Order Summary")
        print("Number of Adults: ",NoAdult,"Cost: ",AdultCost)
        print("Number of Children: ",NoChild,"Cost: ",ChildCost)
        print("Your final total is:",FinalCost)
        print("Have a nice day at SuperSlides!")
        again = raw_input("Would you like to process another customer? (Yes/No)")
        if again =="No":
            print("Goodbye!")
            return
        elif again =="Yes":
            print("Next Customer.")
        else:
            print("You should enter either Yes or No.")

if __name__=="__main__":
    main()
</code></pre>
0
2016-08-04T16:34:35Z
38,773,031
<p>You can change the <code>return</code> to <code>break</code> and it will exit the <code>while</code> loop:</p> <pre><code>if again =="No":
    print("Goodbye!")
    break
</code></pre>
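To see the difference concretely, here is a self-contained sketch of the loop with <code>break</code>. The <code>serve_until_no</code> name and the <code>replies</code> parameter are hypothetical stand-ins so the <code>raw_input</code> calls can be simulated without a console:

```python
def serve_until_no(replies):
    """Count how many customers are served before a reply of "No".

    `replies` stands in for successive raw_input() answers.
    """
    answers = iter(replies)
    served = 0
    while True:
        served += 1            # process one customer
        again = next(answers)
        if again == "No":
            break              # exits the while loop only; code after it still runs
    return served

count = serve_until_no(["Yes", "Yes", "No"])
```

Unlike <code>return</code>, which leaves <code>main()</code> entirely, <code>break</code> only ends the loop, so any cleanup code after the loop would still execute.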
0
2016-08-04T16:42:42Z
[ "python", "loops" ]
38,773,088
<p>Instead of this:</p> <pre><code>while True:
</code></pre> <p>you should use this:</p> <pre><code>again = True
while again:
    ...
    usrIn = raw_input("Would you like to process another customer? y/n")
    if usrIn == 'y':
        again = True
    else:
        again = False
</code></pre> <p>I just made it default to False, but you can always just make it ask the user for a new input if they don't enter y or n.</p>
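The flag-controlled version of the loop can be sketched and tested without a console; <code>serve_with_flag</code> and <code>replies</code> are hypothetical stand-ins for the program's prompt loop:

```python
def serve_with_flag(replies):
    """Same prompt loop, driven by a boolean flag instead of break.

    `replies` stands in for the raw_input() answers.
    """
    answers = iter(replies)
    again = True
    served = 0
    while again:
        served += 1                   # process one customer
        again = next(answers) == 'y'  # flag flips to False on anything but 'y'
    return served

count = serve_with_flag(['y', 'y', 'n'])
```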
0
2016-08-04T16:45:43Z
[ "python", "loops" ]
38,773,606
<p>I checked your code with Python 3.5 and it worked after I changed <code>raw_input</code> to <code>input</code>, since <code>input</code> in 3.5 is the <code>raw_input</code> of 2.7. Since you're using <code>print()</code> as a function, you should have <code>from __future__ import print_function</code> in your import section, but I can't see any import section in your script.</p> <p>What exactly doesn't work?</p> <p>Additionally: it's a good habit to end a command-line application by exiting with an exit code instead of breaking and ending. So you would have to add</p> <pre><code>import sys
</code></pre> <p>to the import section of your Python script, and when checking whether the user wants to end the program, do a</p> <pre><code>if again == "No":
    print("Good Bye")
    sys.exit(0)
</code></pre> <p>This gives you the opportunity, in case of an error, to exit with a different exit code.</p>
0
2016-08-04T17:14:10Z
[ "python", "loops" ]
How to index one array with another using SciPy CSR Sparse Arrays?
38,772,900
<p>I have two arrays A and B. In NumPy you can use A as an index into B, e.g.</p> <pre><code>A = np.array([[1,2,3,1,7,3,1,2,3],[4,5,6,4,5,6,4,5,6],[7,8,9,7,8,9,7,8,9]])
B = np.array([1,2,3,4,5,6,7,8,9,0])
c = B[A]
</code></pre> <p>which produces:</p> <blockquote> <p>[[2 3 4 2 8 4 2 3 4] [5 6 7 5 6 7 5 6 7] [8 9 0 8 9 0 8 9 0]]</p> </blockquote> <p>However, in my case the arrays A and B are SciPy CSR sparse arrays, and they don't seem to support indexing.</p> <pre><code>A_sparse = sparse.csr_matrix(A)
B_sparse = sparse.csr_matrix(B)
c = B_sparse[A_sparse]
</code></pre> <p>This results in:</p> <blockquote> <p>IndexError: Indexing with sparse matrices is not supported except boolean indexing where matrix and index are equal shapes.</p> </blockquote> <p>I've come up with the function below to replicate NumPy's behavior with the sparse arrays:</p> <pre><code>def index_sparse(A, B):
    A_sparse = scipy.sparse.coo_matrix(A)
    B_sparse = sparse.csr_matrix(B)
    res = sparse.csr_matrix(A_sparse)
    for i, j, v in zip(A_sparse.row, A_sparse.col, A_sparse.data):
        res[i, j] = B_sparse[0, v]
    return res

res = index_sparse(A, B)
print res.todense()
</code></pre> <p>Looping over the array and having to create a new array in Python isn't ideal. Is there a better way of doing this using built-in functions from SciPy/NumPy?</p>
0
2016-08-04T16:34:41Z
38,777,215
<p>Sparse indexing is less developed. The <code>coo</code> format, for example, doesn't implement it at all.</p> <p>I haven't tried to implement this problem, though I have answered others that involve working with the sparse format attributes. So I'll just make some general observations.</p> <p><code>B_sparse</code> is a matrix, so its shape is <code>(1,10)</code>. So the equivalent to <code>B[A]</code> is</p> <pre><code>In [294]: B_sparse[0,A]
Out[294]:
&lt;3x9 sparse matrix of type '&lt;class 'numpy.int32'&gt;'
    with 24 stored elements in Compressed Sparse Row format&gt;

In [295]: _.A
Out[295]:
array([[2, 3, 4, 2, 8, 4, 2, 3, 4],
       [5, 6, 7, 5, 6, 7, 5, 6, 7],
       [8, 9, 0, 8, 9, 0, 8, 9, 0]], dtype=int32)
</code></pre> <p><code>B_sparse[A,:]</code> or <code>B_sparse[:,A]</code> gives a 3d warning, since it would be trying to create a matrix version of:</p> <pre><code>In [298]: B[None,:][:,A]
Out[298]:
array([[[2, 3, 4, 2, 8, 4, 2, 3, 4],
        [5, 6, 7, 5, 6, 7, 5, 6, 7],
        [8, 9, 0, 8, 9, 0, 8, 9, 0]]])
</code></pre> <p>As to your function:</p> <p><code>A_sparse.nonzero()</code> does <code>A_sparse.tocoo()</code> and returns its <code>row</code> and <code>col</code>. Effectively the same as what you do.</p> <p>Here's something that should be faster, though I haven't tested it enough to be sure it is robust:</p> <pre><code>In [342]: Ac=A_sparse.tocoo()
In [343]: res=Ac.copy()
In [344]: res.data[:]=B_sparse[0, Ac.data].A[0]
In [345]: res
Out[345]:
&lt;3x9 sparse matrix of type '&lt;class 'numpy.int32'&gt;'
    with 27 stored elements in COOrdinate format&gt;

In [346]: res.A
Out[346]:
array([[2, 3, 4, 2, 8, 4, 2, 3, 4],
       [5, 6, 7, 5, 6, 7, 5, 6, 7],
       [8, 9, 0, 8, 9, 0, 8, 9, 0]], dtype=int32)
</code></pre> <p>In this example there are 2 zeros that could be cleaned up as well (look at <code>res.nonzero()</code>).</p> <p>Since you are setting each <code>res[i,j]</code> with values from <code>Ac.row</code> and <code>Ac.col</code>, <code>res</code> has the same <code>row,col</code> values as <code>Ac</code>, so I initialize it as a copy. Then it's just a matter of updating the <code>res.data</code> attribute. It would be faster to index <code>Bc.data</code> directly, but that doesn't account for its sparsity.</p>
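Packaged as a function, the <code>res.data</code> trick looks like this. A sketch assuming, as in the question, that A contains no stored zeros, so every stored value gets mapped:

```python
import numpy as np
from scipy import sparse

def index_sparse(A, B):
    """Map every stored value v of sparse A to B[0, v], without a Python loop."""
    Ac = sparse.coo_matrix(A)
    res = Ac.copy()                            # same row/col pattern as A
    Bc = sparse.csr_matrix(B)
    res.data[:] = Bc[0, Ac.data].toarray()[0]  # vectorised lookup into B
    return res

A = np.array([[1, 2, 3], [4, 5, 6]])
B = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 0])
out = index_sparse(A, B).toarray()
```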
1
2016-08-04T20:53:45Z
[ "python", "arrays", "numpy", "scipy" ]
What exactly does ./configure --enable-shared do during python altinstall?
38,772,946
<p>When I altinstall python 2.7.12 with </p> <blockquote> <p>./configure --prefix=/opt/python --enable-shared</p> </blockquote> <p>it comes up as python 2.7.5 (system default python)</p> <p>But without </p> <blockquote> <p>--enable-shared</p> </blockquote> <p>it comes up as 2.7.12, what am I missing?</p> <p>This is on RHEL 7.2</p> <hr> <p>This is not a pathing issue:</p> <p>Without --enable-shared</p> <blockquote> <p>[root@myrig ~]# /opt/python/bin/python2.7 -V</p> <p>Python 2.7.12</p> </blockquote> <p>With --enable-shared</p> <blockquote> <p>[root@myrig ~]# /opt/python/bin/python2.7 -V </p> <p>Python 2.7.5</p> </blockquote>
0
2016-08-04T16:37:34Z
38,773,027
<p>I'm not sure why the version number is different, but Graham Dumpleton says at <a href="https://github.com/docker-library/python/issues/21" rel="nofollow">this</a> website that "When running configure, you should be supplying the --enable-shared option to ensure that shared libraries are built for Python. By not doing this you are preventing any application which wants to use Python as an embedded environment from working."</p>
1
2016-08-04T16:42:33Z
[ "python", "python-2.7" ]
38,781,440
<p>Compiling python like this fixed my issue:</p> <pre><code>./configure --enable-shared --prefix=/opt/python LDFLAGS=-Wl,-rpath=/opt/python/lib </code></pre> <p>Courtesy Ned Deily:</p> <p>The problem is, that on most Unix systems (with the notable exception of Mac OS X), the path to shared libraries is not an absolute path. So, if you install Python in a non-standard location, which is the right thing to do so as not to interfere with a system Python of the same version, you will need to configure in the path to the shared library or supply it via an environment variable at run time, like LD_LIBRARY_PATH. You may be better off avoiding --enable-shared; it's easy to run into problems like this with it.</p> <p>Ref: <a href="https://bugs.python.org/issue27685" rel="nofollow">https://bugs.python.org/issue27685</a></p>
0
2016-08-05T05:00:46Z
[ "python", "python-2.7" ]
Looping help needed in python pandas for calculating Precision and Recall from confusion matrix
38,773,087
<p>I have a confusion matrix for 2 classes, with pre-calculated totals, in a pandas dataframe format:</p> <pre><code>   Actual_class  Predicted_class_0  Predicted_class_1  Total
0             0                 39                 73    112
1             1                 52                561    613
2           All                 91                634    725
</code></pre> <p>I need to calculate precision and recall using a loop, as I need a general-case solution for more classes.</p> <p>Precision for class 0 would be 39/91 and for class 1 would be 561/634.<br> Recall for class 0 would be 39/112 and for class 1 would be 561/613.</p> <p>So I need to iterate over the diagonal and the totals to get the following results:</p> <pre><code>   Actual_class  Predicted_class_0  Predicted_class_1  Total  Precision  Recall
0             0                 39                 73    112        43%     35%
1             1                 52                561    613        88%     92%
2           All                 91                634    725
</code></pre> <p>The totals (the All row and the Total column) will be removed afterwards, so there is no point in calculating them.</p> <p>I tried the following code, but it doesn't go by the diagonal and loses the data for class 0:</p> <pre><code>cols = [c for c in cross_tab.columns if c.lower()[:4] == 'pred']
for c in cols:
    cross_tab["Precision"] = cross_tab[c]/cross_tab[c].iloc[-1]
for c in cols:
    cross_tab["Recall"] = cross_tab[c]/cross_tab['Total']
</code></pre> <p>I'm a novice to pandas matrix operations and really need your help.</p> <p>I'm sure there is a way to proceed without pre-calculating the totals.</p> <p>Thank you very much!!!</p>
0
2016-08-04T16:45:39Z
38,776,214
<p>I found a solution using numpy's diagonal:</p> <pre><code>import numpy as np

cols = [c for c in cross_tab.columns if c.lower()[:4] == 'pred' or c == 'Total']
denomPrecision = []
for c in cols:
    denomPrecision.append(cross_tab[c].iloc[-1])
diag = np.diagonal(cross_tab.values, 1)
cross_tab["Precision"] = np.round(diag.astype(float)/denomPrecision*100, 1)
cross_tab["Recall"] = np.round(diag.astype(float)/cross_tab.Total*100, 1)
</code></pre>
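For the general n-class case, the loop and the pre-calculated totals can both be avoided: the diagonal divided by the column sums gives precision, and divided by the row sums gives recall. A sketch with plain NumPy, where <code>cm</code> holds just the four count cells from the question, without the All/Total margins:

```python
import numpy as np

# Raw 2x2 confusion matrix: rows = actual class, columns = predicted class.
cm = np.array([[39, 73],
               [52, 561]], dtype=float)

diag = np.diag(cm)                       # correctly classified counts
precision = 100 * diag / cm.sum(axis=0)  # divide by predicted-class totals
recall = 100 * diag / cm.sum(axis=1)     # divide by actual-class totals
```

This works unchanged for any number of classes, since <code>diag</code>, the column sums, and the row sums all scale with the matrix.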
0
2016-08-04T19:49:03Z
[ "python", "pandas", "matrix" ]
Get normalised value counts weighted by another column?
38,773,095
<p>I have a dataframe like this in Pandas:</p> <pre><code>df = pd.DataFrame({
    'org': ['A1', 'B1', 'A1', 'B2'],
    'DIH': [True, False, True, False],
    'Quantity': [10, 20, 10, 20],
    'Items': [1, 2, 3, 4]
})
</code></pre> <p>Now I want to get the value counts and modal value of <code>Quantity</code>, but weighted by the number of <code>Items</code>.</p> <p>So I know that I can do</p> <pre><code>df.groupby('Quantity').agg({'Items': 'sum'}).sort_values('Items', ascending=False)
</code></pre> <p>And get this:</p> <pre><code>Quantity  Items
20            6
10            4
</code></pre> <p>But how do I get this as a percentage value, like this?</p> <pre><code>Quantity  Items
20           60
10           40
</code></pre>
0
2016-08-04T16:46:06Z
38,773,385
<p>This worked for me</p> <pre><code>df.groupby('Quantity').agg({'Items': 'sum'}).sort_values('Items', ascending=False)/df['Items'].sum()*100 </code></pre>
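The same normalisation can be written as a single pandas pipeline and checked end to end. A sketch rebuilding the question's frame (assuming pandas is available):

```python
import pandas as pd

df = pd.DataFrame({'org': ['A1', 'B1', 'A1', 'B2'],
                   'DIH': [True, False, True, False],
                   'Quantity': [10, 20, 10, 20],
                   'Items': [1, 2, 3, 4]})

# Weighted "value counts": sum Items per Quantity, then normalise to percent.
weighted = df.groupby('Quantity')['Items'].sum()
percent = (100 * weighted / weighted.sum()).sort_values(ascending=False)
```

The first entry of <code>percent</code> is then the weighted modal value of <code>Quantity</code>.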
2
2016-08-04T17:02:23Z
[ "python", "pandas" ]
38,773,396
<p>Just add one more line to your code:</p> <pre><code>df2 = df.groupby('Quantity').agg({'Items': 'sum'}).sort_values('Items', ascending=False)
df2['Items'] = (df2['Items']*100)/df2['Items'].sum()
print (df2)
</code></pre> <p>Output:</p> <pre><code>          Items
Quantity
20         60.0
10         40.0
</code></pre>
0
2016-08-04T17:02:42Z
[ "python", "pandas" ]
38,773,524
<p>Try this instead (one line):</p> <pre><code>df.groupby('Quantity').agg({'Items': 'sum'}).sort_values('Items', ascending=False).apply(lambda x: 100*x/float(x.sum()))
</code></pre>
0
2016-08-04T17:09:16Z
[ "python", "pandas" ]
Flask contexts (application and request) vs thread-local variables
38,773,099
<p><a href="http://flaskbook.com/" rel="nofollow">Flask Web Development</a> says:</p> <pre><code>from flask import request

@app.route('/')
def index():
    user_agent = request.headers.get('User-Agent')
    return '&lt;p&gt;Your browser is %s&lt;/p&gt;' % user_agent
</code></pre> <blockquote> <p>Note how in this view function <code>request</code> is used as if it was a global variable. In reality, <code>request</code> cannot be a global variable if you consider that in a multithreaded server the threads are working on different requests from different clients at the same time, so each thread needs to see a different object in request. Contexts enable Flask to make certain variables globally accessible to a thread without interfering with the other threads.</p> </blockquote> <p>Understandable, but why not simply make <code>request</code> a <a href="http://stackoverflow.com/questions/1408171/thread-local-storage-in-python">thread-local variable</a>? Under the hood, what exactly is <code>request</code>, and how is it different from a thread-local variable?</p>
1
2016-08-04T16:46:14Z
38,774,237
<p>This was simply a design decision by Armin (the author of Flask). You could indeed rewrite Flask to operate as a thread-local, but that was not what he wanted to do here.</p> <p>The idea of Flask (in general) is to keep things as simple as possible, and abstract a lot of thinking away. This is why a lot of Flask helpers are implemented as 'global variables': you don't really have to think about the meaning behind it, because each global is bound to the incoming request.</p>
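The mechanics behind "one global name, different value per thread" can be sketched in a few lines of pure Python. This is a simplified thread-local proxy, not Flask's actual implementation (Flask builds on werkzeug's context locals, which also cope with greenlets and with explicitly pushed/popped contexts), but it shows the core idea:

```python
import threading

class LocalProxy(object):
    """Simplified per-thread 'global': each thread sees its own value."""
    def __init__(self):
        self._local = threading.local()

    def set(self, value):
        self._local.value = value    # stored in this thread's namespace only

    def get(self):
        return self._local.value

request_proxy = LocalProxy()   # one module-level name, many per-thread values
results = {}

def handle(name):
    request_proxy.set(name)              # "bind the request" for this thread
    results[name] = request_proxy.get()  # never sees another thread's value

threads = [threading.Thread(target=handle, args=(n,)) for n in ('a', 'b', 'c')]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Every thread reads back exactly the value it wrote, even though all of them go through the same module-level <code>request_proxy</code>.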
1
2016-08-04T17:53:23Z
[ "python", "python-3.x", "flask", "python-multithreading" ]
How to set up Flask on pythonanyhwere based on flask megatutorial
38,773,135
<p>I am currently developing an application. this web app has its <strong>own domain</strong>. when initially created i set up the domain and the registrar using the cname and it succesfully displayed after a couple of hours <strong>"this is a flask app..."</strong> something like that.</p> <p>i decided to follow the examples of Mr Grinberg in his book (fully functional on localhost). So i cloned my personal repository to pythonanywhere and ran the following commands.</p> <pre><code>python manage.py db init python manage.py db upgrade python manage.py migrate </code></pre> <p>every thing is ok so far. and i checked out the mysql database using <strong>mysql workbench</strong>.</p> <p>Now comes my issue.</p> <p>when i run <code>python manage.py runserver</code></p> <p>it throws me the following error.</p> <pre><code>/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages /flask_sqlalchemy/__init__.py:800: UserWarning: SQLALCHEMY_TRACK_MODIFICA TIONS adds significant overhead and will be disabled by default in the future. Set it to True to suppress this warning. warnings.warn('SQLALCHEMY_TRACK_MODIFICATIONS adds significant overhead and will be disabled by default in the future. 
Set it to True to su ppress this warning.') Traceback (most recent call last): File "manage.py", line 20, in &lt;module&gt; manager.run() File "/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages/flask_script/__init__.py", line 412, in run result = self.handle(sys.argv[0], sys.argv[1:]) File "/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages/flask_script/__init__.py", line 383, in handle res = handle(*args, **config) File "/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages/flask_script/commands.py", line 425, in __call__ **self.server_options) File "/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages/flask/app.py", line 843, in run run_simple(host, port, self, **options) File "/home/username/.virtualenvs/webapp/local/lib/python2.7/site-packages/werkzeug/serving.py", line 677, in run_simple s.bind((hostname, port)) File "/usr/lib/python2.7/socket.py", line 224, in meth return getattr(self._sock,name)(*args) socket.error: [Errno 98] Address already in use </code></pre> <p>i tried disabling the wsgi.py file (commenting everything out) still the same.</p> <p>Things to know:</p> <ol> <li>i have a paid acount.</li> <li>this is the second webapp on pythonanywhere. (the first one is not modeled based on the tutorial and works just fine) </li> </ol> <p><strong>EDIT</strong></p> <p>i changed the port from 5000 to 9000. and it runs in the console. but i cant visit my site. 
Should I comment out the WSGI file?</p> <p>Currently it looks like this:</p> <pre><code>import sys # # add your project directory to the sys.path project_home = u'/home/username/e_orders/e_orders' if project_home not in sys.path: sys.path = [project_home] + sys.path # # import flask app but need to call it "application" for WSGI to work from manager import app as application </code></pre> <p><strong>manage.py</strong></p> <pre><code>import os from app import create_app, db from app.models import User from flask_script import Manager, Shell, Server from flask_migrate import Migrate, MigrateCommand app = create_app(os.getenv('FLASK_CONFIG') or 'default') manager = Manager(app) migrate = Migrate(app, db) def make_shell_context(): return dict(app=app, db=db, User=User) manager.add_command('shell', Shell(make_context=make_shell_context)) manager.add_command('db', MigrateCommand) manager.add_command('runserver', Server(port=9000)) if __name__ == '__main__': manager.run() </code></pre> <p><strong>EDIT 2</strong></p> <p>I have the following error with the WSGI configuration above.</p> <p><strong>errorlog</strong></p> <pre><code>ImportError: No module named manager 2016-08-04 17:42:39,589 :Error running WSGI application Traceback (most recent call last): File "/bin/user_wsgi_wrapper.py", line 154, in __call__ app_iterator = self.app(environ, start_response) File "/bin/user_wsgi_wrapper.py", line 170, in import_error_application raise e ImportError: No module named manager </code></pre>
2
2016-08-04T16:48:18Z
38,773,714
<p>PythonAnywhere dev here.</p> <p>If you run a Flask app from a console on PythonAnywhere, it's not actually accessible from anywhere else. It may well run, but nothing will route any requests to it. So there's no need to run anything from the console (unless you're just testing for syntax errors, I guess).</p> <p>Instead, you need to create a web app on the "Web" tab -- it looks like you've already done that. This then routes using the WSGI file that you seem to have discovered.</p> <p>If you've done all that, then when you visit the domain that appears on the "Web" tab (normally something like <em>yourusername</em><code>.pythonanywhere.com</code>) then you should see your site. If you get an error, then check out the error logs (also linked from the "Web" tab), which should help you debug.</p> <p>[edit: added affiliation]</p>
2
2016-08-04T17:20:01Z
[ "python", "mysql", "flask", "pythonanywhere", "flask-migrate" ]
How do I edit this algorithm so that it creates combinations without replacement or reusing an element?
38,773,201
<p>I'm very new to computer science - this is my first program. I wrote a python program that takes data in two columns from an excel sheet "Labels":"Values" and reconfigures them into lists of Labels whose respective values sum to 30. Each label is unique and only occurs once, but the different labels can have the same value. </p> <p>However, when I first applied my program, the runtime was nearly 30 minutes because the algorithm was creating every possible combination of Labels. Obviously given 50 labels with values less that 10, that is a lot of possible combinations.</p> <p>I wanted some help editing my current algorithm so that it creates unique groups. Once a Label is used, I don't want it to appear in any other group. </p> <p>Currently my code looks like this:</p> <pre><code>def combinator(X): #this function pulls two lists together to create a dictionary from xlrd import open_workbook wb = open_workbook(X) L = [] P = [] for s in wb.sheets(): for col in range(s.ncols): for row in range(s.nrows): if row !=0: l = s.cell(row,0).value L.append(l) p = s.cell(row,1).value P.append(p) Lots = L[0:(len(L)/2)] Pallets = P[0:(len(L)/2)] D = dict(zip(Lots,Pallets)) return D def grouper(D, N):#this finds combinations of 30 and puts them in groups from itertools import combinations groups_of_thirty = [] for l in range(0, len(D)): for y in combinations(D.items(), l): keys = [] sum = 0 for key, value in y: keys.append(key) sum += value if sum == N: groups_of_thirty.append(keys) flattened = [v for flat in groups_of_thirty for v in flat] K = D.keys() s = set(K) remainder = list(s - set(flattened)) list(groups_of_thirty) return groups_of_thirty, \ remainder def transposer(G):#this exports the final list into each cell and writes the spreadsheet as a csv to a different directory. 
import os os.chdir(Q) import csv with open(O, "wb") as f: writer = csv.writer(f) str(writer.writerows(G)) return transposer(grouper(combinator(I),N)) </code></pre> <p>Would appreciate any help with this - logic or pseudocode preferred but some pointers with uncommon syntax would be helpful since I'm a novice. </p> <p>Thank you!</p> <p>Edit: Here is a screenshot of sample data in an excel sheet.</p> <p><a href="http://i.stack.imgur.com/L4CHx.gif" rel="nofollow">Screenshot of Excel sheet with Input and Desired Output</a></p>
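As an editorial aside on the "no reuse" requirement described above: the idea of consuming each label once can be sketched in pure Python. Note this is a hypothetical sketch, not the question's code (`greedy_groups` and its argument names are made up), and a greedy strategy may not find the best possible partition:

```python
from itertools import combinations

def greedy_groups(values, target):
    """Greedily pull out disjoint groups of labels whose values sum to target.

    `values` is a dict {label: value}; once a label is used it is deleted
    from the pool, so it can never appear in a second group.
    """
    pool = dict(values)
    groups = []
    found = True
    while found:
        found = False
        for size in range(1, len(pool) + 1):
            for combo in combinations(pool.items(), size):
                if sum(v for _, v in combo) == target:
                    groups.append([k for k, _ in combo])
                    for k, _ in combo:
                        del pool[k]  # consume the labels so they can't be reused
                    found = True
                    break
            if found:
                break
    return groups, list(pool)  # groups plus the unused remainder

groups, remainder = greedy_groups({'A': 10, 'B': 20, 'C': 15, 'D': 15}, 30)
# groups -> [['A', 'B'], ['C', 'D']], remainder -> []
```

Restarting the search after each hit keeps the combinatorial explosion down, because every found group shrinks the pool that `combinations` has to enumerate.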
1
2016-08-04T16:51:44Z
38,774,201
<p>Here's a linear programming approach that I mentioned in the comments:</p> <pre><code>import pandas as pd from pulp import * import random random.seed(0) num_items = 50 labels = ['Label{0:02d}'.format(i) for i in range(num_items)] values = [random.randint(5, 30) for _ in range(num_items)] df = pd.DataFrame({'Value': values}, index=labels) feasible_solutions = [] var = LpVariable.dicts('x', labels, 0, 1, LpInteger) prob = pulp.LpProblem('Fixed Sum', pulp.LpMinimize) prob += 0 prob += lpSum([var[label] * df.at[label, 'Value'] for label in labels]) == 30 while prob.solve() == 1: current_solution = [label for label in labels if value(var[label])] feasible_solutions.append(current_solution) prob += lpSum([var[label] for label in current_solution]) &lt;= len(current_solution) - 1 </code></pre> <p><code>labels</code> is a regular list that holds the labels and <code>values</code> are random integers between 5 and 30. It starts with an empty set.</p> <p>One of the most important elements in this code is <code>var</code>. It is the decision variable that takes either value 0 or 1. When you include a specific label in the solution, it takes value 1, otherwise it is equal to zero.</p> <p>For example, assume you have this list <code>[12, 7, 5, 13]</code>. Here, for a possible solution <code>var00</code> (12), <code>var02</code> (5) and <code>var03</code> (13) can take value 1. </p> <p>The next line creates a linear programming problem. We specify an arbitrary objective function (<code>prob += 0</code>) because we are not minimizing or maximizing any function - we are trying to find all possible solutions.</p> <p>These solutions should satisfy the constraint in the next line. Here, <code>var[label]</code> is a binary decision variable as I mentioned. If it's included in the solution, it takes value 1 and 1 is multiplied by its value. So we are only summing the values of the items included in the set. Their total should be equal to 30. 
Here, <code>prob.solve()</code> would generate a feasible solution but since you want all feasible solutions, you call <code>prob.solve()</code> in a while loop. As long as it can return a feasible solution (==1) continue the loop. But in each iteration we should exclude the current solution so that our search space is reduced. It is done by the last line. For example, if in the current solution we have <code>var00</code>, <code>var04</code> and <code>var07</code>, their sum should be smaller than 3 for the subsequent problems (all three shouldn't be 1 at the same time). If you run this, it will generate all possible solutions for your problem. </p> <p>Here's the first five:</p> <pre><code>feasible_solutions[:5] Out: [['Label00', 'Label47'], ['Label17', 'Label46'], ['Label42', 'Label45'], ['Label03', 'Label13', 'Label47'], ['Label02', 'Label03', 'Label48']] </code></pre> <p>And these are their values:</p> <pre><code>Out: [[17, 13], [9, 21], [11, 19], [6, 11, 13], [18, 6, 6]] </code></pre>
3
2016-08-04T17:50:25Z
[ "python", "excel", "algorithm", "python-2.7", "xlrd" ]
Slowly scroll in python selenium javascript
38,773,273
<pre><code>url = correct_url(url) browser = webdriver.Chrome() browser.get(url) browser.find_element_by_xpath('//*[@title="New chat"]').click() drawer_body = browser.find_elements_by_class_name('drawer-body') browser.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', drawer_body) </code></pre> <blockquote> <p>Get the 'div' element; this has a list and will be scrolled</p> </blockquote> <p>drawer_body = browser.find_elements_by_class_name('drawer-body') </p> <blockquote> <p>The line below scrolls very fast and reaches the bottom, but this does not load the data. Is there any way I can scroll slowly inside the drawer_body element?</p> </blockquote> <p>browser.execute_script('arguments[0].scrollTop = arguments[0].scrollHeight', drawer_body)</p> <p>I am trying to implement the script below using Python and Selenium. <a href="http://ctrlq.org/code/19966-whatsapp-phone-numbers" rel="nofollow">http://ctrlq.org/code/19966-whatsapp-phone-numbers</a></p>
0
2016-08-04T16:55:46Z
38,849,968
<p>Most browsers support <em>Spacebar</em> key as a keyboard shortcut for scrolling page down. Using this feature, you can do the following:</p> <ol> <li>Visit page</li> <li>Parse what is loaded so far</li> <li>Simulate press spacebar using Selenium</li> <li>Wait 2-3 seconds till more data is loaded</li> <li>Continue</li> </ol> <p>One more hint: to avoid being stuck in a forever loop, you need to implement a check if the newly loaded data differs from the one before pressing spacebar, otherwise the script should quit, as there is no new data to parse.</p>
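The stop condition hinted at in the last paragraph of this answer can be sketched independently of Selenium. `load_more` and `get_content` below are hypothetical stand-ins for "press Spacebar" and "read the rendered page"; the loop stops as soon as a load produces nothing new:

```python
def scroll_until_stable(get_content, load_more, max_rounds=100):
    """Keep triggering loads until the content stops changing."""
    seen = get_content()
    for _ in range(max_rounds):
        load_more()            # e.g. send a Spacebar keypress via Selenium
        current = get_content()
        if current == seen:    # nothing new was loaded -> we are done
            return current
        seen = current
    return seen                # safety cap, in case the page never settles

# Simulated page that grows three times, then stops growing.
page = []
chunks = iter([[1], [2], [3]])
def fake_load():
    page.extend(next(chunks, []))

result = scroll_until_stable(lambda: list(page), fake_load)
# result -> [1, 2, 3]
```

With a real browser you would compare something cheap, such as the page's scroll height or the number of parsed rows, rather than the whole DOM.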
0
2016-08-09T11:52:17Z
[ "javascript", "python", "selenium", "web-scraping" ]
django 1.10 custom user model and user manager
38,773,279
<p>I'm trying to create a custom user model and its manager</p> <p><a href="https://gist.github.com/bitgandtter/9fc0de497a3cc13440e24a86ba6626b1" rel="nofollow">gist with code</a></p> <p>But for some reason, when I try to create a user with</p> <p><code>python manage.py createsuperuser</code></p> <p>it raises an error</p> <pre><code>django.db.utils.OperationalError: (1054, "Unknown column 'jmessages_customer.last_login' in 'field list'") </code></pre> <p>It's like it's not using my custom manager at the moment</p>
0
2016-08-04T16:56:14Z
38,774,184
<p>The solution was to specify the missing attribute on the model</p> <pre><code>last_login = None </code></pre> <p>the same as</p> <pre><code>is_active = True </code></pre> <p>Indeed, those need to be handled correctly in the future, but at least this issue is fixed.</p>
0
2016-08-04T17:49:43Z
[ "python", "django" ]
simplest python equivalent to R's gsub
38,773,379
<p>Is there a simple/one-line python equivalent to R's <code>gsub</code> function?</p> <pre><code>strings = c("Important text, !Comment that could be removed", "Other String") gsub("(,[ ]*!.*)$", "", strings) # [1] "Important text" "Other String" </code></pre>
0
2016-08-04T17:01:56Z
38,773,433
<p>For a single string:</p> <pre><code>import re string = "Important text, !Comment that could be removed" re.sub("(,[ ]*!.*)$", "", string) </code></pre> <p>Since you updated your question to be a list of strings, you can use a list comprehension.</p> <pre><code>import re strings = ["Important text, !Comment that could be removed", "Other String"] [re.sub("(,[ ]*!.*)$", "", x) for x in strings] </code></pre>
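When the same pattern is applied to many strings, compiling it once gets a little closer in spirit to R's vectorised `gsub` call:

```python
import re

strings = ["Important text, !Comment that could be removed", "Other String"]

# Compile once, reuse the same pattern object for every element.
comment = re.compile(r"(,[ ]*!.*)$")
cleaned = [comment.sub("", s) for s in strings]
# cleaned -> ["Important text", "Other String"]
```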
1
2016-08-04T17:04:33Z
[ "python", "python-2.7" ]
Need to scrap a table which is loaded through ajax using python(selenium)
38,773,397
<p>I have a <a href="https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx" rel="nofollow" title="page">page</a> that has a table (table id="ctl00_ContentPlaceHolder_ctl00_ctl00_GV", class="GridListings") that I need to scrape. I usually use BeautifulSoup &amp; urllib for this, but in this case the problem is that the table takes some time to load, so it isn't captured when I try to fetch it with BS. I cannot use PyQt4, dryscrape or windmill because of some installation issues, so the only possible way is to use Selenium/PhantomJS. I tried the following, still with no success:</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.PhantomJS() driver.get(url) wait = WebDriverWait(driver, 10) table = wait.until(EC.presence_of_element_located(By.CSS_SELECTOR, 'table#ctl00_ContentPlaceHolder_ctl00_ctl00_GV')) </code></pre> <p>The above code doesn't give me the desired contents of the table. How do I go about achieving this? </p>
2
2016-08-04T17:02:42Z
38,773,801
<p>If you want to scrape something, it's a good idea to first install a web debugger (<a href="http://getfirebug.com/" rel="nofollow">Firebug</a> for <a href="https://www.mozilla.org/en-US/firefox/new/" rel="nofollow">Mozilla Firefox</a>, for example) to watch how the website you want to scrape works.</p> <p>Next, you need to replicate the process the website uses to talk to its back office.</p> <p>As you said, the content that you want to scrape is being loaded asynchronously (only when the document is ready)</p> <p>Assuming the debugger is running and you have refreshed the page, you will see the following request on the network tab:</p> <p>POST <a href="https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx" rel="nofollow" title="page">https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx</a></p> <p>The final process flow to reach your goal will be:</p> <ul> <li>1/ Use the <a href="http://docs.python-requests.org/en/master/" rel="nofollow">requests python module</a></li> <li>2/ Open a requests session to the website's index page (with cookie handling)</li> <li>3/ Scrape all the inputs of the specific POST form</li> <li>4/ Build a POST parameter DICT containing all input name &amp; value fields scraped in the previous step + some specific fixed params</li> <li>5/ POST the request (with the required data) </li> <li>6/ Finally, use the <a href="https://pypi.python.org/pypi/beautifulsoup4/4.3.2" rel="nofollow">BS4 module</a> (as usual) to soup the returned html and scrape your data</li> </ul> <p>Please see below a working code:</p> <pre><code>#!/usr/bin/env python # -*- coding: UTF-8 -*- from bs4 import BeautifulSoup import requests base_url="https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx" #create requests session s = requests.session() #get index page r=s.get(base_url) #soup page bs=BeautifulSoup(r.text) #extract FORM html form_soup= 
bs.find('form',{'name':'aspnetForm'}) #extracting all inputs input_div = form_soup.findAll("input") #build the data parameters for POST request #we add some required &lt;fixed&gt; data parameters for post data={ '__EVENTARGUMENT':'LISTINGS;0', '__EVENTTARGET':'ctl00$ContentPlaceHolder$ctl00$ctl00$RadAjaxPanel_GV', '__EVENTVALIDATION':'/wEWGwKis6fzCQLDnJnSDwLq4+CbDwK9jryHBQLrmcucCgL56enHAwLRrPHhCgKDk6P+CwL1/aWtDQLm0q+gCALRvI2QDAKch7HjBAKWqJHWBAKil5XsDQK58IbPAwLO3dKwCwL6uJOtBgLYnd3qBgKyp7zmBAKQyTBQK9qYAXAoieq54JAuG/rDkC1djKyQMC1qnUtgoC0OjaygUCv4b7sAhfkEODRvsa3noPfz2kMsxhAwlX3Q==' } #we add some &lt;dynamic&gt; data parameters for input_d in input_div: try: data[ input_d['name'] ] =input_d['value'] except: pass #skip unused input field #post request r2=s.post(base_url,data=data) #write the result with open("post_result.html","w") as f: f.write(r2.text.encode('utf8')) </code></pre> <p>Now, please get a look at "post_result.html" content and you will find the data !</p> <p>Regards</p>
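Steps 3–4 of the answer above (scrape the form's inputs and turn them into a POST dict) can even be illustrated with only the standard library. The HTML below is a made-up miniature of an ASP.NET form, not the real page:

```python
from html.parser import HTMLParser

class InputCollector(HTMLParser):
    """Collect the name/value pair of every <input> on the page."""
    def __init__(self):
        super().__init__()
        self.data = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if "name" in attrs:                      # skip unnamed fields
                self.data[attrs["name"]] = attrs.get("value", "")

html = (
    '<form name="aspnetForm">'
    '<input type="hidden" name="__VIEWSTATE" value="abc123"/>'
    '<input type="hidden" name="__EVENTVALIDATION" value="xyz789"/>'
    '</form>'
)
collector = InputCollector()
collector.feed(html)

post_data = dict(collector.data)  # merge the fixed params into this before POSTing
# post_data -> {'__VIEWSTATE': 'abc123', '__EVENTVALIDATION': 'xyz789'}
```

In the real answer BeautifulSoup does this extraction; the point is only that the hidden `__VIEWSTATE`-style fields must be harvested fresh from the page on every session.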
0
2016-08-04T17:26:36Z
[ "python", "selenium", "selenium-webdriver" ]
Need to scrap a table which is loaded through ajax using python(selenium)
38,773,397
<p>I have a <a href="https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx" rel="nofollow" title="page">page</a> that has a table (table id= "ctl00_ContentPlaceHolder_ctl00_ctl00_GV" class="GridListings" )i need to scrap. I usually use BeautifulSoup &amp; urllib for it,but in this case the problem is that the table takes some time to load ,so it isnt captured when i try to fetch it using BS. I cannot use PyQt4,drysracpe or windmill because of some installation issues,so the only possible way is to use Selenium/PhantomJS I tried the following,still no success:</p> <pre><code>from selenium.webdriver.common.by import By from selenium.webdriver.support.wait import WebDriverWait from selenium.webdriver.support import expected_conditions as EC driver = webdriver.PhantomJS() driver.get(url) wait = WebDriverWait(driver, 10) table = wait.until(EC.presence_of_element_located(By.CSS_SELECTOR, 'table#ctl00_ContentPlaceHolder_ctl00_ctl00_GV')) </code></pre> <p>The above code doesnt give me the desired contents of the table. How do i go about achieveing this??? </p>
2
2016-08-04T17:02:42Z
38,775,553
<p>You can get the data using <em>requests</em> and <em>bs4</em>; with almost all, if not all, ASP sites there are a few post params that always need to be provided, like <em>__EVENTTARGET</em>, <em>__EVENTVALIDATION</em> etc.:</p> <pre><code>from bs4 import BeautifulSoup import requests data = {"__EVENTTARGET": "ctl00$ContentPlaceHolder$ctl00$ctl00$RadAjaxPanel_GV", "__EVENTARGUMENT": "LISTINGS;0", "ctl00$ContentPlaceHolder$ctl00$ctl00$ctl00$hdnProductID": "139", "ctl00$ContentPlaceHolder$ctl00$ctl00$hdnProductID": "139", "ctl00$ContentPlaceHolder$ctl00$ctl00$drpSortField": "Listing Number", "ctl00$ContentPlaceHolder$ctl00$ctl00$drpSortDirection": "A-Z, Low-High", "__ASYNCPOST": "true"} </code></pre> <p>And for the actual post, we need to add a few more values to our post data:</p> <pre><code>post = "https://seahawks.strmarketplace.com/Charter-Seat-Licenses/Charter-Seat-Licenses.aspx" with requests.Session() as s: s.headers.update({"User-Agent":"Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:47.0) Gecko/20100101 Firefox/47.0"}) soup = BeautifulSoup(s.get(post).content) data["__VIEWSTATEGENERATOR"] = soup.select_one("#__VIEWSTATEGENERATOR")["value"] data["__EVENTVALIDATION"] = soup.select_one("#__EVENTVALIDATION")["value"] data["__VIEWSTATE"] = soup.select_one("#__VIEWSTATE")["value"] r = s.post(post, data=data) soup2 = BeautifulSoup(r.content) table = soup2.select_one("div.GridListings") print(table) </code></pre> <p>You will see the table printed when you run the code.</p>
2
2016-08-04T19:12:40Z
[ "python", "selenium", "selenium-webdriver" ]
Raspberry pi - arduino Serial Communication
38,773,451
<p>I need the Raspberry Pi to communicate with the Arduino over a serial connection. I'm using the same baud rate on both sides, but I'm still unable to make it work.</p> <p>This is my Arduino code:</p> <pre><code>int ledPinSpeedOne = 11; int ledPinSpeedTwo = 12; int ledPinSpeedThree = 13; char inbyte; void setup() { Serial.begin(9600); pinMode(ledPinSpeedOne, OUTPUT); pinMode(ledPinSpeedTwo, OUTPUT); pinMode(ledPinSpeedThree, OUTPUT); digitalWrite(ledPinSpeedOne, LOW); digitalWrite(ledPinSpeedTwo, LOW); digitalWrite(ledPinSpeedThree, LOW); } void loop() { if (Serial.available() &gt; 0) { delay(100); inbyte=Serial.read(); if ( inbyte == '3' ) functionSpeedTwo(); } } //functionSpeedTwo void functionSpeedTwo() { digitalWrite(ledPinSpeedOne, LOW); digitalWrite(ledPinSpeedTwo, HIGH); digitalWrite(ledPinSpeedThree, LOW); } </code></pre> <p>And here is what I have on the Raspberry Pi side,</p> <pre><code>#!/usr/bin/python import serial ser = serial.Serial('/dev/ttyACM0',9600) ser.write('3') </code></pre> <p>This sometimes works and sometimes doesn't. Can anyone help me solve this problem?</p>
1
2016-08-04T17:05:45Z
38,873,836
<p>I have solved my problem. There was a time gap before the value could be accessed; I just had to add a while loop in order to get the value. In my Arduino code I have added a delay at line 24.</p>
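The Python side can apply the same idea: instead of reading immediately, poll until data is available or a timeout passes. `wait_for` below is a hypothetical helper, not part of pyserial; with pyserial the predicate would be something like `lambda: ser.in_waiting > 0`:

```python
import time

def wait_for(predicate, timeout=2.0, interval=0.01):
    """Poll `predicate` until it is truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)    # don't busy-spin the CPU
    return bool(predicate())    # one last check at the deadline

# Simulate serial data that only becomes available after a short delay.
buffer = []
start = time.monotonic()
def data_ready():
    if time.monotonic() - start > 0.05:
        buffer.append(b"3")
    return bool(buffer)

assert wait_for(data_ready, timeout=1.0)  # succeeds once the "data" arrives
```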
0
2016-08-10T12:45:15Z
[ "python", "arduino-uno", "serial-communication" ]
Undefined index: HTTP_ACCEPT_LANGUAGE using BeatifulSoup/Python
38,773,480
<p>I'm learning Python and I'm trying to parse a webpage made with PHP using BeautifulSoup. My problem is that my script shows this error:</p> <pre><code>&lt;div style="border:1px solid #990000;padding-left:20px;margin:0 0 10px 0;"&gt; &lt;h4&gt;A PHP Error was encountered&lt;/h4&gt; &lt;p&gt;Severity: Notice&lt;/p&gt; &lt;p&gt;Message: Undefined index: HTTP_ACCEPT_LANGUAGE&lt;/p&gt; &lt;p&gt;Filename: hooks/detecta_idioma.php&lt;/p&gt; &lt;p&gt;Line Number: 110&lt;/p&gt; &lt;/div&gt; </code></pre> <p>when I try to do this</p> <pre><code>html = urllib.urlopen(url).read() web = BeautifulSoup(html,'html.parser') print web etiquetas = web('a') </code></pre> <p>I thought this error happened because I was executing my script from the command line instead of using a web browser but, executing this script from Apache, I get the same error.</p> <p>Does anyone know how I can define that header for parsing the URL?</p>
1
2016-08-04T17:07:32Z
38,773,569
<p>Looks like the page requires you to have the <code>Accept-Language</code> header passed along with your request. Here is an example of how to do that with <a href="http://docs.python-requests.org/en/master/" rel="nofollow"><code>requests</code></a>:</p> <pre><code>import requests url = "my url" response = requests.get(url, headers={"Accept-Language": "en-US,en"}) html = response.content web = BeautifulSoup(html, 'html.parser') </code></pre>
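Since the question uses `urllib` rather than `requests`, the same header can be attached there too. No request is actually sent below; we only inspect the prepared `Request` object (the URL is a placeholder):

```python
from urllib.request import Request

req = Request(
    "http://example.com/page.php",
    headers={"Accept-Language": "en-US,en"},
)

# urllib normalises header names via str.capitalize(), hence "Accept-language".
assert req.get_header("Accept-language") == "en-US,en"
```

Passing `req` to `urlopen` would then send the header, so the PHP side finds `HTTP_ACCEPT_LANGUAGE` set.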
0
2016-08-04T17:11:50Z
[ "python", "html-parsing" ]
How to push notifications to Test Client in Flask-SocketIO?
38,773,484
<p>I'm trying to receive pushes from the server as a client; using my test client as follows:</p> <p>Client:</p> <pre><code>socket_client = socketio.test_client(app) @socketio.on('hit_client') def recieve_message(json_data): print("Server has called!") </code></pre> <p>Server:</p> <pre><code>socketio.emit('hit_client', 'Hi Client!') </code></pre> <p>The server should be pushing and calling the <code>hit_client</code> channel, but that isn't being fired. However, the <code>socket_client.get_received()</code> has the emitted data. I thought the whole point of WebSockets was bidirectional communication (i.e. pushing function triggers)!</p> <p>This is a very simple setup and it doesn't even seem to be working... Any help would be EXTREMELY appreciated. I've been slamming my head for hours.</p>
1
2016-08-04T17:07:45Z
38,777,691
<p>The test client is not a Socket.IO client. Its only purpose is to help you write unit tests for your Socket.IO server. It is similar in concept to Flask's test client for HTTP routes. It only makes sense to use it in unit tests.</p> <p>When the server emits something to the client, the test client will just store it and make it accessible in your test code via the <code>get_received</code> call. It will not fire any events, since that is not its intended purpose.</p> <p>If you want to implement a Socket.IO client in Python, there is a package for that: <a href="https://pypi.python.org/pypi/socketIO-client" rel="nofollow">https://pypi.python.org/pypi/socketIO-client</a>. With this package, you can write a Python script that connects to the Socket.IO server and can send and receive events.</p>
0
2016-08-04T21:28:55Z
[ "python", "sockets", "socket.io", "flask-socketio", "flask-sockets" ]
python capture URLError code
38,773,515
<p>I want to use Python to monitor a website that uses HTTPS. The problem is that the certificate on the website is invalid. I don't care about that, I just want to know that the website is running.</p> <p>My working code looks like this:</p> <pre><code>from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError req = Request("https://somedomain.com") try: response = urlopen(req) except HTTPError as e: print('server couldn\'t fulfill the request') print('error code: ', e.code) except URLError as e: print(e.args) else: print ('website ok') </code></pre> <p>that ends in URLError being called. The error code is 645.</p> <pre><code>C:\python&gt;python monitor443.py (SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)'),) </code></pre> <p>So, I'm trying to except code 645 as OK. I've tried this:</p> <pre><code>from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError req = Request("https://somedomain.com") try: response = urlopen(req) except HTTPError as e: print('server couldn\'t fulfill the request') print('error code: ', e.code) except URLError as e: if e.code == 645: print("ok") print(e.args) else: print ('website ok') </code></pre> <p>but get this error:</p> <pre><code>Traceback (most recent call last): File "monitor443.py", line 11, in &lt;module&gt; if e.code == 645: AttributeError: 'URLError' object has no attribute 'code' </code></pre> <p>how do I add this exception?</p>
0
2016-08-04T17:08:56Z
38,773,823
<p>Please have a look at the great <code>requests</code> package. It will simplify your life when doing http communication. See <a href="http://requests.readthedocs.io/en/master/" rel="nofollow">http://requests.readthedocs.io/en/master/</a>.</p> <pre><code>pip install requests </code></pre> <p>To skip certificate check, you would do something like this (note the <code>verify</code> parameter!):</p> <pre><code>requests.get('https://kennethreitz.com', verify=False) &lt;Response [200]&gt; </code></pre> <p>See the full documentation <a href="http://docs.python-requests.org/en/master/user/advanced/" rel="nofollow">here</a>.</p> <p>HTH</p>
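A small monitoring helper built on that idea might look like the sketch below. The fetch function is injected so the classification logic can be exercised without a network; the real-world variant shown in the docstring (`requests.get(url, verify=False, timeout=5)`) is the call this answer recommends:

```python
def site_status(fetch):
    """Classify a site check as 'ok', 'http-error', or 'down'.

    `fetch` is any zero-argument callable that returns an object with a
    `status_code` attribute, or raises on connection failure, e.g.:
        fetch = lambda: requests.get(url, verify=False, timeout=5)
    """
    try:
        response = fetch()
    except Exception:
        return "down"            # connection refused, timeout, DNS failure...
    if 200 <= response.status_code < 400:
        return "ok"
    return "http-error"

class FakeResponse:
    """Stand-in for requests.Response in this offline demo."""
    def __init__(self, status_code):
        self.status_code = status_code

print(site_status(lambda: FakeResponse(200)))   # ok
print(site_status(lambda: FakeResponse(500)))   # http-error
```

Because a bad certificate is swallowed by `verify=False`, only genuine outages end up in the `"down"` branch.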
1
2016-08-04T17:27:46Z
[ "python" ]
python capture URLError code
38,773,515
<p>I want to use Python to monitor a website that uses HTTPS. The problem is that the certificate on the website is invalid. I don't care about that, I just want to know that the website is running.</p> <p>My working code looks like this:</p> <pre><code>from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError req = Request("https://somedomain.com") try: response = urlopen(req) except HTTPError as e: print('server couldn\'t fulfill the request') print('error code: ', e.code) except URLError as e: print(e.args) else: print ('website ok') </code></pre> <p>that ends in URLError being called. The error code is 645.</p> <pre><code>C:\python&gt;python monitor443.py (SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)'),) </code></pre> <p>So, I'm trying to except code 645 as OK. I've tried this:</p> <pre><code>from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError req = Request("https://somedomain.com") try: response = urlopen(req) except HTTPError as e: print('server couldn\'t fulfill the request') print('error code: ', e.code) except URLError as e: if e.code == 645: print("ok") print(e.args) else: print ('website ok') </code></pre> <p>but get this error:</p> <pre><code>Traceback (most recent call last): File "monitor443.py", line 11, in &lt;module&gt; if e.code == 645: AttributeError: 'URLError' object has no attribute 'code' </code></pre> <p>how do I add this exception?</p>
0
2016-08-04T17:08:56Z
38,774,845
<p>I couldn't install the SSL library (egg_info error). This is what I ended up doing</p> <pre><code>from urllib.request import Request, urlopen from urllib.error import URLError, HTTPError def sendEmail(r): #send notification print('send notify') req = Request("https://somedomain.com") try: response = urlopen(req) except HTTPError as e: print('server couldn\'t fulfill the request') print('error code: ', e.code) sendEmail('server couldn\'t fulfill the request') except URLError as e: theReason=str(e.reason) #[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645) if theReason.find('CERTIFICATE_VERIFY_FAILED') == -1: sendEmail(theReason) else: print('website ok') else: print('website ok') </code></pre>
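The `str.find(...) == -1` test in this workaround is usually written with the `in` operator, which reads closer to the intent ("treat a certificate failure as the site being up"):

```python
reason = "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)"

# equivalent to: reason.find('CERTIFICATE_VERIFY_FAILED') != -1
if "CERTIFICATE_VERIFY_FAILED" in reason:
    status = "website ok (bad certificate only)"
else:
    status = "unexpected error"
```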
1
2016-08-04T18:28:13Z
[ "python" ]
Rabbit MQ python script. Socket closed when connection was open
38,773,522
<p>Need some help! While running a Python script that uses RabbitMQ RPC, I am getting a <code>Socket 104</code>, <code>Socket closed when connection was open</code> error. Below is the Python traceback and some code:</p> <pre><code>Traceback (most recent call last): File "./server.py", line 34, in &lt;module&gt; channel.start_consuming() File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1681, in start_consuming self.connection.process_data_events(time_limit=None) File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 656, in process_data_events self._dispatch_channel_events() File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 469, in _dispatch_channel_events impl_channel._get_cookie()._dispatch_events() File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1310, in _dispatch_events evt.body) File "./server.py", line 30, in on_request body=json.dumps(DEVICE_INFO)) File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1978, in basic_publish mandatory, immediate) File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 2065, in publish self._flush_output() File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 1174, in _flush_output *waiters) File "/usr/lib/python2.6/site-packages/pika/adapters/blocking_connection.py", line 395, in _flush_output raise exceptions.ConnectionClosed() pika.exceptions.ConnectionClosed</code></pre>
0
2016-08-04T17:09:08Z
38,774,111
<p>Apologies, as I am unable to comment due to low reputation. Could you provide a little more information on how you are opening your connection? Is it really open?</p> <p>It might be because of a loss of connection with the RabbitMQ server, as pika doesn't deal with disconnects and this often results in a similar stacktrace.</p> <p>I also had a similar problem; in my case it was because my pika connection was dropping after some time, and my colleague was able to deal with this by adding a wait time for <code>mq:port_number</code>.</p> <p>We were using a Docker container, so we added the following line to our invoke.sh to wait for mq:</p> <p><code>filename.py --wait-secs 30 --port-wait mq:5672</code></p> <p>I hope you are able to resolve this after doing that.</p> <p>Otherwise, it would be better to check whether the connection is being dropped by pika before your Python script runs, or to provide more information on how you are invoking it. </p>
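Since pika's `BlockingConnection` doesn't recover from drops on its own, a common pattern is to wrap the connect-and-consume step in a retry loop with backoff. The sketch below is generic and runs with no broker; in real code `action` would open the connection and call `start_consuming()`, and you would catch `pika.exceptions.AMQPConnectionError` rather than the built-in `ConnectionError` used here for the demo:

```python
import time

def retry(action, attempts=5, base_delay=0.01, sleep=time.sleep):
    """Call `action` until it succeeds or `attempts` runs out.

    The delay doubles after each failure (exponential backoff).
    """
    delay = base_delay
    for attempt in range(1, attempts + 1):
        try:
            return action()
        except ConnectionError:
            if attempt == attempts:
                raise              # out of attempts: let the caller see it
            sleep(delay)
            delay *= 2

# A flaky "connection" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_connect():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("broker not ready")
    return "connected"

result = retry(flaky_connect, sleep=lambda s: None)  # no real sleeping in the demo
# result -> "connected" after 3 calls
```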
0
2016-08-04T17:45:40Z
[ "python", "sockets", "rabbitmq", "pika" ]
How to move the legend in Seaborn FacetGrid outside of the plot?
38,773,560
<p>I have the following code:</p> <pre><code>g = sns.FacetGrid(df, row="Type", hue="Name", size=3, aspect=3) g = g.map(sns.plt.plot, "Volume", "Index") g.add_legend() sns.plt.show() </code></pre> <p>This results in the following plot:</p> <p><a href="http://i.stack.imgur.com/OWdku.png" rel="nofollow"><img src="http://i.stack.imgur.com/OWdku.png" alt="enter image description here"></a></p> <p>How can I move the legend outside of the plot?</p>
0
2016-08-04T17:11:05Z
38,774,322
<p>You can do this by resizing the plots:</p> <pre><code>g = sns.FacetGrid(df, row="Type", hue="Name", size=3, aspect=3) g = g.map(sns.plt.plot, "Volume", "Index") for ax in g.axes.flat: box = ax.get_position() ax.set_position([box.x0,box.y0,box.width*0.9,box.height]) sns.plt.legend(loc='center left',bbox_to_anchor=(1,0.5)) sns.plt.show() </code></pre> <p>Example:</p> <pre><code>import seaborn as sns tips = sns.load_dataset('tips') # more informative values condition = tips['smoker'] == 'Yes' tips['smoking_status'] = '' tips.loc[condition,'smoking_status'] = 'Smoker' tips.loc[~condition,'smoking_status'] = 'Non-Smoker' g = sns.FacetGrid(tips,row='sex',hue='smoking_status',size=3,aspect=3) g = g.map(plt.scatter,'total_bill','tip') for ax in g.axes.flat: box = ax.get_position() ax.set_position([box.x0,box.y0,box.width*0.85,box.height]) sns.plt.legend(loc='upper left',bbox_to_anchor=(1,0.5)) sns.plt.show() </code></pre> <p>Results in:</p> <p><a href="http://i.stack.imgur.com/lYk4u.png" rel="nofollow"><img src="http://i.stack.imgur.com/lYk4u.png" alt="enter image description here"></a></p>
0
2016-08-04T17:58:27Z
[ "python", "plot", "seaborn" ]
How to move the legend in Seaborn FacetGrid outside of the plot?
38,773,560
<p>I have the following code:</p> <pre><code>g = sns.FacetGrid(df, row="Type", hue="Name", size=3, aspect=3)
g = g.map(sns.plt.plot, "Volume", "Index")
g.add_legend()
sns.plt.show()
</code></pre> <p>This results in the following plot:</p> <p><a href="http://i.stack.imgur.com/OWdku.png" rel="nofollow"><img src="http://i.stack.imgur.com/OWdku.png" alt="enter image description here"></a></p> <p>How can I move the legend outside of the plot?</p>
0
2016-08-04T17:11:05Z
38,828,456
<p>According to mwaskom's comment above, this is a bug in OS X. Indeed switching to another backend solves the issue.</p> <p>For instance, I put this into my <code>matplotlibrc</code>:</p> <pre><code>backend : TkAgg # use Tk with antigrain (agg) rendering </code></pre>
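If editing <code>matplotlibrc</code> is inconvenient, the backend can also be switched per script. A hedged sketch: the call must happen before <code>pyplot</code> is imported, and since the interactive <code>TkAgg</code> backend needs a working Tk install, the demonstration selects the always-available <code>Agg</code> backend instead (for the OS X bug above you would pass <code>"TkAgg"</code>):

```python
import matplotlib
# Select the backend before pyplot is imported; afterwards it is too late.
matplotlib.use("Agg")
import matplotlib.pyplot as plt

print(matplotlib.get_backend())
```

This is equivalent to the `backend : TkAgg` line in `matplotlibrc`, just scoped to one script instead of the whole installation.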
0
2016-08-08T11:48:58Z
[ "python", "plot", "seaborn" ]
Python library project structure best practice: imports and tests
38,773,669
<p>I want to refactor a Python library I use a lot in my day-to-day work and publish it on GitHub as open source. Before doing so, I would like to be compliant with some kind of best practice for Python project structure. I am going to describe below what I would like to do, and I would appreciate your suggestions.</p> <p>Here is my library (mylib) structure:</p> <pre><code>mylib/
    /examples/
        simple_example.py
    /mylib/
        __init__.py
        foo.py
        bar.py
    /tests/
        test_foo.py
        test_bar.py
</code></pre> <p>Here are the files:</p> <pre><code>#foo.py
def Foo():
    print("foo.Foo")

#bar.py
import foo

def Bar():
    print("bar.Bar")
    foo.Foo()

#test_bar.py
from ..mylib import bar  #doesnt work!

class TestBar(unittest.TestCase):
    def test_1(self):
        bar.Bar()
        self.assertEqual(True, True)

if __name__ == '__main__':
    unittest.main()

#simple_example.py
from .. import foo  #doesnt work!
from .. import bar  #doesnt work!

if __name__ == '__main__':
    foo.Foo()
    bar.Bar()
</code></pre> <p>What I would like to do is:</p> <p>1- Execute simple_example.py, ideally from /mylib/examples/:</p> <pre><code>$cd myapp
$cd examples
$python simple_example.py
Traceback (most recent call last):
  File "simple_example.py", line 2, in &lt;module&gt;
    from .. import foo
SystemError: Parent module '' not loaded, cannot perform relative import
</code></pre> <p>2- Execute a single test file, ideally from /mylib/tests/:</p> <pre><code>$cd myapp
$cd tests
$python test_bar.py
Traceback (most recent call last):
  File "test_bar.py", line 3, in &lt;module&gt;
    from ..mylib import bar
SystemError: Parent module '' not loaded, cannot perform relative import
</code></pre> <p>3- Execute all tests from the mylib root:</p> <pre><code>$cd myapp
$python -m unittest discover tests  #same problem as above!
</code></pre> <p>So, the problems are in the import statements in simple_example.py and test_bar.py. 
What is the best method to fix those imports?</p> <p>Note that I would like to use python standard lib unittest for unit testing.</p> <p>Thanks</p> <p>Charlie</p>
3
2016-08-04T17:17:51Z
38,774,189
<p>When running your test code, you want to do absolute imports. This is because when you're running unit tests, etc., you should assume your 'library' is installed in local development mode for testing -- don't use relative imports because you are not in the same package.</p> <p>Here's how you would do an import in your <code>test_foo.py</code> file, for example:</p> <pre><code># test_foo.py
from mylib.foo import Foo

# ... your test code here
</code></pre> <p>In general, you should only use relative imports INSIDE of your library code, not in your tests =)</p> <p>I hope this helps.</p> <p><strong>EDIT</strong>: You also need to install your library in development mode before this will work. You can do this in one of two ways:</p> <pre><code>$ python setup.py develop
</code></pre> <p>OR</p> <pre><code>$ pip install -e .
</code></pre> <p>Either of the above commands will inspect your project's <code>setup.py</code> file, which tells Python how your package is built / created, and will install it locally so you can run tests / mess with it.</p>
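Both commands assume a `setup.py` exists at the project root; a minimal one for this layout could be as small as `setup(name='mylib', version='0.1.0', packages=find_packages(exclude=['tests', 'examples']))` — the name and version are placeholders. The sketch below shows what `find_packages` would discover for the directory tree in the question; only directories containing an `__init__.py` count as packages:

```python
import os
import tempfile
from setuptools import find_packages

# Recreate the question's layout in a scratch directory.
root = tempfile.mkdtemp()
for d in ("mylib", "tests", "examples"):
    os.makedirs(os.path.join(root, d))
# Only mylib/ is a real package: it has an __init__.py.
open(os.path.join(root, "mylib", "__init__.py"), "w").close()

packages = find_packages(root, exclude=["tests", "examples"])
print(packages)  # ['mylib']
```

Because `tests/` and `examples/` are excluded (and have no `__init__.py` anyway), only the importable library ends up installed, which is exactly what the absolute `from mylib.foo import Foo` imports rely on.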
1
2016-08-04T17:50:01Z
[ "python", "python-unittest", "project-structure" ]
Is it possible to add a value named 'None' to enum type?
38,773,832
<p>can I add a value named 'None' to a enum? for example </p> <pre><code>from enum import Enum class Color(Enum): None=0 #represent no color at all red = 1 green = 2 blue = 3 color=Color.None if (color==Color.None): #don't fill the rect else: #fill the rect with the color </code></pre> <p>This question is related to my previous question <a href="https://stackoverflow.com/questions/38686377/how-to-set-a-variables-subproperty">How to set a variable&#39;s subproperty?</a></p> <p>Of course, I understand the above <code>None</code> in <code>enum</code> doesn't work. but from the vendor's code, I do see something like this: <code>bird.eye.Color=bird.eye.Color.enum.None</code> I checked the <code>type(bird.eye.Color)</code> it is a <code>&lt;class 'flufl.enum._enum.IntEnumValue'&gt;</code> so a <code>flufl.enum</code> is used. I suppose it should not be very different to use a <code>flufl.enum</code> or a <code>Enum</code>. Thanks a lot!</p>
3
2016-08-04T17:28:38Z
38,773,896
<p>Not quite the way you tried, but you can do this:</p> <pre><code># After defining the class Color as normal, but excluding the part for None...
setattr(Color, 'None', 0)

color = Color.None
if color == Color.None:
    ...
</code></pre> <p>Note: I did this in Python 2. Not sure if you want this in Python 2 or 3 because you didn't specify, and I don't have a copy of Python 3 installed on this machine to test with.</p>
-1
2016-08-04T17:32:49Z
[ "python", "enums" ]
Is it possible to add a value named 'None' to enum type?
38,773,832
<p>can I add a value named 'None' to a enum? for example </p> <pre><code>from enum import Enum class Color(Enum): None=0 #represent no color at all red = 1 green = 2 blue = 3 color=Color.None if (color==Color.None): #don't fill the rect else: #fill the rect with the color </code></pre> <p>This question is related to my previous question <a href="https://stackoverflow.com/questions/38686377/how-to-set-a-variables-subproperty">How to set a variable&#39;s subproperty?</a></p> <p>Of course, I understand the above <code>None</code> in <code>enum</code> doesn't work. but from the vendor's code, I do see something like this: <code>bird.eye.Color=bird.eye.Color.enum.None</code> I checked the <code>type(bird.eye.Color)</code> it is a <code>&lt;class 'flufl.enum._enum.IntEnumValue'&gt;</code> so a <code>flufl.enum</code> is used. I suppose it should not be very different to use a <code>flufl.enum</code> or a <code>Enum</code>. Thanks a lot!</p>
3
2016-08-04T17:28:38Z
38,774,019
<p>You can do this using the <code>Enum</code> constructor rather than creating a subclass:</p> <pre><code>&gt;&gt;&gt; from enum import Enum
&gt;&gt;&gt;
&gt;&gt;&gt; Color = Enum('Color', {'None': 0, 'Red': 1, 'Green': 2, 'Blue': 3})
&gt;&gt;&gt; Color.None
&lt;Color.None: 0&gt;
</code></pre> <p>EDIT: This works using the <code>enum34</code> backport for python 2. In python 3, you will be able to create the <code>Enum</code> with the <code>None</code> attribute, but you won't be able to access it using dot notation:</p> <pre><code>&gt;&gt;&gt; Color.None
SyntaxError: invalid syntax
</code></pre> <p>Oddly, you can still access it with <code>getattr</code>:</p> <pre><code>&gt;&gt;&gt; getattr(Color, 'None')
&lt;Color.None: 0&gt;
</code></pre>
0
2016-08-04T17:39:43Z
[ "python", "enums" ]
Is it possible to add a value named 'None' to enum type?
38,773,832
<p>can I add a value named 'None' to a enum? for example </p> <pre><code>from enum import Enum class Color(Enum): None=0 #represent no color at all red = 1 green = 2 blue = 3 color=Color.None if (color==Color.None): #don't fill the rect else: #fill the rect with the color </code></pre> <p>This question is related to my previous question <a href="https://stackoverflow.com/questions/38686377/how-to-set-a-variables-subproperty">How to set a variable&#39;s subproperty?</a></p> <p>Of course, I understand the above <code>None</code> in <code>enum</code> doesn't work. but from the vendor's code, I do see something like this: <code>bird.eye.Color=bird.eye.Color.enum.None</code> I checked the <code>type(bird.eye.Color)</code> it is a <code>&lt;class 'flufl.enum._enum.IntEnumValue'&gt;</code> so a <code>flufl.enum</code> is used. I suppose it should not be very different to use a <code>flufl.enum</code> or a <code>Enum</code>. Thanks a lot!</p>
3
2016-08-04T17:28:38Z
38,774,049
<p>You can not do this directly because it is a syntax error to assign to <code>None</code>. </p> <p>Neither should you set an attribute on your enum class dynamically, because this will interfere with the metaclass logic that <code>Enum</code> uses to prepare your class. </p> <p>You should just use a lowercase name <code>none</code> to avoid the name collision with python's <code>None</code> singleton. For the use-case you have described, there is no disadvantage to this approach. </p>
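A lowercase member keeps everything accessible with plain attribute syntax on any Python version. A minimal sketch of the question's example with `none` standing in for the reserved name:

```python
from enum import Enum

class Color(Enum):
    none = 0   # stands in for "no color at all"; avoids the None keyword
    red = 1
    green = 2
    blue = 3

color = Color.none
# Enum members are singletons, so identity and equality both work here.
if color is Color.none:
    print("don't fill the rect")
else:
    print("fill the rect with", color.name)
```

`Color.none`, `Color['none']`, and `Color(0)` all resolve to the same member, so the rect-filling check from the question works unchanged apart from the spelling.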
1
2016-08-04T17:41:59Z
[ "python", "enums" ]
Installation error for language_check in python 2.7
38,773,841
<p>I have tried to install the language_check library in Python 2.7 by using...</p> <pre><code>pip install language_check
</code></pre> <p>and...</p> <pre><code>pip install language_check --upgrade
</code></pre> <p>In both cases, I get the following error...</p> <pre><code>Collecting language-check
  Using cached language-check-0.8.tar.gz
Installing collected packages: language-check
  Running setup.py install for language-check
    Complete output from command "C:\Users\Gaurav M\Anaconda\python.exe" -c "import setuptools, tokenize;__file__='c:\\users\\gaurav~1\\appdata\\local\\temp\\pip-build-ew9qcy\\language-check\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\gaurav~1\appdata\local\temp\pip-b0zy9n-record\install-record.txt --single-version-externally-managed --compile:
    Downloading 'LanguageTool-3.2.zip' (87.3 MiB)...
    100%
    Traceback (most recent call last):
      File "&lt;string&gt;", line 1, in &lt;module&gt;
      File "c:\users\gaurav~1\appdata\local\temp\pip-build-ew9qcy\language-check\setup.py", line 597, in &lt;module&gt;
        sys.exit(main())
      File "c:\users\gaurav~1\appdata\local\temp\pip-build-ew9qcy\language-check\setup.py", line 592, in main
        run_setup_hooks(config)
      File "c:\users\gaurav~1\appdata\local\temp\pip-build-ew9qcy\language-check\setup.py", line 561, in run_setup_hooks
        language_tool_hook(config)
      File "c:\users\gaurav~1\appdata\local\temp\pip-build-ew9qcy\language-check\setup.py", line 586, in language_tool_hook
        download_lt()
      File "download_lt.py", line 158, in download_lt
        os.path.join(PACKAGE_PATH, dirname))
    WindowsError: [Error 5] Access is denied

    ----------------------------------------
Command ""C:\Users\Gaurav M\Anaconda\python.exe" -c "import setuptools, tokenize;__file__='c:\\users\\gaurav~1\\appdata\\local\\temp\\pip-build-ew9qcy\\language-check\\setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record c:\users\gaurav~1\appdata\local\temp\pip-b0zy9n-record\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in c:\users\gaurav~1\appdata\local\temp\pip-build-ew9qcy\language-check
</code></pre> <p>I also tried doing...</p> <pre><code>easy_install language_check
</code></pre> <p>and that throws a different error...</p> <pre><code>Downloading https://pypi.python.org/packages/05/2e/471a9104b0fe7bb404de6d79e2fdd0c41ad08b87a16cbb4c8c5c9300a608/language-check-0.8.tar.gz#md5=8b4e3aa5e77bff1e33d3312a6dae870b
Processing language-check-0.8.tar.gz
Writing c:\users\gaurav~1\appdata\local\temp\easy_install-qkjgfj\language-check-0.8\setup.cfg
Running language-check-0.8\setup.py -q bdist_egg --dist-dir c:\users\gaurav~1\appdata\local\temp\easy_install-qkjgfj\language-check-0.8\egg-dist-tmp-py6mda
Downloading 'LanguageTool-3.2.zip' (87.3 MiB)...
100%
error: [Error 145] The directory is not empty
&lt;built-in function rmdir&gt; c:\users\gaurav~1\appdata\local\temp\easy_install-qkjgfj\language-check-0.8\language_check\LanguageTool-3.2\org\languagetool\rules\uk
</code></pre> <p>How do I install language_check in this case?</p>
2
2016-08-04T17:29:33Z
38,814,889
<p>I checked the sources of the file <code>download_lt.py</code> (<a href="https://github.com/myint/language-check" rel="nofollow">github language_check</a>). It appears that the error occurs when you try to move the folder <code>language_check/LanguageTool-X.Y</code> with the command <a href="https://docs.python.org/3/library/os.html#os.rename" rel="nofollow"><code>os.rename()</code></a> from your <a href="https://docs.python.org/3/library/tempfile.html#tempfile.TemporaryFile" rel="nofollow"><code>TemporaryFile</code></a> to your Anaconda Lib folder.</p> <p>So far, @Orions is right: it is a permission problem.</p> <p>Firstly, you should check your folder permissions:</p> <ul> <li>Go to your Local folder (should be C:\Users\Gaurav M\AppData\Local)</li> <li>Right-click on the <code>Temp</code> folder and select <code>properties</code></li> <li>Go to the <code>Security</code> tab, then <code>Edit</code> and <code>Add</code> your name if it doesn't appear under <code>Group or user names</code>.</li> </ul> <p>Repeat the operation for your Anaconda folder (should be C:\Users\Gaurav M\Anaconda).</p> <p>Secondly, you can try:</p> <pre><code>pip install --user language_check
</code></pre> <p>But the pip <a href="https://pip.pypa.io/en/stable/reference/pip_install/#cmdoption--user" rel="nofollow">--user</a> option installs the package only for the user.</p> <blockquote> <p>Install to the Python user install directory for your platform. Typically ~/.local/, or %APPDATA%Python on Windows. (See the Python documentation for site.USER_BASE for full details.)</p> </blockquote> <p>Last but not least, I presume you are using <code>cmd</code> or <code>powershell</code> as your command-line interpreter. In my opinion, using <a href="https://www.cygwin.com/" rel="nofollow">cygwin</a> on Windows makes a lot of things easier. 
Although it could be painful to configure, I would recommend a pre-configured <code>cygwin</code> solution like <a href="http://babun.github.io/" rel="nofollow">Babun</a>.</p> <p>Good luck!</p>
2
2016-08-07T14:03:33Z
[ "python", "python-2.7", "pip", "easy-install" ]
Getting dimensions right for regression with lasagne
38,773,866
<p>I'm trying to learn a network that outputs a value in the range -1.0..1.0. There are only six features so far, all floats. I'm having real trouble getting types and shapes aligned. So far I have:</p> <pre><code>#!/usr/bin/env python3 import lasagne import numpy as np import sys import theano import theano.tensor as T infilename = sys.argv[1] split_size = 500 epochs = 100 theano.config.exception_verbosity = 'high' examples = np.genfromtxt(infilename, delimiter=' ') np.random.shuffle(examples) examples = examples.reshape(-1, 7) train, test = examples[:split_size,:], examples[split_size:,:] # input and target train_y = train[:,0] train_X = train[:,1:] test_y = test[:,0] test_X = test[:,1:] input_var = T.matrix() target_var = T.vector() def iterate_minibatches(inputs, targets, batchsize, shuffle=False): assert len(inputs) == len(targets) if shuffle: indices = np.arange(len(inputs)) np.random.shuffle(indices) for start_idx in range(0, len(inputs) - batchsize + 1, batchsize): if shuffle: excerpt = indices[start_idx:start_idx + batchsize] else: excerpt = slice(start_idx, start_idx + batchsize) yield inputs[excerpt], targets[excerpt] # nn structure from lasagne.nonlinearities import tanh, softmax, leaky_rectify net = lasagne.layers.InputLayer(shape=(None, 6), input_var=input_var) net = lasagne.layers.DenseLayer(net, num_units=10, nonlinearity=tanh) net = lasagne.layers.DenseLayer(net, num_units=1, nonlinearity=softmax) prediction = lasagne.layers.get_output(net) loss = lasagne.objectives.aggregate(prediction, target_var) loss = loss.mean() + 1e-4 * lasagne.regularization.regularize_network_params(net, lasagne.regularization.l2) # parameter update expressions params = lasagne.layers.get_all_params(net, trainable=True) updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate = 0.02, momentum=0.9) # training function train_fn = theano.function([input_var, target_var], loss, updates=updates) for epoch in range(epochs): loss = 0 for input_batch, target_batch 
in iterate_minibatches(train_X, train_y, 50, shuffle=True): print('input', input_batch.shape) print('target', target_batch.shape) loss += train_fn(input_batch, target_batch) print('epoch', epoch, 'loss', loss / len(training_data)) test_prediction = lasagne.layers.get_output(network, deterministic=True) predict_fn = theano.function([input_var], T.argmax(test_prediction, axis=1)) print('predicted score for first test input', predict_fn(test_X[0])) print(net_output) </code></pre> <p>The input data is a 7-column file of floats, space-separated. Here are a few example lines:</p> <pre><code>-0.4361711835021444 0.9926778242677824 1.0 0.0 0.0 0.0 0.0 1.0 0.9817294281729428 1.0 1.7142857142857142 0.0 0.42857142857142855 1.7142857142857142 -0.4356014580801944 0.9956764295676429 1.0 0.0 0.0 0.0 0.0 1.0 1.0 3.0 0.0 0.0 4.0 1.0 -0.4361977186311787 0.9925383542538354 1.0 0.0 0.0 0.0 0.0 -0.46511627906976744 1.0 0.5 0.0 0.0 0.0 0.0 -0.4347826086956522 1.0 1.0 0.0 0.0 0.0 0.0 -0.4378224895429426 0.9840306834030683 1.0 0.0 0.0 0.0 0.0 -0.4377155764476054 0.9845885634588564 1.0 0.0 0.0 0.0 0.0 1.0 1.0 1.0 1.0 0.0 2.0 0.0 </code></pre> <p>This is based pretty tightly on the lasagne reference example. The error that comes out is:</p> <pre><code>/usr/local/lib/python3.5/dist-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module. "downsample module has been moved to the theano.tensor.signal.pool module.") input (50, 6) target (50,) Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/theano/compile/function_module.py", line 859, in __call__ outputs = self.fn() ValueError: Input dimension mis-match. 
(input[0].shape[1] = 1, input[1].shape[1] = 50) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "./nn_cluster.py", line 66, in &lt;module&gt; loss += train_fn(input_batch, target_batch) File "/usr/local/lib/python3.5/dist-packages/theano/compile/function_module.py", line 871, in __call__ storage_map=getattr(self.fn, 'storage_map', None)) File "/usr/local/lib/python3.5/dist-packages/theano/gof/link.py", line 314, in raise_with_op reraise(exc_type, exc_value, exc_trace) File "/usr/lib/python3/dist-packages/six.py", line 685, in reraise raise value.with_traceback(tb) File "/usr/local/lib/python3.5/dist-packages/theano/compile/function_module.py", line 859, in __call__ outputs = self.fn() ValueError: Input dimension mis-match. (input[0].shape[1] = 1, input[1].shape[1] = 50) Apply node that caused the error: Elemwise{Mul}[(0, 0)](SoftmaxWithBias.0, InplaceDimShuffle{x,0}.0) Toposort index: 21 Inputs types: [TensorType(float64, matrix), TensorType(float64, row)] Inputs shapes: [(50, 1), (1, 50)] Inputs strides: [(8, 8), (400, 8)] Inputs values: ['not shown', 'not shown'] Outputs clients: [[Sum{acc_dtype=float64}(Elemwise{Mul}[(0, 0)].0)]] Debugprint of the apply node: Elemwise{Mul}[(0, 0)] [id A] &lt;TensorType(float64, matrix)&gt; '' |SoftmaxWithBias [id B] &lt;TensorType(float64, matrix)&gt; '' | |Dot22 [id C] &lt;TensorType(float64, matrix)&gt; '' | | |Elemwise{Composite{tanh((i0 + i1))}}[(0, 0)] [id D] &lt;TensorType(float64, matrix)&gt; '' | | | |Dot22 [id E] &lt;TensorType(float64, matrix)&gt; '' | | | | |&lt;TensorType(float64, matrix)&gt; [id F] &lt;TensorType(float64, matrix)&gt; | | | | |W [id G] &lt;TensorType(float64, matrix)&gt; | | | |InplaceDimShuffle{x,0} [id H] &lt;TensorType(float64, row)&gt; '' | | | |b [id I] &lt;TensorType(float64, vector)&gt; | | |W [id J] &lt;TensorType(float64, matrix)&gt; | |b [id K] &lt;TensorType(float64, vector)&gt; |InplaceDimShuffle{x,0} [id L] 
&lt;TensorType(float64, row)&gt; '' |&lt;TensorType(float64, vector)&gt; [id M] &lt;TensorType(float64, vector)&gt; Storage map footprint: - Elemwise{Composite{tanh((i0 + i1))}}[(0, 0)].0, Shape: (50, 10), ElemSize: 8 Byte(s), TotalSize: 4000 Byte(s) - &lt;TensorType(float64, matrix)&gt;, Input, Shape: (50, 6), ElemSize: 8 Byte(s), TotalSize: 2400 Byte(s) - W, Shared Input, Shape: (6, 10), ElemSize: 8 Byte(s), TotalSize: 480 Byte(s) - &lt;TensorType(float64, matrix)&gt;, Shared Input, Shape: (6, 10), ElemSize: 8 Byte(s), TotalSize: 480 Byte(s) - SoftmaxWithBias.0, Shape: (50, 1), ElemSize: 8 Byte(s), TotalSize: 400 Byte(s) - InplaceDimShuffle{x,0}.0, Shape: (1, 50), ElemSize: 8 Byte(s), TotalSize: 400 Byte(s) - SoftmaxGrad.0, Shape: (50, 1), ElemSize: 8 Byte(s), TotalSize: 400 Byte(s) - &lt;TensorType(float64, vector)&gt;, Input, Shape: (50,), ElemSize: 8 Byte(s), TotalSize: 400 Byte(s) - W, Shared Input, Shape: (10, 1), ElemSize: 8 Byte(s), TotalSize: 80 Byte(s) - b, Shared Input, Shape: (10,), ElemSize: 8 Byte(s), TotalSize: 80 Byte(s) - &lt;TensorType(float64, vector)&gt;, Shared Input, Shape: (10,), ElemSize: 8 Byte(s), TotalSize: 80 Byte(s) - &lt;TensorType(float64, matrix)&gt;, Shared Input, Shape: (10, 1), ElemSize: 8 Byte(s), TotalSize: 80 Byte(s) - TensorConstant{0.02}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - b, Shared Input, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8 Byte(s) - TensorConstant{0.0001}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - TensorConstant{(1, 1) of 0.9}, Shape: (1, 1), ElemSize: 8 Byte(s), TotalSize: 8 Byte(s) - TensorConstant{4.00000000..000001e-06}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - TensorConstant{(1,) of 0.02}, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8 Byte(s) - Constant{0}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - Subtensor{int64}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - TensorConstant{(1,) of 0.9}, Shape: (1,), ElemSize: 8 Byte(s), 
TotalSize: 8 Byte(s) - Constant{1}, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - Subtensor{int64}.0, Shape: (), ElemSize: 8 Byte(s), TotalSize: 8.0 Byte(s) - TensorConstant{(1, 1) of 1.0}, Shape: (1, 1), ElemSize: 8 Byte(s), TotalSize: 8 Byte(s) - &lt;TensorType(float64, vector)&gt;, Shared Input, Shape: (1,), ElemSize: 8 Byte(s), TotalSize: 8 Byte(s) TotalSize: 8984.0 Byte(s) 0.000 GB TotalSize inputs: 4168.0 Byte(s) 0.000 GB HINT: Re-running with most Theano optimization disabled could give you a back-trace of when this node was created. This can be done with by setting the Theano flag 'optimizer=fast_compile'. If that does not work, Theano optimizations can be disabled with 'optimizer=None'. </code></pre> <p>A similar exception is raised when using <code>lasagne.objectives.squared_error</code>. Any ideas? I can't work out where the data shape is wrong, if that's the problem, and if this is the right way to use the objective function.</p>
0
2016-08-04T17:31:05Z
39,149,486
<p>I copied your code and your input data, modified a few things, and it ran with no errors.</p> <p>code:</p> <pre><code>import lasagne
import numpy as np
import sys
import theano
import theano.tensor as T

infilename = 'tt_lasagne.input'  #sys.argv[1]
split_size = 500
epochs = 100

theano.config.exception_verbosity = 'high'

examples = np.genfromtxt(infilename, delimiter=' ')
np.random.shuffle(examples)
examples = examples.reshape(-1, 7)
train, test = examples[:split_size,:], examples[split_size:,:]

# input and target
train_y = train[:,0]
train_X = train[:,1:]
test_y = test[:,0]
test_X = test[:,1:]

input_var = T.matrix()
target_var = T.vector()

def iterate_minibatches(inputs, targets, batchsize, shuffle=False):
    assert len(inputs) == len(targets)
    if shuffle:
        indices = np.arange(len(inputs))
        np.random.shuffle(indices)
    for start_idx in range(0, len(inputs) - batchsize + 1, batchsize):
        if shuffle:
            excerpt = indices[start_idx:start_idx + batchsize]
        else:
            excerpt = slice(start_idx, start_idx + batchsize)
        yield inputs[excerpt], targets[excerpt]

# nn structure
from lasagne.nonlinearities import tanh, softmax, leaky_rectify
net = lasagne.layers.InputLayer(shape=(None, 6), input_var=input_var)
net = lasagne.layers.DenseLayer(net, num_units=10, nonlinearity=tanh)
net = lasagne.layers.DenseLayer(net, num_units=1, nonlinearity=softmax)

prediction = lasagne.layers.get_output(net)
loss = lasagne.objectives.aggregate(prediction, target_var)
loss = loss.mean() + 1e-4 * lasagne.regularization.regularize_network_params(net, lasagne.regularization.l2)

# parameter update expressions
params = lasagne.layers.get_all_params(net, trainable=True)
updates = lasagne.updates.nesterov_momentum(loss, params, learning_rate=0.02, momentum=0.9)

# training function
train_fn = theano.function([input_var, target_var], loss, updates=updates)

for epoch in range(epochs):
    loss = 0
    for input_batch, target_batch in iterate_minibatches(train_X, train_y, 50, shuffle=True):
        print('input', input_batch.shape)
        print('target', target_batch.shape)
        loss += train_fn(input_batch, target_batch)
    print('epoch', epoch, 'loss', loss / len(train_X))

#test_prediction = lasagne.layers.get_output(net, deterministic=True)
#predict_fn = theano.function([input_var], T.argmax(test_prediction, axis=1))
#print('predicted score for first test input', predict_fn(test_X[0]))
#print(net_output)
</code></pre> <p>tt_lasagne.input</p> <pre><code>-0.4361711835021444 0.9926778242677824 1.0 0.0 0.0 0.0 0.0
1.0 0.9817294281729428 1.0 1.7142857142857142 0.0 0.42857142857142855 1.7142857142857142
-0.4356014580801944 0.9956764295676429 1.0 0.0 0.0 0.0 0.0
1.0 1.0 3.0 0.0 0.0 4.0 1.0
-0.4361977186311787 0.9925383542538354 1.0 0.0 0.0 0.0 0.0
-0.46511627906976744 1.0 0.5 0.0 0.0 0.0 0.0
-0.4347826086956522 1.0 1.0 0.0 0.0 0.0 0.0
-0.4378224895429426 0.9840306834030683 1.0 0.0 0.0 0.0 0.0
-0.4377155764476054 0.9845885634588564 1.0 0.0 0.0 0.0 0.0
1.0 1.0 1.0 1.0 0.0 2.0 0.0
</code></pre>
0
2016-08-25T15:40:45Z
[ "python", "numpy", "theano", "lasagne" ]
Odoo. TypeError: 'int' object is not iterable
38,773,888
<p>I have a problem with my code.</p> <pre><code>class SiteTrip(models.Model):
    _name = 'vips_vc.site_trip'

    name = fields.Char()
    session_ids = fields.One2many('vips_vc.session', 'site_trip_id', string='Session ID', index=True)
    url_prevouse_ids = fields.Many2one('vips_vc.url_list', string='Prevouse URL', index=True)
    url_current_ids = fields.Many2one('vips_vc.url_list', string='Current URL', index=True)


class URLList(models.Model):
    _name = 'vips_vc.url_list'

    name = fields.Char(string="URL", required=True)
    url_parametes = fields.Char(string="URL parameters")
    target_session_id = fields.One2many('vips_vc.session', 'target_url_ids', string='Target URL')
    site_trip_prevouse_id = fields.One2many('vips_vc.site_trip', 'url_prevouse_ids', string='Prevouse URL')
    site_trip_current_id = fields.One2many('vips_vc.site_trip', 'url_current_ids', string='Current URL')
    remote_sites_id = fields.One2many('vips_vc.remote_sites', 'site_url_ids', string='Remote site page with URL')
    remote_sites_target_url_id = fields.One2many('vips_vc.remote_sites', 'target_url_ids', string='URL on remote site page')
</code></pre> <p>My controller:</p> <pre><code>def register_trip(self, currentURLid, prevouseURLid, sessionID):
    currentURLid = int(currentURLid)
    prevouseURLid = int(prevouseURLid)
    result = None
    ### something
    _logger.info("CREATE -----&gt; session_ids: %r url_prevouse_ids: %r url_current_ids: %r",
                 sessionID, prevouseURLid, currentURLid)
    result = table.create({'session_ids': sessionID,
                           'url_prevouse_ids': prevouseURLid,
                           'url_current_ids': currentURLid})
    ### something
    return result.id
</code></pre> <p>And the error is:</p> <pre><code>2016-08-04 17:20:52,931 24261 INFO odoov8 openerp.addons.vips_vc.controllers: CREATE -----&gt; session_ids: 59 url_prevouse_ids: 8 url_current_ids: 1
2016-08-04 17:20:52,938 24261 ERROR odoov8 openerp.http: Exception during JSON request handling.
Traceback (most recent call last):
  File "/home/skif/odoo/openerp/http.py", line 540, in _handle_exception
    return super(JsonRequest, self)._handle_exception(exception)
  File "/home/skif/odoo/openerp/http.py", line 577, in dispatch
    result = self._call_function(**self.params)
  File "/home/skif/odoo/openerp/http.py", line 313, in _call_function
    return checked_call(self.db, *args, **kwargs)
  File "/home/skif/odoo/openerp/service/model.py", line 118, in wrapper
    return f(dbname, *args, **kwargs)
  File "/home/skif/odoo/openerp/http.py", line 310, in checked_call
    return self.endpoint(*a, **kw)
  File "/home/skif/odoo/openerp/http.py", line 806, in __call__
    return self.method(*args, **kw)
  File "/home/skif/odoo/openerp/http.py", line 406, in response_wrap
    response = f(*args, **kw)
  File "/home/skif/odoo/my-modules/vips_vc/controllers.py", line 194, in register_session
    self.register_trip(currentURLid, prevouseURLid, sessionID)
  File "/home/skif/odoo/my-modules/vips_vc/controllers.py", line 375, in register_trip
    'url_current_ids': currentURLid})
  File "/home/skif/odoo/openerp/api.py", line 266, in wrapper
    return new_api(self, *args, **kwargs)
  File "/home/skif/odoo/openerp/models.py", line 4094, in create
    record = self.browse(self._create(old_vals))
  File "/home/skif/odoo/openerp/api.py", line 266, in wrapper
    return new_api(self, *args, **kwargs)
  File "/home/skif/odoo/openerp/api.py", line 508, in new_api
    result = method(self._model, cr, uid, *args, **old_kwargs)
  File "/home/skif/odoo/openerp/models.py", line 4279, in _create
    result += self._columns[field].set(cr, self, id_new, field, vals[field], user, rel_context) or []
  File "/home/skif/odoo/openerp/osv/fields.py", line 795, in set
    for act in values:
TypeError: 'int' object is not iterable
</code></pre> <p>As you see, when I try to add a record in vips_vc.site_trip I receive this error. The error occurs only for currentURLid. It has an integer value. prevouseURLid has an integer value too. 
prevouseURLid and currentURLid have a similar One2many/Many2one relation.</p> <p>prevouseURLid is working. currentURLid isn't.</p> <p>On this line I checked all the parameters (logger output):</p> <pre><code>2016-08-04 17:20:52,931 24261 INFO odoov8 openerp.addons.vips_vc.controllers: CREATE -----&gt; session_ids: 59 url_prevouse_ids: 8 url_current_ids: 1
</code></pre> <p>url_prevouse_ids and url_current_ids are integers. They have defined values. They have a similar relation. And inserting url_current_ids returns an error. Why is this happening?</p> <p>Yesterday everything worked fine!</p> <p>I did not touch the relations. I did not touch the types of the variables...</p> <p>UPD: After all this manipulation I have this: if I try to create a record with any param (sessionID, prevouseURLid, currentURLid), I receive the same error: <em>TypeError: 'int' object is not iterable</em></p>
0
2016-08-04T17:32:08Z
38,789,480
<p>I found the error. I don't know how it was working earlier...</p> <pre><code>result = table.create({'session_ids': sessionID,
                       'url_prevouse_ids': prevouseURLid,
                       'url_current_ids': currentURLid})
</code></pre> <p>Here we have IDs of records from other tables (models). I removed all the data (the module was removed and installed again). After that, step by step, I checked all the data received by the variables and stored in the DB. I found that vips_vc.url_list and vips_vc.session had no data. So I placed the following code after creating all the records:</p> <pre><code>_logger.info(".....&gt; Commit record ID %r", result.id)
table.env.cr.commit()
</code></pre> <p>I have no idea why this code worked earlier without commit().</p>
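The failure mode behind the fix — rows written inside an open transaction are invisible to other readers until the writer commits — can be illustrated without Odoo at all. This is a hedged, generic sketch using sqlite3 from the standard library, not Odoo's cursor API:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection: create the table, then insert WITHOUT committing.
writer = sqlite3.connect(path)
writer.execute("CREATE TABLE t (id INTEGER)")
writer.commit()
writer.execute("INSERT INTO t VALUES (1)")

# A second connection plays the role of the rest of the application.
reader = sqlite3.connect(path)
before = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]  # row not visible yet

writer.commit()
after = reader.execute("SELECT COUNT(*) FROM t").fetchone()[0]   # now it is

print(before, after)
```

The same logic explains why the created vips_vc records "had no data" from the other model's point of view until the explicit `commit()` was added.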
0
2016-08-05T12:38:53Z
[ "python", "orm", "openerp", "typeerror", "odoo-8" ]
Python pinging local IPs
38,773,953
<p>So yesterday I was practicing what I learnt over the past few days and decided to create a script to scan through all the IPs in the local network and check which ones are being used.</p> <p>I used subprocess the use the "ping" command with a given timeout, and other few libraries such as docopt, threading and time for common tasks such as handling command line arguments, threading, waiting code etc...</p> <p>Here's the script: </p> <pre><code>""" ipcheck.py - Getting available IPs in a network. Usage: ipcheck.py -h | --help ipcheck.py PREFIX ipcheck.py [(-n &lt;pack_num&gt; PREFIX) | (-t &lt;timeout&gt; PREFIX)] Options: -h --help Show the program's usage. -n --packnum Number of packets to be sent. -t --timeout Timeout in miliseconds for the request. """ import sys, os, time, threading from threading import Thread from threading import Event import subprocess import docopt ips = [] # Global ping variable def ping(ip, e, n=1, time_out=1000): global ips # FIX SO PLATFORM INDEPENDENT # Use subprocess to ping an IP try: dump_file = open('dump.txt', 'w') subprocess.check_call("ping -q -w%d -c%s %s" % (int(time_out), int(n), ip), shell=True, stdout=dump_file, stderr=dump_file) except subprocess.CalledProcessError as err: # Ip did not receive packets print("The IP [%s] is NOT AVAILABLE" % ip) return else: # Ip received packets, so available print("The IP [%s] is AVAILABLE" % ip) #ips.append(ip) finally: # File has to be closed anyway dump_file.close() # Also set the event as ping finishes e.set() ips.append(1) def usage(): print("Helped init") def main(e): # variables needed timeout = 1000 N_THREADS = 10 # Get arguments for parsing arguments = docopt.docopt(__doc__) # Parse the arguments if arguments['--help'] or len(sys.argv[1:]) &lt; 1: usage() sys.exit(0) elif arguments['--packnum']: n_packets = arguments['--packnum'] elif arguments['--timeout']: timeout = arguments['--timeout'] prefix = arguments['PREFIX'] # Just an inner function to reuse in the main # loop. 
def create_thread(threads, ip, e): # Just code to create a ping thread threads.append(Thread(target=ping, args=(ip, e))) threads[-1].setDaemon(True) threads[-1].start() return # Do the threading stuff threads = [] # Loop to check all the IP's for i in range(1, 256): if len(threads) &lt; N_THREADS: # Creating and starting thread create_thread(threads, prefix+str(i), e) else: # Wait until a thread finishes e.wait() # Get rid of finished threads to_del = [] for th in threads: if not th.is_alive(): to_del.append(th) for th in to_del: threads.remove(th) # Cheeky clear init + create thread create_thread(threads, prefix+str(i), e) e.clear() time.sleep(2*timeout/1000) # Last chance to wait for unfinished pings print("Program ended. Number of threads active: %d." % threading.active_count()) if __name__ == "__main__": ev = Event() main(ev) </code></pre> <p>The problem I'm having is that, although I'm setting a timeout (in milliseconds) for the ping command, some threads do not finish for some reason. I fixed this temporarily by making all the threads daemonic and waiting twice the timeout after the program finishes (last few lines in main), but this doesn't work as expected: some threads are still not finished after the sleep.</p> <p>Is this something to do with the ping command itself, or is there a problem in my design?</p> <p>Peace!</p>
1
2016-08-04T17:35:58Z
38,774,454
<p>Python 3.3 implements a <code>timeout=</code> keyword parameter to <code>subprocess.check_call()</code>:</p> <p><a href="https://docs.python.org/3.3/library/subprocess.html#subprocess.check_call" rel="nofollow">https://docs.python.org/3.3/library/subprocess.html#subprocess.check_call</a></p> <p>Otherwise I would use another thread to ensure that the spawned command is killed after the timeout period - i.e. see this SO answer:</p> <p><a href="http://stackoverflow.com/a/6001858/866915">http://stackoverflow.com/a/6001858/866915</a></p>
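A minimal sketch of the `timeout=` behaviour the answer points to. The asker's ping invocation is replaced here by a sleeping child process so the snippet is self-contained; with `timeout=`, the child is killed and `subprocess.TimeoutExpired` is raised if it runs too long.

```python
import subprocess
import sys

# A child process that sleeps stands in for the ping command. If the
# child outlives timeout=, check_call kills it and raises
# subprocess.TimeoutExpired instead of hanging forever.
try:
    subprocess.check_call(
        [sys.executable, "-c", "import time; time.sleep(10)"],
        timeout=1,
    )
    timed_out = False
except subprocess.TimeoutExpired:
    timed_out = True

print(timed_out)  # → True
```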
0
2016-08-04T18:06:41Z
[ "python", "linux", "multithreading", "command-line", "timeout" ]
Writing python function to extract matching rows from pandas dataframe
38,773,954
<pre><code>df1 = pd.DataFrame({'A' : [5,5,5,5], 'B' : [4,2,1, 1], 'C' : [2,2,7,1]}) </code></pre> <p>I want to get the rows in df1 that satisfy the following condition:</p> <pre><code>df1.loc[(df1['A'] == 5) &amp; (df1['B'] == 4) &amp; (df1['C'] == 2)] </code></pre> <p>How can I make it more generic, i.e. have a function where I specify both the column names and the values I am looking for as arguments?</p>
1
2016-08-04T17:35:58Z
38,774,119
<p>You need something like this; <code>filterdf</code> is your function:</p> <pre><code>import pandas as pd df1 = pd.DataFrame({'A' : [5,5,5,5], 'B' : [4,2,1,1], 'C' : [2,2,7,1]}) def filterdf(df,col1,col2,val1,val2): return df[(df[col1] == val1) &amp; (df[col2] == val2)] df2 = filterdf(df1,'A','B',5,4) print(df2) Out: A B C 0 5 4 2 </code></pre>
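Since the question asks for a generic function, the same idea can be extended to any number of column/value pairs with keyword arguments; a sketch of my own, not part of the original answer:

```python
import pandas as pd

# Generalized sketch: accept any number of column=value conditions as
# keyword arguments and AND them together into one boolean mask.
def filterdf(df, **conditions):
    mask = pd.Series(True, index=df.index)
    for col, val in conditions.items():
        mask &= (df[col] == val)
    return df[mask]

df1 = pd.DataFrame({'A': [5, 5, 5, 5], 'B': [4, 2, 1, 1], 'C': [2, 2, 7, 1]})
df2 = filterdf(df1, A=5, B=4, C=2)  # same result as the two-column version
```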
1
2016-08-04T17:46:00Z
[ "python", "pandas" ]
Writing python function to extract matching rows from pandas dataframe
38,773,954
<pre><code>df1 = pd.DataFrame({'A' : [5,5,5,5], 'B' : [4,2,1, 1], 'C' : [2,2,7,1]}) </code></pre> <p>I want to get the rows in df1 that satisfy the following condition:</p> <pre><code>df1.loc[(df1['A'] == 5) &amp; (df1['B'] == 4) &amp; (df1['C'] == 2)] </code></pre> <p>How can I make it more generic, i.e. have a function where I specify both the column names and the values I am looking for as arguments?</p>
1
2016-08-04T17:35:58Z
38,774,297
<p>Assign what you are looking for to a series</p> <pre><code># first row of df1 looking_for = df1.iloc[0, :] </code></pre> <p>Then evaluate the equality and find where all are equal in a row.</p> <pre><code>df1.eq(looking_for).all(1) 0 True 1 False 2 False 3 False dtype: bool </code></pre> <p>Use this as a filter</p> <pre><code>df1[df1.eq(looking_for).all(1)] </code></pre> <p><a href="http://i.stack.imgur.com/oG7hp.png" rel="nofollow"><img src="http://i.stack.imgur.com/oG7hp.png" alt="enter image description here"></a></p> <p>Generically, assign any series</p> <pre><code>looking_for = pd.Series([1, 5, 7], list('BAC')) df1[df1.eq(looking_for).all(1)] </code></pre> <p><a href="http://i.stack.imgur.com/qtcoj.png" rel="nofollow"><img src="http://i.stack.imgur.com/qtcoj.png" alt="enter image description here"></a></p>
3
2016-08-04T17:56:54Z
[ "python", "pandas" ]
Writing python function to extract matching rows from pandas dataframe
38,773,954
<pre><code>df1 = pd.DataFrame({'A' : [5,5,5,5], 'B' : [4,2,1, 1], 'C' : [2,2,7,1]}) </code></pre> <p>I want to get the rows in df1 that satisfy the following condition:</p> <pre><code>df1.loc[(df1['A'] == 5) &amp; (df1['B'] == 4) &amp; (df1['C'] == 2)] </code></pre> <p>How can I make it more generic, i.e. have a function where I specify both the column names and the values I am looking for as arguments?</p>
1
2016-08-04T17:35:58Z
38,774,330
<p>One option would be to use <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.query.html" rel="nofollow"><code>query</code></a>. For the conditions in your question, this would involve constructing a string along the lines of <code>'A==5 &amp; B==4 &amp; C==2'</code>.</p> <p>To setup the problem, I'm going to assume you provide a list of tuples, in the form of <code>(column, comparison, value)</code> as your conditions, for example <code>('A', '==', 5)</code>.</p> <p>Then you could write a function along the lines of:</p> <pre><code>def extract_matching_rows(df, conditions): conditions = ' &amp; '.join(['{}{}{}'.format(*c) for c in conditions]) return df.query(conditions) </code></pre> <p>If you only care about equality comparisons, you could just hard code in the <code>'=='</code> and eliminate it from your condition tuples. </p> <p>Example usage with slightly different conditions:</p> <pre><code>conditions = [('A', '&gt;=', 5), ('B', '==', 4), ('C', '&lt;', 3)] extract_matching_rows(df1, conditions) A B C 0 5 4 2 </code></pre> <p>Note that you can even compare columns with <code>query</code>:</p> <pre><code>conditions = [('B', '&gt;=', 'C'), ('A', '==', 5)] extract_matching_rows(df1, conditions) A B C 0 5 4 2 1 5 2 2 3 5 1 1 </code></pre>
3
2016-08-04T17:59:03Z
[ "python", "pandas" ]
Vectorizing multiple outer-products in Python?
38,773,999
<p>I have read <a href="http://stackoverflow.com/questions/35549082/summing-outer-product-of-multiple-vectors-in-einsum">this question here</a> which seems similar, but my question may be simpler. </p> <p>I have a matrix <strong>A</strong> that is of size [N x C], and a matrix <strong>X</strong> that is of size [N x D]</p> <p>For each <em>n</em>th row in <strong>A</strong>, compute it's outer product with the corresponding <em>n</em>th row in <strong>X</strong>. Each outer product will yield a matrix, of size [C x D]. Then, sum up all those matricies together to get the final matrix. </p> <p>Is there a simple non-for-loop way to do this in Python?</p> <p>Thanks! </p>
0
2016-08-04T17:38:49Z
38,774,356
<p>Take the outer product of the nth rows: its element (c, d) is A[n,c]*X[n,d]. Now sum over all n and you get Sum_n A[n,c]*X[n,d], which is exactly (A<sup>T</sup>X)[c,d]. In other words, the whole computation reduces to the single matrix product <code>A.T.dot(X)</code>, with no loop at all.</p>
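The identity can be checked numerically; a small sketch (not part of the original answer) comparing the explicit loop against the single matrix product, with `einsum` as an equivalent spelling:

```python
import numpy as np

# Verify that summing the N outer products equals A.T.dot(X).
rng = np.random.RandomState(0)
N, C, D = 5, 3, 4
A = rng.rand(N, C)
X = rng.rand(N, D)

loop_sum = sum(np.outer(A[n], X[n]) for n in range(N))  # [C x D], explicit loop
vectorized = A.T.dot(X)                                 # same result, no loop
einsum_ver = np.einsum('nc,nd->cd', A, X)               # equivalent einsum form

ok = np.allclose(loop_sum, vectorized) and np.allclose(loop_sum, einsum_ver)
```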
1
2016-08-04T18:00:15Z
[ "python", "matrix", "vectorization" ]
How to calculate product weight in odoo?
38,774,036
<p>I want to calculate the total product weight in Odoo quotations and sale orders, but I don't know how to write the method for it. Can someone give me a hint? <a href="http://i.stack.imgur.com/2g3ST.png" rel="nofollow"><img src="http://i.stack.imgur.com/2g3ST.png" alt="here is in image total weight written in bottom"></a></p> <pre><code>class ProductWeight(models.Model): _inherit = "sale.order" th_weight = fields.Float('Weight') @api.one def _calcweight(self): currentweight = 0 for weight in self.weight_ids: currentweight = currentweight + weight.th_weight self.weight_total = currentweight weight_total = fields.Float('Total Weight', store=True, compute="_calcweight") </code></pre> <p>Here is my method for calculating the total weight above, but it's not right; it gives me 404 errors.</p>
0
2016-08-04T17:41:09Z
38,797,502
<p>Well, I suggest using the following code:</p> <pre><code>from openerp import models, fields, api class ProductWeight(models.Model): _inherit = 'sale.order.line' th_weight = fields.Float(string='Weight', store=True, related='product_id.weight') class ProductWeightOrder(models.Model): _inherit = 'sale.order' @api.depends('order_line.th_weight') def _calcweight(self): currentweight = 0 for order_line in self.order_line: currentweight = currentweight + order_line.th_weight self.weight_total = currentweight weight_total = fields.Float(compute='_calcweight', string='Total Weight') </code></pre> <p>You must set the weight parameter in the product master data. Alternatively, you can modify the <code>th_weight</code> field so that it is not related to the product; in that case a user would have to fill in this information manually.</p>
2
2016-08-05T20:43:20Z
[ "python", "openerp", "odoo-8" ]
How do you access the Z axis in Tiff's when they are converted to a numpy array? The shape is only in 2 dimensions
38,774,059
<p>When you convert a Tiff to a numpy array I can't access or control the Z axis. Any help is appreciated.</p> <p>Edit: Adding code</p> <pre><code>img = Image.open("annotation_50.tif") imgarray = np.array(img) print imgarray.shape() </code></pre> <p>Results in (160,228) The x and y values are accurate but the z axis value isn't showing up. I would expect (168,228,264) because there are 264 images in the tiff, making it a 3D array. </p>
0
2016-08-04T17:42:44Z
38,774,509
<p>Well, without seeing any of your code I don't know exactly what you are doing, but here is a suggestion. Note that <code>seek()</code> only selects the current frame (it returns <code>None</code>), so read each frame with <code>np.array(img)</code> after seeking; <code>n_frames</code> requires a reasonably recent Pillow:</p> <pre><code>from PIL import Image import numpy as np img = Image.open('multi-tiff.tiff') # multi-tiff.tiff is your image len_Z = img.n_frames # number of frames in the stack len_X, len_Y = img.size # width, height individual_imgs = np.zeros((len_Z, len_Y, len_X)) for i in range(len_Z): img.seek(i) # select frame i individual_imgs[i, :, :] = np.array(img) </code></pre>
0
2016-08-04T18:10:21Z
[ "python", "numpy", "matplotlib" ]
Get the input text from Entry widget in Tkinter and use it
38,774,071
<p>I have searched for a couple of hours but can't find someone with the same problem. My question comes after the code.</p> <pre><code>class Interface: def __init__(self, root): --- Stuff Here --- self.hEntry = Entry(lFrame) self.hEntry.bind("&lt;Return&gt;", self.aMethodToGetText) --- Stuff Here --- def aMethodToGetText(self, event): return event.widget.get() def anotherMethod(self): --- Stuff Here --- self.hEntry.pack() h = self.aMethodToGetText()?????????????????? --- Stuff Here --- </code></pre> <p>I want to give my variable <em>h</em> the value that the method <em>aMethodToGetText</em> returns. Calling it like that will ofc give an error: </p> <p>TypeError: aMethodToGetText() missing 1 required positional argument: 'event'</p> <p>Using h = self.aMethodToGetText(self.hEntry) and removing <em>widget.</em> doesn't work either.</p> <p>It can be solved easy if you have 1 Entry. But I have many Entrys and have to make a general text getter method for all my Entrys.</p> <p><strong>EDIT:</strong> So I need one function for each entry? I want to use each entry in different conditions, functions and slicing. That's why I need a variable for each entry. I can't move my code in anotherMethod(self) to aMethodToGetText(self, event) because I don't want the function to start over every time the Enter-button is pressed. If I press Enter on any entry I want them to do the same thing, assign the entry to a variable. But each entry should be assigned to a different variable. 
I need to compare the entries with each other.</p> <p><strong>EDIT 2:</strong> </p> <pre><code>class Interface: def __init__(self, root): --- Stuff Here --- self.aListForEntries = [] self.hEntry = Entry(lFrame) self.hEntry.bind("&lt;Return&gt;", self.aMethodToGetText) self.aEntry = Entry(lFrame) self.aEntry.bind("&lt;Return&gt;", self.aMethodToGetText) --- Stuff Here --- def aMethodToGetText(self, event): entry = event.widget.get() self.aListForEntries.append(entry) def anotherMethod(self): --- Stuff Here --- self.hEntry.pack() h = self.entryList[0] #IndexError: list index out of range --- Check the first Entry --- self.aEntry.pack() a = self.entryList[1] --- Check the second Entry --- if h == a: Do stuff --- Stuff Here --- </code></pre> <p>I get an error because the list is empty before the user has pressed Enter. I tried to pause the script with <code>time.sleep(x)</code> before the line <code>h = self.entryList[0]</code>, but that doesn't work. I don't want the program to run through the whole <code>anotherMethod(self)</code> before the user has pressed Enter.</p>
-1
2016-08-04T17:43:20Z
38,775,103
<p>I don't know if it would work, but have you tried with a lambda? Something like:</p> <pre><code>lambda event, h = event.widget.get(): self.aMethodToGetText(event) </code></pre>
-2
2016-08-04T18:43:31Z
[ "python", "python-3.x", "tkinter" ]
Get the input text from Entry widget in Tkinter and use it
38,774,071
<p>I have searched for a couple of hours but can't find someone with the same problem. My question comes after the code.</p> <pre><code>class Interface: def __init__(self, root): --- Stuff Here --- self.hEntry = Entry(lFrame) self.hEntry.bind("&lt;Return&gt;", self.aMethodToGetText) --- Stuff Here --- def aMethodToGetText(self, event): return event.widget.get() def anotherMethod(self): --- Stuff Here --- self.hEntry.pack() h = self.aMethodToGetText()?????????????????? --- Stuff Here --- </code></pre> <p>I want to give my variable <em>h</em> the value that the method <em>aMethodToGetText</em> returns. Calling it like that will ofc give an error: </p> <p>TypeError: aMethodToGetText() missing 1 required positional argument: 'event'</p> <p>Using h = self.aMethodToGetText(self.hEntry) and removing <em>widget.</em> doesn't work either.</p> <p>It can be solved easy if you have 1 Entry. But I have many Entrys and have to make a general text getter method for all my Entrys.</p> <p><strong>EDIT:</strong> So I need one function for each entry? I want to use each entry in different conditions, functions and slicing. That's why I need a variable for each entry. I can't move my code in anotherMethod(self) to aMethodToGetText(self, event) because I don't want the function to start over every time the Enter-button is pressed. If I press Enter on any entry I want them to do the same thing, assign the entry to a variable. But each entry should be assigned to a different variable. 
I need to compare the entries with each other.</p> <p><strong>EDIT 2:</strong> </p> <pre><code>class Interface: def __init__(self, root): --- Stuff Here --- self.aListForEntries = [] self.hEntry = Entry(lFrame) self.hEntry.bind("&lt;Return&gt;", self.aMethodToGetText) self.aEntry = Entry(lFrame) self.aEntry.bind("&lt;Return&gt;", self.aMethodToGetText) --- Stuff Here --- def aMethodToGetText(self, event): entry = event.widget.get() self.aListForEntries.append(entry) def anotherMethod(self): --- Stuff Here --- self.hEntry.pack() h = self.entryList[0] #IndexError: list index out of range --- Check the first Entry --- self.aEntry.pack() a = self.entryList[1] --- Check the second Entry --- if h == a: Do stuff --- Stuff Here --- </code></pre> <p>I get an error because the list is empty before the user has pressed Enter. I tried to pause the script with <code>time.sleep(x)</code> before the line <code>h = self.entryList[0]</code>, but that doesn't work. I don't want the program to run through the whole <code>anotherMethod(self)</code> before the user has pressed Enter.</p>
-1
2016-08-04T17:43:20Z
38,790,899
<p><strong>EDIT:</strong> The example has two entry widgets and configures the label to display whether their contents match.</p> <pre><code>import tkinter as tk class Example(object): def __init__(self, master): self._master = master self._lbl = tk.Label(master, text = '?') self._lbl.pack(side = tk.LEFT) self.hEntry = tk.Entry(master) self.hEntry.bind("&lt;Return&gt;", self.update) self.hEntry.pack() self.aEntry = tk.Entry(master) self.aEntry.bind("&lt;Return&gt;", self.update) self.aEntry.pack() def update(self, event): if self.hEntry.get() == self.aEntry.get(): self._lbl.config(text = "Match") else: self._lbl.config(text = "No Match") if __name__ == '__main__': root = tk.Tk() app = Example(root) root.mainloop() </code></pre>
0
2016-08-05T13:48:37Z
[ "python", "python-3.x", "tkinter" ]
Selecting using random pairs of data in pandas
38,774,118
<p>I have a very large file with three columns. The first two are integers and the third is a string. I read it in using pandas using</p> <pre><code>data = pd.read_csv("edges+stuff.txt", sep=' ', header=None, dtype={0:np.uint32, 1:np.uint32, 2:np.str}) </code></pre> <p>Here is some example fake data:</p> <pre><code>2 0 Somestuff9 2 0 Somestuff0 1 1 Somestuff5 0 0 Somestuff7 2 0 Somestuff9 2 0 Somestuff5 2 1 Somestuff2 1 1 Somestuff8 1 1 Somestuff2 1 0 Somestuff4 2 1 Somestuff3 0 2 Somestuff9 1 1 Somestuff10 1 0 Somestuff9 </code></pre> <p>I would like to perform the following random sampling which I am stuck on. I want to pick a number of random pairs that exist from the data frame. I don't want to pick a random row as, for example "1 1" occurs four times but I would like to have an equal chance of picking any pair that exists in the data frame. If I did pick "1 1" I would then like to output all the rows that start "1 1". </p> <p>Using my example fake data, I would like to select some pairs randomly from [(0,0), (1,0), (1,1), (0,2), (2,0), (2,1)] (these are all the pairs that exist in the data) and then use those pairs to select rows from the dataframe.</p> <p>One way to do this would be to take the first two columns, sort them and perform the equivalent of <code>np.unique</code>. Then select random pairs from this uniqued list and then use them to select from the original dataframe.</p> <blockquote> <p>Is there some way to do this efficiently in pandas?</p> </blockquote>
0
2016-08-04T17:45:58Z
38,774,349
<p>Here's one way:</p> <pre><code>df.head() Out: col1 col2 col3 0 2 0 Somestuff9 1 2 0 Somestuff0 2 1 1 Somestuff5 3 0 0 Somestuff7 4 2 0 Somestuff9 </code></pre> <p>Select one pair randomly:</p> <pre><code>df[['col1', 'col2']].drop_duplicates().sample(n=1) Out: col1 col2 0 2 1 </code></pre> <p>(Here <code>drop_duplicates</code> drops all rows that have the same col1/col2 pair except for the first one, and <code>.sample(n=1)</code> selects one among them.)</p> <p>All rows that have <code>col1=2</code>, <code>col2=1</code>:</p> <pre><code>df[['col1', 'col2']].drop_duplicates().sample(n=1).merge(df) Out: col1 col2 col3 0 2 1 Somestuff2 1 2 1 Somestuff3 </code></pre>
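A self-contained run of the same approach on the question's sample data. The column names are my assumption (the question's file has no header), and `random_state` pins the draw so the sketch is reproducible:

```python
import pandas as pd

# The question's sample data, with assumed column names col1/col2/col3.
vals1 = [2, 2, 1, 0, 2, 2, 2, 1, 1, 1, 2, 0, 1, 1]
vals2 = [0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 2, 1, 0]
vals3 = ['Somestuff%d' % i for i in (9, 0, 5, 7, 9, 5, 2, 8, 2, 4, 3, 9, 10, 9)]
df = pd.DataFrame({'col1': vals1, 'col2': vals2, 'col3': vals3})

# Sample one (col1, col2) pair uniformly over the *distinct* pairs, then
# merge back to pull every row carrying that pair.
pair = df[['col1', 'col2']].drop_duplicates().sample(n=1, random_state=0)
rows = pair.merge(df)
```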
2
2016-08-04T18:00:01Z
[ "python", "pandas" ]
Tasks in CELERYBEAT_SCHEDULE not being processed
38,774,128
<p>I am trying to setup a dummy task in Celery that runs every 3 seconds but have had little success so far. This is the output I am getting:</p> <p><a href="http://i.stack.imgur.com/zbTG5.png" rel="nofollow"><img src="http://i.stack.imgur.com/zbTG5.png" alt="enter image description here"></a></p> <p>I've set up celery as follows:</p> <p>In <strong>settings.py</strong>:</p> <pre><code>from datetime import timedelta BROKER_URL = 'redis://localhost:6379/0' CELERY_RESULT_BACKEND = 'redis://localhost:6379' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_TIMEZONE = 'UTC' CELERY_IMPORTS = ("api.tasks") CELERYBEAT_SCHEDULE = { 'add_job': { 'task': 'add_job', 'schedule': timedelta(seconds=3), 'args': (16, 16) }, } CELERY_TIMEZONE = 'UTC' </code></pre> <p>In <strong>celery.py</strong>:</p> <pre><code>from __future__ import absolute_import import os from celery import Celery from django.conf import settings # set the default Django settings module for the 'celery' program. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'blogpodapi.settings') app = Celery( 'blogpodapi', ) # Using a string here means the worker will not have to # pickle the object when using Windows. app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) @app.task(bind=True) def debug_task(self): print('Request: {0!r}'.format(self.request)) </code></pre> <p>In <strong>tasks.py</strong></p> <pre><code>from celery.task import task @task(name='add_job') def add_job(x, y): r = x + y print "task arguments: {x}, {y}".format(x=x, y=y) print "task result: {r}".format(r=r) return r </code></pre> <p>Have I done anything wrong in the way that I have set it up?</p>
0
2016-08-04T17:46:34Z
38,775,069
<p>Okay, the very basic mistake I see is that most of the settings you've mentioned in your <code>settings.py</code> need to go in <code>celery.py</code>, especially the <code>CELERYBEAT_SCHEDULE</code>.</p> <p>You are doing everything right; it's just that your Celery worker is waiting for a task it never receives, because it reads from celery.py and not from settings.py. Hence nothing is happening.</p> <p>See my <strong>celery.py</strong> and also the <strong>settings.py</strong> for reference:</p> <p>celery.py -&gt; <a href="https://github.com/amyth/hammer/blob/master/config/celery.py" rel="nofollow">https://github.com/amyth/hammer/blob/master/config/celery.py</a></p> <p>settings.py -&gt; <a href="https://github.com/amyth/hammer/blob/master/config/settings.py" rel="nofollow">https://github.com/amyth/hammer/blob/master/config/settings.py</a></p> <p>I have used <strong>crontab</strong> because I wanted to execute the task at a particular time of day, so you don't need to worry about that; your <code>timedelta</code> schedule is perfect for what you want to do.</p> <p>Also, whatever blog or tutorial you are following for Celery, check again what each of those settings is required for and whether you need all of them.</p>
1
2016-08-04T18:41:51Z
[ "python", "django", "celery", "django-celery" ]
Django conditional filter based on local variable
38,774,133
<p>I'm new with django and was wondering if there is a more efficient way to filter conditionally besides if statement.</p> <p>Given:</p> <pre><code>test_names = ["all"] test_types = ["a", "b", "c"] ... (more lists) </code></pre> <p>I know I can do this:</p> <pre><code>q = tests.objects.all() if test_names[0] == "all": q = q.all() else: q = q.filter("name__in=test_names") if test_types[0] == "all": q = q.all() else: q = q.filter("type__in=test_type") etc... </code></pre> <p>I would like something like this:</p> <pre><code>q = test.objects \ .filter((if test_names[0]=="all") "name__in=test_names") \ .filter((if test_types[0]=="all") "type__in=test_types") \ ...etc </code></pre> <p>I want to avoid the if statement because I have to do this several times on the same query data based on different lists like "test_names".</p>
1
2016-08-04T17:46:50Z
38,774,324
<p>You have conditions in your lists, so you need <code>if</code>s for the different conditions for sure. But you might be able to get away with one query statement if you build the filter dicts up front:</p> <pre><code>test_name_filter = {} if test_names[0] == 'all' else {'name__in': test_names} test_type_filter = {} if test_types[0] == 'all' else {'type__in': test_types} # ...... q = test.objects.filter(**test_name_filter).filter(**test_type_filter) </code></pre> <p>This should work because:</p> <ol> <li><p>The Django ORM <code>filter</code> can accept filter conditions as a dict, with keys as criteria and values as filter values.</p></li> <li><p>An empty dict is like not filtering on anything, meaning it returns everything.</p></li> </ol>
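This dict-building pattern generalizes to any number of lists. A small helper sketch of my own (plain Python, so each new list costs one keyword argument rather than another conditional expression); the returned dict would then be unpacked into `filter(**filters)`:

```python
def build_filters(**criteria):
    # criteria maps a field name to its list of requested values; a list
    # whose first element is "all" means "don't filter on this field".
    filters = {}
    for field, values in criteria.items():
        if values and values[0] != 'all':
            filters['{}__in'.format(field)] = values
    return filters

filters = build_filters(name=['all'], type=['a', 'b', 'c'])
# then, in Django: q = tests.objects.filter(**filters)
```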
0
2016-08-04T17:58:30Z
[ "python", "django", "filter" ]
Slicing a MultiIndex DataFrame by multiple values from a specifid level
38,774,134
<p>I want to slice a MultiIndex <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow">DataFrame</a> by multiple values from a secondary level. For example, in the following DataFrame:</p> <pre><code> val1 val2 ind1 ind2 ind3 1 6 s1 10 8 2 7 s1 20 6 3 8 s2 30 4 4 9 s2 50 2 5 10 s3 60 0 </code></pre> <p>I wish to slice only the rows in which <code>ind3 == s1</code> <strong>or</strong> <code>ind3 == s3</code>:</p> <pre><code> val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>The best hypothetical option would be to pass multiple arguments to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>.xs</code></a>, since it is possible to explicitly state the desired <code>level</code>. </p> <p>I could obviously concat all the sliced-by-single-value DataFrames:</p> <pre><code>In[2]: pd.concat([df.xs('s1',level=2), df.xs('s3',level=2)]) Out[2]: val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>But <em>(a)</em> it's tedious and not so readable when using more than 2 values, and <em>(b)</em> for large DataFrames it's quite heavy (or at least heavier than a multi-value slicing option, if one exists).</p> <p>Thanks ahead! And here's <em>the code to build the example DataFrame</em>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'ind1':[1,2,3,4,5], 'ind2':[6,7,8,9,10], 'ind3':['s1','s1','s2','s2','s3'], 'val1':[10,20,30,50,60], 'val2':[8,6,4,2,0]}).set_index(['ind1','ind2','ind3']) </code></pre>
1
2016-08-04T17:46:51Z
38,774,234
<p>As with most selection from a DataFrame, you can use a mask or an indexer (<code>loc</code> in this case).</p> <p>To get the mask, you can use <code>get_level_values</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Index.get_level_values.html" rel="nofollow">docs</a>) on the MultiIndex followed by <code>isin</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html" rel="nofollow">docs</a>).</p> <pre><code>m = df.index.get_level_values('ind3').isin(['s1', 's3']) df[m].reset_index(level=2, drop=True) </code></pre> <p>To use <code>loc</code>:</p> <pre><code>df.loc[(slice(None), slice(None), ['s1', 's3']), :].reset_index(level=2, drop=True) </code></pre> <p>both output</p> <pre><code> val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>Note: the <code>loc</code> way can also be written as seen in Alberto Garcia-Raboso's answer. Many people prefer that syntax as it is more consistent with <code>loc</code> syntax for an <code>Index</code>. Both syntax styles are discussed in <a href="http://pandas.pydata.org/pandas-docs/stable/advanced.html#using-slicers" rel="nofollow">the docs</a>.</p>
3
2016-08-04T17:53:13Z
[ "python", "python-3.x", "pandas", "dataframe", "multi-index" ]
Slicing a MultiIndex DataFrame by multiple values from a specifid level
38,774,134
<p>I want to slice a MultiIndex <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html" rel="nofollow">DataFrame</a> by multiple values from a secondary level. For example, in the following DataFrame:</p> <pre><code> val1 val2 ind1 ind2 ind3 1 6 s1 10 8 2 7 s1 20 6 3 8 s2 30 4 4 9 s2 50 2 5 10 s3 60 0 </code></pre> <p>I wish to slice only the rows in which <code>ind3 == s1</code> <strong>or</strong> <code>ind3 == s3</code>:</p> <pre><code> val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>The best hypothetical option would be to pass multiple arguments to <a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.xs.html" rel="nofollow"><code>.xs</code></a>, since it is possible to explicitly state the desired <code>level</code>. </p> <p>I could obviously concat all the sliced-by-single-value DataFrames:</p> <pre><code>In[2]: pd.concat([df.xs('s1',level=2), df.xs('s3',level=2)]) Out[2]: val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>But <em>(a)</em> it's tedious and not so readable when using more than 2 values, and <em>(b)</em> for large DataFrames it's quite heavy (or at least heavier than a multi-value slicing option, if one exists).</p> <p>Thanks ahead! And here's <em>the code to build the example DataFrame</em>:</p> <pre><code>import pandas as pd df = pd.DataFrame({'ind1':[1,2,3,4,5], 'ind2':[6,7,8,9,10], 'ind3':['s1','s1','s2','s2','s3'], 'val1':[10,20,30,50,60], 'val2':[8,6,4,2,0]}).set_index(['ind1','ind2','ind3']) </code></pre>
1
2016-08-04T17:46:51Z
38,774,266
<p>You can use an <code>IndexSlice</code>:</p> <pre><code>idx = pd.IndexSlice result = df.loc[idx[:, :, ['s1', 's3']], idx[:]] result.index = result.index.droplevel('ind3') print(result) </code></pre> <p>Output:</p> <pre class="lang-none prettyprint-override"><code> val1 val2 ind1 ind2 1 6 10 8 2 7 20 6 5 10 60 0 </code></pre> <p>The second line above can also be written as</p> <pre><code>result = df.loc(axis=0)[idx[:, :, ['s1', 's3']]] </code></pre>
4
2016-08-04T17:54:49Z
[ "python", "python-3.x", "pandas", "dataframe", "multi-index" ]
Tiff median filter exports as single frame OpenCV Python
38,774,135
<p>I'm trying to have a program run over a sequence of 60 frames (in a tiff file) and apply a noise reduction filter (Median) in order to clean the frames up a bit before analysis. However, my program (which takes it frame by frame) will output a single-frame tiff; why is that? How could I take care of that?</p> <pre><code>from PIL import Image import cv2 import numpy as np im = Image.open('example_recording.tif').convert('L') im.save('greyscale_example.tif') #converts to greyscale width,height = im.size image_lookup = 0 class ImageSequence: def __init__(self, im): self.im = im def __getitem__(self, ix): try: if ix: self.im.seek(ix) return self.im except EOFError: raise IndexError # if end of sequence for frame in ImageSequence(im): imarray = np.array(frame) Blur = cv2.medianBlur(imarray,5) frame = Image.fromarray(Blur) im.save('corrected.tif') </code></pre>
1
2016-08-04T17:46:51Z
38,774,695
<p>I think you are not re-composing your final stack correctly (not shown above?), and saving a single frame (the last frame)?</p> <p>An alternative is to forgo OpenCV and use scipy:</p> <pre><code>import numpy import scipy from scipy import ndimage a = numpy.random.randint(0,255,(100,100,60)) a.shape #(100L, 100L, 60L) b = scipy.ndimage.filters.generic_filter(a, numpy.median, 5) b.shape #(100L, 100L, 60L) </code></pre>
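One caveat on the snippet above: `generic_filter` with a scalar size of 5 on a 3-D array uses a 5×5×5 footprint, so it also mixes neighbouring frames along the Z axis. If each frame should be filtered on its own (which is what the asker's `cv2.medianBlur(frame, 5)` does), a per-frame sketch:

```python
import numpy as np
from scipy import ndimage

# Per-frame variant: apply a 2-D median filter (size 5) to each frame of
# the stack separately, leaving the Z axis untouched.
stack = np.random.randint(0, 255, (60, 100, 100)).astype(np.uint8)
filtered = np.empty_like(stack)
for i in range(stack.shape[0]):
    filtered[i] = ndimage.median_filter(stack[i], size=5)
```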
1
2016-08-04T18:20:28Z
[ "python", "opencv", "numpy", "tiff" ]
Django Select widget not updating in UpdateView
38,774,209
<p>In my Django program, I have an UpdateView that takes an Event object (one of my models) and populates the form fields with the object's data. Everything works fine for the most part, including textboxes, datepicker fields, and autocomplete boxes. The one thing that consistently does not populate is my select field: a dropdown of available assistants. For some reason, it never updates and always goes to a blank choice.</p> <p><strong>From views.py</strong>, snippet from my UpdateView extending class</p> <pre><code>def get_context_data(self, **kwargs): context = super(SchedulerEditView, self).get_context_data(**kwargs) context['form'] = self.form_class(instance=self.request.user, initial={ 'hospital': self.object.hospital, 'surgeon': self.object.surgeon, 'procedure': self.object.procedure, 'date': self.object.date.strftime('%m/%d/%Y %I:%M %p'), 'patient': self.object.patient, 'requested_assistant': 2, # Why won't this work?? 'insurance': self.object.insurance </code></pre> <p>As you can see, I tried setting everything manually, and I even tried just a static number that I knew was associated with a pk in the select field, in this case, 2. </p> <p><strong>From event_form.html</strong>, the part of the template relevant to the assistant part of the Event model:</p> <pre><code>&lt;div class="row form-entry center"&gt; &lt;span id="requested_assistant" class=""&gt;{{ form.requested_assistant }}&lt;/span&gt; &lt;/div&gt; </code></pre> <p>And lastly, from the same file, the only other possible thing I can think of that is interfering:</p> <pre><code> $('#id_requested_assistant').append('&lt;option value="0"selected="selected" hidden&gt;Requested Assistant&lt;/option&gt;'); $("#id_requested_assistant").change(function () { if($(this).val() == "0") $(this).addClass("empty"); else $(this).removeClass("empty") }); $("#id_requested_assistant").change(); </code></pre> <p>This snippet was designed to add a placeholder to the select box. 
It's an ugly workaround, but it seemed to work.</p> <p>Thanks for any help!</p> <p><strong>EDIT: I tried a suggestion and changed views.py to this:</strong></p> <pre><code>def get_initial(self): return {'hospital': self.object.hospital, 'surgeon': self.object.surgeon, 'procedure': self.object.procedure, 'date': self.object.date.strftime('%m/%d/%Y %I:%M %p'), 'patient': self.object.patient, 'requested_assistant_id': 2, # Why won't this work?? 'insurance': self.object.insurance, } </code></pre> <p>Still no luck though</p>
0
2016-08-04T17:50:57Z
38,778,072
<p>Hit me with a brick, it was that JavaScript snippet that messed me up. I switched my Select over to a jQuery Select2 widget so I could set a placeholder label, which was the only reason I needed that snippet in the first place.</p> <p>Case closed!</p>
0
2016-08-04T21:56:39Z
[ "jquery", "python", "django" ]
http request and regex in Python for HTML parser
38,774,213
<p>When I execute the script, the result is empty. Why? The script connects to a site and parses the HTML tag <code>&lt;a&gt;</code>:</p> <pre><code>#!/usr/bin/python3 import re import socket import urllib, urllib.error import http.client import sys conn = http.client.HTTPConnection('www.guardaserie.online'); headers = { "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Content-type": "application/x-www-form-urlencoded; charset=UTF-8" } params = urllib.parse.urlencode({"s":"hannibal"}) conn.request('GET', '/',params, headers) response = conn.getresponse(); site = re.search('&lt;a href="(.*)" class="box-link-serie"&gt;', str(response.read()), re.M|re.I) if(site): print(site.group()) </code></pre>
0
2016-08-04T17:51:10Z
38,774,564
<p>It's likely the pattern you are searching for is non-existent in the read response, or it chokes at some point trying to parse html.</p> <pre><code>re.search( 'href="(.*)" class="box-link-serie"', str(response.read()), re.M | re.I ) </code></pre> <p>Using something more generic or another parser method will likely lead you to your desired result.</p>
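If the regex route keeps coming up empty, the stdlib <code>html.parser</code> is a more forgiving option. A minimal Python 3 sketch (the class name and the sample markup below are made up for illustration):

```python
from html.parser import HTMLParser

class SerieLinkParser(HTMLParser):
    """Collect the href of every <a> tag carrying the target class."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'a' and attrs.get('class') == 'box-link-serie':
            self.links.append(attrs.get('href'))

sample = '<div><a href="/serie/hannibal" class="box-link-serie">Hannibal</a></div>'
parser = SerieLinkParser()
parser.feed(sample)
print(parser.links)  # ['/serie/hannibal']
```

Unlike a regex, this keeps working if attribute order or whitespace in the markup changes.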
1
2016-08-04T18:13:21Z
[ "python", "get", "http.client" ]
Emails are not getting received by the user while using celery with Django
38,774,347
<p>I am using Django 1.8.2, Celery 3.1.23 and RabbitMQ as the broker. I am using Amazon SES for email. Now when I send email from the Django shell it is received by the users, but when I schedule it through celery it is not received.</p> <p>Here are my project files:</p> <p><strong>project/src/settings/base.py</strong> <em>(censored)</em></p> <pre><code>BROKER_URL = 'amqp://' CELERY_RESULT_BACKEND = 'rpc://' BROKER_POOL_LIMIT = 3 CELERY_TASK_SERIALIZER = 'json' CELERY_RESULT_SERIALIZER = 'json' CELERY_ACCEPT_CONTENT = ['application/json'] CELERY_TIMEZONE = 'Asia/Kolkata' CELERY_ENABLE_UTC = True CELERY_SEND_TASK_ERROR_EMAILS = True SERVER_EMAIL = 'abc@example.com' ADMINS = [ ('abc', 'abc@example.com') ] EMAIL_BACKEND = 'django_smtp_ssl.SSLEmailBackend' DEFAULT_FROM_EMAIL = 'abc@example.com' EMAIL_USE_TLS = True EMAIL_PORT = 465 EMAIL_TIMEOUT = 10 EMAIL_HOST = 'email-smtp.us-east-1.amazonaws.com' EMAIL_HOST_USER = 'abcjdjdlasskjjdklsaj' EMAIL_HOST_PASSWORD = 'djkashdklahsjdhasljkdhjksahdjkashdjakhdak' </code></pre> <p><strong>project/src/settings/celery_app.py</strong></p> <pre><code>from __future__ import absolute_import import os from celery import Celery from django.conf import settings from kombu import serialization serialization.registry._decoders.pop("application/x-python-serialize") # set the default Django settings module for the 'celery' program. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings.production') app = Celery('project') # Using a string here means the worker will not have to # pickle the object when using Windows. 
app.config_from_object('django.conf:settings') app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) if __name__ == '__main__': app.start() @app.task(bind=True) def debug_task(self): print('Request: {0!r}'.format(self.request)) </code></pre> <p><strong>project/src/app/tasks.py</strong></p> <pre><code>@task() def send_mail_reminder(reminder_id): logger.info("Send Email") try: reminder = Reminder.objects.get(pk=reminder_id) except Reminder.DoesNotExist: return body = "{0}".format(reminder.message) try: send_mail("Reminder App Notification",body,settings.DEFAULT_FROM_EMAIL,[reminder.email]) logger.info("Email Successfully send") return "Email Successfully send" except Exception as e: logger.info("There is some problem while sending email") print e return e </code></pre> <p><strong>project/src/app/models.py</strong></p> <pre><code>def schedule_reminder(self): """ Schedule a celery task to send the reminder """ date_time = datetime.combine(self.date,self.time) reminder_time = arrow.get(date_time).replace(tzinfo=self.time_zone.zone) from .tasks import send_sms_reminder, send_mail_reminder # result='' # result = send_sms_reminder.apply_async((self.pk,),eta=reminder_time,serializer = 'json') # else: result = send_mail_reminder.apply_async((self.pk,),eta=reminder_time, serializer = 'json') return result.id def save(self, *args, **kwargs): """ Now we need to do is ensure Django calls our schedule_reminder method every time an Reminder object is created or updated. 
""" # Check if we have scheduled a celery task for this reminder before if self.task_id: #Revoke that remnder if its time has changed celery_app.control.revoke(self.task_id) # save our reminder, which populates self.pk, # which is used in schedule_reminder # Schedule a reminder task for this reminder self.task_id = self.schedule_reminder() # Save our reminder again with the task_id print "Args:%s,Kwargs:%s"%(args,kwargs) print self.task_id super(Reminder, self).save(*args, **kwargs) </code></pre> <p>Here is the celery log <strong>celery.log</strong></p> <pre><code>[tasks] . RemindMeLater.settings.celery_app.debug_task . Reminder.tasks.send_email_reminder . Reminder.tasks.send_mail_reminder . Reminder.tasks.send_sms_reminder [2016-08-04 23:21:15,507: INFO/MainProcess] Connected to amqp://guest:**@127.0.0.1:5672// [2016-08-04 23:21:15,520: INFO/MainProcess] mingle: searching for neighbors [2016-08-04 23:21:16,532: INFO/MainProcess] mingle: all alone [2016-08-04 23:21:16,546: WARNING/MainProcess] celery@Rohans-MacBook-Pro.local ready. 
[2016-08-04 23:22:21,260: INFO/MainProcess] Received task: Reminder.tasks.send_mail_reminder[e45b972a-d937-4ed9-bd75-bab331b6ed47] eta:[2016-08-04 23:22:00+05:30] [2016-08-04 23:22:21,582: INFO/Worker-2] Reminder.tasks.send_mail_reminder[e45b972a-d937-4ed9-bd75-bab331b6ed47]: Send Email [2016-08-04 23:22:21,608: INFO/MainProcess] Task Reminder.tasks.send_mail_reminder[e45b972a-d937-4ed9-bd75-bab331b6ed47] succeeded in 0.0274002929982s: None [2016-08-04 23:22:46,629: INFO/MainProcess] Received task: Reminder.tasks.send_mail_reminder[7b14ef7b-c123-42c8-90d8-2baf4122963a] eta:[2016-08-04 23:24:00+05:30] [2016-08-04 23:24:00,895: INFO/Worker-3] Reminder.tasks.send_mail_reminder[7b14ef7b-c123-42c8-90d8-2baf4122963a]: Send Email [2016-08-04 23:24:00,920: INFO/MainProcess] Task Reminder.tasks.send_mail_reminder[7b14ef7b-c123-42c8-90d8-2baf4122963a] succeeded in 0.0263162209994s: None </code></pre> <p>When I send email from the Django shell without using celery, it goes through. And when I do it like this, it also goes through:</p> <p><strong>mail_test.py</strong></p> <pre><code>from datetime import datetime,time from Reminder.models import Reminder import arrow t = time(23,35) date = datetime.today().date() a = Reminder.objects.create(message="Hello world",date=date,time=t,email="abc@gmail.com",completed=False) from Reminder.tasks import send_mail_reminder # b = send_mail_reminder(a.id) # print "b:",b date_time = datetime.combine(a.date,a.time) reminder_time = arrow.get(date_time).replace(tzinfo=a.time_zone.zone) c = send_mail_reminder.apply_async((a.id,),eta=reminder_time, serializer='json') print "c:",c </code></pre> <p>Then, executing the command below in the terminal adds a task to celery, and it gets executed successfully.</p> <blockquote> <p>./manage.py shell &lt; mail_test.py</p> </blockquote> <p>When I use a post_save signal it also works as expected.</p>
0
2016-08-04T18:00:00Z
38,793,865
<p>It's solved. I used a post_save signal to call the task and save the task_id.</p> <pre><code>@receiver(post_save, sender=Reminder) def send_email_signal(sender,instance, created, **kwargs): if not created: return self = instance date_time = datetime.combine(self.date,self.time) reminder_time = arrow.get(date_time).replace(tzinfo=self.time_zone.zone) from .tasks import send_sms_reminder, send_mail_reminder result = send_mail_reminder.apply_async((self.id,),eta=reminder_time, serializer = 'json') if result.id: instance.task_id = result.id instance.save() </code></pre>
0
2016-08-05T16:26:03Z
[ "python", "django", "rabbitmq", "celery", "amazon-ses" ]
Remove duplicate rows from CSV
38,774,380
<p>I have a CSV file that looks like this</p> <pre><code>red,75,right red,344,right green,3,center yellow,3222,right blue,9,center black,123,left white,68,right green,47,left purple,48,left purple,988,right pink,2677,left white,34,right </code></pre> <p>I am using Python and am trying to remove rows that have a duplicate in cell 1. I know I can achieve this using something like pandas but I am trying to do it using the standard Python csv library.</p> <p>Expected Result is...</p> <pre><code>red,75,right green,3,center yellow,3222,right blue,9,center black,123,left white,68,right purple,988,right pink,2677,left </code></pre> <p>Anyone have an example?</p>
1
2016-08-04T18:01:47Z
38,774,561
<p>You can try this (Python 2 syntax; <code>fileinput</code> with <code>inplace=1</code> rewrites the file in place, keeping the first occurrence of each value):</p> <pre><code>import fileinput def main(): seen = set() # set for fast O(1) amortized lookup for line in fileinput.FileInput('1.csv', inplace=1): cell_1 = line.split(',')[0] if cell_1 not in seen: seen.add(cell_1) print line, # standard output is now redirected to the file if __name__ == '__main__': main() </code></pre>
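For reference, the same set-based filter as a standalone Python 3 function over in-memory rows (a sketch; the helper name is made up):

```python
def dedupe_by_first_cell(lines):
    """Keep only the first line seen for each value in column 1."""
    seen = set()  # set for fast O(1) amortized lookup
    result = []
    for line in lines:
        cell_1 = line.split(',')[0]
        if cell_1 not in seen:
            seen.add(cell_1)
            result.append(line)
    return result

rows = ['red,75,right', 'red,344,right', 'green,3,center', 'green,47,left']
print(dedupe_by_first_cell(rows))  # ['red,75,right', 'green,3,center']
```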
0
2016-08-04T18:13:18Z
[ "python", "csv" ]
Remove duplicate rows from CSV
38,774,380
<p>I have a CSV file that looks like this</p> <pre><code>red,75,right red,344,right green,3,center yellow,3222,right blue,9,center black,123,left white,68,right green,47,left purple,48,left purple,988,right pink,2677,left white,34,right </code></pre> <p>I am using Python and am trying to remove rows that have a duplicate in cell 1. I know I can achieve this using something like pandas but I am trying to do it using the standard Python csv library.</p> <p>Expected Result is...</p> <pre><code>red,75,right green,3,center yellow,3222,right blue,9,center black,123,left white,68,right purple,988,right pink,2677,left </code></pre> <p>Anyone have an example?</p>
1
2016-08-04T18:01:47Z
38,774,837
<p>You can simply use a dictionary where the color is the key and the value is the row. Ignore the color if it is already in the dictionary, otherwise add it and write the row to a new csv file.</p> <pre><code>import csv file_in = 'input_file.csv' file_out = 'output_file.csv' with open(file_in, 'rb') as fin, open(file_out, 'wb') as fout: reader = csv.reader(fin) writer = csv.writer(fout) d = {} for row in reader: color = row[0] if color not in d: d[color] = row writer.writerow(row) result = d.values() result # Output: # [['blue', '9', 'center'], # ['pink', '2677', 'left'], # ['purple', '48', 'left'], # ['yellow', '3222', 'right'], # ['black', '123', 'left'], # ['green', '3', 'center'], # ['white', '68', 'right'], # ['red', '75', 'right']] </code></pre> <p>And the output of the csv file:</p> <pre><code>!cat output_file.csv # Output: # red,75,right # green,3,center # yellow,3222,right # blue,9,center # black,123,left # white,68,right # purple,48,left # pink,2677,left </code></pre>
1
2016-08-04T18:27:51Z
[ "python", "csv" ]
PRAW Get message author?
38,774,410
<p>I'm trying to build a script so that when I get a message on Reddit, it displays an alert on my device and tells me the author. Something like this: </p> <p><a href="http://i.stack.imgur.com/glCEF.png" rel="nofollow"><img src="http://i.stack.imgur.com/glCEF.png" alt="enter image description here"></a></p> <p>I tried searching Reddit's documentation but I didn't find anything on the matter, including in PRAW's docs, Reddit's API docs, and on their subreddit. I even tried <code>messages.author</code> but that didn't work out either. What I want to get is this: <a href="http://i.stack.imgur.com/MHw4N.png" rel="nofollow"><img src="http://i.stack.imgur.com/MHw4N.png" alt="enter image description here"></a> So far the code looks like this:</p> <pre><code>import praw import time import os import pync from pync import Notifier print "Booting up..." def Main(): print "Searching for messages..." r = praw.Reddit(user_agent='RedditNotifier version 0.0.1') r.login('username', 'pass') for msg in r.get_unread(limit=None): if not msg: print "True" else: Notifier.notify('From:' + 'Author here', title='Reddit: New Message!', open='https://www.reddit.com/message/unread/') print msg while True: Main() time.sleep(5) </code></pre> <p>TL;DR How to get message author using PRAW</p> <p>EDIT: Image only serves to show progress so far Thanks!</p>
0
2016-08-04T18:03:35Z
38,774,727
<p>I don't know how you couldn't find it in the PRAW docs because a quick google search "praw author" gave me this <a href="http://stackoverflow.com/a/20939233/1459669">Stack Overflow</a> answer.</p> <p>A comment has an <code>author</code> attribute, which is a <code>Redditor</code> object. And to get the name from the <code>Redditor</code> object, use its <code>name</code> attribute.</p> <p>EDIT: So what you need to do is replace <code>'Author here'</code> with <code>msg.author.name</code></p>
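A quick illustration of that attribute chain with stand-in objects (these fakes only mirror the <code>.author.name</code> shape; note that on a real message <code>author</code> can be <code>None</code>, e.g. for deleted accounts, so a guard is worth having):

```python
class FakeRedditor:
    """Stand-in for praw's Redditor object; it exposes .name the same way."""
    def __init__(self, name):
        self.name = name

class FakeMessage:
    """Stand-in for an unread praw message with an .author attribute."""
    def __init__(self, author):
        self.author = author

def sender_label(msg):
    # Guard against author being None (deleted accounts).
    return msg.author.name if msg.author is not None else '[deleted]'

msg = FakeMessage(FakeRedditor('example_user'))
print('From: ' + sender_label(msg))  # From: example_user
```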
1
2016-08-04T18:21:44Z
[ "python", "reddit", "praw" ]
Python 3.x: How to compare two lists containing dictionaries where order doesn't matter
38,774,549
<p>I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal. </p> <p>The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries. </p> <pre><code>a = {'color': 'red'} b = {'shape': 'triangle'} c = {'children': [{'color': 'red'}, {'age': 8},]} test_a = [a, b, c] test_b = [b, c, a] print(test_a == test_b) # False print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict' </code></pre> <p>Is there a good way to approach this to show that <code>test_a</code> has the same contents as <code>test_b</code>?</p>
2
2016-08-04T18:12:39Z
38,774,680
<p>In this case they are the same dicts so you can compare ids (<a href="https://docs.python.org/3/library/functions.html#id" rel="nofollow">docs</a>). Note that if you introduced a new <code>dict</code> whose values were identical it would still be treated differently. I.e. <code>d = {'color': 'red'}</code> would be treated as not equal to <code>a</code>. </p> <pre><code>sorted(map(id, test_a)) == sorted(map(id, test_b)) </code></pre> <p>As @jsbueno points out, you can do this with the kwarg <code>key</code>.</p> <pre><code>sorted(test_a, key=id) == sorted(test_b, key=id) </code></pre>
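To make the identity caveat concrete (Python 3; the variable names echo the question):

```python
a = {'color': 'red'}
b = {'shape': 'triangle'}

test_a = [a, b]
test_b = [b, a]

# Same objects in a different order: equal under the id comparison.
same_objects = sorted(map(id, test_a)) == sorted(map(id, test_b))

# An equal-valued but distinct dict has a different id, so this fails.
d = {'color': 'red'}
distinct_object = sorted(map(id, [d, b])) == sorted(map(id, test_a))

print(same_objects, distinct_object)  # True False
```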
0
2016-08-04T18:19:37Z
[ "python", "python-3.x" ]
Python 3.x: How to compare two lists containing dictionaries where order doesn't matter
38,774,549
<p>I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal. </p> <p>The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries. </p> <pre><code>a = {'color': 'red'} b = {'shape': 'triangle'} c = {'children': [{'color': 'red'}, {'age': 8},]} test_a = [a, b, c] test_b = [b, c, a] print(test_a == test_b) # False print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict' </code></pre> <p>Is there a good way to approach this to show that <code>test_a</code> has the same contents as <code>test_b</code>?</p>
2
2016-08-04T18:12:39Z
38,774,700
<p>You can use a simple loop to check if each of one list is in the other:</p> <pre><code>def areEqual(a, b): if len(a) != len(b): return False for d in a: if d not in b: return False return True </code></pre>
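Exercised on the question's data (the function is repeated so the snippet runs on its own; note it does O(n&sup2;) comparisons and ignores duplicate counts, which is fine for small lists):

```python
def areEqual(a, b):
    if len(a) != len(b):
        return False
    for d in a:
        if d not in b:  # membership uses ==, so nested dicts compare by value
            return False
    return True

a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}

print(areEqual([a, b, c], [b, c, a]))  # True
print(areEqual([a, b, c], [a, b, b]))  # False
```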
2
2016-08-04T18:20:40Z
[ "python", "python-3.x" ]
Python 3.x: How to compare two lists containing dictionaries where order doesn't matter
38,774,549
<p>I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal. </p> <p>The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries. </p> <pre><code>a = {'color': 'red'} b = {'shape': 'triangle'} c = {'children': [{'color': 'red'}, {'age': 8},]} test_a = [a, b, c] test_b = [b, c, a] print(test_a == test_b) # False print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict' </code></pre> <p>Is there a good way to approach this to show that <code>test_a</code> has the same contents as <code>test_b</code>?</p>
2
2016-08-04T18:12:39Z
38,778,180
<p>If the elements in both lists are shallow, the idea of sorting them, and then comparing with equality can work. The problem with @Alex's solution is that he is only using "id" - but if instead of id, one uses a function that will sort dictionaries properly, things should just work:</p> <pre><code>def sortkey(element): if isinstance(element, dict): element = sorted(element.items()) return repr(element) sorted(test_a, key=sortkey) == sorted(test_b, key=sortkey) </code></pre> <p>(I use a <code>repr</code> to wrap the key because it will cast all elements to string before comparison, which will avoid a <code>TypeError</code> if different elements are of unorderable types - which would almost certainly happen if you are using Python 3.x)</p> <p>Just to be clear, if your dictionaries and lists have nested dictionaries themselves, you should use the answer by @m_callens. If your inner lists are also unordered, you can fix this to work, by just sorting them inside the key function as well.</p>
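Run against the question's data (the key function is repeated so the snippet is self-contained; the inner dicts of <code>c</code> are the same objects in both lists, so the shallow <code>repr</code>-based key agrees):

```python
def sortkey(element):
    if isinstance(element, dict):
        element = sorted(element.items())
    return repr(element)

a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}

test_a = [a, b, c]
test_b = [b, c, a]

print(sorted(test_a, key=sortkey) == sorted(test_b, key=sortkey))  # True
```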
0
2016-08-04T22:06:48Z
[ "python", "python-3.x" ]
Python 3.x: How to compare two lists containing dictionaries where order doesn't matter
38,774,549
<p>I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal. </p> <p>The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries. </p> <pre><code>a = {'color': 'red'} b = {'shape': 'triangle'} c = {'children': [{'color': 'red'}, {'age': 8},]} test_a = [a, b, c] test_b = [b, c, a] print(test_a == test_b) # False print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict' </code></pre> <p>Is there a good way to approach this to show that <code>test_a</code> has the same contents as <code>test_b</code>?</p>
2
2016-08-04T18:12:39Z
38,779,216
<p>An elegant and relatively fast solution:</p> <pre><code>class QuasiUnorderedList(list): def __eq__(self, other): """This method isn't as inefficient as you think! It runs in O(1 + 2 + 3 + ... + n) time, possibly better than recursively freezing/checking all the elements.""" for item in self: for otheritem in other: if otheritem == item: break else: # no break was reached, item not found. return False return True </code></pre> <p>This runs in <code>O(1 + 2 + 3 + ... + n)</code> flat. While slow for dictionaries of low depth, this is faster for dictionaries of high depth. </p> <p>Here's a considerably longer snippet which is faster for dictionaries where depth is low and length is high.</p> <pre><code>class FrozenDict(collections.Mapping, collections.Hashable): # collections.Hashable = portability """Adapted from http://stackoverflow.com/a/2704866/1459669""" def __init__(self, *args, **kwargs): self._d = dict(*args, **kwargs) self._hash = None def __iter__(self): return iter(self._d) def __len__(self): return len(self._d) def __getitem__(self, key): return self._d[key] def __hash__(self): # It would have been simpler and maybe more obvious to # use hash(tuple(sorted(self._d.iteritems()))) from this discussion # so far, but this solution is O(n). I don't know what kind of # n we are going to run into, but sometimes it's hard to resist the # urge to optimize when it will gain improved algorithmic performance. # Now thread safe by CrazyPython if self._hash is None: _hash = 0 for pair in self.iteritems(): _hash ^= hash(pair) self._hash = _hash return self._hash def freeze(obj): if type(obj) in (str, int, ...): # other immutable atoms you store in your data structure return obj elif issubclass(type(obj), list): # ugly but needed return set(freeze(item) for item in obj) elif issubclass(type(obj), dict): # for defaultdict, etc. 
return FrozenDict({key: freeze(value) for key, value in obj.items()}) else: raise NotImplementedError("freeze() doesn't know how to freeze " + type(obj).__name__ + " objects!") class FreezableList(list, collections.Hashable): _stored_freeze = None _hashed_self = None def __eq__(self, other): if self._stored_freeze and (self._hashed_self == self): frozen = self._stored_freeze else: frozen = freeze(self) if frozen is not self._stored_freeze: self._stored_freeze = frozen return frozen == freeze(other) def __hash__(self): if self._stored_freeze and (self._hashed_self == self): frozen = self._stored_freeze else: frozen = freeze(self) if frozen is not self._stored_freeze: self._stored_freeze = frozen return hash(frozen) class UncachedFreezableList(list, collections.Hashable): def __eq__(self, other): """No caching version of __eq__. May be faster. Don't forget to get rid of the declarations at the top of the class! Considerably more elegant.""" return freeze(self) == freeze(other) def __hash__(self): """No caching version of __hash__. See the notes in the docstring of __eq__.""" return hash(freeze(self)) </code></pre> <p>Test all three (<code>QuasiUnorderedList</code>, <code>FreezableList</code>, and <code>UncachedFreezableList</code>) and see which one is faster in your situation. I'll betcha it's faster than the other solutions.</p>
-1
2016-08-05T00:05:32Z
[ "python", "python-3.x" ]
Python 3.x: How to compare two lists containing dictionaries where order doesn't matter
38,774,549
<p>I have nested dictionaries that may contain other dictionaries or lists. I need to be able to compare a list (or set, really) of these dictionaries to show that they are equal. </p> <p>The order of the list is not uniform. Typically, I would turn the list into a set, but it is not possible since there are values that are also dictionaries. </p> <pre><code>a = {'color': 'red'} b = {'shape': 'triangle'} c = {'children': [{'color': 'red'}, {'age': 8},]} test_a = [a, b, c] test_b = [b, c, a] print(test_a == test_b) # False print(set(test_a) == set(test_b)) # TypeError: unhashable type: 'dict' </code></pre> <p>Is there a good way to approach this to show that <code>test_a</code> has the same contents as <code>test_b</code>?</p>
2
2016-08-04T18:12:39Z
38,779,290
<p>I suggest writing a function that turns any Python object into something orderable, with its contents, if it has any, in sorted order. If we call it <code>canonicalize</code>, we can compare nested objects with:</p> <pre><code>canonicalize(test_a) == canonicalize(test_b) </code></pre> <p>Here's my attempt at writing a <code>canonicalize</code> function:</p> <pre><code>def canonicalize(x): if isinstance(x, dict): x = sorted((canonicalize(k), canonicalize(v)) for k, v in x.items()) elif isinstance(x, collections.abc.Iterable) and not isinstance(x, str): x = sorted(map(canonicalize, x)) else: try: bool(x &lt; x) # test for unorderable types like complex except TypeError: x = repr(x) # replace with something orderable return x </code></pre> <p>This should work for most Python objects. It won't work for lists of heterogeneous items, containers that contain themselves (which will cause the function to hit the recursion limit), nor <code>float('nan')</code> (which has bizarre comparison behavior, and so may mess up the sorting of any container it's in).</p> <p>It's possible that this code will do the wrong thing for non-iterable, unorderable objects, if they don't have a <code>repr</code> function that describes all the data that makes up their value (e.g. what is tested by <code>==</code>). I picked <code>repr</code> as it will work on any kind of object and <em>might</em> get it right (it works for <code>complex</code>, for example). It should also work as desired for classes that have a <code>repr</code> that looks like a constructor call. For classes that have inherited <code>object.__repr__</code> and so have <code>repr</code> output like <code>&lt;Foo object at 0xXXXXXXXX&gt;</code> it at least won't crash, though the objects will be compared by identity rather than value. I don't think there's any truly universal solution, and you can add some special cases for classes you expect to find in your data if they don't work with <code>repr</code>.</p>
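Exercised on the question's data (the function is repeated, with its <code>collections.abc</code> import, so the snippet runs standalone under Python 3):

```python
import collections.abc

def canonicalize(x):
    if isinstance(x, dict):
        x = sorted((canonicalize(k), canonicalize(v)) for k, v in x.items())
    elif isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
        x = sorted(map(canonicalize, x))
    else:
        try:
            bool(x < x)  # test for unorderable types like complex
        except TypeError:
            x = repr(x)  # replace with something orderable
    return x

a = {'color': 'red'}
b = {'shape': 'triangle'}
c = {'children': [{'color': 'red'}, {'age': 8}]}

print(canonicalize([a, b, c]) == canonicalize([b, c, a]))  # True
print(canonicalize([a]) == canonicalize([b]))              # False
```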
0
2016-08-05T00:17:16Z
[ "python", "python-3.x" ]
Pandas DataFrame- Finding Index Value for a Column
38,774,705
<p>I have a DataFrame that has columns such as ID, Name, Specification, Time.</p> <p>My file path to open it:</p> <pre><code>mc = pd.read_csv("C:\\data.csv", sep = ",", header = 0, dtype = str) </code></pre> <p>When I checked my columns values, using</p> <pre><code>mc.columns.values </code></pre> <p>I found my ID column had a weird character in front of it, like this:</p> <pre><code>['\ufeffID', 'Name', 'Specification', 'Time'] </code></pre> <p>After this I assigned ID to that column like this,</p> <pre><code> mc.columns.values[0] = "ID" </code></pre> <p>When I checked this using </p> <pre><code>mc.columns.values </code></pre> <p>I got my result as,</p> <pre><code>Array(['ID', 'Name', 'Specification', 'Time']) </code></pre> <p>Then, I checked with,</p> <pre><code>"ID" in mc.columns.values </code></pre> <p>it gave me <code>"True"</code></p> <p>Then I tried,</p> <pre><code>mc["ID"] </code></pre> <p>I got an error stating,</p> <pre><code>KeyError: 'ID' </code></pre> <p>I want to get the values of the ID column and get rid of that weird character in front of it. Is there any way to solve that? Any help would be appreciated. Thank you in advance. </p>
2
2016-08-04T18:20:59Z
38,774,760
<p>That's a UTF-16 BOM; pass <code>encoding='utf-16'</code> to <code>read_csv</code>. See: <a href="https://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding" rel="nofollow">https://en.wikipedia.org/wiki/Byte_order_mark#Representations_of_byte_order_marks_by_encoding</a></p> <pre><code>mc = pd.read_csv("C:\\data.csv", sep=",", header=0, dtype=str, encoding='utf-16') </code></pre> <p>The above should work; <code>FE FF</code> is the BOM for UTF-16 big endian, to be specific.</p> <p>Also, you should use <code>rename</code> rather than trying to overwrite the NumPy array value:</p> <pre><code>mc.rename(columns={mc.columns[0]: "ID"}, inplace=True) </code></pre> <p>This should work correctly.</p>
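A quick way to check which codec your file needs: with the matching codec ('utf-8-sig' for a UTF-8 BOM, 'utf-16' for a UTF-16 one) the stray <code>'\ufeff'</code> disappears from the first header. A small self-contained demonstration (in-memory bytes stand in for the CSV file):

```python
header = 'ID,Name,Specification,Time'

# A UTF-8 file with a BOM: decoding as plain utf-8 leaves '\ufeff' in front.
utf8_bom_bytes = b'\xef\xbb\xbf' + header.encode('utf-8')
decoded_plain = utf8_bom_bytes.decode('utf-8')      # starts with '\ufeff'
decoded_sig = utf8_bom_bytes.decode('utf-8-sig')    # BOM stripped

# A UTF-16 file: encode('utf-16') prepends a BOM, decode('utf-16') strips it.
utf16_bytes = header.encode('utf-16')
decoded_utf16 = utf16_bytes.decode('utf-16')

print(decoded_plain.startswith('\ufeff'), decoded_sig == header, decoded_utf16 == header)
```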
2
2016-08-04T18:23:56Z
[ "python", "pandas", "dataframe" ]
write on file from serial in python in windows
38,774,745
<p>I am trying to write a very simple script to read data from a serial port and write it to a txt file. My data are always the same and look, for example, like this: '4\r\n'</p> <pre><code>import serial import time ser = serial.Serial('COM5', 9600, timeout=0) while 1: data=ser.readline() print data f = open('myfile.txt','w') data=str(data) f.write(data) f.close() time.sleep(1) </code></pre> <p>I am using Python 2.7 on Windows 7. My print works fine and I get the data, but I can't write to the file... </p> <p>thanks a lot !</p>
0
2016-08-04T18:22:45Z
38,775,329
<p>Using the <code>'w'</code> option in <code>open()</code> tells Python to truncate (empty) your file first, then open it, so each pass through your loop wipes what the previous pass wrote and only the last line survives. Try changing the <code>'w'</code> to an <code>'a'</code> so that Python appends new data to the end of the file, rather than truncating the file every time.</p> <pre><code>f = open('myfile.txt', 'a') </code></pre> <p>You can read more about the <code>open</code> function <a href="https://docs.python.org/2/library/functions.html#open" rel="nofollow">here</a>. Specifically, check out the documentation for the <code>mode</code> argument.</p>
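The difference is easy to see with a scratch file (a sketch; the two values stand in for successive serial reads):

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'myfile.txt')

# Mode 'w': every open() truncates, so only the last write survives.
for data in ['4', '7']:
    with open(path, 'w') as f:
        f.write(data)
with open(path) as f:
    truncated = f.read()  # '7'

# Mode 'a': every write is appended after the existing contents.
for data in ['4', '7']:
    with open(path, 'a') as f:
        f.write(data)
with open(path) as f:
    appended = f.read()   # '747'

print(truncated, appended)  # 7 747
```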
0
2016-08-04T18:57:55Z
[ "python", "windows", "serial-port", "writefile" ]
Difference between Kivy camera and opencv camera
38,774,748
<p>What is the difference between Kivy Camera and OpenCV? I am asking this because in Kivy Camera the image gets adjusted according to frame size but in OpenCV this does not happen. Also I am not able to do motion detection with Kivy Camera, whereas I found a great tutorial for motion detection with OpenCV. If someone can clarify the difference it would be appreciated! Thanks :) </p>
0
2016-08-04T18:23:08Z
38,774,932
<p>opencv is a computer vision framework (hence the c-v) which can interact with device cameras. Kivy is a cross-platform development tool which can interact with device cameras. It makes sense that there are good motion detection tutorials for opencv but not kivy camera, since this isn't really what kivy is for. </p>
0
2016-08-04T18:34:09Z
[ "python", "opencv", "camera", "kivy", "motion-detection" ]
Django 1.9 FormView never reach get_context_data
38,774,792
<p>I have a FormView called LeagueTransferView based on a form LeagueTransferForm.</p> <p>I'm trying to override get_context_data to add extra players to render in the template.</p> <p>But get_context_data is never reached. It's working fine on other views like, DetailView, ListView,...</p> <p>I'm missing something?</p> <p>Below my configuration</p> <p><strong>View</strong></p> <pre><code>class LeagueTransferView(FormView): template_name = 'hockey/league/transfer_market.html' form_class = LeagueTransferForm success_url = '' def get_context_data(self, **kwargs): print('----NEVER REACHED----') context = super(LeagueTransferView, self).get_context_data(**kwargs) petitioner = get_object_or_404(Team, user=self.request.user.profile, league=self.kwargs['pk']) context['players'] = Player.objects.filter(leagues=self.kwargs['pk']).exclude(teams=petitioner) return context def get(self, request, *args, **kwargs): petitioner = get_object_or_404(Team, user=self.request.user.profile, league=self.kwargs['pk']) form = self.form_class(initial={'league': self.kwargs['pk'], 'petitioner': petitioner}) form.fields['offered_player'].queryset = petitioner.players return render(request, self.template_name, {'form': form}) def post(self, request, *args, **kwargs): form = self.form_class(request.POST) if form.is_valid(): transfer = form.save(commit=False) team = Team.objects.filter(league=transfer.league, players__in=[transfer.requested_player]) if not team: # free agent transfer.status = 1 messages.success(request, _('transfer succeeded')) else: print(team) transfer.player_owner = team[0] if transfer.petitioner.user is None: # bot team transfer.status = 1 messages.success(request, _('transfer succeeded')) else: messages.success(request, _('transfer waiting for confirmation by player owner')) transfer.save() return HttpResponseRedirect(reverse('hockey_dashboard')) petitioner = get_object_or_404(Team, user=self.request.user.profile, league=self.kwargs['pk']) 
form.fields['offered_player'].queryset = petitioner.players return render(request, self.template_name, {'form': form}) </code></pre> <p><strong>FORM</strong></p> <pre><code>class LeagueTransferForm(forms.ModelForm): class Meta: model = Transfer fields = ['league', 'requested_player', 'offered_player', 'player_owner', 'petitioner'] labels = { 'requested_player': _('Requested player'), 'offered_player': _('Offered player'), } widgets = { 'requested_player': forms.HiddenInput, 'league': forms.HiddenInput, 'player_owner': forms.HiddenInput, 'petitioner': forms.HiddenInput } </code></pre>
1
2016-08-04T18:25:52Z
38,774,968
<p>Your code is never reaching <code>get_context_data()</code> because you have overridden the <code>get()</code> method and are not calling the <code>get_context_data()</code> function there. You need to manually call the <code>get_context_data()</code> function at the time of passing <code>context</code> to <code>render()</code> in your code.</p> <p>Instead of doing that, I would suggest the approach below, where instead of overriding <code>get()</code> and returning your custom response, you only override what is necessary and let Django handle the rest.</p> <pre><code>class LeagueTransferView(FormView): template_name = 'hockey/league/transfer_market.html' form_class = LeagueTransferForm success_url = '' def get_context_data(self, **kwargs): context = super(LeagueTransferView, self).get_context_data(**kwargs) context['players'] = Player.objects.filter(leagues=self.kwargs['pk']).exclude(teams=self.petitioner) return context def get_initial(self): initial = super(LeagueTransferView, self).get_initial() initial['league'] = self.kwargs['pk'] # add custom data to initial initial['petitioner'] = self.petitioner # add custom data to initial return initial def get_form(self, form_class=None): form = super(LeagueTransferView, self).get_form(form_class) # override the queryset form.fields['offered_player'].queryset = self.petitioner.players return form def get(self, request, *args, **kwargs): # only perform 1 query to get 'petitioner' self.petitioner = get_object_or_404(Team, user=self.request.user.profile, league=self.kwargs['pk']) return super(LeagueTransferView, self).get(request, *args, **kwargs) </code></pre>
3
2016-08-04T18:36:06Z
[ "python", "django", "django-forms", "django-views" ]
Multi-indexing plotting with Matplotlib
38,774,805
<p>I am trying to graph a multi-index plot using matplotlib. However, I was unable to find exact working code among the previously answered questions. Can anyone show me how to produce a similar graph?</p> <p><a href="http://i.stack.imgur.com/FJ35y.png" rel="nofollow"><img src="http://i.stack.imgur.com/FJ35y.png" alt="enter image description here"></a></p> <pre><code>import pandas as pd import matplotlib.pyplot as plt import pylab as pl import numpy as np import pandas xls_filename = "abc.xlsx" f = pandas.ExcelFile(xls_filename) df = f.parse("Sheet1", index_col='Year' and 'Month') f.close() matplotlib.rcParams.update({'font.size': 18}) # Font size of x and y-axis df.plot(kind= 'bar', alpha=0.70) </code></pre> <p>It is not indexing as I wanted and did not produce the graph I expected either. Help appreciated. </p>
0
2016-08-04T18:26:18Z
38,796,413
<p>I created a DataFrame from some of the values I see on your attached plot and plotted it.</p> <pre><code>index = pd.MultiIndex.from_tuples(tuples=[(2011, ), (2012, ), (2016, 'M'), (2016, 'J')], names=['year', 'month']) df = pd.DataFrame(index=index, data={'1': [10, 140, 6, 9], '2': [23, 31, 4, 5], '3': [33, 23, 1, 1]}) df.plot(kind='bar') </code></pre> <p>This is the outcome</p> <p><a href="http://i.stack.imgur.com/I5y5Hm.png" rel="nofollow"><img src="http://i.stack.imgur.com/I5y5Hm.png" alt="enter image description here"></a></p> <p>where the DataFrame is this</p> <p><a href="http://i.stack.imgur.com/eAaiYt.png" rel="nofollow"><img src="http://i.stack.imgur.com/eAaiYt.png" alt="enter image description here"></a></p>
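One detail worth noting from the question's code: `index_col='Year' and 'Month'` is a plain Python boolean expression that evaluates to just `'Month'`, so only a single index level is ever set. Passing a list of columns (or calling `set_index` afterwards) gives the two-level index this answer builds by hand. A small sketch, with column names assumed from the question's spreadsheet:

```python
import pandas as pd

# 'Year' and 'Month' evaluates to just 'Month' in Python:
assert ('Year' and 'Month') == 'Month'

# Build a two-level index explicitly instead (same effect as
# index_col=['Year', 'Month'] when reading the file):
df = pd.DataFrame({'Year': [2011, 2012, 2016, 2016],
                   'Month': ['J', 'J', 'M', 'J'],
                   'val': [10, 140, 6, 9]})
indexed = df.set_index(['Year', 'Month'])
print(indexed.index.nlevels)  # 2
```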
1
2016-08-05T19:19:23Z
[ "python", "pandas", "matplotlib", "bar-chart" ]
Pandas Datetime Formatting
38,774,823
<p>Currently my dates are formatted as a string. I was able to get the string converted to date time using the following:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f')) </code></pre> <p>I would like to remove the time stamp, but I am having an awfully difficult time doing so. My preferred format is <code>%Y%m%d</code>. So I stumbled upon <a href="http://stackoverflow.com/questions/26153795/python-remove-time-from-datetime-string">THIS</a> page and added <code>.date()</code>. Resulting in below:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y%m%d').date()) </code></pre> <p>I am getting this value error and I am again lost at what to do to drop the time stamp. Any help is greatly appreciated.</p> <blockquote> <p>ValueError: time data '2015-02-26 16:45:36.0' does not match format '%Y%m%d'</p> </blockquote>
2
2016-08-04T18:27:01Z
38,774,902
<p>You can use <code>normalize</code> (<a href="http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DatetimeIndex.normalize.html" rel="nofollow">docs</a>).</p> <pre><code>dti = pd.DatetimeIndex(start='today', periods=4, freq='D') dti </code></pre> <p>outputs</p> <pre><code>DatetimeIndex(['2016-08-04 14:30:34.447589', '2016-08-05 14:30:34.447589', '2016-08-06 14:30:34.447589', '2016-08-07 14:30:34.447589'], dtype='datetime64[ns]', freq='D') </code></pre> <p>And</p> <pre><code>dti.normalize() </code></pre> <p>outputs</p> <pre><code>DatetimeIndex(['2016-08-04', '2016-08-05', '2016-08-06', '2016-08-07'], dtype='datetime64[ns]', freq='D') </code></pre> <p>If it's a Series of Timestamps you can convert them with map.</p> <p>Edit: @piRSquared's way is better in this case.</p> <pre><code>pd.to_datetime(dti).map(lambda dt: dt.date()) </code></pre> <p>outputs</p> <pre><code>array([datetime.date(2016, 8, 4), datetime.date(2016, 8, 5), datetime.date(2016, 8, 6), datetime.date(2016, 8, 7)], dtype=object) </code></pre>
2
2016-08-04T18:32:07Z
[ "python", "datetime", "pandas" ]
Pandas Datetime Formatting
38,774,823
<p>Currently my dates are formatted as a string. I was able to get the string converted to date time using the following:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f')) </code></pre> <p>I would like to remove the time stamp, but I am having an awfully difficult time doing so. My preferred format is <code>%Y%m%d</code>. So I stumbled upon <a href="http://stackoverflow.com/questions/26153795/python-remove-time-from-datetime-string">THIS</a> page and added <code>.date()</code>. Resulting in below:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y%m%d').date()) </code></pre> <p>I am getting this value error and I am again lost at what to do to drop the time stamp. Any help is greatly appreciated.</p> <blockquote> <p>ValueError: time data '2015-02-26 16:45:36.0' does not match format '%Y%m%d'</p> </blockquote>
2
2016-08-04T18:27:01Z
38,775,038
<p>You could convert the <code>Timestamp</code> object to <code>datetime.datetime</code> object and extract the <code>datetime.date</code> part as shown:</p> <pre><code>In [7]: import pandas as pd In [8]: print(pd.Timestamp('2015-02-26 16:45:36.0').to_datetime().date()) 2015-02-26 &lt;class 'datetime.date'&gt; </code></pre> <p>Your desired format:</p> <pre><code>In [11]: print(pd.Timestamp('2015-02-26 16:45:36.0').to_datetime().date().strftime("%Y%m%d")) 20150226 &lt;class 'str'&gt; </code></pre>
2
2016-08-04T18:40:05Z
[ "python", "datetime", "pandas" ]
Pandas Datetime Formatting
38,774,823
<p>Currently my dates are formatted as a string. I was able to get the string converted to date time using the following:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y-%m-%d %H:%M:%S.%f')) </code></pre> <p>I would like to remove the time stamp, but I am having an awfully difficult time doing so. My preferred format is <code>%Y%m%d</code>. So I stumbled upon <a href="http://stackoverflow.com/questions/26153795/python-remove-time-from-datetime-string">THIS</a> page and added <code>.date()</code>. Resulting in below:</p> <pre><code>df['submitted_on'] = df['submitted_on'].apply(lambda x: dt.datetime.strptime(x, '%Y%m%d').date()) </code></pre> <p>I am getting this value error and I am again lost at what to do to drop the time stamp. Any help is greatly appreciated.</p> <blockquote> <p>ValueError: time data '2015-02-26 16:45:36.0' does not match format '%Y%m%d'</p> </blockquote>
2
2016-08-04T18:27:01Z
38,775,471
<pre><code>s = pd.Series(['2010-01-01 10:00', '2010-06-01 11:00']) pd.to_datetime(pd.to_datetime(s).dt.date) </code></pre>
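If the `%Y%m%d` string format the question prefers is the end goal, `Series.dt.strftime` can produce it directly after parsing; a small sketch (the sample timestamp comes from the question's error message):

```python
import pandas as pd

s = pd.Series(['2015-02-26 16:45:36.0'])
parsed = pd.to_datetime(s)                  # parse the full timestamp first
as_yyyymmdd = parsed.dt.strftime('%Y%m%d')  # then format without the time part
print(as_yyyymmdd[0])  # '20150226'
```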
2
2016-08-04T19:06:44Z
[ "python", "datetime", "pandas" ]
I keep getting a read issue [Errno 22] Invalid argument
38,774,907
<p>I have tried using a raw string, <code>r"F:\Server\ ... "</code>, but then it says:</p> <pre>file.write(float(u) + '\n') TypeError: unsupported operand type(s) for +: 'float' and 'str'. </pre> <p>When I don't put the <code>r</code> where it is, it doubles the backslashes on me, saying:</p> <pre>read issue [Errno 22] Invalid argument: 'F:\\Server\\Frames\\Server_Stats_GUI\x08yteS-F_FS_Input.toff'. </pre> <p>Here is my code</p> <pre><code>import time while True: try: file = open("F:\Server\Frames\Server_Stats_GUI\byteS-F_FS_Input.toff","r") f = int(file.readline()) s = int(file.readline()) file.close() except Exception as e: # file is being written to, not enough data, whatever: ignore (but print a message) print("read issue "+str(e)) else: u = s - f file = open("F:\Server\Frames\Server_Stats_GUI\bytesS-F_FS_Output","w") # update the file with the new result file.write(float(u) + '\n') file.close() time.sleep(4) # wait 4 seconds </code></pre>
-1
2016-08-04T18:32:26Z
38,775,282
<p>You have two separate errors here.</p> <h1>1: Filename with Escape Characters</h1> <p>This error:</p> <blockquote> <p>read issue [Errno 22] Invalid argument: 'F:\Server\Frames\Server_Stats_GUI\x08yteS-F_FS_Input.toff'.</p> </blockquote> <p>Is in the open() function.</p> <p>Your filename has an escape character in it. '\b' is being evaluated to '\x08' (backspace). The file is not found, which throws an error. </p> <p>To ignore escape characters, you can either double the backslash:</p> <pre><code>"F:\Server\Frames\Server_Stats_GUI\\byteS-F_FS_Input.toff" </code></pre> <p>or use r as a prefix to the string:</p> <pre><code>r"F:\Server\Frames\Server_Stats_GUI\byteS-F_FS_Input.toff" </code></pre> <p>You've tried the second way, which fixed that issue. </p> <h1>2: TypeError on Write()</h1> <p>The next error:</p> <blockquote> <p>file.write(float(u) + '\n') TypeError: unsupported operand type(s) for +: 'float' and 'str'.</p> </blockquote> <p>is in the write() function.</p> <p>You're treating a float as a string. You need to convert it to a string before trying to append the newline: </p> <pre><code>file.write(str(float(u)) + '\n') </code></pre> <p>or use string formatting:</p> <pre><code>file.write("%f\n" % (float(u))) </code></pre>
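A minimal, self-contained sketch of both fixes together (the drive path is the question's; only the string handling matters here, no file is opened):

```python
# 1) Escape characters: "\b" in a normal string literal is a backspace.
plain = "F:\Server\Frames\Server_Stats_GUI\byteS-F_FS_Input.toff"
raw = r"F:\Server\Frames\Server_Stats_GUI\byteS-F_FS_Input.toff"
print('\x08' in plain)  # True  -- the path is silently corrupted
print('\x08' in raw)    # False -- the raw string keeps the backslash

# 2) write() needs a str: convert the float before concatenating.
u = 5 - 2
line = str(float(u)) + '\n'
print(repr(line))  # '3.0\n'
```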
0
2016-08-04T18:55:30Z
[ "python", "windows" ]
Prevent numpy from creating a multidimensional array
38,774,922
<p>NumPy is really helpful when creating arrays. If the first argument for <code>numpy.array</code> has a <code>__getitem__</code> and <code>__len__</code> method these are used on the basis that it might be a valid sequence.</p> <p>Unfortunately I want to create an array containing <code>dtype=object</code> without NumPy being "helpful".</p> <p>Broken down to a minimal example the class would look like this:</p> <pre><code>import numpy as np class Test(object): def __init__(self, iterable): self.data = iterable def __getitem__(self, idx): return self.data[idx] def __len__(self): return len(self.data) def __repr__(self): return '{}({})'.format(self.__class__.__name__, self.data) </code></pre> <p>and if the "iterables" have different lengths everything is fine and I get exactly the result I want to have:</p> <pre><code>&gt;&gt;&gt; np.array([Test([1,2,3]), Test([3,2])], dtype=object) array([Test([1, 2, 3]), Test([3, 2])], dtype=object) </code></pre> <p>but NumPy creates a multidimensional array if these happen to have the same length:</p> <pre><code>&gt;&gt;&gt; np.array([Test([1,2,3]), Test([3,2,1])], dtype=object) array([[1, 2, 3], [3, 2, 1]], dtype=object) </code></pre> <p>Unfortunately there is only a <code>ndmin</code> argument so I was wondering if there is a way to enforce a <code>ndmax</code> or somehow prevent NumPy from interpreting the custom classes as another dimension (without deleting <code>__len__</code> or <code>__getitem__</code>)?</p>
2
2016-08-04T18:33:23Z
38,776,596
<p>A workaround is of course to create an array of the desired shape and then copy the data:</p> <pre><code>In [19]: lst = [Test([1, 2, 3]), Test([3, 2, 1])] In [20]: arr = np.empty(len(lst), dtype=object) In [21]: arr[:] = lst[:] In [22]: arr Out[22]: array([Test([1, 2, 3]), Test([3, 2, 1])], dtype=object) </code></pre> <p>Notice that in any case I would not be surprised if numpy behavior w.r.t. interpreting iterable objects (which is what you want to use, right?) is numpy version dependent. And possibly buggy. Or maybe some of these bugs are actually features. Anyway, I'd be wary of breakage when a numpy version changes.</p> <p>On the contrary, copying into a pre-created array should be way more robust.</p>
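A self-contained variant of this workaround, using a class shaped like the one in the question. The slice assignment shown above worked in that NumPy version; assigning element-by-element is a version-robust alternative, since a scalar assignment into an object array always stores the object itself without any sequence interpretation:

```python
import numpy as np

class Test(object):
    def __init__(self, iterable):
        self.data = iterable
    def __getitem__(self, idx):
        return self.data[idx]
    def __len__(self):
        return len(self.data)

lst = [Test([1, 2, 3]), Test([3, 2, 1])]  # equal lengths on purpose
arr = np.empty(len(lst), dtype=object)    # fix the final shape up front
for i, item in enumerate(lst):
    arr[i] = item                         # stores the object, never iterates it

print(arr.shape)     # (2,)
print(type(arr[0]))  # the Test class, not a nested row
```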
3
2016-08-04T20:10:43Z
[ "python", "arrays", "numpy" ]
Prevent numpy from creating a multidimensional array
38,774,922
<p>NumPy is really helpful when creating arrays. If the first argument for <code>numpy.array</code> has a <code>__getitem__</code> and <code>__len__</code> method these are used on the basis that it might be a valid sequence.</p> <p>Unfortunatly I want to create an array containing <code>dtype=object</code> without NumPy being "helpful".</p> <p>Broken down to a minimal example the class would like this:</p> <pre><code>import numpy as np class Test(object): def __init__(self, iterable): self.data = iterable def __getitem__(self, idx): return self.data[idx] def __len__(self): return len(self.data) def __repr__(self): return '{}({})'.format(self.__class__.__name__, self.data) </code></pre> <p>and if the "iterables" have different lengths everything is fine and I get exactly the result I want to have:</p> <pre><code>&gt;&gt;&gt; np.array([Test([1,2,3]), Test([3,2])], dtype=object) array([Test([1, 2, 3]), Test([3, 2])], dtype=object) </code></pre> <p>but NumPy creates a multidimensional array if these happen to have the same length:</p> <pre><code>&gt;&gt;&gt; np.array([Test([1,2,3]), Test([3,2,1])], dtype=object) array([[1, 2, 3], [3, 2, 1]], dtype=object) </code></pre> <p>Unfortunatly there is only a <code>ndmin</code> argument so I was wondering if there is a way to enforce a <code>ndmax</code> or somehow prevent NumPy from interpreting the custom classes as another dimension (without deleting <code>__len__</code> or <code>__getitem__</code>)?</p>
2
2016-08-04T18:33:23Z
38,776,674
<p>This behavior has been discussed a number of times before (e.g. <a href="http://stackoverflow.com/questions/36663919/override-a-dict-with-numpy-support">Override a dict with numpy support</a>). <code>np.array</code> tries to make as high a dimensional array as it can. The model case is nested lists. If it can iterate and the sublists are equal in length it will 'drill' on down. </p> <p>Here it went down 2 levels before encountering lists of different length:</p> <pre><code>In [250]: np.array([[[1,2],[3]],[1,2]],dtype=object) Out[250]: array([[[1, 2], [3]], [1, 2]], dtype=object) In [251]: _.shape Out[251]: (2, 2) </code></pre> <p>Without a shape or ndmax parameter it has no way of knowing whether I want it to be <code>(2,)</code> or <code>(2,2)</code>. Both of those would work with the dtype.</p> <p>It's compiled code, so it isn't easy to see exactly what tests it uses. It tries to iterate on lists and tuples, but not on sets or dictionaries. </p> <p>The surest way to make an object array with a given dimension is to start with an empty one, and fill it</p> <pre><code>In [266]: A=np.empty((2,3),object) In [267]: A.fill([[1,'one']]) In [276]: A[:]={1,2} In [277]: A[:]=[1,2] # broadcast error </code></pre> <p>Another way is to start with at least one different element (e.g. a <code>None</code>), and then replace that.</p> <p>There is a more primitive creator, <code>ndarray</code> that takes shape:</p> <pre><code>In [280]: np.ndarray((2,3),dtype=object) Out[280]: array([[None, None, None], [None, None, None]], dtype=object) </code></pre> <p>But that's basically the same as <code>np.empty</code> (unless I give it a buffer).</p> <p>These are fudges, but they aren't expensive (time wise).</p> <p>================ (edit)</p> <p><a href="https://github.com/numpy/numpy/issues/5933" rel="nofollow">https://github.com/numpy/numpy/issues/5933</a>, <code>Enh: Object array creation function.</code> is an enhancement request. 
Also <a href="https://github.com/numpy/numpy/issues/5303" rel="nofollow">https://github.com/numpy/numpy/issues/5303</a> <code>the error message for accidentally irregular arrays is confusing</code>.</p> <p>The developer sentiment seems to favor a separate function to create <code>dtype=object</code> arrays, one with more control over the initial dimensions and depth of iteration. They might even strengthen the error checking to keep <code>np.array</code> from creating 'irregular' arrays.</p> <p>Such a function could detect the shape of a regular nested iterable down to a specified depth, and build an object type array to be filled. </p> <pre><code>def objarray(alist, depth=1): shape=[]; l=alist for _ in range(depth): shape.append(len(l)) l = l[0] arr = np.empty(shape, dtype=object) arr[:]=alist return arr </code></pre> <p>With various depths:</p> <pre><code>In [528]: alist=[[Test([1,2,3])], [Test([3,2,1])]] In [529]: objarray(alist,1) Out[529]: array([[Test([1, 2, 3])], [Test([3, 2, 1])]], dtype=object) In [530]: objarray(alist,2) Out[530]: array([[Test([1, 2, 3])], [Test([3, 2, 1])]], dtype=object) In [531]: objarray(alist,3) Out[531]: array([[[1, 2, 3]], [[3, 2, 1]]], dtype=object) In [532]: objarray(alist,4) ... TypeError: object of type 'int' has no len() </code></pre>
2
2016-08-04T20:15:27Z
[ "python", "arrays", "numpy" ]
Reshape array to rgb matrix
38,774,977
<p>I have loaded a 100x100 rgb image in a numpy array. I then converted it to a 30000x1 numpy array to pass through a machine learning model. The output of this model is also a 30000x1 numpy array. How do I convert this array back to a numpy array of 100x100 3-tuples so I can print the generated rgb image?</p> <p>If the initial array is <code>[r1 g1 b1],[r2 g2 b2],...,[]</code>, it is unrolled to <code>[r1 g1 b1 r2 g2 b2 ...]</code>. I need it back in the form <code>[r1 g1 b1],[r2 g2 b2],...,[]</code>. </p> <p>What I used to load the image as array:</p> <pre><code>im=img.resize((height,width), Image.ANTIALIAS); im=np.array(im); im=im.ravel(); </code></pre> <p>I have tried .reshape((100,100,3)) and I'm getting a black output image. The machine learning model is correct and it is not the reason for getting a black output.</p>
2
2016-08-04T18:36:34Z
38,794,395
<p>Answering because I can't comment. Try <code>reshape((3, 100, 100))</code></p> <pre><code>a = np.random.random((3, 2, 2)) # array([[[ 0.28623689, 0.96406455], # [ 0.55002183, 0.73325715]], # # [[ 0.44293834, 0.08118479], # [ 0.28732176, 0.94749812]], # # [[ 0.40169829, 0.0265604 ], # [ 0.07904701, 0.19342463]]]) x = np.ravel() # array([ 0.28623689, 0.96406455, 0.55002183, 0.73325715, 0.44293834, # 0.08118479, 0.28732176, 0.94749812, 0.40169829, 0.0265604 , # 0.07904701, 0.19342463]) print(x.reshape((2, 2, 3))) # array([[[ 0.28623689, 0.96406455, 0.55002183], # [ 0.73325715, 0.44293834, 0.08118479]], # [[ 0.28732176, 0.94749812, 0.40169829], # [ 0.0265604 , 0.07904701, 0.19342463]]]) print(x.reshape((3, 2, 2))) # array([[[ 0.28623689, 0.96406455], # [ 0.55002183, 0.73325715]], # # [[ 0.44293834, 0.08118479], # [ 0.28732176, 0.94749812]], # # [[ 0.40169829, 0.0265604 ], # [ 0.07904701, 0.19342463]]]) </code></pre>
0
2016-08-05T17:00:17Z
[ "python", "arrays", "numpy" ]
Installing OpenCV/python on Amazon Linux (apache)?
38,775,044
<p>I am trying to use OpenCV on a python web application I created on an Amazon EC2 Micro instance running apache.</p> <p>I've got everything configured and working, except OpenCV isn't installing. This is the output I got from the Apache Error Log.</p> <pre><code>[Thu Aug 04 18:31:54 2016] [error] [client 72.219.147.5] import cv2 [Thu Aug 04 18:31:54 2016] [error] [client 72.219.147.5] ImportError: No module named cv2 </code></pre> <p>Here is what I've tried:</p> <p>I've installed pip and tried running <code>pip install pyopencv</code></p> <p>That doesn't work and gives me errors.</p> <p>I've also tried manually installing it by following this: <a href="http://stackoverflow.com/questions/34244606/how-to-install-opencv-on-amazon-linux">How to install OpenCV on Amazon Linux?</a></p> <p>and this: <a href="http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html?highlight=download#installing-opencv-python-from-pre-built-binaries" rel="nofollow">http://docs.opencv.org/3.0-beta/doc/py_tutorials/py_setup/py_setup_in_fedora/py_setup_in_fedora.html?highlight=download#installing-opencv-python-from-pre-built-binaries</a></p> <p>and this: <a href="http://techieroop.com/install-opencv-in-centos/" rel="nofollow">http://techieroop.com/install-opencv-in-centos/</a></p> <p>Even after installation, the cv2.so file is nowhere to be be found. 
I tried to search for it using <code>sudo find / -name "cv2.so"</code> but nothing came up.</p> <p>I do, however, have the following <code>.so</code> files installed:</p> <pre><code>/usr/local/lib/libopencv_photo.so /usr/local/lib/libopencv_stitching.so /usr/local/lib/libopencv_flann.so /usr/local/lib/libopencv_imgcodecs.so /usr/local/lib/libopencv_videostab.so /usr/local/lib/libopencv_ml.so /usr/local/lib/libopencv_objdetect.so /usr/local/lib/libopencv_imgproc.so /usr/local/lib/libopencv_superres.so /usr/local/lib/libopencv_core.so /usr/local/lib/libopencv_video.so /usr/local/lib/libopencv_highgui.so /usr/local/lib/libopencv_features2d.so /usr/local/lib/libopencv_shape.so /usr/local/lib/libopencv_videoio.so /usr/local/lib/libopencv_calib3d.so </code></pre> <p>Also, when running the cmake command, this is the output I'm getting:</p> <pre><code>-- Python 2: -- Interpreter: /usr/bin/python2.7 (ver 2.7.10) -- Libraries: NO -- numpy: NO (Python wrappers can not be generated) -- packages path: lib/python2.7/dist-packages </code></pre> <p>Any help is appreciated.</p>
2
2016-08-04T18:40:29Z
38,867,965
<p>tested and working on <code>amzn-ami-hvm-2016.03.1.x86_64-gp2</code></p> <pre><code>sudo yum install git cmake gcc-c++ numpy python-devel sudo pip install --upgrade pip sudo ln -rs /usr/local/bin/pip /usr/bin/ wget https://pypi.python.org/packages/18/eb/707897ab7c8ad15d0f3c53e971ed8dfb64897ece8d19c64c388f44895572/numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl sudo pip install numpy-1.11.1-cp27-cp27mu-manylinux1_x86_64.whl git clone https://github.com/Itseez/opencv.git cd opencv git checkout 3.1.0 mkdir build cd build cmake .. -DBUILD_opencv_python2=ON make -j4 sudo make install echo 'export PYTHONPATH=$PYTHONPATH:/usr/local/lib/python2.7/site-packages/:/usr/local/lib/python2.7/dist-packages/'&gt;&gt;~/.bashrc;. ~/.bashrc python -c 'import cv2; print "cv2 imported"' </code></pre> <p>most importantly after cmake step. you should see this in the output.</p> <pre><code>-- Python 2: -- Interpreter: /usr/bin/python2.7 (ver 2.7.10) -- Libraries: /usr/lib64/libpython2.7.so (ver 2.7.10) -- numpy: /usr/local/lib64/python2.7/site-packages/numpy/core/include (ver 1.11.1) -- packages path: lib/python2.7/dist-packages </code></pre> <p>now if it is not showing up, you need to completely remove build folder and rerun cmake again after correctly installing numpy, just rerunning cmake inside your already existing build folder will not work.</p>
4
2016-08-10T08:26:13Z
[ "python", "apache", "opencv", "amazon-web-services", "amazon-ec2" ]
Slicing list inside a method (Python 3)
38,775,048
<p>I have a method like the following:</p> <pre><code>def slice_list(my_list, slice_point): my_list = my_list[:slice_point] print("Inside method: ", my_list) return </code></pre> <p>I have a test for it like the following:</p> <pre><code>if __name__ == "__main__": my_list = [1,2,3,4,5] slice_point = 3 slice_list(my_list, slice_point) print("Outside method: ", my_list) </code></pre> <p>The output I get is not what I expected for, in the sense that the list is not ultimately edited</p> <pre><code>&gt;&gt;&gt;Inside method: [1, 2, 3] &gt;&gt;&gt;Outside method: [1, 2, 3, 4, 5] </code></pre> <p>But when I do an <code>append</code> to the list, it does edit the list for good, as this example shows:</p> <pre><code>def append_to_list(my_list, element): my_list.append(element) print("Inside method: ", my_list) return if __name__ == "__main__": my_list = [1,2,3,4,5] append_to_list(my_list, "new element") print("Outside method: ", my_list) </code></pre> <p>Which gives the following output:</p> <pre><code>&gt;&gt;&gt;Inside method: [1, 2, 3, 4, 5, 'new element'] &gt;&gt;&gt;Outside method: [1, 2, 3, 4, 5, 'new element'] </code></pre> <p><strong>Why does the slice not change the list for good?</strong></p>
2
2016-08-04T18:40:41Z
38,775,109
<p>Try this instead:</p> <pre><code>my_list[:] = my_list[:slice_point] </code></pre> <p>Your old method just points the name <code>my_list</code> at a new object, i.e. at the copy returned by the slice. </p> <p>The suggestion I've proposed above, however, modifies the object which <code>my_list</code> originally pointed at without rebinding that name. </p>
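A quick way to see the difference is to apply the suggested fix inside the question's function and compare it with the rebinding version (a sketch):

```python
def slice_list(my_list, slice_point):
    my_list[:] = my_list[:slice_point]  # slice assignment mutates the same list object

def slice_list_rebind(my_list, slice_point):
    my_list = my_list[:slice_point]     # rebinds the local name only

a = [1, 2, 3, 4, 5]
slice_list(a, 3)
print(a)  # [1, 2, 3]

b = [1, 2, 3, 4, 5]
slice_list_rebind(b, 3)
print(b)  # [1, 2, 3, 4, 5] -- unchanged outside the function
```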
3
2016-08-04T18:43:38Z
[ "python", "list", "python-3.4", "slice" ]
How does object.__getattribute__ avoid a RuntimeError?
38,775,062
<p>In the rare cases that an object's <code>__getattribute__</code> method must be overridden, a common mistake is to try and return the attribute in question like so:</p> <pre><code>class WrongWay: def __getattribute__(self, name): # Custom functionality goes here return self.__dict__[name] </code></pre> <p>This code always yields a <code>RuntimeError</code> due to a recursive loop, since <code>self.__dict__</code> is in itself an attribute reference that calls upon the same <code>__getattribute__</code> method.</p> <p>According to <a href="http://stackoverflow.com/a/371833/2977638">this answer</a>, the correct solution to this problem is to replace the last line with:</p> <pre><code>... return super().__getattribute__(name) # Defer responsibility to the superclass </code></pre> <p>This solution works when run through the Python 3 interpreter, but it also seems to violate <code>__getattribute__</code>'s promised functionality. Even if the superclass chain is traversed up to <code>object</code>, at the end of the line somebody will eventually have to return <code>self.</code><em>something</em>, and by definition that attribute reference must first get through the child's <code>__getattribute__</code> method. </p> <p>How does Python get around this recursion issue? In <code>object.__getattribute__</code>, how is anything returned without looping into another request?</p>
2
2016-08-04T18:41:25Z
38,775,389
<blockquote> <p>at the end of the line somebody will eventually have to return <code>self.</code><em>something</em>, and by definition that attribute reference must first get through the child's <code>__getattribute__()</code> method.</p> </blockquote> <p>That's not correct. <code>object.__getattribute__</code> is not defined as returning <code>self.anything</code>, and it does not respect descendant class implementations of <code>__getattribute__</code>. <code>object.__getattribute__</code> is the default attribute access implementation, and it always performs its job through the default attribute access mechanism.</p> <p>Similarly, <code>object.__eq__</code> is not defined as returning <code>self == other_thing</code>, and it does not respect descendant class implementations of <code>__eq__</code>. <code>object.__str__</code> is not defined as returning <code>str(self)</code>, and it does not respect descendant class implementations of <code>__str__</code>. <code>object</code>'s methods are the default implementations of those methods, and they always do the default thing.</p>
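A minimal sketch that demonstrates both halves of this: the `self.__dict__` version recurses, while delegating to the default implementation does not, because `object.__getattribute__` walks the instance dict and the MRO in C without calling back into the override:

```python
class WrongWay:
    def __getattribute__(self, name):
        return self.__dict__[name]   # re-enters __getattribute__ for '__dict__'

class RightWay:
    def __init__(self):
        self.x = 42
    def __getattribute__(self, name):
        # The default machinery does the lookup directly; no recursion.
        return super().__getattribute__(name)

caught = False
try:
    WrongWay().anything
except RuntimeError:          # RecursionError is a subclass of RuntimeError
    caught = True
print(caught)  # True

t = RightWay()
print(t.x)  # 42
```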
2
2016-08-04T19:01:06Z
[ "python", "recursion", "magic-methods" ]
How to avoid defunct processes with python fork?
38,775,178
<p>In python3 I am creating forked child processes, which I exit - I think - correctly. Here is some code example:</p> <pre><code>import os child_pid = os.fork() if child_pid == 0: print("I am in the child with PID "+str(os.getpid())) # doing something a = 1+1 print("Ending child with PID "+str(os.getpid())) os._exit(0) else: #parent process os.waitpid(-1, os.WNOHANG) import time for x in range(100): time.sleep(1) print("waiting...") </code></pre> <p>Now, as the child process seems to be ended, it still can be seen as a 'defunct' process (while the parent process is still running). How to really get rid of that ended child process? How to change the code?</p>
0
2016-08-04T18:48:45Z
38,775,811
<p>You can use the double fork idiom if you don't want the parent to wait. (See <a href="http://stackoverflow.com/questions/881388/what-is-the-reason-for-performing-a-double-fork-when-creating-a-daemon">What is the reason for performing a double fork when creating a daemon?</a>)</p> <p>The basic idea is for the parent process (call it <code>P</code>) to fork a child (<code>C1</code>) that immediately forks again to create another child (<code>C2</code>) and then exits. In this case, since the parent of <code>C2</code> is dead, the <code>init</code> process will inherit <code>C2</code> and wait on it for you. You will still have to wait on <code>C1</code> in the <code>P</code> process or <code>C1</code> will be a zombie. However, <code>C1</code> only runs for a very short time, so the <code>P</code> process will not wait long. Your code running <code>C2</code> can then take as much time as it wants and doesn't care if it finishes before or after <code>P</code>.</p> <p>Another idea is to handle the SIGCHLD signal that occurs when a child has exited. Adapted from <a href="https://mail.python.org/pipermail/tutor/2003-December/026748.html" rel="nofollow">https://mail.python.org/pipermail/tutor/2003-December/026748.html</a> (note that a Python signal handler receives two arguments):</p> <pre><code>import os,signal def handleSIGCHLD(signum, frame): os.waitpid(-1, os.WNOHANG) signal.signal(signal.SIGCHLD, handleSIGCHLD) </code></pre>
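A hedged sketch of the double-fork idiom described above (POSIX-only, error handling omitted; the helper name is my own):

```python
import os
import time

def spawn_detached(target):
    """Run target() in a process the caller never has to reap."""
    pid = os.fork()
    if pid == 0:                 # first child, C1
        if os.fork() == 0:       # grandchild, C2 -- re-parented to init once C1 dies
            target()
            os._exit(0)
        os._exit(0)              # C1 exits immediately
    os.waitpid(pid, 0)           # parent reaps C1 right away; no zombie remains

spawn_detached(lambda: time.sleep(0.1))
print('parent continues without waiting')
```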
0
2016-08-04T19:27:05Z
[ "python", "python-3.x" ]
Efficient ndarray operations
38,775,253
<p>I'm converting some Matlab code in Python. I need to do some matrix manipulation. My matrix (A) is (right now) a 65x3 matrix. However, the number of rows is variable depending on what step I'm at in the program.</p> <p>In Matlab, the code I'm working on is:</p> <pre><code>output = inv(A'*A) * A'; </code></pre> <p>The following Python code reproduces the expected output just fine. I'm just curious if there is a better (more Pythonic, faster, etc) way to do this? I'm trying to stick only to basic Python and numpy.</p> <pre><code>output = np.dot(np.linalg.inv(np.dot(np.transpose(A), A)), np.transpose(A)) </code></pre> <p>Thanks to anyone who is willing to help.</p>
2
2016-08-04T18:53:18Z
38,775,309
<p>You can use the <code>T</code> attribute (it transposes the array). Also, if using Python 3.5+, you can use <code>@</code> for the dot product (see <a href="https://docs.python.org/3/whatsnew/3.5.html#whatsnew-pep-465" rel="nofollow">PEP 465</a> for details).</p> <pre><code>output = np.linalg.inv(A.T @ A) @ A.T </code></pre>
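For what it's worth, when `A` has full column rank this expression is exactly the Moore-Penrose pseudoinverse, so `np.linalg.pinv` computes the same matrix (via an SVD, which tends to be better conditioned than forming `A.T @ A`). A quick check on a tall matrix shaped like the question's:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((65, 3))          # 65x3, full column rank almost surely

output = np.linalg.inv(A.T @ A) @ A.T     # the translated MATLAB expression
via_pinv = np.linalg.pinv(A)              # pseudoinverse computed via SVD

print(np.allclose(output, via_pinv))      # True
```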
1
2016-08-04T18:56:39Z
[ "python", "matlab", "numpy", "matrix" ]
Python socket, send an int, which informs the client about the length of the following data
38,775,307
<p>I have a server, which listens for connections on a socket and once the client connects a dictionary of varying size is sent.</p> <p>To inform the client, how many bytes it should receive, I want to send the size of the dictionary first.</p> <pre><code>dic={'asdf':'1234', 'ghkj':'5678'} serialized_dict = pickle.dumps(dic) dic_size = sys.getsizeof(serialized_dict) connection.send(dic_size) connection.send(serialized_dict) </code></pre> <p>This is roughly what the server does once the client connects.</p> <p>Now, I can't send the <code>dic_size</code> as an integer, which means I probably have to serialize it as well. If I do so, how can I read the correct number of bytes on the client, to receive the integer value, and then receive as many bytes as the integer value says?</p>
0
2016-08-04T18:56:38Z
38,775,371
<p>I wouldn't do it like this. The general way to signal the end of a message on a socket is to send a delimiter/end marker of some sort after the dictionary and have the client look for that marker. Once the end marker is received, the client knows it has received the end of the dictionary.</p> <p>I believe SMTP mail servers use something like <code>\r\n</code> to denote the end of a field.</p> <pre><code>self.messBuffer = ""  # message buffer
while self.messBuffer.find("\r\n") == -1:
    try:
        # take in 500 bytes
        self.socket.settimeout(10)  # set timeout of socket
        self.messBuffer += self.socket.recv(500)
        self.socket.settimeout(None)
    # if there is a socket error
    except socket.error:
        self.socket.close()  # close the socket
        return

# find the end of the message, which is marked by "\r\n"
end = self.messBuffer.find("\r\n")
# get the message, which runs from 0 to the end (occurrence of "\r\n")
msg = self.messBuffer[0:end]
# reset the message buffer to what is behind "\r\n"
self.messBuffer = self.messBuffer[end+2:]
return msg
</code></pre>
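A Python 3 sketch of that buffering loop (the helper name and the use of <code>socket.socketpair</code> are just for the demo); note that a delimiter only works if it can never occur inside the payload itself:

```python
import socket

DELIM = b"\r\n"

def recv_message(sock, bufsize=500):
    """Read from sock until DELIM appears, then return the message body."""
    buf = b""
    while DELIM not in buf:
        chunk = sock.recv(bufsize)
        if not chunk:  # peer closed before a full message arrived
            raise ConnectionError("connection closed mid-message")
        buf += chunk
    msg, _, _rest = buf.partition(DELIM)  # _rest would be kept for the next message
    return msg

# demo with an in-process connected pair of sockets
a, b = socket.socketpair()
a.sendall(b"hello world" + DELIM)
print(recv_message(b))  # b'hello world'
a.close()
b.close()
```

A real client would carry <code>_rest</code> over into the next read, exactly as the answer's <code>messBuffer</code> does.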
0
2016-08-04T18:59:54Z
[ "python", "sockets", "serialization" ]
Use SQL functions like substr(X, Y, Z) in SQLAlchemy query
38,775,417
<p>I could not figure out how to use SQLite's functions like <a href="https://www.sqlite.org/lang_corefunc.html" rel="nofollow">substr(X, Y, Z)</a> with SQLAlchemy's query expression syntax. I am aware that I could use raw queries, but that would make it more difficult to reuse my where clauses. Here is my use case:</p> <p>I have a table (or model class) of file headers which I query to identify and list files of certain types.</p> <pre><code>class Blob(Base):
    __tablename__ = 'blob'
    _id = Column('_id', INTEGER, primary_key=True)
    size = Column('size', INTEGER)
    hash = Column('hash', TEXT)
    header = Column('header', BLOB)
    meta = Column('meta', BLOB)
</code></pre> <p>For example, to identify Exif images, I can use this raw query:</p> <pre><code>select * from blob where substr(header,7,4) = X'45786966'
</code></pre> <p><code>X'45786966'</code> is simply the SQLite <code>BLOB</code> literal for the string <code>Exif</code> encoded in ASCII. In reality, the where clauses are more complex and I would like to re-use them as filter conditions for joins, approximately like this:</p> <pre><code># define once at module level
exif_conditions = [functions.substr(Blob.header, 7, 4) == b'Exif']

# reuse for arbitrary queries
session.query(Blob.hash).filter(*exif_conditions)
session.query(...).join(...).options(...).filter(condition, *exif_conditions)
</code></pre> <p>Is there a way to achieve this with SQLAlchemy?</p>
0
2016-08-04T19:02:34Z
38,775,597
<p>Ok. This was way too simple.</p> <pre><code>from sqlalchemy.sql import func

exif_conditions = [func.substr(Blob.header, 7, 4) == b'Exif']
</code></pre>
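A self-contained sketch of the pattern end to end; the in-memory database, the sample headers, and the trimmed-down model are assumptions for the demo (imports use the SQLAlchemy 1.4+ paths):

```python
from sqlalchemy import create_engine, Column, Integer, LargeBinary
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.sql import func

Base = declarative_base()

class Blob(Base):
    __tablename__ = 'blob'
    _id = Column('_id', Integer, primary_key=True)
    header = Column('header', LargeBinary)

engine = create_engine('sqlite://')  # in-memory SQLite
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()
session.add_all([
    Blob(header=b'\xff\xd8\xff\xe1\x00\x00Exif\x00\x00'),  # Exif-style header
    Blob(header=b'\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR'),   # not Exif
])
session.commit()

# defined once, reusable in arbitrary queries
exif_conditions = [func.substr(Blob.header, 7, 4) == b'Exif']
print(session.query(Blob._id).filter(*exif_conditions).all())  # [(1,)]
```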
1
2016-08-04T19:15:27Z
[ "python", "sqlite", "sqlalchemy", "sql-function" ]
Is there a way to use compiled C modules with zipimport?
38,775,427
<p>When I have a python shared object file <em>(<code>.so</code>)</em> in my sys.path, I can simply do:</p> <pre><code>import _ctypes
</code></pre> <p>And it will import <code>python2.7/lib-dynload/_ctypes.so</code>.</p> <p>However, if I use a zipfile called <code>tmp.zip</code> that contains:</p> <pre><code>Hello/_World.so
</code></pre> <p>with <code>_World.so</code> containing a well-formatted <code>init_World</code> function, then:</p> <pre><code>Python 2.7.10 (default, Jun  1 2015, 18:05:38)
[GCC 4.9.2] on cygwin
Type "help", "copyright", "credits" or "license" for more information.
&gt;&gt;&gt; import sys
&gt;&gt;&gt; sys.path.insert(0, 'tmp.zip')
&gt;&gt;&gt; import _World
Traceback (most recent call last):
  File "&lt;stdin&gt;", line 1, in &lt;module&gt;
zipimport.ZipImportError: can't find module _World
</code></pre> <p>I read it was impossible to load shared object files outside a filesystem in C.<br> Does this mean what I’m trying to achieve is impossible and that <code>_World.so</code> should be extracted from the archive?</p> <p>I’m only interested in doing it directly with <code>zipimport</code>. I know there are other ways of doing it, like extracting the archive manually and creating files.</p>
-5
2016-08-04T19:03:28Z
38,779,287
<p>It seems I have serious trouble reading documentation all the way through:</p> <blockquote> <p><a href="https://docs.python.org/2/library/zipimport.html" rel="nofollow">This module adds the ability to import Python modules (<code>*.py</code>, <code>*.py[co]</code>) and packages from ZIP-format archives.</a></p> </blockquote> <p>So the answer is clear: I’m trying to achieve something impossible. There’s no way to escape the Python sandbox with that, since it can’t be done even in a normal environment.</p>
-2
2016-08-05T00:16:53Z
[ "python", "linux", "cpython" ]
Python How to top list of dicts up
38,775,434
<pre><code>[{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}] </code></pre> <p>I have this list. It is an aggregation of months and some values. How can I pad it with zeros to get something like this?</p> <pre><code>[0, 0, 0, 0, 0, 0, 0, 11, 20, 0, 0, 0, 0] </code></pre> <p>EDIT: The first list contains dicts only for the months that have something in <code>sumd</code>. If a month doesn't have any information in <code>sumd</code>, it will not be in the list.</p> <p>In the second list I need only the values from <code>sumd</code>, ordered by month number (but with all months: if a month was not in the first list, set 0).</p>
-1
2016-08-04T19:04:10Z
38,775,599
<p>If I understood what you want to do, the result would be a list of 12 elements, not 13.</p> <p>So if you want to put each <code>sumd</code> value at the position given by the <code>month</code> key in an array representing a year (12 elements), you can do this:</p> <pre><code>month_dictionaries = [{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}]

result = [0] * 12
for d in month_dictionaries:
    if 'sumd' in d:
        result[int(d['month']) - 1] = d['sumd']
</code></pre>
1
2016-08-04T19:15:28Z
[ "python" ]
Python How to top list of dicts up
38,775,434
<pre><code>[{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}] </code></pre> <p>I have this list. It is an aggregation of months and some values. How can I pad it with zeros to get something like this?</p> <pre><code>[0, 0, 0, 0, 0, 0, 0, 11, 20, 0, 0, 0, 0] </code></pre> <p>EDIT: The first list contains dicts only for the months that have something in <code>sumd</code>. If a month doesn't have any information in <code>sumd</code>, it will not be in the list.</p> <p>In the second list I need only the values from <code>sumd</code>, ordered by month number (but with all months: if a month was not in the first list, set 0).</p>
-1
2016-08-04T19:04:10Z
38,775,611
<p>For the sample you have provided this should work:</p> <pre><code>values = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
month_values = [{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}]

for m in month_values:
    values[int(m['month']) - 1] = m['sumd']
</code></pre>
0
2016-08-04T19:16:02Z
[ "python" ]
Python How to top list of dicts up
38,775,434
<pre><code>[{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}] </code></pre> <p>I have this list. It is an aggregation of months and some values. How can I pad it with zeros to get something like this?</p> <pre><code>[0, 0, 0, 0, 0, 0, 0, 11, 20, 0, 0, 0, 0] </code></pre> <p>EDIT: The first list contains dicts only for the months that have something in <code>sumd</code>. If a month doesn't have any information in <code>sumd</code>, it will not be in the list.</p> <p>In the second list I need only the values from <code>sumd</code>, ordered by month number (but with all months: if a month was not in the first list, set 0).</p>
-1
2016-08-04T19:04:10Z
38,775,623
<p>I'll take <code>sumd</code> to imply that you want to add up overlapping months. And you can fix the indexing to get 12 months, not 13.</p> <pre><code>data = [{'month': 7.0, 'sumd': 11}, {'month': 8.0, 'sumd': 20}]

months = [0] * 12
for d in data:
    idx = int(d['month']) - 1
    v = d['sumd']
    months[idx] += v
</code></pre>
0
2016-08-04T19:16:45Z
[ "python" ]
Add new functions to a class with import
38,775,464
<p>I have a class whose job is to parse data.</p> <pre><code>class DataContainer(object):
    def parser1(self, data):
        # Handle data in one way
        self.parsed_data = parsed_data

    def parser2(self, data):
        # Handle data another way
        self.parsed_data = parsed_data
</code></pre> <p>The parser functions populate the instance variables of the class. These parsers may be changed or have many variations, so I would like to import another file containing the functions, something like this:</p> <pre><code>class DataContainer(object):
    import parsers  # Contains all the parsing functions, which can then be called from instances
</code></pre> <p>Is there a particular 'pythonic' way to do this?</p>
1
2016-08-04T19:06:02Z
38,775,650
<p>It depends on exactly how you want to use your object, but I would <code>import parsers</code> and then have your <code>DataContainer</code> serve as an interface to those functions:</p> <pre class="lang-py prettyprint-override"><code>import parsers

class DataContainer(object):
    def __init__(self):
        # If this kind of thing is needed for the library
        self.parsers = parsers.Parser()

    def parser1(self, data):
        # prep `data` however you need, then delegate to the imported parser
        parsed_data = self.parsers.parse_method1(data)
        # set instance variables from parsed_data
</code></pre>
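A runnable sketch of this delegation pattern (the stand-in <code>parsers</code> namespace and its function names are invented for the demo, since the real module isn't shown):

```python
import types

# stand-in for a separate parsers.py module (invented for the demo)
parsers = types.SimpleNamespace(
    parse_csv=lambda data: data.split(','),
    parse_lines=lambda data: data.splitlines(),
)

class DataContainer(object):
    def __init__(self, raw):
        self.raw = raw
        self.parsed_data = None

    def parser1(self):
        # delegate the actual parsing to the imported module
        self.parsed_data = parsers.parse_csv(self.raw)

    def parser2(self):
        self.parsed_data = parsers.parse_lines(self.raw)

dc = DataContainer('a,b,c')
dc.parser1()
print(dc.parsed_data)  # ['a', 'b', 'c']
```

Swapping in a different parser variant then only means swapping the imported module, not touching the class.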
1
2016-08-04T19:18:31Z
[ "python" ]
Questions about read_csv and str dtype
38,775,494
<p>I have a large text file where the columns are of the following form:</p> <pre><code>1255 32627 some random stuff which might have numbers 1245 </code></pre> <p>1.I would like to use <code>read_csv</code> to give me a data frame with three columns. The first two columns should be dtype uint32 and the third just has everything afterwards in a string. That is the line above should be split into <code>1255</code>, <code>32627</code> and <code>some random stuff which might have numbers 1245</code>. This for example does not do it but at least shows the dtypes:</p> <pre><code> pd.read_csv("foo.txt", sep=' ', header=None, dtype={0:np.uint32, 1:np.uint32, 2:np.str}) </code></pre> <p>2.My second question is about the <code>str</code> dtype.How much RAM does it use and if I know the max length of a string can I reduce that?</p>
1
2016-08-04T19:08:36Z
38,776,163
<p>You can use the Series.str.cat method, documentation for which is available <a href="http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.Series.str.cat.html" rel="nofollow">here</a>:</p> <pre><code>df = pd.read_csv("foo.txt", sep=' ', header=None)

# Create a new column which concatenates all columns from the third onward
df['new'] = df.apply(lambda row: row.iloc[2:].apply(str).str.cat(sep=' '), axis=1)
df = df[[0, 1, 'new']]
</code></pre> <p>Not sure exactly what you mean by your second question, but if you want to check the size of a string in memory you can use</p> <pre><code>import sys
print (sys.getsizeof('some string'))
</code></pre> <p>Sorry, I have no idea how knowing the maximum length will help you in saving memory and whether that is even possible</p>
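Exercising that snippet on the question's sample line, reading from an in-memory buffer instead of <code>foo.txt</code>:

```python
import io
import pandas as pd

text = "1255 32627 some random stuff which might have numbers 1245\n"
df = pd.read_csv(io.StringIO(text), sep=' ', header=None)

# stitch everything after the first two columns back into one string
df['new'] = df.apply(lambda row: row.iloc[2:].apply(str).str.cat(sep=' '), axis=1)
df = df[[0, 1, 'new']]
print(df.loc[0, 'new'])  # some random stuff which might have numbers 1245
```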
1
2016-08-04T19:46:06Z
[ "python", "pandas" ]
Questions about read_csv and str dtype
38,775,494
<p>I have a large text file where the columns are of the following form:</p> <pre><code>1255 32627 some random stuff which might have numbers 1245 </code></pre> <p>1.I would like to use <code>read_csv</code> to give me a data frame with three columns. The first two columns should be dtype uint32 and the third just has everything afterwards in a string. That is the line above should be split into <code>1255</code>, <code>32627</code> and <code>some random stuff which might have numbers 1245</code>. This for example does not do it but at least shows the dtypes:</p> <pre><code> pd.read_csv("foo.txt", sep=' ', header=None, dtype={0:np.uint32, 1:np.uint32, 2:np.str}) </code></pre> <p>2.My second question is about the <code>str</code> dtype.How much RAM does it use and if I know the max length of a string can I reduce that?</p>
1
2016-08-04T19:08:36Z
38,776,506
<ol> <li><p>Is there a reason you need to use <code>pd.read_csv()</code>? The code below is straightforward and easily adapts the column values to your requirements. Note that the reader needs <code>delimiter=' '</code> for a space-separated file.</p> <pre><code>from numpy import uint32
from csv import reader
from pandas import DataFrame

file = 'path/to/file.csv'
rows = []

with open(file, 'r') as f:
    r = reader(f, delimiter=' ')
    for row in r:
        column_1 = uint32(row[0])
        column_2 = uint32(row[1])
        column_3 = ' '.join(str(col) for col in row[2:])
        rows.append([column_1, column_2, column_3])

frame = DataFrame(rows)
</code></pre></li> <li><p>I don't understand the question. Do you expect your strings to be extremely long? A 32-bit Python installation is limited to a string 2-3GB long. A 64-bit installation is much <em>much</em> larger, limited only by the amount of RAM you can stuff into your system.</p></li> </ol>
1
2016-08-04T20:04:50Z
[ "python", "pandas" ]
Python, Tkinter; How to use grid_forget() when deselecting a check box?
38,775,523
<p>Hello, I have a script I am working on and I am having a problem trying to get a frame to hide when a check box is deselected. The script uses a check button to call a command that shows a frame containing some text entry fields. The problem is that when I deselect the check box, the frame does not disappear. Here are the pieces of the script:</p> <pre><code>self.name3 = Name3(self)
self.check_var4 = tk.IntVar()
tk.Checkbutton(self, text="Search",
               variable=self.check_var4,
               onvalue=1, offvalue=0,
               height=1, width=10,
               command=self.show_name3
               ).grid(row=3, column=0, sticky='E', ipadx=20)
</code></pre> <p>which calls:</p> <pre><code>def show_name3(self):
    '''Shows Search Widget'''
    self.name3.grid(row=3, column=1, sticky='E', padx=15, pady=5,
                    ipadx=15, ipady=5)
</code></pre> <p>which in turn calls:</p> <pre><code>class Name3(tk.Frame):
    def __init__(self, parent):
        tk.Frame.__init__(self, parent)
        tk.Label(self, text="info:").grid(row=1, sticky='E')
        E3 = Entry(self, bd=2)
        E3.grid(row=1, column=1, columnspan=2, padx=15)
        tk.Label(self, text="Stuff:").grid(row=2, sticky='E')
        E4 = Entry(self, bd=2)
        E4.grid(row=2, column=1, padx=15)
</code></pre> <p>I think all I need to do is add a command to use grid_forget(), but how? Do I use an "if this, then grid_forget" and if so, could someone explain it to me? Thanks in advance for the help.</p>
1
2016-08-04T19:11:03Z
38,776,033
<p>Would something like this work?</p> <pre><code>def show_name3(self):
    if self.check_var4.get() == 1:
        self.name3.grid(row=3, column=1, sticky='E', padx=15, pady=5,
                        ipadx=15, ipady=5)
    else:
        self.name3.grid_forget()
</code></pre> <p>It will get the value of <code>check_var4</code>, which indicates whether the <code>Checkbutton</code> is on or off. If it is on, it will place <code>name3</code> with the grid manager. If it is off, it will remove <code>name3</code> from the grid manager.</p>
1
2016-08-04T19:40:16Z
[ "python", "tkinter" ]
Facebook API query on /me with access token and fields only returns status=True. What am I doing wrong?
38,775,586
<p>The following python code:</p> <pre><code> # user profile information args = { 'access_token':access_token, 'fields':'id,name', } print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me', urllib.urlencode(args)).read() </code></pre> <p>Prints the following:</p> <p><em>ACCESSED {"success":true}</em></p> <p>The token is valid, no error, the fields are valid. Why is it not returning the fields I asked for?</p>
0
2016-08-04T19:14:35Z
38,775,927
<p>You have to add a <code>/</code> to the URL to get <code>https://graph.facebook.com/me/</code> instead of <code>https://graph.facebook.com/me</code>.</p> <pre><code># user profile information
args = {
    'access_token': access_token,
    'fields': 'id,name'
}
print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me/', urllib.urlencode(args)).read()
</code></pre> <p>PS: Try using the <code>requests</code> library, which is much nicer to work with than <code>urllib</code>; here is the doc: <a href="http://docs.python-requests.org/en/master/" rel="nofollow">http://docs.python-requests.org/en/master/</a></p>
-1
2016-08-04T19:32:51Z
[ "python", "facebook-graph-api" ]
Facebook API query on /me with access token and fields only returns status=True. What am I doing wrong?
38,775,586
<p>The following python code:</p> <pre><code> # user profile information args = { 'access_token':access_token, 'fields':'id,name', } print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me', urllib.urlencode(args)).read() </code></pre> <p>Prints the following:</p> <p><em>ACCESSED {"success":true}</em></p> <p>The token is valid, no error, the fields are valid. Why is it not returning the fields I asked for?</p>
0
2016-08-04T19:14:35Z
38,778,700
<p>Turns out urllib.urlopen will send the data as a POST when the <code>data</code> parameter is provided, and the Facebook Graph API expects a GET here, not a POST. Change the call to trick the function into requesting just a URL (no data):</p> <pre><code>print 'ACCESSED', urllib.urlopen('https://graph.facebook.com/me/?' + urllib.urlencode(args)).read()
</code></pre> <p>And everything works! Sigh, I can see why urllib was reorganized in Python 3...</p>
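The difference comes down to where the parameters end up: <code>urlencode</code> only builds a query string, and appending it to the URL (rather than passing it as <code>data</code>) is what makes the request a GET. A Python 3 sketch with a placeholder token:

```python
import urllib.parse

args = {
    'access_token': 'TOKEN',  # placeholder; a real access token goes here
    'fields': 'id,name',
}

# urlencode only builds the query string; appending it to the URL makes a GET
url = 'https://graph.facebook.com/me/?' + urllib.parse.urlencode(args)
print(url)  # https://graph.facebook.com/me/?access_token=TOKEN&fields=id%2Cname
```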
1
2016-08-04T23:03:02Z
[ "python", "facebook-graph-api" ]
How can I merge together several pandas dataframes on a certain column without 'pandas.merge'?
38,775,588
<p>I often find myself with several pandas dataframes in the following form: </p> <pre><code>import pandas as pd df1 = pd.read_table('filename1.dat') df2 = pd.read_table('filename2.dat') df3 = pd.read_table('filename3.dat') print(df1) columnA first_values name1 342 name2 822 name3 121 name4 3434 print(df2) columnA second_values name1 8 name2 1 name3 1 name4 2 print(df3) columnA third_values name1 910 name2 301 name3 132 name4 299 </code></pre> <p>I would like to merge together each of these dataframes on 'columnA', giving</p> <pre><code>columnA first_values second_values third_values name1 342 8 910 name2 822 1 301 name3 121 1 132 name4 3434 2 299 </code></pre> <p>I normally resort to this hack:</p> <pre><code>merged1 = df1.merge(df2, on='columnA') </code></pre> <p>then </p> <pre><code>merged2 = df3.merge(merged1, on='columnA') </code></pre> <p>But this doesn't scale for many dataframes. What is the correct way to do this? </p>
0
2016-08-04T19:14:55Z
38,778,189
<p>Assuming that the three dataframes have the same index, you could just add columns to get the desired dataframe and not worry about merging, like so:</p> <pre><code>import pandas as pd

# create the dataframes
colA = ['name1', 'name2', 'name3', 'name4']
first = [342, 822, 121, 3434]
second = [8, 1, 1, 2]
third = [910, 301, 132, 299]

df1 = pd.DataFrame({'colA': colA, 'first': first})
df2 = pd.DataFrame({'colA': colA, 'second': second})
df3 = pd.DataFrame({'colA': colA, 'third': third})

df_merged = df1.copy()
df_merged['second'] = df2.second
df_merged['third'] = df3.third

print (df_merged.head())

    colA  first  second  third
0  name1    342       8    910
1  name2    822       1    301
2  name3    121       1    132
3  name4   3434       2    299
</code></pre>
0
2016-08-04T22:07:19Z
[ "python", "pandas", "dataframe", "merge", "concatenation" ]
How can I merge together several pandas dataframes on a certain column without 'pandas.merge'?
38,775,588
<p>I often find myself with several pandas dataframes in the following form: </p> <pre><code>import pandas as pd df1 = pd.read_table('filename1.dat') df2 = pd.read_table('filename2.dat') df3 = pd.read_table('filename3.dat') print(df1) columnA first_values name1 342 name2 822 name3 121 name4 3434 print(df2) columnA second_values name1 8 name2 1 name3 1 name4 2 print(df3) columnA third_values name1 910 name2 301 name3 132 name4 299 </code></pre> <p>I would like to merge together each of these dataframes on 'columnA', giving</p> <pre><code>columnA first_values second_values third_values name1 342 8 910 name2 822 1 301 name3 121 1 132 name4 3434 2 299 </code></pre> <p>I normally resort to this hack:</p> <pre><code>merged1 = df1.merge(df2, on='columnA') </code></pre> <p>then </p> <pre><code>merged2 = df3.merge(merged1, on='columnA') </code></pre> <p>But this doesn't scale for many dataframes. What is the correct way to do this? </p>
0
2016-08-04T19:14:55Z
38,780,348
<p>You can set columnA as the index and concat (reset index at the end):</p> <pre><code>dfs = [df1, df2, df3]
pd.concat([df.set_index('columnA') for df in dfs], axis=1).reset_index()
Out: 
  columnA  first_values  second_values  third_values
0   name1           342              8           910
1   name2           822              1           301
2   name3           121              1           132
3   name4          3434              2           299
</code></pre>
1
2016-08-05T02:50:55Z
[ "python", "pandas", "dataframe", "merge", "concatenation" ]
Flask deployed with twistd: Failed to load application: 'NoneType' object has no attribute 'startswith'
38,775,663
<p>I am trying to <a href="http://twistedmatrix.com/documents/current/core/howto/application.html" rel="nofollow">deploy my Twisted application using .tac files and twistd</a>.</p> <p>I tried to deploy it from the command line:</p> <p><code>twistd -y service.tac</code></p> <p>I get this error:</p> <pre><code>...
    application = getApplication(self.config, passphrase)
--- &lt;exception caught here&gt; ---
  File "/usr/local/lib/python2.7/dist-packages/twisted/application/app.py", line 450, in getApplication
    application = service.loadApplication(filename, style, passphrase)
  File "/usr/local/lib/python2.7/dist-packages/twisted/application/service.py", line 411, in loadApplication
    passphrase)
  File "/usr/local/lib/python2.7/dist-packages/twisted/persisted/sob.py", line 224, in loadValueFromFile
    eval(codeObj, d, d)
  File "service.tac", line 54, in &lt;module&gt;
  File "/usr/lib/python2.7/posixpath.py", line 61, in isabs
    return s.startswith('/')
exceptions.AttributeError: 'NoneType' object has no attribute 'startswith'

Failed to load application: 'NoneType' object has no attribute 'startswith'
</code></pre> <p>My service.tac file is:</p> <pre><code>from flask import Flask

app = Flask(__name__)
</code></pre>
0
2016-08-04T19:19:02Z
38,789,837
<p>Your application's <code>import_name</code> can't be identified properly in a <code>*.tac</code> file. If you create the Flask application in a <code>*.py</code> file and import it in the <code>*.tac</code>, it will work just fine.</p> <p>But you also need a different set of <a href="http://twistedmatrix.com/documents/current/web/howto/web-in-60/wsgi.html" rel="nofollow">instructions</a> for deploying a Flask application via twistd. A minimal example looks like this:</p> <pre><code>from twisted.application import internet, service
from twisted.web.server import Site
from twisted.web.wsgi import WSGIResource
from twisted.internet import reactor

from my_flask_module import my_flask_app

application = service.Application('myapplication')
service = service.IServiceCollection(application)
flask_resource = WSGIResource(reactor, reactor.getThreadPool(), my_flask_app)
flask_site = Site(flask_resource)
internet.TCPServer(8000, flask_site).setServiceParent(service)
</code></pre>
0
2016-08-05T12:57:50Z
[ "python", "flask", "twisted", "daemon", "twisted.web" ]
multiprocessing numpy not defined error
38,775,754
<p>I am using the following test code:</p> <pre><code>from pathos.multiprocessing import ProcessingPool as Pool
import numpy

def foo(obj1, obj2):
    a = obj1**2
    b = numpy.asarray(range(1,5))
    return obj1, b

if __name__ == '__main__':
    p = Pool(5)
    res = p.map(foo, [1,2,3], [4,5,6])
</code></pre> <p>It gives this error:</p> <pre><code>File "C:\Python27\lib\site-packages\multiprocess\pool.py", line 567, in get
    raise self._value
NameError: global name 'numpy' is not defined
</code></pre> <p>What am I doing wrong in the code?</p> <p>Edit: Why was this question voted down twice?</p> <p>I have numpy installed and my interpreter has been using it correctly until I try to use it with multiprocessing. I have been coding with the same install for a while.</p>
-1
2016-08-04T19:23:51Z
38,777,032
<p>It seems like imports are not shared between processes. Therefore you need to <code>import numpy</code> in all your processes separately.</p> <p>In your case this means adding the <code>import numpy</code> in your <code>foo</code> function. Processes are not light-weight, so the <code>import</code> won't slow you down (at least not significantly).</p> <p>The other alternative would be to pass the module to the function (not recommended, and I'm not sure if that will work):</p> <pre><code>def foo(np, obj1, obj2):
    a = obj1**2
    b = np.asarray(range(1,5))
    return obj1, b

if __name__ == '__main__':
    p = Pool(5)
    res = p.map(foo, [numpy]*3, [1,2,3], [4,5,6])
</code></pre>
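A runnable sketch of the first suggestion, using the stdlib <code>multiprocessing</code> pool instead of pathos (stdlib <code>Pool.map</code> takes a single iterable, so the two argument lists are zipped by hand):

```python
from multiprocessing import Pool

def foo(args):
    import numpy  # each worker process performs its own import
    obj1, obj2 = args
    a = obj1 ** 2
    b = numpy.asarray(range(1, 5))
    return a, int(b.sum())

if __name__ == '__main__':
    with Pool(2) as p:
        res = p.map(foo, list(zip([1, 2, 3], [4, 5, 6])))
    print(res)  # [(1, 10), (4, 10), (9, 10)]
```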
0
2016-08-04T20:39:15Z
[ "python", "numpy", "multiprocessing", "pathos" ]
Is there a straightforward way to write to a file open in r+ mode without overwriting existing bytes?
38,775,760
<p>I have a text file test.txt, with the following contents:</p> <pre><code>Thing 1. string </code></pre> <p>And I'm creating a python file that will increment the number every time it gets run without affecting the rest of the string, like so.</p> <p>Run once:</p> <pre><code>Thing 2. string </code></pre> <p>Run twice:</p> <pre><code>Thing 3. string </code></pre> <p>Run three times:</p> <pre><code>Thing 4. string </code></pre> <p>Run four times:</p> <pre><code>Thing 5. string </code></pre> <p>This is the code that I'm using to accomplish this.</p> <pre><code>file = open("test.txt","r+")

started = False
beginning = 0  # start of the digits
done = False
num = 0  # building the number from digits

while not done:
    next = file.read(1)
    if ord(next) in range(48, 58):  # ascii values of 0-9
        started = True
        num *= 10
        num += int(next)
    elif started:  # has reached the end of the number
        done = True
    else:  # has not reached the beginning of the number
        beginning += 1

num += 1
file.seek(beginning, 0)
file.write(str(num))
</code></pre> <p>This code works, so long as the number is not 10^n-1 (9, 99, 999, etc) because in those cases, it writes more bytes than were previously in the number. As such, it will overwrite the characters that follow.</p> <p>So this brings me to the point. I need a way to write to the file that overwrites previously existing bytes, which I have, and a way to write to the file that does not overwrite previously existing bytes, which I don't have. Does such a mechanism exist in python, and if so, what is it?</p> <p>I have already tried opening the file using the line <code>file = open("test.txt","a+")</code> instead. When I do that, it always writes to the end, regardless of the seek point.</p> <p><code>file = open("test.txt","w+")</code> will not work because I need to keep the contents of the file while altering it, and files opened in any variant of w mode are wiped clean.</p> <p>I have also thought of solving my problem using a function like this:</p> <pre><code># file is assumed to be in r+ mode
def write(string, file, index=-1):
    if index != -1:
        file.seek(index, 0)
    remainder = file.read()
    file.seek(index)
    file.write(remainder + string)
</code></pre> <p>But I also want to be able to expand the solution to larger files, and reading the rest of the file single-handedly changes what I'm trying to accomplish from being O(1) to O(n). It also seems very non-Pythonic, since it seeks to accomplish the task in a less-than-straightforward way.</p> <p>It would also make my I/O operations inconsistent: I would have class methods (<code>file.read()</code> and <code>file.write()</code>) to read from the file and write to it replacing old characters, but an external function to insert without replacing.</p> <p>If I make the code inline, rather than a function, it means I have to write several of the same lines of code every time I try to write without replacing, which is also non-Pythonic.</p> <p>To reiterate my question, is there a more straightforward way to do this, or am I stuck with the function?</p>
0
2016-08-04T19:24:06Z
38,775,904
<p>Unfortunately, what you want to do is not possible. This is a limitation at a lower level than Python, in the operating system. Neither the Unix nor the Windows file access API offers any way to insert new bytes in the middle of a file without overwriting the bytes that were already there.</p> <p>Reading the rest of the file and rewriting it is the usual workaround. Actually, the usual workaround is to rewrite the <em>entire file</em> under a new name and then use <code>rename</code> to move it back to the old name. On Unix, this accomplishes an <em>atomic</em> file update - unless the computer crashes, concurrent readers will see either the new file or the old file, not some hybrid. (Windows, sadly, <em>still</em> does not allow you to <code>rename</code> over a name that already exists, so if you use this strategy you have to delete the old file first, opening an unavoidable race window where the file might appear not to exist at all.)</p> <p>Yes, this is O(N), and yes, if you use the write-new-file-and-rename strategy it temporarily consumes scratch disk space equal to the size of the file (old or new, whichever is larger). That's just how it is.</p> <p>I haven't thought about it enough to give you even a sketch of the code, but it <em>should</em> be possible to use context managers to wrap up the write-new-file-and-rename approach tidily.</p>
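A sketch of that write-new-file-and-rename workaround for the counter use case (the helper name and the regex-based increment are my own; <code>os.replace</code>, Python 3.3+, performs the final rename):

```python
import os
import re
import tempfile

def increment_counter(path):
    """Rewrite `path` with its first run of digits incremented."""
    with open(path) as src:
        text = src.read()
    new_text = re.sub(r'\d+', lambda m: str(int(m.group()) + 1), text, count=1)

    # write the new contents to a temp file in the same directory...
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
    with os.fdopen(fd, 'w') as dst:
        dst.write(new_text)
    os.replace(tmp, path)  # ...then move it over the old name

# demo
with open('test.txt', 'w') as f:
    f.write('Thing 9. string')
increment_counter('test.txt')
with open('test.txt') as f:
    print(f.read())  # Thing 10. string
os.remove('test.txt')
```

Because the temp file lives in the same directory, the final rename stays on one filesystem, which is what makes it atomic on Unix.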
2
2016-08-04T19:31:48Z
[ "python", "file", "io" ]
Is there a straightforward way to write to a file open in r+ mode without overwriting existing bytes?
38,775,760
<p>I have a text file test.txt, with the following contents:</p> <pre><code>Thing 1. string </code></pre> <p>And I'm creating a python file that will increment the number every time it gets run without affecting the rest of the string, like so.</p> <p>Run once:</p> <pre><code>Thing 2. string </code></pre> <p>Run twice:</p> <pre><code>Thing 3. string </code></pre> <p>Run three times:</p> <pre><code>Thing 4. string </code></pre> <p>Run four times:</p> <pre><code>Thing 5. string </code></pre> <p>This is the code that I'm using to accomplish this.</p> <pre><code>file = open("test.txt","r+")

started = False
beginning = 0  # start of the digits
done = False
num = 0  # building the number from digits

while not done:
    next = file.read(1)
    if ord(next) in range(48, 58):  # ascii values of 0-9
        started = True
        num *= 10
        num += int(next)
    elif started:  # has reached the end of the number
        done = True
    else:  # has not reached the beginning of the number
        beginning += 1

num += 1
file.seek(beginning, 0)
file.write(str(num))
</code></pre> <p>This code works, so long as the number is not 10^n-1 (9, 99, 999, etc) because in those cases, it writes more bytes than were previously in the number. As such, it will overwrite the characters that follow.</p> <p>So this brings me to the point. I need a way to write to the file that overwrites previously existing bytes, which I have, and a way to write to the file that does not overwrite previously existing bytes, which I don't have. Does such a mechanism exist in python, and if so, what is it?</p> <p>I have already tried opening the file using the line <code>file = open("test.txt","a+")</code> instead. When I do that, it always writes to the end, regardless of the seek point.</p> <p><code>file = open("test.txt","w+")</code> will not work because I need to keep the contents of the file while altering it, and files opened in any variant of w mode are wiped clean.</p> <p>I have also thought of solving my problem using a function like this:</p> <pre><code># file is assumed to be in r+ mode
def write(string, file, index=-1):
    if index != -1:
        file.seek(index, 0)
    remainder = file.read()
    file.seek(index)
    file.write(remainder + string)
</code></pre> <p>But I also want to be able to expand the solution to larger files, and reading the rest of the file single-handedly changes what I'm trying to accomplish from being O(1) to O(n). It also seems very non-Pythonic, since it seeks to accomplish the task in a less-than-straightforward way.</p> <p>It would also make my I/O operations inconsistent: I would have class methods (<code>file.read()</code> and <code>file.write()</code>) to read from the file and write to it replacing old characters, but an external function to insert without replacing.</p> <p>If I make the code inline, rather than a function, it means I have to write several of the same lines of code every time I try to write without replacing, which is also non-Pythonic.</p> <p>To reiterate my question, is there a more straightforward way to do this, or am I stuck with the function?</p>
0
2016-08-04T19:24:06Z
38,776,272
<p>No, the disk doesn't work like you think it does. You have to remember that your file is stored on disk as one contiguous chunk of data.* Your disk happens to be wound up in a great big spool, a bit like a record, but if you were to unwind your file, you'd get something that looks like this:</p> <pre><code>+------------------------------------------------------------+
|  Thing 1. String                                           |
+------------------------------------------------------------+
^  ^                              ^                          ^
|  |                              |                          |
|  Start of file                  End of File                |
Start of disk                                      End of disk
</code></pre> <p>As you've discovered, there's no way to simply insert data in the middle. Generally speaking, that wouldn't be possible at all without physically altering your disk. And who wants to do that? Especially when just flipping the magnetic bits on your disk is so much easier and faster. In order to do what you want to do, you have to read the bytes that you want to overwrite, then start writing down your new ones. It might look something like this:</p> <ul> <li>Open the file</li> <li>Seek to the point of insert</li> <li>Read the current byte (keeping it in a buffer, since it is about to be displaced)</li> <li>Seek backward one byte</li> <li>Write down the first byte of the new string</li> <li>Read the next byte</li> <li>Seek backward one byte</li> <li>Write down the next byte of the new string (and, once the new string is exhausted, the buffered bytes)</li> <li>Repeat until all the bytes have been written to disk</li> <li>close the file</li> </ul> <p>Of course, this might be a little bit on the slow side, due to all the seeking back &amp; forth in the file. It <em>might</em> be faster to read each line, and then seek back to the previous location in the file. It should be relatively straightforward to implement something like this in Python, but as you've discovered, there are system limitations that Python can't really overcome.</p> <p>*<sup><sub>Unless the files are fragmented, but we're living in an ideal world where gravity adheres to 9.8 m/s<sup>2</sup> and the Earth is a perfect sphere.</sub></sup></p>
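To make the read-then-rewrite idea concrete, here is a small Python sketch. The `insert_at` helper and the temp-file setup are illustrative additions, not part of the original answer; it rewrites the whole tail in one go rather than byte by byte.

```python
import os
import tempfile

def insert_at(path, index, text):
    """Insert text at offset index by rewriting the tail of the file.

    Disks have no native insert, so every byte from the insertion point
    onward is read into memory and written back, shifted by len(text).
    """
    with open(path, "r+") as f:
        f.seek(index)
        tail = f.read()        # the bytes that are about to be displaced
        f.seek(index)
        f.write(text + tail)   # new text first, then the old tail

# Demonstration: turn "Thing 9." into "Thing 10." without clobbering " string"
path = os.path.join(tempfile.mkdtemp(), "test.txt")
with open(path, "w") as f:
    f.write("Thing 9. string")

with open(path, "r+") as f:
    f.seek(6)
    f.write("1")               # overwrite the "9": "Thing 1. string"
insert_at(path, 7, "0")        # insert the "0":   "Thing 10. string"

with open(path) as f:
    print(f.read())            # Thing 10. string
```

This is the O(n) approach the questioner wanted to avoid, but for files of this size it is by far the simplest correct option.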
1
2016-08-04T19:52:29Z
[ "python", "file", "io" ]
How to write a construction of code as a function
38,775,794
<p>I am new to programming and python and would like to write the following piece of code as a function using the 'def' 'return' construction:</p> <pre><code>df.loc[df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1, 'CONSECUTIVE_DAY'] = True
df.loc[(df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1) |
       (df['DATE_INT'].shift(1) - df['DATE_INT'] == -1), 'CONSECUTIVE_DAY'] = True
</code></pre> <p>My attempt fails with invalid syntax:</p> <pre><code>def ConsecutiveID(df, column ='DATE_INT'):
    return df.loc[df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1, 'CONSECUTIVE_DAY'] = True
    df.loc[(df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1) |
           (df['DATE_INT'].shift(1) - df['DATE_INT'] == -1), 'CONSECUTIVE_DAY'] = True
</code></pre> <p>My goal is to ultimately use my ConsecutiveID function as follows:</p> <pre><code>df.groupby(['COUNTY_GEOID_YEAR','TEMPBIN']).apply(ConsecutiveID)
</code></pre> <p>I am applying the split-apply-combine construction, where groupby splits my data and apply runs the function I would like to construct.</p> <p>My main question is how to write what I've called ConsecutiveID as a function. Thank you for any help.</p>
2
2016-08-04T19:26:12Z
38,776,195
<pre><code>def ConsecutiveID(df):
    df = df.copy()
    cond1 = df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1
    cond2 = df['DATE_INT'].shift(1) - df['DATE_INT'] == -1
    df.loc[cond1 | cond2, 'CONSECUTIVE_DAY'] = True
    return df
</code></pre>
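As a sketch of how the repaired function could then be used with the asker's groupby/apply pipeline. The sample values and the `group_keys=False` flag below are illustrative assumptions, not from the original post:

```python
import pandas as pd

def ConsecutiveID(df):
    df = df.copy()
    cond1 = df['DATE_INT'].shift(-1) - df['DATE_INT'] == 1
    cond2 = df['DATE_INT'].shift(1) - df['DATE_INT'] == -1
    df.loc[cond1 | cond2, 'CONSECUTIVE_DAY'] = True
    return df

# Invented sample: days 100/101 are consecutive, 105 and 107 are not
df = pd.DataFrame({
    'COUNTY_GEOID_YEAR': [1, 1, 1, 1],
    'TEMPBIN': ['a', 'a', 'a', 'a'],
    'DATE_INT': [100, 101, 105, 107],
})
result = df.groupby(['COUNTY_GEOID_YEAR', 'TEMPBIN'],
                    group_keys=False).apply(ConsecutiveID)
print(result['CONSECUTIVE_DAY'].tolist())
```

Rows whose neighbors are not exactly one day apart are left as NaN, since the `.loc` assignment only ever writes `True`.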
2
2016-08-04T19:48:02Z
[ "python", "python-2.7", "function", "pandas", "split-apply-combine" ]
wxPython return value from Wizard to calling Frame
38,775,804
<p>My problem is as follows:</p> <p>I am designing a wizard for the construction of an object to be added to a list of objects in the calling frame of my program. At the end of the wizard I would like to pass the newly created object back to the calling frame to be inserted into the list. In order to simulate this basic functionality on an abstract basis I have constructed the following, scaled down app:</p> <p>mainframe.py</p> <pre><code>import wx
import wiz_test

class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self,None,title="Main")
        panel = wx.Panel(self)
        callButton = wx.Button(panel, label = "Call Wizard")
        callButton.Bind(wx.EVT_BUTTON,self.launchWizard)
        self.Show()

    def launchWizard(self,event):
        wiz = wiz_test.WizObj(self)
        a = 0
        if wiz == wx.wizard.EVT_WIZARD_FINISHED:
            a = wiz.answer
        print a

if __name__ == '__main__':
    app = wx.App(False)
    frame = MainFrame()
    app.MainLoop()
</code></pre> <p>wiz_test.py</p> <pre><code>import wx
import wx.wizard as wiz

class WizPage(wiz.WizardPageSimple):
    def __init__(self, parent):
        self.answer = 3
        wiz.WizardPageSimple.__init__(self, parent)
        sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(sizer)
        title = wx.StaticText(self, -1, "Wizard Page")
        title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
        sizer.Add(title, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
        sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)

class WizObj(object):
    def __init__(self,parent):
        wizard = wx.wizard.Wizard(None, -1, "Simple Wizard")
        page1 = WizPage(wizard)
        wizard.FitToPage(page1)
        wizard.RunWizard(page1)
        wizard.Destroy()

if __name__ == "__main__":
    app = wx.App(False)
    main()
    app.MainLoop()
</code></pre> <p>The ultimate goal in this small example is to get the MainFrame instance to output the value '3', derived from the .answer member variable of the WizObj instance, when the wx.wizard.EVT_WIZARD_FINISHED event is triggered. However it is clearly not working at this point, as the current code only prints '0'. Am I approaching this the correct way?
Should I be binding the EVT_WIZARD_FINISHED event instead, and if so, how would I access that from Mainframe?</p>
0
2016-08-04T19:26:39Z
38,837,666
<p>I was able to solve this problem through the use of the "pubsub" capability within the wxPython library. Specifically, I added a pub.subscribe() call immediately prior to the instantiation of my wizard within the calling frame. Inside of the wizard I pass the value via pub.sendMessage() just before destroying the wizard. It is important to note that the value had to be passed as a keyword argument in order for the pubsub send to work.</p> <p>The following code is the modified version of the original code, which now functions.</p> <p><strong>MainFrame.py</strong></p> <pre><code>import wx
import wiz_test
from wx.lib.pubsub import pub

class MainFrame(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self,None,title="Main")
        panel = wx.Panel(self)
        callButton = wx.Button(panel, label = "Call Wizard")
        callButton.Bind(wx.EVT_BUTTON,self.launchWizard)
        self.Show()

    def catch_stuff(self,a):
        print a

    def launchWizard(self,event):
        pub.subscribe(self.catch_stuff,'valPass')
        wiz = wiz_test.WizObj(self)

if __name__ == '__main__':
    app = wx.App(False)
    frame = MainFrame()
    app.MainLoop()
</code></pre> <p><strong>wiz_test.py</strong></p> <pre><code>import wx
import wx.wizard as wiz
from wx.lib.pubsub import pub

class WizPage(wiz.WizardPageSimple):
    def __init__(self, parent):
        self.answer = 3
        wiz.WizardPageSimple.__init__(self, parent)
        sizer = wx.BoxSizer(wx.VERTICAL)
        self.SetSizer(sizer)
        title = wx.StaticText(self, -1, "Wizard Page")
        title.SetFont(wx.Font(18, wx.SWISS, wx.NORMAL, wx.BOLD))
        sizer.Add(title, 0, wx.ALIGN_CENTRE|wx.ALL, 5)
        sizer.Add(wx.StaticLine(self, -1), 0, wx.EXPAND|wx.ALL, 5)

#----------------------------------------------------------------------
class WizObj(object):
    def __init__(self,parent):
        wizard = wx.wizard.Wizard(None, -1, "Simple Wizard")
        page1 = WizPage(wizard)
        wizard.FitToPage(page1)
        wizard.RunWizard(page1)
        pub.sendMessage('valPass',a = page1.answer)
        wizard.Destroy()

#----------------------------------------------------------------------
if __name__ == "__main__":
    app = wx.App(False)
    WizObj(None)
    app.MainLoop()
</code></pre> <p>The result is that the console prints the value <strong>3</strong>, which was retrieved from the called wizard.</p>
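The publish/subscribe mechanism that carries the value can be demonstrated without any GUI. The broker class below is a toy stand-in for wx.lib.pubsub's `pub` object, written only to show the flow; it is not the real library implementation:

```python
class Pub(object):
    """Minimal publish/subscribe broker mimicking wx.lib.pubsub's pub API."""
    def __init__(self):
        self._listeners = {}

    def subscribe(self, listener, topic):
        # Register a callback to be invoked when `topic` is published
        self._listeners.setdefault(topic, []).append(listener)

    def sendMessage(self, topic, **kwargs):
        # Deliver the keyword arguments to every listener on this topic
        for listener in self._listeners.get(topic, []):
            listener(**kwargs)

pub = Pub()
received = []

# The frame subscribes before launching the wizard...
pub.subscribe(lambda a: received.append(a), 'valPass')
# ...and the wizard publishes its result just before it is destroyed.
pub.sendMessage('valPass', a=3)

print(received)   # [3]
```

The key property, which makes it a good fit for wizard-to-frame communication, is that the publisher never needs a reference to the subscriber.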
0
2016-08-08T20:07:09Z
[ "python", "events", "wxpython", "frame", "wizard" ]
Python pandas calculate time until value in a column is greater than it is in current period
38,775,814
<p>I have a pandas dataframe in python with several columns and a datetime stamp. I want to create a new column that calculates the time until output is less than what it is in the current period.</p> <p>My current table looks something like this:</p> <pre><code>datetime              output
2014-05-01 01:00:00   3
2014-05-01 01:00:01   2
2014-05-01 01:00:02   3
2014-05-01 01:00:03   2
2014-05-01 01:00:04   1
</code></pre> <p>I'm trying to get my table to have an extra column and look like this:</p> <pre><code>datetime              output  secondsuntildecrease
2014-05-01 01:00:00   3       1
2014-05-01 01:00:01   2       3
2014-05-01 01:00:02   3       1
2014-05-01 01:00:03   2       1
2014-05-01 01:00:04   1
</code></pre> <p>thanks in advance!</p>
4
2016-08-04T19:27:07Z
38,776,309
<pre><code>upper_triangle = np.triu(df.output.values &lt; df.output.values[:, None])
df['s_until_dec'] = df['datetime'][upper_triangle.argmax(axis=1)].values - df['datetime']
df.loc[~upper_triangle.any(axis=1), 's_until_dec'] = np.nan

df
             datetime  output s_until_dec
0 2014-05-01 01:00:00       3    00:00:01
1 2014-05-01 01:00:01       2    00:00:03
2 2014-05-01 01:00:02       3    00:00:01
3 2014-05-01 01:00:03       2    00:00:01
4 2014-05-01 01:00:04       1         NaT
</code></pre> <p>Here's how it works:</p> <p><code>df.output.values &lt; df.output.values[:, None]</code> creates a pairwise comparison matrix with broadcasting (<code>[:, None]</code> creates a new axis):</p> <pre><code>df.output.values &lt; df.output.values[:, None]
Out:
array([[False,  True, False,  True,  True],
       [False, False, False, False,  True],
       [False,  True, False,  True,  True],
       [False, False, False, False,  True],
       [False, False, False, False, False]], dtype=bool)
</code></pre> <p>Here, for example, <code>output[1]</code> is smaller than <code>output[0]</code>, so the matrix element at (0, 1) is True. We only care about later timestamps, so I used <code>np.triu</code> to get the upper triangle of this matrix. <code>argmax()</code> then gives the index of the first <code>True</code> value in each row, and indexing the datetime column with those positions gives the corresponding dates. The last row is the exception: it is all <code>False</code>, so its argmax is meaningless and the result needs to be replaced with <code>np.nan</code>. The <code>.loc</code> line checks the matrix for that case and does the replacement.</p>
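The broadcasting step can be checked in isolation with plain NumPy, using the `output` values from the example (the variable names here are mine, not the answer's):

```python
import numpy as np

output = np.array([3, 2, 3, 2, 1])

# cmp[i, j] is True where output[j] < output[i]
cmp = output < output[:, None]

# Keep only later positions (j > i); the diagonal is always False anyway
upper = np.triu(cmp)

# Column index of the first later element smaller than element i.
# argmax returns 0 for all-False rows, which is why the answer's
# NaN clean-up step is needed.
first_smaller = upper.argmax(axis=1)
print(first_smaller)   # [1 4 3 4 0]

# Rows with no later decrease at all
no_decrease = ~upper.any(axis=1)
print(no_decrease)     # [False False False False  True]
```

Note the quadratic memory cost: the comparison matrix has n² entries, which is fine for a few thousand rows but would need a different approach (e.g. a stack-based scan) for very long series.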
3
2016-08-04T19:54:24Z
[ "python", "datetime", "pandas", "time" ]