Comparing two dataframes and getting the differences
I have two dataframes. Examples:

df1:

    Date        Fruit   Num   Color
    2013-11-24  Banana  22.1  Yellow
    2013-11-24  Orange   8.6  Orange
    2013-11-24  Apple    7.6  Green
    2013-11-24  Celery  10.2  Green

df2:

    Date        Fruit   Num   Color
    2013-11-24  Banana  22.1  Yellow
    2013-11-24  Orange   8.6  Orange
    2013-11-24  Apple    7.6  Green
    2013-11-24  Celery  10.2  Green
    2013-11-25  Apple   22.1  Red
    2013-11-25  Orange   8.6  Orange

Each dataframe has the Date as an index, and both dataframes have the same structure. What I want to do is compare these two dataframes and find which rows are in df2 that aren't in df1. I want to compare the Date (index) and the first column (Banana, Apple, etc.) to see if they exist in df2 vs df1. I have tried the following:

Outputting difference in two pandas dataframes side by side - highlighting the difference
Comparing two pandas dataframes for differences

For the first approach I get this error: "Exception: Can only compare identically-labeled DataFrame objects". I have tried removing the Date as index but get the same error. For the second approach, I get the assert to return False but cannot figure out how to actually see the differing rows. Any pointers would be welcome.
The approach df1 != df2 works only for dataframes with identical rows and columns. In fact, all dataframe axes are compared with the _indexed_same method, and an exception is raised if differences are found, even in the order of columns/indices. If I understood you correctly, you don't want to find changes, but the symmetric difference. One approach for that is to concatenate the dataframes:

    >>> df = pd.concat([df1, df2])
    >>> df = df.reset_index(drop=True)

group by:

    >>> df_gpby = df.groupby(list(df.columns))

get the index of unique records:

    >>> idx = [x[0] for x in df_gpby.groups.values() if len(x) == 1]

filter:

    >>> df.reindex(idx)
             Date   Fruit   Num   Color
    9  2013-11-25  Orange   8.6  Orange
    8  2013-11-25   Apple  22.1     Red
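For newer pandas versions, another sketch of the same idea uses merge with indicator=True (this assumes both frames share the same columns and requires pandas 0.17+):

    import pandas as pd

    # outer-join the two frames and let pandas mark the origin of each row
    merged = df1.reset_index().merge(df2.reset_index(), how='outer', indicator=True)
    only_in_df2 = merged[merged['_merge'] == 'right_only'].drop('_merge', axis=1)

The _merge column takes the values left_only, right_only and both, so the same call also recovers rows unique to df1.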
How to document a module constant in Python?
I have a module, errors.py, in which several global constants are defined (note: I understand that Python doesn't have constants, but I've defined them by convention using UPPERCASE).

    """Indicates some unknown error."""
    API_ERROR = 1

    """Indicates that the request was bad in some way."""
    BAD_REQUEST = 2

    """Indicates that the request is missing required parameters."""
    MISSING_PARAMS = 3

Using reStructuredText, how can I document these constants? As you can see I've listed a docstring above them, but I haven't found any documentation that says to do that; I've just done it as a guess.
Unfortunately, variables (and constants) do not have docstrings. After all, a variable is just a name for an integer, and you wouldn't want to attach a docstring to the number 1 the way you would to a function or class object. If you look at almost any module in the stdlib, like pickle, you will see that the only documentation they use is comments. And yes, that means that help(pickle) only shows this:

    DATA
        APPEND = b'a'
        APPENDS = b'e'
        …

… completely ignoring the comments. If you want your docs to show up in the built-in help, you have to add them to the module's docstring, which is not exactly ideal. But Sphinx can do more than the built-in help can. You can configure it to extract the comments on the constants, or use autodata to do it semi-automatically. For example:

    #: Indicates some unknown error.
    API_ERROR = 1

Multiple #: lines before any assignment statement, or a single #: comment to the right of the statement, work effectively the same as docstrings on objects picked up by autodoc. That includes handling inline rST and auto-generating an rST header for the variable name; there's nothing extra you have to do to make that work.

As a side note, you may want to consider using an enum instead of separate constants like this. If you're not using Python 3.4 (which you probably aren't yet…), there's a backport enum package for 3.2+, or flufl.enum (which is not identical, but it is similar, as it was the main inspiration for the stdlib module) for 2.6+. Enum instances (in the stdlib/backport version, not flufl.enum) can even have docstrings:

    class MyErrors(enum.Enum):
        """Indicates some unknown error."""
        API_ERROR = 1

        """Indicates that the request was bad in some way."""
        BAD_REQUEST = 2

        """Indicates that the request is missing required parameters."""
        MISSING_PARAMS = 3

Although they unfortunately don't show up in help(MyErrors.MISSING_PARAMS), they are docstrings that Sphinx autodoc can pick up.
Programmatically Download Content from Shared Dropbox Folder Links
I'm building an application to automatically trigger a download of a Dropbox file shared with a user (a shared file/folder link). This was straightforward to implement for Dropbox links to files, as outlined here. Unfortunately it doesn't work for shared folders. Does anyone have suggestions on how I could either download all of its contents (maybe get a list of the file links inside it to download?) or download a zip of the folder? Currently I can go to the URL and do some screen-scraping to try and get the contents list, but the advantage of the solution described in the linked Dropbox blog entry for files is that no scraping is needed, so it's much more reliable and efficient.
Dropbox's support team just filled me in on the best way to do this: Just add ?dl=1 to the end of the shared link. That'll give you a zipped version of the shared folder. So if the link shared with a user is https://www.dropbox.com/sh/xyz/xyz-YZ (or similar, which links to a shared folder), to download a zipped version of that folder just access https://www.dropbox.com/sh/xyz/xyz-YZ?dl=1 Hope this helps someone else also.
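For anyone scripting this, a minimal sketch of fetching the zip with the requests library (the shared-link URL is the placeholder from the example above, and the output file name is mine):

    import requests

    shared_link = 'https://www.dropbox.com/sh/xyz/xyz-YZ'  # placeholder folder link
    resp = requests.get(shared_link, params={'dl': '1'}, stream=True)
    resp.raise_for_status()
    with open('folder.zip', 'wb') as fh:
        for chunk in resp.iter_content(chunk_size=65536):
            fh.write(chunk)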
Traceback in dynamic programming implementation of Needleman-Wunsch algorithm
I almost have my Needleman-Wunsch implementation working, but I am confused about how to handle the traceback in a specific case. The idea is that in order to reconstruct the sequence (the longest path), we re-calculate to determine the matrix the score came from. The edge case I am having a problem with is when the bottom-right score is not in the match matrix but in the insert-column matrix (meaning that the resulting traced-back sequence should have an insert). These sequences are being recorded in the a2m format, where inserts in the sequence are recorded as a lower-case character. So in the final output the alignment of ZZ to AAAC should be AAac. When I trace back by hand I end up with AAAc, because I only visit the Ic matrix once. Here is a picture of my whiteboard. As you can see, I have three black arrows and one green arrow, which is why my traceback gives me AAAc. Should I be counting the first cell, then stopping at position 1,1? I am not sure how I would change my implementation to do so. Note that the substitution matrix being used here is BLOSUM62. The recurrence relations are:

    M(i,j)  = max(Ic(i-1,j-1)+subst, M(i-1,j-1)+subst, Ir(i-1,j-1)+subst)
    Ic(i,j) = max(Ic(i,j-1)-extend, M(i,j-1)-open, Ir(i,j-1)-double)
    Ir(i,j) = max(Ic(i-1,j)-double, M(i-1,j)-open, Ir(i-1,j)-extend)

EDIT: Here is the traceback_col_seq function rewritten to be cleaner. Note that score_cell now returns thisM, thisC, thisR instead of the max of these. This version scores the alignment as AaAc, still having the same problem, and now with another problem: why does it go into Ic again at 1,2? This version is much more legible, however.

    def traceback_col_seq(self):
        i, j = self.maxI-1, self.maxJ-1
        self.traceback = list()
        matrixDict = {0:'M',1:'Ic',2:'Ir',3:'M',4:'Ic',5:'Ir',6:'M',7:'Ic',8:'Ir'}
        while i > 0 or j > 0:
            chars = self.col_seq[j-1] + self.row_seq[i-1]
            thisM, thisC, thisR = self.score_cell(i, j, chars)
            cell = thisM + thisC + thisR
            prevMatrix = matrixDict[cell.index(max(cell))]
            print(cell, prevMatrix, i, j)
            if prevMatrix == 'M':
                i -= 1; j -= 1
                self.traceback.append(self.col_seq[j])
            elif prevMatrix == 'Ic':
                j -= 1
                self.traceback.append(self.col_seq[j].lower())
            elif prevMatrix == 'Ir':
                i -= 1
                self.traceback.append('-')
        return ''.join(self.traceback[::-1])

Here is the Python class that generates the dynamic programming matrices and traces back the alignment. There is also a score function used to check whether the alignment produced is correct.

    from numpy import zeros  # needed by init_array

    class global_aligner():
        def __init__(self, subst, open=12, extend=1, double=3):
            self.extend, self.open, self.double, self.subst = extend, open, double, subst

        def __call__(self, row_seq, col_seq):
            #add alphabet error checking?
            self.score_align(row_seq, col_seq)
            return self.traceback_col_seq()

        def init_array(self):
            """initialize three numpy arrays, set values in 1st column and row"""
            self.M = zeros((self.maxI, self.maxJ), float)
            self.Ic = zeros((self.maxI, self.maxJ), float)
            self.Ir = zeros((self.maxI, self.maxJ), float)
            for i in xrange(1, self.maxI):
                self.M[i][0], self.Ic[i][0], self.Ir[i][0] = \
                    -float('inf'), -float('inf'), -(self.open+self.extend*(i-1))
            for j in xrange(1, self.maxJ):
                self.M[0][j], self.Ir[0][j], self.Ic[0][j] = \
                    -float('inf'), -float('inf'), -(self.open+self.extend*(j-1))
            self.Ic[0][0] = self.Ir[0][0] = -float('inf')

        def score_cell(self, i, j, chars):
            """score a matrix cell based on the 9 total neighbors (3 each direction)"""
            thisM = [self.M[i-1][j-1]+self.subst[chars], self.Ic[i-1][j-1]+self.subst[chars],
                     self.Ir[i-1][j-1]+self.subst[chars]]
            thisC = [self.M[i][j-1]-self.open, self.Ic[i][j-1]-self.extend,
                     self.Ir[i][j-1]-self.double]
            thisR = [self.M[i-1][j]-self.open, self.Ic[i-1][j]-self.double,
                     self.Ir[i-1][j]-self.extend]
            return max(thisM), max(thisC), max(thisR)

        def score_align(self, row_seq, col_seq):
            """build dynamic programming matrices to align two sequences"""
            self.row_seq, self.col_seq = list(row_seq), list(col_seq)
            self.maxI, self.maxJ = len(self.row_seq)+1, len(self.col_seq)+1
            self.init_array()  #initialize arrays
            for i in xrange(1, self.maxI):
                row_char = self.row_seq[i-1]
                for j in xrange(1, self.maxJ):
                    chars = row_char+self.col_seq[j-1]
                    self.M[i][j], self.Ic[i][j], self.Ir[i][j] = self.score_cell(i, j, chars)

        def traceback_col_seq(self):
            """trace back column sequence in matrices in a2m format"""
            i, j = self.maxI-1, self.maxJ-1
            self.traceback = list()
            #find which matrix to start in
            #THIS IS WHERE THE PROBLEM LIES I THINK
            cell = (self.M[i][j], self.Ic[i][j], self.Ir[i][j])
            prevMatrix = cell.index(max(cell))
            while i > 1 and j > 1:
                if prevMatrix == 0:  #M matrix
                    i -= 1; j -= 1  #step up diagonally
                    prevChars = self.row_seq[i-1]+self.col_seq[j-1]
                    diag = self.score_cell(i, j, prevChars)  #re-score diagonal cell
                    prevMatrix = diag.index(max(diag))  #determine which matrix that was
                    self.traceback.append(self.col_seq[j])
                elif prevMatrix == 1:  #Ic matrix
                    j -= 1
                    prevChars = self.row_seq[i-1]+self.col_seq[j-1]
                    left = self.score_cell(i, j, prevChars)
                    prevMatrix = left.index(max(left))
                    self.traceback.append(self.col_seq[j].lower())
                elif prevMatrix == 2:  #Ir matrix
                    i -= 1
                    prevChars = self.row_seq[i-1]+self.col_seq[j-1]
                    up = self.score_cell(i, j, prevChars)
                    prevMatrix = up.index(max(up))
                    self.traceback.append('-')
            for j in xrange(j, 0, -1):  #hit top of matrix before ending, add chars
                self.traceback.append(self.col_seq[j-1])
            for i in xrange(i, 0, -1):  #hit left of matrix before ending, add gaps
                self.traceback.append('-')
            print(''.join(self.traceback[::-1]))
            return ''.join(self.traceback[::-1])

        def score_a2m(self, s1, s2):
            """scores an a2m alignment of two sequences. I believe this function
            correctly scores alignments and is used to test my alignments.
            The value produced by this function should be the same as the largest
            value in the bottom right of the three matrices"""
            s1, s2 = list(s1.strip('.')), list(s2.strip('.'))
            s1_pos, s2_pos = len(s1)-1, len(s2)-1
            score, gap = 0, False
            while s1_pos >= 0 and s2_pos >= 0:
                if s1[s1_pos].islower() and gap is False:
                    score -= self.open; s1_pos -= 1; gap = True
                elif s1[s1_pos].islower() and gap is True:
                    score -= self.extend; s1_pos -= 1
                elif s2[s2_pos].islower() and gap is False:
                    score -= self.open; s2_pos -= 1; gap = True
                elif s2[s2_pos].islower() and gap is True:
                    score -= self.extend; s2_pos -= 1
                elif s1[s1_pos] == '-' and gap is False:
                    score -= self.open; s1_pos -= 1; s2_pos -= 1; gap = True
                elif s1[s1_pos] == '-' and gap is True:
                    score -= self.extend; s1_pos -= 1; s2_pos -= 1
                elif s2[s2_pos] == '-' and gap is False:
                    score -= self.open; s1_pos -= 1; s2_pos -= 1; gap = True
                elif s2[s2_pos] == '-' and gap is True:
                    score -= self.extend; s1_pos -= 1; s2_pos -= 1
                elif gap is True:
                    score += self.subst[s1[s1_pos].upper() + s2[s2_pos].upper()]
                    s1_pos -= 1; s2_pos -= 1; gap = False
                else:
                    score += self.subst[s1[s1_pos].upper() + s2[s2_pos].upper()]
                    s1_pos -= 1; s2_pos -= 1
            if s1_pos >= 0 and gap is True:
                score -= self.extend*s1_pos
            elif s1_pos >= 0 and gap is False:
                score -= self.open+s1_pos*self.extend
            if s2_pos >= 0 and gap is True:
                score -= self.extend*s2_pos
            elif s2_pos >= 0 and gap is False:
                score -= self.open+s2_pos*self.extend
            return score

    test = global_aligner(blosumMatrix)
    s1, s2 = 'ZZ', 'AAAC'
    test.score_align(s1, s2)
    align = test.traceback_col_seq()
    print('This score: ', test.score_a2m(s1, align))
    print('Correct score: ', test.score_a2m(s1, 'AAac'))

Blosum parser:

    def parse_blosum(blosumFile):
        blosumMatrix, commentFlag = dict(), False
        for line in blosumFile:
            if not line.startswith('#') and not commentFlag:
                alphabet = line.rstrip().split()
                commentFlag = True
            elif commentFlag:
                line = line.rstrip().split()
                thisChar, line = line[0], line[1:]
                for x in xrange(len(line)):
                    alphaChar, thisValue = alphabet[x], line[x]
                    blosumMatrix[thisChar+alphaChar] = int(thisValue)
        return blosumMatrix
    def traceback_col_seq(self):
        """
        Traces back the column sequence to determine the final global alignment.
        Recalculates the score using score_cell.
        """
        i, j = self.maxI-1, self.maxJ-1
        self.traceback = list()  #stores final sequence
        matrixDict = {0:'M', 1:'Ic', 2:'Ir'}  #mapping between numeric value and matrix
        chars = self.col_seq[j-1] + self.row_seq[i-1]  #store first characters
        thisM, thisC, thisR = self.score_cell(i, j, chars)
        cell = max(thisM), max(thisC), max(thisR)  #determine where to start
        prevMatrix = matrixDict[cell.index(max(cell))]  #store where to go first
        while i > 0 or j > 0:  #loop until the top left corner of the matrix is reached
            if prevMatrix == 'M':
                self.traceback.append(self.col_seq[j-1])
                i -= 1; j -= 1
                prevMatrix = matrixDict[thisM.index(max(thisM))]
            elif prevMatrix == 'Ic':
                self.traceback.append(self.col_seq[j-1].lower())
                j -= 1
                prevMatrix = matrixDict[thisC.index(max(thisC))]
            elif prevMatrix == 'Ir':
                self.traceback.append('-')
                i -= 1
                prevMatrix = matrixDict[thisR.index(max(thisR))]
            chars = self.col_seq[j-1] + self.row_seq[i-1]  #store next characters
            thisM, thisC, thisR = self.score_cell(i, j, chars)  #rescore next cell
        return ''.join(self.traceback[::-1])
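A quick way to sanity-check this version, assuming it replaces traceback_col_seq on the global_aligner class from the question and that a BLOSUM62 table is parsed with parse_blosum (the file path here is hypothetical):

    with open('BLOSUM62.txt') as fh:        # hypothetical path to a BLOSUM62 table
        blosumMatrix = parse_blosum(fh)
    aligner = global_aligner(blosumMatrix)
    aligner.score_align('ZZ', 'AAAC')
    print(aligner.traceback_col_seq())      # the expected a2m alignment is 'AAac'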
Filter Pandas DataFrame by time index
I have a pandas DataFrame from 6:36 AM to 5:31 PM. I want to remove all observations where the time is less than 8:00:00 AM. Here is my attempt:

    df = df[df.index < '2013-10-16 08:00:00']

This does nothing, please help.
You want

    df.loc[df.index < '2013-10-16 08:00:00']

since you're selecting by label (the index) and not by value; see the pandas documentation on selection by label.
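If the intent is to filter by time of day across all dates (rather than by one absolute timestamp), a sketch using DatetimeIndex helpers; this assumes df has a DatetimeIndex:

    import datetime

    df = df.loc[df.index.time >= datetime.time(8, 0)]   # keep rows at/after 8:00 AM
    # or keep an explicit time-of-day window:
    df = df.between_time('08:00', '17:31')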
Remove NaN from pandas series
Is there a way to remove NaN values from a pandas Series? I have a Series that may or may not have some NaN values in it, and I'd like to return a copy of the Series with all the NaNs removed.
    >>> s = pd.Series([1, 2, 3, 4, np.NaN, 5, np.NaN])
    >>> s[~s.isnull()]
    0    1
    1    2
    2    3
    3    4
    5    5

Update: an even better approach, as @DSM suggested in the comments, is to use pandas.Series.dropna():

    >>> s.dropna()
    0    1
    1    2
    2    3
    3    4
    5    5
Django python manage.py migrate
I have installed portable Python 2.7.5.1 and Django 1.6 on Win7. I followed the first polls tutorial's instructions and got an error at the migrate stage, python manage.py migrate:

    C:\Natan\Dev\Portable Python 2.7.5.1\App\Scripts\mysite>..\..\python.exe manage.py migrate
    Unknown command: 'migrate'
    Type 'manage.py help' for usage.

Any idea?
If you've installed 1.6, you should use the 1.6 tutorial, not the one for the development version.
Error: That port is already in use.
When I try to restart the Django development server, it reports that the port is already in use. This happens specifically on Ubuntu 10.x, not on all operating systems. How might I resolve this on the system I am working on? Any suggestions?
A simpler solution: just type

    sudo fuser -k 8000/tcp

This should kill all the processes associated with port 8000.

EDIT: For OS X users, you can use

    sudo lsof -t -i tcp:8000 | xargs kill -9
How references to variables are resolved in Python
This message is a bit long with many examples, but I hope it will help me and others to better grasp the full story of variables and attribute lookup in Python 2.7. I am using the terms of PEP 227 (http://www.python.org/dev/peps/pep-0227/) for code blocks (such as modules, class definitions, function definitions, etc.) and variable bindings (such as assignments, argument declarations, class and function declarations, for loops, etc.). I use the term variables for names that can be called without a dot, and attributes for names that need to be qualified with an object name (such as obj.x for the attribute x of object obj).

There are three scopes in Python for all code blocks but functions:

Local
Global
Builtin

There are four scopes in Python for functions only (according to PEP 227):

Local
Enclosing functions
Global
Builtin

The rules for binding a variable to a block and finding it there are quite simple:

any binding of a variable to an object in a block makes this variable local to this block, unless the variable is declared global (in that case the variable belongs to the global scope)
a reference to a variable is looked up using the LGB rule (local, global, builtin) for all blocks but functions
a reference to a variable is looked up using the LEGB rule (local, enclosing, global, builtin) for functions only

Let me now take examples validating these rules and showing many special cases. For each example, I give my understanding. Please correct me if I am wrong. For the last example, I don't understand the outcome.

example 1:

    x = "x in module"
    class A():
        print "A: " + x           #x in module
        x = "x in class A"
        print locals()
        class B():
            print "B: " + x       #x in module
            x = "x in class B"
            print locals()
            def f(self):
                print "f: " + x   #x in module
                self.x = "self.x in f"
                print x, self.x
                print locals()

    >>> A.B().f()
    A: x in module
    {'x': 'x in class A', '__module__': '__main__'}
    B: x in module
    {'x': 'x in class B', '__module__': '__main__'}
    f: x in module
    x in module self.x in f
    {'self': <__main__.B instance at 0x00000000026FC9C8>}

There is no nested scope for classes (rule LGB), and a function in a class cannot access the attributes of the class without using a qualified name (self.x in this example). This is well described in PEP 227.

example 2:

    z = "z in module"
    def f():
        z = "z in f()"
        class C():
            z = "z in C"
            def g(self):
                print z
                print C.z
        C().g()
    f()

    >>>
    z in f()
    z in C

Here variables in functions are looked up using the LEGB rule, but if a class is in the path, the class arguments are skipped. Here again, this is what PEP 227 explains.

example 3:

    var = 0
    def func():
        print var
        var = 1

    >>> func()
    Traceback (most recent call last):
      File "<pyshell#102>", line 1, in <module>
        func()
      File "C:/Users/aa/Desktop/test2.py", line 25, in func
        print var
    UnboundLocalError: local variable 'var' referenced before assignment

We would expect a dynamic language such as Python to resolve everything dynamically. But this is not the case for functions: local variables are determined at compile time. PEP 227 and http://docs.python.org/2.7/reference/executionmodel.html describe this behavior this way: "If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block."
example 4:

    x = "x in module"
    class A():
        print "A: " + x
        x = "x in A"
        print "A: " + x
        print locals()
        del x
        print locals()
        print "A: " + x

    >>>
    A: x in module
    A: x in A
    {'x': 'x in A', '__module__': '__main__'}
    {'__module__': '__main__'}
    A: x in module

But we see here that this statement in PEP 227, "If a name binding operation occurs anywhere within a code block, all uses of the name within the block are treated as references to the current block.", is wrong when the code block is a class. Moreover, for classes, it seems that local name binding is not made at compile time, but during execution using the class namespace. In that respect, PEP 227 and the execution model in the Python docs are misleading and, for some parts, wrong.

example 5:

    x = 'x in module'
    def f2():
        x = 'x in f2'
        def myfunc():
            x = 'x in myfunc'
            class MyClass(object):
                x = x
                print x
            return MyClass
        myfunc()
    f2()

    >>>
    x in module

My understanding of this code is the following. The instruction x = x first looks up the object that the right-hand x refers to. In that case, the object is looked up locally in the class, then, following the LGB rule, in the global scope, where it is the string 'x in module'. Then a local attribute x of MyClass is created in the class dictionary and pointed at the string object.

example 6: Now here is an example I cannot explain. It is very close to example 5; I am just changing the local MyClass attribute from x to y.

    x = 'x in module'
    def f2():
        x = 'x in f2'
        def myfunc():
            x = 'x in myfunc'
            class MyClass(object):
                y = x
                print y
            return MyClass
        myfunc()
    f2()

    >>>
    x in myfunc

Why in this case is the x reference in MyClass looked up in the innermost function?
In an ideal world, you'd be right and some of the inconsistencies you found would be wrong. However, CPython has optimized some scenarios, specifically function locals. These optimizations, together with how the compiler and evaluation loop interact, and historical precedent, lead to the confusion.

Python translates code to bytecodes, and those are then interpreted by an interpreter loop. The 'regular' opcode for accessing a name is LOAD_NAME, which looks up a variable name as you would in a dictionary. LOAD_NAME will first look up a name as a local, and if that fails, looks for a global. LOAD_NAME throws a NameError exception when the name is not found.

For nested scopes, looking up names outside of the current scope is implemented using closures; if a name is not assigned to but is available in a nested (not global) scope, then such values are handled as a closure. This is needed because a parent scope can hold different values for a given name at different times; two calls to a parent function can lead to different closure values. So Python has LOAD_CLOSURE, MAKE_CLOSURE and LOAD_DEREF opcodes for that situation; the first two opcodes are used in loading and creating a closure for a nested scope, and LOAD_DEREF will load the closed-over value when the nested scope needs it.

Now, LOAD_NAME is relatively slow; it will consult two dictionaries, which means it has to hash the key first and run a few equality tests (if the name wasn't interned). If the name isn't local, then it has to do this again for a global. For functions, which can potentially be called tens of thousands of times, this can get tedious fast. So function locals have special opcodes. Loading a local name is implemented by LOAD_FAST, which looks up local variables by index in a special local-names array. This is much faster, but it does require that the compiler first determine whether a name is a local and not a global. To still be able to look up global names, another opcode, LOAD_GLOBAL, is used. The compiler explicitly optimizes for this case to generate the special opcodes. LOAD_FAST throws an UnboundLocalError exception when there is not yet a value for the name.

Class definition bodies, on the other hand, although they are treated much like a function, do not get this optimization step. Class definitions are not meant to be called all that often; most modules create classes once, when imported. Class scopes don't count when nesting either, so the rules are simpler. As a result, class definition bodies do not act like functions once you start mixing scopes up a little.

So, for non-function scopes, LOAD_NAME and LOAD_DEREF are used for locals and globals, and for closures, respectively. For functions, LOAD_FAST, LOAD_GLOBAL and LOAD_DEREF are used instead.

Note that class bodies are executed as soon as Python executes the class line! So in example 1, class B inside class A is executed as soon as class A is executed, which is when you import the module. In example 2, C is not executed until f() is called, not before.

Let's walk through your examples:

Example 1: You have nested a class A.B in a class A. Class bodies do not form nested scopes, so even though the A.B class body is executed when class A is executed, the compiler will use LOAD_NAME to look up x. A.B().f() is a function (bound to the B() instance as a method), so it uses LOAD_GLOBAL to load x. We'll ignore attribute access here; that's a very well-defined name pattern.
Example 2: Here f().C.z is at class scope, so the function f().C().g() will skip the C scope and look at the f() scope instead, using LOAD_DEREF.

Example 3: Here var was determined to be a local by the compiler, because you assign to it within the scope. Functions are optimized, so LOAD_FAST is used to look up the local and an exception is thrown.

Example 4: Now things get a little weird. class A is executed at class scope, so LOAD_NAME is being used. A.x was deleted from the locals dictionary for the scope, so the second access to x results in the global x being found instead; LOAD_NAME looked for a local first and didn't find it there, falling back to the global lookup. Yes, this appears inconsistent with the documentation. Python-the-language and CPython-the-implementation are clashing a little here. You are, however, pushing the boundaries of what is possible and practical in a dynamic language; checking whether x should have been a local in LOAD_NAME would be possible but takes precious execution time for a corner case that most developers will never run into.

Example 5: Now you are confusing the compiler. You used x = x in the class scope, and thus you are setting a local from a name outside of the scope. The compiler finds x is a local here (you assign to it), so it never considers that it could also be a scoped name. The compiler uses LOAD_NAME for all references to x in this scope, because this is not an optimized function body. When executing the class definition, x = x first requires you to look up x, so it uses LOAD_NAME to do so. No x is defined, LOAD_NAME doesn't find a local, so the global x is found. The resulting value is stored as a local, which happens to be named x as well. print x uses LOAD_NAME again, and now finds the new local x value.

Example 6: Here you did not confuse the compiler. You are creating a local y; x is not local, so the compiler recognizes it as a scoped name from the parent function f2().myfunc(). x is looked up with LOAD_DEREF from the closure, and stored in y.

You could see the confusion between 5 and 6 as a bug, albeit one that is not worth fixing in my opinion. It was certainly filed as such; see issue 532860 in the Python bug tracker, where it has been open for over 10 years now. The compiler could check for a scoped name x even when x is also a local, for that first assignment in example 5. Or LOAD_NAME could check whether the name is meant to be a local and throw an UnboundLocalError if no local was found, at the expense of more performance. Had this been in a function scope, LOAD_FAST would have been used for example 5, and an UnboundLocalError would be thrown immediately. However, as the referenced bug shows, the behaviour is retained for historical reasons. There probably is code out there today that would break were this bug fixed.
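A quick way to see these opcodes for yourself is the dis module; a minimal sketch (Python 2.7, function names illustrative, output abbreviated and version-dependent):

    import dis

    def reads_global():
        print var          # never assigned here, so the compiler emits LOAD_GLOBAL

    def reads_local():
        var = 1
        print var          # assigned in this scope, so the compiler emits LOAD_FAST

    dis.dis(reads_global)  # disassembly shows LOAD_GLOBAL for var
    dis.dis(reads_local)   # disassembly shows STORE_FAST / LOAD_FAST for var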
Limiting/throttling the rate of HTTP requests in GRequests
I'm writing a small script in Python 2.7.3 with GRequests and lxml that will allow me to gather some collectible card prices from various websites and compare them. The problem is that one of the websites limits the number of requests and sends back HTTP error 429 if I exceed it. Is there a way to throttle the number of requests in GRequests so that I don't exceed the number of requests per second I specify? Also, how can I make GRequests retry after some time if HTTP 429 occurs? On a side note, their limit is ridiculously low: something like 8 requests per 15 seconds. I breached it with my browser on multiple occasions just by refreshing the page waiting for price changes.
Going to answer my own question since I had to figure this out by myself and there seems to be very little info on this going around. The idea is as follows. Every request object used with GRequests can take a session object as a parameter when created. Session objects, on the other hand, can have HTTP adapters mounted that are used when making requests. By creating our own adapter we can intercept requests and rate-limit them in the way we find best for our application. In my case I ended up with the code below.

Object used for throttling:

    DEFAULT_BURST_WINDOW = datetime.timedelta(seconds=5)
    DEFAULT_WAIT_WINDOW = datetime.timedelta(seconds=15)

    class BurstThrottle(object):
        max_hits = None
        hits = None
        burst_window = None
        total_window = None
        timestamp = None

        def __init__(self, max_hits, burst_window, wait_window):
            self.max_hits = max_hits
            self.hits = 0
            self.burst_window = burst_window
            self.total_window = burst_window + wait_window
            self.timestamp = datetime.datetime.min

        def throttle(self):
            now = datetime.datetime.utcnow()
            if now < self.timestamp + self.total_window:
                if (now < self.timestamp + self.burst_window) and (self.hits < self.max_hits):
                    self.hits += 1
                    return datetime.timedelta(0)
                else:
                    return self.timestamp + self.total_window - now
            else:
                self.timestamp = now
                self.hits = 1
                return datetime.timedelta(0)

HTTP adapter:

    class MyHttpAdapter(requests.adapters.HTTPAdapter):
        throttle = None

        def __init__(self, pool_connections=requests.adapters.DEFAULT_POOLSIZE,
                     pool_maxsize=requests.adapters.DEFAULT_POOLSIZE,
                     max_retries=requests.adapters.DEFAULT_RETRIES,
                     pool_block=requests.adapters.DEFAULT_POOLBLOCK,
                     burst_window=DEFAULT_BURST_WINDOW,
                     wait_window=DEFAULT_WAIT_WINDOW):
            self.throttle = BurstThrottle(pool_maxsize, burst_window, wait_window)
            super(MyHttpAdapter, self).__init__(pool_connections=pool_connections,
                                                pool_maxsize=pool_maxsize,
                                                max_retries=max_retries,
                                                pool_block=pool_block)

        def send(self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None):
            request_successful = False
            response = None
            while not request_successful:
                wait_time = self.throttle.throttle()
                while wait_time > datetime.timedelta(0):
                    gevent.sleep(wait_time.total_seconds(), ref=True)
                    wait_time = self.throttle.throttle()
                response = super(MyHttpAdapter, self).send(request, stream=stream,
                                                           timeout=timeout,
                                                           verify=verify,
                                                           cert=cert,
                                                           proxies=proxies)
                if response.status_code != 429:
                    request_successful = True
            return response

Setup:

    requests_adapter = adapter.MyHttpAdapter(
        pool_connections=__CONCURRENT_LIMIT__,
        pool_maxsize=__CONCURRENT_LIMIT__,
        max_retries=0,
        pool_block=False,
        burst_window=datetime.timedelta(seconds=5),
        wait_window=datetime.timedelta(seconds=20))

    requests_session = requests.session()
    requests_session.mount('http://', requests_adapter)
    requests_session.mount('https://', requests_adapter)

    unsent_requests = (grequests.get(url,
                                     hooks={'response': handle_response},
                                     session=requests_session)
                       for url in urls)
    grequests.map(unsent_requests, size=__CONCURRENT_LIMIT__)
Django: Error: Unknown command: 'makemigrations'
I am trying to follow the Django tutorial and I faced the following error when I entered python manage.py makemigrations polls:

    Unknown command: 'makemigrations'

Here's the link to the tutorial. I accomplished all the previous steps successfully and am not sure what's going wrong now or how to fix it.

P.S.: I have already included "polls" in INSTALLED_APPS!

    INSTALLED_APPS = (
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'polls',
        'South',
    )

Answer: I had to modify INSTALLED_APPS to:

    INSTALLED_APPS = (
        'django.contrib.admin',
        'django.contrib.auth',
        'django.contrib.contenttypes',
        'django.contrib.sessions',
        'django.contrib.messages',
        'django.contrib.staticfiles',
        'polls',
    )

and also used this command:

    python manage.py syncdb
Migrations were first added in version 1.7, officially released on September 2, 2014. You need to make sure your tutorial matches the version of Django you're working with. For instance, this version of the tutorial covers 1.9: https://docs.djangoproject.com/en/1.9/intro/tutorial01/ Or, if you're using an older version of Django, you can change the "1.9" in that URL to whatever version you're on (back to 1.3). Or use the dropdown on the docs page to pick the version and search for "tutorial".
Remap values in pandas column with a dict
I have a dictionary which looks like this: di = {1: "A", 2: "B"}

I would like to apply it to the "col1" column of a dataframe similar to:

      col1 col2
    0    w    a
    1    1    2
    2    2  NaN

to get:

      col1 col2
    0    w    a
    1    A    2
    2    B  NaN

How can I best do this? For some reason, googling terms related to this only shows me links about how to make columns from dicts and vice versa :-/
You can use .replace. For example:

    >>> df = pd.DataFrame({'col2': {0: 'a', 1: 2, 2: np.nan}, 'col1': {0: 'w', 1: 1, 2: 2}})
    >>> di = {1: "A", 2: "B"}
    >>> df
      col1 col2
    0    w    a
    1    1    2
    2    2  NaN
    >>> df.replace({"col1": di})
      col1 col2
    0    w    a
    1    A    2
    2    B  NaN

or directly on the Series, i.e. df["col1"].replace(di, inplace=True).
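An alternative sketch using Series.map, which can be faster on large frames; the fillna keeps values that have no key in the dict (here 'w') unchanged:

    df["col1"] = df["col1"].map(di).fillna(df["col1"])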
How to install django for python 3.3
I had Python 2.6.1, and because it was old I decided to install Python 3.3.2. But when I type "python" on my Mac it reports version 2.6.1, and when I type python3 it shows 3.3.2. I installed Django 1.6, but when I check I see that it is installed for the old version of Python (2.6.1). I want to install it for Python 3.3.2; what should I do? Is there any way to uninstall Python 2.6.1 so that when I enter python in the terminal the version is 3.3.2? I have Mac OS 10.6.8.
You can use pip to manage the package for you. If pip is not installed, use sudo easy_install pip to install it. Then pip3 install django will install Django for your Python 3.
Django: How to write query to sort using multiple columns, display via template
I'm quite new to Django, and not too experienced with MVC or DB queries. I have a Customer table which includes customer_name and city_name, as well as a state_name (pulled from a foreign-key table). In the HTML, I'm trying to display the results in a list first sorted alphabetically by state_name, then by city_name, then by customer_name. Like so:

    ARIZONA
        PHOENIX
            AAA, Inc.
            BBB, LLC.
        SCOTTSDALE
            AAA, LLC.
            DDD, Corp.
    CALIFORNIA
        ANAHEIM
            ...

My models.py is as follows:

    from django.db import models

    class Customer(models.Model):
        def __unicode__(self):
            return self.customer_name
        customer_name = models.CharField(max_length=60)
        city_name = models.CharField(max_length=30)
        state = models.ForeignKey('State')

    class State(models.Model):
        def __unicode__(self):
            return self.state_name
        state_name = models.CharField(max_length=20)
        state_code = models.CharField(max_length=2)

In my urls.py, I have:

    url("^customers/$", direct_to_template,
        {'template': 'pages_fixed/customers.html',
         'extra_context': {'customers': Customer.objects.all().order_by('state')}},
        name='customers'),

And in my HTML, I have a working template:

    <div class='customers'>
        {% for customer in customers %}
        <div class='block_customer'>
            <p>{{ customer.state.state_name }}</p>
            <p>{{ customer.city_name }}</p>
            <p>{{ customer.customer_name }}</p>
        </div>
        {% endfor %}
    </div>

It's all sort of working, but obviously not sorting correctly, and I'm not sure of the best way to design it. I tried some inner loops with the templates, hoping the templates would let me define some sorting rules, but this doesn't seem to be supported/correct. I suspect that I need to query the data differently or pre-sort it in code ahead of time? I'm not sure if this would be done in the View (what is Django's form of Controller?). If anyone could point me in the right direction, that would be MUCH appreciated!
There are several levels at which you can order model objects for display in a template, each one overriding the previous level:

(MODEL LEVEL) The Meta attribute ordering, as @Bithin shows in his answer (more in the Django docs).
(VIEW LEVEL) The QuerySet order_by method, as you have tried in your view example; it works as you want if you add the other fields to the sort: Customer.objects.order_by('state', 'city_name', 'customer_name') (more in the Django docs; see also the caveat sketched below). Note that .all() is not needed here.
(TEMPLATE LEVEL) The regroup template tag, which you would need to use nested in your example (more in the Django docs).
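One caveat worth hedging: ordering by the ForeignKey 'state' sorts by the related table's primary key, not by the state's name. To sort alphabetically by state name you can span the relation in the same call; the field names below are taken from the question's models:

    customers = Customer.objects.order_by('state__state_name', 'city_name', 'customer_name')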
'module' object has no attribute 'drawMatches' opencv python
I am just doing an example of feature detection in OpenCV; I have taken the example given below. It is giving me an error like 'module' object has no attribute 'drawMatches'. I have checked the OpenCV docs. Why do I get this error?

    import numpy as np
    import cv2
    import matplotlib.pyplot as plt

    img1 = cv2.imread('box.png', 0)          # queryImage
    img2 = cv2.imread('box_in_scene.png', 0) # trainImage

    # Initiate SIFT detector
    orb = cv2.ORB()

    # find the keypoints and descriptors with SIFT
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # create BFMatcher object
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Match descriptors.
    matches = bf.match(des1, des2)

    # Draw first 10 matches.
    img3 = cv2.drawMatches(img1, kp1, img2, kp2, matches[:10], flags=2)

    plt.imshow(img3), plt.show()

Error:

    Traceback (most recent call last):
      File "match.py", line 22, in <module>
        img3 = cv2.drawMatches(img1,kp1,img2,kp2,matches[:10], flags=2)
    AttributeError: 'module' object has no attribute 'drawMatches'
I am late to the party as well, but I installed OpenCV 2.4.9 for Mac OS X and the drawMatches function doesn't exist in my distribution. I've also tried the second approach with find_obj and that didn't work for me either. With that, I decided to write my own implementation that mimics drawMatches to the best of my ability, and this is what I've produced. I've provided my own images, where one is of a camera man and the other one is the same image but rotated by 55 degrees counterclockwise.

The basics of what I wrote are as follows: I allocate an output RGB image where the number of rows is the maximum of the two images, to accommodate placing both of the images in the output image, and the columns are simply the sum of the columns of both images. Be advised that I assume both images are grayscale. I place each image in its corresponding spot, then run through a loop of all of the matched keypoints. I extract which keypoints matched between the two images, then extract their (x,y) coordinates. I draw circles at each of the detected locations, then draw a line connecting these circles together.

Bear in mind that the detected keypoint in the second image is with respect to its own coordinate system. If you want to place it in the final output image, you need to offset the column coordinate by the number of columns from the first image so that the column coordinate is with respect to the coordinate system of the output image. Without further ado:

    import numpy as np
    import cv2

    def drawMatches(img1, kp1, img2, kp2, matches):
        """
        My own implementation of cv2.drawMatches, as OpenCV 2.4.9
        does not have this function available, though it's supported
        in OpenCV 3.0.0

        This function takes in two images with their associated
        keypoints, as well as a list of DMatch data structures (matches)
        that contains which keypoints matched in which images.

        An image will be produced where a montage is shown with
        the first image followed by the second image beside it.

        Keypoints are delineated with circles, while lines are connected
        between matching keypoints.

        img1,img2 - Grayscale images
        kp1,kp2 - Detected list of keypoints through any of the OpenCV
                  keypoint detection algorithms
        matches - A list of matches of corresponding keypoints through any
                  OpenCV keypoint matching algorithm
        """

        # Create a new output image that concatenates the two images together
        # (a.k.a) a montage
        rows1 = img1.shape[0]
        cols1 = img1.shape[1]
        rows2 = img2.shape[0]
        cols2 = img2.shape[1]

        out = np.zeros((max([rows1, rows2]), cols1 + cols2, 3), dtype='uint8')

        # Place the first image to the left
        out[:rows1, :cols1] = np.dstack([img1, img1, img1])

        # Place the next image to the right of it
        out[:rows2, cols1:] = np.dstack([img2, img2, img2])

        # For each pair of points we have between both images
        # draw circles, then connect a line between them
        for mat in matches:

            # Get the matching keypoints for each of the images
            img1_idx = mat.queryIdx
            img2_idx = mat.trainIdx

            # x - columns
            # y - rows
            (x1, y1) = kp1[img1_idx].pt
            (x2, y2) = kp2[img2_idx].pt

            # Draw a small circle at both co-ordinates
            # radius 4, colour blue, thickness = 1
            cv2.circle(out, (int(x1), int(y1)), 4, (255, 0, 0), 1)
            cv2.circle(out, (int(x2) + cols1, int(y2)), 4, (255, 0, 0), 1)

            # Draw a line in between the two points
            # thickness = 1, colour blue
            cv2.line(out, (int(x1), int(y1)), (int(x2) + cols1, int(y2)), (255, 0, 0), 1)

        # Show the image
        cv2.imshow('Matched Features', out)
        cv2.waitKey(0)
        cv2.destroyWindow('Matched Features')

        # Also return the image if you'd like a copy
        return out

To illustrate that this works, here are the two images that I used. I used OpenCV's ORB detector to detect the keypoints, and used the normalized Hamming distance as the distance measure for similarity, as this is a binary descriptor. As such:

    import numpy as np
    import cv2

    img1 = cv2.imread('cameraman.png', 0)       # Original image - ensure grayscale
    img2 = cv2.imread('cameraman_rot55.png', 0) # Rotated image - ensure grayscale

    # Create ORB detector with 1000 keypoints with a scaling pyramid factor
    # of 1.2
    orb = cv2.ORB(1000, 1.2)

    # Detect keypoints of original image
    (kp1, des1) = orb.detectAndCompute(img1, None)

    # Detect keypoints of rotated image
    (kp2, des2) = orb.detectAndCompute(img2, None)

    # Create matcher
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    # Do matching
    matches = bf.match(des1, des2)

    # Sort the matches based on distance.  Least distance
    # is better
    matches = sorted(matches, key=lambda val: val.distance)

    # Show only the top 10 matches - also save a copy for use later
    out = drawMatches(img1, kp1, img2, kp2, matches[:10])

This is the image I get:
Inheritance of private and protected methods in Python
I know there are no 'real' private/protected methods in Python. This approach isn't meant to hide anything; I just want to understand what Python does.

    class Parent(object):
        def _protected(self):
            pass

        def __private(self):
            pass

    class Child(Parent):
        def foo(self):
            self._protected()   # This works

        def bar(self):
            self.__private()    # This doesn't work, I get an AttributeError:
                                # 'Child' object has no attribute '_Child__private'

So, does this behaviour mean that 'protected' methods will be inherited but 'private' won't be at all? Or did I miss anything?
Names starting with a double underscore are mangled to protect them from clashes when inherited. Subclasses can define their own __private() method and these will not interfere with the same name on the parent class. Such names are considered class private. Mangling is done by prepending any such name with an extra underscore and the class name (regardless of how the name is used or whether it exists), effectively giving them a namespace. In the Parent class, any __private identifier is replaced (at compilation time) by the name _Parent__private, while in the Child class the identifier is replaced by _Child__private, everywhere in the class definition. The following will work:

    class Child(Parent):
        def foo(self):
            self._protected()

        def bar(self):
            self._Parent__private()

See Reserved classes of identifiers in the lexical analysis documentation:

__*
    Class-private names. Names in this category, when used within the context of a class definition, are re-written to use a mangled form to help avoid name clashes between "private" attributes of base and derived classes.

and the referenced documentation on names:

Private name mangling: When an identifier that textually occurs in a class definition begins with two or more underscore characters and does not end in two or more underscores, it is considered a private name of that class. Private names are transformed to a longer form before code is generated for them. The transformation inserts the class name, with leading underscores removed and a single underscore inserted, in front of the name. For example, the identifier __spam occurring in a class named Ham will be transformed to _Ham__spam. This transformation is independent of the syntactical context in which the identifier is used.

Don't use class-private names unless you specifically want to avoid having to tell developers that want to subclass your class that they can't use certain names or risk breaking your class. Outside of published frameworks and libraries, there is little use for this feature.
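A quick sketch showing the mangled name actually stored in the class namespace:

    class Ham(object):
        def __spam(self):
            pass

    print('_Ham__spam' in Ham.__dict__)  # True: the method was stored under its mangled name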
Rearrange columns of numpy 2D array
Is there a way to change the order of the columns in a numpy 2D array to a new and arbitrary order? For example, I have an array

    array([[10, 20, 30, 40, 50],
           [ 6,  7,  8,  9, 10]])

and I want to change it into, say,

    array([[10, 30, 50, 40, 20],
           [ 6,  8, 10,  9,  7]])

by applying the permutation

    0 -> 0
    1 -> 4
    2 -> 1
    3 -> 3
    4 -> 2

on the columns. In the new matrix, I therefore want the first column of the original to stay in place, the second to move to the last column, and so on. Is there a numpy function to do it? I have a fairly large matrix and expect to get even larger ones, so I need a solution that does this quickly and, if possible, in place (permutation matrices are a no-go). Thank you.
This is possible using fancy indexing:

    >>> import numpy as np
    >>> a = np.array([[10, 20, 30, 40, 50],
    ...               [ 6,  7,  8,  9, 10]])
    >>> your_permutation = [0, 4, 1, 3, 2]
    >>> i = np.argsort(your_permutation)
    >>> i
    array([0, 2, 4, 3, 1])
    >>> a[:,i]
    array([[10, 30, 50, 40, 20],
           [ 6,  8, 10,  9,  7]])
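Note that fancy indexing returns a copy rather than a view. If keeping memory usage down matters more than avoiding a temporary, one sketch is to write the reordered columns back into the original buffer:

    a[:] = a[:, i]  # the right-hand side is evaluated to a copy first, then written back in place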
Cython: cimport and import numpy as (both) np
In the tutorial section of the Cython documentation, there are cimport and import statements for the numpy module:

    import numpy as np
    cimport numpy as np

I found this convention is quite popular among numpy/cython users. This looks strange to me because they are both named np. In which part of the code is the imported/cimported np used? Why doesn't the cython compiler confuse them?
cimport my_module gives access to C functions or attributes or even sub-modules under my_module.

import my_module gives access to Python functions or attributes or sub-modules under my_module.

In your case, cimport numpy as np gives you access to the Numpy C API, where you can declare array buffers, variable types and so on, while import numpy as np gives you access to NumPy-Python functions, such as np.array, np.linspace, etc. Cython internally handles this ambiguity so that the user does not need to use different names.
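A minimal .pyx sketch using both imports (the function and file names are illustrative, not from the question):

    # example.pyx
    import numpy as np      # Python-level numpy: provides np.empty_like below
    cimport numpy as np     # C-level numpy: provides np.float64_t and the buffer syntax below

    def double_array(np.ndarray[np.float64_t, ndim=1] arr):
        cdef np.ndarray[np.float64_t, ndim=1] out = np.empty_like(arr)
        cdef Py_ssize_t i
        for i in range(arr.shape[0]):
            out[i] = arr[i] * 2.0
        return out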
django.request logger not propagated to root?
Using Django 1.5.1:

    DEBUG = False
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': True,
        'formatters': {
            'verbose': {
                'format': '%(levelname)s %(asctime)s %(module)s %(message)s'
            },
        },
        'handlers': {
            'console': {
                'level': 'DEBUG',
                'class': 'logging.StreamHandler',
                'formatter': 'verbose',
            },
        },
        'loggers': {
            # root logger
            '': {
                'handlers': ['console'],
            },
            #'django.request': {
            #    'handlers': ['console'],
            #    'level': 'DEBUG',
            #    'propagate': False,
            #},
        }
    }

If I uncomment the commented lines and call a view which has 1/0, the traceback is printed to the console:

    ERROR 2013-11-29 13:33:23,102 base Internal Server Error: /comment/*******/
    Traceback (most recent call last):
      ...
      File "*****/comments/views.py", line 10, in post
        1/0
    ZeroDivisionError: integer division or modulo by zero
    WARNING 2013-11-29 13:33:23,103 csrf Forbidden (CSRF cookie not set.): /comment/******/
    [29/Nov/2013 13:33:23] "POST /comment/******/ HTTP/1.0" 500 27

But if the lines stay commented, no traceback is printed to the console, just:

    [29/Nov/2013 13:33:23] "POST /comment/******/ HTTP/1.0" 500 27

I thought that if the django.request logger is not configured, it would propagate to the root logger, which prints everything to the console. I didn't find any information saying that django.request is special. Why doesn't it work? Here I read:

Prior to Django 1.5, the LOGGING setting always overwrote the default Django logging configuration. From Django 1.5 forward, it is possible to get the project's logging configuration merged with Django's defaults, hence you can decide if you want to add to, or replace the existing configuration. If the disable_existing_loggers key in the LOGGING dictConfig is set to True (which is the default) the default configuration is completely overridden. Alternatively you can redefine some or all of the loggers by setting disable_existing_loggers to False.

In django/utils/log.py:

    # Default logging for Django. This sends an email to the site admins on every
    # HTTP 500 error. Depending on DEBUG, all other log records are either sent to
    # the console (DEBUG=True) or discarded by mean of the NullHandler (DEBUG=False).
    DEFAULT_LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'filters': {
            'require_debug_false': {
                '()': 'django.utils.log.RequireDebugFalse',
            },
            'require_debug_true': {
                '()': 'django.utils.log.RequireDebugTrue',
            },
        },
        'handlers': {
            'console': {
                'level': 'INFO',
                'filters': ['require_debug_true'],
                'class': 'logging.StreamHandler',
            },
            'null': {
                'class': 'django.utils.log.NullHandler',
            },
            'mail_admins': {
                'level': 'ERROR',
                'filters': ['require_debug_false'],
                'class': 'django.utils.log.AdminEmailHandler'
            }
        },
        'loggers': {
            'django': {
                'handlers': ['console'],
            },
            'django.request': {
                'handlers': ['mail_admins'],
                'level': 'ERROR',
                'propagate': False,
            },
            'py.warnings': {
                'handlers': ['console'],
            },
        }
    }

So by default django.request has propagate = False. But in my case I have 'disable_existing_loggers': True.
The solution is to prevent Django from configuring logging and handle it ourselves. Fortunately this is easy. In settings.py:

    LOGGING_CONFIG = None
    LOGGING = {...}  # whatever you want, as you already have

    import logging.config
    logging.config.dictConfig(LOGGING)

UPDATE ~March 2015: Django has clarified their documentation:

If the disable_existing_loggers key in the LOGGING dictConfig is set to True then all loggers from the default configuration will be disabled. Disabled loggers are not the same as removed; the logger will still exist, but will silently discard anything logged to it, not even propagating entries to a parent logger. Thus you should be very careful using 'disable_existing_loggers': True; it's probably not what you want. Instead, you can set disable_existing_loggers to False and redefine some or all of the default loggers; or you can set LOGGING_CONFIG to None and handle logging config yourself.

For posterity and detail: The explanation? Most of the confusion I think comes down to Django's poor explanation of disable_existing_loggers, which says that when True, "the default configuration is completely overridden". In your own answer you discovered that is not correct; what's happening is that the existing loggers, which Django already configures, are disabled, not replaced. The Python logging documentation explains it better (emphasis added):

disable_existing_loggers – If specified as False, loggers which exist when this call is made are left alone. The default is True because this enables old behaviour in a backward-compatible way. This behaviour is to disable any existing loggers unless they or their ancestors are explicitly named in the logging configuration.

Based on the Django docs we think, "override the defaults with my own LOGGING configuration and anything I don't specify will bubble up". I've tripped over this expectation as well. The behavior we expect is along the lines of replace_existing_loggers (which isn't a real thing). Instead the Django loggers are shut up, not bubbled up. We need to prevent the setup of these Django loggers in the first place, and here the Django docs are more helpful:

If you don't want to configure logging at all (or you want to manually configure logging using your own approach), you can set LOGGING_CONFIG to None. This will disable the configuration process. Note: Setting LOGGING_CONFIG to None only means that the configuration process is disabled, not logging itself. If you disable the configuration process, Django will still make logging calls, falling back to whatever default logging behavior is defined.

Django will still use its loggers, but since they are not handled (and then disabled) by the configuration, those loggers will bubble up as expected. A simple test with the above settings, in manage.py shell:

    >>> import logging
    >>> logging.warning('root logger')
    WARNING 2014-03-11 13:35:08,832 root root logger
    >>> l = logging.getLogger('django.request')
    >>> l.warning('request logger')
    WARNING 2014-03-11 13:38:22,000 django.request request logger
    >>> l.propagate, l.disabled
    (1, 0)
How to use Bulk API to store the keywords in ES by using Python
I have to store some messages in ElasticSearch, integrated with my Python program. Currently, the way I store a message is:

    d = {"message": "this is message"}
    for index_nr in range(1, 5):
        ElasticSearchAPI.addToIndex(index_nr, d)
        print d

That means if I have 10 messages then I have to repeat my code 10 times. So what I want to do is make a script file or batch file. I checked the ElasticSearch Guide, and the Bulk API is possible to use: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/docs-bulk.html

The format should be something like below:

    { "index" : { "_index" : "test", "_type" : "type1", "_id" : "1" } }
    { "field1" : "value1" }
    { "delete" : { "_index" : "test", "_type" : "type1", "_id" : "2" } }
    { "create" : { "_index" : "test", "_type" : "type1", "_id" : "3" } }
    { "field1" : "value3" }
    { "update" : {"_id" : "1", "_type" : "type1", "_index" : "index1"} }
    { "doc" : {"field2" : "value2"} }

What I did is:

    {"index":{"_index":"test1","_type":"message","_id":"1"}}
    {"message":"it is red"}
    {"index":{"_index":"test2","_type":"message","_id":"2"}}
    {"message":"it is green"}

I also used the curl tool to store the docs:

    $ curl -s -XPOST localhost:9200/_bulk --data-binary @message.json

Now I want to use my Python code to store the file to Elastic Search.
    from datetime import datetime

    from elasticsearch import Elasticsearch
    from elasticsearch import helpers

    es = Elasticsearch()

    actions = []
    for j in range(0, 10):
        action = {
            "_index": "tickets-index",
            "_type": "tickets",
            "_id": j,
            "_source": {
                "any": "data" + str(j),
                "timestamp": datetime.now()
            }
        }
        actions.append(action)

    if len(actions) > 0:
        helpers.bulk(es, actions)
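If the document stream is large, a generator avoids holding every action in memory, since helpers.bulk accepts any iterable of actions; a sketch with the same hypothetical index names as above:

    def generate_actions(n):
        for j in range(n):
            yield {
                "_index": "tickets-index",
                "_type": "tickets",
                "_id": j,
                "_source": {"any": "data" + str(j), "timestamp": datetime.now()},
            }

    helpers.bulk(es, generate_actions(10))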
Improving the extraction of human names with nltk
I am trying to extract human names from text. Does anyone have a method that they would recommend? This is what I tried (code is below): I am using nltk to find everything marked as a person and then generating a list of all the NNP parts of that person. I am skipping persons where there is only one NNP, which avoids grabbing a lone surname. I am getting decent results but was wondering if there are better ways to go about solving this problem.

Code:

    import nltk
    from nameparser.parser import HumanName

    def get_human_names(text):
        tokens = nltk.tokenize.word_tokenize(text)
        pos = nltk.pos_tag(tokens)
        sentt = nltk.ne_chunk(pos, binary = False)
        person_list = []
        person = []
        name = ""
        for subtree in sentt.subtrees(filter=lambda t: t.node == 'PERSON'):
            for leaf in subtree.leaves():
                person.append(leaf[0])
            if len(person) > 1: #avoid grabbing lone surnames
                for part in person:
                    name += part + ' '
                if name[:-1] not in person_list:
                    person_list.append(name[:-1])
                name = ''
            person = []
        return (person_list)

    text = """
    Some economists have responded positively to Bitcoin, including
    Francois R. Velde, senior economist of the Federal Reserve in Chicago
    who described it as "an elegant solution to the problem of creating a
    digital currency." In November 2013 Richard Branson announced that
    Virgin Galactic would accept Bitcoin as payment, saying that he had
    invested in Bitcoin and found it "fascinating how a whole new global
    currency has been created", encouraging others to also invest in
    Bitcoin. Other economists commenting on Bitcoin have been critical.
    Economist Paul Krugman has suggested that the structure of the
    currency incentivizes hoarding and that its value derives from the
    expectation that others will accept it as payment. Economist Larry
    Summers has expressed a "wait and see" attitude when it comes to
    Bitcoin. Nick Colas, a market strategist for ConvergEx Group, has
    remarked on the effect of increasing use of Bitcoin and its restricted
    supply, noting, "When incremental adoption meets relatively fixed
    supply, it should be no surprise that prices go up. And that's exactly
    what is happening to BTC prices."
    """

    names = get_human_names(text)
    print "LAST, FIRST"
    for name in names:
        last_first = HumanName(name).last + ', ' + HumanName(name).first
        print last_first

Output:

    LAST, FIRST
    Velde, Francois
    Branson, Richard
    Galactic, Virgin
    Krugman, Paul
    Summers, Larry
    Colas, Nick

Apart from Virgin Galactic, this is all valid output. Of course, knowing that Virgin Galactic isn't a human name in the context of this article is the hard (maybe impossible) part.
I must agree with the suggestion that "make my code better" isn't well suited for this site, but I can give you some ways to dig in. Take a look at the Stanford Named Entity Recognizer (NER). Its binding has been included in NLTK v2.0, but you must download some core files; there is a script which can do all of that for you. I wrote this script: import nltk from nltk.tag.stanford import NERTagger st = NERTagger('stanford-ner/all.3class.distsim.crf.ser.gz', 'stanford-ner/stanford-ner.jar') text = """YOUR TEXT GOES HERE""" for sent in nltk.sent_tokenize(text): tokens = nltk.tokenize.word_tokenize(sent) tags = st.tag(tokens) for tag in tags: if tag[1]=='PERSON': print tag and got a not-so-bad output: ('Francois', 'PERSON') ('R.', 'PERSON') ('Velde', 'PERSON') ('Richard', 'PERSON') ('Branson', 'PERSON') ('Virgin', 'PERSON') ('Galactic', 'PERSON') ('Bitcoin', 'PERSON') ('Bitcoin', 'PERSON') ('Paul', 'PERSON') ('Krugman', 'PERSON') ('Larry', 'PERSON') ('Summers', 'PERSON') ('Bitcoin', 'PERSON') ('Nick', 'PERSON') ('Colas', 'PERSON') Hope this is helpful.
Making py.test, coverage and tox work together: __init__.py in tests folder?
I'm having a weird problem with tox, py.test, coverage and pytest-cov: when py.test with the --cov option is launched from tox, it seems to require an __init__.py file in the tests folder which is not immediately obvious. While writing this post, I have kind of solved the initial problem by adding the aforesaid tests/__init__.py, but to this moment I don't fully understand why exactly it works or doesn't work, so I'm still asking for help. Please see below for details. I've found a related question on SO but it only makes it more confusing because the answer seems to be opposite to what I've figured out so far: `py.test` and `__init__.py` files See also the official docs here: py.test - Good Integration Practices (the very bottom of the page). Simplified project structure: setup.py tox.ini .coveragerc project/ __init__.py module1.py module2.py tests/ __init__.py (optional, an empty file) test_module1.py test_module2.py Relevant part of tox.ini: [testenv:check] commands = py.test --cov=project --cov-report=term deps = pytest coverage pytest-cov [pytest] python_files = test_*.py norecursedirs = .tox Relevant part of .coveragerc: [run] branch = True omit = project/tests/* Now, the results: py.test --cov=project --cov-report=term run from project root => correct coverage whether tests/__init__.py file is present or not. tox -e check without tests/__init__.py => the tests are discovered and run, but I get a warning "Coverage.py warning: No data was collected." and the coverage is 0% for all modules tox -e check with tests/__init__.py => correct coverage again. It's not immediately obvious to me why the tests/__init__.py file has to be there (adding this empty file solved the initial problem) for the tox run, but it doesn't matter when you run the tests/coverage manually. Any ideas? Thanks.
Use --cov {envsitepackagesdir}/<your-package-name> in tox.ini.
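For example, the [testenv:check] section from the question would become (a sketch — 'project' stands in for your actual package name):

[testenv:check]
commands = py.test --cov {envsitepackagesdir}/project --cov-report=term
deps =
    pytest
    coverage
    pytest-cov

This makes coverage measure the copy of the package that tox installs into the virtualenv (which is the one the tests actually import), instead of the source tree, which is why the collection no longer silently comes up empty.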
python dataframe pandas drop column using int
I understand that to drop a column you use df.drop('column name', axis=1). Is there a way to drop a column using a numerical index instead of the column name?
You can delete the column at index i like this: df.drop(df.columns[i], axis=1) This can behave strangely if you have duplicate column names, so to work around that you can first rename the column you want to delete to a unique name. Or you can reassign the DataFrame like this: df = df.iloc[:, [j for j, c in enumerate(df.columns) if j != i]]
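To see why duplicate names are a problem: df.drop works by label, so every column sharing the label at position i gets dropped. A small demonstration:

import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=['a', 'a', 'b'])
df.drop(df.columns[0], axis=1)  # drops BOTH 'a' columns, leaving only 'b'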
Python pandas dataframe: retrieve number of columns
How do you programmatically retrieve the number of columns in a pandas dataframe? I was hoping for something like: df.num_columns
Like so: import pandas as pd df = pd.DataFrame({"pear": [1,2,3], "apple": [2,3,4], "orange": [3,4,5]}) len(df.columns) 3
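Alternatively, df.shape returns a (rows, columns) tuple, so the column count is also available without touching the column index:

df.shape     # (3, 3)
df.shape[1]  # 3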
What's the difference between 'coding=utf8' and '-*- coding: utf-8 -*-'?
Is there any difference between using #coding=utf8 and # -*- coding: utf-8 -*- What about # encoding: utf-8
There is no difference; Python recognizes all 3. It looks for the pattern: coding[:=]\s*([-\w.]+) on the first two lines of the file (which also must start with a #). That's the literal text 'coding', followed by either a colon or an equals sign, followed by optional whitespace. Any word, dash or dot characters following that pattern are read as the codec. The -*- is an Emacs-specific syntax; letting the text editor know what encoding to use. It makes the comment useful to two tools. VIM supports similar syntax. See PEP 263: Defining Python Source Code Encodings.
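A minimal sketch of what this looks like in practice — with the declaration on the first line, the parser decodes the literal below as UTF-8:

# -*- coding: utf-8 -*-
s = u'café'  # without the declaration, Python 2 raises a SyntaxError on this non-ASCII byte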
Django Rest Framework ImageField
I can not save the image in this ImageField. when sending data back: { "image": ["No file was submitted. Check the encoding type on the form."] } model.py class MyPhoto(models.Model): owner = models.ForeignKey('auth.User', related_name='image') image = models.ImageField(upload_to='photos', max_length=254) serializers.py class PhotoSerializer(serializers.HyperlinkedModelSerializer): class Meta: model = MyPhoto fields = ('url', 'id', 'image', 'owner') owner = serializers.Field(source='owner.username') view.py class PhotoList(APIView): permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly) def get(self, request, format=None): photo = MyPhoto.objects.all() serializer = PhotoSerializer(photo, many=True) return Response(data=serializer.data, status=status.HTTP_200_OK) def post(self, request, format=None): serializer = PhotoSerializer(data=request.DATA) if serializer.is_valid(): serializer.save() return Response(serializer.data, status=status.HTTP_201_CREATED) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) def pre_save(self, obj): obj.owner = self.request.user class PhotoDetail(APIView): permission_classes = (permissions.IsAuthenticatedOrReadOnly, IsOwnerOrReadOnly) def get_object(self, pk): try: return MyPhoto.objects.get(pk=pk) except MyPhoto.DoesNotExist: raise Http404 def get(self, request, pk, format=None): photo = self.get_object(pk) serializer = PhotoSerializer(photo) return Response(serializer.data) def put(self, request, pk, format=None): photo = self.get_object(pk) serializer = PhotoSerializer(photo, data=request.DATA) if serializer.is_valid(): serializer.save() return Response(serializer.data) return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST) def delete(self, request, pk, format=None): photo = self.get_object(pk) photo.delete() return Response(status=status.HTTP_204_NO_CONTENT) def pre_save(self, obj): obj.owner = self.request.user url.py urlpatterns = patterns('', url(r'^$', 'main.views.main_page'), url(r'^api/photo/$', views.PhotoList.as_view(), name='myphoto-list'), url(r'^api/photo/(?P<pk>[0-9]+)/$', views.PhotoDetail.as_view(), name='myphoto-detail'),) curl curl -X POST -S \ -H 'Content-Type: application/json' \ -u "michael:bush_pass" \ --data-binary '{"owner":"/users/1/", \ "image":"/Users/test/Downloads/1383310998_05.jpg"}' \ 127.0.0.1:8000/api/photo/
As of Django REST framework 3.0 you can use request.data instead. The usage of request.DATA and request.FILES is now pending deprecation in favor of a single request.data attribute that contains all the parsed data. You can check it here
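A minimal sketch of the updated post method (in DRF 3.x the pre_save hook is gone, so the owner is passed to save() instead; other details of the view may also need updating for 3.x):

def post(self, request, format=None):
    serializer = PhotoSerializer(data=request.data)
    if serializer.is_valid():
        serializer.save(owner=request.user)
        return Response(serializer.data, status=status.HTTP_201_CREATED)
    return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

Also note that the curl call in the question sends the image as a JSON string containing a local file path, which is what triggers "No file was submitted"; to actually upload the file, send it as multipart form data instead:

$ curl -X POST -u "michael:bush_pass" -F "image=@/Users/test/Downloads/1383310998_05.jpg" 127.0.0.1:8000/api/photo/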
Dihedral/Torsion Angle From Four Points in Cartesian Coordinates in Python
What suggestions do people have for quickly calculating dihedral angles in Python? In the diagrams, phi is the dihedral angle: What's your best method for calculating angles in the range 0 to pi? What about 0 to 2pi? "Best" here means some mix of fast and numerically stable. Methods that return values over the full range 0 to 2pi are preferred but if you have an incredibly fast way of calculating the dihedral over 0 to pi share that too. Here are my 3 best efforts. Only the 2nd one returns angles between 0 and 2pi. It's also the slowest. General comments about my approaches: arccos() in Numpy seems plenty stable but since people raise this issue I may just not fully understand it. The use of einsum came from here. Why is numpy's einsum faster than numpy's built in functions? The diagrams and some inspiration came from here. How do I calculate a dihedral angle given Cartesian coordinates? The 3 approaches with comments: import numpy as np from time import time # This approach tries to minimize magnitude and sqrt calculations def dihedral1(p): # Calculate vectors between points, b1, b2, and b3 in the diagram b = p[:-1] - p[1:] # "Flip" the first vector so that eclipsing vectors have dihedral=0 b[0] *= -1 # Use dot product to find the components of b1 and b3 that are not # perpendicular to b2. Subtract those components. The resulting vectors # lie in parallel planes. v = np.array( [ v - (v.dot(b[1])/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] ) # Use the relationship between cos and dot product to find the desired angle. return np.degrees(np.arccos( v[0].dot(v[1])/(np.linalg.norm(v[0]) * np.linalg.norm(v[1])))) # This is the straightforward approach as outlined in the answers to # "How do I calculate a dihedral angle given Cartesian coordinates?" def dihedral2(p): b = p[:-1] - p[1:] b[0] *= -1 v = np.array( [ v - (v.dot(b[1])/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] ) # Normalize vectors v /= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1) b1 = b[1] / np.linalg.norm(b[1]) x = np.dot(v[0], v[1]) m = np.cross(v[0], b1) y = np.dot(m, v[1]) return np.degrees(np.arctan2( y, x )) # This one starts with two cross products to get a vector perpendicular to # b2 and b1 and another perpendicular to b2 and b3. The angle between those vectors # is the dihedral angle.
def dihedral3(p): b = p[:-1] - p[1:] b[0] *= -1 v = np.array( [np.cross(v,b[1]) for v in [b[0], b[2]] ] ) # Normalize vectors v /= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1) return np.degrees(np.arccos( v[0].dot(v[1]) )) dihedrals = [ ("dihedral1", dihedral1), ("dihedral2", dihedral2), ("dihedral3", dihedral3) ] Benchmarking: # Testing arccos near 0 # Answer is 0.000057 p1 = np.array([ [ 1, 0, 0 ], [ 0, 0, 0 ], [ 0, 0, 1 ], [ 0.999999, 0.000001, 1 ] ]) # +x,+y p2 = np.array([ [ 1, 0, 0 ], [ 0, 0, 0 ], [ 0, 0, 1 ], [ 0.1, 0.6, 1 ] ]) # -x,+y p3 = np.array([ [ 1, 0, 0 ], [ 0, 0, 0 ], [ 0, 0, 1 ], [-0.3, 0.6, 1 ] ]) # -x,-y p4 = np.array([ [ 1, 0, 0 ], [ 0, 0, 0 ], [ 0, 0, 1 ], [-0.3, -0.6, 1 ] ]) # +x,-y p5 = np.array([ [ 1, 0, 0 ], [ 0, 0, 0 ], [ 0, 0, 1 ], [ 0.6, -0.6, 1 ] ]) for d in dihedrals: name = d[0] f = d[1] print "%s: %12.6f %12.6f %12.6f %12.6f %12.6f" \ % (name, f(p1), f(p2), f(p3), f(p4), f(p5)) print def profileDihedrals(f): t0 = time() for i in range(20000): p = np.random.random( (4,3) ) f(p) p = np.random.randn( 4,3 ) f(p) return(time() - t0) print "dihedral1: ", profileDihedrals(dihedral1) print "dihedral2: ", profileDihedrals(dihedral2) print "dihedral3: ", profileDihedrals(dihedral3) Benchmarking output: dihedral1: 0.000057 80.537678 116.565051 116.565051 45.000000 dihedral2: 0.000057 80.537678 116.565051 -116.565051 -45.000000 dihedral3: 0.000057 80.537678 116.565051 116.565051 45.000000 dihedral1: 2.79781794548 dihedral2: 3.74271392822 dihedral3: 2.49604296684 As you can see in the benchmarking, the last one tends to be the fastest while the second one is the only one that returns angles from the full range of 0 to 2pi since it uses arctan2.
Here's an implementation for torsion angle over the full 2pi range that is a bit faster, doesn't resort to numpy quirks (einsum being mysteriously faster than logically equivalent code), and is easier to read. There's even a bit more than just hacks going on here -- the math is different too. The formula used in the question's dihedral2 uses 3 square roots and 1 cross product, the formula on Wikipedia uses 1 square root and 3 cross products, but the formula used in the function below uses only 1 square root and 1 cross product. This is probably as simple as the math can get. Functions with the 2pi range: the function from the question, the Wikipedia formula for comparison, and the new function: dihedrals.py #!/usr/bin/env python # -*- coding: utf-8 -*- import numpy as np def old_dihedral2(p): """http://stackoverflow.com/q/20305272/1128289""" b = p[:-1] - p[1:] b[0] *= -1 v = np.array( [ v - (v.dot(b[1])/b[1].dot(b[1])) * b[1] for v in [b[0], b[2]] ] ) # Normalize vectors v /= np.sqrt(np.einsum('...i,...i', v, v)).reshape(-1,1) b1 = b[1] / np.linalg.norm(b[1]) x = np.dot(v[0], v[1]) m = np.cross(v[0], b1) y = np.dot(m, v[1]) return np.degrees(np.arctan2( y, x )) def wiki_dihedral(p): """formula from Wikipedia article on "Dihedral angle"; formula was removed from the most recent version of article (no idea why, the article is a mess at the moment) but the formula can be found at this permalink to an old version of the article: https://en.wikipedia.org/w/index.php?title=Dihedral_angle&oldid=689165217#Angle_between_three_vectors uses 1 sqrt, 3 cross products""" p0 = p[0] p1 = p[1] p2 = p[2] p3 = p[3] b0 = -1.0*(p1 - p0) b1 = p2 - p1 b2 = p3 - p2 b0xb1 = np.cross(b0, b1) b1xb2 = np.cross(b2, b1) b0xb1_x_b1xb2 = np.cross(b0xb1, b1xb2) y = np.dot(b0xb1_x_b1xb2, b1)*(1.0/np.linalg.norm(b1)) x = np.dot(b0xb1, b1xb2) return np.degrees(np.arctan2(y, x)) def new_dihedral(p): """Praxeolitic formula 1 sqrt, 1 cross product""" p0 = p[0] p1 = p[1] p2 = p[2] p3 = p[3] b0 = -1.0*(p1 - p0) b1 = p2 - p1 b2 = p3 - p2 # normalize b1 so that it does not influence magnitude of vector # rejections that come next b1 /= np.linalg.norm(b1) # vector rejections # v = projection of b0 onto plane perpendicular to b1 # = b0 minus component that aligns with b1 # w = projection of b2 onto plane perpendicular to b1 # = b2 minus component that aligns with b1 v = b0 - np.dot(b0, b1)*b1 w = b2 - np.dot(b2, b1)*b1 # angle between v and w in a plane is the torsion angle # v and w may not be normalized but that's fine since tan is y/x x = np.dot(v, w) y = np.dot(np.cross(b1, v), w) return np.degrees(np.arctan2(y, x)) The new function would probably be a bit more conveniently called with 4 separate arguments, but to match the signature in the original question it simply unpacks the argument immediately. Code for testing: test_dihedrals.py from dihedrals import * # some atom coordinates for testing p0 = np.array([24.969, 13.428, 30.692]) # N p1 = np.array([24.044, 12.661, 29.808]) # CA p2 = np.array([22.785, 13.482, 29.543]) # C p3 = np.array([21.951, 13.670, 30.431]) # O p4 = np.array([23.672, 11.328, 30.466]) # CB p5 = np.array([22.881, 10.326, 29.620]) # CG p6 = np.array([23.691, 9.935, 28.389]) # CD1 p7 = np.array([22.557, 9.096, 30.459]) # CD2 # I guess these tests do leave 1 quadrant (-x, +y) untested, oh well...
def test_old_dihedral2(): assert(abs(old_dihedral2(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4) assert(abs(old_dihedral2(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4) assert(abs(old_dihedral2(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4) assert(abs(old_dihedral2(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4) def test_new_dihedral1(): assert(abs(wiki_dihedral(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4) assert(abs(wiki_dihedral(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4) assert(abs(wiki_dihedral(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4) assert(abs(wiki_dihedral(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4) def test_new_dihedral2(): assert(abs(new_dihedral(np.array([p0, p1, p2, p3])) - (-71.21515)) < 1E-4) assert(abs(new_dihedral(np.array([p0, p1, p4, p5])) - (-171.94319)) < 1E-4) assert(abs(new_dihedral(np.array([p1, p4, p5, p6])) - (60.82226)) < 1E-4) assert(abs(new_dihedral(np.array([p1, p4, p5, p7])) - (-177.63641)) < 1E-4) Code for timing: time_dihedrals.py #!/usr/bin/env python # -*- coding: utf-8 -*- from dihedrals import * from time import time def profileDihedrals(f): t0 = time() for i in range(20000): p = np.random.random( (4,3) ) f(p) p = np.random.randn( 4,3 ) f(p) return(time() - t0) print("old_dihedral2: ", profileDihedrals(old_dihedral2)) print("wiki_dihedral: ", profileDihedrals(wiki_dihedral)) print("new_dihedral: ", profileDihedrals(new_dihedral)) The functions can be tested with pytest as pytest ./test_dihedrals.py. Timing results: ./time_dihedrals.py old_dihedral2: 1.6442952156066895 wiki_dihedral: 1.3895585536956787 new_dihedral: 0.8703620433807373 new_dihedral is about twice as fast as old_dihedral2. ...you can also see that the hardware used for this answer is a lot beefier than the hardware used in the question (3.74 vs 1.64 for dihedral2) ;-P If you want to get even more aggressive you can use pypy. At the time of writing pypy doesn't support numpy.cross but you can just use a cross product implemented in python instead. For a 3-vector cross product the C pypy generates is probably at least as good as what numpy uses. Doing so gets the time down to 0.60 for me but at this point we're wading into silly hax. Same benchmark but with same hardware as used in the question: old_dihedral2: 3.0171279907226562 wiki_dihedral: 3.415065050125122 new_dihedral: 2.086946964263916
How do I integrate Ajax with Django applications?
I am new to Django and pretty new to Ajax. I am working on a project where I need to integrate the two. I believe that I understand the principles behind them both, but have not found a good explanation of the two together. Could someone give me a quick explanation of how the codebase must change with the two of them integrating together? For example, can I still use the HttpResponse with Ajax, or do my responses have to change with the use of Ajax? If so, could you please provide an example of how the responses to the requests must change? If it makes any difference, the data I am returning is JSON.
Even though this isn't entirely in the SO spirit, I love this question, because I had the same trouble when I started so I'll give you a quick guide. Obviously you don't understand the principles behind them (don't take it as an offense, but if you did you wouldn't be asking). Django is server-side. It means that when, say, a client goes to a url, you have a function inside views that renders what he sees and returns a response in html. Let's break it up into examples: views.py def hello(request): return HttpResponse('Hello World!') def home(request): return render_to_response('index.html', {'variable': 'world'}) index.html: <h1>Hello {{ variable }}, welcome to my awesome site</h1> urls.py url(r'^hello/', 'myapp.views.hello'), url(r'^home/', 'myapp.views.home'), That's an example of the simplest of usages. Going to 127.0.0.1:8000/hello means a request to the hello function, going to 127.0.0.1:8000/home will return the index.html and replace all the variables as asked (you probably know all this by now). Now let's talk about AJAX. AJAX calls are client-side code that makes asynchronous requests. That sounds complicated, but it simply means it does a request for you in the background and then handles the response. So when you do an AJAX call for some url, you get the same data you would get as a user going to that place. For example, an ajax call to 127.0.0.1:8000/hello will return the same thing it would as if you visited it. Only this time, you have it inside a js function and you can deal with it however you'd like. Let's look at a simple use case: $.ajax({ url: '127.0.0.1:8000/hello', type: 'get', // This is the default though, you don't actually need to always mention it success: function(data) { alert(data); }, failure: function(data) { alert('Got an error dude'); } }); The general process is this: The call goes to the url 127.0.0.1:8000/hello as if you opened a new tab and did it yourself. If it succeeds (status code 200), do the function for success, which will alert the data received. If it fails, do a different function. Now what would happen here? You would get an alert with 'Hello World!' in it. What happens if you do an ajax call to home? Same thing, you'll get an alert stating <h1>Hello world, welcome to my awesome site</h1>. In other words - there's nothing new about AJAX calls. They are just a way for you to let the user get data and information without leaving the page, and it makes for a smooth and very neat design of your website. A few guidelines you should take note of: Learn jQuery. I cannot stress this enough. You're gonna have to understand it a little to know how to handle the data you receive. You'll also need to understand some basic javascript syntax (not far from python, you'll get used to it). I strongly recommend Envato's video tutorials for jQuery, they are great and will put you on the right path. When to use JSON? You're going to see a lot of examples where the data sent by the Django views is in JSON. I didn't go into detail on that, because it isn't important how to do it (there are plenty of explanations around) and a lot more important when. And the answer to that is - JSON data is serialized data. That is, data you can manipulate. Like I mentioned, an AJAX call will fetch the response as if the user did it himself. Now say you don't want to mess with all the html, and instead want to send data (a list of objects perhaps).
JSON is good for this, because it sends it as an object (JSON data looks like a python dictionary), and then you can iterate over it or do something else that removes the need to sift through useless html. Add it last. When you build a web app and want to implement AJAX - do yourself a favor. First, build the entire app completely devoid of any AJAX. See that everything is working. Then, and only then, start writing the AJAX calls. That's a good process that helps you learn a lot as well. Use chrome's developer tools. Since AJAX calls are done in the background it's sometimes very hard to debug them. You should use the chrome developer tools (or similar tools such as firebug) and console.log things to debug. I won't explain in detail, just google around and find out about it. It would be very helpful to you. CSRF awareness. Finally, remember that post requests in Django require the csrf_token. With AJAX calls, a lot of times you'd like to send data without refreshing the page. You'll probably face some trouble before you'd finally remember that - wait, you forgot to send the csrf_token. This is a known beginner roadblock in AJAX-Django integration, but after you learn how to make it play nice, it's easy as pie. That's everything that comes to my head. It's a vast subject, but yeah, there's probably not enough examples out there. Just work your way there, slowly, you'll get it eventually.
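To make the JSON part concrete, a minimal sketch of a view that returns serialized data instead of HTML (the view name and data are made up for illustration):

# views.py
import json
from django.http import HttpResponse

def poll_results(request):
    data = {'question': 'Is this useful?', 'votes': 42}
    return HttpResponse(json.dumps(data), content_type='application/json')

On the client side, because the response is served with a JSON content type, jQuery hands your success callback an already-parsed object, so you can read data.votes directly rather than sifting through HTML.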
How to call a function from another file in Python?
I've seen this topic here covered numerous times, but none of the answers I've seen seem to work for me, so I'll try to be as specific to my problem as possible. Set_up: I have a .py file for each function I need to use in a program. In this program, I need to call the function from the external files. I've tried: from file.py import function(a,b) but I get the error: ImportError: No module named 'file.py'; file is not a package
No need to add file.py while importing. Just write from file import function, and then call the function using function(a, b). The reason this may not work is that file is one of Python's core modules, so I suggest you change the name of your file. Note that if you're trying to import functions from a.py to a file called b.py, you will need to make sure that a.py and b.py are in the same directory.
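A minimal sketch of the working layout (file names are illustrative), with both files in the same directory:

# utils.py
def function(a, b):
    return a + b

# main.py
from utils import function
print(function(1, 2))  # 3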
Why in numpy `nan == nan` is False while nan in [nan] is True?
While the first part of the question (which is in the title) has been answered a few times before (i.e., Why is NaN not equal to NaN?), I don't see why the second piece works the way it does (inspired by this question How to Check list containing NaN)? Namely: >> nan == nan False >> nan in [nan] True An explanatory addendum to the question considering the answer from @DSM. So, why float("nan") is behaving differently from nan? Shouldn't it evaluate again to simple nan and why interpreter behaves this way? >> x = float("nan") >> y = nan >> x nan >> y nan >> x is nan, x is float("nan"), y is nan (False, False, True) Basically, it refers to same generic nan in the first case, but creates separate object in the second: >> nans = [nan for i in range(2)] >> map(id, nans) [190459300, 190459300] >> nans = [float("nan") for i in range(2)] >> map(id, nans) [190459300, 190459301]
nan not being equal to nan is part of the definition of nan, so that part's easy. As for nan in [nan] being True, that's because identity is tested before equality for containment in lists. You're comparing the same two objects. If you tried the same thing with two different nans, you'd get False: >>> nans = [float("nan") for i in range(2)] >>> map(id, nans) [190459300, 190459284] >>> nans [nan, nan] >>> nans[0] is nans[1] False >>> nans[0] in nans True >>> nans[0] in nans[1:] False Your addendum doesn't really have much to do with nan, that's simply how Python works. Once you understand that float("nan") is under no obligation to return some nan singleton, and that y = x doesn't make a copy of x but instead binds the name y to the object named by x, there's nothing left to get.
Debugging Apache/Django/WSGI Bad Request (400) Error
My simple Django app worked fine in debug mode (manage.py runserver), and works under WSGI+Apache on my dev box, but when I pushed to EC2 I began receiving intermittent (10-80% of the time) errors of Bad Request (400) for any URLs I try to view (whether in my app or in the Django admin. Where can I find debug information about this? Nothing appears in /var/log/apache2/error.log, even with LogLevel=info. I have checked versions, logged the Request environment (cf. ModWSGI Debugging Tips) and see no major differences. The one remaining thought I had is, I'm using the mod_wsgi from Ubuntu 12.04 (libapache2-mod-wsgi 3.3-4build1) which was built against Python 2.7.1; I have Python 2.7.3. And Django is 1.6, which is newer than the Ubuntu Precise version. I hesitate to start building packages from source since it's so hard to clean up and these seem like minor version changes... Thank you for your help. (For reference, here are the Apache config and WSGI apps) Apache config (000-default) <VirtualHost *:80> ServerAdmin webmaster@localhost DocumentRoot /var/www WSGIScriptAlias /rz /usr/local/share/rz/rz.wsgi ... rz.WSGI app import os import sys import django.core.handlers.wsgi import pprint path = '/usr/local/share/rz' if path not in sys.path: sys.path.insert(0, path) os.environ['DJANGO_SETTINGS_MODULE'] = 'rz.settings' class LoggingMiddleware: def __init__(self, application): self.__application = application def __call__(self, environ, start_response): errors = environ['wsgi.errors'] pprint.pprint(('REQUEST', environ), stream=errors) def _start_response(status, headers, *args): pprint.pprint(('RESPONSE', status, headers), stream=errors) return start_response(status, headers, *args) return self.__application(environ, _start_response) application = LoggingMiddleware(django.core.handlers.wsgi.WSGIHandler())
Add the ALLOWED_HOSTS setting to your settings.py like so... ALLOWED_HOSTS = [ '.example.com', # Allow domain and subdomains '.example.com.', # Also allow FQDN and subdomains ] I had this same problem and found the answer here in the docs. Update: the django 1.6 docs are no longer online, so I updated the link to go to the django 1.7 docs for the ALLOWED_HOSTS setting.
Downsample a 1D numpy array
I have a 1-d numpy array which I would like to downsample. Any of the following methods are acceptable if the downsampling raster doesn't perfectly fit the data: overlap downsample intervals convert whatever number of values remains at the end to a separate downsampled value interpolate to fit raster basically if I have 1 2 6 2 1 and I am downsampling by a factor of 3, all of the following are ok: 3 3 3 1.5 or whatever an interpolation would give me here. I'm just looking for the fastest/easiest way to do this. I found scipy.signal.decimate, but that sounds like it decimates the values (takes them out as needed and only leaves one in X). scipy.signal.resample seems to have the right name, but I do not understand where they are going with the whole fourier thing in the description. My signal is not particularly periodic. Could you give me a hand here? This seems like a really simple task to do, but all these functions are quite intricate...
In the simple case where your array's size is divisible by the downsampling factor (R), you can reshape your array, and take the mean along the new axis: import numpy as np a = np.array([1.,2,6,2,1,7]) R = 3 a.reshape(-1, R) => array([[ 1., 2., 6.], [ 2., 1., 7.]]) a.reshape(-1, R).mean(axis=1) => array([ 3. , 3.33333333]) In the general case, you can pad your array with NaNs to a size divisible by R, and take the mean using scipy.nanmean. import math, scipy b = np.append(a, [ 4 ]) b.shape => (7,) pad_size = math.ceil(float(b.size)/R)*R - b.size b_padded = np.append(b, np.zeros(pad_size)*np.NaN) b_padded.shape => (9,) scipy.nanmean(b_padded.reshape(-1,R), axis=1) => array([ 3. , 3.33333333, 4.])
Error installing Python Image Library using pip on Mac OS X 10.9
I want to install PIL on Mavericks using pip but get this error. _imagingft.c:73:10: fatal error: 'freetype/fterrors.h' file not found #include <freetype/fterrors.h> ^ 1 error generated. error: command 'cc' failed with exit status 1 My Command Line Tools are installed and up to date and every hint I found didn't help. How can I get this to compile? EDIT: I just checked, freetype is also already installed via homebrew
Instead of symlinking to a specific version of freetype2, do this: ln -s /usr/local/include/freetype2 /usr/local/include/freetype This saves you the trouble of recreating the symlink whenever you upgrade freetype2.
calling ipython from a virtualenv
I understand that ipython is not virtualenv-aware and that the most logical solution to this is to install ipython in each virtualenv seperately using pip install ipython So far so good. One thing I noticed is that if the system-wide copy of ipython is called from within a virtualenv using $> ipython before ipython is installed under this virtualenv, subsequent $> ipython commands will continue to bring up the system-wide ipython copy. On the other hand, if ipython is not called prior to installing it under a virtualenv $> ipython will bring up the newly installed copy. What is the explanation for this? It also makes me wonder if this behavior means I should expect some trouble down the way? Thanks for your time!
alias ipy="python -c 'import IPython; IPython.terminal.ipapp.launch_new_instance()'" This is a great way of always being sure that the ipython instance always belongs to the virtualenv's python version. This works only on ipython >2.0. Source
Cannot find the file specified when using subprocess.call('dir', shell=True) in Python
In a 64-bit system with 32 bit python 2.7 installed I am trying to do the following: import subprocess p = subprocess.call('dir', shell=True) print p But this gives me: Traceback (most recent call last): File "test.py", line 2, in <module> p = subprocess.call('dir', shell=True) File "C:\Python27\lib\subprocess.py", line 522, in call return Popen(*popenargs, **kwargs).wait() File "C:\Python27\lib\subprocess.py", line 709, in __init__ errread, errwrite) File "C:\Python27\lib\subprocess.py", line 957, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified If I in the terminal do... dir ...it of course prints the present folder content. I have tried to change the shell parameter to shell=False. Edit: Actually I cannot call any executable on the path with subprocess.call(). The statement p = subprocess.call('dir', shell=True) works fine on another machine and I think that it is related. If I do subprocess.call('PATH', shell=True) then I get Traceback (most recent call last): File "test.py", line 4, in <module> subprocess.call('PATH', shell=True) File "C:\Python27\lib\subprocess.py", line 522, in call return Popen(*popenargs, **kwargs).wait() File "C:\Python27\lib\subprocess.py", line 709, in __init__ errread, errwrite) File "C:\Python27\lib\subprocess.py", line 957, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified If I do: import os print os.curdir then I get . All of the above is executed in the terminal started in Administrator mode.
I think you may have a problem with your COMSPEC environment variable: >>> import os >>> os.environ['COMSPEC'] 'C:\\Windows\\system32\\cmd.exe' >>> import subprocess >>> subprocess.call('dir', shell=True) (normal output here) >>> os.environ['COMSPEC'] = 'C:\\nonexistent.exe' >>> subprocess.call('dir', shell=True) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "c:\Python27\lib\subprocess.py", line 493, in call return Popen(*popenargs, **kwargs).wait() File "c:\Python27\lib\subprocess.py", line 679, in __init__ errread, errwrite) File "c:\Python27\lib\subprocess.py", line 896, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified I discovered this potential issue by digging into subprocess.py and looking in the _execute_child function, as pointed-to by the traceback. There, you'll find a block starting with if shell: that will search the environment for said variable and use it to create the arguments used to launch the process.
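So a quick thing to try (assuming the variable is indeed broken on the affected machine) is to inspect and reset it before calling subprocess:

import os
import subprocess

print(os.environ.get('COMSPEC'))  # check what it currently points to
os.environ['COMSPEC'] = r'C:\Windows\system32\cmd.exe'  # the usual default location
print(subprocess.call('dir', shell=True))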
Cleanest way to hide every nth tick label in matplotlib colorbar?
The labels on my horizontal colorbar are too close together and I don't want to reduce text size further: cbar = plt.colorbar(shrink=0.8, orientation='horizontal', extend='both', pad=0.02) cbar.ax.tick_params(labelsize=8) I'd like to preserve all ticks, but remove every other label. Most examples I've found pass a user-specified list of strings to cbar.set_ticklabels(). I'm looking for a general solution. I played around with variations of cbar.set_ticklabels(cbar.get_ticklabels()[::2]) and cbar.ax.xaxis.set_major_locator(matplotlib.ticker.MaxNLocator(nbins=4)) but I haven't found the magic combination. I know there must be a clean way to do this using a locator object.
Loop over the tick labels and call set_visible() on the ones you want hidden: for label in cbar.ax.xaxis.get_ticklabels()[::2]: label.set_visible(False)
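A self-contained sketch of how this fits together (the data here is made up):

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(10, 10)
plt.imshow(data)
cbar = plt.colorbar(shrink=0.8, orientation='horizontal', pad=0.02)
cbar.ax.tick_params(labelsize=8)
# hide every other label but keep all the ticks
for label in cbar.ax.xaxis.get_ticklabels()[::2]:
    label.set_visible(False)
plt.show()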
pandas create named columns in dataframe from dict
I have a dictionary object of the form: my_dict = {id1: val1, id2: val2, id3: val3, ...} I want to create this into a dataframe where I want to name the 2 columns 'business_id' and 'business_code'. I tried: business_df = DataFrame.from_dict(my_dict,orient='index',columns=['business_id','business_code']) but it says from_dict doesn't take in a columns argument. TypeError: from_dict() got an unexpected keyword argument 'columns'
You can iterate through the items: In [11]: pd.DataFrame(list(my_dict.iteritems()), columns=['business_id','business_code']) Out[11]: business_id business_code 0 id2 val2 1 id3 val3 2 id1 val1
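Note that iteritems() is Python 2 only; on Python 3 the same idea is written with items():

pd.DataFrame(list(my_dict.items()), columns=['business_id', 'business_code'])

Also, as the output shows, a plain dict has no guaranteed ordering, so the rows may come out in any order.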
Pythonically add header to a csv file
I wrote a Python script merging two csv files, and now I want to add a header to the final csv. I tried following the suggestions reported here and I got the following error: expected string, float found. What is the most pythonic way to fix this? Here is the code I am using: import csv with open('combined_file.csv', 'wb') as outcsv: writer = csv.DictWriter(outcsv, fieldnames = ["Date", "temperature 1", "Temperature 2"]) writer.writeheader() with open('t1.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows(row + [0.0] for row in reader) with open('t2.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows(row[:1] + [0.0] + row[1:] for row in reader)
The DictWriter() class expects dictionaries for each row. If all you wanted to do was write an initial header, use a regular csv.writer() and pass in a simple row for the header: import csv with open('combined_file.csv', 'wb') as outcsv: writer = csv.writer(outcsv) writer.writerow(["Date", "temperature 1", "Temperature 2"]) with open('t1.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows(row + [0.0] for row in reader) with open('t2.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows(row[:1] + [0.0] + row[1:] for row in reader) The alternative would be to generate dictionaries when copying across your data (note that the dictionary keys must match the fieldnames exactly, including capitalization): import csv with open('combined_file.csv', 'wb') as outcsv: writer = csv.DictWriter(outcsv, fieldnames = ["Date", "temperature 1", "Temperature 2"]) writer.writeheader() with open('t1.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows({'Date': row[0], 'temperature 1': row[1], 'Temperature 2': 0.0} for row in reader) with open('t2.csv', 'rb') as incsv: reader = csv.reader(incsv) writer.writerows({'Date': row[0], 'temperature 1': 0.0, 'Temperature 2': row[1]} for row in reader)
SOCKET ERROR: [Errno 111] Connection refused
I am using the simple python lib for SMTP, but I am getting this error: import smtplib smtpObj = smtplib.SMTP('localhost') Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib/python2.7/smtplib.py", line 249, in __init__ (code, msg) = self.connect(host, port) File "/usr/lib/python2.7/smtplib.py", line 309, in connect self.sock = self._get_socket(host, port, self.timeout) File "/usr/lib/python2.7/smtplib.py", line 284, in _get_socket return socket.create_connection((port, host), timeout) File "/usr/lib/python2.7/socket.py", line 571, in create_connection raise err socket.error: [Errno 111] Connection refused Using python-2.7
Start a simple SMTP server with Python like so: python -m smtpd -n -c DebuggingServer localhost:1025 Or you can also try the Gmail SMTP settings: server = smtplib.SMTP(host='smtp.gmail.com', port=587)
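Once the debugging server above is listening, a minimal send against it looks like this (addresses are placeholders; for the Gmail route you would additionally need starttls() and login()):

import smtplib

server = smtplib.SMTP('localhost', 1025)  # the DebuggingServer started above
server.sendmail('from@example.com', 'to@example.com', 'Subject: test\n\nhello world')
server.quit()

The DebuggingServer doesn't deliver anything; it just prints each message to the console, which is exactly what you want while developing.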
Understanding LDA implementation using gensim
I am trying to understand how gensim package in Python implements Latent Dirichlet Allocation. I am doing the following: Define the dataset documents = ["Apple is releasing a new product", "Amazon sells many things", "Microsoft announces Nokia acquisition"] After removing stopwords, I create the dictionary and the corpus: texts = [[word for word in document.lower().split() if word not in stoplist] for document in documents] dictionary = corpora.Dictionary(texts) corpus = [dictionary.doc2bow(text) for text in texts] Then I define the LDA model. lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=dictionary, num_topics=5, update_every=1, chunksize=10000, passes=1) Then I print the topics: >>> lda.print_topics(5) ['0.181*things + 0.181*amazon + 0.181*many + 0.181*sells + 0.031*nokia + 0.031*microsoft + 0.031*apple + 0.031*announces + 0.031*acquisition + 0.031*product', '0.077*nokia + 0.077*announces + 0.077*acquisition + 0.077*apple + 0.077*many + 0.077*amazon + 0.077*sells + 0.077*microsoft + 0.077*things + 0.077*new', '0.181*microsoft + 0.181*announces + 0.181*acquisition + 0.181*nokia + 0.031*many + 0.031*sells + 0.031*amazon + 0.031*apple + 0.031*new + 0.031*is', '0.077*acquisition + 0.077*announces + 0.077*sells + 0.077*amazon + 0.077*many + 0.077*nokia + 0.077*microsoft + 0.077*releasing + 0.077*apple + 0.077*new', '0.158*releasing + 0.158*is + 0.158*product + 0.158*new + 0.157*apple + 0.027*sells + 0.027*nokia + 0.027*announces + 0.027*acquisition + 0.027*microsoft'] 2013-12-03 13:26:21,878 : INFO : topic #0: 0.181*things + 0.181*amazon + 0.181*many + 0.181*sells + 0.031*nokia + 0.031*microsoft + 0.031*apple + 0.031*announces + 0.031*acquisition + 0.031*product 2013-12-03 13:26:21,880 : INFO : topic #1: 0.077*nokia + 0.077*announces + 0.077*acquisition + 0.077*apple + 0.077*many + 0.077*amazon + 0.077*sells + 0.077*microsoft + 0.077*things + 0.077*new 2013-12-03 13:26:21,880 : INFO : topic #2: 0.181*microsoft + 0.181*announces + 0.181*acquisition + 0.181*nokia + 0.031*many + 0.031*sells + 0.031*amazon + 0.031*apple + 0.031*new + 0.031*is 2013-12-03 13:26:21,881 : INFO : topic #3: 0.077*acquisition + 0.077*announces + 0.077*sells + 0.077*amazon + 0.077*many + 0.077*nokia + 0.077*microsoft + 0.077*releasing + 0.077*apple + 0.077*new 2013-12-03 13:26:21,881 : INFO : topic #4: 0.158*releasing + 0.158*is + 0.158*product + 0.158*new + 0.157*apple + 0.027*sells + 0.027*nokia + 0.027*announces + 0.027*acquisition + 0.027*microsoft >>> I'm not able to understand much out of this result. Is it providing with a probability of the occurrence of each word? Also, what's the meaning of topic #1, topic #2 etc? I was expecting something more or less like the most important keywords. I already checked the gensim tutorial but it didn't really help much. Thanks.
The answer you're looking for is in the gensim tutorial. lda.print_topics(k) prints the most contributing words for k randomly selected topics. One can assume that this is (partially) the distribution of words over each of the given topics, meaning the probability of those words appearing in the topic to the left. Usually, one would run this on a large corpus; running on a ridiculously small sample won't give the best results.
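Beyond inspecting the topics themselves, you can query the trained model for a document's topic distribution, which is usually what you want downstream. A sketch using the corpus from the question (the printed numbers are illustrative, not what you'll get exactly):

doc_topics = lda[corpus[0]]  # list of (topic_id, probability) pairs
print(doc_topics)            # e.g. [(0, 0.066), (2, 0.067), (4, 0.734)]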
pyyaml is producing undesired !!python/unicode output
I am using pyyaml to dump an object to a file. There are several unicode strings in the object. I've done this before, but now it's producing output items like this: 'item': !!python/unicode "some string" Instead of the desired: 'item': 'some string' I'm intending to output as utf-8. The current command I use is: yaml.dump(data,file(suite_out,'w'),encoding='utf-8',indent=4,allow_unicode=True) In other locations I do the following and it works: codecs.open(suite_out,"w","utf-8").write( yaml.dump(suite,indent=4,width=10000) ) What am I doing wrong? Python 2.7.3
I tried many combinations and the only one I can find that consistently produces the correct YAML output is: yaml.safe_dump(data, file(filename,'w'), encoding='utf-8', allow_unicode=True)
How To Get IPython Notebook To Run Python 3?
I am new to Python so bear with me. I installed Anaconda, works great. I set up a Python 3 environment following the Anaconda cmd line instructions, works great. I set up Anaconda's Python 3 environment as Pycharm's interpreter, works great. I launched the Anaconda "launcher.app" and launched IPython Notebook. However, IPython Notebook is running Python 2, not 3. Over three hours of Googling later, I cannot figure out how to set IPython Notebook to run Python 3 instead of 2.
To set IPython Notebook to run Python 3 instead of 2 on my Mac (OS X 10.9), I did the following steps: $ sudo pip3 install ipython[all] Then: $ ipython3 notebook
django - set user permissions when user is automatically created
Django 1.5, python 2.6 The model automatically creates a user under certain conditions: User.objects.get_or_create(username=new_user_name, is_staff=True) u = User.objects.get(username=new_user_name) u.set_password('temporary') In addition to setting the username, password, and is_staff status, I would like to set the user's permissions - something like: u.user_permissions('Can view poll') or u.set_permissions('Can change poll') Is this possible? Thank you!
Use the add and remove methods (note that Permission needs to be imported): from django.contrib.auth.models import Permission permission = Permission.objects.get(name='Can view poll') u.user_permissions.add(permission)
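One gotcha worth knowing: Django's ModelBackend caches a user's permissions on the object the first time they are checked, so adding a permission and then calling has_perm() on the same instance can still return False. Re-fetching the user clears the cache. A sketch, assuming the poll model lives in an app labeled 'polls':

u.user_permissions.add(permission)
u = User.objects.get(username=new_user_name)  # re-fetch to drop the cached permission set
u.has_perm('polls.change_poll')               # permission string format is 'app_label.codename'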
Cropping an image with Python Pillow
I installed Python Pillow and am trying to crop an image. Other effects work great (for example, thumbnail, blurring image, etc.) Whenever I run the code below I get the error: tile cannot extend outside image test_image = test_media.file original = Image.open(test_image) width, height = original.size # Get dimensions left = width/2 top = height/2 right = width/2 bottom = height/2 cropped_example = original.crop((left, top, right, bottom)) cropped_example.show() I used a cropping example I found for PIL, because I couldn't find one for Pillow (which I assumed would be the same).
The problem is with the logic, not Pillow. Pillow is nearly 100% PIL compatible. You created a crop box of 0 * 0 size (left = right and top = bottom), and no display can show that. My code is as follows: from PIL import Image test_image = "Fedora_19_with_GNOME.jpg" original = Image.open(test_image) original.show() width, height = original.size # Get dimensions left = width/4 top = height/4 right = 3 * width/4 bottom = 3 * height/4 cropped_example = original.crop((left, top, right, bottom)) cropped_example.show() Most probably this is not exactly what you want, but it should give you a clear idea of what should be done.
Comparing two date strings in Python
Let's say I have two strings: "10/12/13" and "10/15/13". How can I convert them into date objects so that I can compare the dates? For example, to see which date is before or after.
Use datetime.datetime.strptime: >>> from datetime import datetime as dt >>> a = dt.strptime("10/12/13", "%m/%d/%y") >>> b = dt.strptime("10/15/13", "%m/%d/%y") >>> a > b False >>> a < b True >>>
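The parsed objects also sort naturally, so ordering a whole list of such strings is just a matter of using the same format string as a sort key:

from datetime import datetime as dt

dates = ["10/15/13", "10/12/13", "11/01/13"]
dates.sort(key=lambda s: dt.strptime(s, "%m/%d/%y"))
# ['10/12/13', '10/15/13', '11/01/13']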
iphone push notifications passphrase issue (pyAPns)
I'm trying to implement push notifications for iphone based on PyAPNs. When I run it locally, it blocks and prompts me to enter the passphrase manually, and it doesn't work until I do. I don't know how to set it up to work without the prompt. This is my code: from apns import APNs, Payload import optparse import os certificate_file = here(".." + app.fichier_PEM.url ) token_hex = '0c99bb3d077eeacdc04667d38dd10ca1a' pass_phrase = app.mot_de_passe apns = APNs(use_sandbox=True, cert_file= certificate_file) payload = Payload(alert = message.decode('utf-8'), sound="default", badge=1) apns.gateway_server.send_notification(token_hex, payload) # Get feedback messages for (token_hex, fail_time) in apns.feedback_server.items(): print "fail: "+fail_time
When you create a .pem file without a passphrase, specify -nodes. To create a .pem file without a passphrase: openssl pkcs12 -nocerts -out Pro_Key.pem -in App.p12 -nodes To create a .pem file with a passphrase: openssl pkcs12 -nocerts -out Pro_Key.pem -in App.p12 If you have a .pem file with a password, you can get rid of the password for PyAPNs using the following: openssl rsa -in haspassword.pem -out nopassword.pem Refer to the Raywenderlich Apple Push Notification Step By Step Guide for making certificates and other configuration. Some Python libraries for interacting with the Apple Push Notification service (APNs): djacobs->PyAPNs samuraisam->pyapns apns-client
Joining pandas dataframes by column names
I have two dataframes with the following column names: frame_1: event_id, date, time, county_ID frame_2: countyid, state I would like to get a dataframe with the following columns by joining (left) on county_ID = countyid: joined_dataframe event_id, date, time, county, state I cannot figure out how to do it if the columns on which I want to join are not the index. What's the easiest way? Thanks!
you can use the left_on and right_on options as follows: pd.merge(frame_1, frame_2, left_on = 'county_ID', right_on = 'countyid') I was not sure from the question if you only wanted to merge if the key was in the left hand dataframe. If that is the case then the following will do that (the above will in effect do a many to many merge) pd.merge(frame_1, frame_2, how = 'left', left_on = 'county_ID', right_on = 'countyid')
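Because the join keys have different names, both key columns survive the merge; you can drop the redundant one afterwards to end up with exactly the columns you listed. A sketch:

import pandas as pd

joined_dataframe = pd.merge(frame_1, frame_2, how='left', left_on='county_ID', right_on='countyid')
joined_dataframe = joined_dataframe.drop('countyid', axis=1)  # keep just one copy of the key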
Cyclic dependencies and interfaces in Golang
I am a long time python developer. I was trying out Go, converting an existing python app to Go. It is modular and works really well for me. Upon creating the same structure in Go, I seem to land in cyclic import errors, a lot more than I want to. Never had any import problems in python. I never even had to use import aliases. So I may have had some cyclic imports which were not evident in python. I actually find that strange. Anyways, I am lost, trying to fix these in Go. I have read that interfaces can be used to avoid cyclic dependencies. But I don't understand how. I didn't find any examples on this either. Can somebody help me on this? EDIT: The current python application structure is as follows: main.py settings routes.py -> contains main routes depends on app1/routes.py, app2/routes.py etc database.py -> function like connect() which opens db session constants.py -> general constants apps app1 views.py -> url handler functions models.py -> app specific database functions depends on settings/database.py routes.py -> app specific routes ... app2 views.py -> url handler functions models.py -> app specific database functions depends on settings/database.py routes.py -> app specific routes ... settings/database.py has generic functions like connect() which opens a db session. So an app in the apps package calls database.connect() and a db session is opened. The same is the case with settings/routes.py it has functions that allow apps to add their sub-routes to the main route object. The settings package is more about functions than data/constants. This contains code that is used by apps in the apps package, that would otherwise have to be duplicated in all the apps. So if I need to change the router class, for instance, I just have to change settings/router.py and the apps will continue to work with no modifications.
The short version: Write config functions for hooking packages up to each other at run time rather than compile time. Instead of routes importing all the packages that define routes, it can export routes.Register, which main (or code in each app) can call. As a rule, split a package up when each piece could be useful on its own. If two pieces of functionality are really intimately related, you don't have to split them into packages at all; you can organize with multiple files or types instead. Go's net/http is a big package, for instance, and understandably so. More specifically: Move reusable code 'down' into lower-level packages untangled from your particular use case. If you have a package page containing both logic for your content management system and all-purpose HTML-manipulation code, consider moving the HTML stuff "down" to a package html so you can use it without importing unrelated content management stuff. Break up grab-bag packages (utils, tools) by topic or dependency. Otherwise you can end up importing a huge utils package (and taking on all its dependencies) for one or two pieces of functionality (that wouldn't have so many dependencies if separated out). Pass around basic types and interface values. If you're depending on a package for just a type name, maybe you can avoid that. Maybe some code handling a []Page can instead use a []string of filenames or a []int of IDs or some more general interface (sql.Rows) instead. Related to these points, Ben Johnson gave a lightning talk at GopherCon 2016. He suggests breaking up packages by dependency, and defining one package that just has interfaces and data types, without any but the most trivial functionality (and as a result few to no dependencies); in his words it defines the "language" of your app. Here, I'd rearrange things so the router doesn't need to include the routes: instead, each app package calls a router.Register() method. This is what the Gorilla web toolkit's mux package does. Your routes, database, and constants packages sound like low-level pieces that should be imported by your app code and not import it. Generally, try to build your app in layers. Your higher-layer, use-case-specific app code should import lower-layer, more fundamental tools, and never the other way around. Here are some more thoughts: Packages are for separating independently usable bits of functionality; you don't need to split one off whenever a source file gets large. Unlike in, say, Python or Java, in Go one can split and combine and rearrange files completely independent of the package structure, so you can break up huge files without breaking up packages. The standard library's net/http is about 7k lines (counting comments/blanks but not tests). Internally, it's split into many smaller files and types. But it's one package, I think 'cause there was no reason users would want, say, just cookie handling on its own. On the other hand, net and net/url are separate because they have uses outside HTTP. It's great if you can push "down" utilities into libraries that are independent and feel like their own polished products, or cleanly layer your application itself (e.g., UI sits atop an API sits atop some core functionality and data models). Likewise "horizontal" separation may help you hold the app in your head (e.g., the UI layer breaks up into user account management, the application core, and administrative tools, or something finer-grained than that). But, the core point is, you're free to split or not as works for you.
Use Register or other runtime config methods to keep your general tools (like URL routing or DB access code) from needing to import your app code. Instead of your router looking at app1.Routes, app2.Routes, etc., you have your apps packages import router and register with it in their func init()s. Or, if you'd rather register routes from one package, you could make a myapp/routes package that imports router and all your views and calls router.Register. Point is, the router itself is all-purpose code that needn't import your application's views. Some ways to put together config APIs: Pass app behavior via interfaces or funcs: http can be passed custom implementations of Handler (of course) but also CookieJar or File. text/template and html/template can accept functions to be accessible from templates (in a FuncMap). Export shortcut functions from your package if appropriate: In http, callers can either make and separately configure some http.Server objects, or call http.ListenAndServe(...) that uses a global Server. That gives you a nice design--everything's in an object and callers can create multiple Servers in a process and such--but it also offers a lazy way to configure in the simple single-server case. If you have to, just duct-tape it: You don't have to limit yourself to super-elegant config systems if you can't fit one to your app: should your app have package "myapp/conf" with a global var Conf map[string]interface{}, I won't judge. My one warning would be that this ties every conf-importing package to your app: if some might otherwise be reusable outside your application, maybe you can find a better way to configure them. Those two are maybe the key principles, but a couple of specific cases/tactical thoughts: Separate fundamental tasks from app-dependent ones. One app I work on in another language has a "utils" module mixing general tasks (e.g., formatting datetimes or working with HTML) with app-specific stuff (that depends on the user schema, etc.). But the users package imports the utils, creating a cycle. If I were porting to Go, I'd move the user-dependent utils "up" out of the utils module, maybe to live with the user code or even above it. Consider breaking up grab-bag packages. Slightly enlarging on the last point: if two pieces of functionality are independent (that is, things still work if you move some code to another package) and unrelated from the user's perspective, they're candidates to be separated into two packages. Sometimes the bundling is harmless, but other times it leads to extra dependencies, or a less generic package name would just make clearer code. So my utils above might be broken up by topic or dependency (e.g., strutil, dbutil, etc.). If you wind up with lots of packages this way, we've got goimports to help manage them. Replace import-requiring object types in APIs with basic types and interfaces. Say two entities in your app have a many-to-many relationship like Users and Groups. If they live in different packages (a big 'if'), you can't have both u.Groups() returning a []group.Group and g.Users() returning []user.User because that requires the packages to import each other. However, you could change one or both of those to return, say, a []uint of IDs or a sql.Rows or some other interface you can get to without importing a specific object type. Depending on your use case, types like User and Group might be so intimately related that it's better just to put them in one package, but if you decide they should be distinct, this is a way.
Thanks for the detailed question and followup.
Pandas selecting by label sometimes return series, sometimes returns dataframe
In Pandas, when I select a label that only has one entry in the index I get back a Series, but when I select an entry that has more then one entry I get back a data frame. Why is that? Is there a way to ensure I always get back a data frame? In [1]: import pandas as pd In [2]: df = pd.DataFrame(data=range(5), index=[1, 2, 3, 3, 3]) In [3]: type(df.loc[3]) Out[3]: pandas.core.frame.DataFrame In [4]: type(df.loc[1]) Out[4]: pandas.core.series.Series
Granted that the behavior is inconsistent, but I think it's easy to imagine cases where this is convenient. Anyway, to get a DataFrame every time, just pass a list to loc. There are other ways, but in my opinion this is the cleanest. In [2]: type(df.loc[[3]]) Out[2]: pandas.core.frame.DataFrame In [3]: type(df.loc[[1]]) Out[3]: pandas.core.frame.DataFrame
How to unit test Google Cloud Endpoints
I need some help setting up unit tests for Google Cloud Endpoints. Using WebTest all requests answer with AppError: Bad response: 404 Not Found. I'm not really sure if endpoints is compatible with WebTest. This is how the application is generated: application = endpoints.api_server([TestEndpoint], restricted=False) Then I use WebTest this way: client = webtest.TestApp(application) client.post('/_ah/api/test/v1/test', params) Testing with curl works fine. Should I write tests for endpoints differently? What is the suggestion from the GAE Endpoints team?
After much experimenting and looking at the SDK code I've come up with two ways to test endpoints within python: 1. Using webtest + testbed to test the SPI side You are on the right track with webtest, but just need to make sure you correctly transform your requests for the SPI endpoint. The Cloud Endpoints API front-end and the EndpointsDispatcher in dev_appserver transform calls to /_ah/api/* into corresponding "backend" calls to /_ah/spi/*. The transformation seems to be: All calls are application/json HTTP POSTs (even if the REST endpoint is something else). The request parameters (path, query and JSON body) are all merged together into a single JSON body message. The "backend" endpoint uses the actual python class and method names in the URL, e.g. POST /_ah/spi/TestEndpoint.insert_message will call TestEndpoint.insert_message() in your code. The JSON response is only reformatted before being returned to the original client. This means you can test the endpoint with the following setup: from google.appengine.ext import testbed import webtest # ... def setUp(self): tb = testbed.Testbed() tb.setup_env(current_version_id='testbed.version') #needed because endpoints expects a . in this value tb.activate() tb.init_all_stubs() self.testbed = tb def tearDown(self): self.testbed.deactivate() def test_endpoint_insert(self): app = endpoints.api_server([TestEndpoint], restricted=False) testapp = webtest.TestApp(app) msg = {...} # a dict representing the message object expected by insert # To be serialised to JSON by webtest resp = testapp.post_json('/_ah/spi/TestEndpoint.insert', msg) self.assertEqual(resp.json, {'expected': 'json response msg as dict'}) The thing here is you can easily set up appropriate fixtures in the datastore or other GAE services prior to calling the endpoint, thus you can more fully assert the expected side effects of the call. 2. Starting the development server for full integration test You can start the dev server within the same python environment using something like the following: import sys import os import dev_appserver sys.path[1:1] = dev_appserver._DEVAPPSERVER2_PATHS from google.appengine.tools.devappserver2 import devappserver2 from google.appengine.tools.devappserver2 import python_runtime # ... def setUp(self): APP_CONFIGS = ['/path/to/app.yaml'] python_runtime._RUNTIME_ARGS = [ sys.executable, os.path.join(os.path.dirname(dev_appserver.__file__), '_python_runtime.py') ] options = devappserver2.PARSER.parse_args([ '--admin_port', '0', '--port', '8123', '--datastore_path', ':memory:', '--logs_path', ':memory:', '--skip_sdk_update_check', '--', ] + APP_CONFIGS) server = devappserver2.DevelopmentServer() server.start(options) self.server = server def tearDown(self): self.server.stop() Now you need to issue actual HTTP requests to localhost:8123 to run tests against the API, but again can interact with GAE APIs to set up fixtures, etc. This is obviously slow as you're creating and destroying a new dev server for every test run. At this point I use the Google API Python client to consume the API instead of building the HTTP requests myself: import apiclient.discovery # ... def test_something(self): apiurl = 'http://%s/_ah/api/discovery/v1/apis/{api}/{apiVersion}/rest' \ % self.server.module_to_address('default') service = apiclient.discovery.build('testendpoint', 'v1', apiurl) res = service.testresource().insert({... message ... }).execute() self.assertEquals(res, { ... expected response as dict ... 
}) This is an improvement over testing with curl as it gives you direct access to the GAE APIs to easily set up fixtures and inspect internal state. I suspect there is an even better way to do integration testing that bypasses HTTP by stitching together the minimal components in the dev server that implement the endpoint dispatch mechanism, but that requires more research time than I have right now.
Installing pylibmc on Ubuntu
When running pip install pylibmc on Ubuntu, I get the following error: _pylibmcmodule.h:42:36: fatal error: libmemcached/memcached.h: No such file or directory
sudo apt-get install libmemcached-dev zlib-dev
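If the headers install cleanly, pip install pylibmc should then compile without errors. As a quick smoke test afterwards (a minimal sketch, assuming a memcached server is listening on the default local port 11211):

import pylibmc

mc = pylibmc.Client(["127.0.0.1"], binary=True)
mc.set("greeting", "hello")
print(mc.get("greeting"))  # prints: hello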
Running Django tests in PyCharm
I am trying to run a simple Django test in PyCharm, but it's failing with the following stack trace: /home/ramashishb/local/pyenv/testenv/bin/python /opt/pycharm-3.0.2/helpers/pycharm/django_test_manage.py test snippets.SimpleTest.test_simple /home/ramashishb/mine/learn/django-rest/django-rest-tutorial Testing started at 4:37 PM ... Traceback (most recent call last): File "/opt/pycharm-3.0.2/helpers/pycharm/django_test_manage.py", line 18, in <module> import django_test_runner File "/opt/pycharm-3.0.2/helpers/pycharm/django_test_runner.py", line 14, in <module> from django.test.testcases import TestCase File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/test/__init__.py", line 5, in <module> from django.test.client import Client, RequestFactory File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/test/client.py", line 11, in <module> from django.contrib.auth import authenticate, login, logout, get_user_model File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/contrib/auth/__init__.py", line 6, in <module> from django.middleware.csrf import rotate_token File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/middleware/csrf.py", line 14, in <module> from django.utils.cache import patch_vary_headers File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/utils/cache.py", line 26, in <module> from django.core.cache import get_cache File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/core/cache/__init__.py", line 69, in <module> if DEFAULT_CACHE_ALIAS not in settings.CACHES: File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/conf/__init__.py", line 54, in __getattr__ self._setup(name) File "/home/ramashishb/local/pyenv/testenv/lib/python2.7/site-packages/django/conf/__init__.py", line 47, in _setup % (desc, ENVIRONMENT_VARIABLE)) django.core.exceptions.ImproperlyConfigured: Requested setting CACHES, but settings are not configured. You must either define the environment variable DJANGO_SETTINGS_MODULE or call settings.configure() before accessing settings. Process finished with exit code 1 The test runs fine on the console using ./manage.py test It looks like things are not set up before executing the tests? Any idea? Thanks, Ram
Go to the menu File > Settings > Django Support and select the correct settings file.
Error importing hashlib with python 2.7 but not with 2.6
I'm on Solaris 10 (x86). Until now, I was using python2.6. Today, I installed python2.7 and I have a weird error occurring when importing hashlib on 2.7, but not on 2.6: Python 2.6: root@myserver [PROD] # python2.6 -c "import hashlib" root@myserver [PROD] # Python 2.7: root@myserver [PROD] # python2.7 -c "import hashlib" ERROR:root:code for hash md5 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type md5 ERROR:root:code for hash sha1 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha1 ERROR:root:code for hash sha224 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha224 ERROR:root:code for hash sha256 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha256 ERROR:root:code for hash sha384 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha384 ERROR:root:code for hash sha512 was not found. Traceback (most recent call last): File "/usr/local/lib/python2.7/hashlib.py", line 139, in <module> globals()[__func_name] = __get_hash(__func_name) File "/usr/local/lib/python2.7/hashlib.py", line 91, in __get_builtin_constructor raise ValueError('unsupported hash type ' + name) ValueError: unsupported hash type sha512 I don't understand why I have this error since I'm trying the import ON THE SAME MACHINE. Thanks in advance for your help!
The python2.7 package depends on the libssl1_0_0 package (openssl 1.0 runtime libraries). I installed it, and added the /usr/local/ssl/lib directory to the $LD_LIBRARY_PATH environment variable. And now it works perfectly! :)
Python: Trying to Deserialize Multiple JSON objects in a file, with each object spanning a consistent number of lines
Ok, after nearly a week of research I'm going to give SO a shot. I have a text file that looks as follows (showing 3 separate json objects as an example but the file has 50K of these): { "zipcode":"00544", "current":{"canwc":null,"cig":7000,"class":"observation"}, "triggers":[178,30,176,103,179,112,21,20,48,7,50,40,57] } { "zipcode":"00601", "current":{"canwc":null,"cig":null,"class":"observation"}, "triggers":[12,23,34,28,100] } { "zipcode":"00602", "current":{"canwc":null,"cig":null,"class":"observation"}, "triggers":[13,85,43,101,38,31] } I know how to work with JSON objects using the Python json library but I'm having a challenge with how to create 50 thousand different json objects from reading the file. (Perhaps I'm not even thinking about this correctly but ultimately I need to deserialize and load into a database) I've tried itertools thinking that I need a generator so I was able to use: with open(file) as f: for line in itertools.islice(f, 0, 7): #since every 7 lines is a json object jfile = json.load(line) But the above obviously won't work since it is not reading the 7 lines as a single json object and I'm also not sure how to then iterate on the entire file and load individual json objects. The following would give me a list I can slice: list(open(file))[:7] Any help would be really appreciated. Extremely close to what I need and I think literally one step away but still struggling a little with iteration. This will finally get me an iterative printout of all of the dataframes but how do I make it so that I can capture one giant dataframe with all of the pieces essentially concatenated? I could then export that final dataframe to csv etc. (Also is there a better way to upload this result into a database rather than creating a giant dataframe first?) def lines_per_n(f, n): for line in f: yield ''.join(chain([line], itertools.islice(f, n - 1))) def flatten(jfile): for k, v in jfile.items(): if isinstance(v, list): jfile[k] = ','.join(v) elif isinstance(v, dict): for kk, vv in v.items(): jfile['%s' % (kk)] = vv del jfile[k] return jfile with open('deadzips.json') as f: for chunk in lines_per_n(f, 7): try: jfile = json.loads(chunk) pd.DataFrame(flatten(jfile).items()) except ValueError, e: pass else: pass
Load 6 extra lines instead, and pass the string to json.loads(): with open(file) as f: for line in f: # slice the next 6 lines from the iterable, as a list. lines = [line] + list(itertools.islice(f, 6)) jfile = json.loads(''.join(lines)) # do something with jfile json.load() will slurp up more than just the next object in the file, and islice(f, 0, 7) would read only the first 7 lines, rather than read the file in 7-line blocks. You can wrap reading a file in blocks of size N in a generator: from itertools import islice, chain def lines_per_n(f, n): for line in f: yield ''.join(chain([line], islice(f, n - 1))) then use that to chunk up your input file: with open(file) as f: for chunk in lines_per_n(f, 7): jfile = json.loads(chunk) # do something with jfile Alternatively, if your blocks turn out to be of variable length, read until you have something that parses: with open(file) as f: for line in f: while True: try: jfile = json.loads(line) break except ValueError: # Not yet a complete JSON value line += next(f) # do something with jfile
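If the objects ever stop being exactly 7 lines each, line counting breaks down entirely; a hedged alternative (not part of the answer above) is json.JSONDecoder.raw_decode from the standard library, which parses one object at a time and reports where it stopped. A sketch, assuming the whole file fits in memory (reasonable for 50K small objects; the helper name iter_json_objects is made up here):

import json

def iter_json_objects(text):
    decoder = json.JSONDecoder()
    idx = 0
    while idx < len(text):
        # skip whitespace between objects
        while idx < len(text) and text[idx].isspace():
            idx += 1
        if idx >= len(text):
            break
        obj, end = decoder.raw_decode(text, idx)  # returns (object, end position)
        yield obj
        idx = end

with open('deadzips.json') as f:
    for jfile in iter_json_objects(f.read()):
        pass  # flatten jfile / insert it into the database here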
How to remove a package from PyPI
How do I remove a package from PyPI? I uploaded a package to PyPI several months ago. The package is now obsolete and I'd like to formally remove it. I cannot find any documentation on how to remove my package.
Login. Go to your packages. Check the "remove" checkbox for the particular package. Click the "Remove" button.
Python - Why do the find and index methods work differently?
In Python, find and index are very similar methods, used to look up values in a sequence type. find is used for strings, while index is for lists and tuples. They both return the lowest index (the index furthest to the left) at which the supplied argument is found. For example, both of the following would return 1: "abc".find("b") [1,2,3].index(2) However, one thing I'm somewhat confused about is that, even though the two methods are very similar, and fill nearly the same role, just for different data types, they have very different reactions to attempting to find something not in the sequence. "abc".find("d") Returns -1, to signify 'not found', while [1,2,3].index(4) raises an exception. Basically, why do they have different behaviors? Is there a particular reason, or is it just a weird inconsistency for no particular reason? Now, I'm not asking how to deal with this - obviously, a try / except block, or a conditional 'in' statement, would work. I'm simply asking what the rationale was for making the behavior in just that particular case different. To me, it would make more sense to have a particular behavior to say not found, for consistency's sake. Also, I'm not asking for opinions on whether the reason is a good reason or not - I'm simply curious about what the reason is. Edit: Some have pointed out that strings also have an index method, which works like the index method for lists, which I'll admit I didn't know, but that just makes me wonder why, if strings have both, lists only have index.
This has always been annoying ;-) Contrary to one answer, there's nothing special about -1 with respect to strings; e.g., >>> "abc"[-1] 'c' >>> [2, 3, 42][-1] 42 The problem with find() in practice is that -1 is in fact not special as an index. So code using find() is prone to surprises when the thing being searched for is not found - it was noted even before Python 1.0.0 was released that such code often went on to do a wrong thing. No such surprises occur when index() is used instead - an exception can't be ignored silently. But setting up try/except for such a simple operation is not only annoying, it adds major overhead (extra time) for what "should be" a fast operation. Because of that, string.find() was added in Python 0.9.9 (before then, only string.index() could be used). So we have both, and that persists even into Python 3. Pick your poison :-)
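A tiny demonstration of that silent-wrong-answer failure mode:

>>> s = "abc"
>>> s[s.find("d")]   # find() returned -1, which is a perfectly valid index
'c'
>>> s[s.index("d")]  # index() fails loudly instead
Traceback (most recent call last):
  ...
ValueError: substring not found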
No module named flask using virtualenv
I am following these steps to learn flask http://blog.miguelgrinberg.com/post/the-flask-mega-tutorial-part-i-hello-world/page/0#comments I ran this command to create the virtual env: python virtualenv.py flask When I try to start flask, it says No module named flask My PATH is set to the python directory that virtualenv installed. Why is it not able to find flask? I initially started with the official Flask quickstart guide and was able to get the webserver to run, but with this virtual env install it is not working.
Make sure your virtualenv is activated. Then check the PYTHONPATH of that virtualenv: is there a flask package (folder) installed in that directory? If you're unsure whether you have installed flask, just run pip list or pip show flask to see all the packages you have installed. Do you see flask there? If not, you can run pip install flask
Flask-Login check if user is authenticated without decorator
From the Flask-Login docs, it's described how a user of the system can require an authenticated User-model to access a method utilising the decorator syntax: from flask_login import login_required @app.route("/settings") @login_required def settings(): pass Now that's all well and good, but I want to be able to examine within a method whether the user is logged in, something like this: @app.route('/main/', methods=['GET', 'POST']) def main_route(): if request.method == 'GET': if user_is_authenticated(): #Do the authentication here #load authenticated /main/ html template etc. pass else: #load unauthenticated /main/ html template etc. pass ... The reason for this, is because it factorises the GET and POST requests rather than duplicating routes for authenticated users and unauthenticated users. How can I do this? Is it possible?
This is very simple in flask: from flask.ext.login import current_user @app.route(...) def main_route(): if current_user.is_authenticated(): return render_template("main_for_user.html") else: return render_template("main_for_anonymous.html") See the documentation on anonymous users.
Python booleans - if x:, vs if x == True, vs if x is True
Apologies if this has been asked before, but I have searched in vain for an answer to my exact question. Basically, with Python 2.7, I have a program running a series of geoprocessing tools, depending on what is requested via a series of True/False variables that the user adjusts in the script e.g. x = True if x: run function However, I have now discovered that x does not need to be literally "True" for the function to run. For example: In: x = True if x: print True Out: True In: x = 123 if x: print True Out: True In: x = 'False' if x: print True Out: True In: x = False if x: print True Out: So any value other than False appears to evaluate to True, which would not be the case for if x == True or if x is True. Seeing as PEP 8 strongly recommends only using the if x: variant, can anybody explain why this behaviour occurs? It seems that if x: is more a test for "if x is not False" or "if x exists". With that in mind, I believe I should be using if x is True: in this case, despite what PEP 8 has to say. Kind regards
The following values in Python are false in the context of if and other logical contexts: False None numeric values equal to 0, such as 0, 0.0, -0.0 empty strings: '' and u'' empty containers (such as lists, tuples and dictionaries) anything that implements __bool__ (in Python3) to return False, or __nonzero__ (in Python2) to return False or 0. anything that doesn't implement __bool__ (in Python3) or __nonzero__ (in Python2), but does implement __len__ to return a value equal to 0 An object is considered "false" if any of those applies, and "true" otherwise, regardless of whether it's actually equal to or identical with False or True Now, if you've arranged that x is necessarily one of the objects True or False, then you can safely write if x. If you've arranged that the "trueness" of x indicates whether or not to perform the operation, regardless of type, then you can safely write if x. Where you can write that you should prefer to do so, since it's cleaner to read. Normally, if it is allowed for x to take the value True then you're in one of those two cases, and so you would not write if x is True. The important thing is to correctly document the meaning of x, so that it reflects the test used in the code. Python programmers are expected to know what's considered true, so if you just document, "runs the function if x is true", then that expresses what your original code does. Documenting it, "runs the function if x is True" would have a different meaning, and is less commonly used precisely because of the style rule in PEP8 that says to test for trueness rather than the specific value True. However, if you wanted the code to behave differently in the case where x is an empty container from the case where it is None, then you would write something like if x is not None.
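For example, a minimal illustration of the last two rules (Python 3 spelling shown; in Python 2 the hook is named __nonzero__):

>>> class AlwaysFalse(object):
...     def __bool__(self):   # __nonzero__ in Python 2
...         return False
...
>>> bool(AlwaysFalse())
False
>>> class Empty(object):
...     def __len__(self):
...         return 0
...
>>> bool(Empty())
False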
How do I concatenate many objects into one object using inheritance in Python? (during runtime)
I have the following classes: class hello(object): def __init__(self): pass class bye(object): def __init__(self): pass l = [hello, bye] If I do the following I get an error: >>> class bigclass(*l): File "<stdin>", line 1 class bigclass(*l): ^ SyntaxError: invalid syntax Is there another way to do this automatically at runtime? I am using Python 2.7.
You could use the 3-argument form of type to create the class: bigclass = type('bigclass', (hello, bye), {})
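Note that type wants a tuple of bases, hence tuple(l) if you are starting from a list; a quick check that the result really inherits from both classes:

>>> bigclass = type('bigclass', tuple(l), {})
>>> obj = bigclass()
>>> isinstance(obj, hello), isinstance(obj, bye)
(True, True)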
Python: convert defaultdict to dict
How can I convert a defaultdict number_to_letter defaultdict(<class 'list'>, {'2': ['a'], '3': ['b'], '1': ['b', 'a']}) into a common dict? {'2': ['a'], '3': ['b'], '1': ['b', 'a']}
You can simply call dict: >>> a defaultdict(<type 'list'>, {'1': ['b', 'a'], '3': ['b'], '2': ['a']}) >>> dict(a) {'1': ['b', 'a'], '3': ['b'], '2': ['a']} but remember that a defaultdict is a dict: >>> isinstance(a, dict) True just with slightly different behaviour, in that when you try to access a key which is missing -- which would ordinarily raise a KeyError -- the default_factory is called instead: >>> a.default_factory <type 'list'> That's what you see when you print a before the data side of the dictionary appears. So another trick to get more dictlike behaviour back without actually making a new object is to reset default_factory: >>> a.default_factory = None >>> a[4].append(10) Traceback (most recent call last): File "<ipython-input-6-0721ca19bee1>", line 1, in <module> a[4].append(10) KeyError: 4 but most of the time this isn't worth the trouble.
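One extra wrinkle not covered above: if the defaultdict is nested, dict(a) only converts the outer layer. A small hedged helper for deep conversion (the name to_plain_dict is made up here):

from collections import defaultdict

def to_plain_dict(d):
    # recursively replace defaultdicts with plain dicts
    if isinstance(d, defaultdict):
        d = {k: to_plain_dict(v) for k, v in d.items()}
    return d

tree = defaultdict(lambda: defaultdict(list))
tree['a']['b'].append(1)
print(to_plain_dict(tree))  # {'a': {'b': [1]}}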
Python's assert_called_with, is there a wildcard character?
Suppose I have a class in python set up like this. from somewhere import sendmail class MyClass: def __init__(self, **kwargs): self.sendmail = kwargs.get("sendmail", sendmail) #if we can't find it, use imported def def publish(self): #lots of irrelevant code #and then self.sendmail(mail_to, mail_from, subject, body, format= 'html') So as you can see, I have sort of given myself the option to parameterize which function I use for self.sendmail Now in the test file. class Tester(): kwargs = {"sendmail": MagicMock(mail_from= None, mail_to= None, subject= None, body= None, format= None)} self.myclass = MyClass(**kwargs) ##later on def testDefaultEmailHeader(self): default_subject = "Hello World" self.myclass.publish() self.myclass.sendmail.assert_called() #this is doing just fine self.myclass.sendmail.assert_called_with(default_subject) #this is having issues For some reason I am getting the error message AssertionError: Expected call: mock('Hello World') Actual Call : mock('defaultmt', 'defaultmf', 'Hello World', 'default_body', format= 'html') So basically, the assert is expecting sendmail to be called with only one variable, when it ends up being called with all 5. The thing is, I don't care about what the other 4 variables are! I just want to make sure it is called with the correct subject. I tried the mock placeholder ANY, and got the same thing self.myclass.sendmail.assert_called_with(ANY, ANY, 'Hello World', ANY, ANY) AssertionError: Expected call: mock(<ANY>, <ANY>, 'Hello World', <ANY>, <ANY>) Actual Call : mock('defaultmt', 'defaultmf', 'Hello World', 'default_body', format= 'html') Really unsure on how to proceed with this one. Anyone have any advice if we only care about one of the variables and want to ignore the rest?
If you're calling sendmail with a named parameter subject then it's better to check whether the named argument matches what you expect: args, kwargs = self.myclass.sendmail.call_args self.assertEqual(kwargs['subject'], "Hello World") This does assume both implementations of sendmail have a named parameter called subject. If that's not the case you can do the same with a positional parameter: args, kwargs = self.myclass.sendmail.call_args self.assertTrue("Hello World" in args) You can be explicit about the position of the argument (i.e., the first argument or the third argument that's passed to sendmail), but that depends on the implementation of sendmail being tested.
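For completeness, mock.ANY does work here; the expected call just has to mirror the real call's shape (four positional arguments plus the format keyword), which is why the five-positional-ANY attempt in the question failed. A sketch, assuming the production call keeps that signature:

from mock import ANY  # unittest.mock in Python 3.3+

self.myclass.sendmail.assert_called_with(ANY, ANY, 'Hello World', ANY, format=ANY)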
issues working with python generators and openstack swift client
I'm having a problem with Python generators while working with the Openstack Swift client library. The problem at hand is that I am trying to retrieve a large string of data from a specific url (about 7MB), chunk the string into smaller bits, and send a generator class back, with each iteration holding a chunked bit of the string. in the test suite, this is just a string that's sent to a monkeypatched class of the swift client for processing. The code in the monkeypatched class looks like this: def monkeypatch_class(name, bases, namespace): '''Guido's monkeypatch metaclass.''' assert len(bases) == 1, "Exactly one base class required" base = bases[0] for name, value in namespace.iteritems(): if name != "__metaclass__": setattr(base, name, value) return base And in the test suite: from swiftclient import client import StringIO import utils class Connection(client.Connection): __metaclass__ = monkeypatch_class def get_object(self, path, obj, resp_chunk_size=None, ...): contents = None headers = {} # retrieve content from path and store it in 'contents' ... if resp_chunk_size is not None: # stream the string into chunks def _object_body(): stream = StringIO.StringIO(contents) buf = stream.read(resp_chunk_size) while buf: yield buf buf = stream.read(resp_chunk_size) contents = _object_body() return headers, contents After returning the generator object, it was called by a stream function in the storage class: class SwiftStorage(Storage): def get_content(self, path, chunk_size=None): path = self._init_path(path) try: _, obj = self._connection.get_object( self._container, path, resp_chunk_size=chunk_size) return obj except Exception: raise IOError("Could not get content: {}".format(path)) def stream_read(self, path): try: return self.get_content(path, chunk_size=self.buffer_size) except Exception: raise OSError( "Could not read content from stream: {}".format(path)) And finally, in my test suite: def test_stream(self): filename = self.gen_random_string() # test 7MB content = self.gen_random_string(7 * 1024 * 1024) self._storage.stream_write(filename, io) io.close() # test read / write data = '' for buf in self._storage.stream_read(filename): data += buf self.assertEqual(content, data, "stream read failed. output: {}".format(data)) The output ends up with this: ====================================================================== FAIL: test_stream (test_swift_storage.TestSwiftStorage) ---------------------------------------------------------------------- Traceback (most recent call last): File "/home/bacongobbler/git/github.com/bacongobbler/docker-registry/test/test_local_storage.py", line 46, in test_stream "stream read failed. output: {}".format(data)) AssertionError: stream read failed. output: <generator object _object_body at 0x2a6bd20> I tried isolating this with a simple python script that follows the same flow as the code above, which passed without issues: def gen_num(): def _object_body(): for i in range(10000000): yield i return _object_body() def get_num(): return gen_num() def stream_read(): return get_num() def main(): num = 0 for i in stream_read(): num += i print num if __name__ == '__main__': main() Any help with this issue is greatly appreciated :)
In your get_object method, you're assigning the return value of _object_body() to the contents variable. However, that variable is also the one that holds your actual data, and it's used early on in _object_body. The problem is that _object_body is a generator function (it uses yield). Therefore, when you call it, it produces a generator object, but the code of the function doesn't start running until you iterate over that generator. Which means that when the function's code actually starts running (the for loop in _test_stream), it's long after you've reassigned contents = _object_body(). Your stream = StringIO(contents) therefore creates a StringIO object containing the generator object (hence your error message), not the data. Here's a minimal reproduction case that illustrates the problem: def foo(): contents = "Hello!" def bar(): print contents yield 1 # Only create the generator. This line runs none of the code in bar. contents = bar() print "About to start running..." for i in contents: # Now we run the code in bar, but contents is now bound to # the generator object. So this doesn't print "Hello!" pass
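The fix, then, is simply to stop reusing the name; a minimal sketch of the tail of get_object:

# bind the generator to a fresh name so that 'contents' still
# holds the real data when the generator body finally runs
body = _object_body()
return headers, body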
Django haystack EdgeNgramField giving different results than elasticsearch
I'm currently running haystack with an elasticsearch backend, and now I'm building an autocomplete for city names. The problem is that SearchQuerySet is giving me different results, which from my perspective are wrong, than the same query executed directly in elasticsearch, which are for me the expected results. I'm using: Django 1.5.4, django-haystack 2.1.0, pyelasticsearch 0.6.1, elasticsearch 0.90.3 Using the following example data: Midfield Midland City Midway Minor Minturn Miami Beach Using either SearchQuerySet().models(Geoname).filter(name_auto='mid') or SearchQuerySet().models(Geoname).autocomplete(name_auto='mid') The result always returns all 6 names, including Min* and Mia*... however, querying elasticsearch directly returns the right data: "query": { "filtered" : { "query" : { "match_all": {} }, "filter" : { "term": {"name_auto": "mid"} } } } { "took": 1, "timed_out": false, "_shards": { "total": 5, "successful": 5, "failed": 0 }, "hits": { "total": 3, "max_score": 1, "hits": [ { "_index": "haystack", "_type": "modelresult", "_id": "csi.geoname.4075977", "_score": 1, "_source": { "name_auto": "Midfield", } }, { "_index": "haystack", "_type": "modelresult", "_id": "csi.geoname.4075984", "_score": 1, "_source": { "name_auto": "Midland City", } }, { "_index": "haystack", "_type": "modelresult", "_id": "csi.geoname.4075989", "_score": 1, "_source": { "name_auto": "Midway", } } ] } } The behavior is the same with different examples. My guess is that through haystack the string is being split and analyzed into all possible "min_gram" groups of characters and that's why it returns wrong results. I'm not sure if I am doing or understanding something wrong, and if this is how haystack is supposed to work, but I need the haystack results to match the elasticsearch results. So, how can I fix the issue or make it work? My summarized objects look as follows: Model: class Geoname(models.Model): id = models.IntegerField(primary_key=True) name = models.CharField(max_length=255) Index: class GeonameIndex(indexes.SearchIndex, indexes.Indexable): text = indexes.CharField(document=True, use_template=True) name_auto = indexes.EdgeNgramField(model_attr='name') def get_model(self): return Geoname Mapping: modelresult: { _boost: { name: "boost", null_value: 1 }, properties: { django_ct: { type: "string" }, django_id: { type: "string" }, name_auto: { type: "string", store: true, term_vector: "with_positions_offsets", analyzer: "edgengram_analyzer" } } } Thank you.
After a deep look into the code I found that the search generated by haystack was: { "query":{ "filtered":{ "filter":{ "fquery":{ "query":{ "query_string":{ "query": "django_ct:(csi.geoname)" } }, "_cache":false } }, "query":{ "query_string":{ "query": "name_auto:(mid)", "default_operator":"or", "default_field":"text", "auto_generate_phrase_queries":true, "analyze_wildcard":true } } } }, "from":0, "size":6 } Running this query in elasticsearch gave me as a result the same 6 objects that haystack was showing... but if I added "analyzer": "standard" to the "query_string" it worked as desired. So the idea was to be able to set up a different search analyzer for the field. Based on the @user954994 answer's link and the explanation on this post, what I finally did to make it work was: I created my custom elasticsearch backend, adding a new custom analyzer based on the standard one. I added a custom EdgeNgramField, enabling a way to set up a specific analyzer for indexing (index_analyzer) and another analyzer for search (search_analyzer). So, my new settings are: ELASTICSEARCH_INDEX_SETTINGS = { 'settings': { "analysis": { "analyzer": { "ngram_analyzer": { "type": "custom", "tokenizer": "lowercase", "filter": ["haystack_ngram"] }, "edgengram_analyzer": { "type": "custom", "tokenizer": "lowercase", "filter": ["haystack_edgengram"] }, "suggest_analyzer": { "type":"custom", "tokenizer":"standard", "filter":[ "standard", "lowercase", "asciifolding" ] }, }, "tokenizer": { "haystack_ngram_tokenizer": { "type": "nGram", "min_gram": 3, "max_gram": 15, }, "haystack_edgengram_tokenizer": { "type": "edgeNGram", "min_gram": 2, "max_gram": 15, "side": "front" } }, "filter": { "haystack_ngram": { "type": "nGram", "min_gram": 3, "max_gram": 15 }, "haystack_edgengram": { "type": "edgeNGram", "min_gram": 2, "max_gram": 15 } } } } } My new custom build_schema method looks as follows: def build_schema(self, fields): content_field_name, mapping = super(ConfigurableElasticBackend, self).build_schema(fields) for field_name, field_class in fields.items(): field_mapping = mapping[field_class.index_fieldname] index_analyzer = getattr(field_class, 'index_analyzer', None) search_analyzer = getattr(field_class, 'search_analyzer', None) field_analyzer = getattr(field_class, 'analyzer', self.DEFAULT_ANALYZER) if field_mapping['type'] == 'string' and field_class.indexed: if not hasattr(field_class, 'facet_for') and not field_class.field_type in('ngram', 'edge_ngram'): field_mapping['analyzer'] = field_analyzer if index_analyzer and search_analyzer: field_mapping['index_analyzer'] = index_analyzer field_mapping['search_analyzer'] = search_analyzer del(field_mapping['analyzer']) mapping.update({field_class.index_fieldname: field_mapping}) return (content_field_name, mapping) And after rebuilding the index my mapping looks as below: modelresult: { _boost: { name: "boost", null_value: 1 }, properties: { django_ct: { type: "string" }, django_id: { type: "string" }, name_auto: { type: "string", store: true, term_vector: "with_positions_offsets", index_analyzer: "edgengram_analyzer", search_analyzer: "suggest_analyzer" } } } Now everything is working as expected! UPDATE: Below you'll find the code to clarify this part: I created my custom elasticsearch backend, adding a new custom analyzer based on the standard one. I added a custom EdgeNgramField, enabling a way to set up a specific analyzer for indexing (index_analyzer) and another analyzer for search (search_analyzer).
Into my app search_backends.py: from django.conf import settings from haystack.backends.elasticsearch_backend import ElasticsearchSearchBackend from haystack.backends.elasticsearch_backend import ElasticsearchSearchEngine from haystack.fields import EdgeNgramField as BaseEdgeNgramField # Custom Backend class CustomElasticBackend(ElasticsearchSearchBackend): DEFAULT_ANALYZER = None def __init__(self, connection_alias, **connection_options): super(CustomElasticBackend, self).__init__( connection_alias, **connection_options) user_settings = getattr(settings, 'ELASTICSEARCH_INDEX_SETTINGS', None) self.DEFAULT_ANALYZER = getattr(settings, 'ELASTICSEARCH_DEFAULT_ANALYZER', "snowball") if user_settings: setattr(self, 'DEFAULT_SETTINGS', user_settings) def build_schema(self, fields): content_field_name, mapping = super(CustomElasticBackend, self).build_schema(fields) for field_name, field_class in fields.items(): field_mapping = mapping[field_class.index_fieldname] index_analyzer = getattr(field_class, 'index_analyzer', None) search_analyzer = getattr(field_class, 'search_analyzer', None) field_analyzer = getattr(field_class, 'analyzer', self.DEFAULT_ANALYZER) if field_mapping['type'] == 'string' and field_class.indexed: if not hasattr(field_class, 'facet_for') and not field_class.field_type in('ngram', 'edge_ngram'): field_mapping['analyzer'] = field_analyzer if index_analyzer and search_analyzer: field_mapping['index_analyzer'] = index_analyzer field_mapping['search_analyzer'] = search_analyzer del(field_mapping['analyzer']) mapping.update({field_class.index_fieldname: field_mapping}) return (content_field_name, mapping) class CustomElasticSearchEngine(ElasticsearchSearchEngine): backend = CustomElasticBackend # Custom field class CustomFieldMixin(object): def __init__(self, **kwargs): self.analyzer = kwargs.pop('analyzer', None) self.index_analyzer = kwargs.pop('index_analyzer', None) self.search_analyzer = kwargs.pop('search_analyzer', None) super(CustomFieldMixin, self).__init__(**kwargs) class CustomEdgeNgramField(CustomFieldMixin, BaseEdgeNgramField): pass My index definition goes something like: class MyIndex(indexes.SearchIndex, indexes.Indexable): text = indexes.CharField(document=True, use_template=True) name_auto = CustomEdgeNgramField(model_attr='name', index_analyzer="edgengram_analyzer", search_analyzer="suggest_analyzer") And finally, settings uses of course the custom backend for the haystack connection definition: HAYSTACK_CONNECTIONS = { 'default': { 'ENGINE': 'my_app.search_backends.CustomElasticSearchEngine', 'URL': 'http://localhost:9200', 'INDEX_NAME': 'index' }, }
Local MySQLdb connection fails with AttributeError for paramstyle when running GAE development server
I'm building a GAE Flask application with Flask-SQLAlchemy, against Cloud SQL, and running dev_appserver to test the application as I build it. However, if I set the SQLALCHEMY_DATABASE_URI to a mysql+gaerdbms:///appname?instance=instanceid URL, I get the following traceback when trying to call db.create_all(): Traceback (most recent call last): # earlier lines omitted for brevity File "/Project/app/foo.bar/foo/bar/admin/__init__.py", line 26, in init_db db.create_all() File "/Project/app/distlib/flask_sqlalchemy/__init__.py", line 856, in create_all self._execute_for_all_tables(app, bind, 'create_all') File "/Project/app/distlib/flask_sqlalchemy/__init__.py", line 848, in _execute_for_all_tables op(bind=self.get_engine(app, bind), tables=tables) File "/Project/app/distlib/flask_sqlalchemy/__init__.py", line 797, in get_engine return connector.get_engine() File "/Project/app/distlib/flask_sqlalchemy/__init__.py", line 473, in get_engine self._engine = rv = sqlalchemy.create_engine(info, **options) File "/Project/app/distlib/sqlalchemy/engine/__init__.py", line 332, in create_engine return strategy.create(*args, **kwargs) File "/Project/app/distlib/sqlalchemy/engine/strategies.py", line 69, in create dialect = dialect_cls(**dialect_args) File "/Project/app/distlib/sqlalchemy/dialects/mysql/base.py", line 1986, in __init__ default.DefaultDialect.__init__(self, **kwargs) File "/Project/app/distlib/sqlalchemy/engine/default.py", line 124, in __init__ self.paramstyle = self.dbapi.paramstyle AttributeError: 'module' object has no attribute 'paramstyle' What gives? Why is the (DB-API 2.0 required) paramstyle attribute missing?
This means the MySQLdb module is missing and failed to import. The GAE SDK does not itself come with the MySQLdb client library; install MySQLdb (as instructed in the SDK documentation): venv/bin/pip install mysql-python or use your OS package manager to install MySQLdb in your system python. The error is caused by the Google google.appengine.api.rdbms_mysqldb module, acting as a stub, not having a paramstyle attribute when import MySQLdb fails. A stub connect() function is provided that'll raise a more helpful exception, but due to an unfortunate interaction with SQLAlchemy the error there is a lot less informative.
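Once the client library is installed, a quick sanity check that the real driver imports (MySQLdb's DB-API paramstyle is 'format'):

$ python -c "import MySQLdb; print MySQLdb.paramstyle"
format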
"pip install SQLAlchemy" Results in "fatal error: Python.h: No such file or directory"
Calling pip install SQLAlchemy I get an error: lib/sqlalchemy/cextension/processors.c:10:20: fatal error: Python.h: No such file or directory As far as I know, I have the correct Python version (2.7.3) and OS (Ubuntu 12.04) (see below) for this to work. Am I doing anything wrong? The install does work as pip install --global-option='--without-cextensions' SQLAlchemy but I want the C extensions. Full output: root@mycomputer:/# pip install SQLAlchemy Downloading/unpacking SQLAlchemy Downloading SQLAlchemy-0.8.3.tar.gz (3.9Mb): 3.9Mb downloaded Running setup.py egg_info for package SQLAlchemy warning: no files found matching '*.jpg' under directory 'doc' no previously-included directories found matching 'doc/build/output' Installing collected packages: SQLAlchemy Running setup.py install for SQLAlchemy building 'sqlalchemy.cprocessors' extension gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/usr/include/python2.7 -c lib/sqlalchemy/cextension/processors.c -o build/temp.linux-x86_64-2.7/lib/sqlalchemy/cextension/processors.o lib/sqlalchemy/cextension/processors.c:10:20: fatal error: Python.h: No such file or directory compilation terminated. *************************************************************************** command 'gcc' failed with exit status 1 WARNING: The C extension could not be compiled, speedups are not enabled. Failure information, if any, is above. Retrying the build without the C extension now. *************************************************************************** warning: no files found matching '*.jpg' under directory 'doc' no previously-included directories found matching 'doc/build/output' *************************************************************************** WARNING: The C extension could not be compiled, speedups are not enabled. Plain-Python build succeeded. *************************************************************************** Successfully installed SQLAlchemy Cleaning up... root@mycomputer:/# Python Version: root@mycomputer:/#python -V Python 2.7.3 root@mycomputer:/# Ubuntu Version: root@mycomputer:/#cat /etc/lsb-release DISTRIB_ID=Ubuntu DISTRIB_RELEASE=12.04 DISTRIB_CODENAME=precise DISTRIB_DESCRIPTION="Ubuntu 12.04 LTS" root@mycomputer:/#
You need to install the python-dev (or similar name) package for your version of Python. It includes all the header files needed to compile C extensions. These files are (unfortunately) not included in the default python packages. For Ubuntu, the command is sudo apt-get install python-dev or sudo apt-get install python3-dev depending on which version you're using.
Right way to reverse pandas.DataFrame?
Here is my code: import pandas as pd data = pd.DataFrame({'Odd':[1,3,5,6,7,9], 'Even':[0,2,4,6,8,10]}) for i in reversed(data): print(data['Odd'], data['Even']) When I run this code, I get the following error: Traceback (most recent call last): File "C:\Python33\lib\site-packages\pandas\core\generic.py", line 665, in _get_item_cache return cache[item] KeyError: 5 During handling of the above exception, another exception occurred: Traceback (most recent call last): File "C:\Users\*****\Documents\******\********\****.py", line 5, in <module> for i in reversed(data): File "C:\Python33\lib\site-packages\pandas\core\frame.py", line 2003, in __getitem__ return self._get_item_cache(key) File "C:\Python33\lib\site-packages\pandas\core\generic.py", line 667, in _get_item_cache values = self._data.get(item) File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1656, in get _, block = self._find_block(item) File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1936, in _find_block self._check_have(item) File "C:\Python33\lib\site-packages\pandas\core\internals.py", line 1943, in _check_have raise KeyError('no item named %s' % com.pprint_thing(item)) KeyError: 'no item named 5' Why am I getting this error? How can I fix that? What is the right way to reverse pandas.DataFrame?
data.reindex(index=data.index[::-1]) or simply: data.iloc[::-1] will reverse your data frame. If you want a for loop which goes from the bottom up, you may do: for idx in reversed(data.index): print(idx, data.loc[idx, 'Even'], data.loc[idx, 'Odd']) or for idx in reversed(data.index): print(idx, data.Even[idx], data.Odd[idx]) You are getting an error because reversed first calls data.__len__() which returns 6. Then it tries to call data[j - 1] for j in range(6, 0, -1), and the first call would be data[5]; but in a pandas dataframe data[5] means column 5, and there is no column 5 so it will throw an exception. (see docs)
Pandas compiled from source: default pickle behavior changed
I've just compiled and installed pandas from source (cloned github repo, >>> setup.py install). It seems that the default behavior of the pickle module for object serialization/deserialization has changed, likely being partially overridden by pandas internal modules. I have quite a few data classes serialized via "standard" pickle which apparently I cannot deserialize anymore; in particular, when I try to deserialize a class file (previously working), I get this error In [1]: import pickle In [2]: pickle.load(open('pickle_L1cor_s1.pic','rb')) --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) <ipython-input-2-88719f8f9506> in <module>() ----> 1 pickle.load(open('pickle_L1cor_s1.pic','rb')) /home/acorbe/Canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/lib/python2.7/pickle.pyc in load(file) 1376 1377 def load(file): -> 1378 return Unpickler(file).load() 1379 1380 def loads(str): /home/acorbe/Canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/lib/python2.7/pickle.pyc in load(self) 856 while 1: 857 key = read(1) --> 858 dispatch[key](self) 859 except _Stop, stopinst: 860 return stopinst.value /home/acorbe/Enthought/Canopy_64bit/User/lib/python2.7/site-packages/pandas-0.12.0_1090_g46008ec-py2.7-linux-x86_64.egg/pandas/compat/pickle_compat.pyc in load_reduce(self) 28 29 # try to reencode the arguments ---> 30 if self.encoding is not None: 31 args = tuple([ arg.encode(self.encoding) if isinstance(arg, string_types) else arg for arg in args ]) 32 try: AttributeError: Unpickler instance has no attribute 'encoding' I have quite a large codebase relying on this which broke down. Is there any quick workaround? How can I get the default pickle behavior back? Any help appreciated. EDIT: I realized that what I am trying to unpickle is a list of dicts which include a couple of DataFrames each. That's where pandas comes into play. I applied the patch by @Jeff github.com/pydata/pandas/pull/5661. Another error (maybe related to this) shows up. In [4]: pickle.load(open('pickle_L1cor_s1.pic','rb')) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-4-88719f8f9506> in <module>() ----> 1 pickle.load(open('pickle_L1cor_s1.pic','rb')) /home/acorbe/Canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/lib/python2.7/pickle.pyc in load(file) 1376 1377 def load(file): -> 1378 return Unpickler(file).load() 1379 1380 def loads(str): /home/acorbe/Canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/lib/python2.7/pickle.pyc in load(self) 856 while 1: 857 key = read(1) --> 858 dispatch[key](self) 859 except _Stop, stopinst: 860 return stopinst.value /home/acorbe/Canopy/appdata/canopy-1.1.0.1371.rh5-x86_64/lib/python2.7/pickle.pyc in load_reduce(self) 1131 args = stack.pop() 1132 func = stack[-1] -> 1133 value = func(*args) 1134 stack[-1] = value 1135 dispatch[REDUCE] = load_reduce TypeError: _reconstruct: First argument must be a sub-type of ndarray Pandas version of encoded data is (from Canopy package manager) Size: 7.32 MB Version: 0.12.0 Build: 2 Dependencies: numpy 1.7.1 python_dateutil pytz 2011n md5: 7dd4385bed058e6ac15b0841b312ae35 I am not sure I can provide a minimal example of the files I am trying to unpickle. They are quite large (O(100MB)) and they have some non-trivial dependencies.
Master has just been updated by this issue. This file can be read simply by: result = pd.read_pickle('pickle_L1cor_s1.pic') The objects that are pickled are pandas <= 0.12 versioned. These need a custom unpickler, which 0.13/master (releasing shortly) handles. 0.13 saw a refactor of the Series inheritance hierarchy where Series is no longer a sub-class of ndarray, but now a sub-class of NDFrame, the same base class as DataFrame and Panel. This was done for a great many reasons, mainly to promote code consistency. See here for a more complete description. The error message you are seeing, TypeError: _reconstruct: First argument must be a sub-type of ndarray, is because the python default unpickler makes sure that the class hierarchy that was pickled is exactly the same as what it is recreating. Since Series has changed between versions this is no longer possible with the default unpickler (this, IMHO, is a bug in the way pickle works). In any event, pandas will unpickle pre-0.13 pickles that have Series objects.
Django South Error: AttributeError: 'DateTimeField' object has no attribute 'model'
So I'm trying to migrate a table by adding two columns to it. A startDate and an endDate. Using South for Django, this should be a simple migrate. I have loads of other tables with dateTimes in them as well, but for some reason I'm getting an issue here and I don't see it. The stack trace is stating: AttributeError: 'DateTimeField' object has no attribute 'model' Here is the model I am migrating: # Keep track of who has applied for a Job class JobApply(models.Model): job = models.ForeignKey(Jobs) user = models.ForeignKey(User) # Keep track of the Developer accepted to do the work accepted_dev = models.IntegerField(null=False, blank=False, default=0) # If 1 (True) the User has applied to this job isApplied = models.BooleanField(default=0) startDate = models.DateTimeField() endDate = models.DateTimeField() All the fields except for startDate and endDate already exist in the DB. So to give those columns default values I use datetime.date.now() through the terminal to keep everything square. The issue is that South's schemamigration works just fine, but the actual migration barfs. If anyone can see the error, my hair would appreciate it. :P EDIT: Including Stacktrace: Running migrations for insource: - Migrating forwards to 0004_auto__add_field_jobapply_startDate__add_field_jobapply_endDate. > insource:0004_auto__add_field_jobapply_startDate__add_field_jobapply_endDate Error in migration: insource:0004_auto__add_field_jobapply_startDate__add_field_jobapply_endDate Traceback (most recent call last): File "./manage.py", line 10, in <module> execute_from_command_line(sys.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 399, in execute_from_command_line utility.execute() File "/usr/local/lib/python2.7/dist-packages/django/core/management/__init__.py", line 392, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 242, in run_from_argv self.execute(*args, **options.__dict__) File "/usr/local/lib/python2.7/dist-packages/django/core/management/base.py", line 285, in execute output = self.handle(*args, **options) File "/usr/local/lib/python2.7/dist-packages/south/management/commands/migrate.py", line 111, in handle ignore_ghosts = ignore_ghosts, File "/usr/local/lib/python2.7/dist-packages/south/migration/__init__.py", line 220, in migrate_app success = migrator.migrate_many(target, workplan, database) File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 229, in migrate_many result = migrator.__class__.migrate_many(migrator, target, migrations, database) File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 304, in migrate_many result = self.migrate(migration, database) File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 129, in migrate result = self.run(migration, database) File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 113, in run return self.run_migration(migration, database) File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 83, in run_migration migration_function() File "/usr/local/lib/python2.7/dist-packages/south/migration/migrators.py", line 59, in <lambda> return (lambda: direction(orm)) File "/home/jared/Desktop/School/insource/insource/migrations/0004_auto__add_field_jobapply_startDate__add_field_jobapply_endDate.py", line 14, in forwards keep_default=False) File 
"/usr/local/lib/python2.7/dist-packages/south/db/generic.py", line 47, in _cache_clear return func(self, table, *args, **opts) File "/usr/local/lib/python2.7/dist-packages/south/db/generic.py", line 411, in add_column sql = self.column_sql(table_name, name, field) File "/usr/local/lib/python2.7/dist-packages/south/db/generic.py", line 706, in column_sql default = field.get_db_prep_save(default, connection=self._get_connection()) File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 350, in get_db_prep_save prepared=False) File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 911, in get_db_prep_value value = self.get_prep_value(value) File "/usr/local/lib/python2.7/dist-packages/django/db/models/fields/__init__.py", line 902, in get_prep_value (self.model.__name__, self.name, value), AttributeError: 'DateTimeField' object has no attribute 'model' Migration Code (adding relevant code as it's a bit long): def forwards(self, orm): # Adding field 'JobApply.startDate' db.add_column(u'insource_jobapply', 'startDate', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime(2013, 12, 7, 0, 0)), keep_default=False) # Adding field 'JobApply.endDate' db.add_column(u'insource_jobapply', 'endDate', self.gf('django.db.models.fields.DateTimeField')(default=datetime.datetime(2013, 12, 7, 0, 0)), keep_default=False) def backwards(self, orm): # Deleting field 'JobApply.startDate' db.delete_column(u'insource_jobapply', 'startDate') # Deleting field 'JobApply.endDate' db.delete_column(u'insource_jobapply', 'endDate') u'insource.jobapply': { 'Meta': {'object_name': 'JobApply'}, 'accepted_dev': ('django.db.models.fields.IntegerField', [], {'default': '0'}), 'endDate': ('django.db.models.fields.DateTimeField', [], {}), u'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}), 'isApplied': ('django.db.models.fields.BooleanField', [], {'default': 'False'}), 'job': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['insource.Jobs']"}), 'startDate': ('django.db.models.fields.DateTimeField', [], {}), 'user': ('django.db.models.fields.related.ForeignKey', [], {'to': u"orm['auth.User']"}) },
I had to upgrade my version of South for Django to version 0.8.4. Had to run the following command: sudo easy_install -U South Or, if using pip: pip install South --upgrade After that, my migration worked as expected.
How can I read inputs as integers in Python?
Why does this code not input integers? Everything on the web says to use raw_input(), but I read on Stack Overflow (on a thread that did not deal with integer input) that raw_input() was renamed to input() in Python 3.x. play = True while play: x = input("Enter a number: ") y = input("Enter a number: ") print(x + y) print(x - y) print(x * y) print(x / y) print(x % y) if input("Play again? ") == "no": play = False
Python 2.x There were two functions to get user input, called input and raw_input. The difference between them is, raw_input doesn't evaluate the data and returns as it is, in string form. But, input will evaluate whatever you entered and the result of evaluation will be returned. For example, >>> import sys >>> sys.version '2.7.6 (default, Mar 22 2014, 22:59:56) \n[GCC 4.8.2]' >>> data = input("Enter a number: ") Enter a number: 5 + 17 >>> data, type(data) (22, <type 'int'>) The data 5 + 17 is evaluated and the result is 22. When it evaluates the expression 5 + 17, it detects that you are adding two numbers and so the result will also be of the same int type. So, the type conversion is done for free and 22 is returned as the result of input and stored in data variable. You can think of input as the raw_input composed with an eval call. >>> data = eval(raw_input("Enter a number: ")) Enter a number: 5 + 17 >>> data, type(data) (22, <type 'int'>) Note: you should be careful when you are using input in Python 2.x. I explained why one should be careful when using it, in this answer. But, raw_input doesn't evaluate the input and returns as it is, as a string. >>> import sys >>> sys.version '2.7.6 (default, Mar 22 2014, 22:59:56) \n[GCC 4.8.2]' >>> data = raw_input("Enter a number: ") Enter a number: 5 + 17 >>> data, type(data) ('5 + 17', <type 'str'>) Python 3.x Python 3.x's input and Python 2.x's raw_input are similar and raw_input is not available in Python 3.x. >>> import sys >>> sys.version '3.4.0 (default, Apr 11 2014, 13:05:11) \n[GCC 4.8.2]' >>> data = input("Enter a number: ") Enter a number: 5 + 17 >>> data, type(data) ('5 + 17', <class 'str'>) Solution To answer your question, since Python 3.x doesn't evaluate and convert the data type, you have to explicitly convert to ints, with int, like this x = int(input("Enter a number: ")) y = int(input("Enter a number: ")) You can accept numbers of any base and convert them directly to base-10 with the int function, like this >>> data = int(input("Enter a number: "), 8) Enter a number: 777 >>> data 511 >>> data = int(input("Enter a number: "), 16) Enter a number: FFFF >>> data 65535 >>> data = int(input("Enter a number: "), 2) Enter a number: 10101010101 >>> data 1365 The second parameter tells what is the base of the numbers entered and then internally it understands and converts it. If the entered data is wrong it will throw a ValueError. >>> data = int(input("Enter a number: "), 2) Enter a number: 1234 Traceback (most recent call last): File "<input>", line 1, in <module> ValueError: invalid literal for int() with base 2: '1234' Apart from that, your program can be changed a little bit, like this while True: ... ... if input("Play again? ") == "no": break You can get rid of the play variable by using break and while True. PS: Python doesn't expect ; at the end of the line :)
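One refinement worth adding: int() raises ValueError on non-numeric input, so a small sketch (the helper name read_int is made up here) that re-prompts instead of crashing:

def read_int(prompt):
    while True:
        try:
            return int(input(prompt))
        except ValueError:
            print("Please enter a whole number.")

x = read_int("Enter a number: ")
y = read_int("Enter a number: ")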
Does Slicing `a` (e.g. `a[1:] == a[:-1]`) create copies of the `a`?
A friend of mine showed me the following Python code: a[1:] == a[:-1] Which returns True iff all the items in a are identical. I argued the code is hard to understand at first sight, and moreover it is inefficient in memory usage, because two copies of a will be created for the comparison. I've used Python's dis to see what happens under the hood at a[1:]==a[:-1]: >>> def stanga_compare(a): ... return a[1:]==a[:-1] ... >>> a=range(10) >>> stanga_compare(a) False >>> a=[0 for i in range(10)] >>> stanga_compare(a) True >>> dis.dis(stanga_compare) 2 0 LOAD_FAST 0 (a) 3 LOAD_CONST 1 (1) 6 SLICE+1 7 LOAD_FAST 0 (a) 10 LOAD_CONST 2 (-1) 13 SLICE+2 14 COMPARE_OP 2 (==) 17 RETURN_VALUE It boils down to two slicing commands - SLICE+1 and SLICE+2. The documentation is unclear as to whether these opcodes actually create a new copy of a, or just a reference to it. Do the SLICE commands copy a? Does the answer vary between Python implementations (CPython, Jython)? Update This snippet is clearly unreadable and confusing, and I'm not going to ever use it in actual code. My interest is purely technical - whether slicing copies the list, and whether the answer varies under different circumstances.
The documentation is unclear because slicing different objects does different things. In the case of a list, slicing does make a (shallow) copy [1]. Note that this is a feature of python lists independent of python implementation. In the case of other objects (like numpy arrays), it might not create a copy. If you want a better way to check that all the elements in a list are the same, I would probably recommend: all(lst[0] == item for item in lst) From a performance standpoint, your friend's code might actually outperform this for small lists since list slicing is so optimized. But IMHO this makes it much easier to tell what is going on and has the opportunity to "short circuit" as soon as it finds a non-match. [1] The actual function to look at is list_subscript, but for most cases, it just calls list_slice
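If you want to convince yourself that list slicing really copies, checking object identity is enough; a tiny illustration:

a = [1, 1, 1]
b = a[1:]
print(b is a)   # False: the slice is a new list object
a[0] = 99       # mutating a does not affect the copy
print(b)        # [1, 1]

(The copy is shallow, so the elements themselves are shared, not duplicated.)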
How to use a different version of python during NPM install?
Salam (means Hello) :) I have terminal access to a VPS running centos 5.9 and default python 2.4.3 installed. I also installed python 2.7.3 via these commands: (I used make altinstall instead of make install) wget http://www.python.org/ftp/python/2.7.3/Python-2.7.3.tgz tar -xf Python-2.7.3.tgz cd Python-2.7.3 ./configure make make altinstall then I installed node.js from source via these commands: python2.7 ./configure make make install The problem is, when I use npm install and try to install a node.js package which requires python > 2.4.3 I get this error: gyp ERR! configure error gyp ERR! stack Error: Python executable "python" is v2.4.3, which is not supported by gyp. gyp ERR! stack You can pass the --python switch to point to Python >= v2.5.0 & < 3.0.0. gyp ERR! stack at failPythonVersion (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:125:14) gyp ERR! stack at /usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:114:9 how should I "pass the --python switch to point to Python >= v2.5.0"?
You can use the --python option to npm like so: npm install --python=python2.7 or set it to be used always: npm config set python python2.7 Npm will in turn pass this option to node-gyp when needed. (note: I'm the one who opened an issue on Github to have this included in the docs, as there were so many questions about it ;-) )
How to convert pandas index in a dataframe to a column?
This seems rather obvious, but I can't seem to figure out how to convert an index of a data frame to a column. For example: df= gi ptt_loc 0 384444683 593 1 384444684 594 2 384444686 596 To, df= index1 gi ptt_loc 0 0 384444683 593 1 1 384444684 594 2 2 384444686 596
Either: df['index1'] = df.index or use .reset_index: df.reset_index(level=0, inplace=True) So, if you have a multi-index frame with 3 levels of index, like: >>> df val tick tag obs 2016-02-26 C 2 0.0139 2016-02-27 A 2 0.5577 2016-02-28 C 6 0.0303 and you want to convert the 1st (tick) and 3rd (obs) levels in the index into columns, you would do: >>> df.reset_index(level=['tick', 'obs']) tick obs val tag C 2016-02-26 2 0.0139 A 2016-02-27 2 0.5577 C 2016-02-28 6 0.0303
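A minimal sketch of producing the exact index1 column from the question: the column created by reset_index is called 'index' by default, so it just needs a rename afterwards:

import pandas as pd

df = pd.DataFrame({'gi': [384444683, 384444684, 384444686],
                   'ptt_loc': [593, 594, 596]})
df = df.reset_index()                        # index becomes a column named 'index'
df = df.rename(columns={'index': 'index1'})  # match the name from the question
print(df)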
str.startswith with a list of strings to test for
I'm trying to avoid using so many if statements and comparisons and simply use a list, but not sure how to use it with str.startswith: if link.lower().startswith("js/") or link.lower().startswith("catalog/") or link.lower().startswith("script/") or link.lower().startswith("scripts/") or link.lower().startswith("katalog/"): # then "do something" What I would like it to be is: if link.lower().startswith() in ["js","catalog","script","scripts","katalog"]: # then "do something" Any help would be appreciated.
str.startswith allows you to supply a tuple of strings to test for: if link.lower().startswith(("js", "catalog", "script", "katalog")): From the docs: str.startswith(prefix[, start[, end]]) Return True if string starts with the prefix, otherwise return False. prefix can also be a tuple of prefixes to look for. Below is a demonstration: >>> "abcde".startswith(("xyz", "abc")) True >>> prefixes = ["xyz", "abc"] >>> "abcde".startswith(tuple(prefixes)) # You must use a tuple though True >>>
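If your prefixes live in a list that includes the trailing slashes, as in the original if chain, you can build the tuple on the fly; a small sketch:

prefixes = ["js/", "catalog/", "script/", "scripts/", "katalog/"]
link = "Scripts/jquery.js"
if link.lower().startswith(tuple(prefixes)):
    print("do something")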
Which is the efficient way to convert a float into an int in python?
I've been using n = int(n) to convert a float into an int. Recently, I came across another way to do the same thing: n = n // 1 Which is the most efficient way, and why?
Test it with timeit: $ bin/python -mtimeit -n10000000 -s 'n = 1.345' 'int(n)' 10000000 loops, best of 3: 0.234 usec per loop $ bin/python -mtimeit -n10000000 -s 'n = 1.345' 'n // 1' 10000000 loops, best of 3: 0.218 usec per loop So floor division is only faster by a small margin. Note that these values are very close, and I had to crank up the loop repeat count to iron out random influences on my machine. Even with such a high count, you need to repeat the experiments a few times to see how much the numbers still vary and what comes out faster most of the time. This is logical, as int() requires a global lookup and a function call (so state is pushed and popped): >>> import dis >>> def use_int(n): ... return int(n) ... >>> def use_floordiv(n): ... return n // 1 ... >>> dis.dis(use_int) 2 0 LOAD_GLOBAL 0 (int) 3 LOAD_FAST 0 (n) 6 CALL_FUNCTION 1 9 RETURN_VALUE >>> dis.dis(use_floordiv) 2 0 LOAD_FAST 0 (n) 3 LOAD_CONST 1 (1) 6 BINARY_FLOOR_DIVIDE 7 RETURN_VALUE It is the LOAD_GLOBAL and CALL_FUNCTION opcodes that are slower than the LOAD_CONST and BINARY_FLOOR_DIVIDE opcodes; LOAD_CONST is a simple array lookup, LOAD_GLOBAL needs to do a dictionary lookup instead. Binding int() to a local name can make a small difference, giving it the edge again (as it has to do less work than // 1 floor division): $ bin/python -mtimeit -n10000000 -s 'n = 1.345' 'int(n)' 10000000 loops, best of 3: 0.233 usec per loop $ bin/python -mtimeit -n10000000 -s 'n = 1.345; int_=int' 'int_(n)' 10000000 loops, best of 3: 0.195 usec per loop $ bin/python -mtimeit -n10000000 -s 'n = 1.345' 'n // 1' 10000000 loops, best of 3: 0.225 usec per loop Again, you need to run this with 10 million loops to see the differences consistently. That said, int(n) is a lot more explicit and unless you are doing this in a time-critical loop, int(n) wins it in readability over n // 1. The timing differences are too small to make the cognitive cost of having to work out what // 1 does here worthwhile.
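One caveat worth adding: the two spellings are not interchangeable, independent of speed. int() truncates toward zero and returns an int, while // 1 floors toward negative infinity and keeps the float type:

n = -1.5
print int(n)   # -1   (truncated toward zero, type int)
print n // 1   # -2.0 (floored, still a float)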
Django Rest Framework File Upload
I am using Django Rest Framework and AngularJs to upload a file. My view file looks like this. class ProductList(APIView): authentication_classes = (authentication.TokenAuthentication,) def get(self,request): if request.user.is_authenticated(): userCompanyId = request.user.get_profile().companyId products = Product.objects.filter(company = userCompanyId) serializer = ProductSerializer(products,many=True) return Response(serializer.data) def post(self,request): serializer = ProductSerializer(data=request.DATA, files=request.FILES) if serializer.is_valid(): serializer.save() return Response(data=request.DATA) As the last line of the post method should return all the data, what should I write there? How do I check whether there is anything in request.FILES, how do I serialize a FileField, and how should I use a parser? Please help.
I'm using the same stack and was also looking for an example of file upload, but my case is simpler since I use the ModelViewSet instead of APIView. The key turned out to be the pre_save hook. I ended up using it together with the angular-file-upload module like so: # Django class ExperimentViewSet(ModelViewSet): queryset = Experiment.objects.all() serializer_class = ExperimentSerializer def pre_save(self, obj): obj.samplesheet = self.request.FILES.get('file') class Experiment(Model): notes = TextField(blank=True) samplesheet = FileField(blank=True, default='') user = ForeignKey(User, related_name='experiments') class ExperimentSerializer(ModelSerializer): class Meta: model = Experiment fields = ('id', 'notes', 'samplesheet', 'user') // AngularJS controller('UploadExperimentCtrl', function($scope, $upload) { $scope.submit = function(files, exp) { $upload.upload({ url: '/api/experiments/' + exp.id + '/', method: 'PUT', data: {user: exp.user.id}, file: files[0] }); }; });
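To poke at an endpoint like this from a script instead of the Angular client, a hedged sketch with the requests library (the URL, ids and field names are placeholders matching the example above):

import requests

with open('samplesheet.csv', 'rb') as f:
    resp = requests.put('http://localhost:8000/api/experiments/1/',
                        data={'user': 1},   # ordinary form field
                        files={'file': f})  # ends up in request.FILES on the Django side
print resp.status_code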
Most pythonic and/or performant way to assign a single value to a slice?
I want to assign a single value to a part of a list. Is there a better solution to this than one of the following? Maybe most performant but somehow ugly: >>> l=[0,1,2,3,4,5] >>> for i in range(2,len(l)): l[i] = None >>> l [0, 1, None, None, None, None] Concise (but I don't know if Python recognizes that no rearrangement of the list elements is necessary): >>> l=[0,1,2,3,4,5] >>> l[2:] = [None]*(len(l)-2) >>> l [0, 1, None, None, None, None] Same caveat as above: >>> l=[0,1,2,3,4,5] >>> l[2:] = [None for _ in range(len(l)-2)] >>> l [0, 1, None, None, None, None] Not sure if using a library for such a trivial task is wise: >>> import itertools >>> l=[0,1,2,3,4,5] >>> l[2:] = itertools.repeat(None,len(l)-2) >>> l [0, 1, None, None, None, None] The problem that I see with the assignment to the slice (vs. the for loop) is that Python maybe tries to prepare for a change in the length of "l". After all, changing the list by inserting a shorter/longer slice involves copying all elements (that is, all references) of the list AFAIK. If Python does this in my case too (although it is unnecessary), the operation becomes O(n) instead of O(1) (assuming that I only ever change a handful of elements).
Timing it: python -mtimeit "l=[0,1,2,3,4,5]" "for i in range(2,len(l)):" " l[i] = None" 1000000 loops, best of 3: 0.669 usec per loop python -mtimeit "l=[0,1,2,3,4,5]" "l[2:] = [None]*(len(l)-2)" 1000000 loops, best of 3: 0.419 usec per loop python -mtimeit "l=[0,1,2,3,4,5]" "l[2:] = [None for _ in range(len(l)-2)]" 1000000 loops, best of 3: 0.655 usec per loop python -mtimeit "l=[0,1,2,3,4,5]" "l[2:] = itertools.repeat(None,len(l)-2)" 1000000 loops, best of 3: 0.997 usec per loop Looks like l[2:] = [None]*(len(l)-2) is the best of the options you provided (for the scope you are dealing with). Note: Keep in mind that results will vary based on Python version, operating system, other currently running programs, and most of all the size of the list and of the slice to be replaced. For larger scopes, the last option (using itertools.repeat) will probably be the most effective, being both readable (pythonic) and efficient.
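On the worry about length changes from the question: slice assignment only shifts elements around when the replacement has a different length, which is easy to observe directly. A short demo:

l = [0, 1, 2, 3, 4, 5]
l[2:] = [None] * (len(l) - 2)  # same length: len(l) stays 6
print(l)                       # [0, 1, None, None, None, None]
l[2:] = [None]                 # shorter replacement: the list shrinks
print(l)                       # [0, 1, None]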
Python Requests library redirect new url
I've been looking through the Python Requests documentation but I cannot see any functionality for what I am trying to achieve. In my script I am setting allow_redirects=True. I would like to know if the page has been redirected to something else, what is the new URL. For example, if the start URL was: www.google.com/redirect And the final URL is www.google.co.uk/redirected How do I get that URL?
You are looking for the request history. The response.history attribute is a list of responses that led to the final URL, which can be found in response.url. response = requests.get(someurl) if response.history: print "Request was redirected" for resp in response.history: print resp.status_code, resp.url print "Final destination:" print response.status_code, response.url else: print "Request was not redirected" Demo: >>> import requests >>> response = requests.get('http://httpbin.org/redirect/3') >>> response.history (<Response [302]>, <Response [302]>, <Response [302]>) >>> for resp in response.history: ... print resp.status_code, resp.url ... 302 http://httpbin.org/redirect/3 302 http://httpbin.org/redirect/2 302 http://httpbin.org/redirect/1 >>> print response.status_code, response.url 200 http://httpbin.org/get
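If you only need the redirect target without following it, you can also switch redirects off and read the Location header yourself; a small sketch:

import requests

response = requests.get('http://httpbin.org/redirect/1', allow_redirects=False)
print response.status_code           # 302
print response.headers['Location']   # where it would have redirected to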
Pycharm: "unresolved reference" error on the IDE when opening a working project
Intro I have a Python project on a git repository. Everything works ok for most of the team members, we can sync the code and edit it without any problem with Pycharm on different platforms (Windows, Linux) The problem On one of the computers we are getting "Unresolved reference" all over the code on almost every import with the exception of Python's built in libraries (i.e. import datetime is working). This computer is running the Mac version of Pycharm. The question Anyone knows how to solve this? Since most of the imports are not recognized, code completion and navigation through goto->declaration and so on is not working. Is there any known issue with the Mac version? Thanks in advance!
The key is to mark your source directory as a source root. Try the following: In the Project view, right-click on the Python source directory In the dialog menu select Mark Directory As > Source Root The folder should now appear blue instead of beige, to indicate it is a Python source folder. You can also configure this in PyCharm preferences by doing the following for a project that is already in PyCharm: In the Mac toolbar, select PyCharm > Preferences In the window that opens, select Project Structure from the menu pane on the left Select your project in the middle pane, if necessary Right-click on your Python source in the right pane and select Sources from the menu dialog
Pymysql Insert Into not working
I'm running this from PyDev in Eclipse... import pymysql conn = pymysql.connect(host='localhost', port=3306, user='userid', passwd='password', db='fan') cur = conn.cursor() print "writing to db" cur.execute("INSERT INTO cbs_transactions(leagueID) VALUES ('test val')") print "wrote to db" The result is, at the top of the Console it says C:...test.py, and in the Console: writing to db wrote to db So it's not terminating until after the execute command. But when I look in the table in MySQL it's empty. A record did not get inserted. First off, why isn't it writing the record? Second, how can I see a log or error to see what happened? Usually there should be some kind of error in red if the code fails.
Did you commit it? conn.commit()
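For reference, the full snippet from the question with the missing commit added (pymysql also accepts autocommit=True in connect() if you prefer not to commit manually):

import pymysql

conn = pymysql.connect(host='localhost', port=3306,
                       user='userid', passwd='password', db='fan')
cur = conn.cursor()
cur.execute("INSERT INTO cbs_transactions(leagueID) VALUES ('test val')")
conn.commit()  # without this, the INSERT is discarded when the connection closes
cur.close()
conn.close()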
How to reset index in a pandas data frame?
I have a data frame from which I remove some rows. As a result, I get a data frame in which the index is something like [1,5,6,10,11] and I would like to reset it to [0,1,2,3,4]. How can I do it? ADDED The following seems to work: df = df.reset_index() del df['index'] The following does not work: df = df.reindex()
reset_index() is what you're looking for. If you don't want it saved as a column, then df = df.reset_index(drop=True)
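A quick demonstration of what the drop flag changes:

import pandas as pd

df = pd.DataFrame({'a': [10, 20, 30]}, index=[1, 5, 6])
print(df.reset_index())           # old index kept as a new 'index' column
print(df.reset_index(drop=True))  # old index thrown away; new index is 0, 1, 2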
Optimal way to compute pairwise mutual information using numpy
For an m x n matrix, what's the optimal (fastest) way to compute the mutual information for all pairs of columns (n x n)? By mutual information, I mean: I(X, Y) = H(X) + H(Y) - H(X,Y) where H(X) refers to the Shannon entropy of X. Currently I'm using np.histogram2d and np.histogram to calculate the joint (X,Y) and individual (X or Y) counts. For a given matrix A (e.g. a 250000 X 1000 matrix of floats), I am doing a nested for loop, n = A.shape[1] for ix in np.arange(n): for jx in np.arange(ix+1, n): matMI[ix,jx] = calc_MI(A[:,ix], A[:,jx], bins) Surely there must be better/faster ways to do this? As an aside, I've also looked for mapping functions on columns (column-wise or row-wise operations) on arrays, but haven't found a good general answer yet. Here is my full implementation, following the conventions in the Wiki page: import numpy as np def calc_MI(X,Y,bins): c_XY = np.histogram2d(X,Y,bins)[0] c_X = np.histogram(X,bins)[0] c_Y = np.histogram(Y,bins)[0] H_X = shan_entropy(c_X) H_Y = shan_entropy(c_Y) H_XY = shan_entropy(c_XY) MI = H_X + H_Y - H_XY return MI def shan_entropy(c): c_normalized = c / float(np.sum(c)) c_normalized = c_normalized[np.nonzero(c_normalized)] H = -sum(c_normalized * np.log2(c_normalized)) return H A = np.array([[ 2.0, 140.0, 128.23, -150.5, -5.4 ], [ 2.4, 153.11, 130.34, -130.1, -9.5 ], [ 1.2, 156.9, 120.11, -110.45,-1.12 ]]) bins = 5 # ? n = A.shape[1] matMI = np.zeros((n, n)) for ix in np.arange(n): for jx in np.arange(ix+1,n): matMI[ix,jx] = calc_MI(A[:,ix], A[:,jx], bins) Although my working version with nested for loops does it at reasonable speed, I'd like to know if there is a more optimal way to apply calc_MI on all the columns of A (to calculate their pairwise mutual information)? I'd also like to know: Whether there are efficient ways to map functions to operate on columns (or rows) of np.arrays (maybe like np.vectorize, which looks more like a decorator)? Whether there are other optimal implementations for this specific calculation (mutual information)?
I can't suggest a faster calculation for the outer loop over the n*(n-1)/2 vectors, but your implementation of calc_MI(x, y, bins) can be simplified if you can use scipy version 0.13 or scikit-learn. In scipy 0.13, the lambda_ argument was added to scipy.stats.chi2_contingency This argument controls the statistic that is computed by the function. If you use lambda_="log-likelihood" (or lambda_=0), the log-likelihood ratio is returned. This is also often called the G or G2 statistic. Other than a factor of 2*n (where n is the total number of samples in the contingency table), this is the mutual information. So you could implement calc_MI as: from scipy.stats import chi2_contingency def calc_MI(x, y, bins): c_xy = np.histogram2d(x, y, bins)[0] g, p, dof, expected = chi2_contingency(c_xy, lambda_="log-likelihood") mi = 0.5 * g / c_xy.sum() return mi The only difference between this and your implementation is that this implementation uses the natural logarithm instead of the base-2 logarithm (so it is expressing the information in "nats" instead of "bits"). If you really prefer bits, just divide mi by log(2). If you have (or can install) sklearn (i.e. scikit-learn), you can use sklearn.metrics.mutual_info_score, and implement calc_MI as: from sklearn.metrics import mutual_info_score def calc_MI(x, y, bins): c_xy = np.histogram2d(x, y, bins)[0] mi = mutual_info_score(None, None, contingency=c_xy) return mi
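As a quick sanity check, the two implementations can be compared on random data; up to the log base they compute the same plug-in estimate. A small sketch, assuming the question's histogram/entropy version is kept around under the hypothetical name calc_MI_bits and the chi2_contingency version above as calc_MI:

import numpy as np

x = np.random.randn(10000)
y = x + 0.5 * np.random.randn(10000)
mi_nats = calc_MI(x, y, 10)        # chi2_contingency version, natural log
mi_bits = calc_MI_bits(x, y, 10)   # histogram/entropy version, log base 2
print np.allclose(mi_bits, mi_nats / np.log(2))  # True: same value in bits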
Solving a graph issue with Python
I have one situation and I would like to approach this problem with Python, but unfortunately I don't have enough knowledge about graphs. I found one library which seems very suitable for this relatively simple task, networkx, but I am having issues doing the exact things I want, which should be fairly simple. I have a list of nodes, which can have different types, and two "classes" of neighbors, upwards and downwards. The task is to find paths between two target nodes, with some constraints in mind: only nodes of specific type can be traversed, i.e. if starting nodes are of type x, any node in the path has to be from another set of types, y or z if a node has a type y, it can be passed through only once if a node has type z, it can be passed through twice in case a node of type z is visited, the exit has to be from the different class of neighbor, i.e. if it's visited from upwards, the exit has to be from downwards So, I tried some experimentation but I, as said, have struggled. First, I am unsure what type of graph this actually represents. It's not directional, since it doesn't matter if you go from node 1 to node 2, or from node 2 to node 1 (except in that last scenario, so that complicates things a bit...). This means I can't just create a graph which is simply multidirectional, since I have to have that constraint in mind. Second, I have to traverse through those nodes, but specify that only nodes of a specific type are available for the path. Also, in case the last scenario happens, I have to keep in mind the entry and exit class/direction, which puts it in a somewhat directed state. Here is some sample mockup code: import networkx as nx G=nx.DiGraph() G.add_node(1, type=1) G.add_node(2, type=2) G.add_node(3, type=3) G.add_edge(1,2, side="up") G.add_edge(1,3, side="up") G.add_edge(2,1, side="down") G.add_edge(2,3, side="down") for path in nx.all_simple_paths(G,1,3): print path The output is fairly nice, but I need these constraints. So, do you have some suggestions how I can implement these, or can you give me some more guidance regarding understanding this type of problem, or suggest a different approach or library for this problem? Maybe a simple dictionary based algorithm would fit this need? Thanks!
You might be able to use the all_simple_paths() function for your problem if you construct your graph differently. Simple paths are those with no repeated nodes. So for your constraints here are some suggestions to build the graph so you can run that algorithm unmodified. only nodes of specific type can be traversed, i.e. if starting nodes are of type x, any node in the path has to be from another set of types, y or z Given a starting node n, remove all other nodes with that type before you find paths. if a node has a type y, it can be passed through only once This is the definition of simple paths so it is automatically satisfied. if a node has type z, it can be passed through twice For every node n of type z add a new node n2 with the same edges as those pointing to and from n. in case a node of type z is visited, the exit has to be from the different class of neighbor, i.e. if it's visited from upwards, the exit has to be from downwards If the edges are directed as you propose then this could be satisfied if you make sure the edges to z are all the same direction - e.g. in for up and out for down...
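A rough sketch of the node duplication step described above, written for an undirected graph for brevity (the _copy suffix is just a made-up naming scheme):

import networkx as nx

def duplicate_z_nodes(G):
    # Return a copy of G in which every type-z node has a twin with the same
    # attributes and neighbors, so a simple path can effectively pass through
    # the original location twice (once via the node, once via its twin).
    H = G.copy()
    for n, data in list(G.nodes(data=True)):
        if data.get('type') == 'z':
            twin = '{0}_copy'.format(n)
            H.add_node(twin, **data)
            for _, nbr, edata in G.edges(n, data=True):
                H.add_edge(twin, nbr, **edata)
    return H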
Why disable the garbage collector?
Python's gc.disable disables automatic garbage collection. As I understand it, that would have quite some side effects. Why would anyone want to disable automatic garbage collection, and how could one effectively manage memory without it?
One use for disabling the garbage collector is to get more consistent results when timing the performance of code. The timeit module does this. def timeit(self, number=default_number): if itertools: it = itertools.repeat(None, number) else: it = [None] * number gcold = gc.isenabled() gc.disable() ... In Python2 and up to Python3.2 gc.disable() is also used to avoid a bug caused by garbage collection occurring between fork and exec. The problem seems to have been fixed in Python3.3 without needing to call gc.disable().
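A minimal sketch of the timing use case, taking care to restore the collector afterwards:

import gc
import time

was_enabled = gc.isenabled()
gc.disable()  # keep collection pauses out of the measurement
try:
    start = time.time()
    data = [str(i) for i in range(10**6)]
    print time.time() - start
finally:
    if was_enabled:
        gc.enable()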
How do python Set Comprehensions work?
Q1 - Is the following a set() of a generator expression or a set comprehension? (Or are they the same? If so, are list & dict comprehensions also corresponding type-casts on generators?) my_set = {x for x in range(10)} Q2 - Does the evaluation consider duplicate values & then remove them by applying set()? dup_set = {x for x in [0, 1, 2, 0, 1, 2]} Does the comprehension perform (speed-wise) better than regular for loops? Update - I tried using timeit for speed comparisons. I am not sure if I am being fair about it. C:\>python -m timeit "s = set()" "for x in range(10):" " s.add(x)" 100000 loops, best of 3: 2.3 usec per loop C:\>python -m timeit "s = {x for x in range(10)}" 1000000 loops, best of 3: 1.68 usec per loop Now, using some conditionals C:\>python -m timeit "s = set()" "for x in range(10):" " if x%2: s.add(x)" 100000 loops, best of 3: 2.27 usec per loop C:\>python -m timeit "s = {x for x in range(10) if x%2}" 1000000 loops, best of 3: 1.83 usec per loop So, there is quite some difference. Is it due to the functionality being hardcoded in C?
Q1 : Yes, yes, yes and yes. Or at least they behave like this. It's a little bit different if you take a look at the bytecode. Let's disassemble this code (Python 2.7) : def list_comp(l): return [x+1 for x in l] def dict_comp(l): return {x+1:0 for x in l} def set_comp(l): return {x+1 for x in l} def generator(l): return (x+1 for x in l) This is what you get: Disassembly of list_comp: 2 0 BUILD_LIST 0 3 LOAD_FAST 0 (l) 6 GET_ITER >> 7 FOR_ITER 16 (to 26) 10 STORE_FAST 1 (x) 13 LOAD_FAST 1 (x) 16 LOAD_CONST 1 (1) 19 BINARY_ADD 20 LIST_APPEND 2 23 JUMP_ABSOLUTE 7 >> 26 RETURN_VALUE Disassembly of dict_comp: 5 0 LOAD_CONST 1 (<code object <dictcomp> at 029DEE30) 3 MAKE_FUNCTION 0 6 LOAD_FAST 0 (l) 9 GET_ITER 10 CALL_FUNCTION 1 13 RETURN_VALUE Disassembly of set_comp: 8 0 LOAD_CONST 1 (<code object <setcomp> at 029DECC8) 3 MAKE_FUNCTION 0 6 LOAD_FAST 0 (l) 9 GET_ITER 10 CALL_FUNCTION 1 13 RETURN_VALUE Disassembly of generator: 11 0 LOAD_CONST 1 (<code object <genexpr> at 02A8FD58) 3 MAKE_FUNCTION 0 6 LOAD_FAST 0 (l) 9 GET_ITER 10 CALL_FUNCTION 1 13 RETURN_VALUE The bytecode is essentially the same for the dict comprehension, the set comprehension and the generator. They all load a code object (<dictcomp>, <setcomp> or <genexpr>) and then make a callable function out of it. The list comprehension is different because it generates the bytecode for the loop inline instead of delegating to a separate code object. Q2 : There is no separate duplicate-removal pass: the comprehension adds elements to the set one at a time, and adding a value that is already present simply has no effect. About timing : List/Dict/Set comprehensions tend to be faster than anything else. Even if they're interpreted, the bytecode generated is optimized for most of the cases with special bytecode instructions like SET_ADD, LIST_APPEND or MAP_ADD.
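To watch the deduplication happen on insertion rather than as a separate set() pass, a tiny demo:

s = set()
for x in [0, 1, 2, 0, 1, 2]:
    s.add(x)  # adding an element that is already present is a no-op
print s       # set([0, 1, 2]), same as {x for x in [0, 1, 2, 0, 1, 2]}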
User Authentication in Django Rest Framework + Angular.js web app
I'm working on a webapp where users can login to see their online wine cellar. I've got the Django REST models setup, as well as the front-end design in Angular but I'm having trouble putting the pieces together and my main issue is for user authentication. I've read many posts on here and various tutorials but I can't seem to find a step by step method to implement authentication: What kind of auth should be used (Token, Session, Other?) How is authentication managed on the server side (is it a view? a method in the UserModel or UserManager?) I have a custom User model (using email as username). Can I use the generic Django login method or do I need to create my own? How is the authentication process managed between the server and client side? From what I understand Angular makes a POST request on a url where DRF verifies that username and password match and returns a token or other auth proof. I feel like I'm close but I need a more general view of how this works to put the pieces together. Thanks in advance
I imagine there are a lot of ways to do this, let me explain what I do, hopefully it is helpful. This is going to be a long post. I would love to hear how others do this, or better ways of implementing the same approach. You can also check out my seed project on Github, Angular-Django-Seed. I use token authentication with Witold Szczerba's http-auth-interceptor. The beauty of his approach is that whenever a request is sent from your site without proper credentials, you are redirected to the login screen, but your request is queued to be re-fired on login complete. Here is a login directive used with the login form. It posts to Django's auth token endpoint, sets a cookie with the response token, sets the default header with the token so all requests will be authenticated, and fires the http-auth-interceptor login event. .directive('login', function ($http, $cookieStore, authService) { return { restrict: 'A', link: function (scope, elem, attrs) { elem.bind('submit', function () { var user_data = { "username": scope.username, "password": scope.password, }; $http.post(constants.serverAddress + "api-token-auth", user_data, {"Authorization": ""}) .success(function(response) { $cookieStore.put('djangotoken', response.token); $http.defaults.headers.common['Authorization'] = 'Token ' + response.token; authService.loginConfirmed(); }); }); } } }) I use the module .run method to set check for the cookie when a user comes to the site, if they have the cookie set I set the default authorization. .run(function($rootScope) { $rootScope.$broadcast('event:initial-auth'); }) Here is my interceptor directive that handles the authService broadcasts. If login is required, I hide everything and show the login form. Otherwise hide the login form and show everything else. .directive('authApplication', function ($cookieStore, $http) { return { restrict: 'A', link: function (scope, elem, attrs) { var login = elem.find('#login-holder'); var main = elem.find('#main'); scope.$on('event:auth-loginRequired', function () { main.hide(); login.slideDown('fast'); }); scope.$on('event:auth-loginConfirmed', function () { main.show(); login.slideUp('fast'); }); scope.$on('event:initial-auth', function () { if ($cookieStore.get('djangotoken')) { $http.defaults.headers.common['Authorization'] = 'Token ' + $cookieStore.get('djangotoken'); } else { login.slideDown('fast'); main.hide(); } }); } } }) To use it all my html was basically like this. <body auth-application> <div id="login-holder"> ... login form </div> <div id="main"> ... ng-view, or the bulk of your html </div>
Python C program subprocess hangs at "for line in iter"
Ok so I'm trying to run a C program from a python script. Currently I'm using a test C program: #include <stdio.h> #include <unistd.h> int main() { while (1) { printf("2000\n"); sleep(1); } return 0; } To simulate the program that I will be using, which takes readings from a sensor constantly. Then I'm trying to read the output (in this case "2000") from the C program with subprocess in python: #!/usr/bin/python import subprocess process = subprocess.Popen("./main", stdout=subprocess.PIPE) while True: for line in iter(process.stdout.readline, ''): print line, but this is not working. From using print statements, it runs the .Popen line then waits at for line in iter(process.stdout.readline, ''):, until I press Ctrl-C. Why is this? This is exactly what most examples that I've seen have as their code, and yet it does not read the file. Edit: Is there a way of making it run only when there is something to be read?
It is a block buffering issue. What follows is a version of my answer to the question "Python: read streaming input from subprocess.communicate()", extended for your case. Fix stdout buffer in C program directly stdio-based programs as a rule are line buffered if they are running interactively in a terminal and block buffered when their stdout is redirected to a pipe. In the latter case, you won't see new lines until the buffer overflows or is flushed. To avoid calling fflush() after each printf() call, you could force line buffered output by calling this in the C program at the very beginning: setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */ As soon as a newline is printed the buffer is flushed in this case. Or fix it without modifying the source of the C program There is the stdbuf utility that allows you to change the buffering type without modifying the source code e.g.: from subprocess import Popen, PIPE process = Popen(["stdbuf", "-oL", "./main"], stdout=PIPE, bufsize=1) for line in iter(process.stdout.readline, b''): print line, process.communicate() # close process' stream, wait for it to exit There are also other utilities available, see Turn off buffering in pipe. Or use pseudo-TTY To trick the subprocess into thinking that it is running interactively, you could use the pexpect module or its analogs; for code examples that use the pexpect and pty modules, see Python subprocess readlines() hangs. Here's a variation on the pty example provided there (it should work on Linux): #!/usr/bin/env python import os import pty import sys from select import select from subprocess import Popen, STDOUT master_fd, slave_fd = pty.openpty() # provide tty to enable line buffering process = Popen("./main", stdin=slave_fd, stdout=slave_fd, stderr=STDOUT, bufsize=0, close_fds=True) timeout = .1 # ugly but otherwise `select` blocks on process' exit # code is similar to _copy() from pty.py with os.fdopen(master_fd, 'r+b', 0) as master: input_fds = [master, sys.stdin] while True: fds = select(input_fds, [], [], timeout)[0] if master in fds: # subprocess' output is ready data = os.read(master_fd, 512) # <-- doesn't block, may return less if not data: # EOF input_fds.remove(master) else: os.write(sys.stdout.fileno(), data) # copy to our stdout if sys.stdin in fds: # got user input data = os.read(sys.stdin.fileno(), 512) if not data: input_fds.remove(sys.stdin) else: master.write(data) # copy it to subprocess' stdin if not fds: # timeout in select() if process.poll() is not None: # subprocess ended # and no output is buffered <-- timeout + dead subprocess assert not select([master], [], [], 0)[0] # race is possible os.close(slave_fd) # subprocess doesn't need it anymore break rc = process.wait() print("subprocess exited with status %d" % rc) Or use pty via pexpect pexpect wraps pty handling into a higher level interface: #!/usr/bin/env python import pexpect child = pexpect.spawn("./main") for line in child: print line, child.close() Q: Why not just use a pipe (popen())? explains why a pseudo-TTY is useful.
Colorize Voronoi Diagram
I'm trying to colorize a Voronoi Diagram created using scipy.spatial.Voronoi. Here's my code: import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi, voronoi_plot_2d # make up data points points = np.random.rand(15,2) # compute Voronoi tesselation vor = Voronoi(points) # plot voronoi_plot_2d(vor) # colorize for region in vor.regions: if not -1 in region: polygon = [vor.vertices[i] for i in region] plt.fill(*zip(*polygon)) plt.show() The resulting image: As you can see some of the Voronoi regions at the border of the image are not colored. That is because some indices to the Voronoi vertices for these regions are set to -1, i.e., for those vertices outside the Voronoi diagram. According to the docs: regions: (list of list of ints, shape (nregions, *)) Indices of the Voronoi vertices forming each Voronoi region. -1 indicates vertex outside the Voronoi diagram. In order to colorize these regions as well, I've tried to just remove these "outside" vertices from the polygon, but that didn't work. I think, I need to fill in some points at the border of the image region, but I can't seem to figure out how to achieve this reasonably. Can anyone help?
The Voronoi data structure contains all the necessary information to construct positions for the "points at infinity". Qhull also reports them simply as -1 indices, so Scipy doesn't compute them for you. https://gist.github.com/pv/8036995 http://nbviewer.ipython.org/gist/pv/8037100 import numpy as np import matplotlib.pyplot as plt from scipy.spatial import Voronoi def voronoi_finite_polygons_2d(vor, radius=None): """ Reconstruct infinite voronoi regions in a 2D diagram to finite regions. Parameters ---------- vor : Voronoi Input diagram radius : float, optional Distance to 'points at infinity'. Returns ------- regions : list of tuples Indices of vertices in each revised Voronoi regions. vertices : list of tuples Coordinates for revised Voronoi vertices. Same as coordinates of input vertices, with 'points at infinity' appended to the end. """ if vor.points.shape[1] != 2: raise ValueError("Requires 2D input") new_regions = [] new_vertices = vor.vertices.tolist() center = vor.points.mean(axis=0) if radius is None: radius = vor.points.ptp().max() # Construct a map containing all ridges for a given point all_ridges = {} for (p1, p2), (v1, v2) in zip(vor.ridge_points, vor.ridge_vertices): all_ridges.setdefault(p1, []).append((p2, v1, v2)) all_ridges.setdefault(p2, []).append((p1, v1, v2)) # Reconstruct infinite regions for p1, region in enumerate(vor.point_region): vertices = vor.regions[region] if all(v >= 0 for v in vertices): # finite region new_regions.append(vertices) continue # reconstruct a non-finite region ridges = all_ridges[p1] new_region = [v for v in vertices if v >= 0] for p2, v1, v2 in ridges: if v2 < 0: v1, v2 = v2, v1 if v1 >= 0: # finite ridge: already in the region continue # Compute the missing endpoint of an infinite ridge t = vor.points[p2] - vor.points[p1] # tangent t /= np.linalg.norm(t) n = np.array([-t[1], t[0]]) # normal midpoint = vor.points[[p1, p2]].mean(axis=0) direction = np.sign(np.dot(midpoint - center, n)) * n far_point = vor.vertices[v2] + direction * radius new_region.append(len(new_vertices)) new_vertices.append(far_point.tolist()) # sort region counterclockwise vs = np.asarray([new_vertices[v] for v in new_region]) c = vs.mean(axis=0) angles = np.arctan2(vs[:,1] - c[1], vs[:,0] - c[0]) new_region = np.array(new_region)[np.argsort(angles)] # finish new_regions.append(new_region.tolist()) return new_regions, np.asarray(new_vertices) # make up data points np.random.seed(1234) points = np.random.rand(15, 2) # compute Voronoi tesselation vor = Voronoi(points) # plot regions, vertices = voronoi_finite_polygons_2d(vor) print "--" print regions print "--" print vertices # colorize for region in regions: polygon = vertices[region] plt.fill(*zip(*polygon), alpha=0.4) plt.plot(points[:,0], points[:,1], 'ko') plt.xlim(vor.min_bound[0] - 0.1, vor.max_bound[0] + 0.1) plt.ylim(vor.min_bound[1] - 0.1, vor.max_bound[1] + 0.1) plt.show()
TypeError while using django rest framework tutorial
I am new to using Django Rest Framework, I am following this tutorial Django-Rest-Framework Instead of snippets my model consists of a userprofile as given below: class UserProfile(models.Model): user = models.OneToOneField(User) emp_code = models.CharField(max_length=10, blank=True) user_type = models.IntegerField(max_length=1, default=0, choices=USER_TYPE) group = models.ForeignKey(Group, null=True, blank=True) status = models.SmallIntegerField(max_length=1,default=0) added_on = models.DateTimeField(auto_now_add=True) The first part of the tutorial ran fine, got the desired output in json format as mentioned; however, from the second tutorial onwards I am getting a type error: TypeError at /authentication/userprofile/ 'type' object is not iterable Request Method: GET Request URL: http://*****.com/authentication/userprofile/ Django Version: 1.6 Exception Type: TypeError Exception Value: 'type' object is not iterable Exception Location: /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in get_permissions, line 226 Python Executable: /usr/bin/python Python Version: 2.7.3 Python Path: ['/home/web/cptm_venv/lib/python2.7/site-packages', '/home/web/cptm', '/home/web/cptm_venv/lib/python2.7/site-packages', '/usr/lib/python2.7', '/usr/lib/python2.7/plat-linux2', '/usr/lib/python2.7/lib-tk', '/usr/lib/python2.7/lib-old', '/usr/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/dist-packages', '/usr/lib/python2.7/dist-packages', '/usr/lib/pymodules/python2.7'] Server time: Wed, 11 Dec 2013 17:33:54 +0530 Traceback: /home/web/cptm_venv/lib/python2.7/site-packages/django/core/handlers/base.py in get_response response = wrapped_callback(request, *callback_args, **callback_kwargs) /home/web/cptm_venv/lib/python2.7/site-packages/django/views/generic/base.py in view return self.dispatch(request, *args, **kwargs) /home/web/cptm_venv/lib/python2.7/site-packages/django/views/decorators/csrf.py in wrapped_view return view_func(*args, **kwargs) /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in dispatch response = self.handle_exception(exc) /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in dispatch self.initial(request, *args, **kwargs) /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in initial self.check_permissions(request) /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in check_permissions for permission in self.get_permissions(): /home/web/cptm_venv/lib/python2.7/site-packages/rest_framework/views.py in get_permissions return [permission() for permission in self.permission_classes]
The rest of the code is almost the same as given in the above link in the 2nd and 3rd parts: views.py from apps.authentication.models import UserProfile from apps.authentication.serializers import UserProfileSerializer from rest_framework import mixins from rest_framework import generics class UserProfileList(mixins.ListModelMixin, mixins.CreateModelMixin, generics.GenericAPIView): queryset = UserProfile.objects.all() serializer_class = UserProfileSerializer def get(self, request, *args, **kwargs): return self.list(request, *args, **kwargs) def post(self, request, *args, **kwargs): return self.create(request, *args, **kwargs) class UserProfileDetail(mixins.RetrieveModelMixin, mixins.UpdateModelMixin, mixins.DestroyModelMixin, generics.GenericAPIView): queryset = UserProfile.objects.all() serializer_class = UserProfileSerializer def get(self, request, *args, **kwargs): return self.retrieve(request, *args, **kwargs) def put(self, request, *args, **kwargs): return self.update(request, *args, **kwargs) def delete(self, request, *args, **kwargs): return self.destroy(request, *args, **kwargs) urls.py from django.conf.urls import patterns, url from rest_framework.urlpatterns import format_suffix_patterns from apps.authentication import views urlpatterns = patterns('', url(r'^userprofile/$', views.UserProfileList.as_view()), url(r'^userprofile/(?P<pk>[0-9]+)/$', views.UserProfileDetail.as_view()), ) urlpatterns = format_suffix_patterns(urlpatterns) I am missing something very obvious; I tried a lot to search for what exactly "type object not iterable" means in this context, and which object is causing the problem, but no luck. I am using Django Rest Framework version 2.3. Thanks in advance
Just to let others know, I kept getting this same error and found that I forgot to include a comma in my REST_FRAMEWORK. I had this: 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated' ), instead of this: 'DEFAULT_PERMISSION_CLASSES': ( 'rest_framework.permissions.IsAuthenticated', ), The comma defines this as a one-element tuple
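The underlying gotcha in two lines, for anyone skimming:

print type(('rest_framework.permissions.IsAuthenticated'))   # <type 'str'>: parentheses alone don't make a tuple
print type(('rest_framework.permissions.IsAuthenticated',))  # <type 'tuple'>: the trailing comma does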
Create heatmap using pandas TimeSeries
I need to create a Matplotlib heatmap (pcolormesh) using a Pandas DataFrame TimeSeries column (df_all.ts) as my X-axis. How do I convert a Pandas TimeSeries column to something which can be used as the X-axis in the np.meshgrid(x, y) function to create a heatmap? The workaround is to create a Matplotlib drange using the same parameters as in the pandas column, but is there a simpler way? x = pd.date_range(df_all.ts.min(),df_all.ts.max(),freq='H') xt = mdates.drange(df_all.ts.min(), df_all.ts.max(), dt.timedelta(hours=1)) y = arange(ylen) X,Y = np.meshgrid(xt, y)
I do not know what you mean by heat map for a time series, but for a dataframe you may do as below: import numpy as np import pandas as pd import matplotlib.pyplot as plt from itertools import product from string import ascii_uppercase from matplotlib import patheffects m, n = 4, 7 # 4 rows, 7 columns df = pd.DataFrame(np.random.randn(m, n), columns=list(ascii_uppercase[:n]), index=list(ascii_uppercase[-m:])) ax = plt.imshow(df, interpolation='nearest', cmap='Oranges').axes _ = ax.set_xticks(np.linspace(0, n-1, n)) _ = ax.set_xticklabels(df.columns) _ = ax.set_yticks(np.linspace(0, m-1, m)) _ = ax.set_yticklabels(df.index) ax.grid('off') ax.xaxis.tick_top() optionally, to print actual values in the middle of each square, with some shadows for readability, you may do: path_effects = [patheffects.withSimplePatchShadow(shadow_rgbFace=(1,1,1))] for i, j in product(range(m), range(n)): _ = ax.text(j, i, '{0:.2f}'.format(df.iloc[i, j]), size='medium', ha='center', va='center', path_effects=path_effects)
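Since the original question was about a time axis, here is a hedged sketch of pcolormesh driven by a pandas date_range (the data is random and the variable names are made up):

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as mdates

x = pd.date_range('2014-01-01', periods=25, freq='H')  # 25 edges -> 24 cells
y = np.arange(6)                                       # 6 edges -> 5 cells
Z = np.random.rand(5, 24)

# pcolormesh wants plain numbers, so convert the timestamps to matplotlib date numbers
X, Y = np.meshgrid(mdates.date2num(x.to_pydatetime()), y)
fig, ax = plt.subplots()
ax.pcolormesh(X, Y, Z)
ax.xaxis_date()        # format the x ticks as dates again
fig.autofmt_xdate()
plt.show()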
String formatting without index in python2.6
I've got many thousands of lines of python code that has python2.7+ style string formatting (e.g. without indices in the {}s) "{} {}".format('foo', 'bar') I need to run this code under python2.6 and python2.6 requires the indices. I'm wondering if anyone knows of a painless way allow python2.6 to run this code. It'd be great if there was a "from __future__ import blah" solution to the problem. I don't see one. Something along those lines would be my first choice. A distant second would be some script that can automate the process of adding the indices, at least in the obvious cases: "{0} {1}".format('foo', 'bar')
It doesn't quite preserve the whitespacing and could probably be made a bit smarter, but it will at least identify Python strings (apostrophes/quotes/multi line) correctly without resorting to a regex or external parser: import tokenize from itertools import count import re with open('your_file') as fin: output = [] tokens = tokenize.generate_tokens(fin.readline) for num, val in (token[:2] for token in tokens): if num == tokenize.STRING: val = re.sub('{}', lambda L, c=count(): '{{{0}}}'.format(next(c)), val) output.append((num, val)) print tokenize.untokenize(output) # write to file instead... Example input: s = "{} {}".format('foo', 'bar') if something: do_something('{} {} {}'.format(1, 2, 3)) Example output (note slightly iffy whitespacing): s ="{0} {1}".format ('foo','bar') if something : do_something ('{0} {1} {2}'.format (1 ,2 ,3 ))
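Before running it over whole files, you can sanity-check the substitution on a single string; the core re.sub trick looks like this in isolation:

import re
from itertools import count

val = "'{} {} and {}'"
fixed = re.sub('{}', lambda m, c=count(): '{{{0}}}'.format(next(c)), val)
print fixed  # '{0} {1} and {2}'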
How to select/reduce a list of dictionaries in Flask/Jinja
I have a Jinja template with a list of dictionaries. Order matters. I'd like to reduce the list or lookup values based on the keys/values of the dictionaries. Here's an example: {% set ordered_dicts = [ { 'id': 'foo', 'name': 'My name is Foo' }, { 'id': 'bar', 'name': 'My name is Bar' } ] %} If I have a variable some_id = 'foo', how do I get 'My name is Foo' out of ordered_dicts in my Jinja template? I tried select() and selectattr() but couldn't figure them out based on the documentation. Here's what I tried: {{ ordered_dicts|selectattr("id", "foo") }} That outputs: <generator object _select_or_reject at 0x10748d870> I don't think I'm understanding the use of select() and selectattr() properly. Do I need to iterate over the list and do the lookup manually? Update: As codegeek and gipi pointed out, I need to do something like this with the generator: {{ ordered_dicts|selectattr("id", "foo")|list }} The resulting error: TemplateRuntimeError: no test named 'foo', which clarifies how selectattr() works. The second argument has to be one of the builtin tests. As far as I can tell, none of these tests will let me check whether the value associated with a key matches another value. Here's what I'd like to do: {{ ordered_dicts|selectattr("id", "sameas", "foo")|list }} But that doesn't work, since the sameas test checks whether two objects are really the same object in memory, not whether two strings/numbers are equivalent. So is it possible to pick an item based on a key/value comparison test?
I've just backported equalto like this: app.jinja_env.tests['equalto'] = lambda value, other : value == other After that this example from 2.8 docs works: {{ users|selectattr("email", "equalto", "foo@bar.invalid") }} Update: Flask has a decorator for registering tests, slightly cleaner syntax: http://flask.pocoo.org/docs/api/#flask.Flask.template_test
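With the decorator form mentioned above, the same backport reads a bit more cleanly; a minimal sketch:

from flask import Flask

app = Flask(__name__)

@app.template_test('equalto')
def equalto(value, other):
    return value == other

# in a template:
# {{ ordered_dicts|selectattr("id", "equalto", some_id)|list }}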
Fastest way to strip punctuation from a unicode string in Python
I am trying to efficiently strip punctuation from a unicode string. With a regular string, using mystring.translate(None, string.punctuation) is clearly the fastest approach. However, this code breaks on a unicode string in Python 2.7. As the comments to this answer explain, the translate method can still be implemented, but it must be implemented with a dictionary. When I use this implementation though, I find that translate's performance is dramatically reduced. Here is my timing code (copied primarily from this answer): import re, string, timeit import unicodedata import sys #String from this article www.wired.com/design/2013/12/find-the-best-of-reddit-with-this-interactive-map/ s = "For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out." su = u"For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out." exclude = set(string.punctuation) regex = re.compile('[%s]' % re.escape(string.punctuation)) def test_set(s): return ''.join(ch for ch in s if ch not in exclude) def test_re(s): # From Vinko's solution, with fix.
return regex.sub('', s) def test_trans(s): return s.translate(None, string.punctuation) tbl = dict.fromkeys(i for i in xrange(sys.maxunicode) if unicodedata.category(unichr(i)).startswith('P')) def test_trans_unicode(su): return su.translate(tbl) def test_repl(s): # From S.Lott's solution for c in string.punctuation: s=s.replace(c,"") return s print "sets :",timeit.Timer('f(s)', 'from __main__ import s,test_set as f').timeit(1000000) print "regex :",timeit.Timer('f(s)', 'from __main__ import s,test_re as f').timeit(1000000) print "translate :",timeit.Timer('f(s)', 'from __main__ import s,test_trans as f').timeit(1000000) print "replace :",timeit.Timer('f(s)', 'from __main__ import s,test_repl as f').timeit(1000000) print "sets (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_set as f').timeit(1000000) print "regex (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_re as f').timeit(1000000) print "translate (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_trans_unicode as f').timeit(1000000) print "replace (unicode) :",timeit.Timer('f(su)', 'from __main__ import su,test_repl as f').timeit(1000000) As my results show, the unicode implementation of translate performs horribly: sets : 38.323941946 regex : 6.7729549408 translate : 1.27428412437 replace : 5.54967689514 sets (unicode) : 43.6268708706 regex (unicode) : 7.32343912125 translate (unicode) : 54.0041439533 replace (unicode) : 17.4450061321 My question is whether there is a faster way to implement translate for unicode (or any other method) that would outperform regex.
The current test script is flawed, because it does not compare like with like. For a fairer comparison, all the functions must be run with the same set of punctuation characters (i.e. either all ascii, or all unicode). When that is done, the regex and replace methods fare much worse with the full set of unicode punctuation characters. For full unicode, it looks like the "set" method is the best. However, if you only want remove the ascii punctuation characters from unicode strings, it may be best to encode, translate, and decode (depending on the length of the input string). The "replace" method can also be substantially improved by doing a containment test before attempting replacements (depending on the precise make-up of the string). Here's some sample results from a re-hash of the test script: $ python2 test.py running ascii punctuation test... using byte strings... set: 0.862006902695 re: 0.17484498024 trans: 0.0207080841064 enc_trans: 0.0206489562988 repl: 0.157525062561 in_repl: 0.213351011276 $ python2 test.py a running ascii punctuation test... using unicode strings... set: 0.927773952484 re: 0.18892288208 trans: 1.58275294304 enc_trans: 0.0794939994812 repl: 0.413739919662 in_repl: 0.249747991562 python2 test.py u running unicode punctuation test... using unicode strings... set: 0.978360176086 re: 7.97941994667 trans: 1.72471117973 enc_trans: 0.0784001350403 repl: 7.05612301826 in_repl: 3.66821289062 And here's the re-hashed script: # -*- coding: utf-8 -*- import re, string, timeit import unicodedata import sys #String from this article www.wired.com/design/2013/12/find-the-best-of-reddit-with-this-interactive-map/ s = """For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out.""" su = u"""For me, Reddit brings to mind Obi Wan’s enduring description of the Mos Eisley cantina: a wretched hive of scum and villainy. But, you know, one you still kinda want to hang out in occasionally. The thing is, though, Reddit isn’t some obscure dive bar in a remote corner of the universe—it’s a huge watering hole at the very center of it. The site had some 400 million unique visitors in 2012. They can’t all be Greedos. So maybe my problem is just that I’ve never been able to find the places where the decent people hang out.""" def test_trans(s): return s.translate(tbl) def test_enc_trans(s): s = s.encode('utf-8').translate(None, string.punctuation) return s.decode('utf-8') def test_set(s): # with list comprehension fix return ''.join([ch for ch in s if ch not in exclude]) def test_re(s): # From Vinko's solution, with fix. return regex.sub('', s) def test_repl(s): # From S.Lott's solution for c in punc: s = s.replace(c, "") return s def test_in_repl(s): # From S.Lott's solution, with fix for c in punc: if c in s: s = s.replace(c, "") return s txt = 'su' ptn = u'[%s]' if 'u' in sys.argv[1:]: print 'running unicode punctuation test...' print 'using unicode strings...' 
punc = u'' tbl = {} for i in xrange(sys.maxunicode): char = unichr(i) if unicodedata.category(char).startswith('P'): tbl[i] = None punc += char else: print 'running ascii punctuation test...' punc = string.punctuation if 'a' in sys.argv[1:]: print 'using unicode strings...' punc = punc.decode() tbl = {ord(ch):None for ch in punc} else: print 'using byte strings...' txt = 's' ptn = '[%s]' def test_trans(s): return s.translate(None, punc) test_enc_trans = test_trans exclude = set(punc) regex = re.compile(ptn % re.escape(punc)) def time_func(func, n=10000): timer = timeit.Timer( 'func(%s)' % txt, 'from __main__ import %s, test_%s as func' % (txt, func)) print '%s: %s' % (func, timer.timeit(n)) print time_func('set') time_func('re') time_func('trans') time_func('enc_trans') time_func('repl') time_func('in_repl')
Ubuntu running `pip install` gives error 'The following required packages can not be built: * freetype'
When performing pip install -r requirements.txt, I get the following error during the stage where it is installing matplotlib: REQUIRED DEPENDENCIES AND EXTENSIONS numpy: yes [not found. pip may install it below.] dateutil: yes [dateutil was not found. It is required for date axis support. pip/easy_install may attempt to install it after matplotlib.] tornado: yes [tornado was not found. It is required for the WebAgg backend. pip/easy_install may attempt to install it after matplotlib.] pyparsing: yes [pyparsing was not found. It is required for mathtext support. pip/easy_install may attempt to install it after matplotlib.] pycxx: yes [Couldn't import. Using local copy.] libagg: yes [pkg-config information for 'libagg' could not be found. Using local copy.] freetype: no [pkg-config information for 'freetype2' could not be found.] ... The following required packages can not be built: * freetype Shouldn't pip install -r requirements.txt also install freetype? How should freetype be installed in Ubuntu 12.04 so it works with matplotlib?
No. pip will not install system-level dependencies. This means pip will not install RPMs (Redhat based systems) or DEBs (Debian based systems).

To install system dependencies you will need to use one of the following methods depending on your system.

Ubuntu/Debian:

apt-get install libfreetype6-dev

To search for packages on Ubuntu/Debian based systems:

apt-cache search <string>

e.g.:

apt-cache search freetype | grep dev

Redhat/CentOS/Fedora:

yum -y install freetype-devel

To search for packages on Redhat/CentOS/Fedora based systems:

yum search <string>

e.g.:

yum search freetype | grep devel

Mac OS X (via Homebrew):

brew install freetype

To search for packages on Mac OS X with Homebrew:

brew search <string>

e.g.:

brew search freetype
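For the Ubuntu 12.04 case in the question, a plausible full sequence is to install the system package first and then re-run pip. libpng-dev and pkg-config are other build dependencies matplotlib commonly needs, so treat this exact package list as a starting point rather than gospel:

sudo apt-get install libfreetype6-dev libpng-dev pkg-config
pip install -r requirements.txt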
Lazy evaluation in Python
What is lazy evaluation in Python? One website says:

In Python 3.x the range() function returns a special range object which computes elements of the list on demand (lazy or deferred evaluation):

>>> r = range(10)
>>> print(r)
range(0, 10)
>>> print(r[3])
3

What is meant by this?
The object returned by range() in Python 3.x (like xrange() in Python 2.x) is lazily evaluated. Instead of storing the entire range, [0,1,2,..,9], in memory, it stores a definition for (i=0; i<10; i+=1) and computes each value only when it is needed (AKA lazy evaluation). Strictly speaking it is a range object rather than a generator, which is why indexing like r[3] works, but it illustrates the same idea, and generators are the usual way to get lazy evaluation in your own code.

Essentially, a generator allows you to return a list-like structure, with some differences:

A list stores all elements when it is created. A generator generates the next element only when it is needed.
A list can be iterated over as often as you need; a generator can only be iterated over exactly once.
A list can get elements by index; a generator cannot, since it only generates its values once, from start to end.

A generator can be created in two ways:

(1) Very similar to a list comprehension:

# this is a list: all 5000000 x/2 values are created immediately, uses []
lis = [x/2 for x in range(5000000)]

# this is a generator: each x/2 value is created only when it is needed, uses ()
gen = (x/2 for x in range(5000000))

(2) As a function, using yield to return the next value:

# this is also a generator: it runs until a yield occurs and returns that result;
# on the next call it picks up where it left off and continues until a yield occurs...
def divby2(n):
    num = 0
    while num < n:
        yield num/2
        num += 1

# same as (x/2 for x in range(5000000))
print divby2(5000000)

Note: Even though range(5000000) is lazy in Python 3.x, [x/2 for x in range(5000000)] is still a list. range(...) does its job and generates x one at a time, but the entire list of x/2 values will be computed when this list is created.
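To make the one-shot nature concrete, here is a small sketch you can paste into a Python 3 interpreter (variable names are my own):

gen = (x * x for x in range(3))
print(list(gen))  # [0, 1, 4] - values are produced as list() pulls them
print(list(gen))  # []        - the generator is now exhausted

squares = [x * x for x in range(3)]
print(squares)    # [0, 1, 4]
print(squares)    # [0, 1, 4] - a list can be re-read as often as needed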
Django Rest Framework - Could not resolve URL for hyperlinked relationship using view name "user-detail"
I am building a project in Django Rest Framework where users can login to view their wine cellar. My ModelViewSets were working just fine and all of a sudden I get this frustrating error:

Could not resolve URL for hyperlinked relationship using view name "user-detail". You may have failed to include the related model in your API, or incorrectly configured the lookup_field attribute on this field.

The traceback shows:

[12/Dec/2013 18:35:29] "GET /bottles/ HTTP/1.1" 500 76677
Internal Server Error: /bottles/
Traceback (most recent call last):
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
    response = wrapped_callback(request, *callback_args, **callback_kwargs)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/viewsets.py", line 78, in view
    return self.dispatch(request, *args, **kwargs)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
    return view_func(*args, **kwargs)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/views.py", line 399, in dispatch
    response = self.handle_exception(exc)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/views.py", line 396, in dispatch
    response = handler(request, *args, **kwargs)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/mixins.py", line 96, in list
    return Response(serializer.data)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/serializers.py", line 535, in data
    self._data = [self.to_native(item) for item in obj]
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/serializers.py", line 325, in to_native
    value = field.field_to_native(obj, field_name)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/relations.py", line 153, in field_to_native
    return self.to_native(value)
  File "/Users/bpipat/.virtualenvs/usertest2/lib/python2.7/site-packages/rest_framework/relations.py", line 452, in to_native
    raise Exception(msg % view_name)
Exception: Could not resolve URL for hyperlinked relationship using view name "user-detail". You may have failed to include the related model in your API, or incorrectly configured the `lookup_field` attribute on this field.

I have a custom email user model and the bottle model in models.py is:

class Bottle(models.Model):
    wine = models.ForeignKey(Wine, null=False)
    user = models.ForeignKey(User, null=False, related_name='bottles')

My serializers:

class BottleSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Bottle
        fields = ('url', 'wine', 'user')

class UserSerializer(serializers.ModelSerializer):
    class Meta:
        model = User
        fields = ('email', 'first_name', 'last_name', 'password', 'is_superuser')

My views:

class BottleViewSet(viewsets.ModelViewSet):
    """
    API endpoint that allows bottles to be viewed or edited.
    """
    queryset = Bottle.objects.all()
    serializer_class = BottleSerializer

class UserViewSet(ListCreateAPIView):
    """
    API endpoint that allows users to be viewed or edited.
    """
    queryset = User.objects.all()
    serializer_class = UserSerializer

and finally the url:

router = routers.DefaultRouter()
router.register(r'bottles', views.BottleViewSet, base_name='bottles')

urlpatterns = patterns('',
    url(r'^', include(router.urls)),
    # ...

I don't have a user detail view and I don't see where this issue could come from.
Any ideas? Thanks
Because it's a HyperlinkedModelSerializer, your serializer is trying to resolve the URL for the related User on your Bottle. As you don't have the user detail view, it can't do this; hence the exception.

Wouldn't just registering the UserViewSet with the router solve your issue?

Alternatively, you could define the user field on your BottleSerializer to explicitly use the UserSerializer rather than trying to resolve the URL. See the serializer docs on dealing with nested objects for that; a sketch of this option follows below.
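A minimal sketch of the nested-serializer option, reusing the serializers from the question (illustrative, not a drop-in fix):

class BottleSerializer(serializers.HyperlinkedModelSerializer):
    # Embed the full user representation instead of a hyperlink,
    # so DRF never needs to resolve a 'user-detail' URL.
    user = UserSerializer()

    class Meta:
        model = Bottle
        fields = ('url', 'wine', 'user')

Note that the router option assumes a ViewSet; the UserViewSet in the question subclasses ListCreateAPIView, which cannot be registered with a router directly.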
How do I get PyLint to recognize numpy members?
I am running PyLint on a Python project. PyLint makes many complaints about being unable to find numpy members. How can I avoid this without skipping membership checks altogether?

From the code:

import numpy as np
print np.zeros([1, 4])

Which, when run, gives the expected:

[[ 0. 0. 0. 0.]]

However, pylint gives me this error:

E:  3, 6: Module 'numpy' has no 'zeros' member (no-member)

For versions, I am using pylint 1.0.0 (astroid 1.0.1, common 0.60.0) and trying to work with numpy 1.8.0.
I had the same issue here, even with the latest versions of all related packages (astroid 1.3.2, logilab_common 0.63.2, pylint 1.4.0). The following solution worked like a charm: I added numpy to the list of ignored modules by modifying my pylintrc file, in the [TYPECHECK] section:

[TYPECHECK]

ignored-modules = numpy

Depending on the error, you might also need to add the following line (still in the [TYPECHECK] section):

ignored-classes = numpy
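If you don't have a pylintrc yet, pylint can generate a commented template for you; a typical session might look like this (the --generate-rcfile flag exists in pylint 1.x):

pylint --generate-rcfile > pylintrc
# then edit the [TYPECHECK] section and add:
#   ignored-modules=numpy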
Generate an equation with a result closest to the requested value (speed problems)
I am writing some quiz game and need the computer to solve one game in the quiz if players fail to solve it.

Given data:

1. List of 6 numbers to use, for example 4, 8, 6, 2, 15, 50.
2. Targeted value, where 0 < value < 1000, for example 590.
3. Available operations are addition, subtraction, multiplication and division.
4. Parentheses can be used.

Generate a mathematical expression whose evaluation is equal, or as close as possible, to the target value. For example, for the numbers given above, the expression could be: (6 + 4) * 50 + 15 * (8 - 2) = 590

My algorithm is as follows:

1. Generate all permutations of all the subsets of the given numbers from (1) above
2. For each permutation, generate all parentheses and operator combinations
3. Track the closest value as the algorithm runs

I cannot think of any smart optimization to the brute-force algorithm above that would speed it up by an order of magnitude. Also I must optimize for the worst case, because many quiz games will be run simultaneously on the server.

Code written today to solve this problem is (relevant stuff extracted from the project):

from operator import add, sub, mul, div
import itertools

ops = ['+', '-', '/', '*']
op_map = {'+': add, '-': sub, '/': div, '*': mul}

# iterate over 1 permutation and generate parentheses and operator combinations
def iter_combinations(seq):
    if len(seq) == 1:
        yield seq[0], str(seq[0])
    else:
        for i in range(len(seq)):  # split input list at i`th place
            left, right = seq[:i], seq[i:]
            # generate cartesian product
            for l, l_str in iter_combinations(left):
                for r, r_str in iter_combinations(right):
                    for op in ops:
                        if op_map[op] is div and r == 0:  # cant divide by zero
                            continue
                        else:
                            yield op_map[op](float(l), r), \
                                  ('(' + l_str + op + r_str + ')')

numbers = [4, 8, 6, 2, 15, 50]
target = best_value = 590
best_item = None

for i in range(len(numbers)):
    for current in itertools.permutations(numbers, i+1):  # generate perms
        for value, item in iter_combinations(list(current)):
            if value < 0:
                continue
            if abs(target - value) < best_value:
                best_value = abs(target - value)
                best_item = item

print best_item

It prints: ((((4*6)+50)*8)-2). I tested it a little with different values and it seems to work correctly. I also have a function to remove unnecessary parentheses, but it is not relevant to the question so it is not posted.

The problem is that this runs very slowly because of all these permutations, combinations and evaluations. On my MacBook Air it runs for a few minutes for one example. I would like to make it run in a few seconds tops on the same machine, because many quiz game instances will be run at the same time on the server. So the questions are:

1. Can I speed up the current algorithm somehow (by orders of magnitude)?
2. Am I missing some other algorithm for this problem which would run much faster?
You can build all the possible expression trees with the given numbers and evaluate them. You don't need to keep them all in memory, just print them when the target number is found:

First we need a class to hold the expression. It is better to design it to be immutable, so its value can be precomputed. Something like this:

class Expr:
    '''An Expr can be built with two different calls:
       -Expr(number) to build a literal expression
       -Expr(a, op, b) to build a complex expression.
       There a and b will be of type Expr,
       and op will be one of ('+','-', '*', '/').
    '''
    def __init__(self, *args):
        if len(args) == 1:
            self.left = self.right = self.op = None
            self.value = args[0]
        else:
            self.left = args[0]
            self.right = args[2]
            self.op = args[1]
            if self.op == '+':
                self.value = self.left.value + self.right.value
            elif self.op == '-':
                self.value = self.left.value - self.right.value
            elif self.op == '*':
                self.value = self.left.value * self.right.value
            elif self.op == '/':
                self.value = self.left.value // self.right.value

    def __str__(self):
        '''It can be done smarter not to print redundant parentheses,
           but that is out of the scope of this problem.
        '''
        if self.op:
            return "({0}{1}{2})".format(self.left, self.op, self.right)
        else:
            return "{0}".format(self.value)

Now we can write a recursive function that builds all the possible expression trees with a given set of expressions, and prints the ones that equal our target value. We will use the itertools module, that's always fun.

We can use itertools.combinations() or itertools.permutations(); the difference is in the order. Some of our operations are commutative and some are not, so we can use permutations() and assume we will get many very similar solutions. Or we can use combinations() and manually reorder the values when the operation is not commutative.

import itertools

OPS = ('+', '-', '*', '/')

def SearchTrees(current, target):
    ''' current is the current set of expressions.
        target is the target number.
    '''
    for a,b in itertools.combinations(current, 2):
        current.remove(a)
        current.remove(b)
        for o in OPS:
            # This checks whether this operation is commutative
            if o == '-' or o == '/':
                conmut = ((a,b), (b,a))
            else:
                conmut = ((a,b),)

            for aa, bb in conmut:
                # You do not specify what to do with the division.
                # I'm assuming that only integer divisions are allowed.
                if o == '/' and (bb.value == 0 or aa.value % bb.value != 0):
                    continue
                e = Expr(aa, o, bb)
                # If a solution is found, print it
                if e.value == target:
                    print(e.value, '=', e)
                current.add(e)
                # Recursive call!
                SearchTrees(current, target)
                # Do not forget to leave the set as it were before
                current.remove(e)
        # Ditto
        current.add(b)
        current.add(a)

And then the main call:

NUMBERS = [4, 8, 6, 2, 15, 50]
TARGET = 590

initial = set(map(Expr, NUMBERS))
SearchTrees(initial, TARGET)

And done! With these data I'm getting 719 different solutions in just over 21 seconds! Of course many of them are trivial variations of the same expression.
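If the trivial variations bother you, one cheap tweak is to remember which expression strings have already been printed and skip repeats. A sketch (seen and report are my own additions; this trades a little memory for cleaner output, and does not change the asymptotic running time):

seen = set()

def report(e):
    # Print each distinct expression string only once.
    s = str(e)
    if s not in seen:
        seen.add(s)
        print(e.value, '=', s)

# ...and inside SearchTrees, replace the print with: report(e)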
`pip install pandas` gives UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 41: ordinal not in range(128)
When performing pip install pandas on a Digital Ocean 512MB droplet, I get the error UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 41: ordinal not in range(128). Any ideas what may have caused it? I'm running Ubuntu 12.04 64bit.

[Full Error]
It looks like gcc being killed due to insufficient memory (see @Blender's comment) exposed a bug in pip. It mixes bytestrings and Unicode while logging, which leads to:

>>> '\n'.join(['bytestring with non-ascii character ☺', u'unicode'])
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe2 in position 36: \
ordinal not in range(128)

If it is reproducible with the latest pip version, you could report the bug.
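If memory is indeed the culprit, a common workaround on a 512MB droplet is to confirm the OOM kill and add temporary swap before retrying (the swapfile path and size are illustrative):

dmesg | grep -i kill          # look for the OOM killer taking out cc1/gcc
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo mkswap /swapfile
sudo swapon /swapfile         # temporary swap so compilation can finish
pip install pandas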
Opening Local File Works with urllib but not with urllib2
I'm trying to open a local file using urllib2. How can I go about doing this? When I try the following line with urllib:

resp = urllib.urlopen(url)

it works correctly, but when I switch it to:

resp = urllib2.urlopen(url)

I get:

ValueError: unknown url type: /path/to/file

where that file definitely does exist. Thanks!
Just put "file://" in front of the path >>> import urllib2 >>> urllib2.urlopen("file:///etc/debian_version").read() 'wheezy/sid\n'