Elegant way to create empty pandas DataFrame with NaN of type float
I want to create a Pandas DataFrame filled with NaNs. During my research I found an answer: import pandas as pd df = pd.DataFrame(index=range(0,4),columns=['A']) This code results in a DataFrame filled with NaNs of type "object". So they cannot be used later on for example with the interpolate() method. Therefore, I created the DataFrame with this complicated code (inspired by this answer): import pandas as pd import numpy as np dummyarray = np.empty((4,1)) dummyarray[:] = np.nan df = pd.DataFrame(dummyarray) This results in a DataFrame filled with NaN of type "float", so it can be used later on with interpolate(). Is there a more elegant way to create the same result?
This one-liner seems to work as well: >>> df = pd.DataFrame(np.nan, index=[0,1,2,3], columns=['A']) >>> df.dtypes A float64 dtype: object
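A quick check (a minimal sketch; the filled values are just illustrative) that the one-liner really produces float64 NaNs that interpolate() can work with:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.nan, index=range(4), columns=['A'])
df.loc[0, 'A'] = 0.0
df.loc[3, 'A'] = 3.0
print(df['A'].interpolate())  # linear interpolation fills rows 1 and 2 with 1.0 and 2.0
```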
Complex numbers in Cython
What is the correct way to work with complex numbers in Cython? I would like to write a pure C loop using a numpy.ndarray of dtype np.complex128. In Cython, the associated C type is defined in Cython/Includes/numpy/__init__.pxd as ctypedef double complex complex128_t so it seems this is just a simple C double complex. However, it's easy to obtain strange behaviors. In particular, with these definitions

```cython
cimport numpy as np
import numpy as np
np.import_array()

cdef extern from "complex.h":
    pass

cdef:
    np.complex128_t varc128 = 1j
    np.float64_t varf64 = 1.
    double complex vardc = 1j
    double vard = 1.
```

the line varc128 = varc128 * varf64 can be compiled by Cython but gcc cannot compile the C code produced (the error is "testcplx.c:663:25: error: two or more data types in declaration specifiers" and seems to be due to the line typedef npy_float64 _Complex __pyx_t_npy_float64_complex;). This error has already been reported (for example here) but I didn't find any good explanation and/or clean solution.

Without inclusion of complex.h, there is no error (I guess because the typedef is then not included). However, there is still a problem since in the html file produced by cython -a testcplx.pyx, the line varc128 = varc128 * varf64 is yellow, meaning that it has not been translated into pure C. The corresponding C code is:

```c
__pyx_t_2 = __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex_from_parts(__Pyx_CREAL(__pyx_v_8testcplx_varc128), __Pyx_CIMAG(__pyx_v_8testcplx_varc128)), __pyx_t_npy_float64_complex_from_parts(__pyx_v_8testcplx_varf64, 0));
__pyx_v_8testcplx_varc128 = __pyx_t_double_complex_from_parts(__Pyx_CREAL(__pyx_t_2), __Pyx_CIMAG(__pyx_t_2));
```

and the __Pyx_CREAL and __Pyx_CIMAG are orange (Python calls).

Interestingly, the line vardc = vardc * vard does not produce any error and is translated into pure C (just __pyx_v_8testcplx_vardc = __Pyx_c_prod(__pyx_v_8testcplx_vardc, __pyx_t_double_complex_from_parts(__pyx_v_8testcplx_vard, 0));), whereas it is very similar to the first one. I can avoid the error by using intermediate variables (and it translates into pure C): vardc = varc128 vard = varf64 varc128 = vardc * vard or simply by casting (but it does not translate into pure C): vardc = <double complex>varc128 * <double>varf64

So what happens? What is the meaning of the compilation error? Is there a clean way to avoid it? Why does the multiplication of a np.complex128_t and np.float64_t seem to involve Python calls?

Versions: Cython version 0.22 (most recent version in PyPI when the question was asked) and GCC 4.9.2.

Repository: I created a tiny repository with the example (hg clone https://bitbucket.org/paugier/test_cython_complex) and a tiny Makefile with 3 targets (make clean, make build, make html) so it is easy to test anything.
The simplest way I can find to work around this issue is to simply switch the order of multiplication. If in testcplx.pyx I change varc128 = varc128 * varf64 to varc128 = varf64 * varc128, I change from the failing situation described to one that works correctly. This scenario is useful as it allows a direct diff of the produced C code.

tl;dr The order of the multiplication changes the translation, meaning that in the failing version the multiplication is attempted via __pyx_t_npy_float64_complex types, whereas in the working version it is done via __pyx_t_double_complex types. This in turn introduces the typedef line typedef npy_float64 _Complex __pyx_t_npy_float64_complex;, which is invalid. I am fairly sure this is a cython bug (Update: reported here). Although this is a very old gcc bug report, the response explicitly states (in saying that it is not, in fact, a gcc bug, but a user code error): typedef R _Complex C; This is not valid code; you can't use _Complex together with a typedef, only together with "float", "double" or "long double" in one of the forms listed in C99. They conclude that double _Complex is a valid type specifier whereas ArbitraryType _Complex is not. This more recent report has the same type of response - trying to use _Complex on a non-fundamental type is outside the spec, and the GCC manual indicates that _Complex can only be used with float, double and long double. So - we can hack the cython generated C code to test that: replace typedef npy_float64 _Complex __pyx_t_npy_float64_complex; with typedef double _Complex __pyx_t_npy_float64_complex; and verify that it is indeed valid and can make the output code compile.

Short trek through the code: Swapping the multiplication order only highlights the problem that we are told about by the compiler. In the first case, the offending line is the one that says typedef npy_float64 _Complex __pyx_t_npy_float64_complex; - it is trying to combine the type npy_float64 with the keyword _Complex to define the type __pyx_t_npy_float64_complex. float _Complex or double _Complex is a valid type, whereas npy_float64 _Complex is not. To see the effect, you can just delete npy_float64 from that line, or replace it with double or float, and the code compiles fine. The next question is why that line is produced in the first place... This seems to be produced by this line in the Cython source code. Why does the order of the multiplication change the code significantly - such that the type __pyx_t_npy_float64_complex is introduced, and introduced in a way that fails? In the failing instance, the code to implement the multiplication turns varf64 into a __pyx_t_npy_float64_complex type, does the multiplication on real and imaginary parts and then reassembles the complex number. In the working version, it does the product directly via the __pyx_t_double_complex type using the function __Pyx_c_prod. I guess this is as simple as the cython code taking its cue for which type to use for the multiplication from the first variable it encounters. In the first case, it sees a float64, so produces (invalid) C code based on that, whereas in the second, it sees the (double) complex128 type and bases its translation on that. This explanation is a little hand-wavy and I hope to return to an analysis of it if time allows...
A note on this - here we see that the typedef for npy_float64 is double, so in this particular case a fix might consist of modifying the code here to use double _Complex where the type is npy_float64, but this is getting beyond the scope of a SO answer and doesn't present a general solution.

C code diff result

Working version: creates this C code from the line varc128 = varf64 * varc128:

```c
__pyx_v_8testcplx_varc128 = __Pyx_c_prod(__pyx_t_double_complex_from_parts(__pyx_v_8testcplx_varf64, 0), __pyx_v_8testcplx_varc128);
```

Failing version: creates this C code from the line varc128 = varc128 * varf64:

```c
__pyx_t_2 = __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex_from_parts(__Pyx_CREAL(__pyx_v_8testcplx_varc128), __Pyx_CIMAG(__pyx_v_8testcplx_varc128)), __pyx_t_npy_float64_complex_from_parts(__pyx_v_8testcplx_varf64, 0));
__pyx_v_8testcplx_varc128 = __pyx_t_double_complex_from_parts(__Pyx_CREAL(__pyx_t_2), __Pyx_CIMAG(__pyx_t_2));
```

Which necessitates these extra declarations - and the offending line is the one that says typedef npy_float64 _Complex __pyx_t_npy_float64_complex; - it is trying to combine the type npy_float64 with the keyword _Complex to define the type __pyx_t_npy_float64_complex:

```c
#if CYTHON_CCOMPLEX
  #ifdef __cplusplus
    typedef ::std::complex< npy_float64 > __pyx_t_npy_float64_complex;
  #else
    typedef npy_float64 _Complex __pyx_t_npy_float64_complex;
  #endif
#else
    typedef struct { npy_float64 real, imag; } __pyx_t_npy_float64_complex;
#endif

/*... loads of other stuff the same ... */

static CYTHON_INLINE __pyx_t_npy_float64_complex __pyx_t_npy_float64_complex_from_parts(npy_float64, npy_float64);

#if CYTHON_CCOMPLEX
    #define __Pyx_c_eq_npy_float64(a, b)   ((a)==(b))
    #define __Pyx_c_sum_npy_float64(a, b)  ((a)+(b))
    #define __Pyx_c_diff_npy_float64(a, b) ((a)-(b))
    #define __Pyx_c_prod_npy_float64(a, b) ((a)*(b))
    #define __Pyx_c_quot_npy_float64(a, b) ((a)/(b))
    #define __Pyx_c_neg_npy_float64(a)     (-(a))
  #ifdef __cplusplus
    #define __Pyx_c_is_zero_npy_float64(z) ((z)==(npy_float64)0)
    #define __Pyx_c_conj_npy_float64(z)    (::std::conj(z))
    #if 1
        #define __Pyx_c_abs_npy_float64(z)     (::std::abs(z))
        #define __Pyx_c_pow_npy_float64(a, b)  (::std::pow(a, b))
    #endif
  #else
    #define __Pyx_c_is_zero_npy_float64(z) ((z)==0)
    #define __Pyx_c_conj_npy_float64(z)    (conj_npy_float64(z))
    #if 1
        #define __Pyx_c_abs_npy_float64(z)     (cabs_npy_float64(z))
        #define __Pyx_c_pow_npy_float64(a, b)  (cpow_npy_float64(a, b))
    #endif
  #endif
#else
    static CYTHON_INLINE int __Pyx_c_eq_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_sum_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_diff_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_prod_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_quot_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_neg_npy_float64(__pyx_t_npy_float64_complex);
    static CYTHON_INLINE int __Pyx_c_is_zero_npy_float64(__pyx_t_npy_float64_complex);
    static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_conj_npy_float64(__pyx_t_npy_float64_complex);
    #if 1
        static CYTHON_INLINE npy_float64 __Pyx_c_abs_npy_float64(__pyx_t_npy_float64_complex);
        static CYTHON_INLINE __pyx_t_npy_float64_complex __Pyx_c_pow_npy_float64(__pyx_t_npy_float64_complex, __pyx_t_npy_float64_complex);
    #endif
#endif
```
Pythonic way to merge two overlapping lists, preserving order
Alright, so I have two lists. They can and will have overlapping items, for example, [1, 2, 3, 4, 5], [4, 5, 6, 7]. There will not be additional items in the overlap; for example, this will not happen: [1, 2, 3, 4, 5], [3.5, 4, 5, 6, 7]. The lists are not necessarily ordered nor unique: [9, 1, 1, 8, 7], [8, 6, 7]. I want to merge the lists such that existing order is preserved, and to merge at the last possible valid position, and such that no data is lost. Additionally, the first list might be huge. My current working code is as such:

```python
master = [1,3,9,8,3,4,5]
addition = [3,4,5,7,8]

def merge(master, addition):
    n = 1
    while n < len(master):
        if master[-n:] == addition[:n]:
            return master + addition[n:]
        n += 1
    return master + addition
```

What I would like to know is - is there a more efficient way of doing this? It works, but I'm slightly leery of this, because it can run into large runtimes in my application - I'm merging large lists of strings. EDIT: I'd expect the merge of [1,3,9,8,3,4,5], [3,4,5,7,8] to be [1,3,9,8,3,4,5,7,8] (the overlapping portion being 3, 4, 5). [9, 1, 1, 8, 7], [8, 6, 7] should merge to [9, 1, 1, 8, 7, 8, 6, 7].
You can try the following: >>> a = [1, 3, 9, 8, 3, 4, 5] >>> b = [3, 4, 5, 7, 8] >>> matches = (i for i in xrange(len(b), 0, -1) if b[:i] == a[-i:]) >>> i = next(matches, 0) >>> a + b[i:] [1, 3, 9, 8, 3, 4, 5, 7, 8] The idea is we check the first i elements of b (b[:i]) with the last i elements of a (a[-i:]). We take i in decreasing order, starting from the length of b until 1 (xrange(len(b), 0, -1)) because we want to match as much as possible. We take the first such i by using next and if we don't find it we use the zero value (next(..., 0)). From the moment we found the i, we add to a the elements of b from index i.
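For reference, a Python 3 sketch of the same idea packaged as a function (range replaces Python 2's xrange; the asserts restate the examples from the question):

```python
def merge(master, addition):
    # Largest i such that addition's first i items equal master's last i items.
    matches = (i for i in range(len(addition), 0, -1)
               if addition[:i] == master[-i:])
    i = next(matches, 0)  # no overlap found -> concatenate everything
    return master + addition[i:]

assert merge([1, 3, 9, 8, 3, 4, 5], [3, 4, 5, 7, 8]) == [1, 3, 9, 8, 3, 4, 5, 7, 8]
assert merge([9, 1, 1, 8, 7], [8, 6, 7]) == [9, 1, 1, 8, 7, 8, 6, 7]
```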
How to find out Chinese or Japanese Character in a String in Python?
Such as: str = 'sdf344asfasf天地方益3権sdfsdf' Add () to Chinese and Japanese Characters: strAfterConvert = 'sdfasfasf(天地方益)3(権)sdfsdf'
As a start, you can check if the character is in one of the following unicode blocks:

- Unicode Block 'CJK Unified Ideographs' - U+4E00 to U+9FFF
- Unicode Block 'CJK Unified Ideographs Extension A' - U+3400 to U+4DBF
- Unicode Block 'CJK Unified Ideographs Extension B' - U+20000 to U+2A6DF
- Unicode Block 'CJK Unified Ideographs Extension C' - U+2A700 to U+2B73F
- Unicode Block 'CJK Unified Ideographs Extension D' - U+2B740 to U+2B81F

After that, all you need to do is iterate through the string, checking if the char is Chinese, Japanese or Korean (CJK) and appending accordingly:

```python
# -*- coding:utf-8 -*-
ranges = [
    {"from": ord(u"\u3300"), "to": ord(u"\u33ff")},          # compatibility ideographs
    {"from": ord(u"\ufe30"), "to": ord(u"\ufe4f")},          # compatibility ideographs
    {"from": ord(u"\uf900"), "to": ord(u"\ufaff")},          # compatibility ideographs
    {"from": ord(u"\U0002F800"), "to": ord(u"\U0002fa1f")},  # compatibility ideographs
    {"from": ord(u"\u30a0"), "to": ord(u"\u30ff")},          # Japanese Kana
    {"from": ord(u"\u2e80"), "to": ord(u"\u2eff")},          # cjk radicals supplement
    {"from": ord(u"\u4e00"), "to": ord(u"\u9fff")},
    {"from": ord(u"\u3400"), "to": ord(u"\u4dbf")},
    {"from": ord(u"\U00020000"), "to": ord(u"\U0002a6df")},
    {"from": ord(u"\U0002a700"), "to": ord(u"\U0002b73f")},
    {"from": ord(u"\U0002b740"), "to": ord(u"\U0002b81f")},
    {"from": ord(u"\U0002b820"), "to": ord(u"\U0002ceaf")}   # included as of Unicode 8.0
]

def is_cjk(char):
    return any([block["from"] <= ord(char) <= block["to"] for block in ranges])

def cjk_substrings(string):
    i = 0
    while i < len(string):
        if is_cjk(string[i]):
            start = i
            # bounds check added so a trailing CJK character doesn't raise IndexError
            while i < len(string) and is_cjk(string[i]):
                i += 1
            yield string[start:i]
        i += 1

string = "sdf344asfasf天地方益3権sdfsdf".decode("utf-8")
for sub in cjk_substrings(string):
    string = string.replace(sub, "(" + sub + ")")
print string
```

The above prints sdf344asfasf(天地方益)3(権)sdfsdf. To be future-proof, you might want to keep a lookout for CJK Unified Ideographs Extension E. It will ship with Unicode 8.0, which is scheduled for release in June 2015. I've added it to the ranges, but you shouldn't include it until Unicode 8.0 is released. [EDIT] Added CJK compatibility ideographs, Japanese Kana and CJK radicals.
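For Python 3, a sketch of the same approach without the decode step (it reuses the is_cjk helper above and wraps each CJK run in a single pass instead of using str.replace):

```python
def tag_cjk(s):
    out, in_run = [], False
    for ch in s:
        if is_cjk(ch) != in_run:          # entering or leaving a CJK run
            out.append('(' if not in_run else ')')
            in_run = not in_run
        out.append(ch)
    if in_run:                            # close a run that reaches the end
        out.append(')')
    return ''.join(out)

print(tag_cjk("sdf344asfasf天地方益3権sdfsdf"))  # sdf344asfasf(天地方益)3(権)sdfsdf
```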
How to get current time in python and break up into year, month, day, hour, minute?
I would like to get the current time in Python and assign them into variables like year, month, day, hour, minute. How can this be done in Python 2.7?
The datetime module is your friend:

```python
import datetime

now = datetime.datetime.now()
print now.year, now.month, now.day, now.hour, now.minute, now.second
# 2015 5 6 8 53 40
```

You don't need separate variables; the attributes on the returned datetime object have all you need.
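If you really do want standalone variables, tuple unpacking keeps it to one line (a minor variation on the answer above):

```python
import datetime

now = datetime.datetime.now()
year, month, day, hour, minute = now.year, now.month, now.day, now.hour, now.minute
```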
Why is "1000000000000000 in range(1000000000000001)" so fast in Python 3?
It is my understanding that the range() function, which is actually an object type in Python 3, generates its contents on the fly, similar to a generator. This being the case, I would have expected the following line to take an inordinate amount of time, because in order to determine whether 1 quadrillion is in the range, a quadrillion values would have to be generated:

1000000000000000 in range(1000000000000001)

Furthermore: it seems that no matter how many zeroes I add on, the calculation more or less takes the same amount of time (basically instantaneous). I have also tried things like this, but the calculation is still almost instant:

1000000000000000000000 in range(0,1000000000000000000001,10)  # count by tens

If I try to implement my own range function, the result is not so nice!!

```python
def my_crappy_range(N):
    i = 0
    while i < N:
        yield i
        i += 1
    return
```

What is the range() object doing under the hood that makes it so fast?

EDIT: This has turned out to be a much more nuanced topic than I anticipated - there seems to be a bit of history behind the optimization of range(). Martijn Pieters' answer was chosen for its completeness, but also see abarnert's first answer for a good discussion of what it means for range to be a full-fledged sequence in Python 3, and some information/warning regarding potential inconsistency for __contains__ function optimization across Python implementations. abarnert's other answer goes into some more detail and provides links for those interested in the history behind the optimization in Python 3 (and lack of optimization of xrange in Python 2). Answers by poke and by wim provide the relevant C source code and explanations for those who are interested.
The Python 3 range() object doesn't produce numbers immediately; it is a smart sequence object that produces numbers on demand. All it contains is your start, stop and step values, then as you iterate over the object the next integer is calculated each iteration. The object also implements the object.__contains__ hook, and calculates if your number is part of its range. Calculating is an O(1) constant-time operation. There is never a need to scan through all possible integers in the range. From the range() object documentation:

The advantage of the range type over a regular list or tuple is that a range object will always take the same (small) amount of memory, no matter the size of the range it represents (as it only stores the start, stop and step values, calculating individual items and subranges as needed).

So at a minimum, your range() object would do:

```python
class my_range(object):
    def __init__(self, start, stop=None, step=1):
        if stop is None:
            start, stop = 0, start
        self.start, self.stop, self.step = start, stop, step
        if step < 0:
            lo, hi = stop, start
        else:
            lo, hi = start, stop
        self.length = ((hi - lo - 1) // abs(step)) + 1

    def __iter__(self):
        current = self.start
        if self.step < 0:
            while current > self.stop:
                yield current
                current += self.step
        else:
            while current < self.stop:
                yield current
                current += self.step

    def __len__(self):
        return self.length

    def __getitem__(self, i):
        if 0 <= i < self.length:
            return self.start + i * self.step
        raise IndexError('Index out of range: {}'.format(i))

    def __contains__(self, num):
        if self.step < 0:
            if not (self.stop < num <= self.start):
                return False
        else:
            if not (self.start <= num < self.stop):
                return False
        return (num - self.start) % self.step == 0
```

This is still missing several things that a real range() supports (such as the .index() or .count() methods, hashing, equality testing, or slicing), but should give you an idea. I also simplified the __contains__ implementation to only focus on integer tests; if you give a real range() object a non-integer value (including subclasses of int), a slow scan is initiated to see if there is a match, just as if you use a containment test against a list of all the contained values. This was done to continue to support other numeric types that just happen to support equality testing with integers but are not expected to support integer arithmetic as well. See the original Python issue that implemented the containment test.
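A quick demonstration of the my_range sketch above - membership, length, and indexing are all computed arithmetically, so huge ranges cost nothing:

```python
r = my_range(0, 1000000000000001)
print(1000000000000000 in r)  # True, no iteration involved
print(len(r))                 # 1000000000000001
print(r[123456789])           # 123456789, also computed directly
```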
How to join list in Python but make the last separator different?
I'm trying to turn a list into separated strings joined with an ampersand if there are only two items, or commas and an ampersand between the last two, e.g. Jones & Ben, or Jim, Jack & James. I currently have this: pa = ' & '.join(listauthors[search]) and don't know how to sort out the comma/ampersand issue. Beginner, so a full explanation would be appreciated.

" & ".join([", ".join(my_list[:-1]), my_list[-1]]) would work, I would think, or maybe just ", ".join(my_list[:-1]) + " & " + my_list[-1]. (The separators include spaces here so the output matches the "Jim, Jack & James" form in the question.) To handle the edge case where there are only two (or fewer) items, you could use " & ".join([", ".join(my_list[:-1]), my_list[-1]]) if len(my_list) > 2 else " & ".join(my_list)
Why are some float < integer comparisons four times slower than others?
When comparing floats to integers, some pairs of values take much longer to be evaluated than other values of a similar magnitude. For example:

```python
>>> import timeit
>>> timeit.timeit("562949953420000.7 < 562949953421000")  # run 1 million times
0.5387085462592742
```

But if the float or integer is made smaller or larger by a certain amount, the comparison runs much more quickly:

```python
>>> timeit.timeit("562949953420000.7 < 562949953422000")  # integer increased by 1000
0.1481498428446173
>>> timeit.timeit("562949953423001.8 < 562949953421000")  # float increased by 3001.1
0.1459577925548956
```

Changing the comparison operator (e.g. using == or > instead) does not affect the times in any noticeable way. This is not solely related to magnitude because picking larger or smaller values can result in faster comparisons, so I suspect it is down to some unfortunate way the bits line up. Clearly, comparing these values is more than fast enough for most use cases. I am simply curious as to why Python seems to struggle more with some pairs of values than with others.
A comment in the Python source code for float objects acknowledges that:

Comparison is pretty much a nightmare

This is especially true when comparing a float to an integer, because, unlike floats, integers in Python can be arbitrarily large and are always exact. Trying to cast the integer to a float might lose precision and make the comparison inaccurate. Trying to cast the float to an integer is not going to work either because any fractional part will be lost.

To get around this problem, Python performs a series of checks, returning the result if one of the checks succeeds. It compares the signs of the two values, then whether the integer is "too big" to be a float, then compares the exponent of the float to the length of the integer. If all of these checks fail, it is necessary to construct two new Python objects to compare in order to obtain the result.

When comparing a float v to an integer/long w, the worst case is that:

- v and w have the same sign (both positive or both negative),
- the integer w has few enough bits that it can be held in the size_t type (typically 32 or 64 bits),
- the integer w has at least 49 bits,
- the exponent of the float v is the same as the number of bits in w.

And this is exactly what we have for the values in the question:

```python
>>> import math
>>> math.frexp(562949953420000.7)  # gives the float's (significand, exponent) pair
(0.9999999999976706, 49)
>>> (562949953421000).bit_length()
49
```

We see that 49 is both the exponent of the float and the number of bits in the integer. Both numbers are positive and so the four criteria above are met. Choosing one of the values to be larger (or smaller) can change the number of bits of the integer, or the value of the exponent, and so Python is able to determine the result of the comparison without performing the expensive final check. This is specific to the CPython implementation of the language.

The comparison in more detail

The float_richcompare function handles the comparison between two values v and w. Below is a step-by-step description of the checks that the function performs. The comments in the Python source are actually very helpful when trying to understand what the function does, so I've left them in where relevant. I've also summarised these checks in a list at the foot of the answer.

The main idea is to map the Python objects v and w to two appropriate C doubles, i and j, which can then be easily compared to give the correct result. Both Python 2 and Python 3 use the same ideas to do this (the former just handles int and long types separately).

The first thing to do is check that v is definitely a Python float and map it to a C double i. Next the function looks at whether w is also a float and maps it to a C double j. This is the best case scenario for the function as all the other checks can be skipped. The function also checks to see whether v is inf or nan:

```c
static PyObject*
float_richcompare(PyObject *v, PyObject *w, int op)
{
    double i, j;
    int r = 0;
    assert(PyFloat_Check(v));
    i = PyFloat_AS_DOUBLE(v);

    if (PyFloat_Check(w))
        j = PyFloat_AS_DOUBLE(w);

    else if (!Py_IS_FINITE(i)) {
        if (PyLong_Check(w))
            j = 0.0;
        else
            goto Unimplemented;
    }
```

Now we know that if w failed these checks, it is not a Python float. Now the function checks if it's a Python integer. If this is the case, the easiest test is to extract the sign of v and the sign of w (return 0 if zero, -1 if negative, 1 if positive). If the signs are different, this is all the information needed to return the result of the comparison:

```c
else if (PyLong_Check(w)) {
    int vsign = i == 0.0 ? 0 : i < 0.0 ? -1 : 1;
    int wsign = _PyLong_Sign(w);
    size_t nbits;
    int exponent;

    if (vsign != wsign) {
        /* Magnitudes are irrelevant -- the signs alone
         * determine the outcome. */
        i = (double)vsign;
        j = (double)wsign;
        goto Compare;
    }
}
```

If this check failed, then v and w have the same sign. The next check counts the number of bits in the integer w. If it has too many bits then it can't possibly be held as a float and so must be larger in magnitude than the float v:

```c
nbits = _PyLong_NumBits(w);
if (nbits == (size_t)-1 && PyErr_Occurred()) {
    /* This long is so large that size_t isn't big enough
     * to hold the # of bits.  Replace with little doubles
     * that give the same outcome -- w is so large that
     * its magnitude must exceed the magnitude of any
     * finite float. */
    PyErr_Clear();
    i = (double)vsign;
    assert(wsign != 0);
    j = wsign * 2.0;
    goto Compare;
}
```

On the other hand, if the integer w has 48 or fewer bits, it can safely be turned into a C double j and compared:

```c
if (nbits <= 48) {
    j = PyLong_AsDouble(w);
    /* It's impossible that <= 48 bits overflowed. */
    assert(j != -1.0 || ! PyErr_Occurred());
    goto Compare;
}
```

From this point onwards, we know that w has 49 or more bits. It will be convenient to treat w as a positive integer, so change the sign and the comparison operator as necessary:

```c
if (vsign < 0) {
    /* "Multiply both sides" by -1; this also swaps the
     * comparator. */
    i = -i;
    op = _Py_SwappedOp[op];
}
```

Now the function looks at the exponent of the float. Recall that a float can be written (ignoring sign) as significand * 2^exponent and that the significand represents a number between 0.5 and 1:

```c
(void) frexp(i, &exponent);
if (exponent < 0 || (size_t)exponent < nbits) {
    i = 1.0;
    j = 2.0;
    goto Compare;
}
```

This checks two things. If the exponent is less than 0 then the float is smaller than 1 (and so smaller in magnitude than any integer). Or, if the exponent is less than the number of bits in w then we have that v < |w| since significand * 2^exponent is less than 2^nbits. Failing these two checks, the function looks to see whether the exponent is greater than the number of bits in w. This shows that significand * 2^exponent is greater than 2^nbits and so v > |w|:

```c
if ((size_t)exponent > nbits) {
    i = 2.0;
    j = 1.0;
    goto Compare;
}
```

If this check did not succeed we know that the exponent of the float v is the same as the number of bits in the integer w. The only way that the two values can be compared now is to construct two new Python integers from v and w. The idea is to discard the fractional part of v, double the integer part, and then add one. w is also doubled and these two new Python objects can be compared to give the correct return value. Using an example with small values, 4.65 < 4 would be determined by the comparison (2*4)+1 == 9 < 8 == (2*4) (returning false):

```c
{
    double fracpart;
    double intpart;
    PyObject *result = NULL;
    PyObject *one = NULL;
    PyObject *vv = NULL;
    PyObject *ww = w;

    // snip

    fracpart = modf(i, &intpart);  // split i (the double that v mapped to)
    vv = PyLong_FromDouble(intpart);

    // snip

    if (fracpart != 0.0) {
        /* Shift left, and or a 1 bit into vv
         * to represent the lost fraction. */
        PyObject *temp;

        one = PyLong_FromLong(1);

        temp = PyNumber_Lshift(ww, one);  // left-shift doubles an integer
        ww = temp;

        temp = PyNumber_Lshift(vv, one);
        vv = temp;

        temp = PyNumber_Or(vv, one);  // a doubled integer is even, so this adds 1
        vv = temp;
    }
    // snip
}
```

For brevity I've left out the additional error-checking and garbage-tracking Python has to do when it creates these new objects. Needless to say, this adds additional overhead and explains why the values highlighted in the question are significantly slower to compare than others.

Here is a summary of the checks that are performed by the comparison function. Let v be a float and cast it as a C double. Now, if w is also a float:

- Check whether w is nan or inf. If so, handle this special case separately depending on the type of w.
- If not, compare v and w directly by their representations as C doubles.

If w is an integer:

- Extract the signs of v and w. If they are different then we know v and w are different and which is the greater value.
- (The signs are the same.) Check whether w has too many bits to be a float (more than size_t). If so, w has greater magnitude than v.
- Check if w has 48 or fewer bits. If so, it can be safely cast to a C double without losing its precision and compared with v.
- (w has more than 48 bits. We will now treat w as a positive integer having changed the compare op as appropriate.)
- Consider the exponent of the float v. If the exponent is negative, then v is less than 1 and therefore less than any positive integer. Else, if the exponent is less than the number of bits in w then it must be less than w.
- If the exponent of v is greater than the number of bits in w then v is greater than w.
- (The exponent is the same as the number of bits in w.) The final check. Split v into its integer and fractional parts. Double the integer part and add 1 to compensate for the fractional part. Now double the integer w. Compare these two new integers instead to get the result.
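Based on the summary above, here is a hedged Python sketch that predicts whether a (float, int) pair lands on the expensive final check (it ignores the zero-sign and oversized-size_t corner cases for brevity):

```python
import math

def hits_slow_path(v, w):
    # Rough test: same sign, w doesn't fit 48 bits, exponent == bit count.
    if (v < 0) != (w < 0):
        return False                 # different signs: decided immediately
    nbits = abs(w).bit_length()
    if nbits <= 48:
        return False                 # w casts to a double exactly: fast
    _, exponent = math.frexp(abs(v))
    return exponent == nbits         # equal: new int objects must be built

print(hits_slow_path(562949953420000.7, 562949953421000))  # True  (slow pair)
print(hits_slow_path(562949953420000.7, 562949953422000))  # False (fast pair)
```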
Looking for idiomatic way to evaluate to False if argument is False in Python 3
I have a chain of functions, all defined elsewhere in the class: fus(roh(dah(inp))) where inp is either a dictionary, or bool(False). The desired result is that if inp, or any of the functions, evaluates to False, False is returned by the function stack. I attempted to use ternary operators, but they don't evaluate correctly.

```python
def func(inp):
    return int(inp['value']) + 1 if inp else False
```

throws a TypeError, bool not subscriptable, if inp == False, because inp['value'] is evaluated before the conditional. I know I can do it explicitly:

```python
def func(inp):
    if inp == False:
        return False
    else:
        return inp['value'] + 1
```

but there are a ton of functions, and this will nearly quadruple the length of my code. It's also rewriting the exact same lines of code again and again, which suggests to me that it is the wrong way to do things. I suspect that a decorator with arguments is the answer, but the more I play around with it the less sure I am about that.

```python
def validate_inp(inp):
    def decorator(func):
        def wrapper(*args):
            return func(inp) if inp else False
        return wrapper
    return decorator

@validate_inp(inp)
def func(inp):
    return int(inp['value']) + 1
```

Unfortunately the decorator call throws a NameError, 'inp' not defined. But I'm not sure if I'm using the decorator incorrectly, or the decorator is the wrong solution. Looking for comment, criticism, suggestion, and/or sanity check.

If you found this trying to solve your own problem: you probably want to be using empty dictionaries instead of boolean False. Props to @chepner. In my application, using False was "okay" but offered no advantages and caused some chunky blocks of code. I've found everything is simpler using an empty dictionary instead. I'm wrapping the functions that use the dict with a decorator that catches the KeyError thrown by referencing dict['value'] where dict is empty.
The decorator should look like this - note that it receives the function itself, not inp; inp only arrives when the wrapped function is called, which is why passing inp at decoration time raised a NameError:

```python
def validate_inp(fun):
    def wrapper(inp):
        return fun(inp) if inp else False
    return wrapper

@validate_inp
def func(inp):
    return int(inp['value']) + 1

print(func(False))
print(func({'value': 1}))
```

If you want to use your decorator with a class member:

```python
def validate_inp(fun):
    def wrapper(self, inp):
        return fun(self, inp) if inp else False
    return wrapper

class Foo(object):
    @validate_inp
    def func(self, inp):
        return int(inp['value']) + 1 if inp else False

foo = Foo()
print(foo.func(False))
print(foo.func({'value': 1}))
```
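If you adopt this pattern, wrapping with functools.wraps keeps the decorated function's name and docstring intact (a small refinement, not part of the original answer):

```python
import functools

def validate_inp(fun):
    @functools.wraps(fun)  # preserves fun.__name__, fun.__doc__, etc.
    def wrapper(inp):
        return fun(inp) if inp else False
    return wrapper
```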
Why does assigning to an empty list (e.g. [] = "") raise no error?
In Python 3.4, I am typing [] = "" and it works fine; no Exception is raised, though of course [] is not equal to "" afterwards. [] = () also works fine. "" = [] raises an exception as expected, and () = "" raises an exception as expected too. So, what's going on?
You are not comparing for equality. You are assigning. Python allows you to assign to multiple targets: foo, bar = 1, 2 assigns the two values to foo and bar, respectively. All you need is a sequence or iterable on the right-hand side, and a list or tuple of names on the left. When you do: [] = "" you assigned an empty sequence (empty strings are sequences still) to an empty list of names. It is essentially the same thing as doing: [foo, bar, baz] = "abc" where you end up with foo = "a", bar = "b" and baz = "c", but with fewer characters. You cannot, however, assign to a string, so "" on the left-hand side of an assignment never works and is always a syntax error. See the Assignment statements documentation: An assignment statement evaluates the expression list (remember that this can be a single expression or a comma-separated list, the latter yielding a tuple) and assigns the single resulting object to each of the target lists, from left to right. and Assignment of an object to a target list, optionally enclosed in parentheses or square brackets, is recursively defined as follows. Emphasis mine. That Python doesn't throw a syntax error for the empty list is actually a bit of a bug! The officially documented grammar doesn't allow for an empty target list, and for the empty () you do get an error. See bug 23275; it is considered a harmless bug: The starting point is recognizing that this has been around for very long time and is harmless. Also see Why is it valid to assign to an empty list but not to an empty tuple?
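A short interactive check of the behaviour described above (Python 3):

```python
[] = ""    # fine: zero targets, zero-length sequence
[] = ()    # also fine

try:
    [] = "a"   # non-empty sequence, zero targets
except ValueError as e:
    print(e)   # too many values to unpack (expected 0)

# "" = []  would be a SyntaxError: string literals are never valid assignment targets
```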
Python in Browser: How to choose between Brython, PyPy.js, Skulpt and Transcrypt?
EDIT: Please note I'm NOT asking for a (subjective) product recommendation. I am asking for objective information -- that I can then use to make my own decision. I'm very excited to see that it is now possible to code Python inside a browser page. The main candidates appear to be: http://www.brython.info/ http://www.skulpt.org/ http://pypyjs.org/ http://transcrypt.org/ <-- Added July 2016 (If there is another viable candidate I'm missing, please put me right!) But how to choose between them? (EDIT: Please note, I'm not asking for a candidate nomination. I'm seeking information that will allow me to make an educated choice.) The only obvious difference I can see is that Skulpt emulates Python 2.x whereas Brython emulates Python 3.x.
This might be helpful too: http://stromberg.dnsalias.org/~strombrg/pybrowser/python-browser.html It compares several Python-in-the-browser technologies.
Is there a faster way to clean out control characters in a file?
Previously, I had been cleaning out data using the code snippet below:

```python
import unicodedata, re, io

all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c)[0] == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))

def rm_control_chars(s):
    # see http://www.unicode.org/reports/tr44/#General_Category_Values
    return cc_re.sub('', s)

cleanfile = []
with io.open('filename.txt', 'r', encoding='utf8') as fin:
    for line in fin:
        line = rm_control_chars(line)
        cleanfile.append(line)
```

There are newline characters in the file that I want to keep. The following records the time taken for cc_re.sub('', s) to substitute the first few lines (1st column is the time taken and 2nd column is len(s)):

0.275146961212 251
0.672796010971 614
0.178567171097 163
0.200030088425 180
0.236430883408 215
0.343492984772 313
0.317672967911 290
0.160616159439 142
0.0732028484344 65
0.533437013626 468
0.260229110718 236
0.231380939484 204
0.197766065598 181
0.283867120743 258
0.229172945023 208

As @ashwinichaudhary suggested, using s.translate(dict.fromkeys(control_chars)) and the same time taken log outputs:

0.464188098907 252
0.366552114487 615
0.407374858856 164
0.322507858276 181
0.35142993927 216
0.319973945618 314
0.324357032776 291
0.371646165848 143
0.354818105698 66
0.351796150208 469
0.388131856918 237
0.374715805054 205
0.363368988037 182
0.425950050354 259
0.382766962051 209

But the code is really slow for my 1GB of text. Is there any other way to clean out control characters?
Found a solution working character by character. I benchmarked it using a generated test file:

```python
import unicodedata, re, io
from time import time

# This is to generate randomly a file to test the script
from string import lowercase
from random import random

all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = [c for c in all_chars if unicodedata.category(c)[0] == 'C']
chars = (list(u'%s' % lowercase) * 115117) + control_chars

fnam = 'filename.txt'
out = io.open(fnam, 'w')
for line in range(1000000):
    out.write(u''.join(chars[int(random()*len(chars))] for _ in range(600)) + u'\n')
out.close()

# version proposed by alvas
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = ''.join(c for c in all_chars if unicodedata.category(c)[0] == 'C')
cc_re = re.compile('[%s]' % re.escape(control_chars))

def rm_control_chars(s):
    return cc_re.sub('', s)

t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        line = rm_control_chars(line)
        cleanfile.append(line)
out = io.open(fnam + '_out1.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0

# using a set and checking character by character
all_chars = (unichr(i) for i in xrange(0x110000))
control_chars = set(c for c in all_chars if unicodedata.category(c)[0] == 'C')

def rm_control_chars_1(s):
    return ''.join(c for c in s if not c in control_chars)

t0 = time()
cleanfile = []
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        line = rm_control_chars_1(line)
        cleanfile.append(line)
out = io.open(fnam + '_out2.txt', 'w')
out.write(''.join(cleanfile))
out.close()
print time() - t0
```

the output is:

114.625444174
0.0149750709534

I tried on a file of 1 GB (only for the second one) and it lasted 186s. I also wrote this other version of the same script, slightly faster (176s), and more memory efficient (for very large files not fitting in RAM):

```python
t0 = time()
out = io.open(fnam + '_out5.txt', 'w')
with io.open(fnam, 'r', encoding='utf8') as fin:
    for line in fin:
        out.write(rm_control_chars_1(line))
out.close()
print time() - t0
```
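For reference, a Python 3 sketch of the set-based idea using str.translate, which deletes mapped characters in C; it deliberately keeps newlines, since the question says they must be preserved:

```python
import sys
import unicodedata

# Map every control character (Unicode category 'C') to None, except newline.
control_table = dict.fromkeys(
    i for i in range(sys.maxunicode + 1)
    if unicodedata.category(chr(i))[0] == 'C' and chr(i) != '\n'
)

def rm_control_chars(s):
    return s.translate(control_table)  # None values are deleted

with open('filename.txt', encoding='utf8') as fin, \
     open('filename_clean.txt', 'w', encoding='utf8') as fout:
    for line in fin:
        fout.write(rm_control_chars(line))
```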
Obtain a list containing string elements excluding elements prefixed with any other element from initial list
I have some trouble with filtering a list of strings. I found a similar question here but it is not what I need. The input list is:

l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']

and the expected result is

['ab', 'xc', 'sdfdg']

The order of the items in the result is not important. The filter function must be fast because the size of the list is big. My current solution is

```python
l = ['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']
for i in range(0, len(l) - 1):
    for j in range(i + 1, len(l)):
        if l[j].startswith(l[i]):
            l[j] = l[i]
        else:
            if l[i].startswith(l[j]):
                l[i] = l[j]
print list(set(l))
```

EDIT: After multiple tests with a big input data, a list with 1500000 strings, my best solution for this is:

```python
def filter(l):
    if l == []:
        return []
    l2 = []
    l2.append(l[0])
    llen = len(l)
    k = 0
    itter = 0
    while k < llen:
        addkelem = ''
        j = 0
        l2len = len(l2)
        while j < l2len:
            if (l2[j].startswith(l[k]) and l[k] != l2[j]):
                l2[j] = l[k]
                l.remove(l[k])
                llen -= 1
                j -= 1
                addkelem = ''
                continue
            if (l[k].startswith(l2[j])):
                addkelem = ''
                break
            elif (l[k] not in l2):
                addkelem = l[k]
            j += 1
        if addkelem != '':
            l2.append(addkelem)
            addkelem = ''
        k += 1
    return l2
```

for which the execution time is around 213 seconds. Sample input data - each line is a string in the list.
This algorithm completes the task in 0.97 seconds on my computer, with the input file submitted by the author (154MB):

```python
l.sort()
last_str = l[0]
filtered = [last_str]
app = filtered.append  # cache the bound method for speed
for s in l:
    if not s.startswith(last_str):
        last_str = s
        app(s)
# Commented because of the massive amount of data to print.
# print filtered
```

The algorithm is simple: first sort the list lexicographically, then search for the first string which isn't prefixed by the very first one of the list, then search the next one which isn't prefixed by the last unprefixed one, etc. If the list is already sorted (your example file seems to be already sorted), you can remove the l.sort() line, which will result in O(n) complexity in both time and memory.
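Packaged as a function and run on the question's example (a sketch; note the result comes back in sorted order, which the question says is acceptable):

```python
def filter_prefixed(strings):
    if not strings:
        return []
    strings = sorted(strings)  # prefixes sort immediately before their extensions
    last = strings[0]
    out = [last]
    for s in strings[1:]:
        if not s.startswith(last):
            last = s
            out.append(s)
    return out

print(filter_prefixed(['ab', 'xc', 'abb', 'abed', 'sdfdg', 'abfdsdg', 'xccc']))
# ['ab', 'sdfdg', 'xc']
```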
"yield from iterable" vs "return iter(iterable)"
When wrapping an (internal) iterator one often has to reroute the __iter__ method to the underlying iterable. Consider the following example: class FancyNewClass(collections.Iterable): def __init__(self): self._internal_iterable = [1,2,3,4,5] # ... # variant A def __iter__(self): return iter(self._internal_iterable) # variant B def __iter__(self): yield from self._internal_iterable Is there any significant difference between variant A and B? Variant A returns an iterator object that has been queried via iter() from the internal iterable. Variant B returns a generator object that returns values from the internal iterable. Is one or the other preferable for some reason? In collections.abc the yield from version is used. The return iter() variant is the pattern that I have used until now.
The only significant difference is what happens when an exception is raised from within the iterable. Using return iter() your FancyNewClass will not appear on the exception traceback, whereas with yield from it will. It is generally a good thing to have as much information on the traceback as possible, although there could be situations where you want to hide your wrapper. Other differences: return iter has to load the name iter from globals - this is potentially slow (although unlikely to significantly affect performance) and could be messed with (although anyone who overwrites globals like that deserves what they get). With yield from you can insert other yield expressions before and after (although you could equally use itertools.chain). As presented, the yield from form discards any generator return value (i.e. the value carried by StopIteration(value)). You can fix this by writing instead return (yield from iterator). Here's a test comparing the disassembly of the two approaches and also showing exception tracebacks: http://ideone.com/1YVcSe

Using return iter():

```
  3  0 LOAD_GLOBAL      0 (iter)
     3 LOAD_FAST        0 (it)
     6 CALL_FUNCTION    1 (1 positional, 0 keyword pair)
     9 RETURN_VALUE

Traceback (most recent call last):
  File "./prog.py", line 12, in test
  File "./prog.py", line 10, in i
RuntimeError
```

Using return (yield from):

```
  5  0 LOAD_FAST        0 (it)
     3 GET_ITER
     4 LOAD_CONST       0 (None)
     7 YIELD_FROM
     8 RETURN_VALUE

Traceback (most recent call last):
  File "./prog.py", line 12, in test
  File "./prog.py", line 5, in bar
  File "./prog.py", line 10, in i
RuntimeError
```
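A small demonstration of the return (yield from ...) fix mentioned above - it propagates the wrapped generator's return value, which a bare yield from wrapper would otherwise discard:

```python
def inner():
    yield 1
    return 'done'               # carried in StopIteration('done')

def wrapper(it):
    return (yield from it)      # re-returns inner()'s return value

def consumer():
    result = yield from wrapper(inner())
    print('inner returned:', result)

list(consumer())                # yields 1, then prints: inner returned: done
```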
How can I use super() with one argument in python
While reading about the super() object in Python, I read the following statement: If the second argument is omitted, the super object returned is unbound What does this exactly mean and how do I use super() with one argument in code?
Python function objects are descriptors, and Python uses the descriptor protocol to bind functions to an instance. This process produces a bound method. Binding is what makes the 'magic' self argument appear when you call a method, and what makes a property object automatically call methods when you try to use the property as an attribute on instances. super() with two arguments invokes the same descriptor protocol when you try to use it to look up methods on parent classes; super(Foo, self).bar() will traverse the Foo parent classes until an attribute bar is found, and if that is an object that is a descriptor, it'll be bound to self. Calling bar then calls the bound method, which in turn calls the function passing in the self argument as bar(self). To do this, the super() object stores the class (first argument), the self argument to bind with, and the type of the self object as attributes:

```python
>>> class Foo(object):
...     def bar(self):
...         return 'bar on Foo'
...
>>> class Spam(Foo):
...     def bar(self):
...         return 'bar on Spam'
...
>>> spam = Spam()
>>> super(Spam, spam)
<super: <class 'Spam'>, <Spam object>>
>>> super(Spam, spam).__thisclass__
<class '__main__.Spam'>
>>> super(Spam, spam).__self_class__
<class '__main__.Spam'>
>>> super(Spam, spam).__self__
<__main__.Spam object at 0x107195c10>
```

When looking up attributes, the __mro__ attribute of the __self_class__ attribute is searched, starting one position past the position of __thisclass__, and the results are bound. super() with just one argument will miss the __self_class__ and __self__ attributes, and cannot do lookups yet:

```python
>>> super(Spam)
<super: <class 'Spam'>, NULL>
>>> super(Spam).__self_class__ is None
True
>>> super(Spam).__self__ is None
True
>>> super(Spam).bar
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'super' object has no attribute 'bar'
```

The object does support the descriptor protocol, so you can bind it just like you can bind a method:

```python
>>> super(Spam).__get__(spam, Spam)
<super: <class 'Spam'>, <Spam object>>
>>> super(Spam).__get__(spam, Spam).bar()
'bar on Foo'
```

This means you can store such an object on a class and use it to traverse to parent methods:

```python
>>> class Eggs(Spam):
...     pass
...
>>> Eggs.parent = super(Eggs)
>>> eggs = Eggs()
>>> eggs.parent
<super: <class 'Eggs'>, <Eggs object>>
>>> eggs.parent.bar()
'bar on Spam'
```
How to multiply functions in python?
def sub3(n): return n - 3 def square(n): return n * n It's dead easy to compose functions in python: >>> my_list [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] >>> [square(sub3(n)) for n in my_list] [9, 4, 1, 0, 1, 4, 9, 16, 25, 36] Unfortunately, when wanting to use the composition as a key, it's kind of lame: >>> sorted(my_list, key=lambda n: square(sub3(n))) [3, 2, 4, 1, 5, 0, 6, 7, 8, 9] This should really just be sorted(my_list, key=square*sub3), because heck, function __mul__ isn't used for anything else anyway: >>> square * sub3 TypeError: unsupported operand type(s) for *: 'function' and 'function' Well let's just define it then! >>> type(sub3).__mul__ = 'something' TypeError: can't set attributes of built-in/extension type 'function' D'oh! >>> class CoolerFunction(types.FunctionType): ... pass ... TypeError: Error when calling the metaclass bases type 'function' is not an acceptable base type D'oh! class Hack(object): def __init__(self, function): self.function = function def __call__(self, *args, **kwargs): return self.function(*args, **kwargs) def __mul__(self, other): def hack(*args, **kwargs): return self.function(other(*args, **kwargs)) return Hack(hack) Hey, now we're getting somewhere.. >>> square = Hack(square) >>> sub3 = Hack(sub3) >>> [square(sub3(n)) for n in my_list] [9, 4, 1, 0, 1, 4, 9, 16, 25, 36] >>> [(square*sub3)(n) for n in my_list] [9, 4, 1, 0, 1, 4, 9, 16, 25, 36] >>> sorted(my_list, key=square*sub3) [3, 2, 4, 1, 5, 0, 6, 7, 8, 9] But I don't want a Hack callable class! The scoping rules are totally different in ways I don't fully understand, and this is even uglier than the "lameda" arguably. I want to monkeypatch the functions. How can I do that?
You can use your hack class as a decorator pretty much as it's written, though you'd likely want to choose a more appropriate name for the class. Like this:

```python
class Composable(object):
    def __init__(self, function):
        self.function = function

    def __call__(self, *args, **kwargs):
        return self.function(*args, **kwargs)

    def __mul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return self.function(other(*args, **kwargs))
        return composed

    def __rmul__(self, other):
        @Composable
        def composed(*args, **kwargs):
            return other(self.function(*args, **kwargs))
        return composed
```

You can then decorate your functions like so:

```python
@Composable
def sub3(n):
    return n - 3

@Composable
def square(n):
    return n * n
```

And compose them like so: (square * sub3)(n). Basically it's the same thing you've accomplished using your hack class, but using it as a decorator.
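Because __rmul__ is defined, a plain undecorated function can even sit on the left of the composition (a quick sketch using the class and decorated functions above):

```python
def add_one(n):        # deliberately left undecorated
    return n + 1

f = add_one * square   # falls back to Composable.__rmul__: add_one(square(n))
print(f(3))            # 10
```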
How to save Python NLTK alignment models for later use?
In Python, I'm using NLTK's alignment module to create word alignments between parallel texts. Aligning bitexts can be a time-consuming process, especially when done over considerable corpora. It would be nice to do alignments in batch one day and use those alignments later on.

```python
from nltk import IBMModel1 as ibm

biverses = [list of AlignedSent objects]
model = ibm(biverses, 20)

with open(path + "eng-taq_model.txt", 'w') as f:
    f.write(model.train(biverses, 20))  # makes empty file
```

Once I create a model, how can I (1) save it to disk and (2) reuse it later?
The immediate answer is to pickle it, see https://wiki.python.org/moin/UsingPickle But because IBMModel1 returns a lambda function, it's not possible to pickle it with the default pickle / cPickle (see https://github.com/nltk/nltk/blob/develop/nltk/align/ibm1.py#L74 and https://github.com/nltk/nltk/blob/develop/nltk/align/ibm1.py#L104) So we'll use dill. Firstly, install dill, see Can Python pickle lambda functions? $ pip install dill $ python >>> import dill as pickle Then: >>> import dill >>> import dill as pickle >>> from nltk.corpus import comtrans >>> from nltk.align import IBMModel1 >>> bitexts = comtrans.aligned_sents()[:100] >>> ibm = IBMModel1(bitexts, 20) >>> with open('model1.pk', 'wb') as fout: ... pickle.dump(ibm, fout) ... >>> exit() To use pickled model: >>> import dill as pickle >>> from nltk.corpus import comtrans >>> bitexts = comtrans.aligned_sents()[:100] >>> with open('model1.pk', 'rb') as fin: ... ibm = pickle.load(fin) ... >>> aligned_sent = ibm.align(bitexts[0]) >>> aligned_sent.words ['Wiederaufnahme', 'der', 'Sitzungsperiode'] If you try to pickle the IBMModel1 object, which is a lambda function, you'll end up with this: >>> import cPickle as pickle >>> from nltk.corpus import comtrans >>> from nltk.align import IBMModel1 >>> bitexts = comtrans.aligned_sents()[:100] >>> ibm = IBMModel1(bitexts, 20) >>> with open('model1.pk', 'wb') as fout: ... pickle.dump(ibm, fout) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/usr/lib/python2.7/copy_reg.py", line 70, in _reduce_ex raise TypeError, "can't pickle %s objects" % base.__name__ TypeError: can't pickle function objects (Note: the above code snippet comes from NLTK version 3.0.0) In python3 with NLTK 3.0.0, you will also face the same problem because IBMModel1 returns a lambda function: alvas@ubi:~$ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:11) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import pickle >>> from nltk.corpus import comtrans >>> from nltk.align import IBMModel1 >>> bitexts = comtrans.aligned_sents()[:100] >>> ibm = IBMModel1(bitexts, 20) >>> with open('mode1.pk', 'wb') as fout: ... pickle.dump(ibm, fout) ... Traceback (most recent call last): File "<stdin>", line 2, in <module> _pickle.PicklingError: Can't pickle <function IBMModel1.train.<locals>.<lambda> at 0x7fa37cf9d620>: attribute lookup <lambda> on nltk.align.ibm1 failed' >>> import dill >>> with open('model1.pk', 'wb') as fout: ... dill.dump(ibm, fout) ... >>> exit() alvas@ubi:~$ python3 Python 3.4.0 (default, Apr 11 2014, 13:05:11) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import dill >>> from nltk.corpus import comtrans >>> with open('model1.pk', 'rb') as fin: ... ibm = dill.load(fin) ... >>> bitexts = comtrans.aligned_sents()[:100] >>> aligned_sent = ibm.aligned(bitexts[0]) Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: 'IBMModel1' object has no attribute 'aligned' >>> aligned_sent = ibm.align(bitexts[0]) >>> aligned_sent.words ['Wiederaufnahme', 'der', 'Sitzungsperiode'] (Note: In python3, pickle is cPickle, see http://docs.pythonsprints.com/python3_porting/py-porting.html)
running a python package after compiling and uploading to pypicloud server
Folks, After building and deploying a package called myShtuff to a local pypicloud server, I am able to install it into a separate virtual env. Everything seems to work, except for the path of the executable... (venv)[ec2-user@ip-10-0-1-118 ~]$ pip freeze Fabric==1.10.1 boto==2.38.0 myShtuff==0.1 ecdsa==0.13 paramiko==1.15.2 pycrypto==2.6.1 wsgiref==0.1.2 If I try running the script directly, I get: (venv)[ec2-user@ip-10-0-1-118 ~]$ myShtuff -bash: myShtuff: command not found However, I can run it via: (venv)[ec2-user@ip-10-0-1-118 ~]$ python /home/ec2-user/venv/lib/python2.7/site-packages/myShtuff/myShtuff.py ..works Am I making a mistake when building the package? Somewhere in setup.cfg or setup.py? Thanks!!!
You need a __main__.py in your package, and an entry point defined in setup.py. See here and here, but in short: your __main__.py runs whatever your main functionality is when running your module using python -m, and setuptools can expose whatever functions you want as command-line scripts. You can do either or both. Your __main__.py looks like:

```python
from .stuff import my_main_func

if __name__ == "__main__":
    my_main_func()
```

and in setup.py:

```python
entry_points={
    'console_scripts': [
        'myShtuffscript = myShtuff.stuff:my_main_func'
    ]
}
```

Here, myShtuffscript is whatever you want the executable to be called, myShtuff the name of your package, stuff the name of a file in the package (myShtuff/stuff.py), and my_main_func the name of a function in that file.
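Tying it together, a minimal setup.py sketch (names assumed from the question: package myShtuff, module stuff.py, entry function my_main_func):

```python
from setuptools import setup, find_packages

setup(
    name='myShtuff',
    version='0.1',
    packages=find_packages(),
    entry_points={
        'console_scripts': [
            'myShtuffscript = myShtuff.stuff:my_main_func',
        ],
    },
)
```

After rebuilding, uploading, and reinstalling into the virtualenv, pip places a myShtuffscript executable on the venv's PATH.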
Calling Scrapy from another file without threading
I have to call a crawler from another python file, for which I use : def crawl_koovs(): spider = SomeSpider() settings = get_project_settings() crawler = Crawler(settings) crawler.signals.connect(reactor.stop, signal=signals.spider_closed) crawler.configure() crawler.crawl(spider) crawler.start() log.start() reactor.run() On running this, I get the error: exceptions.ValueError: signal only works in main thread The only workaround I could find is to use reactor.run(installSignalHandlers=False) which I don't want to use as I want to call this method multiple times and want reactor to be stopped before the next call. What can I do to make this work (maybe force the crawler to start in the same 'main' thread)?
The first thing I would say is: when you're executing Scrapy from an external file, the log level is set to INFO; you should change it to DEBUG to see what's happening if your code doesn't work. Change the line:

log.start()

to:

log.start(loglevel=log.DEBUG)

To store everything in the log and generate a text file (for debugging purposes) you can do:

log.start(logfile="file.log", loglevel=log.DEBUG, crawler=crawler, logstdout=False)

About the signals issue: with the log level changed to DEBUG maybe you can see some output that can help you fix it. You can try to put your script into the Scrapy project folder to see if it still crashes. If you change the line:

crawler.signals.connect(reactor.stop, signal=signals.spider_closed)

to:

dispatcher.connect(reactor.stop, signals.spider_closed)

what does it say? Depending on your Scrapy version it may be deprecated.
Basics of recursion in Python
"Write a recursive function, "listSum" that takes a list of integers and returns the sum of all integers in the list". Example: >>>> listSum([1,3,4,5,6]) 19 I know how to do this another way but not in the recursive way. def listSum(ls): i = 0 s = 0 while i < len(ls): s = s + ls[i] i = i + 1 print s I need the basic way to do this since special built-in functions is not allowed.
Whenever you face a problem like this, try to express the result of the function with the same function. In your case, you can get the result by adding the first number with the result of calling the same function with rest of the elements in the list. For example, listSum([1, 3, 4, 5, 6]) = 1 + listSum([3, 4, 5, 6]) = 1 + (3 + listSum([4, 5, 6])) = 1 + (3 + (4 + listSum([5, 6]))) = 1 + (3 + (4 + (5 + listSum([6])))) = 1 + (3 + (4 + (5 + (6 + listSum([]))))) Now, what should be the result of listSum([])? It should be 0. That is called base condition of your recursion. When the base condition is met, the recursion will come to an end. Now, lets try to implement it. The main thing here is, splitting the list. You can use slicing to do that. Simple version >>> def listSum(ls): ... # Base condition ... if not ls: ... return 0 ... ... # First element + result of calling `listsum` with rest of the elements ... return ls[0] + listSum(ls[1:]) >>> >>> listSum([1, 3, 4, 5, 6]) 19 Tail Call Recursion Once you understand how the above recursion works, you can try to make it a little bit better. Now, to find the actual result, we are depending on the value of the previous function also. The return statement cannot immediately return the value till the recursive call returns a result. We can avoid this by, passing the current to the function parameter, like this >>> def listSum(ls, result): ... if not ls: ... return result ... return listSum(ls[1:], result + ls[0]) ... >>> listSum([1, 3, 4, 5, 6], 0) 19 Here, we pass what the initial value of the sum to be in the parameters, which is zero in listSum([1, 3, 4, 5, 6], 0). Then, when the base condition is met, we are actually accumulating the sum in the result parameter, so we return it. Now, the last return statement has listSum(ls[1:], result + ls[0]), where we add the first element to the current result and pass it again to the recursive call. This might be a good time to understand Tail Call. It would not be relevant to Python, as it doesn't do Tail call optimization. Passing around index version Now, you might think that we are creating so many intermediate lists. Can I avoid that? Of course, you can. You just need the index of the item to be processed next. But now, the base condition will be different. Since we are going to be passing index, how will we determine how the entire list has been processed? Well, if the index equals to the length of the list, then we have processed all the elements in it. >>> def listSum(ls, index, result): ... # Base condition ... if index == len(ls): ... return result ... ... # Call with next index and add the current element to result ... return listSum(ls, index + 1, result + ls[index]) ... >>> listSum([1, 3, 4, 5, 6], 0, 0) 19 Inner function version If you look at the function definition now, you are passing three parameters to it. Lets say you are going to release this function as an API. Will it be convenient for the users to pass three values, when they actually find the sum of a list? Nope. What can we do about it? We can create another function, which is local to the actual listSum function and we can pass all the implementation related parameters to it, like this >>> def listSum(ls): ... ... def recursion(index, result): ... if index == len(ls): ... return result ... return recursion(index + 1, result + ls[index]) ... ... return recursion(0, 0) ... 
>>> listSum([1, 3, 4, 5, 6]) 19 Now, when listSum is called, it just returns the return value of the recursion inner function, which accepts the index and the result parameters. Those values are now passed internally, not by the users of listSum. They just have to pass the list to be processed. In this case, if you observe the parameters, we are not passing ls to recursion but we are using it inside it. ls is accessible inside recursion because of the closure property. Default parameters version Now, if you want to keep it simple, without creating an inner function, you can make use of default parameters, like this >>> def listSum(ls, index=0, result=0): ... # Base condition ... if index == len(ls): ... return result ... ... # Call with next index and add the current element to result ... return listSum(ls, index + 1, result + ls[index]) ... >>> listSum([1, 3, 4, 5, 6]) 19 Now, if the caller doesn't explicitly pass any value, then 0 will be assigned to both index and result. Recursive Power problem Now, let's apply the same ideas to a different problem. For example, let's try to implement the power(base, exponent) function. It would return the value of base raised to the power exponent. power(2, 5) = 32 power(5, 2) = 25 power(3, 4) = 81 Now, how can we do this recursively? Let us try to understand how those results are achieved. power(2, 5) = 2 * 2 * 2 * 2 * 2 = 32 power(5, 2) = 5 * 5 = 25 power(3, 4) = 3 * 3 * 3 * 3 = 81 Hmmm, so we get the idea: the base multiplied by itself, exponent times, gives the result. Okay, how do we approach it? Let's try to define the solution with the same function. power(2, 5) = 2 * power(2, 4) = 2 * (2 * power(2, 3)) = 2 * (2 * (2 * power(2, 2))) = 2 * (2 * (2 * (2 * power(2, 1)))) What should be the result of anything raised to the power 1? The result will be the same number, right? We've got the base condition for our recursion :-) = 2 * (2 * (2 * (2 * 2))) = 2 * (2 * (2 * 4)) = 2 * (2 * 8) = 2 * 16 = 32 Alright, let's implement it. >>> def power(base, exponent): ... # Base condition, if `exponent` is lesser than or equal to 1, return `base` ... if exponent <= 1: ... return base ... ... return base * power(base, exponent - 1) ... >>> power(2, 5) 32 >>> power(5, 2) 25 >>> power(3, 4) 81 Okay, how do we define the tail-call-optimized version of it? Let's pass the current result as a parameter to the function itself and return it when the base condition is met. Let's keep it simple and use the default parameter approach directly. >>> def power(base, exponent, result=1): ... # Since we start with `1`, base condition would be exponent reaching 0 ... if exponent <= 0: ... return result ... ... return power(base, exponent - 1, result * base) ... >>> power(2, 5) 32 >>> power(5, 2) 25 >>> power(3, 4) 81 Now, we reduce the exponent value in every recursive call, multiply the result by base, and pass it to the recursive power call. We start with the value 1, because we are approaching the problem in reverse. The recursion will happen like this power(2, 5, 1) = power(2, 4, 1 * 2) = power(2, 4, 2) = power(2, 3, 2 * 2) = power(2, 3, 4) = power(2, 2, 4 * 2) = power(2, 2, 8) = power(2, 1, 8 * 2) = power(2, 1, 16) = power(2, 0, 16 * 2) = power(2, 0, 32) When exponent becomes zero, the base condition is met and the result is returned, so we get 32 :-)
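A side note not in the original answer: because CPython performs no tail call optimization, even the accumulator versions above raise "RuntimeError: maximum recursion depth exceeded" once the list is longer than roughly sys.getrecursionlimit() elements. A minimal iterative sketch for such inputs (assuming plain integer lists, as in the question):

import sys

def list_sum_iterative(ls):
    # straightforward accumulator loop; no recursion depth limit applies
    result = 0
    for item in ls:
        result += item
    return result

print(sys.getrecursionlimit())           # typically 1000
print(list_sum_iterative(range(10000)))  # 49995000; the recursive versions would overflow here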
Errata (erasures+errors) Berlekamp-Massey for Reed-Solomon decoding
I am trying to implement a Reed-Solomon encoder-decoder in Python supporting the decoding of both erasures and errors, and that's driving me crazy. The implementation currently supports decoding only errors or only erasures, but not both at the same time (even if it's below the theoretical bound of 2*errors+erasures <= (n-k) ). From Blahut's papers (here and here), it seems we only need to initialize the error locator polynomial with the erasure locator polynomial to implicitly compute the errata locator polynomial inside Berlekamp-Massey. This approach partially works for me: when I have 2*errors+erasures < (n-k)/2 it works, but in fact after debugging it only works because BM compute an errors locator polynomial that gets the exact same value as the erasure locator polynomial (because we are below the limit for errors-only correction), and thus it is truncated via galois fields and we end up with the correct value of the erasure locator polynomial (at least that's how I understand it, I may be wrong). However, when we go above (n-k)/2, for example if n = 20 and k = 11, thus we have (n-k)=9 erased symbols we can correct, if we feed in 5 erasures then BM just goes wrong. If we feed in 4 erasures + 1 error (we are still well below the bound since we have 2*errors+erasures = 2+4 = 6 < 9), the BM still goes wrong. The exact algorithm of Berlekamp-Massey I implemented can be found in this presentation (pages 15-17), but a very similar description can be found here and here, and here I attach a copy of the mathematical description: Now, I have an almost exact reproduction of this mathematical algorithm into a Python code. What I would like is to extend it to support erasures, which I tried by initializing the error locator sigma with the erasure locator: def _berlekamp_massey(self, s, k=None, erasures_loc=None): '''Computes and returns the error locator polynomial (sigma) and the error evaluator polynomial (omega). If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial. The parameter s is the syndrome polynomial (syndromes encoded in a generator function) as returned by _syndromes. Don't be confused with the other s = (n-k)/2 Notes: The error polynomial: E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1) j_1, j_2, ..., j_s are the error positions. (There are at most s errors) Error location X_i is defined: X_i = a^(j_i) that is, the power of a corresponding to the error location Error magnitude Y_i is defined: E_(j_i) that is, the coefficient in the error polynomial at position j_i Error locator polynomial: sigma(z) = Product( 1 - X_i * z, i=1..s ) roots are the reciprocals of the error locations ( 1/X_1, 1/X_2, ...) Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method). ''' # For errors-and-erasures decoding, see: Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. 
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf # or alternatively see the reference book by Blahut: Blahut, Richard E. Theory and practice of error control codes. Addison-Wesley, 1983. # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010. n = self.n if not k: k = self.k # Initialize: if erasures_loc: sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial B = [ Polynomial(erasures_loc.coefficients) ] else: sigma = [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations. B = [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial omega = [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed. A = [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega L = [ 0 ] # necessary variable to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder. # Polynomial constants: ONE = Polynomial(z0=GF256int(1)) ZERO = Polynomial(z0=GF256int(0)) Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift s2 = ONE + s # Iteratively compute the polynomials 2s times. The last ones will be # correct for l in xrange(0, n-k): K = l+1 # Goal for each iteration: Compute sigma[K] and omega[K] such that # (1 + s)*sigma[l] == omega[l] in mod z^(K) # For this particular loop iteration, we have sigma[l] and omega[l], # and are computing sigma[K] and omega[K] # First find Delta, the non-zero coefficient of z^(K) in # (1 + s) * sigma[l] # This delta is valid for l (this iteration) only Delta = ( s2 * sigma[l] ).get_coefficient(l+1) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial). # Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega. Delta = Polynomial(x0=Delta) # Can now compute sigma[K] and omega[K] from # sigma[l], omega[l], B[l], A[l], and Delta sigma.append( sigma[l] - Delta * Z * B[l] ) omega.append( omega[l] - Delta * Z * A[l] ) # Now compute the next B and A # There are two ways to do this # This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf # In fact it ensures that the degree of the final polynomials aren't too large. 
if Delta == ZERO or 2*L[l] > K \ or (2*L[l] == K and M[l] == 0): # Rule A B.append( Z * B[l] ) A.append( Z * A[l] ) L.append( L[l] ) M.append( M[l] ) elif (Delta != ZERO and 2*L[l] < K) \ or (2*L[l] == K and M[l] != 0): # Rule B B.append( sigma[l] // Delta ) A.append( omega[l] // Delta ) L.append( K - L[l] ) M.append( 1 - M[l] ) else: raise Exception("Code shouldn't have gotten here") return sigma[-1], omega[-1] Polynomial and GF256int are generic implementation of, respectively, polynomials and galois fields over 2^8. These classes are unit tested and they are, normally, bug proof. Same goes for the rest of the encoding/decoding methods for Reed-Solomon such as Forney and Chien search. The full code with a quick test case for the issue I am talking here can be found here: http://codepad.org/l2Qi0y8o Here's an example output: Encoded message: hello world�ꐙ�Ī`> ------- Erasures decoding: Erasure locator: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1 Syndrome: 149x^9 + 113x^8 + 29x^7 + 231x^6 + 210x^5 + 150x^4 + 192x^3 + 11x^2 + 41x Sigma: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1 Symbols positions that were corrected: [19, 18, 17, 16, 15] ('Decoded message: ', 'hello world', '\xce\xea\x90\x99\x8d\xc4\xaa`>') Correctly decoded: True ------- Errors+Erasures decoding for the message with only erasures: Erasure locator: 189x^5 + 88x^4 + 222x^3 + 33x^2 + 251x + 1 Syndrome: 149x^9 + 113x^8 + 29x^7 + 231x^6 + 210x^5 + 150x^4 + 192x^3 + 11x^2 + 41x Sigma: 101x^10 + 139x^9 + 5x^8 + 14x^7 + 180x^6 + 148x^5 + 126x^4 + 135x^3 + 68x^2 + 155x + 1 Symbols positions that were corrected: [187, 141, 90, 19, 18, 17, 16, 15] ('Decoded message: ', '\xf4\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00.\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00P\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\xe3\xe6\xffO> world', '\xce\xea\x90\x99\x8d\xc4\xaa`>') Correctly decoded: False ------- Errors+Erasures decoding for the message with erasures and one error: Erasure locator: 77x^4 + 96x^3 + 6x^2 + 206x + 1 Syndrome: 49x^9 + 107x^8 + x^7 + 109x^6 + 236x^5 + 15x^4 + 8x^3 + 133x^2 + 243x Sigma: 38x^9 + 98x^8 + 239x^7 + 85x^6 + 32x^5 + 168x^4 + 92x^3 + 225x^2 + 22x + 1 Symbols positions that were corrected: [19, 18, 17, 16] ('Decoded message: ', "\xda\xe1'\xccA world", '\xce\xea\x90\x99\x8d\xc4\xaa`>') Correctly decoded: False Here, the erasure decoding is always correct since it doesn't use BM at all to compute the erasure locator. Normally, the other two test cases should output the same sigma, but they simply don't. The fact that the problem comes from BM is blatant here when you compare the first two test cases: the syndrome and the erasure locator are the same, but the resulting sigma is totally different (in the second test, BM is used, while in the first test case with erasures only BM is not called). Thank you very much for any help or any idea on how I could debug this out. Note that your answers can be mathematical or code, but please explain what has gone wrong with my approach. 
/EDIT: still didn't find how to correctly implement an errata BM decoder (see my answer below). The bounty is offered to anyone who can fix the issue (or at least guide me to the solution). /EDIT2: silly me, sorry, I just re-read the schema and found that I missed the change in the assignment L = r - L - erasures_count... I have updated the code to fix that and re-accepted my answer.
After reading lots and lots of research papers and books, the only place where I have found the answer is in the book (readable online on Google Books, but not available as a PDF): "Algebraic codes for data transmission", Blahut, Richard E., 2003, Cambridge University Press. Here are some extracts of this book, which give the exact (except for the matricial/vectorized representation of polynomial operations) description of the Berlekamp-Massey algorithm I implemented: And here is the errata (errors-and-erasures) Berlekamp-Massey algorithm for Reed-Solomon: As you can see -- contrary to the usual description that you only need to initialize Lambda, the errors locator polynomial, with the value of the previously computed erasures locator polynomial -- you also need to skip the first v iterations, where v is the number of erasures. Note that it's not equivalent to skipping the last v iterations: you need to skip the first v iterations, because r (the iteration counter, K in my implementation) is used not only to count iterations but also to generate the correct discrepancy factor Delta. Here is the resulting code with the modifications to support erasures as well as errors up to v+2*e <= (n-k): def _berlekamp_massey(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0): '''Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the error evaluator polynomial (omega) at the same time. If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors. Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain). The parameter s is the syndrome polynomial (syndromes encoded in a generator function) as returned by _syndromes. Notes: The error polynomial: E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1) j_1, j_2, ..., j_s are the error positions. (There are at most s errors) Error location X_i is defined: X_i = α^(j_i) that is, the power of α (alpha) corresponding to the error location Error magnitude Y_i is defined: E_(j_i) that is, the coefficient in the error polynomial at position j_i Error locator polynomial: sigma(z) = Product( 1 - X_i * z, i=1..s ) roots are the reciprocals of the error locations ( 1/X_1, 1/X_2, ...) Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method). It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by solving one equation after another and updating the error locator polynomial. If it turns out that it cannot solve the equation at some step, then it computes the error and weights it by the last non-zero discriminant found, and delays the weighted result to increase the polynomial degree by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686 The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003. ''' # For errors-and-erasures decoding, see: "Algebraic Codes for Data Transmission", Richard E.
Blahut, 2003 and (but it's less complete): Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m # also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf # and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010. n = self.n if not k: k = self.k # Initialize, depending on if we include erasures or not: if erasures_loc: sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial. B = [ Polynomial(erasures_loc.coefficients) ] omega = [ Polynomial(erasures_eval.coefficients) ] # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial A = [ Polynomial(erasures_eval.coefficients) ] # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM) else: sigma = [ Polynomial([GF256int(1)]) ] # error locator polynomial. Also called Lambda in other notations. B = [ Polynomial([GF256int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial omega = [ Polynomial([GF256int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed. A = [ Polynomial([GF256int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega L = [ 0 ] # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder. # Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, thus the syndrome will be bigger than the number of ecc symbols (I don't know what purpose serves this shifting). If that's the case, then we need to account for the syndrome shifting when we use the syndrome such as inside BM, by skipping those prepended coefficients. # Another way to detect the shifting is to detect the 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except if there are no errors/erasures, in this case they are all 0). 
This however doesn't work with the modified Forney syndrome (that we do not use in this lib but it may be implemented in the future), which set to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors. synd_shift = 0 if len(s) > (n-k): synd_shift = len(s) - (n-k) # Polynomial constants: ONE = Polynomial(z0=GF256int(1)) ZERO = Polynomial(z0=GF256int(0)) Z = Polynomial(z1=GF256int(1)) # used to shift polynomials, simply multiply your poly * Z to shift # Precaching s2 = ONE + s # Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure). for l in xrange(0, n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial) K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!) # Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that # (1 + s)*sigma[l] == omega[l] in mod z^(K) # For this particular loop iteration, we have sigma[l] and omega[l], # and are computing sigma[l+1] and omega[l+1] # First find Delta, the non-zero coefficient of z^(K) in # (1 + s) * sigma[l] # Note that adding 1 to the syndrome s is not really necessary, you can do as well without. # This delta is valid for l (this iteration) only Delta = ( s2 * sigma[l] ).get_coefficient(K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial). # Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega. Delta = Polynomial(x0=Delta) # Can now compute sigma[l+1] and omega[l+1] from # sigma[l], omega[l], B[l], A[l], and Delta sigma.append( sigma[l] - Delta * Z * B[l] ) omega.append( omega[l] - Delta * Z * A[l] ) # Now compute the next support polynomials B and A # There are two ways to do this # This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf # In fact it ensures that the degree of the final polynomials aren't too large. if Delta == ZERO or 2*L[l] > K+erasures_count \ or (2*L[l] == K+erasures_count and M[l] == 0): #if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L # Rule A B.append( Z * B[l] ) A.append( Z * A[l] ) L.append( L[l] ) M.append( M[l] ) elif (Delta != ZERO and 2*L[l] < K+erasures_count) \ or (2*L[l] == K+erasures_count and M[l] != 0): # elif Delta != ZERO and len(sigma[l+1]) > len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L # Rule B B.append( sigma[l] // Delta ) A.append( omega[l] // Delta ) L.append( K - L[l] ) # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! 
Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures. M.append( 1 - M[l] ) else: raise Exception("Code shouldn't have gotten here") # Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms. if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):]) # Return the last result of the iterations (since BM computes iteratively, the last iteration being correct - it may already be correct before, but we're not sure) return sigma[-1], omega[-1] def _find_erasures_locator(self, erasures_pos): '''Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficient, eg: "hello worldxxxxxxxxx" is tampered to "h_ll_ worldxxxxxxxxx" with xxxxxxxxx being the ecc of length n-k=9, here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficients of the erased characters are n-1 - [1, 4] = [18, 15] = erasures_loc to be specified as an argument.''' # See: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Error_Control_Coding/lecture7.pdf and Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m erasures_loc = Polynomial([GF256int(1)]) # just to init because we will multiply, so it must be 1 so that the multiplication starts correctly without nulling any term # erasures_loc is very simple to compute: erasures_loc = prod(1 - x*alpha[j]**i) for i in erasures_pos and where alpha is the alpha chosen to evaluate polynomials (here in this library it's gf(3)). To generate c*x where c is a constant, we simply generate a Polynomial([c, 0]) where 0 is the constant and c is positioned to be the coefficient for x^1. See https://en.wikipedia.org/wiki/Forney_algorithm#Erasures for i in erasures_pos: erasures_loc = erasures_loc * (Polynomial([GF256int(1)]) - Polynomial([GF256int(self.generator)**i, 0])) return erasures_loc Note: Sigma, Omega, A, B, L and M are all lists of polynomials (so we keep the whole history of all intermediate polynomials we computed on each iteration). This can of course be optimized because we only really need Sigma[l], Sigma[l-1], Omega[l], Omega[l-1], A[l], B[l], L[l] and M[l] (so it's just Sigma and Omega that need to keep the previous iteration in memory, the other variables don't). Note2: the update flag L is tricky: in some implementations, doing it just as in Blahut's schema will lead to wrong failures when decoding.
In my past implementation, it was mandatory to use L = K - L - erasures_count to correctly decode both errors-and-erasures up to the Singleton bound, but in my latest implementation, I had to use L = K - L (even when there are erasures) to avoid wrong decoding failures. You should just try both on your own implementation and see which one doesn't produce any wrong decoding failures. See below in the issues for more info. The only issue with this algorithm is that it does not describe how to simultaneously compute Omega, the errors evaluator polynomial (the book describes how to initialize Omega for errors only, but not when decoding errors-and-erasures). I tried several variations and the above works, but not completely: at the end, Omega will include higher order terms that should have been cancelled. Probably Omega, or A (the error evaluator support polynomial), is not initialized with the right value. However, you can fix that by either trimming the Omega polynomial of the too high order terms (since it should have the same degree as Lambda/Sigma): if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):]) Or you can totally compute Omega from scratch after BM by using the errata locator Lambda/Sigma, which is always correctly computed: def _find_error_evaluator(self, synd, sigma, k=None): '''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).''' n = self.n if not k: k = self.k # Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1) -- From Blahut, Algebraic codes for data transmission, 2003 return (synd * sigma) % Polynomial([GF256int(1)] + [GF256int(0)] * (n-k+1)) # Note that you should NOT do (1+Synd(x)) as can be seen in some books because this won't work with all primitive generators. I am looking for a better solution in the following question on CSTheory. /EDIT: I will describe some of the issues I have had and how to fix them: don't forget to init the error locator polynomial with the erasures locator polynomial (that you can easily compute from the syndromes and erasures positions). if you can decode errors only and erasures only flawlessly, but limited to 2*errors + erasures <= (n-k)/2, then you forgot to skip the first v iterations. if you can decode both erasures-and-errors but up to 2*(errors+erasures) <= (n-k), then you forgot to update the assignment of L: L = i+1 - L - erasures_count instead of L = i+1 - L. But this may actually make your decoder fail in some cases depending on how you implemented your decoder, see the next point. my first decoder was limited to only one generator/prime polynomial/fcr, but when I updated it to be universal and added strict unit tests, the decoder failed when it shouldn't. It seems Blahut's schema above is wrong about L (the updating flag): it must be updated using L = K - L and not L = K - L - erasures_count, because this will lead to the decoder failing sometimes even though we are under the Singleton bound (and thus we should be decoding correctly!).
This seems to be confirmed by the fact that computing L = K - L will not only fix those decoding issues, but it will also give the exact same result as the alternative way to update without using the update flag L (ie, the condition if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]):). But this is weird: in my past implementation, L = K - L - erasures_count was mandatory for errors-and-erasures decoding, but now it seems it produces wrong failures. So you should just try with and without on your own implementation and see whether one or the other produces wrong failures for you. note that the condition 2*L[l] > K changes to 2*L[l] > K+erasures_count. You may not notice any side effect without adding the condition +erasures_count at first, but in some cases the decoding will fail when it shouldn't. if you can fix only exactly one error or erasure, check that your condition is 2*L[l] > K+erasures_count and not 2*L[l] >= K+erasures_count (notice the > instead of >=). if you can correct 2*errors + erasures <= (n-k-2) (just below the limit, eg, if you have 10 ecc symbols, you can correct only 4 errors instead of 5 normally) then check your syndrome and your loop inside the BM algo: if the syndrome starts with a 0 coefficient for the constant term x^0 (which is sometimes advised in books), then your syndrome is shifted, and then your loop inside BM must start at 1 and end at n-k+1 instead of 0:(n-k) if not shifted. If you can correct every symbol but the last one (the last ecc symbol), then check your ranges, particularly in your Chien search: you should not evaluate the error locator polynomial from alpha^0 to alpha^255 but from alpha^1 to alpha^256.
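One more defensive check worth adding after the Chien search, sketched here as an assumption (it is not part of the original code, and _chien_search is a hypothetical helper name; sigma is the errata locator from the code above): if the number of roots found does not match the degree of the errata locator, the locator is degenerate and the decoder should report a failure instead of silently returning a corrupted message.

# Hedged sketch: post-Chien-search sanity check (names assumed, not from the original code)
error_positions = self._chien_search(sigma)  # hypothetical helper returning the root positions
if len(error_positions) != sigma.degree:
    # more errata than the locator can account for: decoding has failed
    raise ValueError("Errata locator degree does not match the number of roots found")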
Why is [] faster than list()?
I recently compared the processing speeds of [] and list() and was surprised to discover that [] runs more than three times faster than list(). I ran the same test with {} and dict() and the results were practically identical: [] and {} both took around 0.128sec / million cycles, while list() and dict() took roughly 0.428sec / million cycles each. Why is this? Do [] and {} (probably () and '', too) immediately pass back copies of some empty stock literal while their explicitly-named counterparts (list(), dict(), tuple(), str()) fully go about creating an object, whether or not they actually have elements? I have no idea how these two methods differ but I'd love to find out. I couldn't find an answer in the docs or on SO, and searching for empty brackets turned out to be more complicated than I'd expected. I got my timing results by calling timeit.timeit("[]") and timeit.timeit("list()"), and timeit.timeit("{}") and timeit.timeit("dict()"), to compare lists and dictionaries, respectively. I'm running Python 2.7.9. I recently discovered "Why is if True slower than if 1?" that compares the performance of if True to if 1 and seems to touch on a similar literal-versus-global scenario; perhaps it's worth considering as well.
Because [] and {} are literal syntax. Python can create bytecode just to create the list or dictionary objects: >>> import dis >>> dis.dis(compile('[]', '', 'eval')) 1 0 BUILD_LIST 0 3 RETURN_VALUE >>> dis.dis(compile('{}', '', 'eval')) 1 0 BUILD_MAP 0 3 RETURN_VALUE list() and dict() are separate objects. Their names need to be resolved, the stack has to be involved to push the arguments, the frame has to be stored to retrieve later, and a call has to be made. That all takes more time. For the empty case, that means you have at the very least a LOAD_NAME (which has to search through the global namespace as well as the __builtin__ module) followed by a CALL_FUNCTION, which has to preserve the current frame: >>> dis.dis(compile('list()', '', 'eval')) 1 0 LOAD_NAME 0 (list) 3 CALL_FUNCTION 0 6 RETURN_VALUE >>> dis.dis(compile('dict()', '', 'eval')) 1 0 LOAD_NAME 0 (dict) 3 CALL_FUNCTION 0 6 RETURN_VALUE You can time the name lookup separately with timeit: >>> import timeit >>> timeit.timeit('list', number=10**7) 0.30749011039733887 >>> timeit.timeit('dict', number=10**7) 0.4215109348297119 The time discrepancy there is probably a dictionary hash collision. Subtract those times from the times for calling those objects, and compare the result against the times for using literals: >>> timeit.timeit('[]', number=10**7) 0.30478692054748535 >>> timeit.timeit('{}', number=10**7) 0.31482696533203125 >>> timeit.timeit('list()', number=10**7) 0.9991960525512695 >>> timeit.timeit('dict()', number=10**7) 1.0200958251953125 So having to call the object takes an additional 1.00 - 0.31 - 0.30 == 0.39 seconds per 10 million calls. You can avoid the global lookup cost by aliasing the global names as locals (using a timeit setup, everything you bind to a name is a local): >>> timeit.timeit('_list', '_list = list', number=10**7) 0.1866450309753418 >>> timeit.timeit('_dict', '_dict = dict', number=10**7) 0.19016098976135254 >>> timeit.timeit('_list()', '_list = list', number=10**7) 0.841480016708374 >>> timeit.timeit('_dict()', '_dict = dict', number=10**7) 0.7233691215515137 but you never can overcome that CALL_FUNCTION cost.
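As for the question's aside about () and '': on CPython those go one step further, because immutable empty literals can be stored as constants in the compiled bytecode, so not even a BUILD_* opcode is needed. A quick check (output as produced by CPython 2.7; worth verifying on your own interpreter):

>>> import dis
>>> dis.dis(compile('()', '', 'eval'))
  1           0 LOAD_CONST               0 (())
              3 RETURN_VALUE
>>> dis.dis(compile("''", '', 'eval'))
  1           0 LOAD_CONST               0 ('')
              3 RETURN_VALUE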
matplotlib (mplot3d) - how to increase the size of an axis (stretch) in a 3D Plot?
I have this so far: x,y,z = data.nonzero() fig = plt.figure() ax = fig.add_subplot(111, projection='3d') ax.scatter(x, y, z, zdir='z', c= 'red') plt.savefig("plot.png") Which creates: What I'd like to do is stretch this out to make the Z axis 9 times taller and keep X and Y the same. I'd like to keep the same coordinates though. So far I tried this guy: fig = plt.figure(figsize=(4.,35.)) But that just stretches out the plot.png image.
The code example below provides a way to scale each axis relative to the others. However, to do so you need to modify the Axes3D.get_proj function. Below is an example based on the example provided by matplotlib: http://matplotlib.org/1.4.0/mpl_toolkits/mplot3d/tutorial.html#line-plots (There is a shorter version at the end of this answer) from mpl_toolkits.mplot3d.axes3d import Axes3D from mpl_toolkits.mplot3d import proj3d import matplotlib as mpl import numpy as np import matplotlib.pyplot as plt #Make sure these are floating point values: scale_x = 1.0 scale_y = 2.0 scale_z = 3.0 #Axes are scaled down to fit in scene max_scale=max(scale_x, scale_y, scale_z) scale_x=scale_x/max_scale scale_y=scale_y/max_scale scale_z=scale_z/max_scale #Create scaling matrix scale = np.array([[scale_x,0,0,0], [0,scale_y,0,0], [0,0,scale_z,0], [0,0,0,1]]) print scale def get_proj_scale(self): """ Create the projection matrix from the current viewing position. elev stores the elevation angle in the z plane azim stores the azimuth angle in the x,y plane dist is the distance of the eye viewing point from the object point. """ relev, razim = np.pi * self.elev/180, np.pi * self.azim/180 xmin, xmax = self.get_xlim3d() ymin, ymax = self.get_ylim3d() zmin, zmax = self.get_zlim3d() # transform to uniform world coordinates 0-1.0,0-1.0,0-1.0 worldM = proj3d.world_transformation( xmin, xmax, ymin, ymax, zmin, zmax) # look into the middle of the new coordinates R = np.array([0.5, 0.5, 0.5]) xp = R[0] + np.cos(razim) * np.cos(relev) * self.dist yp = R[1] + np.sin(razim) * np.cos(relev) * self.dist zp = R[2] + np.sin(relev) * self.dist E = np.array((xp, yp, zp)) self.eye = E self.vvec = R - E self.vvec = self.vvec / proj3d.mod(self.vvec) if abs(relev) > np.pi/2: # upside down V = np.array((0, 0, -1)) else: V = np.array((0, 0, 1)) zfront, zback = -self.dist, self.dist viewM = proj3d.view_transformation(E, R, V) perspM = proj3d.persp_transformation(zfront, zback) M0 = np.dot(viewM, worldM) M = np.dot(perspM, M0) return np.dot(M, scale); Axes3D.get_proj=get_proj_scale """ You need to include all the code above. From here on you should be able to plot as usual. """ mpl.rcParams['legend.fontsize'] = 10 fig = plt.figure(figsize=(5,5)) ax = fig.gca(projection='3d') theta = np.linspace(-4 * np.pi, 4 * np.pi, 100) z = np.linspace(-2, 2, 100) r = z**2 + 1 x = r * np.sin(theta) y = r * np.cos(theta) ax.plot(x, y, z, label='parametric curve') ax.legend() plt.show() Standard output: Scaled by (1, 2, 3): Scaled by (1, 1, 3): The reason I particularly like this method is that it keeps working when you rearrange which data goes on which axis. Swap z and x, scale by (3, 1, 1): Below is a shorter version of the code. from mpl_toolkits.mplot3d.axes3d import Axes3D from mpl_toolkits.mplot3d import proj3d import matplotlib as mpl import numpy as np import matplotlib.pyplot as plt mpl.rcParams['legend.fontsize'] = 10 fig = plt.figure(figsize=(5,5)) ax = fig.gca(projection='3d') theta = np.linspace(-4 * np.pi, 4 * np.pi, 100) z = np.linspace(-2, 2, 100) r = z**2 + 1 x = r * np.sin(theta) y = r * np.cos(theta) """ Scaling is done from here... """ x_scale=1 y_scale=1 z_scale=2 scale=np.diag([x_scale, y_scale, z_scale, 1.0]) scale=scale*(1.0/scale.max()) scale[3,3]=1.0 def short_proj(): return np.dot(Axes3D.get_proj(ax), scale) ax.get_proj=short_proj """ to here """ ax.plot(z, y, x, label='parametric curve') ax.legend() plt.show()
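A hedged note for readers on modern matplotlib (not applicable to the 1.4-era API this answer targets): since matplotlib 3.3, Axes3D exposes set_box_aspect, which achieves the same stretching without patching get_proj. A minimal sketch of the question's 9x-taller z axis, using random points as a stand-in for the question's data:

import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x, y, z = np.random.rand(3, 50)  # stand-in for the question's data.nonzero() output
ax.scatter(x, y, z, c='red')
ax.set_box_aspect((1, 1, 9))  # stretch z to 9x while keeping the same coordinates
plt.savefig("plot.png")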
How to make a Python subclass uncallable
How do you "disable" the __call__ method on a subclass so the following would be true: class Parent(object): def __call__(self): return class Child(Parent): def __init__(self): super(Child, self).__init__() object.__setattr__(self, '__call__', None) >>> c = Child() >>> callable(c) False This and other ways of trying to set __call__ to some non-callable value still result in the child appearing as callable.
You can't. As jonrsharpe points out, there's no way to make Child appear to not have the attribute, and that's what callable(Child()) relies on to produce its answer. Even making it a descriptor that raises AttributeError won't work, per this bug report: https://bugs.python.org/issue23990 . A python 2 example: >>> class Parent(object): ... def __call__(self): pass ... >>> class Child(Parent): ... __call__ = property() ... >>> c = Child() >>> c() Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: unreadable attribute >>> c.__call__ Traceback (most recent call last): File "<stdin>", line 1, in <module> AttributeError: unreadable attribute >>> callable(c) True This is because callable(...) doesn't act out the descriptor protocol. Actually calling the object, or accessing a __call__ attribute, involves retrieving the method even if it's behind a property, through the normal descriptor protocol. But callable(...) doesn't bother going that far, if it finds anything at all it is satisfied, and every subclass of Parent will have something for __call__ -- either an attribute in a subclass, or the definition from Parent. So while you can make actually calling the instance fail with any exception you want, you can't ever make callable(some_instance_of_parent) return False.
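If the goal is just to make calling a Child instance fail (accepting that callable() will keep reporting True), a hedged workaround sketch is to override __call__ to raise, mimicking the error message Python produces for a genuinely uncallable object:

class Parent(object):
    def __call__(self):
        return

class Child(Parent):
    def __call__(self):
        # calling fails, but callable(Child()) still returns True
        raise TypeError("'Child' object is not callable")

c = Child()
print(callable(c))  # True: callable() only checks that the type has __call__
c()                 # raises TypeError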
Raise error if a Python dict comprehension overwrites a key
Is there a way to get a dict comprehension to raise an exception if it would override a key? For example, I would like the following to error because there are two values for the key 'a': >>> {k:v for k, v in ('a1', 'a2', 'b3')} {'a': '2', 'b': '3'} I realise this can be done with a for loop. Is there a way to do it while keeping the comprehension syntax?
You can use a generator with a helper function: class DuplicateKeyError(ValueError): pass def dict_no_dupl(it): d = {} for k, v in it: if k in d: raise DuplicateKeyError(k) d[k] = v return d dict_no_dupl((k, v) for k, v in ('a1', 'a2', 'b3')) This does add a helper function, but keeps the comprehension syntax (reasonably) intact.
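A hedged alternative sketch that keeps a real dict comprehension: build the pairs once, then compare sizes afterwards. This only detects that some key was overwritten, rather than pinpointing which one (DuplicateKeyError is the helper class defined above):

pairs = [(s[0], s[1]) for s in ('a1', 'a2', 'b3')]
d = {k: v for k, v in pairs}
if len(d) != len(pairs):
    # a shorter dict means at least one key appeared more than once
    raise DuplicateKeyError('a key appeared more than once in the input')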
Listing contents of a bucket with boto3
How can I see what's inside a bucket in S3 with boto3? (i.e. do an "ls")? Doing the following: import boto3 s3 = boto3.resource('s3') my_bucket = s3.Bucket('some/path/') returns: s3.Bucket(name='some/path/') How do I see its contents?
One way to see the contents would be: for obj in my_bucket.objects.all(): print(obj)
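Two hedged notes on top of that short answer (assumptions about the poster's setup, since 'some/path/' cannot actually be a bucket name; S3 bucket names may not contain slashes, so the path part has to be a key prefix): use filter(Prefix=...) to emulate "ls" on a pseudo-directory, and print obj.key for readable output.

import boto3

s3 = boto3.resource('s3')
my_bucket = s3.Bucket('some-bucket')  # bucket name only, no slashes
for obj in my_bucket.objects.filter(Prefix='some/path/'):  # the "path" goes here
    print(obj.key)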
Proper way to consume data from RESTFUL API in django
I'm trying to learn Django, so while I have a working solution I'm not sure whether it follows best practices in Django. I would like to display information from a web API on my website. Let's say the API URL is as follows: http://api.example.com/books?author=edwards&year=2009 This would return a list of books by Edwards written in the year 2009. Returned in the following format: {'results': [ { 'title':'Book 1', 'Author':'Edwards Man', 'Year':2009 }, { 'title':'Book 2', 'Author':'Edwards Man', 'Year':2009} ] } Currently I am consuming the API in my views file as follows: class BooksPage(generic.TemplateView): def get(self,request): r = requests.get('http://api.example.com/books?author=edwards&year=2009') books = r.json() books_list = {'books':books['results']} return render(request,'books.html',books_list) Normally, we grab data from the database in the models.py file, but I am unsure if I should be grabbing this API data in models.py or views.py. If it should be in models.py, can someone provide an example of how to do this? I wrote the above example specifically for Stack Overflow, so any bugs are purely a result of writing it here.
I like the approach of putting that kind of logic in a separate service layer (services.py); the data you are rendering is not really a "model" in the Django ORM sense, and it's more than simple "view" logic. A clean encapsulation ensures you can do things like control the interface to the backing service (i.e., make it look like a Python API vs. URL with parameters), add enhancements such as caching, as @sobolevn mentioned, test the API in isolation, etc. So I'd suggest a simple services.py, that looks something like this: import requests def get_books(year, author): url = 'http://api.example.com/books' params = {'year': year, 'author': author} r = requests.get(url, params=params) books = r.json() books_list = {'books': books['results']} return books_list Note how the parameters get passed (using a capability of the requests package). Then in views.py: import services class BooksPage(generic.TemplateView): def get(self,request): books_list = services.get_books('2009', 'edwards') return render(request,'books.html',books_list) See also: Separation of business logic and data access in django
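Since the answer mentions caching as one benefit of this encapsulation, here is a hedged sketch of what that could look like with Django's cache framework (the key format and the 15-minute timeout are arbitrary choices, not something the original answer prescribes):

# services.py
from django.core.cache import cache

def get_books_cached(year, author):
    key = 'books:%s:%s' % (year, author)
    books_list = cache.get(key)
    if books_list is None:
        books_list = get_books(year, author)  # the uncached function above
        cache.set(key, books_list, 60 * 15)   # cache the API response for 15 minutes
    return books_list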
Reshaping/Pivoting data in Spark RDD and/or Spark DataFrames
I have some data in the following format (either RDD or Spark DataFrame): from pyspark.sql import SQLContext sqlContext = SQLContext(sc) rdd = sc.parallelize([('X01',41,'US',3), ('X01',41,'UK',1), ('X01',41,'CA',2), ('X02',72,'US',4), ('X02',72,'UK',6), ('X02',72,'CA',7), ('X02',72,'XX',8)]) # convert to a Spark DataFrame schema = StructType([StructField('ID', StringType(), True), StructField('Age', IntegerType(), True), StructField('Country', StringType(), True), StructField('Score', IntegerType(), True)]) df = sqlContext.createDataFrame(rdd, schema) What I would like to do is to 'reshape' the data, convert certain rows in Country(specifically US, UK and CA) into columns: ID Age US UK CA 'X01' 41 3 1 2 'X02' 72 4 6 7 Essentially, I need something along the lines of Python's pivot workflow: categories = ['US', 'UK', 'CA'] new_df = df[df['Country'].isin(categories)].pivot(index = 'ID', columns = 'Country', values = 'Score') My dataset is rather large so I can't really collect() and ingest the data into memory to do the reshaping in Python itself. Is there a way to convert Python's .pivot() into an invokable function while mapping either an RDD or a Spark DataFrame? Any help would be appreciated!
First up, this is probably not a good idea, because you are not getting any extra information, but you are binding yourself with a fixed schema (i.e., you need to know how many countries you are expecting, and of course an additional country means a change in code) Having said that, this is a SQL problem, which is shown below. But in case you suppose it is not too "software like" (seriously, I have heard this!!), then you can refer to the first solution. Solution 1: def reshape(t): out = [] out.append(t[0]) out.append(t[1]) for v in brc.value: if t[2] == v: out.append(t[3]) else: out.append(0) return (out[0],out[1]),(out[2],out[3],out[4],out[5]) def cntryFilter(t): if t[2] in brc.value: return t else: pass def addtup(t1,t2): j=() for k,v in enumerate(t1): j=j+(t1[k]+t2[k],) return j def seq(tIntrm,tNext): return addtup(tIntrm,tNext) def comb(tP,tF): return addtup(tP,tF) countries = ['CA', 'UK', 'US', 'XX'] brc = sc.broadcast(countries) reshaped = calls.filter(cntryFilter).map(reshape) pivot = reshaped.aggregateByKey((0,0,0,0),seq,comb,1) for i in pivot.collect(): print i Now, Solution 2: Of course better, as SQL is the right tool for this callRow = calls.map(lambda t: Row(userid=t[0],age=int(t[1]),country=t[2],nbrCalls=t[3])) callsDF = ssc.createDataFrame(callRow) callsDF.printSchema() callsDF.registerTempTable("calls") res = ssc.sql("select userid,age,max(ca),max(uk),max(us),max(xx)\ from (select userid,age,\ case when country='CA' then nbrCalls else 0 end ca,\ case when country='UK' then nbrCalls else 0 end uk,\ case when country='US' then nbrCalls else 0 end us,\ case when country='XX' then nbrCalls else 0 end xx \ from calls) x \ group by userid,age") res.show() data set up: data=[('X01',41,'US',3),('X01',41,'UK',1),('X01',41,'CA',2),('X02',72,'US',4),('X02',72,'UK',6),('X02',72,'CA',7),('X02',72,'XX',8)] calls = sc.parallelize(data,1) countries = ['CA', 'UK', 'US', 'XX'] Result: From 1st solution (('X02', 72), (7, 6, 4, 8)) (('X01', 41), (2, 1, 3, 0)) From 2nd solution: root |-- age: long (nullable = true) |-- country: string (nullable = true) |-- nbrCalls: long (nullable = true) |-- userid: string (nullable = true) userid age ca uk us xx X02 72 7 6 4 8 X01 41 2 1 3 0 Kindly let me know if this works, or not :) Best Ayan
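A hedged update for readers on newer releases: Spark 1.6 added a pivot method on GroupedData, which turns this whole exercise into a one-liner; passing the list of values explicitly spares Spark an extra pass over the data to discover the distinct countries.

categories = ['US', 'UK', 'CA']
# df is the DataFrame built in the question
pivoted = df.groupBy('ID', 'Age').pivot('Country', categories).max('Score')
pivoted.show()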
A fast way to find an all zero answer
For every array of length n+h-1 with values from {0, 1}, I would like to check if there exists another non-zero array of length n with values from {-1, 0, 1} so that all the h inner products are zero. My naive way to do this is import numpy as np import itertools (n,h)= 4,3 for longtuple in itertools.product([0,1], repeat = n+h-1): bad = 0 for v in itertools.product([-1,0,1], repeat = n): if not any(v): continue if (not np.correlate(v, longtuple, 'valid').any()): bad = 1 break if (bad == 0): print "Good" print longtuple This is very slow if we set n = 19 and h = 10, which is what I would like to test. My goal is to find a single "Good" array of length n+h-1. Is there a way to speed this up so that n = 19 and h = 10 is feasible? The current naive approach takes 2^(n+h-1) * 3^n iterations, each of which takes roughly n operations. That is 311,992,186,885,373,952 iterations for n = 19 and h = 10, which is impossible. Note 1: Changed convolve to correlate so that the code considers v the right way round. July 10 2015: The problem is still open, with no solution fast enough for n=19 and h=10 given yet.
Consider the following "meet in the middle" approach. First, recast the situation in the matrix formulation provided by leekaiinthesky. Next, note that we only have to consider "short" vectors s of the form {0,1}^n (i.e., short vectors containing only 0's and 1's) if we change the problem to finding an h x n Hankel matrix H of 0's and 1's such that Hs1 is never equal to Hs2 for two different short vectors of 0's and 1's. That is because Hs1 = Hs2 implies H(s1-s2)=0 which implies there is a vector v of 1's, 0's and -1's, namely s1-s2, such that Hv = 0; conversely, if Hv = 0 for v in {-1,0,1}^n, then we can find s1 and s2 in {0,1}^n such that v = s1 - s2 so Hs1 = Hs2. When n=19 there are only 524,288 vectors s in {0,1}^n to try; hash the results Hs and if the same result occurs twice then H is no good and try another H. In terms of memory this approach is quite feasible. There are 2^(n+h-1) Hankel matrices H to try; when n=19 and h=10 that's 268,435,456 matrices. That's 2^38 tests, or 274,877,906,944, each with about nh operations to multiply the matrix H and the vector s, about 52 trillion ops. That seems feasible, no? Since you're now only dealing with 0's and 1's, not -1's, you might also be able to speed up the process by using bit operations (shift, and, and count 1's). Update I implemented my idea in C++. I'm using bit operations to calculate dot products, encoding the resulting vector as a long integer, and using unordered_set to detect duplicates, taking an early exit from a given long vector when a duplicate vector of dot products is found. I obtained 00000000010010111000100100 for n=17 and h=10 after a few minutes, and 000000111011110001001101011 for n=18 and h=10 in a little while longer. I'm just about to run it for n=19 and h=10. #include <iostream> #include <bitset> #include <unordered_set> /* Count the number of 1 bits in 32 bit int x in 21 instructions. * From /Hackers Delight/ by Henry S. Warren, Jr., 5-2 */ int count1Bits(int x) { x = x - ((x >> 1) & 0x55555555); x = (x & 0x33333333) + ((x >> 2) & 0x33333333); x = (x + (x >> 4)) & 0x0F0F0F0F; x = x + (x >> 8); x = x + (x >> 16); return x & 0x0000003F; } int main () { const int n = 19; const int h = 10; std::unordered_set<long> dotProductSet; // look at all 2^(n+h-1) possibilities for longVec // upper n bits cannot be all 0 so we can start with 1 in pos h for (int longVec = (1 << (h-1)); longVec < (1 << (n+h-1)); ++longVec) { dotProductSet.clear(); bool good = true; // now look at all n digit non-zero shortVecs for (int shortVec = 1; shortVec < (1 << n); ++shortVec) { // longVec dot products with shifted shortVecs generates h results // each between 0 and n inclusive, can encode as h digit number in // base n+1, up to (n+1)^h = 20^10 approx 13 digits, need long long dotProduct = 0; // encode h dot products of shifted shortVec with longVec // as base n+1 integer for(int startShort = 0; startShort < h; ++startShort) { int shortVecShifted = shortVec << startShort; dotProduct *= n+1; dotProduct += count1Bits(longVec & shortVecShifted); } auto ret = dotProductSet.insert(dotProduct); if (!ret.second) { good = false; break; } } if (good) { std::cout << std::bitset<(n+h-1)>(longVec) << std::endl; break; } } return 0; } Second Update The program for n=19 and h=10 ran for two weeks in the background on my laptop. At the end, it just exited without printing any results. Barring some kind of error in the program, it looks like there are no long vectors with the property you want. 
I suggest looking for theoretical reasons why there are no such long vectors. Perhaps some kind of counting argument will work.
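For readers who want to experiment with the approach without compiling C++, here is a hedged, direct (and much slower) Python translation of the same hashing idea; it mirrors the C++ program above, so it is only practical for small n and h:

def count1bits(x):
    return bin(x).count('1')

def is_good(longvec, n, h):
    # record the h dot products of every shifted short vector;
    # a repeated tuple means H*s1 == H*s2 for two different short vectors
    seen = set()
    for short in xrange(1, 1 << n):
        key = tuple(count1bits(longvec & (short << s)) for s in range(h))
        if key in seen:
            return False
        seen.add(key)
    return True

def search(n, h):
    # same starting point as the C++ version: the upper n bits cannot all be 0
    for longvec in xrange(1 << (h - 1), 1 << (n + h - 1)):
        if is_good(longvec, n, h):
            return format(longvec, '0{}b'.format(n + h - 1))
    return None  # no good vector exists for these parameters

print(search(4, 3))  # may print None if no good vector exists for n=4, h=3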
Limit memory usage?
I run Python 2.7 on a Linux machine with 16GB Ram and 64 bit OS. A python script I wrote can load too much data into memory, which slows the machine down to the point where I cannot even kill the process any more. While I can limit memory by calling: ulimit -v 12000000 in my shell before running the script, I'd like to include a limiting option in the script itself. Everywhere I looked, the resource module is cited as having the same power as ulimit. But calling: import resource _, hard = resource.getrlimit(resource.RLIMIT_DATA) resource.setrlimit(resource.RLIMIT_DATA, (12000, hard)) at the beginning of my script does absolutely nothing. Even setting the value as low as 12000 never crashed the process. I tried the same with RLIMIT_STACK, as well with the same result. Curiously, calling: import subprocess subprocess.call('ulimit -v 12000', shell=True) does nothing as well. What am I doing wrong? I couldn't find any actual usage examples online.
resource.RLIMIT_VMEM is the resource corresponding to ulimit -v. RLIMIT_DATA only affects brk/sbrk system calls while newer memory managers tend to use mmap instead. The second thing to note is that ulimit/setrlimit only affects the current process and its future children. Regarding the AttributeError: 'module' object has no attribute 'RLIMIT_VMEM' message: the resource module docs mention this possibility: This module does not attempt to mask platform differences — symbols not defined for a platform will not be available from this module on that platform. According to the bash ulimit source linked to above, it uses RLIMIT_AS if RLIMIT_VMEM is not defined.
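To make that concrete, here is a hedged sketch of how the script could set the limit itself; it assumes a Linux box where RLIMIT_AS (which limits the whole address space, and is what newer allocators actually hit) is the right resource, preferring RLIMIT_VMEM where that symbol exists:

import resource

def set_memory_limit(max_bytes):
    # prefer RLIMIT_VMEM when the platform defines it, otherwise fall back to RLIMIT_AS
    rlimit = getattr(resource, 'RLIMIT_VMEM', resource.RLIMIT_AS)
    _, hard = resource.getrlimit(rlimit)
    resource.setrlimit(rlimit, (max_bytes, hard))

set_memory_limit(12000000 * 1024)  # ulimit -v counts KiB, setrlimit takes bytes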
How can a python 2 doctest fail and yet have no difference in the values in the failure message?
I'm using Python 2.7.9 in Windows. I have a UTF-8-encoded python script file with the following contents: # coding=utf-8 def test_func(): u""" >>> test_func() u'☃' """ return u'☃' I get a curious failure when I run the doctest: Failed example: test_func() Expected: u'\u2603' Got: u'\u2603' I see this same failure output whether I launch the doctests through the IDE I usually use (IDEA IntelliJ), or from the command line: > x:\my_virtualenv\Scripts\python.exe -m doctest -v hello.py I copied the lines under Expected and Got into WinMerge to rule out some subtle difference in the characters I couldn't spot; it told me they were identical. However, if I redo the command line run, but redirect the output to a text file, like so: > x:\my_virtualenv\Scripts\python.exe -m doctest -v hello.py > out.txt the test still fails, but the resulting failure output is a bit different: Failed example: test_func() Expected: u'☃' Got: u'\u2603' If I put the escaped unicode literal in my doctest: # coding=utf-8 def test_func(): u""" >>> test_func() u'☃' """ return u'\\u2603' the test passes. But as far as I can tell, u'\u2603' and u'☃' should evaluate to the same thing. Really I have two questions about the failing case: Is one of the representations that the doctester is giving (under Expected or Got) incorrect for the value that the doctester has for that case? (i.e. x != eval(repr(x))) If not, why does the test fail?
My Findings Using the original doc-string and return value. Expected: u'\u2603' Got: u'\u2603' Seemingly paradoxical. Using a modified doc-string and original return value. def test_func(): u""" >>> test_func() u'\\u2603' """ return u'☃' Outputting the correct result: Trying: test_func() Expecting: u'\u2603' ok Using a raw doc-string. Using this raw doc-string, the correct result is output. r""" >>> test_func() u'\u2603' """ When test_func returns u'☃'. Using a regular doc-string. doc-string => returned value u'\u2603' => u'\u2603' u'\u2603' => u'☃' Where double backslashes have no effect. Analysis Returning the printable representation of each string looks like this: Summary (Answer) The doctest module uses difflib (which is very similar to GitHub's builtin editor interface) to differentiate between results. Like the following: >>> import difflib >>> d = difflib.unified_diff('x', 'x') >>> list(d) [] >>> d = difflib.unified_diff('x', 'y') >>> list(d) ['--- \n', '+++ \n', '@@ -1 +1 @@\n', '-x', '+y'] Within the doctest module, the expected and received strings are formatted several times. It seems that is what's causing your problem: an interpretation mistake caused by string encoding. >>> r'\u2603' == u'\u2603' False >>> '\u2603' == u'\u2603' False >>> u'☃' == u'\u2603' Unsupported characters in input What is printed to the console has been formatted (using %s), thus getting rid of visible differences and making the strings look identical.
Apache Spark: How to use pyspark with Python 3
I built Spark 1.4 from the GH development master, and the build went through fine. But when I do a bin/pyspark I get the Python 2.7.9 version. How can I change this?
Just set the environment variable: export PYSPARK_PYTHON=python3 In case you want this change to be permanent, add this line to the pyspark script.
Why does CalibratedClassifierCV underperform a direct classifier?
I noticed that sklearn's new CalibratedClassifierCV seems to underperform the direct base_estimator when the base_estimator is GradientBoostingClassifier (I haven't tested other classifiers). Interestingly, if make_classification's parameters are:

n_features = 10
n_informative = 3
n_classes = 2

then the CalibratedClassifierCV seems to be the slight outperformer (log loss evaluation). However, under the following classification data set the CalibratedClassifierCV seems to generally be the underperformer:

from sklearn.datasets import make_classification
from sklearn import ensemble
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss
from sklearn import cross_validation

# Build a classification task using 30 informative features
X, y = make_classification(n_samples=1000,
                           n_features=100,
                           n_informative=30,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=9,
                           random_state=0,
                           shuffle=False)

skf = cross_validation.StratifiedShuffleSplit(y, 5)

for train, test in skf:
    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=3, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    probas = clf.predict_proba(X_test)
    clf_score = log_loss(y_test, probas)

    print 'calibrated score:', cv_score
    print 'direct clf score:', clf_score
    print

One run yielded:

Maybe I'm missing something about how CalibratedClassifierCV works, or am not using it correctly, but I was under the impression that if anything, passing a classifier to CalibratedClassifierCV would result in improved performance relative to the base_estimator alone. Can anyone explain this observed underperformance?
The probability calibration itself requires cross-validation, therefore the CalibratedClassifierCV trains a calibrated classifier per fold (in this case using StratifiedKFold), and takes the mean of the predicted probabilities from each classifier when you call predict_proba(). This could explain the effect.

My hypothesis is that if the training set is small with respect to the number of features and classes, the reduced training set for each sub-classifier affects performance and the ensembling does not make up for it (or makes it worse). Also, the GradientBoostingClassifier might already provide pretty good probability estimates from the start, as its loss function is optimized for probability estimation.

If that's correct, ensembling classifiers the same way as the CalibratedClassifierCV but without calibration should be worse than the single classifier. Also, the effect should disappear when using a larger number of folds for calibration.

To test that, I extended your script to increase the number of folds and include the ensembled classifier without calibration, and I was able to confirm my predictions. A 10-fold calibrated classifier always performed better than the single classifier and the uncalibrated ensemble was significantly worse. In my run, the 3-fold calibrated classifier also did not really perform worse than the single classifier, so this might also be an unstable effect. These are the detailed results on the same dataset:

This is the code from my experiment:

import numpy as np
from sklearn.datasets import make_classification
from sklearn import ensemble
from sklearn.calibration import CalibratedClassifierCV
from sklearn.metrics import log_loss
from sklearn import cross_validation

X, y = make_classification(n_samples=1000,
                           n_features=100,
                           n_informative=30,
                           n_redundant=0,
                           n_repeated=0,
                           n_classes=9,
                           random_state=0,
                           shuffle=False)

skf = cross_validation.StratifiedShuffleSplit(y, 5)

for train, test in skf:
    X_train, X_test = X[train], X[test]
    y_train, y_test = y[train], y[test]

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=3, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)
    print 'calibrated score (3-fold):', cv_score

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf_cv = CalibratedClassifierCV(clf, cv=10, method='isotonic')
    clf_cv.fit(X_train, y_train)
    probas_cv = clf_cv.predict_proba(X_test)
    cv_score = log_loss(y_test, probas_cv)
    print 'calibrated score (10-fold):', cv_score

    # Train 3 classifiers and take average probability
    skf2 = cross_validation.StratifiedKFold(y_test, 3)
    probas_list = []
    for sub_train, sub_test in skf2:
        X_sub_train, X_sub_test = X_train[sub_train], X_train[sub_test]
        y_sub_train, y_sub_test = y_train[sub_train], y_train[sub_test]
        clf = ensemble.GradientBoostingClassifier(n_estimators=100)
        clf.fit(X_sub_train, y_sub_train)
        probas_list.append(clf.predict_proba(X_test))
    probas = np.mean(probas_list, axis=0)
    clf_ensemble_score = log_loss(y_test, probas)
    print 'uncalibrated ensemble clf (3-fold) score:', clf_ensemble_score

    clf = ensemble.GradientBoostingClassifier(n_estimators=100)
    clf.fit(X_train, y_train)
    probas = clf.predict_proba(X_test)
    score = log_loss(y_test, probas)
    print 'direct clf score:', score
    print
Selenium: Trying to log in with cookies - "Can only set cookies for current domain"
What I am trying to achieve

I am trying to log in to a website where cookies must be enabled, using Selenium headless; I am using PhantomJS for the driver.

Problem

I first recorded the procedure using Selenium IDE, where it works fine using Firefox (not headless). Then I exported the code to Python, and now I can't log in because it's throwing an error saying "Can only set Cookies for the current domain". I don't know why I am getting this problem; am I not on the correct domain?

Code

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
import unittest, time, re

self.driver = webdriver.PhantomJS()
self.driver.implicitly_wait(30)
self.base_url = "https://login.example.com"
driver = self.driver
driver.get(self.base_url)
all_cookies = self.driver.get_cookies()

# It prints out all cookies and values just fine
for cookie in all_cookies:
    print cookie['name'] + " --> " + cookie['value']

# Set cookies to driver
for s_cookie in all_cookies:
    c = { s_cookie['name'] : s_cookie['value']}
    # This is where it's throwing an error saying "Can only set Cookies for current domain"
    driver.add_cookie(c)
...

What I've tried

I've tried saving the cookies in a dict, going to another domain, going back to the original domain, adding the cookies and then trying to log in, but it still doesn't work (as suggested in this thread). Any help is appreciated.
Investigate each of the cookie pairs. I ran into similar issues and some of the cookies belonged to Google. You need to make sure cookies are being added only to the current domain and also belong to the same domain; otherwise your exception is expected. On a side note, if I recall correctly, you cannot use localhost when adding the cookies; if you are doing so, change it to the IP address. Also, investigate the cookies you are getting, especially the domain and expiry information. See if they are returning null.

Edit

I did this simple test on Gmail to show what you have done wrong. At first look I did not notice that you are trying to grab a partial cookie, a pair, and add that to the domain. Since the cookie does not have any domain, path, expiry, etc. information, it was trying to add the cookie to the current domain (127.0.0.1) and throwing some misleading info that did not quite make sense. Notice: in order to be a valid cookie it must have the correct domain and expiry information, which you have been missing.

import unittest
from selenium.webdriver.common.by import By
from selenium import webdriver

__author__ = 'Saifur'


class CookieManagerTest(unittest.TestCase):
    def setUp(self):
        self.driver = webdriver.PhantomJS("E:\\working\\selenium.python\\selenium\\resources\\phantomjs.exe")
        self.driver.get("https://accounts.google.com/ServiceLogin?service=mail&continue=https://mail.google.com/mail/")
        self.driver.find_element(By.ID, "Email").send_keys("userid")
        self.driver.find_element(By.ID, "next").click()
        self.driver.find_element(By.ID, "Passwd").send_keys("supersimplepassword")
        self.driver.find_element(By.CSS_SELECTOR, "[type='submit'][value='Sign in']").click()
        self.driver.maximize_window()

    def test(self):
        driver = self.driver
        listcookies = driver.get_cookies()

        for s_cookie in listcookies:
            # this is what you are doing
            c = {s_cookie['name']: s_cookie['value']}
            print("*****The partial cookie info you are doing*****\n")
            print(c)
            # Should be done
            print("The Full Cookie including domain and expiry info\n")
            print(s_cookie)
            # driver.add_cookie(s_cookie)

    def tearDown(self):
        self.driver.quit()

Console output:

D:\Python34\python.exe "D:\Program Files (x86)\JetBrains\PyCharm Educational Edition 1.0.1\helpers\pycharm\utrunner.py" E:\working\selenium.python\selenium\python\FirstTest.py::CookieManagerTest true
Testing started at 9:59 AM ...
*******The partial cookie info you are doing*******
{'PREF': 'ID=*******:FF=0:LD=en:TM=*******:LM=*******:GM=1:S=*******'}
The Full Cookie including domain and expiry info
{'httponly': False, 'name': '*******', 'value': 'ID=*******:FF=0:LD=en:TM=*******:LM=1432393656:GM=1:S=iNakWMI5h_2cqIYi', 'path': '/', 'expires': 'Mon, 22 May 2017 15:07:36 GMT', 'secure': False, 'expiry': *******, 'domain': '.google.com'}

Notice: I just replaced some info with ******* on purpose.
Any elegant way to add a method to an existing object in Python?
After a lot of searching, I have found that there are a few ways to add a bound method or an unbound class method to an existing instance object. Such ways include the approaches taken in the code below.

import types

class A(object):
    pass

def instance_func(self):
    print 'hi'

def class_func(self):
    print 'hi'

a = A()

# add bound methods to an instance using types.MethodType
a.instance_func = types.MethodType(instance_func, a)  # using attribute
a.__dict__['instance_func'] = types.MethodType(instance_func, a)  # using __dict__

# add bound methods to a class
A.instance_func = instance_func
A.__dict__['instance_func'] = instance_func

# add class methods to a class
A.class_func = classmethod(class_func)
A.__dict__['class_func'] = classmethod(class_func)

What annoys me is typing the function's name, instance_func or class_func, twice. Is there any simple way to add an existing function to a class or an instance without typing the function's name again? For example, A.add_function_as_bound_method(f) would be a far more elegant way to add an existing function to an instance or class, since the function already has a __name__ attribute.
Normally, functions stored in object dictionaries don't automatically turn into bound methods when you look them up with dotted access. That said, you can use functools.partial to pre-bind the function and store it in the object dictionary so it can be accessed like a method:

>>> from functools import partial
>>> class Dog:
        def __init__(self, name):
            self.name = name

>>> d = Dog('Fido')
>>> e = Dog('Buddy')
>>> def bark(self):                # normal function
        print('Woof! %s is barking' % self.name)

>>> e.bark = partial(bark, e)      # pre-bound and stored in the instance
>>> e.bark()                       # access like a normal method
Woof! Buddy is barking

This is a somewhat elegant way to add a method to an existing object (without needing to change its class and without affecting other existing objects).

Follow-up to Comment:

You can use a helper function to add the pre-bound function in a single step:

>>> def add_method(obj, func):
        'Bind a function and store it in an object'
        setattr(obj, func.__name__, partial(func, obj))

Use it like this:

>>> add_method(d, bark)
>>> d.bark()
Woof! Fido is barking

Hope this is exactly what you need :-)
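As a side note, the same helper can be written with types.MethodType instead of functools.partial if you prefer a real bound method; this variant is a sketch along the same lines, not part of the answer above:

import types

def add_method(obj, func):
    'Bind a function as a real bound method and store it in an object'
    setattr(obj, func.__name__, types.MethodType(func, obj))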
tar.extractall() does not recognize unexpected EOF
The Python tarfile library does not detect a broken tar. user@host$ wc -c good.tar 143360 good.tar user@host$ head -c 130000 good.tar > cut.tar user@host$ tar -tf cut.tar ... tar: Unexpected EOF in archive tar: Error is not recoverable: exiting now Very nice, the command line tool recognizes an unexpected EOF. user@host$ python Python 2.7.6 (default, Mar 22 2014, 22:59:56) >>> import tarfile >>> tar=tarfile.open('cut.tar') >>> tar.extractall() Not nice. The Python library decodes the file, but raises no exception. How to detect unexpected EOF with the Python library? I want to avoid the subprocess module. The parameter errorlevel does not help. I tried errorlevel=1 and errorlevel=2.
I wrote a workaround. It works with my tar files. I guess it does not support all types of objects which can be stored in a tar file.

# -*- coding: utf-8 -*-
from __future__ import absolute_import, division, unicode_literals, print_function
import os
import tarfile


class TarfileWhichRaisesOnEOF(tarfile.TarFile):
    def extractall(self, path=".", members=None):
        super(TarfileWhichRaisesOnEOF, self).extractall(path, members)
        if members is None:
            members = self
        for tarinfo in members:
            if not tarinfo.isfile():
                continue
            # compare the size recorded in the tar header with the bytes
            # actually written to disk
            fname = os.path.join(path, tarinfo.name)
            size_real = os.path.getsize(fname)
            if size_real != tarinfo.size:
                raise tarfile.ExtractError(
                    'Extracting %s: Size does not match. According to tarinfo %s and on disk %s' % (
                        tarinfo, tarinfo.size, size_real))
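A hypothetical usage of the subclass above (the archive and directory names are just examples): opening the truncated archive through the subclass and extracting it now raises tarfile.ExtractError instead of silently succeeding.

# open() is the classmethod inherited from tarfile.TarFile
tar = TarfileWhichRaisesOnEOF.open('cut.tar')
try:
    tar.extractall('outdir')  # raises tarfile.ExtractError on a short member
finally:
    tar.close()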
"pip install --editable ./" vs "python setup.py develop"
Is there any significant difference between pip install -e /path/to/mypackage and the setuptools variant? python /path/to/mypackage/setup.py develop
There is no big difference. With pip install -e for local projects, the "SomeProject.egg-info" directory is created relative to the project path. This is one advantage over just using setup.py develop, which creates the "egg-info" directory relative to the current working directory. More: the pip docs. Also read the setuptools docs.
Deploying a minimal flask app in docker - server connection issues
I have an app whose only dependency is Flask, which runs fine outside Docker and binds to the default port 5000. Here is the full source:

from flask import Flask

app = Flask(__name__)
app.debug = True

@app.route('/')
def main():
    return 'hi'

if __name__ == '__main__':
    app.run()

The problem is that when I deploy this in Docker, the server is running but is unreachable from outside the container. Below is my Dockerfile. The image is Ubuntu with Flask installed. The tar just contains the index.py listed above:

# Dockerfile
FROM dreen/flask
MAINTAINER dreen
WORKDIR /srv

# Get source
RUN mkdir -p /srv
COPY perfektimprezy.tar.gz /srv/perfektimprezy.tar.gz
RUN tar x -f perfektimprezy.tar.gz
RUN rm perfektimprezy.tar.gz

# Run server
EXPOSE 5000
CMD ["python", "index.py"]

Here are the steps I am doing to deploy

$> sudo docker build -t perfektimprezy .

As far as I know the above runs fine, the image has the contents of the tar in /srv. Now, let's start the server in a container:

$> sudo docker run -i -p 5000:5000 -d perfektimprezy
1c50b67d45b1a4feade72276394811c8399b1b95692e0914ee72b103ff54c769

Is it actually running?

$> sudo docker ps
CONTAINER ID        IMAGE                   COMMAND             CREATED             STATUS              PORTS                    NAMES
1c50b67d45b1        perfektimprezy:latest   "python index.py"   5 seconds ago       Up 5 seconds        0.0.0.0:5000->5000/tcp   loving_wozniak

$> sudo docker logs 1c50b67d45b1
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat

Yep, seems like the Flask server is running. Here is where it gets weird. Let's make a request to the server:

$> curl 127.0.0.1:5000 -v
* Rebuilt URL to: 127.0.0.1:5000/
* Hostname was NOT found in DNS cache
*   Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 5000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.35.0
> Host: 127.0.0.1:5000
> Accept: */*
>
* Empty reply from server
* Connection #0 to host 127.0.0.1 left intact
curl: (52) Empty reply from server

Empty reply... But is the process running?

$> sudo docker top 1c50b67d45b1
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                2084                812                 0                   10:26               ?                   00:00:00            python index.py
root                2117                2084                0                   10:26               ?                   00:00:00            /usr/bin/python index.py

Now let's exec into the container and check...

$> sudo docker exec -it 1c50b67d45b1 bash
root@1c50b67d45b1:/srv# netstat -an
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State
tcp        0      0 127.0.0.1:5000          0.0.0.0:*               LISTEN
tcp        0      0 127.0.0.1:47677         127.0.0.1:5000          TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags       Type       State         I-Node   Path
root@1c50b67d45b1:/srv# curl -I 127.0.0.1:5000
HTTP/1.0 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 5447
Server: Werkzeug/0.10.4 Python/2.7.6
Date: Tue, 19 May 2015 12:18:14 GMT

It's fine... but not from the outside :( What am I doing wrong?
The problem is you are only binding to the localhost interface, you should be binding to 0.0.0.0 if you want the container to be accessible from outside. If you change: if __name__ == '__main__': app.run() to if __name__ == '__main__': app.run(host='0.0.0.0') It should work.
List comprehension, check if item is unique
I am trying to write a list comprehension statement that will only add an item if it's not currently contained in the list. Is there a way to check the current items in the list that is currently being constructed? Here is a brief example: Input { "Stefan" : ["running", "engineering", "dancing"], "Bob" : ["dancing", "art", "theatre"], "Julia" : ["running", "music", "art"] } Output ["running", "engineering", "dancing", "art", "theatre", "music"] Code without using a list comprehension output = [] for name, hobbies in input.items(): for hobby in hobbies: if hobby not in output: output.append(hobby) My Attempt [hobby for name, hobbies in input.items() for hobby in hobbies if hobby not in ???]
You can use set and a set comprehension:

{hobby for name, hobbies in input.items() for hobby in hobbies}

As m.wasowski mentioned, we don't use the name here, so we can use input.values() instead:

{hobby for hobbies in input.values() for hobby in hobbies}

If you really need a list as the result, you can do this (but notice that usually you can work with sets without any problem):

list({hobby for hobbies in input.values() for hobby in hobbies})
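If the order of first appearance matters (the explicit loop in the question preserves it, while a plain set does not), a hedged alternative is to use OrderedDict keys as an order-preserving set:

from collections import OrderedDict
from itertools import chain

# keys keep their first-insertion position, so duplicates are dropped
# while the first-appearance order survives
output = list(OrderedDict.fromkeys(chain.from_iterable(input.values())))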
Finding red color using Python & OpenCV
I am trying to extract red color from an image. I have code that applies a threshold to leave only values from the specified range:

import cv2
import numpy as np

img = cv2.imread('img.bmp')
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
lower_red = np.array([0,50,50]) #example value
upper_red = np.array([10,255,255]) #example value
mask = cv2.inRange(img_hsv, lower_red, upper_red)
img_result = cv2.bitwise_and(img, img, mask=mask)

But, as I checked, red can have a Hue value in a range, let's say from 0 to 10, as well as in a range from 170 to 180. Therefore, I would like to leave values from either of those two ranges. I tried setting the threshold from 10 to 170 and using the cv2.bitwise_not function, but then I get all the white color as well. I think the best option would be to create a mask for each range and use them both, so I somehow have to join them together before proceeding. Is there a way I could join two masks using OpenCV? Or is there some other way I could achieve my goal?

Edit. I came up with a not very elegant, but working, solution:

image_result = np.zeros((image_height,image_width,3),np.uint8)

for i in range(image_height): #those are set elsewhere
    for j in range(image_width): #those are set elsewhere
        if img_hsv[i][j][1]>=50 \
            and img_hsv[i][j][2]>=50 \
            and (img_hsv[i][j][0] <= 10 or img_hsv[i][j][0]>=170):
            image_result[i][j]=img_hsv[i][j]

It pretty much satisfies my needs, and OpenCV's functions probably do pretty much the same, but if there's a better way to do that (using some dedicated function and writing less code) please share it with me. :)
I would just add the masks together, and use np.where to mask the original image.

import cv2
import numpy as np

img = cv2.imread("img.bmp")
img_hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# lower mask (0-10)
lower_red = np.array([0,50,50])
upper_red = np.array([10,255,255])
mask0 = cv2.inRange(img_hsv, lower_red, upper_red)

# upper mask (170-180)
lower_red = np.array([170,50,50])
upper_red = np.array([180,255,255])
mask1 = cv2.inRange(img_hsv, lower_red, upper_red)

# join my masks
mask = mask0+mask1

# set my output img to zero everywhere except my mask
output_img = img.copy()
output_img[np.where(mask==0)] = 0

# or your HSV image, which I *believe* is what you want
output_hsv = img_hsv.copy()
output_hsv[np.where(mask==0)] = 0

This should be much faster and much more readable than looping through each pixel of your image.
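A small design note: since mask0 and mask1 select disjoint hue ranges, uint8 addition happens to be safe here, but if you prefer to make the intent explicit (and rule out any overflow concern), OpenCV's bitwise OR performs the same join:

# equivalent join of the two binary masks
mask = cv2.bitwise_or(mask0, mask1)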
django 1.7.8 not sending emails with password reset
Relevant part of urls.py for the project:

from django.conf.urls import include, url, patterns

urlpatterns = patterns('',
    # other ones ...
    url(r'^accounts/password/reset/$', 'django.contrib.auth.views.password_reset',
        {'post_reset_redirect' : '/accounts/password/reset/done/'}),
    url(r'^accounts/password/reset/done/$', 'django.contrib.auth.views.password_reset_done'),
    url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm',
        {'post_reset_redirect' : '/accounts/password/done/'}),
    url(r'^accounts/password/done/$', 'django.contrib.auth.views.password_reset_complete'),
)

And by request, here's the password reset form:

{% extends "site_base.html" %}
{% block title %}Reset Password{% endblock %}
{% block content %}
<p>Please specify your email address to receive instructions for resetting it.</p>
<form action="" method="post">
    <div style="display:none">
        <input type="hidden" value="{{ csrf_token }}" name="csrfmiddlewaretoken">
    </div>
    {{ form.email.errors }}
    <p><label for="id_email">E-mail address:</label> {{ form.email }} <input type="submit" value="Reset password" /></p>
</form>
{% endblock %}

But whenever I navigate to the /accounts/password/reset/ page, fill in the email address, and press enter, the page immediately redirects to /accounts/password/reset/done/ and no email is sent. My relevant settings.py variables:

EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'XXX@gmail.com'
EMAIL_HOST_PASSWORD = 'XXXXXX'
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER

And I know email works because my registration flow with django-registration-redux works flawlessly. Any ideas?
I tried to recreate your situation and I faced the following scenarios:

Mail is only sent to active users.
An email address associated with no user will not get any email (obviously).
I got an error from the form's save method at line 270, for email = loader.render_to_string(email_template_name, c):

NoReverseMatch at /accounts/password/reset/ Reverse for 'password_reset_confirm' with arguments '()' and keyword arguments '{'token': '42h-4e68c02f920d69a82fbf', 'uidb64': b'Mg'}' not found. 0 pattern(s) tried: []

It seems that your urls.py doesn't contain any url named 'password_reset_confirm'. So you should change your url:

url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm',
    {'post_reset_redirect': '/accounts/password/done/'},),

To:

url(r'^accounts/password/reset/(?P<uidb64>[0-9A-Za-z]+)-(?P<token>.+)/$', 'django.contrib.auth.views.password_reset_confirm',
    {'post_reset_redirect': '/accounts/password/done/'}, name='password_reset_confirm'),

If you have set your email configuration perfectly, then you should get emails with no problem. If you are still facing this issue, please use a debugger to check where it's getting exceptions.

PS: I have tested with Django 1.7.8 and the templates reside in: Python34\Lib\site-packages\django\contrib\admin\templates\registration. Urls and views are used as you have written in the question.
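One more debugging aid, an assumption about your workflow rather than part of the fix above: temporarily switching to Django's console email backend in settings.py prints the reset email to stdout, which quickly shows whether the form actually found a matching active user.

# settings.py (for local debugging only)
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'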
How to repeat one number a specific number of times in an array in Python
I have one array like A = [1,2,3] and another array B = [4,5,6]. Now, I need another array C whose elements are the elements of B, each repeated the number of times given by the corresponding element of A. Like, C = [4, 5, 5, 6, 6, 6]
A = [1,2,3] B = [4,5,6] C = [b_item for a_item, b_item in zip(A,B) for _ in range(a_item)] print C Result: [4, 5, 5, 6, 6, 6] This is a one-line equivalent to: C = [] for a_item, b_item in zip(A,B): for _ in range(a_item): C.append(b_item) ... Which is roughly equivalent to C = [] for i in range(min(len(A), len(B))): a_item = A[i] b_item = B[i] for _ in range(a_item): C.append(b_item) (N.B. Don't get tripped up by the underscore. It's an ordinary variable. It is conventionally used when you don't actually have to refer to the variable's value, as in this example)
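As an aside (an alternative, not a correction of the answer above): if you are already working with NumPy arrays, np.repeat performs the same element-wise expansion directly.

import numpy as np

A = [1, 2, 3]
B = [4, 5, 6]
C = np.repeat(B, A)   # each B[i] repeated A[i] times
print list(C)         # [4, 5, 5, 6, 6, 6]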
Asynchronous exception handling in Python
I have the following code using asyncio and aiohttp to make asynchronous HTTP requests.

import sys
import asyncio
import aiohttp

@asyncio.coroutine
def get(url):
    try:
        print('GET %s' % url)
        resp = yield from aiohttp.request('GET', url)
    except Exception as e:
        raise Exception("%s has error '%s'" % (url, e))
    else:
        if resp.status >= 400:
            raise Exception("%s has error '%s: %s'" % (url, resp.status, resp.reason))

    return (yield from resp.text())

@asyncio.coroutine
def fill_data(run):
    url = 'http://www.google.com/%s' % run['name']
    run['data'] = yield from get(url)

def get_runs():
    runs = [ {'name': 'one'}, {'name': 'two'} ]
    loop = asyncio.get_event_loop()
    task = asyncio.wait([fill_data(r) for r in runs])
    loop.run_until_complete(task)
    return runs

try:
    get_runs()
except Exception as e:
    print(repr(e))
    sys.exit(1)

For some reason, exceptions raised inside the get function are not caught:

Future/Task exception was never retrieved
Traceback (most recent call last):
  File "site-packages/asyncio/tasks.py", line 236, in _step
    result = coro.send(value)
  File "mwe.py", line 25, in fill_data
    run['data'] = yield from get(url)
  File "mwe.py", line 17, in get
    raise Exception("%s has error '%s: %s'" % (url, resp.status, resp.reason))
Exception: http://www.google.com/two has error '404: Not Found'

So, what is the correct way to handle exceptions raised by coroutines?
asyncio.wait doesn't actually consume the Futures passed to it, it just waits for them to complete, and then returns the Future objects: coroutine asyncio.wait(futures, *, loop=None, timeout=None, return_when=ALL_COMPLETED) Wait for the Futures and coroutine objects given by the sequence futures to complete. Coroutines will be wrapped in Tasks. Returns two sets of Future: (done, pending). Until you actually yield from the items in the done list, they'll remain unconsumed. Since your program exits without consuming the futures, you see the "exception was never retrieved" messages. For your use-case, it probably makes more sense to use asyncio.gather, which will actually consume each Future, and then return a single Future that aggregates all their results (or raises the first Exception thrown by a future in the input list). def get_runs(): runs = [ {'name': 'one'}, {'name': 'two'} ] loop = asyncio.get_event_loop() tasks = asyncio.gather(*[fill_data(r) for r in runs]) loop.run_until_complete(tasks) return runs Output: GET http://www.google.com/two GET http://www.google.com/one Exception("http://www.google.com/one has error '404: Not Found'",) Note that asyncio.gather actually lets you customize its behavior when one of the futures raises an exception; the default behavior is to raise the first exception it hits, but it can also just return each exception object in the output list: asyncio.gather(*coros_or_futures, loop=None, return_exceptions=False) Return a future aggregating results from the given coroutine objects or futures. All futures must share the same event loop. If all the tasks are done successfully, the returned future’s result is the list of results (in the order of the original sequence, not necessarily the order of results arrival). If return_exceptions is True, exceptions in the tasks are treated the same as successful results, and gathered in the result list; otherwise, the first raised exception will be immediately propagated to the returned future.
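Here is a short sketch of the return_exceptions=True behaviour described in that last quote, adapted to the code from the question (names like fill_data and runs are taken from there): failed futures come back as exception objects in the result list instead of raising.

def get_runs():
    runs = [{'name': 'one'}, {'name': 'two'}]
    loop = asyncio.get_event_loop()
    tasks = asyncio.gather(*[fill_data(r) for r in runs],
                           return_exceptions=True)
    results = loop.run_until_complete(tasks)
    # each entry is either None (fill_data succeeded) or the exception raised
    for run, result in zip(runs, results):
        if isinstance(result, Exception):
            print('%s failed: %r' % (run['name'], result))
    return runs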
How do I extend, mimic, or emulate the range function?
I made a little generator function for character ranges: >>> def crange(start, end): ... for i in range(ord(start), ord(end)+1): ... yield chr(i) ... And then I can do this: >>> print(*crange('a','e')) a b c d e Yay! But this doesn't work: >>> crange('a','e')[::2] Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: 'generator' object is not subscriptable And this works, but is O(n), unlike range's O(1): >>> 'y' in crange('a','z') True That means it takes about 0.35 seconds to search for character number 109,999 out of the maximum of 110,000. 109999 in range(110000) is, of course, fast. At that point, my first thought was to simply subclass range. Unfortunately: >>> class A(range): ... pass ... Traceback (most recent call last): File "<stdin>", line 1, in <module> TypeError: type 'range' is not an acceptable base type So I guess I would have to mimic it in some way that allows me to pass characters as arguments, works like range internally, and produces characters. Unfortunately, I'm not sure how to proceed. I tried a dir(): >>> print(*dir(range), sep='\n') __class__ __contains__ __delattr__ __dir__ __doc__ __eq__ __format__ __ge__ __getattribute__ __getitem__ __gt__ __hash__ __init__ __iter__ __le__ __len__ __lt__ __ne__ __new__ __reduce__ __reduce_ex__ __repr__ __reversed__ __setattr__ __sizeof__ __str__ __subclasshook__ count index start step stop which lets me see what functions are in there, but I'm not sure what they're doing, or how range uses them. I looked for the source for range, but it's in C, and I don't know where to find its Python wrapper (it does have one, right?). Where do I go from here, and should I even go there?
"At that point, my first thought was to simply subclass range." range was a function in Python 2 and is a "final" class in Python 3 (more info here) - in both cases not something you can subclass. You will need to create a class crange that extends object as the base type.

class crange(object):

"And this works, but is O(n), unlike range's O(1)" In Python 3, there is a __contains__ method that you will define for your object.

For objects that don't define __contains__(), the membership test first tries iteration via __iter__(), then the old sequence iteration protocol via __getitem__(), see this section in the language reference.

This allows Python to determine if the value is in your range without actually enumerating the range. For a simple example, if your range is 1 to 1,000,000, it is trivial to determine whether 23546 is in that range (1 < 23546 < 1000000). Of course the actual implementation is a bit more complex and adds the ability to handle step increments etc.

Regarding:

"Yay! But this doesn't work: >>> crange('a','e')[::2]"

In this case you need to define __getitem__ on your object. Here's an example of some of the methods required:

class crange(object):
    def __init__(self, start, end, step=1):
        # initialize your range object
        self.start = start
        self.end = end
        self.step = step

    def __iter__(self):
        # enable iteration over your object
        # (assume step size is 1; the end is inclusive, as in the question)
        for i in range(ord(self.start), ord(self.end)+1):
            yield chr(i)

    def __getitem__(self, i):
        # enable accessing items in your range by index
        # also enable crange('a','e')[::2]
        # (assuming step size of 1)
        if isinstance(i, slice):
            # normalize the slice against our length, then reuse indexing
            return [self[j] for j in range(*i.indices(len(self)))]
        else:
            return chr(ord(self.start) + i)

    def __contains__(self, char):
        # enable O(1) determination of whether a value is in your range
        # (assume step size is 1; inclusive of the end, to match __iter__)
        return ord(self.start) <= ord(char) <= ord(self.end)

    def __len__(self):
        # return length (assume step size is 1; inclusive of the end)
        return ord(self.end) - ord(self.start) + 1
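A quick usage sketch of the class above (assuming the inclusive-end convention from the question):

r = crange('a', 'e')
print(list(r))     # ['a', 'b', 'c', 'd', 'e']
print('c' in r)    # True, via __contains__, without iterating
print(r[1])        # 'b'
print(r[::2])      # ['a', 'c', 'e'], via the slice branch of __getitem__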
Is it possible to use Angular with the Jinja2 template engine?
I have a Flask site and I want to use the Angular JavaScript framework. Unfortunately, it seems as if the delimiters overlap. How do I use Angular with Jinja2 if both rely on double curly braces ({{ expr }})? Is it even possible?
You have some options.

1) Change the delimiter notation for Angular:

var app = angular.module('Application', []);

app.config(['$interpolateProvider', function($interpolateProvider) {
    $interpolateProvider.startSymbol('{a');
    $interpolateProvider.endSymbol('a}');
}]);

Whatever is chosen for the start and end symbols will act as the new delimiters. In this case, you would express a variable to Angular using {a some_variable a}. This approach has the advantage of only needing to be set once and being explicit.

2) Change the delimiter notation for Jinja2. Update jinja_options on the Flask object that you bind to your application (relevant vars: block_start_string, block_end_string, variable_start_string, variable_end_string, comment_start_string, comment_end_string):

jinja_options = app.jinja_options.copy()

jinja_options.update(dict(
    block_start_string='<%',
    block_end_string='%>',
    variable_start_string='%%',
    variable_end_string='%%',
    comment_start_string='<#',
    comment_end_string='#>'
))
app.jinja_options = jinja_options

As there's a higher risk of sensitive data coming un-expanded from the server side, I suggest instead changing the syntax on the front end (i.e. Angular) on any project in which you're not the sole developer.

3) Output a raw block in Jinja2 using {% raw %}:

<ul>
{% raw %}
    {% for item in seq %}
        <li>{{ some_var }}</li>
    {% endfor %}
{% endraw %}
</ul>

4) Use Jinja2 to write the curly braces in the template:

{{ '{{ some_var }}' }}

This will be output as {{ some_var }} in the HTML.

My preference for approach #1 is apparent, but any of the above will work.
Optimizing a Reed-Solomon encoder (polynomial division)
I am trying to optimize a Reed-Solomon encoder, which is in fact simply a polynomial division operation over Galois Fields 2^8 (which simply means that values wrap-around over 255). The code is in fact very very similar to what can be found here for Go: http://research.swtch.com/field The algorithm for polynomial division used here is a synthetic division (also called Horner's method). I tried everything: numpy, pypy, cython. The best performance I get is by using pypy with this simple nested loop: def rsenc(msg_in, nsym, gen): '''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field''' msg_out = bytearray(msg_in) + bytearray(len(gen)-1) lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))]) for i in xrange(len(msg_in)): coef = msg_out[i] # coef = gf_mul(msg_out[i], gf_inverse(gen[0])) // for general polynomial division (when polynomials are non-monic), we need to compute: coef = msg_out[i] / gen[0] if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw) lcoef = gf_log[coef] # precaching for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1) msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j] # Recopy the original message bytes msg_out[:len(msg_in)] = msg_in return msg_out Can a Python optimization wizard guide me to some clues on how to get a speedup? My goal is to get at least a speedup of 3x, but more would be awesome. Any approach or tool is accepted, as long as it is cross-platform (works at least on Linux and Windows). Here is a small test script with some of the other alternatives I tried (the cython attempt is not included since it was slower than native python!): import random from operator import xor numpy_enabled = False try: import numpy as np numpy_enabled = True except ImportError: pass # Exponent table for 3, a generator for GF(256) gf_exp = bytearray([1, 3, 5, 15, 17, 51, 85, 255, 26, 46, 114, 150, 161, 248, 19, 53, 95, 225, 56, 72, 216, 115, 149, 164, 247, 2, 6, 10, 30, 34, 102, 170, 229, 52, 92, 228, 55, 89, 235, 38, 106, 190, 217, 112, 144, 171, 230, 49, 83, 245, 4, 12, 20, 60, 68, 204, 79, 209, 104, 184, 211, 110, 178, 205, 76, 212, 103, 169, 224, 59, 77, 215, 98, 166, 241, 8, 24, 40, 120, 136, 131, 158, 185, 208, 107, 189, 220, 127, 129, 152, 179, 206, 73, 219, 118, 154, 181, 196, 87, 249, 16, 48, 80, 240, 11, 29, 39, 105, 187, 214, 97, 163, 254, 25, 43, 125, 135, 146, 173, 236, 47, 113, 147, 174, 233, 32, 96, 160, 251, 22, 58, 78, 210, 109, 183, 194, 93, 231, 50, 86, 250, 21, 63, 65, 195, 94, 226, 61, 71, 201, 64, 192, 91, 237, 44, 116, 156, 191, 218, 117, 159, 186, 213, 100, 172, 239, 42, 126, 130, 157, 188, 223, 122, 142, 137, 128, 155, 182, 193, 88, 232, 35, 101, 175, 234, 37, 111, 177, 200, 67, 197, 84, 252, 31, 33, 99, 165, 244, 7, 9, 27, 45, 119, 153, 176, 203, 70, 202, 69, 207, 74, 222, 121, 139, 134, 145, 168, 227, 62, 66, 198, 81, 243, 14, 18, 54, 90, 238, 41, 123, 141, 140, 143, 138, 133, 148, 167, 242, 13, 23, 57, 75, 221, 124, 132, 151, 162, 253, 28, 36, 108, 180, 199, 82, 246] * 2 + [1]) # Logarithm table, base 3 gf_log = bytearray([0, 0, 25, 1, 50, 2, 26, 198, 75, 199, 27, 104, 51, 238, 223, # BEWARE: the first entry should be None instead of 0 because it's undefined, but for a bytearray we can't set such a value 3, 100, 4, 224, 14, 52, 141, 129, 239, 76, 113, 8, 200, 248, 105, 
28, 193, 125, 194, 29, 181, 249, 185, 39, 106, 77, 228, 166, 114, 154, 201, 9, 120, 101, 47, 138, 5, 33, 15, 225, 36, 18, 240, 130, 69, 53, 147, 218, 142, 150, 143, 219, 189, 54, 208, 206, 148, 19, 92, 210, 241, 64, 70, 131, 56, 102, 221, 253, 48, 191, 6, 139, 98, 179, 37, 226, 152, 34, 136, 145, 16, 126, 110, 72, 195, 163, 182, 30, 66, 58, 107, 40, 84, 250, 133, 61, 186, 43, 121, 10, 21, 155, 159, 94, 202, 78, 212, 172, 229, 243, 115, 167, 87, 175, 88, 168, 80, 244, 234, 214, 116, 79, 174, 233, 213, 231, 230, 173, 232, 44, 215, 117, 122, 235, 22, 11, 245, 89, 203, 95, 176, 156, 169, 81, 160, 127, 12, 246, 111, 23, 196, 73, 236, 216, 67, 31, 45, 164, 118, 123, 183, 204, 187, 62, 90, 251, 96, 177, 134, 59, 82, 161, 108, 170, 85, 41, 157, 151, 178, 135, 144, 97, 190, 220, 252, 188, 149, 207, 205, 55, 63, 91, 209, 83, 57, 132, 60, 65, 162, 109, 71, 20, 42, 158, 93, 86, 242, 211, 171, 68, 17, 146, 217, 35, 32, 46, 137, 180, 124, 184, 38, 119, 153, 227, 165, 103, 74, 237, 222, 197, 49, 254, 24, 13, 99, 140, 128, 192, 247, 112, 7]) if numpy_enabled: np_gf_exp = np.array(gf_exp) np_gf_log = np.array(gf_log) def gf_pow(x, power): return gf_exp[(gf_log[x] * power) % 255] def gf_poly_mul(p, q): r = [0] * (len(p) + len(q) - 1) lp = [gf_log[p[i]] for i in xrange(len(p))] for j in range(len(q)): lq = gf_log[q[j]] for i in range(len(p)): r[i + j] ^= gf_exp[lp[i] + lq] return r def rs_generator_poly_base3(nsize, fcr=0): g_all = {} g = [1] g_all[0] = g_all[1] = g for i in range(fcr+1, fcr+nsize+1): g = gf_poly_mul(g, [1, gf_pow(3, i)]) g_all[nsize-i] = g return g_all # Fastest way with pypy def rsenc(msg_in, nsym, gen): '''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field''' msg_out = bytearray(msg_in) + bytearray(len(gen)-1) lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))]) for i in xrange(len(msg_in)): coef = msg_out[i] # coef = gf_mul(msg_out[i], gf_inverse(gen[0])) # for general polynomial division (when polynomials are non-monic), the usual way of using synthetic division is to divide the divisor g(x) with its leading coefficient (call it a). In this implementation, this means:we need to compute: coef = msg_out[i] / gen[0] if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw) lcoef = gf_log[coef] # precaching for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1) msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] += msg_out[i] * gen[j] # Recopy the original message bytes msg_out[:len(msg_in)] = msg_in return msg_out # Alternative 1: the loops were completely changed, instead of fixing msg_out[i] and updating all subsequent i+j items, we now fixate msg_out[i+j] and compute it at once using all couples msg_out[i] * gen[j] - msg_out[i+1] * gen[j-1] - ... since when we fixate msg_out[i+j], all previous msg_out[k] with k < i+j are already fully computed. 
def rsenc_alt1(msg_in, nsym, gen): msg_in = bytearray(msg_in) msg_out = bytearray(msg_in) + bytearray(len(gen)-1) lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))]) # Alternative 1 jlist = range(1, len(gen)) for k in xrange(1, len(msg_out)): for x in xrange(max(k-len(msg_in),0), len(gen)-1): if k-x-1 < 0: break msg_out[k] ^= gf_exp[msg_out[k-x-1] + lgen[jlist[x]]] # Recopy the original message bytes msg_out[:len(msg_in)] = msg_in return msg_out # Alternative 2: a rewrite of alternative 1 with generators and reduce def rsenc_alt2(msg_in, nsym, gen): msg_in = bytearray(msg_in) msg_out = bytearray(msg_in) + bytearray(len(gen)-1) lgen = bytearray([gf_log[gen[j]] for j in xrange(len(gen))]) # Alternative 1 jlist = range(1, len(gen)) for k in xrange(1, len(msg_out)): items_gen = ( gf_exp[msg_out[k-x-1] + lgen[jlist[x]]] if k-x-1 >= 0 else next(iter(())) for x in xrange(max(k-len(msg_in),0), len(gen)-1) ) msg_out[k] ^= reduce(xor, items_gen) # Recopy the original message bytes msg_out[:len(msg_in)] = msg_in return msg_out # Alternative with Numpy def rsenc_numpy(msg_in, nsym, gen): msg_in = np.array(bytearray(msg_in)) msg_out = np.pad(msg_in, (0, nsym), 'constant') lgen = np_gf_log[gen] for i in xrange(msg_in.size): msg_out[i+1:i+lgen.size] ^= np_gf_exp[np.add(lgen[1:], msg_out[i])] msg_out[:len(msg_in)] = msg_in return msg_out gf_mul_arr = [bytearray(256) for _ in xrange(256)] gf_add_arr = [bytearray(256) for _ in xrange(256)] # Precompute multiplication and addition tables def gf_precomp_tables(gf_exp=gf_exp, gf_log=gf_log): global gf_mul_arr, gf_add_arr for i in xrange(256): for j in xrange(256): gf_mul_arr[i][j] = gf_exp[gf_log[i] + gf_log[j]] gf_add_arr[i][j] = i ^ j return gf_mul_arr, gf_add_arr # Alternative with precomputation of multiplication and addition tables, inspired by zfec: https://hackage.haskell.org/package/fec-0.1.1/src/zfec/fec.c def rsenc_precomp(msg_in, nsym, gen=None): msg_in = bytearray(msg_in) msg_out = bytearray(msg_in) + bytearray(len(gen)-1) for i in xrange(len(msg_in)): # [i for i in xrange(len(msg_in)) if msg_in[i] != 0] coef = msg_out[i] if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw) mula = gf_mul_arr[coef] for j in xrange(1, len(gen)): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1) #msg_out[i + j] = gf_add_arr[msg_out[i+j]][gf_mul_arr[coef][gen[j]]] # slower... #msg_out[i + j] ^= gf_mul_arr[coef][gen[j]] # faster msg_out[i + j] ^= mula[gen[j]] # fastest # Recopy the original message bytes msg_out[:len(msg_in)] = msg_in # equivalent to c = mprime - b, where mprime is msg_in padded with [0]*nsym return msg_out def randstr(n, size): '''Generate very fastly a random hexadecimal string. Kudos to jcdryer http://stackoverflow.com/users/131084/jcdyer''' hexstr = '%0'+str(size)+'x' for _ in xrange(n): yield hexstr % random.randrange(16**size) # Simple test case if __name__ == "__main__": # Setup functions to test funcs = [rsenc, rsenc_precomp, rsenc_alt1, rsenc_alt2] if numpy_enabled: funcs.append(rsenc_numpy) gf_precomp_tables() # Setup RS vars n = 255 k = 213 import time # Init the generator polynomial g = rs_generator_poly_base3(n) # Init the ground truth mes = 'hello world' mesecc_correct = rsenc(mes, n-11, g[k]) # Test the functions for func in funcs: # Sanity check if func(mes, n-11, g[k]) != mesecc_correct: print func.__name__, ": output is incorrect!" 
# Time the function
        total_time = 0
        for m in randstr(1000, n):
            start = time.clock()
            func(m, n-k, g[k])
            total_time += time.clock() - start
        print func.__name__, ": total time elapsed %f seconds." % total_time

And here is the result on my machine:

With PyPy:

rsenc : total time elapsed 0.108183 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 0.164084 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 0.557697 seconds.

Without PyPy:

rsenc : total time elapsed 3.518857 seconds.
rsenc_alt1 : output is incorrect!
rsenc_alt1 : total time elapsed 5.630897 seconds.
rsenc_alt2 : output is incorrect!
rsenc_alt2 : total time elapsed 6.100434 seconds.
rsenc_numpy : output is incorrect!
rsenc_numpy : total time elapsed 1.631373 seconds

(Note: the alternatives should be correct, some index must be a bit off, but since they are slower anyway I did not try to fix them)

/UPDATE and goal of the bounty: I found a very interesting optimization trick that promises to speed up computations a lot: to precompute the multiplication table. I updated the code above with the new function rsenc_precomp(). However, there's no gain at all in my implementation, it's even a bit slower:

rsenc : total time elapsed 0.107170 seconds.
rsenc_precomp : total time elapsed 0.108788 seconds.

How can it be that array lookups cost more than operations like additions or xor? Why does it work in ZFEC and not in Python? I will attribute the bounty to whoever can show me how to make this multiplication/addition lookup-tables optimization work (faster than the xor and addition operations), or who can explain to me with references or analysis why this optimization cannot work here (using Python/PyPy/Cython/Numpy etc.; I tried them all).
The following is 3x faster than pypy on my machine (0.04s vs 0.15s). Using Cython:

ctypedef unsigned char uint8_t # does not work with Microsoft's C Compiler: from libc.stdint cimport uint8_t
cimport cpython.array as array

cdef uint8_t[::1] gf_exp = bytearray([1, 3, 5, 15, 17, 51, 85, 255, 26, 46, 114, 150, 161, 248, 19,
                                      lots of numbers omitted for space reasons ...])

cdef uint8_t[::1] gf_log = bytearray([0, 0, 25, 1, 50, 2, 26, 198, 75, 199, 27, 104,
                                      more numbers omitted for space reasons ...])

import cython

@cython.boundscheck(False)
@cython.wraparound(False)
@cython.initializedcheck(False)
def rsenc(msg_in_r, nsym, gen_t):
    '''Reed-Solomon encoding using polynomial division, better explained at http://research.swtch.com/field'''

    cdef uint8_t[::1] msg_in = bytearray(msg_in_r) # have to copy, unfortunately - can't make a memory view from a read-only object
    cdef int[::1] gen = array.array('i',gen_t) # convert list to array

    cdef uint8_t[::1] msg_out = bytearray(msg_in) + bytearray(len(gen)-1)

    cdef int j
    cdef uint8_t[::1] lgen = bytearray(gen.shape[0])
    for j in xrange(gen.shape[0]):
        lgen[j] = gf_log[gen[j]]

    cdef uint8_t coef,lcoef
    cdef int i
    for i in xrange(msg_in.shape[0]):
        coef = msg_out[i]
        if coef != 0: # coef 0 is normally undefined so we manage it manually here (and it also serves as an optimization btw)
            lcoef = gf_log[coef] # precaching
            for j in xrange(1, gen.shape[0]): # optimization: can skip g0 because the first coefficient of the generator is always 1! (that's why we start at position 1)
                msg_out[i + j] ^= gf_exp[lcoef + lgen[j]] # equivalent (in Galois Field 2^8) to msg_out[i+j] -= msg_out[i] * gen[j]

    # Recopy the original message bytes
    msg_out[:msg_in.shape[0]] = msg_in
    return msg_out

It is just your fastest version with static types (and checking the html from cython -a until the loops aren't highlighted in yellow).

A few brief notes:

Cython prefers x.shape[0] to len(x)
Defining the memoryviews as [::1] promises they are contiguous in memory, which helps
initializedcheck(False) is necessary for avoiding lots of existence checks on the globally defined gf_exp and gf_log. (You might find you can speed up your basic Python/PyPy code by creating a local variable reference for these and using that instead)
I had to copy a couple of the input arguments. Cython can't make a memoryview from a read-only object (in this case msg_in, a string. I could probably have just made it a char* though). Also gen (a list) needs to be in something with fast element access.

Other than that it's all fairly straight-forward. (Having got it faster, I haven't tried any variations.) I'm really quite impressed at how well PyPy does.
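For completeness, here is a minimal sketch of a setup.py to compile the extension above (the file name rsenc_cy.pyx is hypothetical; name it to match wherever you save the Cython code), built with python setup.py build_ext --inplace:

from distutils.core import setup
from Cython.Build import cythonize

# cythonize() translates the .pyx to C and returns the Extension objects
setup(ext_modules=cythonize("rsenc_cy.pyx"))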
Alternative to `any` that returns the last evaluated object?
I just wrote a bit of code where I wanted to do:

def foo(container):
    return any((some_obj.attr <= 0 for some_obj in container))

where foo would return the first some_obj where some_obj.attr is zero or less. The alternative, I suppose, would be

def foo(container):
    return next((some_obj for some_obj in container if some_obj.attr <= 0), False)

but that feels very hacky. I ended up writing it out, but I don't like how deeply nested it got.

def foo(container):
    for some_obj in container:
        if some_obj.attr <= 0:
            return some_obj
    return False

To clarify: container in this case is likely no more than 6 objects (and often 2 objects), though a general case is more useful. I'm also trying to optimize for ease of reading, not for performance. Is there some better construct than this?
The docs for any explain that it's equivalent to: def any(iterable): for element in iterable: if element: return True return False So, I don't think your code is too deeply nested if it has exactly the same structure as code that's used to illustrate the functionality of any itself. Still, I would probably wrap this up in a function: def first(iterable, predicate): for element in iterable: if predicate(element): return element return False So now you can do this: def foo(container): return first(container, lambda obj: obj.attr <= 0) Or, alternatively, just use a generator expression, and pass it to the two-argument next, as you're already doing: def foo(container): return next((obj for obj in container if obj.attr <= 0), False) That has the same "depth", it's just all horizontal instead of vertical. Or, maybe, pull out the genexpr and name it: def foo(container): nonpositives = (obj for obj in container if obj.attr <= 0) return next(nonpositives, False) How would you choose between them? I think if the predicate is too complicated to read as a lambda but not complicated enough to be worth abstracting out into an out-of-line function, I'd go with the genexpr. Otherwise, the wrapper function. But it's really a matter of taste.
Remove spurious small islands of noise in an image - Python OpenCV
I am trying to get rid of background noise from some of my images. This is the unfiltered image. To filter, I used this code to generate a mask of what should remain in the image:

element = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2))
mask = cv2.erode(mask, element, iterations = 1)
mask = cv2.dilate(mask, element, iterations = 1)
mask = cv2.erode(mask, element)

With this code, when I mask out the unwanted pixels from the original image, what I get is:

As you can see, all the tiny dots in the middle area are gone, but a lot of those coming from the denser area are also gone. To reduce the filtering, I tried changing the second parameter of getStructuringElement() to be (1,1), but doing this gives me the first image as if nothing has been filtered.

Is there any way where I can apply a filter level that is between these 2 levels?

In addition, can anyone explain to me what exactly getStructuringElement() does? What is a "structuring element"? What does it do and how does its size (the second parameter) affect the level of filtering?
A lot of your questions stem from the fact that you're not sure how morphological image processing works, but we can put your doubts to rest. You can interpret the structuring element as the "base shape" to compare to. 1 in the structuring element corresponds to a pixel that you want to look at in this shape and 0 is one you want to ignore. There are different shapes, such as rectangular (as you have figured out with MORPH_RECT), ellipse, circular, etc. As such, cv2.getStructuringElement returns a structuring element for you. The first parameter specifies the type you want and the second parameter specifies the size you want. In your case, you want a 2 x 2 "rectangle"... which is really a square, but that's fine. In a more bastardized sense, you use the structuring element and scan from left to right and top to bottom of your image and you grab pixel neighbourhoods. Each pixel neighbourhood has its centre exactly at the pixel of interest that you're looking at. The size of each pixel neighbourhood is the same size as the structuring element. Erosion For an erosion, you examine all of the pixels in a pixel neighbourhood that are touching the structuring element. If every non-zero pixel is touching a structuring element pixel that is 1, then the output pixel in the corresponding centre position with respect to the input is 1. If there is at least one non-zero pixel that does not touch a structuring pixel that is 1, then the output is 0. In terms of the rectangular structuring element, you need to make sure that every pixel in the structuring element is touching a non-zero pixel in your image for a pixel neighbourhood. If it isn't, then the output is 0, else 1. This effectively eliminates small spurious areas of noise and also decreases the area of objects slightly. The size factors in where the larger the rectangle, the more shrinking is performed. The size of the structuring element is a baseline where any objects that are smaller than this rectangular structuring element, you can consider them as being filtered and not appearing in the output. Basically, choosing a 1 x 1 rectangular structuring element is the same as the input image itself because that structuring element fits all pixels inside it as the pixel is the smallest representation of information possible in an image. Dilation Dilation is the opposite of erosion. If there is at least one non-zero pixel that touches a pixel in the structuring element that is 1, then the output is 1, else the output is 0. You can think of this as slightly enlarging object areas and making small islands bigger. The implications with size here is that the larger the structuring element, the larger the areas of the objects will be and the larger the isolated islands become. What you're doing is an erosion first followed by a dilation. This is what is known as an opening operation. The purpose of this operation is to remove small islands of noise while (trying to) maintain the areas of the larger objects in your image. The erosion removes those islands while the dilation grows back the larger objects to their original sizes. You follow this with an erosion again for some reason, which I can't quite understand, but that's ok. What I would personally do is perform a closing operation first which is a dilation followed by an erosion. Closing helps group areas that are close together into a single object. As such, you see that there are some larger areas that are close to each other that should probably be joined before we do anything else. 
As such, I would do a closing first, then do an opening after so that we can remove the isolated noisy areas. Take note that I'm going to make the closing structuring element size larger as I want to make sure I get nearby pixels and the opening structuring element size smaller so that I don't want to mistakenly remove any of the larger areas. Once you do this, I would mask out any extra information with the original image so that you leave the larger areas intact while the small islands go away. Instead of chaining an erosion followed by a dilation, or a dilation followed by an erosion, use cv2.morphologyEx, where you can specify MORPH_OPEN and MORPH_CLOSE as the flags. As such, I would personally do this, assuming your image is called spots.png: import cv2 import numpy as np img = cv2.imread('spots.png') img_bw = 255*(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) > 5).astype('uint8') se1 = cv2.getStructuringElement(cv2.MORPH_RECT, (5,5)) se2 = cv2.getStructuringElement(cv2.MORPH_RECT, (2,2)) mask = cv2.morphologyEx(img_bw, cv2.MORPH_CLOSE, se1) mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, se2) mask = np.dstack([mask, mask, mask]) / 255 out = img * mask cv2.imshow('Output', out) cv2.waitKey(0) cv2.destroyAllWindows() cv2.imwrite('output.png', out) The above code is pretty self-explanatory. First, I read in the image and then I convert the image to grayscale and threshold with an intensity of 5 to create a mask of what is considered object pixels. This is a rather clean image and so anything larger than 5 seems to have worked. For the morphology routines, I need to convert the image to uint8 and scale the mask to 255. Next, we create two structuring elements - one that is a 5 x 5 rectangle for the closing operation and another that is 2 x 2 for the opening operation. I run cv2.morphologyEx twice for the opening and closing operations respectively on the thresholded image. Once I do that, I stack the mask so that it becomes a 3D matrix and divide by 255 so that it becomes a mask of [0,1] and then we multiply this mask with the original image so that we can grab the original pixels of the image back and maintaining what is considered a true object from the mask output. The rest is just for illustration. I show the image in a window, and I also save the image to a file called output.png, and its purpose is to show you what the image looks like in this post. I get this: Bear in mind that it isn't perfect, but it's much better than how you had it before. You'll have to play around with the structuring element sizes to get something that you consider as a good output, but this is certainly enough to get you started. Good luck!
SSLError: Can't connect to HTTPS URL because the SSL module is not available on google app engine
Want to use wechat sdk to create menu WeChat.create_menu({ "button":[ { "type":"click", "name":"Daily Song", "key":"V1001_TODAY_MUSIC" }, { "type":"click", "name":" Artist Profile", "key":"V1001_TODAY_SINGER" }, { "name":"Menu", "sub_button":[ { "type":"view", "name":"Search", "url":"http://www.soso.com/" }, { "type":"view", "name":"Video", "url":"http://v.qq.com/" }, { "type":"click", "name":"Like us", "key":"V1001_GOOD" }] }] }) Currently not work because of this error: Traceback (most recent call last): File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 267, in Handle result = handler(dict(self._environ), self._StartResponse) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1519, in __call__ response = self._internal_error(e) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1511, in __call__ rv = self.handle_exception(request, response, e) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1505, in __call__ rv = self.router.dispatch(request, response) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1253, in default_dispatcher return route.handler_adapter(request, response) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 1077, in __call__ return handler.dispatch() File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 547, in dispatch return self.handle_exception(e, self.app.debug) File "/base/data/home/runtimes/python27/python27_lib/versions/third_party/webapp2-2.3/webapp2.py", line 545, in dispatch return method(*args, **kwargs) File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechatAPIHandler.py", line 72, in post "key":"V1001_GOOD" File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 355, in create_menu data=menu_data File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 949, in _post **kwargs File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 907, in _request "access_token": self.access_token, File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 849, in access_token self.grant_token() File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 273, in grant_token "secret": self.__appsecret, File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 935, in _get **kwargs File "/base/data/home/apps/s~project-boom/1.384461758981660124/wechat_sdk/basic.py", line 917, in _request **kwargs File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/api.py", line 50, in request response = session.request(method=method, url=url, **kwargs) File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 465, in request resp = self.send(prep, **send_kwargs) File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/sessions.py", line 573, in send r = adapter.send(request, **kwargs) File "/base/data/home/apps/s~project-boom/1.384461758981660124/requests/adapters.py", line 431, in send raise SSLError(e, request=request) SSLError: Can't connect to HTTPS URL because the SSL module is not available. python request module is include in the app engine project. 
Using Python 2.7. I have been looking for ways to solve this problem but have not found a clear solution yet.
If you're using GAE's Sockets, you can get SSL support without any hacks by simply loading the SSL library. Simply add this to your app.yaml file: libraries: - name: ssl version: latest This is documented on Google Cloud's OpenSSL Support documentation.
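As a quick sanity check (my own suggestion, not part of the linked documentation), you can confirm the library was actually loaded from within a handler:

# Hypothetical check: if the ssl library was enabled in app.yaml,
# this import succeeds instead of raising ImportError.
import logging
import ssl
logging.info('SSL available: %s', ssl.OPENSSL_VERSION)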
How can I register a single view (not a viewset) on my router?
I am using Django REST framework and have been trying to create a view that returns a small bit of information, as well as register it on my router. I have four models which store information, and all of them have a created_time field. I am trying to make a view that returns the most recent objects (based on the created_time) in a single view, where only the four creation times are returned. So, a possible JSON output from the view would look like { "publish_updatetime": "2015.05.20 11:53", "meeting_updatetime": "2015.05.20 11:32", "training_updatetime": "2015.05.20 15:25", "exhibiting_updatetime": "2015.05.19 16:23" } I am also hoping to register this view on my router, so it appears with the rest of my endpoints when the API root is loaded. router.register(r'updatetime', views.UpdateTimeView) Here are the four models that I am trying to work with class Publish(models.Model): user = models.ForeignKey(MyUser) name = models.CharField(max_length=50) created_time = models.DateTimeField( default=datetime.now) class Meeting(models.Model): user = models.ForeignKey(MyUser) name = models.CharField(max_length=50) file_addr = models.FileField(upload_to=get_file_path) created_time = models.DateTimeField(default=datetime.now) class Training(models.Model): user = models.ForeignKey(MyUser) name = models.CharField(max_length=50) image = models.ImageField(upload_to=get_file_path, max_length=255) created_time = models.DateTimeField(default=datetime.now) class Exhibiting(models.Model): user = models.ForeignKey(MyUser) name = models.CharField(max_length=50) file_addr = models.FileField(upload_to=get_file_path) created_time = models.DateTimeField(default=datetime.now) Is it possible to do this? And how would it be done?
Routers work with a ViewSet and aren't designed for normal views, but that doesn't mean that you cannot use them with a normal view. Normally they are used with models (and a ModelViewSet), but they can be used without them using the GenericViewSet (if you would normally use a GenericAPIView) and ViewSet (if you would just use an APIView). For a list view, the request methods are mapped to ViewSet methods like this GET -> list(self, request, format=None) POST- > create(self, request, format=None) For detail views (with a primary key in the url), the request methods use the following map GET -> retrieve(self, request, pk, format=None) PUT -> update(self, request, pk, format=None) PATCH -> partial_update(self, request, pk, format=None) DELETE -> destroy(self, request, pk, format=None) So if you want to use any of these request methods with your view on your router, you need to override the correct view method (so list() instead of get()). Now, specifically in your case you would have normally use an APIView that looked like class UpdateTimeView(APIView): def get(self, request, format=None): latest_publish = Publish.objects.latest('created_time') latest_meeting = Meeting.objects.latest('created_time') latest_training = Training.objects.latest('created_time') latest_exhibiting = Exhibiting.objects.latest('created_time') return Response({ "publish_updatetime": latest_publish.created_time, "meeting_updatetime": latest_meeting.created_time, "training_updatetime": latest_training.created_time, "exhibiting_updatetime": latest_exhibiting.created_time, }) The comparable ViewSet would be class UpdateTimeViewSet(ViewSet): def list(self, request, format=None): latest_publish = Publish.objects.latest('created_time') latest_meeting = Meeting.objects.latest('created_time') latest_training = Training.objects.latest('created_time') latest_exhibiting = Exhibiting.objects.latest('created_time') return Response({ "publish_updatetime": latest_publish.created_time, "meeting_updatetime": latest_meeting.created_time, "training_updatetime": latest_training.created_time, "exhibiting_updatetime": latest_exhibiting.created_time, }) Notice the two required changes: APIView -> ViewSet and get -> list. I also updated the name to indicate that it was more than just a normal view (as a ViewSet cannot be initialized the same way), but that's not required. So with this new view, you can just register it in the router the same way as any other. You need a base_name here so the url names can be generated (normally this would pull from the queryset). router.register(r'updatetime', views.UpdateTimeViewSet, base_name='updatetime') So now the updatetime endpoint will be made available in the API root and you can get the latest times by just calling the endpoint (a simple GET request).
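For completeness, here is a minimal sketch of how that registration might be wired into the URL configuration - the module and URL prefix are assumptions for illustration, so adjust them to your project:

# urls.py -- hypothetical wiring for the ViewSet above
from django.conf.urls import include, url
from rest_framework.routers import DefaultRouter
from myapp import views

router = DefaultRouter()
router.register(r'updatetime', views.UpdateTimeViewSet, base_name='updatetime')

urlpatterns = [
    url(r'^api/', include(router.urls)),
]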
ImportError: No module named django.core.management when using manage.py
I'm trying to run python manage.py runserver on a Django application I have and I get this error: Traceback (most recent call last): File "manage.py", line 8, in <module> from django.core.management import execute_from_command_line ImportError: No module named django.core.management Here is the output of pip freeze | grep -i django to show I do in fact have Django installed: Django==1.6.5 django-cached-authentication-middleware==0.2.0 django-cors-headers==1.1.0 django-htmlmin==0.7.0 django-static-precompiler==0.9 djangorestframework==2.3.14 Also, trying to run /usr/local/bin/python2.7 manage.py runserver yields the same error.
Possible issues that may cause your problem: PYTHONPATH is not well configured, to configure it you should do: export PYTHONPATH=/usr/local/lib/python2.7/site-packages You forgot the line #!/usr/bin/env python at the beginning of manage.py If you're working on virtualenv you forgot to activate the virtual env to execute manage.py commands (You may have installed Django on your system but not on your virtualenv) source path/to/your/virtualenv/bin/activate or workon env_name You have Python 2.7 and Python 3.4 messing with the package You're using a very old Python 2.4 and you should tell the system to use your Python 2.7 with: alias python=python2.7 Some times reinstalling/upgrading Django fix some of those issues. You may want to execute python -c "import django; print(django.get_version())" to check if Django is installed on your PC or your virtualenv if you're using one You can find some other solutions in other similar questions: Django import error Django uwsgi error Django module error
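One more diagnostic worth adding (my own suggestion): ask the interpreter itself where it lives and whether it can see Django, which quickly exposes a PYTHONPATH or virtualenv mismatch:

# Run this with the same interpreter you use for manage.py
import sys
print(sys.executable)    # which Python binary is actually running
import django            # raises ImportError if this interpreter can't see Django
print(django.get_version())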
Is it possible to implement Python yield functionality in freestanding C?
I recently came across the yield keyword in Python (as well as JavaScript) - I understand that this is primarily used for the generator pattern, but the language construct seems to be used in asynchronous functions as well, where my interests lie. In asynchronous functions it may merely act as syntactic sugar, and I know there are alternative patterns to achieve the same effect - But I like it - A LOT! I want to know if I can do something similar in C (even with inline assembly). I came across a Java implementation using threads https://github.com/mherrmann/java-generator-functions which I can more or less implement in C. However this won't be a freestanding implementation, and my interest is purely in a freestanding implementation. Coming to C co-routines (http://www.chiark.greenend.org.uk/~sgtatham/coroutines.html), one of the deficits is that stack objects can't be used. However I am still OK with this since current asynchronous callback implementations also can't use the stack. However the problem lies in a freestanding implementation - I can't think of a way to collect all the register variables and store them without a hosted environment. There's probably a solution using setjmp/longjmp, however I am pretty sure these can't be implemented freestanding. So the question is: Is it possible to implement Python yield functionality in freestanding C? Personally I think I have exhausted the possibilities, so I'll ask this - If you could have a hosted implementation, how would you implement it (preferably with some macro magic)? I have a fairly ugly implementation which I'll post later on if nothing cool comes up. Also I don't want C++ implementations - Unless you can wrap around the C++ with pure C functions. EDIT: A basic requirement is that the generator function has to be re-entrant.
Iterators in Python follow this pattern: You call them (with arguments) and they return an object. You call that object's .next() or .__next__() method repeatedly and it runs through the iterator. We can do something similar: #include <stdlib.h> /* for malloc/free */ typedef struct iterator{ int yield_position; /* Where to jump to */ void *yield_state; /* opaque container for local variables */ void *(*next)(struct iterator*); /* Function taking "this" argument returning a pointer to whatever we yielded */ } iterator; void *do_generator(iterator *this); /* forward declaration */ iterator *make_generator(/* arguments? */){ iterator *result = malloc(sizeof(iterator)); /* Caller frees */ result->yield_position = 0; /* Optionally allocate/initialize yield_state here */ result->next = do_generator; return result; } void *do_generator(iterator *this){ struct whatever *result; switch(this->yield_position){ case 0: /* Do something that fills in result */ this->yield_position = 1; /* Save local variables to this->yield_state if necessary */ return (void *) result; case 1: /* Initialize local variables from this->yield_state */ /* Etc.*/ } return NULL; /* generator exhausted */ } void free_generator(iterator *iter){ /* Free iter->yield_state if necessary */ free(iter); } Note that the function pointer field must be declared with the struct tag (struct iterator*), because the iterator typedef is not complete at that point, and do_generator needs a forward declaration since make_generator refers to it. Since case labels can be used just about everywhere, the switch should be able to e.g. jump into the middle of a loop if necessary. You'll probably still need to re-initialize loop variables etc. It's called like this: iterator *iter = make_generator(/* arguments? */); struct whatever *foo = iter->next(iter); /* etc. */ free_generator(iter); Passing the this argument by hand gets tedious, so define a macro: #define NEXT(iter) ((iter)->next(iter))
Shift elements in a numpy array
Following-up from this question years ago, is there a canonical "shift" function in numpy? I don't see anything from the documentation. Here's a simple version of what I'm looking for: def shift(xs, n): if n >= 0: return np.r_[np.full(n, np.nan), xs[:-n]] else: return np.r_[xs[-n:], np.full(-n, np.nan)] Using this is like: In [76]: xs Out[76]: array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) In [77]: shift(xs, 3) Out[77]: array([ nan, nan, nan, 0., 1., 2., 3., 4., 5., 6.]) In [78]: shift(xs, -3) Out[78]: array([ 3., 4., 5., 6., 7., 8., 9., nan, nan, nan]) This question came from my attempt to write a fast rolling_product yesterday. I needed a way to "shift" a cumulative product and all I could think of was to replicate the logic in np.roll(). It turns out np.concatenate() is much faster than np.r_[], so this version of the function performs a lot better: def shift(xs, n): if n >= 0: return np.concatenate((np.full(n, np.nan), xs[:-n])) else: return np.concatenate((xs[-n:], np.full(-n, np.nan))) An even faster version simply pre-allocates the array: def shift(xs, n): e = np.empty_like(xs) if n >= 0: e[:n] = np.nan e[n:] = xs[:-n] else: e[n:] = np.nan e[:n] = xs[-n:] return e
Not numpy but scipy provides exactly the shift functionality you want, import numpy as np from scipy.ndimage.interpolation import shift xs = np.array([ 0., 1., 2., 3., 4., 5., 6., 7., 8., 9.]) shift(xs, 3, cval=np.NaN) where default is to bring in a constant value from outside the array with value cval, set here to nan. This gives the desired output, array([ nan, nan, nan, 0., 1., 2., 3., 4., 5., 6.]) and the negative shift works similarly, shift(xs, -3, cval=np.NaN) Provides output array([ 3., 4., 5., 6., 7., 8., 9., nan, nan, nan])
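One caveat worth noting (my own observation, not from the original answer): NaN only exists for floating-point dtypes, so an integer array has to be cast before shifting with cval=np.NaN. A small sketch:

import numpy as np
from scipy.ndimage.interpolation import shift

xs_int = np.arange(10)  # integer dtype, cannot hold NaN
shifted = shift(xs_int.astype(float), 3, cval=np.NaN)  # cast to float first
print(shifted)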
Error " 'dict' object has no attribute 'iteritems' " when trying to use NetworkX's write_shp()
I'm trying to use NetworkX to read a Shapefile and use the function write_shp() to generate the Shapefiles that will contain the nodes and edges (following this example - https://networkx.github.io/documentation/latest/reference/readwrite.nx_shp.html), but when I try to run the code it gives me the following error: Traceback (most recent call last): File "C:/Users/Felipe/PycharmProjects/untitled/asdf.py", line 4, in <module> nx.write_shp(redVial, "shapefiles") File "C:\Python34\lib\site-packages\networkx\readwrite\nx_shp.py", line 192, in write_shp for key, data in e[2].iteritems(): AttributeError: 'dict' object has no attribute 'iteritems' I'm using Python 3.4 and installed NetworkX via pip install. Before this error it had already given me another one that said "xrange does not exist" or something like that, so I looked it up and just changed 'xrange' to 'range' in the nx_shp.py file, which seemed to solve it. From what I've read it could be related to the Python version (Python2 vs Python3).
As you are on Python 3, use dict.items() instead of dict.iteritems(). iteritems() was removed in Python 3, so you can't use this method anymore. Take a look at the Python Wiki (Link). In the Built-in Changes section, it is stated that: Removed dict.iteritems(), dict.iterkeys(), and dict.itervalues(). Instead: use dict.items(), dict.keys(), and dict.values() respectively.
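If the goal is to patch code like nx_shp.py so that it runs on both Python 2 and 3, here is a rough sketch of the two usual options (e[2] is the edge-data dict from the traceback, and process is a hypothetical stand-in for the loop body):

# Option 1: plain .items() works on both versions
# (Python 2 builds a list, Python 3 returns a view)
for key, data in e[2].items():
    process(key, data)

# Option 2: six gives you a lazy iterator on Python 2 as well
import six
for key, data in six.iteritems(e[2]):
    process(key, data)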
Functional Breadth First Search
Functional depth first search is lovely in directed acyclic graphs. In graphs with cycles however, how do we avoid infinite recursion? In a procedural language I would mark nodes as I hit them, but let's say I can't do that. A list of visited nodes is possible, but will be slow because using one will result in a linear search of that list before recurring. A better data structure than a list here would obviously help, but that's not the aim of the game, because I'm coding in ML - lists are king, and anything else I will have to write myself. Is there a clever way around this issue? Or will I have to make do with a visited list or, god forbid, mutable state?
One option is to use inductive graphs, which are a functional way of representing and working with arbitrary graph structures. They are provided by Haskell's fgl library and described in "Inductive Graphs and Funtional Graph Algorithms" by Martin Erwig. For a gentler introduction (with illustrations!), see my blog post Generating Mazes with Inductive Graphs. The trick with inductive graphs is that they let you pattern match on graphs. The common functional idiom for working with lists is to decompose them into a head element and the rest of the list, then recurse on that: map f [] = [] map f (x:xs) = f x : map f xs Inductive graphs let you do the same thing, but for graphs. You can decompose an inductive graph into a node, its edges and the rest of the graph. Here we match on the node 1 and all of its edges (highlighted in blue), separate from the rest of the graph. This lets us write a map for graphs (in Haskellish pseudocode that can be realized with pattern synonyms): gmap f Empty = Empty gmap f ((in, node, out) :& rest) = f (in, node, out) :& gmap f rest The main shortcoming of this approach as opposed to lists is that graphs do not have a single natural way to decompose: the same graph can be built up in multiple ways. The map code above would visit all the vertices, but in an arbitrary (implementation-dependent) order. To overcome this, we add another construct: a match function that takes a specific node. If that node is in our graph, we get a successful match just like above; if it isn't, the whole match fails. This construct is enough to write a DFS or a BFS—with elegant code that looks almost identical for both! Instead of manually marking nodes as visited, we just recurse on the rest of the graph except the node we're seeing now: at each step, we're working with a smaller and smaller portion of the original graph. If we try to access a node we've already seen with match, it won't be in the remaining graph and that branch will fail. This lets our graph-processing code look just like our normal recursive functions over lists. Here's a DFS for this sort of graph. It keeps the stack of nodes to visit as a list (the frontier), and takes the initial frontier to start. The output is a list of nodes traversed in order. (The exact code here can't be written with the library directly without some custom pattern synonyms.) dfs _frontier Empty = [] dfs [] _graph = [] dfs (n:ns) (match n -> Just (ctx, rest)) = -- not visited n dfs (neighbors' ctx ++ ns) rest dfs (n:ns) graph = -- visited n dfs ns graph A pretty simple recursive function. To turn it into a breadth-first search, all we have to do is replace our stack frontier with a queue: instead of putting the neighbors on the front of the list, we put them on the back: bfs _frontier Empty = [] bfs [] _graph = [] bfs (n:ns) (match n -> Just (ctx, rest)) = -- not visited n bfs (ns ++ neighbors' ctx) rest bfs (n:ns) graph = -- visited n bfs ns graph Yep, that's all we need! We don't have to do anything special to keep track of the nodes we visited as we recurse over the graph, just like we don't have to keep track of the list cells we've visited: each time we recurse, we're only getting the part of the graph we haven't seen.
Padding or truncating a Python list
I'd like to truncate or pad a list. E.g. for size 4: [1,2,3] -> [1,2,3,0] [1,2,3,4,5] -> [1,2,3,4] I can see a couple of ways: def trp(l, n): """ Truncate or pad a list """ r = l[:n] if len(r) < n: r.extend([0] * (n - len(r))) return r Or a shorter, but less efficient: map(lambda x, y: x if x else 0, m[0:n], [0] * n) Is there a more elegant way of doing this?
You can use the itertools module to make it completely lazy, like this >>> from itertools import repeat, chain, islice >>> def trimmer(seq, size, filler=0): ... return islice(chain(seq, repeat(filler)), size) ... >>> list(trimmer([1, 2, 3], 4)) [1, 2, 3, 0] >>> list(trimmer([1, 2, 3, 4, 5], 4)) [1, 2, 3, 4] Here, we chain the actual sequence with an infinite repeater of the filler value, and then we slice the chained iterator to size. So, if the sequence has fewer elements than size, chain will start consuming the repeat. If the sequence has at least size elements, then chain will not even have to use the repeat. The main advantage of this method is that the complete trimmed or padded list is not created in memory unless asked for. So, if all you are going to do is to iterate it, then you can simply iterate it like this >>> for item in trimmer([1, 2, 3, 4, 5], 4): ... print(item * 2) ... ... 2 4 6 8 Or, if you want to use it with another trimmed or padded list, then you can still do that without creating an actual list, like this >>> for item in chain(trimmer([1, 2, 3], 4), trimmer([1, 2, 3, 4, 5], 4)): ... print(item, item * 2) ... ... 1 2 2 4 3 6 0 0 1 2 2 4 3 6 4 8 Laziness Rocks ;-)
Interact with celery ongoing task
We have a distributed architecture based on rabbitMQ and Celery. We can launch in parallel multiple tasks without any issue. The scalability is good. Now we need to control the task remotely: PAUSE, RESUME, CANCEL. The only solution we found is to make in the Celery task a RPC call to another task that replies the command after a DB request. The Celery task and RPC task are not on the same machine and only the RPC task has access to the DB. Do you have any advice how to improve it and easily communicate with an ongoing task? Thank you EDIT: In fact we would like to do something like in the picture below. It's easy to do the Blue configuration or the Orange, but we don't know how to do both simultaneously. Workers are subscribing to a common Jobs queue and each worker has its own Admin queue declared on an exchange. EDIT: IF this is not possible with Celery, I'am open to a solution with other frameworks like python-rq.
This looks like the Control Bus pattern. For better scalability, and in order to reduce the number of RPC calls, I recommend reversing the logic: the PAUSE, RESUME and CANCEL commands are pushed to the Celery tasks through a control bus when the state change occurs, and the Celery app stores its current state in a store (which could be in memory, on the filesystem, etc.). If task states must be kept even after a stop/start of the app, it will involve more work to keep both apps synchronized (e.g. synchronization at startup).
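To make the reversed logic concrete, here is a rough sketch of a worker-side task that polls a shared flag between work units instead of issuing RPC calls. The Redis store, key names and do_work function are all assumptions for illustration, not part of the original setup:

import time
import redis
from celery import Celery

app = Celery('tasks', broker='amqp://localhost')
redis_client = redis.StrictRedis()

@app.task(bind=True)
def long_job(self, job_id):
    for chunk in range(100):  # pretend work units
        state = redis_client.get('job:%s:state' % job_id) or b'RUN'
        if state == b'CANCEL':
            return 'cancelled'
        while state == b'PAUSE':  # wait until resumed
            time.sleep(1)
            state = redis_client.get('job:%s:state' % job_id) or b'RUN'
        do_work(chunk)  # hypothetical work function
    return 'done'

The admin side then pauses, resumes or cancels a job with a single write such as redis_client.set('job:42:state', 'PAUSE'), and only the worker ever reads the flag.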
Python: openpyxl how to read a cell font color
I have tried to print some_cell.font.color.rgb and got various results. For some I got what I want (like "FF000000"), but for others it gives me Value must be type 'basetring'. I assume that the latter is because I haven't actually defined the font color for these cells. I'm using openpyxl 2.2.2
I think this is a bug in openpyxl and I think you should report it here. Debugging the following code (with trepan of course): from openpyxl import Workbook wb = Workbook() ws = wb.active c = ws['A4'] # cell gets created here print(ws['A4'].font.color) I get: Color(rgb=Value must be type 'str', indexed=Value must be type 'int', auto=Value must be type 'bool', theme=1, tint=0.0, type='theme') and this is coming from __repr__() of the Typed() class in file openpyxl/descriptors/base.py. This message is given when a value hasn't been initialized. Notice that "indexed" and "auto" also haven't been set. But these presumably should have been set when the code for the access of ws['a4'] was performed. Note: the slight difference in message, 'str' instead of 'basestring', is probably attributable to the fact that I was using Python 3 or, less likely, openpyxl 2.2.3. And if there's some additional code that I should have added in my example, then at least https://openpyxl.readthedocs.org/en/latest/index.html should indicate that. See also openpyxl cell style not reporting correctly where one of the developers seems to say the same thing in so many words. Edit: A couple of other things may be of interest to note. First, you can set a value and then read it, for example you can do this: c.font.color.rgb = "FF000000" Second, if you test c.font.color.rgb in a boolean context, it will look like a value has been set. That is, if c.font.color: print("yes") will print "yes".
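While the bug stands, a defensive read keeps your own code from tripping over unset colors. This is my own sketch based on the descriptor behaviour discussed above, so treat the attribute checks as assumptions:

def safe_font_rgb(cell, default=None):
    # Only trust .rgb when the color object exists and reports its type
    # as 'rgb'; theme/indexed/unset colors fall through to the default.
    color = cell.font.color
    if color is not None and color.type == 'rgb':
        return color.rgb
    return default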
Is it possible to modify the behavior of len()?
I'm aware of creating a custom __repr__ or __add__ method (and so on), to modify the behavior of operators and functions. Is there a method override for len? For example: class Foo: def __repr__(self): return "A wild Foo Class in its natural habitat." foo = Foo() print(foo) # A wild Foo Class in its natural habitat. print(repr(foo)) # A wild Foo Class in its natural habitat. Could this be done for len, with a list? Normally, it would look like this: foo = [] print(len(foo)) # 0 foo = [1, 2, 3] print(len(foo)) # 3 What if I want to leave search types out of the count? Like this: class Bar(list): pass foo = [Bar(), 1, ''] print(len(foo)) # 3 count = 0 for item in foo: if not isinstance(item, Bar): count += 1 print(count) # 2 Is there a way to do this from within a list subclass?
Yes, implement the __len__ method: def __len__(self): return 42 Demo: >>> class Foo(object): ... def __len__(self): ... return 42 ... >>> len(Foo()) 42 From the documentation: Called to implement the built-in function len(). Should return the length of the object, an integer >= 0. Also, an object that doesn’t define a __bool__() method and whose __len__() method returns zero is considered to be false in a Boolean context. For your specific case: >>> class Bar(list): ... def __len__(self): ... return sum(1 for ob in self if not isinstance(ob, Bar)) ... >>> len(Bar([1, 2, 3])) 3 >>> len(Bar([1, 2, 3, Bar()])) 3
Check if argparse optional argument is set or not
I would like to check whether an optional argparse argument has been set by the user or not. Can I safely check using isset? Something like this: if(isset(args.myArg)): #do something else: #do something else Does this work the same for float / int / string type arguments? I could set a default parameter and check it (e.g., set myArg = -1, or "" for a string, or "NOT_SET"). However, the value I ultimately want to use is only calculated later in the script. So I would be setting it to -1 as a default, and then updating it to something else later. This seems a little clumsy in comparison with simply checking if the value was set by the user.
I think that optional arguments (specified with --) are initialized to None if they are not supplied. So you can test with is not None. Try the example below: import argparse as ap def main(): parser = ap.ArgumentParser(description="My Script") parser.add_argument("--myArg") args, leftovers = parser.parse_known_args() if args.myArg is not None: print "myArg has been set (value is %s)" % args.myArg
Using both Python 2.x and Python 3.x in IPython Notebook
I use IPython notebooks and would like to be able to select to create a 2.x or 3.x python notebook in IPython. I initially had Anaconda. With Anaconda a global environment variable had to be changed to select what version of python you want and then IPython could be started. This is not what I was looking for so I uninstalled Anaconda and now have set up my own installation using MacPorts and PiP. It seems that I still have to use port select --set python <python version> to toggle between python 2.x and 3.x. which is no better than the anaconda solution. Is there a way to select what version of python you want to use after you start an IPython notebook, preferably with my current MacPorts build?
The idea here is to install multiple ipython kernels. Here are instructions for anaconda. If you are not using anaconda, I recently added instructions using pure virtualenvs. Anaconda 4.1.0 Since version 4.1.0, anaconda includes a special package nb_conda_kernels that detects conda environments with notebook kernels and automatically registers them. This makes using a new python version as easy as creating new conda environments: conda create -n py27 python=2.7 ipykernel conda create -n py35 python=3.5 ipykernel After a restart of jupyter notebook, the new kernels are available over the graphical interface. Please note that new packages have to be explicitly installed into the new environments. The Managing environments section in conda's docs provides further information. Manually registering kernels Users who do not want to use nb_conda_kernels or still use older versions of anaconda can use the following steps to manually register ipython kernels. configure the python2.7 environment: conda create -n py27 python=2.7 source activate py27 conda install notebook ipykernel ipython kernel install --user configure the python3.5 environment: conda create -n py35 python=3.5 source activate py35 conda install notebook ipykernel ipython kernel install --user After that you should be able to choose between python2 and python3 when creating a new notebook in the interface. Additionally you can pass the --name and --display-name options to ipython kernel install if you want to change the names of your kernels. See ipython kernel install --help for more information.
Installing lxml, libxml2, libxslt on Windows 8.1
After additional exploration, I found a solution to installing lxml with pip and wheel. Additional comments on approach welcomed. I'm finding the existing Python documentation for Linux distributions excellent. For Windows... not so much. I've configured my Linux system fine but I need some help getting a Windows 8.1 tablet ready as well. My project requires the lxml module for Python 3.4. I've found many tutorials on how to install lxml but each has failed. https://docs.python.org/3/installing/ I've downloaded the "get-pip.py" and successfully ran it from the Windows cmd line with the result: Requirement already up-to-date: pip in c:\python34\lib\site-packages So I don't think that I have a "pip" problem. From there I've run the following from the Windows cmd line: python -m pip install lxml A long list of commands scroll through but there are a couple of lines that appear to be errors that I'm unable to resolve. ERROR: b"'xslt-config' is not recognized as an internal or external command,\r\noperable program or batch file.\r\n" ** make sure the development packages of libxml2 and libxslt are installed ** and Failed building wheel for lxml And the last part, all in red Command "C:\Python34\python.exe -c "import setuptools, tokenize;__file__='C:\\Users\\Owner\\AppData\\Local\\Temp\\pip-build-ya3n6wkd\\lxml\\setup.py';exec(compi le(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record C:\Users\Owner\AppData\Local\Temp\pip-ytybzl9l-r ecord\install-record.txt --single-version-externally-managed --compile" failed with error code 1 in C:\Users\Owner\AppData\Local\Temp\pip-build-ya3n6wkd\lxml Any assistance in understanding how this should work, or what I'm doing wrong would be greatly appreciated.
I was able to fix the installation with the following steps. I hope others find this helpful. My installation of "pip" was working fine before the problem. I went to the Windows command line and made sure that "wheel" was installed. C:\Python34>python -m pip install wheel Requirement already satisfied (use --upgrade to upgrade): wheel in c:\python34\lib\site-packages After that I downloaded the lxml file from http://www.lfd.uci.edu/~gohlke/pythonlibs/#lxml and placed it in my python directory "C:\Python34" In that directory I ran the following: C:\Python34>python -m pip install lxml-3.4.4-cp34-none-win32.whl The results were: Processing c:\python34\lxml-3.4.4-cp34-none-win32.whl Installing collected packages: lxml Successfully installed lxml-3.4.4 I opened PyCharm and lxml module was available. I was able to execute the code without problem. What I learned (though this may be corrected by others more knowledgeable) Need to install the desired module (as a "*.whl" file) using pip and wheel. Using Dropbox to share a code folder with different PyCharm installations causes confusion for the "workspace.xml" file. The two computers kept writing over each other, messing up the installation paths. Hope this helps.
Turning string with embedded brackets into a dictionary
What's the best way to build a dictionary from a string like the one below: "{key1 value1} {key2 value2} {key3 {value with spaces}}" So the key is always a string with no spaces but the value is either a string or a string in curly brackets (it has spaces)? How would you dict it into: {'key1': 'value1', 'key2': 'value2', 'key3': 'value with spaces'}
You can try this: import re x = "{key1 value1} {key2 value2} {key3 {value with spaces}}" print dict(re.findall(r"\{(\S+)\s+\{*(.*?)\}+", x)) Output: {'key3': 'value with spaces', 'key2': 'value2', 'key1': 'value1'} Here, with re.findall we extract each key and its value. re.findall returns a list of tuples of all key/value pairs, and calling dict on that list of tuples produces the final answer. Read more here.
open cv error: (-215) scn == 3 || scn == 4 in function cvtColor
I'm currently in Ubuntu 14.04, using python 2.7 and cv2. When I run this code: import numpy as np import cv2 img = cv2.imread('2015-05-27-191152.jpg',0) gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) it returns: File "face_detection.py", line 11, in <module> gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) cv2.error: /home/arthurckl/Desktop/opencv-3.0.0-rc1/modules/imgproc/src/color.cpp:7564: error: (-215) scn == 3 || scn == 4 in function cvtColor I already searched here and one answer said that I could be loading my photo the wrong way, because it should have 3 dimensions: rows, columns and depth. When I print the img.shape it returns only two numbers, so I must be doing it wrong. But I don't know the right way to load my photo.
Give the full path of the image, with forward slashes; that solved the error for me, e.g. import numpy as np import cv2 img = cv2.imread('C:/Python34/images/2015-05-27-191152.jpg') gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) Also, if you pass 0 as the second parameter when loading the image with cv2.imread, there is no need to convert the image with cvtColor - it is already loaded as a grayscale image, e.g. import numpy as np import cv2 gray = cv2.imread('C:/Python34/images/2015-05-27-191152.jpg',0)
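The reason the path matters so much: cv2.imread does not raise on a bad path, it silently returns None, and feeding None (or an already single-channel image) into cvtColor is what produces the scn == 3 || scn == 4 assertion. A small guard (my own suggestion) makes the failure obvious:

import cv2

img = cv2.imread('C:/Python34/images/2015-05-27-191152.jpg')
if img is None:
    raise IOError('imread failed - check the file path')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)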
Returning a String from Rust function to Python
I'm very new to Rust. How would I return a String from a Rust function that can be used in Python? Here is my Rust implementation: use std::ffi::CString; #[no_mangle] pub extern fn query() -> CString { let s = CString::new("Hello!").unwrap(); return s; } And the Python code that calls it: from ctypes import cdll, c_char_p lib = cdll.LoadLibrary("target/release/libtest.so") result = lib.query() print(c_char_p(result).value) I get a segmentation fault when its run. EDIT: Using Vladimir Matveev's Rust code below I was able to get it to work with the changes to my python code: from ctypes import * lib = cdll.LoadLibrary("target/release/libtest.so") lib.query.restype = c_char_p result = lib.query() print cast(result, c_char_p).value lib.free_query(result)
The most direct version would be this: use libc::c_char; use std::ffi::CString; use std::mem; #[no_mangle] pub extern fn query() -> *mut c_char { let s = CString::new("Hello!").unwrap(); s.into_raw() } Here we return a pointer to a zero-terminated sequence of chars which can be passed to Python's c_char_p. You can't return just CString because it is Rust structure which is not supposed to be used in C code directly - it wraps Vec<u8> and actually consists of three pointer-sized integers. It is not compatible with C's char* directly. We need to obtain a raw pointer out of it. CString::into_raw() method does this - it consumes the CString by value, "forgets" it so its allocation won't be destroyed, and returns a *mut c_char pointer to the beginning of the array. However, this way the string will be leaked because we forget its allocation on the Rust side, and it is never going to get freed. I don't know Python's FFI enough, but the most direct way to fix this problem is to create two functions, one for producing the data and one for freeing it. Then you need to free the data from Python side by calling this freeing function: // above function #[no_mangle] pub extern fn query() -> *mut c_char { ... } #[no_mangle] pub extern fn free_query(c: *mut c_char) { // convert the pointer back to `CString` // it will be automatically dropped immediately unsafe { CString::from_raw(c); } } CString::from_raw() method accepts a *mut c_char pointer and creates a CString instance out of it, computing the length of the underlying zero-terminated string in the process. This operation implies ownership transfer, so the resulting CString value will own the allocation, and when it is dropped, the allocation gets freed. This is exactly what we want.
Split string into list in jinja?
I have some variables in a jinja2 template which are strings seperated by a ';'. I need to use these strings separately in the code. i.e. the variable is variable1 = "green;blue" {% list1 = {{ variable1 }}.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }} I can split them up before rendering the template but since it are sometimes up to 10 strings inside the string this gets messy. I had a jsp before where I did: <% String[] list1 = val.get("variable1").split(";");%> The grass is <%= list1[0] %> and the boat is <%= list1[1] %> EDIT: It works with: {% set list1 = variable1.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }}
It works with: {% set list1 = variable1.split(';') %} The grass is {{ list1[0] }} and the boat is {{ list1[1] }}
Pandas error - invalid value encountered
New to Pandas. I downloaded and installed Anaconda. Then I tried running the following code via the Spyder app import pandas as pd import numpy as np train = pd.read_csv('/Users/Ben/Documents/Kaggle/Titanic/train.csv') train Although this prints the dataframe as I expected, it also shows these errors //anaconda/lib/python3.4/site-packages/pandas/core/format.py:1969: RuntimeWarning: invalid value encountered in greater has_large_values = (abs_vals > 1e8).any() //anaconda/lib/python3.4/site-packages/pandas/core/format.py:1970: RuntimeWarning: invalid value encountered in less has_small_values = ((abs_vals < 10 ** (-self.digits)) & //anaconda/lib/python3.4/site-packages/pandas/core/format.py:1971: RuntimeWarning: invalid value encountered in greater (abs_vals > 0)).any() Why am I getting these errors? EDIT: I just tested the above code in an IPython notebook and it works without errors. So, is there something wrong with my Spyder installation? Any help would be appreciated. EDIT2: After some testing, I can read the first 5 rows of the csv without getting the warning. So, I suspect a NaN in the 6th row for a float64 type column is triggering the warning.
I have the same error and have decided that it is a bug. It seems to be caused by the presence of NaN values in a DataFrame in Spyder. I have uninstalled and reinstalled all packages and nothing has affected it. NaN values are supported and are completely valid in DataFrames, especially if they have a DateTime index. In the end I have settled for suppressing these warnings as follows: import warnings warnings.simplefilter(action = "ignore", category = RuntimeWarning)
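If you'd rather not silence RuntimeWarnings for the whole session, a scoped variant (my own sketch) keeps the suppression local to the display that triggers it:

import warnings

with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    print(train)  # only code inside this block has the warning suppressed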
List with many dictionaries VS dictionary with few lists?
I am doing some exercises with datasets like so: List with many dictionaries users = [ {"id": 0, "name": "Ashley"}, {"id": 1, "name": "Ben"}, {"id": 2, "name": "Conrad"}, {"id": 3, "name": "Doug"}, {"id": 4, "name": "Evin"}, {"id": 5, "name": "Florian"}, {"id": 6, "name": "Gerald"} ] Dictionary with few lists users2 = { "id": [0, 1, 2, 3, 4, 5, 6], "name": ["Ashley", "Ben", "Conrad", "Doug","Evin", "Florian", "Gerald"] } Pandas dataframes import pandas as pd pd_users = pd.DataFrame(users) pd_users2 = pd.DataFrame(users2) print pd_users == pd_users2 Questions: Should I structure the datasets like users or like users2? Are there performance differences? Is one more readable than the other? Is there a standard I should follow? I usually convert these to pandas dataframes. When I do that, both versions are identical... right? The output is true for each element so it doesn't matter if I work with panda df's right?
This relates to column oriented databases versus row oriented. Your first example is a row oriented data structure, and the second is column oriented. In the particular case of Python, the first could be made notably more efficient using slots, such that the dictionary of columns doesn't need to be duplicated for every row. Which form works better depends a lot on what you do with the data; for instance, row oriented is natural if you only ever access all of any row. Column oriented meanwhile makes much better use of caches and such when you're searching by a particular field (in Python, this may be reduced by the heavy use of references; types like array can optimize that). Traditional row oriented databases frequently use column oriented sorted indices to speed up lookups, and knowing these techniques you can implement any combination using a key-value store. Pandas does convert both your examples to the same format, but the conversion itself is more expensive for the row oriented structure, simply because every individual dictionary must be read. All of these costs may be marginal. There's a third option not evident in your example: In this case, you only have two columns, one of which is an integer ID in a contiguous range from 0. This can be stored in the order of the entries itself, meaning the entire structure would be found in the list you've called users2['name']; but notably, the entries are incomplete without their position. The list translates into rows using enumerate(). It is common for databases to have this special case also (for instance, sqlite rowid). In general, start with a data structure that keeps your code sensible, and optimize only when you know your use cases and have a measurable performance issue. Tools like Pandas probably means most projects will function just fine without finetuning.
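To make the row-oriented versus column-oriented distinction concrete, here is a small sketch of both layouts and the by-field lookup described above:

# Row oriented: one dict per record
rows = [{"id": 0, "name": "Ashley"}, {"id": 1, "name": "Ben"}, {"id": 2, "name": "Conrad"}]

# Column oriented: one list per field; a record is an index across the lists
cols = {"id": [0, 1, 2], "name": ["Ashley", "Ben", "Conrad"]}

# Searching by field scans one contiguous list in the column layout...
idx = cols["name"].index("Ben")
record = {field: values[idx] for field, values in cols.items()}

# ...but touches every dict in the row layout
record2 = next(r for r in rows if r["name"] == "Ben")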
cannot import name GoogleMaps in python
I am using the code below to get the latitude & longitude of an address: from googlemaps import GoogleMaps gmaps = GoogleMaps(api_key) address = 'Constitution Ave NW & 10th St NW, Washington, DC' lat, lng = gmaps.address_to_latlng(address) print lat, lng but am getting the error below File "C:/Users/Pavan/PycharmProjects/MGCW/latlong6.py", line 1, in <module> from googlemaps import GoogleMaps ImportError: cannot import name GoogleMaps I have seen another question similar to this, but the solution didn't work for me.
Use geopy instead, no need for api-key. From their example: from geopy.geocoders import Nominatim geolocator = Nominatim() location = geolocator.geocode("175 5th Avenue NYC") print(location.address) print((location.latitude, location.longitude)) prints: Flatiron Building, 175, 5th Avenue, Flatiron, New York, NYC, New York, 10010, United States of America (40.7410861, -73.9896297241625)
Python dictionary as html table in ipython notebook
Is there any (existing) way to display a Python dictionary as an HTML table in an IPython notebook? Say I have a dictionary d = {'a': 2, 'b': 3}; then I run magic_ipython_function(d) to give me something like an HTML table with the keys and values in its cells.
You're probably looking for something like ipy_table. A different way would be to use pandas for a dataframe, but that might be overkill.
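If you'd rather avoid an extra dependency, a hand-rolled version of that magic_ipython_function is only a few lines using IPython's display machinery - this is my own sketch, so the HTML is deliberately minimal:

from IPython.display import HTML, display

def dict_to_html_table(d):
    rows = ''.join('<tr><td>{0}</td><td>{1}</td></tr>'.format(k, v)
                   for k, v in d.items())
    return HTML('<table>{0}</table>'.format(rows))

d = {'a': 2, 'b': 3}
display(dict_to_html_table(d))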
How to upload files to another user's Google Drive without asking permission every time?
Is there any way to upload files to another user's Google Drive without asking for login or verification code each time except the first time? Until now I used pydrive, but it asks to login each time. Is there anyway other than this, such that a key or something to use to skip the login of the user?
To clarify: you want to enable others to upload files into your own Google Drive? If that is the case, you can do this with this embed widget that you can copy & paste into your website: http://developers.cloudwok.com If you want to allow users to upload files to a random Google Drive account of some other user, that won't work because the other user must give permission.
What's the deal with Python 3.4, Unicode, different languages and Windows?
Happy examples: #!/usr/bin/env python # -*- coding: utf-8 -*- czech = u'Leoš Janáček'.encode("utf-8") print(czech) pl = u'Zdzisław Beksiński'.encode("utf-8") print(pl) jp = u'リング 山村 貞子'.encode("utf-8") print(jp) chinese = u'五行'.encode("utf-8") print(chinese) MIR = u'Машина для Инженерных Расчётов'.encode("utf-8") print(MIR) pt = u'Minha Língua Portuguesa: çáà'.encode("utf-8") print(pt) Unhappy output: And if I print them like this: jp = u'リング 山村 貞子' print(jp) I get: I've also tried the following from this question (And other alternatives that involve sys.stdout.encoding): #!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import print_function import sys def safeprint(s): try: print(s) except UnicodeEncodeError: if sys.version_info >= (3,): print(s.encode('utf8').decode(sys.stdout.encoding)) else: print(s.encode('utf8')) jp = u'リング 山村 貞子' safeprint(jp) And things get even more cryptic: And the docs were not very helpful. So, what's the deal with Python 3.4, Unicode, different languages and Windows? Almost all possible examples I could find, deal with Python 2.x. Is there a general and cross-platform way of printing ANY Unicode character from any language in a decent and non-nasty way in Python 3.4? EDIT: I've tried typing at the terminal: chcp 65001 To change the code page, as proposed here and in the comments, and it did not work (Including the attempt with sys.stdout.encoding)
Update: Since Python 3.6, the code example that prints Unicode strings directly should just work now (even without py -mrun). Python can print text in multiple languages in Windows console whatever chcp says: T:\> py -mpip install win-unicode-console T:\> py -mrun your_script.py where your_script.py prints Unicode directly e.g.: #!/usr/bin/env python3 print('š áč') # cz print('ł ń') # pl print('リング') # jp print('五行') # cn print('ш я жх ё') # ru print('í çáà') # pt All you need is to configure the font in your Windows console that can display the desired characters. You could also run your Python script via IDLE without installing non-stdlib modules: T:\> py -midlelib -r your_script.py To write to a file/pipe, use PYTHONIOENCODING=utf-8 as @Mark Tolonen suggested: T:\> set PYTHONIOENCODING=utf-8 T:\> py your_script.py >output-utf8.txt Only the last solution supports non-BMP characters such as 😒 (U+1F612 UNAMUSED FACE) -- py -mrun can write them but Windows console displays them as boxes even if the font supports corresponding Unicode characters (though you can copy-paste the boxes into another program, to get the characters).
Is it possible to hide Python function arguments in Sphinx?
Suppose I have the following function that is documented in the Numpydoc style, and the documentation is auto-generated with the Sphinx autofunction directive: def foo(x, y, _hidden_argument=None): """ Foo a bar. Parameters ---------- x: str The first argument to foo. y: str The second argument to foo. Returns ------- The barred foo. """ if _hidden_argument: _end_users_shouldnt_call_this_function(x, y) return x + y I don't want to advertise the hidden argument as part of my public API, but it shows up in my auto-generated documentation. Is there any way to tell Sphinx to ignore a specific argument to a function, or (even better) make it auto-ignore arguments with a leading underscore?
I don't think there is an option for that in Sphinx. One possible way to accomplish this without having to hack into the code, is to use customized signature. In this case, you need something like: .. autofunction:: some_module.foo(x, y) This will override the parameter list of the function and hide the unwanted argument in the doc.
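If you want the auto-ignore behaviour for leading-underscore arguments, Sphinx's documented autodoc-process-signature event can be hooked from conf.py. The event and its handler signature are real; the filtering logic below is my own sketch, and the naive comma split will mishandle defaults that themselves contain commas:

# conf.py -- hypothetical sketch
def hide_private_args(app, what, name, obj, options, signature, return_annotation):
    if signature:
        args = [a for a in signature.strip('()').split(', ')
                if not a.lstrip().startswith('_')]
        signature = '({0})'.format(', '.join(args))
    return (signature, return_annotation)

def setup(app):
    app.connect('autodoc-process-signature', hide_private_args)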
"Firefox quit unexpectedly." when running basic Selenium script in Python
I'm trying to scrape and print the HTML of a page using Selenium in Python, but every time I run it I get the error message Firefox quit unexpectedly. I'm new to Selenium, so any help would be greatly appreciated. I'm hoping for the simplest fix possible. Thank you! My code: import selenium from selenium import webdriver browser = webdriver.Firefox() browser.get('http://seleniumhq.org/') print browser.page_source
My experience since the upgrade to Firefox 38.x on Windows a couple of weeks back has been that it has a problem with Selenium 2.45.x. When invoking the browser it produces a "Firefox has stopped working" error which I have to close manually, at which point the test runs. Others have reported similar issues. The solution that worked for me (apart from manually closing the error each time, which got old after a few days) was to uninstall the latest version of Firefox and downgrade to version 37.0.2 on the machine where I run the tests. Not ideal for security reasons, but OK if you're careful.
sys_platform is not defined x64 Windows
This has been bugging me for a little while. I recently upgraded to x64 Python, and I started getting this error (example pip install). C:\Users\<uname>\distribute-0.6.35>pip install python-qt Collecting python-qt Downloading python-qt-0.50.tar.gz Building wheels for collected packages: python-qt Running setup.py bdist_wheel for python-qt Complete output from command C:\Python27\python.exe -c "import setuptools;__file__='c:\\users\\<uname>\\appdata\\local\\t emp\\pip-build-vonat7\\python-qt\\setup.py';exec(compile(open(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" bd ist_wheel -d c:\users\<uname>\appdata\local\temp\tmpghy5gtpip-wheel-: Traceback (most recent call last): File "<string>", line 1, in <module> File "c:\users\<uname>\appdata\local\temp\pip-build-vonat7\python-qt\setup.py", line 11, in <module> packages=['Qt'], File "C:\Python27\lib\distutils\core.py", line 137, in setup ok = dist.parse_command_line() File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\setuptools\dist.py", line 232, in parse_command_line result = _Distribution.parse_command_line(self) File "C:\Python27\lib\distutils\dist.py", line 467, in parse_command_line args = self._parse_command_opts(parser, args) File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\setuptools\dist.py", line 558, in _parse_command_opts nargs = _Distribution._parse_command_opts(self, parser, args) File "C:\Python27\lib\distutils\dist.py", line 523, in _parse_command_opts cmd_class = self.get_command_class(command) File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\setuptools\dist.py", line 362, in get_command_class ep.require(installer=self.fetch_build_egg) File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\pkg_resources.py", line 2027, in require working_set.resolve(self.dist.requires(self.extras),env,installer)) File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\pkg_resources.py", line 2237, in requires dm = self._dep_map File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\pkg_resources.py", line 2466, in _dep_map self.__dep_map = self._compute_dependencies() File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\pkg_resources.py", line 2499, in _compute_dependencies common = frozenset(reqs_for_extra(None)) File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\pkg_resources.py", line 2496, in reqs_for_extra if req.marker_fn(override={'extra':extra}): File "C:\Python27\lib\site-packages\distribute-0.6.35-py2.7.egg\_markerlib\markers.py", line 109, in marker_fn return eval(compiled_marker, environment) File "<environment marker>", line 1, in <module> NameError: name 'sys_platform' is not defined ---------------------------------------- Failed building wheel for python-qt Failed to build python-qt Installing collected packages: python-qt Running setup.py install for python-qt Successfully installed python-qt-0.50 The package was installed fine, but I cannot build wheels. I tried re-installing distribute manually by downloading a zip and running python setup.py install. That installed wonderfuly, without a hitch. But I still have the above problem. How can I re-define sys_platform? Alright, I rolled back to x86 good ole 32 bit Python, and I still have the problem. This is really concerning, because I cannot reset this after re-installing. I looked at markerlib, which looks promising, but I don't know how to use it safely. Currently I am unable to install pretty much anything from PyPI, so I am giving points to increase interest. Any help? 
I really want to be able to use PyPI again. I chose the selected answer as it is the most likely to solve the problem. I myself have moved back to x86 Python, so I cannot test this myself. Therefore, I encourage future visitors to try this answer, but I have not myself been able to test it.
Might be a bug. Check out: https://bugs.python.org/ You can manually check the markers.py file and try to fix it. I think there would be a reference to sys_platform that has to be changed to sys.platform. Regarding markerlib, you can try this out: import markerlib marker = markerlib.compile("sys.platform == 'win32'") marker(environment=markerlib.default_environment(), override={'sys.platform':'win32'})
How to check if a value is present in any of given sets
Say I have different sets (they have to be different, I cannot join them as per the kind of data I am working with): r = set([1,2,3]) s = set([4,5,6]) t = set([7,8,9]) What is the best way to check if a given variable is present in either of them? I am using: if myvar in r \ or myvar in s \ or myvar in t: But I wonder if this can be reduced somehow by using set's properties such as union. The following works, but I don't find a way to define multiple unions: if myvar in r.union(s) or myvar in t: And I am also wondering if this union will affect somehow performance, since I guess a temporary set will be created on the fly.
You can use builtin any: r = set([1,2,3]) s = set([4,5,6]) t = set([7,8,9]) if any(myvar in x for x in [r,s,t]): print "I'm in one of them" any will short circuit on the first condition that returns True, so you can get around constructing a potentially huge union or checking potentially lots of sets for inclusion. And I am also wondering if this union will affect somehow performance, since I guess a temporary set will be created on the fly. According to wiki.python.com, s|t is O(len(s)+len(t)) while lookups are O(1). For n sets with l elements each, doing unions iteratively to construct the full set, as in a.union(b).union(c).union(d) .... .union(n), costs O(l+l) for a.union(b), O(2l+l) for the following .union(c), and so on, which sums up to O((n*(n+1)/2)*l) = O(n^2*l). This is quadratic and voids the performance advantage of using sets, whereas the lookup in n sets with any performs at O(n).
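Also worth knowing (a detail not mentioned above): set.union accepts several arguments at once, so a one-off union does not need chaining. Building the temporary set is still O(total size), so any remains the cheaper option for a single membership test:

# One temporary set, built in a single pass
if myvar in r.union(s, t):
    print "I'm in one of them"

# Equivalent, without privileging any one set
if myvar in set().union(r, s, t):
    print "I'm in one of them"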
get playing wav audio level as output
I want to make a speaking mouth which moves or emits light or something when a playing wav file emits sound. So I need to detect when a wav file is speaking or when it is in a silence between words. Currently I'm using a pygame script that I have found: import pygame pygame.mixer.init() pygame.mixer.music.load("my_sentence.wav") pygame.mixer.music.play() while pygame.mixer.music.get_busy() == True: continue I guess I could add some check in the while loop to look at the sound's output level, or something like that, and then send it to one of the GPIO outputs. But I don't know how to achieve that. Any help would be much appreciated.
You'll need to inspect the WAV file to work out when the voice is present. The simplest way to do this is to look for loud and quiet periods. Because sound works with waves, when it's quiet the values in the wave file won't change very much, and when it's loud they'll be changing a lot. One way of estimating loudness is the variance. As you can see in the article, this can be defined as E[(X - mu)^2], which could be written average((X - average(X))^2). Here, X is the value of the signal at a given point (the values stored in the WAV file, called sample in the code). If it's changing a lot, the variance will be large. This would let you calculate the loudness of an entire file. However, you want to track how loud the file is at any given time, which means you need a form of moving average. An easy way to get this is with a first-order low-pass filter. I haven't tested the code below, so treat it as a starting point rather than a finished solution. It loads the WAV file, uses low-pass filters to track the mean and variance, and works out when the variance goes above and below a certain threshold. Then, while playing the WAV file it keeps track of the time since it started playing, and prints out whether the WAV file is loud or quiet. Here's what you might still need to do: Fix any remaining mistakes in the code (it is untested) Add something useful to react to the loud/quiet changes Change the threshold and reaction_time to get good results with your audio Add some hysteresis (a variable threshold) to stop the light flickering I hope this helps! import wave import struct import time import pygame def get_loud_times(wav_path, threshold=10000, time_constant=0.1): '''Work out which parts of a WAV file are loud. - threshold: the variance threshold that is considered loud - time_constant: the approximate reaction time in seconds''' wav = wave.open(wav_path, 'r') length = wav.getnframes() samplerate = wav.getframerate() assert wav.getnchannels() == 1, 'wav must be mono' assert wav.getsampwidth() == 2, 'wav must be 16-bit' # Our result will be a list of (time, is_loud) giving the times # when the audio switches from loud to quiet and back. is_loud = False result = [(0., is_loud)] # The following values track the mean and variance of the signal. # When the variance is large, the audio is loud. mean = 0 variance = 0 # If alpha is small, mean and variance change slower but are less noisy. alpha = 1 / (time_constant * float(samplerate)) for i in range(length): sample_time = float(i) / samplerate # struct.unpack returns a tuple, so take its first element sample = struct.unpack('<h', wav.readframes(1))[0] # mean is the average value of sample mean = (1-alpha) * mean + alpha * sample # variance is the average value of (sample - mean) ** 2 variance = (1-alpha) * variance + alpha * (sample - mean) ** 2 # check if we're loud, and record the time if this changes new_is_loud = variance > threshold if is_loud != new_is_loud: result.append((sample_time, new_is_loud)) is_loud = new_is_loud return result def play_sentence(wav_path): loud_times = get_loud_times(wav_path) pygame.mixer.music.load(wav_path) start_time = time.time() pygame.mixer.music.play() for (t, is_loud) in loud_times: # wait until the time described by this entry sleep_time = start_time + t - time.time() if sleep_time > 0: time.sleep(sleep_time) # do whatever print 'loud' if is_loud else 'quiet'
Find objects with date and time less then 24 hours from now
I have a model with two fields: class Event(models.Model): date = models.DateField(_(u'Date')) time = models.TimeField(_(u'Time')) I need to find all objects whose date & time fall within the next 24 hours. I am able to do this when using a single DateTimeField, but I am not sure how to achieve it when the fields are separated. Thanks in advance.
For the simple case (not sure if all are simple cases though...), this should do the trick: import datetime today = datetime.datetime.now() tomorrow = today + datetime.timedelta(days=1) qs_today = queryset.filter( date=today.date(), time__gte=today.time(), ) qs_tomorrow = queryset.filter( date=tomorrow.date(), time__lt=tomorrow.time(), ) qs = qs_today | qs_tomorrow
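The same 24-hour window can also be expressed as one query with Q objects instead of the queryset union - a sketch equivalent to the above, assuming the queryset is Event.objects:

import datetime
from django.db.models import Q

today = datetime.datetime.now()
tomorrow = today + datetime.timedelta(days=1)
# events later today, plus events tomorrow before this time of day
events = Event.objects.filter(
    Q(date=today.date(), time__gte=today.time()) |
    Q(date=tomorrow.date(), time__lt=tomorrow.time())
)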
Using coverage, how do I test this line?
I have a simple test: class ModelTests(TestCase): def test_method(self): instance = Activity(title="Test") self.assertEqual(instance.get_approved_member_count(), 0) My problem is that coverage still shows get_approved_member_count line as NOT tested: How do I satisfy the above for coverage? To run the tests I'm using Django Nose with Coverage: TEST_RUNNER = 'django_nose.NoseTestSuiteRunner' NOSE_ARGS = [ '--with-coverage', '--cover-html', '--cover-package=apps.users,apps.activities', ] Console: python manage.py test /Users/user/Documents/workspace/api/env/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: django.contrib.contenttypes.generic is deprecated and will be removed in Django 1.9. Its contents have been moved to the fields, forms, and admin submodules of django.contrib.contenttypes. return f(*args, **kwds) /Users/user/Documents/workspace/api/env/lib/python3.4/importlib/_bootstrap.py:321: RemovedInDjango19Warning: The utilities in django.db.models.loading are deprecated in favor of the new application loading system. return f(*args, **kwds) nosetests --with-coverage --cover-html --cover-package=apps.users,apps.activities --verbosity=1 Name Stmts Miss Cover Missing --------------------------------------------------------------------------------------- apps.activities 0 0 100% apps.activities.admin 8 8 0% 1-14 activities.migrations 0 0 100% activities.migrations.0001_initial 9 0 100% apps.activities.urls 8 0 100% etc etc etc --------------------------------------------------------------------------------------- TOTAL 670 232 65% ---------------------------------------------------------------------- Ran 79 tests in 17.101s
The coverage report shows that the method is being called (line 80 is green). But it also shows that it was never defined (line 75 is red). This is a classic problem of starting coverage too late. The simplest way to fix this is to use coverage to run your test runner, instead of using the test runner to run coverage: $ coverage run -m nose --verbosity=1 UPDATED: to use with your original command: $ coverage run manage.py test but you'd want to uninstall the nose coverage plugin first.
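For completeness, a typical follow-up session with standard coverage.py commands looks like this (remove --with-coverage and the --cover-* flags from NOSE_ARGS first, so the plugin doesn't start a second, late measurement):

$ coverage run manage.py test
$ coverage report -m   # terminal summary, with missing line numbers
$ coverage html        # browsable report written to htmlcov/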
Disable hash randomization from within python program
Starting with Python 3.3, the hashing algorithm is non-deterministically salted to avoid a certain kind of attack. This is nice for webservers but it's a pain when trying to debug a program: Every time I run my script, dict contents are iterated in a different order. Some earlier versions of python had a -R flag for enabling hash randomization, but now that it's the default behavior, the flag has not been replaced by its opposite. Randomization can be disabled by setting the environment variable PYTHONHASHSEED: PYTHONHASHSEED If this variable is not set or set to random, a random value is used to seed the hashes of str, bytes and datetime objects. If PYTHONHASHSEED is set to an integer value, it is used as a fixed seed for generating the hash() of the types covered by the hash randomization. The catch is that this variable must be set before launching the python process. I've tried to set it with os.putenv(), or in os.environ, but these seem to have no effect on the hashing method. This is not too surprising: I wouldn't expect python to check the environment before every single set or dictionary lookup! So, the question remains: Is there a way for a python program to disable its own hash randomization?
I suspect this isn't possible, unfortunately. Looking at test_hash.py the HashRandomizationTests class and its descendants were added in the commit that introduced this behavior. They test the hashing behavior by modifying the environment and starting a new process with PYTHONHASHSEED explicitly set. You could try to copy that pattern, perhaps. I also just noticed you said "Every time I run my script, dict contents are iterated in a different order." - I assume you're aware of collections.OrderedDict, right? That's the normal way to get reliable hash iteration. If you're willing to set the value in your shell environment, you could also just wrap your python call in a bash script, e.g. #! /bin/bash export PYTHONHASHSEED=0 # call your python program here That avoids needing to manipulate your whole environment, as long as you're ok with a wrapper script. Or even just pass the value on the command line: $ PYTHONHASHSEED=0 python YOURSCRIPT.py
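If you really need the program to guarantee this for itself, the "start a new process" pattern mentioned above can be sketched in a few lines - my own untested sketch, and note that os.execv replaces the current process, so it belongs at the very top of the script:

import os
import sys

if os.environ.get('PYTHONHASHSEED') != '0':
    # relaunch this script with hash randomization disabled
    os.environ['PYTHONHASHSEED'] = '0'
    os.execv(sys.executable, [sys.executable] + sys.argv)

# from here on, hashes of str/bytes are deterministic across runs
print(hash('spam'))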
Move models between Django (1.8) apps with required ForeignKey references
This is an extension to this question: How to move a model between two Django apps (Django 1.7) I need to move a bunch of models from old_app to new_app. The best answer seems to be Ozan's, but with required foreign key references, things are bit trickier. @halfnibble presents a solution in the comments to Ozan's answer, but I'm still having trouble with the precise order of steps (e.g. when do I copy the models over to new_app, when do I delete the models from old_app, which migrations will sit in old_app.migrations vs. new_app.migrations, etc.) Any help is much appreciated!
Migrating a model between apps. The short answer is, don't do it!! But that answer rarely works in the real world of living projects and production databases. Therefore, I have created a sample GitHub repo to demonstrate this rather complicated process. I am using MySQL. (No, those aren't my real credentials). The Problem The example I'm using is a factory project with a cars app that initially has a Car model and a Tires model. factory |_ cars |_ Car |_ Tires The Car model has a ForeignKey relationship with Tires. (As in, you specify the tires via the car model). However, we soon realize that Tires is going to be a large model with its own views, etc., and therefore we want it in its own app. The desired structure is therefore: factory |_ cars |_ Car |_ tires |_ Tires And we need to keep the ForeignKey relationship between Car and Tires because too much depends on preserving the data. The Solution Step 1. Setup initial app with bad design. Browse through the code of step 1. Step 2. Create an admin interface and add a bunch of data containing ForeignKey relationships. View step 2. Step 3. Decide to move the Tires model to its own app. Meticulously cut and paste code into the new tires app. Make sure you update the Car model to point to the new tires.Tires model. Then run ./manage.py makemigrations and backup the database somewhere (just in case this fails horribly). Finally, run ./manage.py migrate and see the error message of doom, django.db.utils.IntegrityError: (1217, 'Cannot delete or update a parent row: a foreign key constraint fails') View code and migrations so far in step 3. Step 4. The tricky part. The auto-generated migration fails to see that you've merely copied a model to a different app. So, we have to do some things to remedy this. You can follow along and view the final migrations with comments in step 4. I did test this to verify it works. First, we are going to work on cars. You have to make a new, empty migration. This migration actually needs to run before the most recently created migration (the one that failed to execute). Therefore, I renumbered the migration I created and changed the dependencies to run my custom migration first and then the last auto-generated migration for the cars app. You can create an empty migration with: ./manage.py makemigrations --empty cars Step 4.a. Make custom old_app migration. In this first custom migration, I'm only going to perform a "database_operations" migration. Django gives you the option to split "state" and "database" operations. You can see how this is done by viewing the code here. My goal in this first step is to rename the database tables from oldapp_model to newapp_model without messing with Django's state. You have to figure out what Django would have named your database table based on the app name and model name. Now you are ready to modify the initial tires migration. Step 4.b. Modify new_app initial migration The operations are fine, but we only want to modify the "state" and not the database. Why? Because we are keeping the database tables from the cars app. Also, you need to make sure that the previously made custom migration is a dependency of this migration. See the tires migration file. So, now we have renamed cars.Tires to tires.Tires in the database, and changed the Django state to recognize the tires.Tires table. Step 4.c. Modify old_app last auto-generated migration. Going back to cars, we need to modify that last auto-generated migration. 
It should require our first custom cars migration, and the initial tires migration (that we just modified). Here we should leave the AlterField operations because the Car model is pointing to a different model (even though it has the same data). However, we need to remove the lines of migration concerning DeleteModel because the cars.Tires model no longer exists. It has fully converted into tires.Tires. View this migration. Step 4.d. Clean up stale model in old_app. Last but not least, you need to make a final custom migration in the cars app. Here, we will do a "state" operation only to delete the cars.Tires model. It is state-only because the database table for cars.Tires has already been renamed. This last migration cleans up the remaining Django state.
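To make step 4.a concrete, the database-only rename can be expressed with SeparateDatabaseAndState - a sketch with guessed names (Django derives table names as appname_modelname, so verify yours first; the filename is hypothetical):

# hypothetical cars/migrations/0002_rename_tires_table.py
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [('cars', '0001_initial')]
    operations = [
        migrations.SeparateDatabaseAndState(
            # rename the table on the database side only;
            # Django's model state is left untouched
            database_operations=[
                migrations.AlterModelTable('tires', 'tires_tires'),
            ],
            state_operations=[],
        ),
    ]

The migrations in steps 4.b and 4.d are the mirror image: their operations go in state_operations, with database_operations left empty.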
What does ,= mean in python?
I wonder what ,= or , = means in python? Example from matplotlib: plot1, = ax01.plot(t,yp1,'b-')
It's a form of tuple unpacking. With parentheses: (plot1,) = ax01.plot(t,yp1,'b-') ax01.plot() returns a tuple containing one element, and this element is assigned to plot1. Without that comma (and possibly the parentheses), plot1 would have been assigned the whole tuple. Observe the difference between a and b in the following example: >>> def foo(): ... return (1,) ... >>> (a,) = foo() >>> b = foo() >>> a 1 >>> b (1,) You can omit the parentheses both in (a,) and (1,), I left them for the sake of clarity.
Why does `mylist[:] = reversed(mylist)` work?
The following reverses a list "in-place" and works in Python 2 and 3: >>> mylist = [1, 2, 3, 4, 5] >>> mylist[:] = reversed(mylist) >>> mylist [5, 4, 3, 2, 1] Why/how? Since reversed gives me an iterator and doesn't copy the list beforehand, and since [:]= replaces "in-place", I am surprised. And the following, also using reversed, breaks as expected: >>> mylist = [1, 2, 3, 4, 5] >>> for i, item in enumerate(reversed(mylist)): mylist[i] = item >>> mylist [5, 4, 3, 4, 5] Why doesn't the [:] = fail like that? And yes, I do know mylist.reverse().
CPython list slice assignment will convert the iterable to a list first by calling PySequence_Fast. Source: https://hg.python.org/cpython/file/7556df35b913/Objects/listobject.c#l611 v_as_SF = PySequence_Fast(v, "can only assign an iterable"); Even PyPy does something similar: def setslice__List_ANY_ANY_ANY(space, w_list, w_start, w_stop, w_iterable): length = w_list.length() start, stop = normalize_simple_slice(space, length, w_start, w_stop) sequence_w = space.listview(w_iterable) w_other = W_ListObject(space, sequence_w) w_list.setslice(start, 1, stop-start, w_other) Here space.listview will call ObjSpace.unpackiterable to unpack the iterable, which in turn returns a list. In other words, reversed(mylist) is fully consumed into a temporary list before any element of mylist is overwritten, which is why the slice assignment doesn't trip over itself the way the manual loop does.
How not to miss the next element after itertools.takewhile()
Say we wish to process an iterator and want to handle it in chunks. The logic per chunk depends on previously-calculated chunks, so groupby() does not help. Our friend in this case is itertools.takewhile(): while True: chunk = itertools.takewhile(getNewChunkLogic(), myIterator) process(chunk) The problem is that takewhile() needs to go past the last element that meets the new chunk logic, thus 'eating' the first element of the next chunk. There are various solutions to that, including wrapping the iterator or doing something à la C's ungetc(), etc. My question is: is there an elegant solution?
takewhile() indeed needs to look at the next element to determine when to toggle behaviour. You could use a wrapper that tracks the last seen element, and that can be 'reset' to back up one element: _sentinel = object() class OneStepBuffered(object): def __init__(self, it): self._it = iter(it) self._last = _sentinel self._next = _sentinel def __iter__(self): return self def __next__(self): if self._next is not _sentinel: next_val, self._next = self._next, _sentinel return next_val try: self._last = next(self._it) return self._last except StopIteration: self._last = self._next = _sentinel raise next = __next__ # Python 2 compatibility def step_back(self): if self._last is _sentinel: raise ValueError("Can't back up a step") self._next, self._last = self._last, _sentinel Wrap your iterator in this one before using it with takewhile(): myIterator = OneStepBuffered(myIterator) while True: chunk = itertools.takewhile(getNewChunkLogic(), myIterator) process(chunk) myIterator.step_back() Demo: >>> from itertools import takewhile >>> test_list = range(10) >>> iterator = OneStepBuffered(test_list) >>> list(takewhile(lambda i: i < 5, iterator)) [0, 1, 2, 3, 4] >>> iterator.step_back() >>> list(iterator) [5, 6, 7, 8, 9]
Is filter thread-safe
I have a thread which is updating a list called l. Am I right in saying that it is thread-safe to do the following from another thread? filter(lambda x: x[0] == "in", l) If its not thread safe, is this then the correct approach: import threading import time import Queue class Logger(threading.Thread): def __init__(self, log): super(Logger, self).__init__() self.log = log self.data = [] self.finished = False self.data_lock = threading.Lock() def run(self): while not self.finished: try: with self.data_lock: self.data.append(self.log.get(block=True, timeout=0.1)) except Queue.Empty: pass def get_data(self, cond): with self.data_lock: d = filter(cond, self.data) return d def stop(self): self.finished = True self.join() print("Logger stopped") where the get_data(self, cond) method is used to retrieve a small subset of the data in the self.data in a thread safe manner.
First, to answer your question in the title: filter is just a function. Hence, its thread-safety will rely on the data-structure you use it with. As pointed out in the comments already, list operations themselves are thread-safe in CPython and protected by the GIL, but that is arguably only an implementation detail of CPython that you shouldn't really rely on. Even if you could rely on it, thread safety of some of their operations probably does not mean the kind of thread safety you mean: The problem is that iterating over a sequence with filter is in general not an atomic operation. The sequence could be changed during iteration. Depending on the data-structure underlying your iterator this might cause more or less weird effects. One way to overcome this problem is by iterating over a copy of the sequence that is created with one atomic action. Easiest way to do this for standard sequences like tuple, list, string is with the slice operator like this: filter(lambda x: x[0] == "in", l[:]) Apart from this not necessarily being thread-safe for other data-types, there's one problem with this though: it's only a shallow copy. As your list's elements seem to be list-like as well, another thread could in parallel do del l[1000][:] to empty one of the inner lists (which are pointed to in your shallow copy as well). This would make your filter expression fail with an IndexError. All that said, it's not a shame to use a lock to protect access to your list and I'd definitely recommend it. Depending on how your data changes and how you work with the returned data, it might even be wise to deep-copy the elements while holding the lock and to return those copies. That way you can guarantee that once returned the filter condition won't suddenly change for the returned elements. Wrt. your Logger code: I'm not 100 % sure how you plan to use this and if it's critical for you to run several threads on one queue and join them. What looks weird to me is that you never use Queue.task_done() (assuming that its self.log is a Queue). Also your polling of the queue is potentially wasteful. If you don't need the join of the thread, I'd suggest to at least turn the lock acquisition around: class Logger(threading.Thread): def __init__(self, log): super(Logger, self).__init__() self.daemon = True self.log = log self.data = [] self.data_lock = threading.Lock() def run(self): while True: l = self.log.get() # thread will sleep here indefinitely with self.data_lock: self.data.append(l) self.log.task_done() def get_data(self, cond): with self.data_lock: d = filter(cond, self.data) # maybe deepcopy d here return d Externally you could still do log.join() to make sure that all of the elements of the log queue are processed.
Flask and React routing
I'm building a Flask app with React, and I ended up having a problem with routing. The backend is responsible for being an API, hence some routes look like: @app.route('/api/v1/do-something/', methods=["GET"]) def do_something(): return something() and the main route which leads to the React app: @app.route('/') def index(): return render_template('index.html') I'm using react-router in the React app, and everything works fine: react-router takes me to /something and I get the rendered view. But when I refresh the page on /something, the Flask app handles the call and I get a Not Found error. What is the best solution? I was thinking about redirecting all calls which are not to /api/v1/... back to /, but that is not ideal, as I would get back the home page of my app rather than the rendered React view.
We used catch-all URLs for this. from flask import Flask app = Flask(__name__) @app.route('/', defaults={'path': ''}) @app.route('/<path:path>') def catch_all(path): return 'You want path: %s' % path if __name__ == '__main__': app.run() You can also go an extra mile and reuse the Flask routing system to match path to the same routes as client so you can embed the data client will need as JSON inside the HTML response.
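Applied to the question's setup, the catch-all just returns the React entry point so client-side routing can take over after a refresh - a sketch along these lines (werkzeug should rank the more specific API rule above the <path:path> catch-all, so the API keeps working):

from flask import Flask, jsonify, render_template

app = Flask(__name__)

@app.route('/api/v1/do-something/', methods=['GET'])
def do_something():
    return jsonify(result='something')

# every non-API path returns the same HTML shell;
# react-router then renders the right view from the URL
@app.route('/', defaults={'path': ''})
@app.route('/<path:path>')
def index(path):
    return render_template('index.html')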
Python List Slicing with None as argument
Via trial and error I found out that my_list = range(10) my_list[:None] == my_list[:] I use this for django query sets so I can define a size or take all: some_queryset[:length if length else None] # @IanAuld some_queryset[:length or None] # @Bakuriu # length works for all numbers and None if you want all elements # does not work with False of any other False values some_queryset[:length] Is this good practice to use None while slicing? Can problems occur with this method in any case?
Yes, it is fine to use None, as its behavior is specified by the documentation: The slice of s from i to j is defined as the sequence of items with index k such that i <= k < j. If i or j is greater than len(s), use len(s). If i is omitted or None, use 0. If j is omitted or None, use len(s). If i is greater than or equal to j, the slice is empty. Using None for one of the slice parameters is the same as omitting it.
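A quick interactive check that None is accepted in all three slice slots (plain Python 2, matching the question):

>>> my_list = range(10)
>>> my_list[None:None:None] == my_list[:]
True
>>> my_list[:3] == my_list[None:3]
True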
No such file or directory "limits.h" when installing Pillow on Alpine Linux
I'm running alpine-linux on a Raspberry Pi 2. I'm trying to install Pillow via this command: pip install pillow This is the output from the command: Installing collected packages: pillow Running setup.py install for pillow Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile: running install running build running build_py creating build creating build/lib.linux-armv7l-2.7 creating build/lib.linux-armv7l-2.7/PIL copying PIL/XVThumbImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/XpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/XbmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/WmfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/WebPImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/WalImageFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/TiffTags.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/TiffImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/TgaImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/TarIO.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/SunImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/SpiderImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/SgiImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PyAccess.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PSDraw.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PsdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PpmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PngImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PixarImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PdfImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PcfFontFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PcdImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PalmImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/PaletteFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/OleFileIO.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/MspImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/MpoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/MpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/MicImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/McIdasImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/JpegPresets.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/JpegImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/Jpeg2KImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/IptcImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImtImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageWin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageTransform.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageTk.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageStat.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageShow.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageSequence.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageQt.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImagePath.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImagePalette.py -> 
build/lib.linux-armv7l-2.7/PIL copying PIL/ImageOps.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageMorph.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageMode.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageMath.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageGrab.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageFont.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageFilter.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageFileIO.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageEnhance.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageDraw2.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageDraw.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageColor.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageCms.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ImageChops.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/Image.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/IcoImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/IcnsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/Hdf5StubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GribStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GimpPaletteFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GimpGradientFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GifImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GdImageFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/GbrImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/FpxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/FontFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/FliImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/FitsStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ExifTags.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/EpsImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/DcxImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/CurImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/ContainerIO.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/BufrStubImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/BmpImagePlugin.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/BdfFontFile.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/_util.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/_binary.py -> build/lib.linux-armv7l-2.7/PIL copying PIL/__init__.py -> build/lib.linux-armv7l-2.7/PIL running egg_info writing Pillow.egg-info/PKG-INFO writing top-level names to Pillow.egg-info/top_level.txt writing dependency_links to Pillow.egg-info/dependency_links.txt warning: manifest_maker: standard file '-c' not found reading manifest file 'Pillow.egg-info/SOURCES.txt' reading manifest template 'MANIFEST.in' warning: no files found matching 'LICENSE' under directory 'docs' writing manifest file 'Pillow.egg-info/SOURCES.txt' copying PIL/OleFileIO-README.md -> build/lib.linux-armv7l-2.7/PIL running build_ext building 'PIL._imaging' extension creating build/temp.linux-armv7l-2.7/libImaging gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c _imaging.c -o build/temp.linux-armv7l-2.7/_imaging.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c outline.c -o 
build/temp.linux-armv7l-2.7/outline.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Bands.c -o build/temp.linux-armv7l-2.7/libImaging/Bands.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ConvertYCbCr.c -o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o In file included from _imaging.c:76:0: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from outline.c:20:0: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/ConvertYCbCr.c:15: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/Bands.c:19: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Draw.c -o build/temp.linux-armv7l-2.7/libImaging/Draw.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Filter.c -o build/temp.linux-armv7l-2.7/libImaging/Filter.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/GifEncode.c -o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/LzwDecode.c -o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/Draw.c:35: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/Filter.c:27: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/GifEncode.c:20: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/LzwDecode.c:31: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. 
gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Offset.c -o build/temp.linux-armv7l-2.7/libImaging/Offset.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/Quant.c -o build/temp.linux-armv7l-2.7/libImaging/Quant.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/PcxDecode.c -o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/RawEncode.c -o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/Offset.c:18: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/Quant.c:21: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/PcxDecode.c:17: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/RawEncode.c:21: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/UnpackYCC.c -o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/ZipEncode.c -o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o gcc -fno-strict-aliasing -Os -fomit-frame-pointer -DNDEBUG -Os -fomit-frame-pointer -fPIC -DHAVE_LIBJPEG -I/tmp/pip-build-gNq0WA/pillow/libImaging -I/usr/include -I/usr/include/python2.7 -c libImaging/BoxBlur.c -o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/UnpackYCC.c:17: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/ImPlatform.h:10:0, from libImaging/Imaging.h:14, from libImaging/ZipEncode.c:18: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. In file included from libImaging/BoxBlur.c:1:0: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. 
Building using 4 processes gcc -shared -Wl,--as-needed build/temp.linux-armv7l-2.7/_imaging.o build/temp.linux-armv7l-2.7/decode.o build/temp.linux-armv7l-2.7/encode.o build/temp.linux-armv7l-2.7/map.o build/temp.linux-armv7l-2.7/display.o build/temp.linux-armv7l-2.7/outline.o build/temp.linux-armv7l-2.7/path.o build/temp.linux-armv7l-2.7/libImaging/Access.o build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o build/temp.linux-armv7l-2.7/libImaging/Resample.o build/temp.linux-armv7l-2.7/libImaging/Bands.o build/temp.linux-armv7l-2.7/libImaging/BitDecode.o build/temp.linux-armv7l-2.7/libImaging/Blend.o build/temp.linux-armv7l-2.7/libImaging/Chops.o build/temp.linux-armv7l-2.7/libImaging/Convert.o build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o build/temp.linux-armv7l-2.7/libImaging/Copy.o build/temp.linux-armv7l-2.7/libImaging/Crc32.o build/temp.linux-armv7l-2.7/libImaging/Crop.o build/temp.linux-armv7l-2.7/libImaging/Dib.o build/temp.linux-armv7l-2.7/libImaging/Draw.o build/temp.linux-armv7l-2.7/libImaging/Effects.o build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o build/temp.linux-armv7l-2.7/libImaging/File.o build/temp.linux-armv7l-2.7/libImaging/Fill.o build/temp.linux-armv7l-2.7/libImaging/Filter.o build/temp.linux-armv7l-2.7/libImaging/FliDecode.o build/temp.linux-armv7l-2.7/libImaging/Geometry.o build/temp.linux-armv7l-2.7/libImaging/GetBBox.o build/temp.linux-armv7l-2.7/libImaging/GifDecode.o build/temp.linux-armv7l-2.7/libImaging/GifEncode.o build/temp.linux-armv7l-2.7/libImaging/HexDecode.o build/temp.linux-armv7l-2.7/libImaging/Histo.o build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o build/temp.linux-armv7l-2.7/libImaging/Matrix.o build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o build/temp.linux-armv7l-2.7/libImaging/MspDecode.o build/temp.linux-armv7l-2.7/libImaging/Negative.o build/temp.linux-armv7l-2.7/libImaging/Offset.o build/temp.linux-armv7l-2.7/libImaging/Pack.o build/temp.linux-armv7l-2.7/libImaging/PackDecode.o build/temp.linux-armv7l-2.7/libImaging/Palette.o build/temp.linux-armv7l-2.7/libImaging/Paste.o build/temp.linux-armv7l-2.7/libImaging/Quant.o build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o build/temp.linux-armv7l-2.7/libImaging/QuantHash.o build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o build/temp.linux-armv7l-2.7/libImaging/Point.o build/temp.linux-armv7l-2.7/libImaging/RankFilter.o build/temp.linux-armv7l-2.7/libImaging/RawDecode.o build/temp.linux-armv7l-2.7/libImaging/RawEncode.o build/temp.linux-armv7l-2.7/libImaging/Storage.o build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o build/temp.linux-armv7l-2.7/libImaging/Unpack.o build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o build/temp.linux-armv7l-2.7/libImaging/Incremental.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o -L/usr/lib -L/usr/local/lib -L/usr/lib 
-ljpeg -lpython2.7 -o build/lib.linux-armv7l-2.7/PIL/_imaging.so gcc: error: build/temp.linux-armv7l-2.7/_imaging.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/decode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/encode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/map.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/display.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/outline.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/path.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Access.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/AlphaComposite.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Resample.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Bands.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/BitDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Blend.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Chops.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Convert.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/ConvertYCbCr.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Copy.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crc32.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Crop.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Dib.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Draw.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Effects.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/EpsEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/File.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Fill.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Filter.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/FliDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Geometry.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/GetBBox.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/GifEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/HexDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Histo.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/JpegEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/LzwDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Matrix.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/ModeFilter.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/MspDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Negative.o: No such file or directory gcc: error: 
build/temp.linux-armv7l-2.7/libImaging/Offset.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Pack.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/PackDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Palette.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Paste.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Quant.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantOctree.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHash.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/QuantHeap.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcdDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/PcxEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Point.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/RankFilter.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/RawEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Storage.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/SunRleDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/TgaRleDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Unpack.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnpackYCC.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/UnsharpMask.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/XbmEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/ZipEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/TiffDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Incremental.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KDecode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/Jpeg2KEncode.o: No such file or directory gcc: error: build/temp.linux-armv7l-2.7/libImaging/BoxBlur.o: No such file or directory error: command 'gcc' failed with exit status 1 ---------------------------------------- Command "/usr/bin/python -c "import setup tools, tokenize;__file__='/tmp/pip-build-gNq0WA/pillow/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-nDKwei-record/install-record.txt --single-version-externally-managed --compile" failed with error code 1 in /tmp/pip-build-gNq0WA/pillow I think this is probably the relevant section: In file included from libImaging/BoxBlur.c:1:0: /usr/include/python2.7/Python.h:19:20: fatal error: limits.h: No such file or directory #include <limits.h> ^ compilation terminated. My research shows it's probably something with the header files. 
I have installed these: apk add py-configobj libusb py-pip python-dev gcc linux-headers pip install --upgrade pip pip install -U setuptools pip install Cheetah pip install pyusb
Alpine linux uses musl libc. You probably need to install musl-dev.
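Concretely, that's apk add musl-dev - musl-dev supplies the C library headers (including the limits.h that Python.h pulls in). Depending on which image formats you need, Pillow's build will likely also want the codec headers, e.g. apk add jpeg-dev zlib-dev (these package names are the standard Alpine ones; the exact set depends on your Pillow feature requirements).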
Pycharm Django Debugging is really slow
I have a moderate-size website, but it takes about 30 seconds for PyCharm to start runserver and be ready to run the app. If I "Run" the app instead of "Debug" it, it only takes about 3 seconds to start. What are some of the things I can do to speed up the code-change and debugging cycle? I am using a decent MBP with 16GB of RAM, so hardware is not the issue. I have excluded /media files from the project, and I don't have any other large numbers of files that would cause indexing problems. I am using both Postgres and MongoDB. I am running Django 1.7 plus a dozen packages like: dj-static==0.0.6 django-annoying==0.8.1 django-appconf==1.0.1 django-bootstrap-form==3.2 django-bootstrap-pagination==1.5.1 django-compressor==1.5 django-extensions==1.5.5 django-filter==0.10.0 django-guardian==1.2.5 django-storages-redux==1.2.3 django-widget-tweaks==1.3 djangorestframework==3.1.2 django-jinja==1.4.1 This is the debug output: /Users/user1/.virtualenvs/env-test/bin/python "/Applications/PyCharm 4.5 EAP.app/Contents/helpers/pydev/pydevd.py" --multiproc --save-signatures --client 127.0.0.1 --port 64097 --file /Users/user1/gitroot/website1/manage.py runserver 0.0.0.0:8000 --verbosity 2 Connected to pydev debugger (build 141.1245) pydev debugger: process 63926 is connecting pydev debugger: process 63954 is connecting Performing system checks... System check identified no issues (0 silenced). You have unapplied migrations; your app may not work properly until they are applied. Run 'python manage.py migrate' to apply them. June 03, 2015 - 09:08:52 Django version 1.7.7, using settings 'myproject.settings' Starting development server at http://0.0.0.0:8000/ Quit the server with CONTROL-C. UPDATE: Since this post, PyCharm has gained optional Cython speedups for its debugger. I think that has improved things somewhat. I also moved to runserver_plus (from django-extensions), which I think runs faster.
Well, the debugger is just slow and there is not much you can do about it. Just don't use the debugger unless you absolutely need it (i.e. you are troubleshooting a particularly elusive bug). I can think of only one thing that may speed up the PyCharm's debugger and that is to turn off the "Collect run-time types information for code insight" setting (located under File > Settings > Build, Execution, Deployment > Python Debugger).
Writing to MySQL database with pandas using SQLAlchemy, to_sql
Trying to write a pandas DataFrame to a MySQL table using to_sql. I was previously using flavor='mysql', but it will be deprecated in the future, so I wanted to start the transition to using an SQLAlchemy engine. Sample code: import pandas as pd import mysql.connector from sqlalchemy import create_engine engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False) cnx = engine.raw_connection() data = pd.read_sql('SELECT * FROM sample_table', cnx) data.to_sql(name='sample_table2', con=cnx, if_exists = 'append', index=False) The read works fine but the to_sql has an error: DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': Wrong number of arguments during string formatting Why does it look like it is trying to use sqlite? What is the correct use of a sqlalchemy connection with mysql and specifically mysql.connector? I also tried passing the engine in as the connection, and that gave me an error referencing no cursor object. data.to_sql(name='sample_table2', con=engine, if_exists = 'append', index=False) >>AttributeError: 'Engine' object has no attribute 'cursor'
Using the engine in place of the raw_connection() worked: import pandas as pd import mysql.connector from sqlalchemy import create_engine engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False) data.to_sql(name='sample_table2', con=engine, if_exists = 'append', index=False) (Not clear why passing the engine gave the earlier AttributeError when I tried it yesterday.) As for the sqlite_master error: to_sql only supports plain DBAPI connections for SQLite, so when it is handed any other bare connection object it falls back to its SQLite code path, which is why the traceback was querying sqlite_master. Passing the SQLAlchemy engine (or a connection created from it) lets pandas detect the correct dialect.
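With that change, the raw_connection() call can be dropped entirely and the engine used for both directions (same connection string as above):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('mysql+mysqlconnector://[user]:[pass]@[host]:[port]/[schema]', echo=False)
data = pd.read_sql('SELECT * FROM sample_table', engine)
data.to_sql(name='sample_table2', con=engine, if_exists='append', index=False)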
django countries encoding is not giving correct name
I am using the django_countries module for the countries list. The problem is that there are a couple of countries with special characters, like 'Åland Islands' and 'Saint Barthélemy'. I am calling this method to get the country name: country_label = fields.Country(form.cleaned_data.get('country')[0:2]).name I know that country_label is a lazily translated proxy object from django.utils, but it is not giving the right name; rather it gives 'Ã…land Islands'. Any suggestions, please?
Django stores unicode string using code points and identifies the string as unicode for further processing. UTF-8 uses four 8-bit bytes encoding, so the unicode string that's being used by Django needs to be decoded or interpreted from code point notation to its UTF-8 notation at some point. In the case of Åland Islands, what seems to be happening is that it's taking the UTF-8 byte encoding and interpret it as code points to convert the string. The string django_countries returns is most likely u'\xc5land Islands' where \xc5 is the UTF code point notation of Å. In UTF-8 byte notation \xc5 becomes \xc3\x85 where each number \xc3 and \x85 is a 8-bit byte. See: http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=xc5&mode=hex Or you can use country_label = fields.Country(form.cleaned_data.get('country')[0:2]).name.encode('utf-8') to go from u'\xc5land Islands' to '\xc3\x85land Islands' If you take then each byte and use them as code points, you'll see it'll give you these characters: Ã… See: http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=xc3&mode=hex And: http://www.ltg.ed.ac.uk/~richard/utf-8.cgi?input=x85&mode=hex See code snippet with html notation of these characters. <div id="test">&#xC3;&#x85;&#xC5;</div> So I'm guessing you have 2 different encodings in you application. One way to get from u'\xc5land Islands' to u'\xc3\x85land Islands' would be to in an utf-8 environment encode to UTF-8 which would convert u'\xc5' to '\xc3\x85' and then decode to unicode from iso-8859 which would give u'\xc3\x85land Islands'. But since it's not in the code you're providing, I'm guessing it's happening somewhere between the moment you set country_label and the moment your output isn't displayed properly. Either automatically because of encodings settings, or through an explicit assignation somewhere. FIRST EDIT: To set encoding for you app, add # -*- coding: utf-8 -*- at the top of your py file and <meta charset="UTF-8"> in of your template. And to get unicode string from a django.utils.functional.proxy object you can call unicode(). Like this: country_label = unicode(fields.Country(form.cleaned_data.get('country')[0:2]).name) SECOND EDIT: One other way to figure out where the problem is would be to use force_bytes (https://docs.djangoproject.com/en/1.8/ref/utils/#module-django.utils.encoding) Like this: from django.utils.encoding import force_bytes country_label = fields.Country(form.cleaned_data.get('country')[0:2]).name forced_country_label = force_bytes(country_label, encoding='utf-8', strings_only=False, errors='strict') But since you already tried many conversions without success, maybe the problem is more complex. Can you share your version of django_countries, Python and your django app language settings? What you can do also is go see directly in your djano_countries package (that should be in your python directory), find the file data.py and open it to see what it looks like. Maybe the data itself is corrupted.
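The double encoding described above is easy to reproduce in a Python 2 shell, which is a quick way to confirm the diagnosis against your own data:

>>> u'\xc5land Islands'.encode('utf-8').decode('iso-8859-1')
u'\xc3\x85land Islands'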
How to reshape a networkx graph in Python?
So I created a really naive (probably inefficient) way of generating hasse diagrams. Question: I have 4 dimensions... p q r s . I want to display it uniformly (tesseract) but I have no idea how to reshape it. How can one reshape a networkx graph in Python? I've seen some examples of people using spring_layout() and draw_circular() but it doesn't shape in the way I'm looking for because they aren't uniform. Is there a way to reshape my graph and make it uniform? (i.e. reshape my hasse diagram into a tesseract shape (preferably using nx.draw() ) Here's what mine currently look like: Here's my code to generate the hasse diagram of N dimensions #!/usr/bin/python import networkx as nx import matplotlib.pyplot as plt import itertools H = nx.DiGraph() axis_labels = ['p','q','r','s'] D_len_node = {} #Iterate through axis labels for i in xrange(0,len(axis_labels)+1): #Create edge from empty set if i == 0: for ax in axis_labels: H.add_edge('O',ax) else: #Create all non-overlapping combinations combinations = [c for c in itertools.combinations(axis_labels,i)] D_len_node[i] = combinations #Create edge from len(i-1) to len(i) #eg. pq >>> pqr, pq >>> pqs if i > 1: for node in D_len_node[i]: for p_node in D_len_node[i-1]: #if set.intersection(set(p_node),set(node)): Oops if all(p in node for p in p_node) == True: #should be this! H.add_edge(''.join(p_node),''.join(node)) #Show Plot nx.draw(H,with_labels = True,node_shape = 'o') plt.show() I want to reshape it like this: If anyone knows of an easier way to make Hasse Diagrams, please share some wisdom but that's not the main aim of this post.
This is a pragmatic, rather than purely mathematical answer.

I think you have two issues - one with layout, the other with your network.

1. Network

You have too many edges in your network for it to represent the unit tesseract.

Caveat: I'm not an expert on the maths here - just came to this from the plotting angle (matplotlib tag). Please explain if I'm wrong.

Your desired projection and, for instance, the Wolfram MathWorld page for a Hasse diagram for n=4 have only 4 edges connecting all nodes, whereas you have 6 edges to the 2-bit and 7 edges to the 3-bit nodes. Your graph fully connects each "level", i.e. 4-D vectors with 0 1 values connect to all vectors with 1 1 value, which then connect to all vectors with 2 1 values and so on. This is most obvious in the projection based on the Wikipedia answer (2nd image below).

2. Projection

I couldn't find a pre-written algorithm or library to automatically project the 4D tesseract onto a 2D plane, but I did find a couple of examples, e.g. Wikipedia. From this, you can work out a co-ordinate set that would suit you and pass that into the nx.draw() call.

Here is an example - I've included two co-ordinate sets, one that looks like the projection you show above, one that matches this one from Wikipedia.

import networkx as nx
import matplotlib.pyplot as plt
import itertools

H = nx.DiGraph()

axis_labels = ['p','q','r','s']

D_len_node = {}

#Iterate through axis labels
for i in xrange(0,len(axis_labels)+1):

    #Create edge from empty set
    if i == 0:
        for ax in axis_labels:
            H.add_edge('O',ax)
    else:
        #Create all non-overlapping combinations
        combinations = [c for c in itertools.combinations(axis_labels,i)]
        D_len_node[i] = combinations

    #Create edge from len(i-1) to len(i)
    #eg. pq >>> pqr, pq >>> pqs
    if i > 1:
        for node in D_len_node[i]:
            for p_node in D_len_node[i-1]:
                if set.intersection(set(p_node),set(node)):
                    H.add_edge(''.join(p_node),''.join(node))

#These are two manual options to project the tesseract onto a 2D plane
# - many projections are available!!
wikipedia_projection_coords = [(0.5,0),(0.85,0.25),(0.625,0.25),(0.375,0.25),
                               (0.15,0.25),(1,0.5),(0.8,0.5),(0.6,0.5),
                               (0.4,0.5),(0.2,0.5),(0,0.5),(0.85,0.75),
                               (0.625,0.75),(0.375,0.75),(0.15,0.75),(0.5,1)]

#Build the "two cubes" type example projection co-ordinates
half_coords = [(0,0.15),(0,0.6),(0.3,0.15),(0.15,0),
               (0.55,0.6),(0.3,0.6),(0.15,0.4),(0.55,1)]

#make the coords symmetric
example_projection_coords = half_coords + [(1-x,1-y) for (x,y) in half_coords][::-1]

print example_projection_coords

def powerset(s):
    ch = itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s)+1))
    return [''.join(t) for t in ch]

pos = {}
for i,label in enumerate(powerset(axis_labels)):
    if label == '':
        label = 'O'
    pos[label] = example_projection_coords[i]

#Show Plot
nx.draw(H,pos,with_labels = True,node_shape = 'o')
plt.show()

Note - unless you change what I've mentioned in 1. above, they still have your edge structure, so won't look exactly the same as the examples from the web.

Here is what it looks like with your existing network generation code - you can see the extra edges if you compare it to your example (e.g. I don't think pr should be connected to pqs):

'Two cube' projection

Wikimedia example projection

Note: If you want to get into the maths of doing your own projections (and building up pos mathematically), you might look at this research paper.

EDIT:

Curiosity got the better of me and I had to search for a mathematical way to do this.
I found this blog - the main result of which is the projection matrix:

This led me to develop this function for projecting each label, taking a label containing 'p' to mean the point has value 1 on the 'p' axis, i.e. we are dealing with the unit tesseract. Thus:

import math

def construct_projection(label):
    r1 = r2 = 0.5
    theta = math.pi / 6
    phi = math.pi / 3
    x = int('p' in label) + r1 * math.cos(theta) * int('r' in label) - r2 * math.cos(phi) * int('s' in label)
    y = int('q' in label) + r1 * math.sin(theta) * int('r' in label) + r2 * math.sin(phi) * int('s' in label)
    return (x,y)

This gives a nice projection into a regular 2D octagon with all points distinct.

This will run in the above program; just replace

pos[label] = example_projection_coords[i]

with

pos[label] = construct_projection(label)

This gives the result:

Play with r1, r2, theta and phi to your heart's content :)
A + B without arithmetic operators, Python vs C++
I was trying to solve an old question:

Write a function that adds two [integer] numbers A and B. You should not use + or any arithmetic operators.

The best solution is like this, quoted from "LintCode-A+B Problem":

For a + b in any base, we can treat the plus as two parts: 1. a + b without carry; 2. the carry generated by a + b. The a + b then equals part 1 plus part 2. If part 1 + part 2 generates more carry, we can then repeat this procedure, until there is no carry.

I can understand this algorithm and everything seems good, so I tested it on lintcode with the code pasted below.

class Solution:
    """
    @param a: The first integer
    @param b: The second integer
    @return: The sum of a and b
    """
    def aplusb(self, a, b):
        while b != 0:
            carry = a & b
            a = a ^ b
            b = carry << 1
        return a

But surprisingly, it gave me a Time Limit Exceeded error in the test case [100, -100]. So I ran it locally and printed a, b for every loop:

(-8, 8)
(-16, 16)
(-32, 32)
(-64, 64)
(-128, 128)
(-256, 256)
(-512, 512)
(-1024, 1024)
(-2048, 2048)
(-4096, 4096)
(-8192, 8192)
(-16384, 16384)
(-32768, 32768)
(-65536, 65536)
(-131072, 131072)
...

The calculation is correct, so I think this algorithm does not work for such input, but when I wrote the same algorithm in C++, it just works:

class Solution {
public:
    int aplusb(int a, int b) {
        while (b != 0) {
            int carry = a & b;
            a = a ^ b;
            b = carry << 1;
        }
        return a;
    }
};

I don't know what should be asked exactly, so basically the questions are:

Why does C++ give the correct output 0 while Python doesn't?
If I use Python, how do I modify this algorithm to make it work?
The binary, 2's complement representation of -4 is

...11100

Yes, I really do mean infinitely many 1's to the left; this is a binary repeating numeral. Technically, 4 is a repeating numeral too:

...00100

it's just repeating 0's to the left.

Your addition problem is

    ...11100
  + ...00100
  ----------
    ...00000

The operators ^, <<, and & have no trouble computing with infinitely many binary digits, but the problem is that there are infinitely many carries, and you are computing them one digit at a time. This will never finish.

Thus, you have to recognize when this algorithm will get stuck in this situation and do something else to account for it.

You don't run into this problem in C/C++, because, for example, if int is 32 bits, then all of the digits except for the rightmost 31 digits get collapsed into a single bit, so it does the remaining carries all at once. However, technically speaking, the meaning of left shifting an int is defined in terms of its value as an integer, rather than as a bit pattern, so you are invoking undefined behavior if the two most significant bits of carry are ever different, because then carry << 1 would produce a signed overflow.
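To address the poster's second question - making the Python version terminate - a common trick (not part of the explanation above, but a standard workaround) is to emulate 32-bit two's complement with a mask and sign-extend at the end. A sketch, assuming 32-bit wrap-around semantics like the C++ version:

def aplusb(a, b):
    MASK = 0xFFFFFFFF        # keep only the low 32 bits, emulating a C int
    MAX_INT = 0x7FFFFFFF     # largest positive 32-bit two's complement value
    while b != 0:
        carry = (a & b) << 1
        a = (a ^ b) & MASK   # add without carry, truncated to 32 bits
        b = carry & MASK     # truncating the carry is what lets the loop end
    # if the 32-bit sign bit is set, convert back to a negative Python int
    return a if a <= MAX_INT else ~(a ^ MASK)

print(aplusb(100, -100))  # 0

Masking the carry collapses the infinite run of carries described above into a finite 32-bit register, which is exactly what the C++ version gets for free.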
How to use jinja2 as a templating engine in Django 1.8
I have been looking at how to use jinja2 in Django 1.8, but there is no complete source for using Django with jinja2. I was wondering if you guys knew the process for using jinja2 in Django. I have looked through the official documentation and I have looked at the following question: How to setup django 1.8 to use jinja2? but none of them clearly explain how to use jinja2 in a put-together manner. I just started using Django and don't know all the lingo in the docs. I would really appreciate the help.
First you have to install jinja2:

$ pip install Jinja2

Then modify your TEMPLATES list in settings.py to contain the jinja2 BACKEND:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.jinja2.Jinja2',
        'DIRS': [os.path.join(BASE_DIR, 'templates/jinja2')],
        'APP_DIRS': True,
        'OPTIONS': {'environment': 'myproject.jinja2.environment',},
    },
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

where templates/jinja2 is the directory with your jinja2 template files.

Then create the module referenced by the 'environment' option above, i.e. a jinja2.py file in your myproject package:

from __future__ import absolute_import  # Python 2 only

from jinja2 import Environment

from django.contrib.staticfiles.storage import staticfiles_storage
from django.core.urlresolvers import reverse


def environment(**options):
    env = Environment(**options)
    env.globals.update({
        'static': staticfiles_storage.url,
        'url': reverse,
    })
    return env

This makes static and url available in your Jinja2 templates.

P.S. For more details see this article.
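As a quick usage check, a template placed in templates/jinja2/ can then call those globals directly; the URL name 'home' and the image path here are made up for illustration:

<img src="{{ static('img/logo.png') }}" alt="Logo">
<a href="{{ url('home') }}">Home</a>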
Which Model Field to use in Django to store longitude and latitude values?
I want to store my users' locations using longitude and latitude. At the moment this comes from Google Maps, but I will be using GeoDjango at some point to work out distances between two points as well.

However, my first confusion is which field in Django I should be using to store the longitude and latitude values? The information I'm getting is conflicting.

The official documentation uses a FloatField:
https://docs.djangoproject.com/en/dev/ref/contrib/gis/tutorial/#geographic-models

lon = models.FloatField()
lat = models.FloatField()

Whereas almost every answer on stackoverflow shows a DecimalField:

long = models.DecimalField(max_digits=8, decimal_places=3)
lat = models.DecimalField(max_digits=8, decimal_places=3)

So what should I be using?
Float is generally an approximation; see here for some simple examples. You could get very nice results by modifying your model to something like DecimalField(max_digits=9, decimal_places=6), since the decimal places are very important in coordinates (six decimal places is roughly 0.1 m of precision at the equator), but using more than 6 is basically meaningless.
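For concreteness, a minimal sketch of such a model (the model and field names are illustrative); max_digits=9 leaves three digits before the decimal point, which covers longitudes up to ±180:

from django.db import models

class UserLocation(models.Model):
    # six decimal places is roughly 0.1 m of precision at the equator
    latitude = models.DecimalField(max_digits=9, decimal_places=6)    # -90.000000 to 90.000000
    longitude = models.DecimalField(max_digits=9, decimal_places=6)   # -180.000000 to 180.000000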
Python: What is the difference between math.exp and numpy.exp, and why did numpy's creators choose to introduce exp again?
exp means the exponential function.

exp in the math module: https://docs.python.org/2/library/math.html
exp in the numpy module: http://docs.scipy.org/doc/numpy/reference/generated/numpy.exp.html

Why did the numpy creators introduce this function again?
math.exp works only for scalars, as EdChum mentions, whereas numpy.exp will work for arrays.

Example:

>>> import math
>>> import numpy as np
>>> x = [1.,2.,3.,4.,5.]
>>> math.exp(x)

Traceback (most recent call last):
  File "<pyshell#10>", line 1, in <module>
    math.exp(x)
TypeError: a float is required
>>> np.exp(x)
array([   2.71828183,    7.3890561 ,   20.08553692,   54.59815003,
        148.4131591 ])
>>>

It is the same case for other math functions.

>>> math.sin(x)

Traceback (most recent call last):
  File "<pyshell#12>", line 1, in <module>
    math.sin(x)
TypeError: a float is required
>>> np.sin(x)
array([ 0.84147098,  0.90929743,  0.14112001, -0.7568025 , -0.95892427])
>>>

Also refer to THIS ANSWER to check out how numpy is faster than math.
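To see the speed difference the linked answer talks about, a rough sketch with timeit (exact numbers will vary by machine and array size):

import timeit

# element-wise loop through math.exp vs one vectorized numpy call
print(timeit.timeit('[math.exp(v) for v in x]',
                    setup='import math; x = [0.1] * 1000', number=1000))
print(timeit.timeit('np.exp(x)',
                    setup='import numpy as np; x = np.full(1000, 0.1)', number=1000))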
Are these two python statements the same?
I have these two statements:

return self.getData() if self.getData() else ''

and

return self.getData() or ''

I want to know whether they are the same, or whether there is any difference.
I would say no, because if self.getData() changes something during its operation, then the first statement has the possibility of returning a different result, since it will make a second call to it.
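A contrived sketch of how the two forms can diverge; the Source class and its getData are invented purely for illustration:

class Source(object):
    def __init__(self):
        self.items = ['a', 'b']

    def getData(self):
        # side effect: each call consumes one item
        return self.items.pop() if self.items else None

    def first_form(self):
        return self.getData() if self.getData() else ''

    def second_form(self):
        return self.getData() or ''

print(Source().first_form())   # 'a' -- the truth test already consumed 'b'
print(Source().second_form())  # 'b' -- getData() is called exactly once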
How to filter objects for count annotation in Django?
Consider simple Django models Event and Participant:

class Event(models.Model):
    title = models.CharField(max_length=100)

class Participant(models.Model):
    event = models.ForeignKey(Event, db_index=True)
    is_paid = models.BooleanField(default=False, db_index=True)

It's easy to annotate an events query with the total number of participants:

events = Event.objects.all().annotate(participants=models.Count('participant'))

How to annotate with the count of participants filtered by is_paid=True?

I need to query all events regardless of the number of participants, e.g. I don't need to filter by the annotated result. If there are 0 participants, that's ok, I just need 0 in the annotated value.

The example from the documentation doesn't work here, because it excludes objects from the query instead of annotating them with 0.

Update. Django 1.8 has the new conditional expressions feature, so now we can do it like this:

events = Event.objects.all().annotate(paid_participants=models.Sum(
    models.Case(
        models.When(participant__is_paid=True, then=1),
        default=0,
        output_field=models.IntegerField()
    )))
Just discovered that Django 1.8 has the new conditional expressions feature, so now we can do it like this:

events = Event.objects.all().annotate(paid_participants=models.Sum(
    models.Case(
        models.When(participant__is_paid=True, then=1),
        default=0,
        output_field=models.IntegerField()
    )))
How to add any new library like spark-csv in Apache Spark prebuilt version
I have built Spark-csv and am able to use it from the pyspark shell using the following command

bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3

but I get an error:

>>> df_cat.save("k.csv","com.databricks.spark.csv")

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/pyspark/sql/dataframe.py", line 209, in save
    self._jdf.save(source, jmode, joptions)
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/java_gateway.py", line 538, in __call__
  File "/Users/abhishekchoudhary/bigdata/cdh5.2.0/spark-1.3.1/python/lib/py4j-0.8.2.1-src.zip/py4j/protocol.py", line 300, in get_return_value
py4j.protocol.Py4JJavaError

Where should I place the jar file in my Spark pre-built setup so that I can access spark-csv from a Python editor directly as well?
At the time I used spark-csv, I also had to download the commons-csv jar (not sure it is still relevant). Both jars were in the spark distribution folder.

I downloaded the jars as follows:

wget http://search.maven.org/remotecontent?filepath=org/apache/commons/commons-csv/1.1/commons-csv-1.1.jar -O commons-csv-1.1.jar
wget http://search.maven.org/remotecontent?filepath=com/databricks/spark-csv_2.10/1.0.0/spark-csv_2.10-1.0.0.jar -O spark-csv_2.10-1.0.0.jar

then started the python spark shell with the arguments:

./bin/pyspark --jars "spark-csv_2.10-1.0.0.jar,commons-csv-1.1.jar"

and read a spark dataframe from a csv file:

from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
df = sqlContext.load(source="com.databricks.spark.csv", path = "/path/to/you/file.csv")
df.show()
What's the correct way to clean up after an interrupted event loop?
I have an event loop that runs some co-routines as part of a command line tool. The user may interrupt the tool with the usual Ctrl + C, at which point I want to clean up properly after the interrupted event loop.

Here's what I tried.

import asyncio


@asyncio.coroutine
def shleepy_time(seconds):
    print("Shleeping for {s} seconds...".format(s=seconds))
    yield from asyncio.sleep(seconds)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()

    # Side note: Apparently, async() will be deprecated in 3.4.4.
    # See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async
    tasks = [
        asyncio.async(shleepy_time(seconds=5)),
        asyncio.async(shleepy_time(seconds=10))
    ]

    try:
        loop.run_until_complete(asyncio.gather(*tasks))
    except KeyboardInterrupt as e:
        print("Caught keyboard interrupt. Canceling tasks...")
        # This doesn't seem to be the correct solution.
        for t in tasks:
            t.cancel()
    finally:
        loop.close()

Running this and hitting Ctrl + C yields:

$ python3 asyncio-keyboardinterrupt-example.py
Shleeping for 5 seconds...
Shleeping for 10 seconds...
^CCaught keyboard interrupt. Canceling tasks...
Task was destroyed but it is pending!
task: <Task pending coro=<shleepy_time() running at asyncio-keyboardinterrupt-example.py:7> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback(1)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]>
Task was destroyed but it is pending!
task: <Task pending coro=<shleepy_time() running at asyncio-keyboardinterrupt-example.py:7> wait_for=<Future cancelled> cb=[gather.<locals>._done_callback(0)() at /usr/local/Cellar/python3/3.4.3/Frameworks/Python.framework/Versions/3.4/lib/python3.4/asyncio/tasks.py:587]>

Clearly, I didn't clean up correctly. I thought perhaps calling cancel() on the tasks would be the way to do it.

What's the correct way to clean up after an interrupted event loop?
When you CTRL+C, the event loop gets stopped, so your calls to t.cancel() don't actually take effect. For the tasks to be cancelled, you need to start the loop back up again.

Here's how you can handle it:

import asyncio


@asyncio.coroutine
def shleepy_time(seconds):
    print("Shleeping for {s} seconds...".format(s=seconds))
    yield from asyncio.sleep(seconds)


if __name__ == '__main__':
    loop = asyncio.get_event_loop()

    # Side note: Apparently, async() will be deprecated in 3.4.4.
    # See: https://docs.python.org/3.4/library/asyncio-task.html#asyncio.async
    tasks = asyncio.gather(
        asyncio.async(shleepy_time(seconds=5)),
        asyncio.async(shleepy_time(seconds=10))
    )

    try:
        loop.run_until_complete(tasks)
    except KeyboardInterrupt as e:
        print("Caught keyboard interrupt. Canceling tasks...")
        tasks.cancel()
        loop.run_forever()
        tasks.exception()
    finally:
        loop.close()

Once we catch KeyboardInterrupt, we call tasks.cancel() and then start the loop up again. run_forever will actually exit as soon as tasks gets cancelled (note that cancelling the Future returned by asyncio.gather also cancels all the Futures inside of it), because the interrupted loop.run_until_complete call added a done_callback to tasks that stops the loop. So, when we cancel tasks, that callback fires, and the loop stops. At that point we call tasks.exception, just to avoid getting a warning about not fetching the exception from the _GatheringFuture.